WorldWideScience

Sample records for model prediction efficiency

  1. AN EFFICIENT PATIENT INFLOW PREDICTION MODEL FOR HOSPITAL RESOURCE MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Kottalanka Srikanth

    2017-07-01

    Demand for improved service provisioning in hospital resource management is increasing. Hospitals operate under strict budget constraints while having to assure quality care; meeting both requirements calls for an efficient prediction model. Various time-series-based prediction models have recently been proposed for managing hospital resources such as ambulance monitoring and emergency care. These models are inefficient because they ignore scenario-specific factors such as climate conditions. Artificial intelligence has been adopted to address this; however, the training of existing prediction models suffers from local optima, which adds overhead and reduces prediction accuracy. To overcome the local minima problem, this work presents a patient inflow prediction model based on a resilient backpropagation neural network. Experiments were conducted to evaluate the performance of the proposed model in terms of RMSE and MAPE. The outcomes show that the proposed model reduces RMSE and MAPE relative to an existing backpropagation-based artificial neural network, improving prediction accuracy and thereby aiding the quality of health care management.
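    The abstract credits resilient backpropagation with avoiding the local-minima trouble of plain gradient descent but gives no algorithmic detail. Below is a minimal numpy sketch of one Rprop-family update (the iRprop- variant); the hyperparameters eta_plus, eta_minus and the step bounds are the usual textbook defaults, not values from the paper.

    ```python
    import numpy as np

    def rprop_update(grad, prev_grad, step, weights,
                     eta_plus=1.2, eta_minus=0.5,
                     step_min=1e-6, step_max=50.0):
        """One iRprop- step (illustrative; hyperparameters are textbook defaults).

        Per-weight step sizes adapt from the *sign* of successive gradients,
        so updates are insensitive to gradient magnitude -- the property that
        helps resilient backpropagation escape shallow plateaus.
        """
        sign_change = grad * prev_grad
        step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
        step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
        grad = np.where(sign_change < 0, 0.0, grad)   # skip the update after a sign flip
        weights = weights - np.sign(grad) * step
        return weights, step, grad                    # grad becomes prev_grad next call
    ```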

  2. Efficient predictive model-based and fuzzy control for green urban mobility

    NARCIS (Netherlands)

    Jamshidnejad, A.

    2017-01-01

    In this thesis, we develop efficient predictive model-based control approaches, including model-predictive control (MPC) and model-based fuzzy control, for application in urban traffic networks, with the aim of reducing a combination of the total time spent by the vehicles within the network and the …

  3. Real-time prediction models for output power and efficiency of grid-connected solar photovoltaic systems

    International Nuclear Information System (INIS)

    Su, Yan; Chan, Lai-Cheong; Shu, Lianjie; Tsui, Kwok-Leung

    2012-01-01

    Highlights: ► We develop online prediction models for solar photovoltaic system performance. ► The proposed prediction models are simple but reasonably accurate. ► The maximum monthly average minutely efficiency varies from 10.81% to 12.63%. ► The average efficiency tends to be slightly higher in winter months. - Abstract: This paper develops new real-time prediction models for the output power and energy efficiency of solar photovoltaic (PV) systems. These models were validated using measured data from a grid-connected solar PV system in Macau. Time frames based on both yearly and monthly averages are considered. It is shown that the prediction model for the yearly/monthly average of the minutely output power fits the measured data very well, with a high value of R². The online prediction model for system efficiency is based on the ratio of the predicted output power to the predicted solar irradiance. This ratio model is shown to fit the intermediate phase (9 am to 4 pm) very well, but it is not accurate for the growth and decay phases, where the system efficiency is near zero. It can still serve a useful purpose for practitioners, as most PV systems operate most efficiently over this period. It is shown that the maximum monthly average minutely efficiency varies over a small range, from 10.81% to 12.63%, across months, with slightly higher efficiency in winter months.

  4. Improving Computational Efficiency of Prediction in Model-Based Prognostics Using the Unscented Transform

    Science.gov (United States)

    Daigle, Matthew John; Goebel, Kai Frank

    2010-01-01

    Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
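    As background to the method described above, here is a minimal numpy sketch of the scaled unscented transform; the function f stands in for the EOL simulation, and alpha, beta, kappa are standard default scaling parameters rather than values taken from the paper.

    ```python
    import numpy as np

    def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
        """Propagate a Gaussian (mean, cov) through a nonlinear function f
        using 2n+1 sigma points (scaled form); f plays the role of the
        EOL simulation in the paper."""
        n = len(mean)
        lam = alpha**2 * (n + kappa) - n
        L = np.linalg.cholesky((n + lam) * cov)            # matrix square root
        sigma = np.vstack([mean, mean + L.T, mean - L.T])  # (2n+1, n) sigma points
        wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))   # mean weights
        wc = wm.copy()                                     # covariance weights
        wm[0] = lam / (n + lam)
        wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
        y = np.array([f(s) for s in sigma])                # one simulation per point
        y_mean = wm @ y
        dev = y - y_mean
        y_cov = dev.T @ np.diag(wc) @ dev
        return y_mean, y_cov
    ```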

  5. Computationally efficient model predictive control algorithms a neural network approach

    CERN Document Server

    Ławryńczuk, Maciej

    2014-01-01

    This book thoroughly discusses computationally efficient (suboptimal) Model Predictive Control (MPC) techniques based on neural models. The subjects treated include:
    · A few types of suboptimal MPC algorithms in which a linear approximation of the model or of the predicted trajectory is successively calculated on-line and used for prediction.
    · Implementation details of the MPC algorithms for feedforward perceptron neural models, neural Hammerstein models, neural Wiener models and state-space neural models.
    · The MPC algorithms based on neural multi-models (inspired by the idea of predictive control).
    · The MPC algorithms with neural approximation with no on-line linearization.
    · The MPC algorithms with guaranteed stability and robustness.
    · Cooperation between the MPC algorithms and set-point optimization.
    Thanks to linearization (or neural approximation), the presented suboptimal algorithms do not require d...

  6. The assessment of different models to predict solar module temperature, output power and efficiency for Nis, Serbia

    International Nuclear Information System (INIS)

    Pantic, Lana S.; Pavlović, Tomislav M.; Milosavljević, Dragana D.; Radonjic, Ivana S.; Radovic, Miodrag K.; Sazhko, Galina

    2016-01-01

    Five different models for calculating solar module temperature, output power and efficiency on sunny days with different solar radiation intensities and ambient temperatures are assessed in this paper. The modeled values are compared with experimentally obtained values for a horizontal solar module in Nis, Serbia. The criterion for determining the best model was based on statistical analysis and the agreement between the calculated and the experimental values. The calculated values of solar module temperature are in good agreement with the experimentally obtained ones, with some variation above and below the measured values. The best agreement between calculated and experimental values was for summer months with high solar radiation intensity. The nonlinear model for calculating the output power is much better than the linear model and also better predicts the total electrical energy generated by the solar module during the day. The nonlinear model for calculating solar module efficiency predicts an efficiency higher than the STC (Standard Test Conditions) value under all conditions, while the linear model predicts the solar module efficiency very well. This paper provides a simple and efficient guideline for estimating the relevant parameters of a monocrystalline silicon solar module under moderate-continental climate conditions. - Highlights: • The linear model for solar module temperature gives accurate predictions for August. • The nonlinear model predicts solar module power better than the linear model. • For calculating solar module power for Nis we propose the nonlinear model. • For calculating solar module efficiency for Nis we propose adopting the linear model. • The adopted models can be used for calculations throughout the year.
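    The abstract does not reproduce the five models it assesses. As a generic illustration of the kind of temperature and power models being compared, the sketch below uses the common NOCT-based module temperature estimate and a linear temperature-corrected power model; noct, p_stc and gamma are typical placeholder values, not the paper's fitted parameters.

    ```python
    def module_temperature(t_ambient, irradiance, noct=45.0):
        """NOCT-based module temperature (deg C); NOCT is specified at
        800 W/m^2 irradiance and 20 deg C ambient."""
        return t_ambient + (noct - 20.0) / 800.0 * irradiance

    def module_power(irradiance, t_module, p_stc=250.0, gamma=0.004):
        """Linear temperature-corrected power model: p_stc is the rating at
        STC (1000 W/m^2, 25 deg C), gamma the power temperature coefficient (1/K)."""
        return p_stc * irradiance / 1000.0 * (1.0 - gamma * (t_module - 25.0))
    ```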

  7. EFFICIENT PREDICTIVE MODELLING FOR ARCHAEOLOGICAL RESEARCH

    OpenAIRE

    Balla, A.; Pavlogeorgatos, G.; Tsiafakis, D.; Pavlidis, G.

    2014-01-01

    The study presents a general methodology for designing, developing and implementing predictive modelling for identifying areas of archaeological interest. The methodology is based on documented archaeological data and geographical factors, geospatial analysis and predictive modelling, and has been applied to the identification of possible Macedonian tombs’ locations in Northern Greece. The model was tested extensively and the results were validated using a commonly used predictive gain, which...

  8. FIRE BEHAVIOR PREDICTING MODELS EFFICIENCY IN BRAZILIAN COMMERCIAL EUCALYPT PLANTATIONS

    Directory of Open Access Journals (Sweden)

    Benjamin Leonardo Alves White

    2016-12-01

    Knowing how a wildfire will behave is extremely important for fire suppression and prevention operations. Mathematical models to estimate fire behavior have been developed worldwide since the 1940s; however, until now, none had been tested for efficiency in Brazilian commercial eucalypt plantations or in other vegetation types in the country. This study aims to verify the accuracy of the Rothermel (1972) fire spread model, the Byram (1959) flame length model, and the fire spread and flame length equations derived from the McArthur (1962) control burn meters. To meet these objectives, 105 experimental laboratory fires were conducted and their results compared with the values predicted by the models tested. The Rothermel and Byram models predicted better than McArthur's; nevertheless, all of them underestimated the fire behavior aspects evaluated and were statistically different from the experimental data.

  9. Numerical flow simulation and efficiency prediction for axial turbines by advanced turbulence models

    International Nuclear Information System (INIS)

    Jošt, D; Škerlavaj, A; Lipej, A

    2012-01-01

    Numerical prediction of the efficiency of a 6-blade Kaplan turbine is presented. First, the results of steady-state analyses performed with different turbulence models for different operating regimes are compared to measurements. For small and optimal runner blade angles the efficiency was predicted quite accurately, but for the maximal blade angle the discrepancy between calculated and measured values was quite large. With transient analysis, especially when the Scale Adaptive Simulation Shear Stress Transport (SAS SST) model with zonal Large Eddy Simulation (ZLES) in the draft tube was used, the predicted efficiency improved significantly. The improvement was seen at all operating points, but was largest for maximal discharge, owing to better flow simulation in the draft tube. Details of the turbulent structure in the draft tube obtained with SST, SAS SST and SAS SST with ZLES are illustrated in order to explain the differences in flow energy losses obtained with the different turbulence models.

  11. Model Predictive Vibration Control Efficient Constrained MPC Vibration Control for Lightly Damped Mechanical Structures

    CERN Document Server

    Takács, Gergely

    2012-01-01

    Real-time model predictive controller (MPC) implementation in active vibration control (AVC) is often rendered difficult by fast sampling speeds and extensive actuator-deformation asymmetry. If the control of lightly damped mechanical structures is assumed, the region of attraction containing the set of allowable initial conditions requires a large prediction horizon, making the already computationally demanding on-line process even more complex. Model Predictive Vibration Control provides insight into the predictive control of lightly damped vibrating structures by exploring computationally efficient algorithms which are capable of low frequency vibration control with guaranteed stability and constraint feasibility. In addition to a theoretical primer on active vibration damping and model predictive control, Model Predictive Vibration Control provides a guide through the necessary steps in understanding the founding ideas of predictive control applied in AVC, such as:
    · the implementation of ...

  12. Genomic Prediction of Manganese Efficiency in Winter Barley

    Directory of Open Access Journals (Sweden)

    Florian Leplat

    2016-07-01

    Manganese efficiency is a quantitative abiotic stress trait controlled by several genes, each with a small effect. Manganese deficiency leads to yield reduction in winter barley (Hordeum vulgare L.). Breeding new cultivars for this trait remains difficult because of the lack of visual symptoms and the polygenic nature of the trait. Hence, Mn efficiency is a suitable candidate trait for a genomic selection (GS) approach. A collection of 248 winter barley varieties was screened for Mn efficiency using chlorophyll (Chl) fluorescence in six environments prone to induce Mn deficiency. Two models for genomic prediction were implemented to predict the future performance and breeding value of untested varieties. Predictions were obtained using multivariate mixed models: best linear unbiased prediction (BLUP) and genomic best linear unbiased prediction (G-BLUP). In the first model, predictions were based on the phenotypic evaluation, whereas both phenotypic and genomic marker data were included in the second model. The accuracy of predicting future phenotypes and the accuracy of predicting true breeding values were calculated and compared for both models using six cross-validation (CV) schemes designed to mimic plant breeding programs. Overall, the CVs showed that prediction accuracies increased when using the G-BLUP model compared with the BLUP model. Furthermore, breeding values were predicted more accurately than future phenotypes. The study confirms that genomic data may enhance prediction accuracy. Moreover, it indicates that GS is a suitable breeding approach for quantitative abiotic stress traits.
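    For readers unfamiliar with G-BLUP, the sketch below shows the core computation under stated assumptions: genotypes coded 0/1/2, the VanRaden genomic relationship matrix, a single fixed mean, and a known heritability (in practice variance components are estimated, e.g. by REML). It illustrates the general method, not the paper's multivariate implementation.

    ```python
    import numpy as np

    def gblup(M, y, h2=0.5):
        """Genomic BLUP of breeding values.

        M: marker matrix (individuals x markers) coded 0/1/2;
        y: phenotypes; h2: assumed-known heritability."""
        p = M.mean(axis=0) / 2.0                        # allele frequencies
        Z = M - 2.0 * p                                 # centered genotypes
        G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))     # VanRaden G matrix
        lam = (1.0 - h2) / h2                           # sigma_e^2 / sigma_g^2
        n = len(y)
        # BLUP of genomic values with a single fixed mean:
        gebv = G @ np.linalg.solve(G + lam * np.eye(n), y - y.mean())
        return gebv
    ```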

  13. Efficient predictive algorithms for image compression

    CERN Document Server

    Rosário Lucas, Luís Filipe; Maciel de Faria, Sérgio Manuel; Morais Rodrigues, Nuno Miguel; Liberal Pagliari, Carla

    2017-01-01

    This book discusses efficient prediction techniques for the current state-of-the-art High Efficiency Video Coding (HEVC) standard, focusing on the compression of a wide range of video signals, such as 3D video, Light Fields and natural images. The authors begin with a review of the state-of-the-art predictive coding methods and compression technologies for both 2D and 3D multimedia contents, which provides a good starting point for new researchers in the field of image and video compression. New prediction techniques that go beyond the standardized compression technologies are then presented and discussed. In the context of 3D video, the authors describe a new predictive algorithm for the compression of depth maps, which combines intra-directional prediction, with flexible block partitioning and linear residue fitting. New approaches are described for the compression of Light Field and still images, which enforce sparsity constraints on linear models. The Locally Linear Embedding-based prediction method is in...

  14. Applied Distributed Model Predictive Control for Energy Efficient Buildings and Ramp Metering

    Science.gov (United States)

    Koehler, Sarah Muraoka

    Industrial large-scale control problems present an interesting algorithmic design challenge. A number of controllers must cooperate in real-time on a network of embedded hardware with limited computing power in order to maximize system efficiency while respecting constraints and despite communication delays. Model predictive control (MPC) can automatically synthesize a centralized controller which optimizes an objective function subject to a system model, constraints, and predictions of disturbances. Unfortunately, the computations required by model predictive controllers for large-scale systems often limit industrial implementation to medium-scale slow processes. Distributed model predictive control (DMPC) enters the picture as a way to decentralize a large-scale model predictive control problem. The main idea of DMPC is to split the computations required by the MPC problem amongst distributed processors that can compute in parallel and communicate iteratively to find a solution. Popularly proposed solutions are distributed optimization algorithms such as dual decomposition and the alternating direction method of multipliers (ADMM). However, these algorithms ignore two practical challenges: the substantial communication delays present in control systems, and problem non-convexity. This thesis presents two novel and practically effective DMPC algorithms. The first DMPC algorithm is based on a primal-dual active-set method which achieves fast convergence, making it suitable for large-scale control applications which have a large communication delay across the communication network. In particular, this algorithm is suited for MPC problems with a quadratic cost, linear dynamics, forecasted demand, and box constraints. We measure the performance of this algorithm and show that it significantly outperforms both dual decomposition and ADMM in the presence of communication delay. The second DMPC algorithm is based on an inexact interior point method which is …
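    The thesis's two novel algorithms are not spelled out in the abstract. As a point of reference for the baselines it mentions, here is a minimal consensus-ADMM sketch of how a coupled quadratic cost can be split across agents; the local data (P_i, q_i) and the penalty rho are placeholders, not problem data from the thesis.

    ```python
    import numpy as np

    def consensus_admm(problems, n, rho=1.0, iters=100):
        """Consensus ADMM for minimizing sum_i 0.5 x'P_i x + q_i'x over a
        shared decision x; `problems` is a list of (P_i, q_i) local costs.
        This mirrors how DMPC splits one coupled QP across processors."""
        N = len(problems)
        x = np.zeros((N, n)); u = np.zeros((N, n)); z = np.zeros(n)
        for _ in range(iters):
            # Local solves: in a real DMPC deployment these run in parallel.
            for i, (P, q) in enumerate(problems):
                x[i] = np.linalg.solve(P + rho * np.eye(n), rho * (z - u[i]) - q)
            z = (x + u).mean(axis=0)    # coordination (averaging) step
            u += x - z                  # dual update
        return z
    ```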

  15. Modeling of venturi scrubber efficiency

    Science.gov (United States)

    Crowder, Jerry W.; Noll, Kenneth E.; Davis, Wayne T.

    The parameters affecting venturi scrubber performance have been rationally examined and modifications to the current modeling theory have been developed. The modified model has been validated against available experimental data for a range of throat gas velocities, liquid-to-gas ratios and particle diameters, and is used to study the effect of some design parameters on collection efficiency. Most striking among the observations is the prediction of a new design parameter termed the minimum contactor length. Also noted is the prediction of little effect on collection efficiency with increasing liquid-to-gas ratio above about 2 ℓ m⁻³. Indeed, for some cases a decrease in collection efficiency is predicted for liquid rates above this value.

  16. Tools for Predicting Cleaning Efficiency in the LHC

    CERN Document Server

    Assmann, R W; Brugger, M; Hayes, M; Jeanneret, J B; Kain, V; Kaltchev, D I; Schmidt, F

    2003-01-01

    The computer codes SIXTRACK and DIMAD have been upgraded to include realistic models of proton scattering in collimator jaws, mechanical aperture restrictions, and time-dependent fields. These new tools complement the long-existing simplified linear tracking programs used until now for tracking with collimators. Scattering routines from STRUCT and K2 have been compared with one another, and the results have been cross-checked against the FLUKA Monte Carlo package. A systematic error is assigned to the predictions of cleaning efficiency. Predictions of the cleaning efficiency are now possible with a full LHC model, including chromatic effects, linear and nonlinear errors, beam-beam kicks and the associated diffusion, and time-dependent fields. Beam loss can be predicted around the ring, both for regular and irregular beam losses. Examples are presented.

  17. A Calibrated Lumped Element Model for the Prediction of PSJ Actuator Efficiency Performance

    Directory of Open Access Journals (Sweden)

    Matteo Chiatto

    2018-03-01

    Among the various active flow control techniques, Plasma Synthetic Jet (PSJ) actuators, or Sparkjets, represent a very promising technology, especially because of their high velocities and short response times. A practical tool employed for design and manufacturing purposes is the definition of a low-order, lumped element model (LEM), which is able to predict the dynamic response of the actuator relatively quickly and with reasonable fidelity and accuracy. After a brief description of an innovative lumped model, this work presents an experimental investigation of a home-designed and manufactured PSJ actuator for different frequencies and energy discharges. Particular attention has been paid to the design of the power supply system. A purpose-built Pitot tube has allowed the measurement of velocity profiles along the jet radial direction for various energy discharges, as well as the tuning of the lumped model against experimental data, with the total device efficiency assumed as a fitting parameter. The best-fitting value not only contains information on the actual device efficiency, but also absorbs some modeling and experimental uncertainties, related in part to the measurement technique used.

  18. Computational Efficient Upscaling Methodology for Predicting Thermal Conductivity of Nuclear Waste forms

    International Nuclear Information System (INIS)

    Li, Dongsheng; Sun, Xin; Khaleel, Mohammad A.

    2011-01-01

    This study evaluated different upscaling methods for predicting the thermal conductivity of loaded nuclear waste forms, a heterogeneous material system, and compared their efficiency and accuracy. The thermal conductivity of a loaded nuclear waste form is an important property for the waste form Integrated Performance and Safety Code (IPSC). The effective thermal conductivity, obtained from microstructure information and the local thermal conductivity of the different components, is critical in predicting the life and performance of a waste form during storage: the heat generated during storage is directly related to the thermal conductivity, which in turn determines the mechanical deformation behavior, corrosion resistance and aging performance. Several methods, including the Taylor model, Sachs model, self-consistent model, and statistical upscaling models, were developed and implemented. In the absence of experimental data, predictions from the finite element method (FEM) were used as the reference for determining the accuracy of the different upscaling models. Micrographs from different loadings of nuclear waste were used in the prediction of thermal conductivity. The results demonstrated that, in terms of efficiency, the bounding models (Taylor and Sachs) are better than the self-consistent model, the statistical upscaling method and FEM. Balancing computational resources and accuracy, statistical upscaling is a computationally efficient method for predicting the effective thermal conductivity of nuclear waste forms.
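    The paper does not state its model equations, but the Taylor- and Sachs-type estimates it compares reduce, in the conductivity setting, to the classical parallel/series (Voigt/Reuss-style) volume averages sketched below; this is a textbook formulation offered for orientation only, with illustrative material values.

    ```python
    import numpy as np

    def taylor_estimate(k, vol_frac):
        """Taylor-type (parallel/Voigt-style) estimate: volume-weighted
        arithmetic mean, an upper bound on effective conductivity."""
        return float(np.dot(vol_frac, k))

    def sachs_estimate(k, vol_frac):
        """Sachs-type (series/Reuss-style) estimate: volume-weighted
        harmonic mean, a lower bound on effective conductivity."""
        return float(1.0 / np.dot(vol_frac, 1.0 / np.asarray(k)))

    # Illustrative two-phase waste form: matrix ~1 W/m-K, 30 vol% filler ~3 W/m-K
    print(taylor_estimate([1.0, 3.0], [0.7, 0.3]))   # upper bound
    print(sachs_estimate([1.0, 3.0], [0.7, 0.3]))    # lower bound
    ```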

  19. Development of a prediction model for the cost saving potentials in implementing the building energy efficiency rating certification

    International Nuclear Information System (INIS)

    Jeong, Jaewook; Hong, Taehoon; Ji, Changyoon; Kim, Jimin; Lee, Minhyun; Jeong, Kwangbok; Koo, Choongwan

    2017-01-01

    Highlights: • This study evaluates the building energy efficiency rating (BEER) certification. • A prediction model was developed for the cost saving potentials of the BEER certification. • The prediction model was developed using LCC analysis, ROV, and Monte Carlo simulation. • The cost saving potential was predicted to be 2.78–3.77% of the construction cost. • The cost saving potential can be used for estimating the investment value of BEER. - Abstract: The building energy efficiency rating (BEER) certification is an energy performance certificate (EPC) in South Korea. It is critical to examine the cost saving potentials of the BEER certification in advance. This study aimed to develop a prediction model for the cost saving potentials of implementing the BEER certification, where the cost saving potentials included the energy cost savings of the certification and the associated CO₂ emissions reduction, as well as the additional construction cost of the certification. The prediction model was developed using data mining, life cycle cost analysis, real option valuation, and Monte Carlo simulation. The database was established with 437 multi-family housing complexes (MFHCs), including 116 BEER-certified MFHCs and 321 non-certified MFHCs. A case study considering a 20-year life cycle was conducted to validate the developed prediction model using the 321 non-certified MFHCs. As a result, compared to the additional construction cost, the average cost saving potentials of the 1st-BEER-certified MFHCs in Groups 1, 2, and 3 were predicted to be 3.77%, 2.78%, and 2.87%, respectively. The cost saving potentials can be used as a guideline for the additional construction cost of the BEER certification in the early design phase.

  20. Novel patch modelling method for efficient simulation and prediction uncertainty analysis of multi-scale groundwater flow and transport processes

    Science.gov (United States)

    Sreekanth, J.; Moore, Catherine

    2018-04-01

    The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins are typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models as well as parent models to child models in a computationally efficient manner. This feedback mechanism is simple and flexible and ensures that while the salient small scale features influencing larger scale prediction are transferred back to the larger scale, this does not require the live coupling of models. This method allows the modelling of multiple groundwater flow and transport processes using separate groundwater models that are built for the appropriate spatial and temporal scales, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large scale aquifer injection scheme in Australia.

  1. Incremental validity of positive orientation: predictive efficiency beyond the five-factor model

    Directory of Open Access Journals (Sweden)

    Łukasz Roland Miciuk

    2016-05-01

    Background: The relation of positive orientation (a basic predisposition to think positively of oneself, one's life and one's future) to personality traits is still disputable. The purpose of the described research was to verify the hypothesis that positive orientation has predictive efficiency beyond the five-factor model. Participants and procedure: One hundred and thirty participants (mean age M = 24.84) completed the following questionnaires: the Self-Esteem Scale (SES), the Satisfaction with Life Scale (SWLS), the Life Orientation Test-Revised (LOT-R), the Positivity Scale (P-SCALE), the NEO Five Factor Inventory (NEO-FFI), the Self-Concept Clarity Scale (SCC), the Generalized Self-Efficacy Scale (GSES) and the Life Engagement Test (LET). Results: Introducing positive orientation as an additional predictor in the second step of the regression analyses led to better prediction of the following variables: purpose in life, self-concept clarity and generalized self-efficacy. The effect was strongest for predicting purpose in life (a 14% increment in explained variance). Conclusions: The results confirmed the hypothesis that positive orientation shows incremental validity: its inclusion in the regression model, in addition to the five main factors of personality, increases the amount of explained variance. These findings provide further evidence for the legitimacy of measuring positive orientation and personality traits separately.

  2. Modeling Dynamic Systems with Efficient Ensembles of Process-Based Models.

    Directory of Open Access Journals (Sweden)

    Nikola Simidjievski

    Ensembles are a well-established machine learning paradigm, leading to accurate and robust models, predominantly applied to predictive modeling tasks. Ensemble models comprise a finite set of diverse predictive models whose combined output is expected to yield improved predictive performance compared to an individual model. In this paper, we propose a new method for learning ensembles of process-based models of dynamic systems. The process-based modeling paradigm employs domain-specific knowledge to automatically learn models of dynamic systems from time-series observational data. Previous work has shown that ensembles based on sampling observational data (i.e., bagging and boosting) significantly improve the predictive performance of process-based models. However, this improvement comes at the cost of a substantial increase in the computational time needed for learning. To address this problem, the paper proposes a method that aims at efficiently learning ensembles of process-based models while maintaining their accurate long-term predictive performance. This is achieved by constructing ensembles by sampling domain-specific knowledge instead of sampling data. We apply the proposed method to a set of problems of automated predictive modeling in three lake ecosystems, using a library of process-based knowledge for modeling population dynamics, and evaluate its performance. The experimental results identify the optimal design decisions regarding the learning algorithm. The results also show that the proposed ensembles yield significantly more accurate predictions of population dynamics compared to individual process-based models. Finally, while their predictive performance is comparable to that of ensembles obtained with the state-of-the-art methods of bagging and boosting, they are substantially more efficient.

  3. Sparse RNA folding revisited: space-efficient minimum free energy structure prediction.

    Science.gov (United States)

    Will, Sebastian; Jabbari, Hosna

    2016-01-01

    RNA secondary structure prediction by energy minimization is the central computational tool for the analysis of structural non-coding RNAs and their interactions. Sparsification has been successfully applied to improve the time efficiency of various structure prediction algorithms while guaranteeing the same result; however, for many such folding problems, space efficiency is of even greater concern, particularly for long RNA sequences. So far, space-efficient sparsified RNA folding with fold reconstruction was solved only for simple base-pair-based pseudo-energy models. Here, we revisit the problem of space-efficient free energy minimization. Whereas the space-efficient minimization of the free energy has been sketched before, the reconstruction of the optimum structure has not even been discussed. We show that this reconstruction is not possible as a trivial extension of the method for simple energy models. We then present the time- and space-efficient sparsified free energy minimization algorithm SparseMFEFold, which guarantees MFE structure prediction. In particular, this novel algorithm provides efficient fold reconstruction based on dynamically garbage-collected trace arrows. The complexity of our algorithm depends on two parameters, the number of candidates Z and the number of trace arrows T; both are bounded by [Formula: see text], but are typically much smaller. The time complexity of RNA folding is reduced from [Formula: see text] to [Formula: see text]; the space complexity, from [Formula: see text] to [Formula: see text]. Our empirical results show more than 80% space savings over RNAfold [Vienna RNA package] on the long RNAs from the RNA STRAND database (≥2500 bases). The presented technique is intentionally generalizable to complex prediction algorithms; due to their high space demands, algorithms like pseudoknot prediction and RNA-RNA interaction prediction are expected to profit even more strongly than "standard" MFE folding. SparseMFEFold is free…

  4. Model predictive control-based efficient energy recovery control strategy for regenerative braking system of hybrid electric bus

    International Nuclear Information System (INIS)

    Li, Liang; Zhang, Yuanbo; Yang, Chao; Yan, Bingjie; Marina Martinez, C.

    2016-01-01

    Highlights: • A 7-degree-of-freedom model of a hybrid electric vehicle with a regenerative braking system is built. • A modified nonlinear model predictive control strategy is developed. • The particle swarm optimization algorithm is employed to solve the optimization problem. • The proposed control strategy is verified by simulation and hardware-in-loop tests. • Test results verify the effectiveness of the proposed control strategy. - Abstract: As one of the main working modes, the energy recovered with the regenerative braking system provides an effective approach to greatly improving the fuel economy of a hybrid electric bus. However, it is still a challenging issue to ensure braking stability while maximizing braking energy recovery. To solve this problem, an efficient energy recovery control strategy is proposed based on a modified nonlinear model predictive control method. Firstly, combined with the characteristics of the compound braking process of a single-shaft parallel hybrid electric bus, a 7-degree-of-freedom model of the vehicle longitudinal dynamics is built. Secondly, considering the nonlinear characteristics of the vehicle model and the efficiency of the regenerative braking system, the particle swarm optimization algorithm within the modified nonlinear model predictive control is adopted to optimize the torque distribution between the regenerative braking system and the pneumatic braking system at the wheels. To reduce the computational time of the modified nonlinear model predictive control, a nearest point method is employed during the braking process. Finally, simulation and hardware-in-loop tests are carried out on road conditions with different tire–road adhesion coefficients, and the proposed control strategy is verified by comparing it with the conventional control method employed in the baseline vehicle controller. The simulation and hardware-in-loop test results show that the proposed strategy can ensure vehicle safety during emergency braking.

  5. Inverse and Predictive Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Syracuse, Ellen Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-27

    The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple (one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions) to the complex (multidimensional models that are constrained by several types of data and result in more accurate predictions). While team members typically build models of geophysical characteristics of the Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  6. Energy-Efficient Integration of Continuous Context Sensing and Prediction into Smartwatches

    Directory of Open Access Journals (Sweden)

    Reza Rawassizadeh

    2015-09-01

    As the availability and use of wearables increases, they are becoming a promising platform for context sensing and context analysis. Smartwatches are a particularly interesting platform for this purpose, as they offer salient advantages, such as their proximity to the human body. However, they also have limitations associated with their small form factor, such as processing power and battery life, which makes it difficult to simply transfer smartphone-based context sensing and prediction models to smartwatches. In this paper, we introduce an energy-efficient, generic, integrated framework for continuous context sensing and prediction on smartwatches. Our work extends previous approaches for context sensing and prediction on wrist-mounted wearables that perform predictive analytics outside the device. We offer a generic sensing module and a novel energy-efficient, on-device prediction module that is based on a semantic abstraction approach to convert sensor data into meaningful information objects, similar to human perception of a behavior. Through six evaluations, we analyze the energy efficiency of our framework modules, identify the optimal file structure for data access and demonstrate an increase in accuracy of prediction through our semantic abstraction method. The proposed framework is hardware independent and can serve as a reference model for implementing context sensing and prediction on small wearable devices beyond smartwatches, such as body-mounted cameras.

  8. Resource competition model predicts zonation and increasing nutrient use efficiency along a wetland salinity gradient

    Science.gov (United States)

    Schoolmaster, Donald; Stagg, Camille L.

    2018-01-01

    A trade-off between competitive ability and stress tolerance has been hypothesized and empirically supported to explain the zonation of species across stress gradients for a number of systems. Since stress often reduces plant productivity, one might expect a pattern of decreasing productivity across the zones of the stress gradient. However, this pattern is often not observed in coastal wetlands that show patterns of zonation along a salinity gradient. To address the potentially complex relationship between stress, zonation, and productivity in coastal wetlands, we developed a model of plant biomass as a function of resource competition and salinity stress. Analysis of the model confirms the conventional wisdom that a trade-off between competitive ability and stress tolerance is a necessary condition for zonation. It also suggests that a negative relationship between salinity and production can be overcome if (1) the supply of the limiting resource increases with greater salinity stress or (2) nutrient use efficiency increases with increasing salinity. We fit the equilibrium solution of the dynamic model to data from Louisiana coastal wetlands to test its ability to explain patterns of production across the landscape gradient and derive predictions that could be tested with independent data. We found support for a number of the model predictions, including patterns of decreasing competitive ability and increasing nutrient use efficiency across a gradient from freshwater to saline wetlands. In addition to providing a quantitative framework to support the mechanistic hypotheses of zonation, these results suggest that this simple model is a useful platform to further build upon, simulate and test mechanistic hypotheses of more complex patterns and phenomena in coastal wetlands.

  9. An efficient numerical target strength prediction model: Validation against analytical solutions

    NARCIS (Netherlands)

    Fillinger, L.; Nijhof, M.J.J.; Jong, C.A.F. de

    2014-01-01

    A decade ago, TNO developed RASP (Rapid Acoustic Signature Prediction), a numerical model for the prediction of the target strength of immersed underwater objects. The model is based on Kirchhoff diffraction theory. It is currently being improved to model refraction, angle-dependent reflection and …

  10. ARCH Models Efficiency Evaluation in Prediction and Poultry Price Process Formation

    Directory of Open Access Journals (Sweden)

    Behzad Fakari Sardehae

    2016-09-01

    … This study shows, via an LM test, that heterogeneous variance exists in the error term. Results and Discussion: The stationarity tests showed that the poultry price has a unit root and is stationary at one lag difference, so the price series was used in first differences. The main results showed that ARCH is the best model for predicting fluctuations. Moreover, news has an asymmetric effect on poultry price fluctuations: good news has a stronger effect than bad news, and no leverage effect exists in the poultry price. Current fluctuations also do not transmit to the future. One of the main assumptions of time series models is constant variance in the estimated coefficients; if this assumption does not hold, the coefficients estimated from the serial correlation in the data will be biased and lead to wrong interpretations. The results showed that ARCH effects exist in the error terms of the poultry price, so the ARCH family with a Student-t distribution should be used; normality tests of the error term and examination of heterogeneous variance are needed, and neglecting them causes false conclusions. The results also showed that ARCH models have good predictive power and that ARMA models are less efficient, indicating that nonlinear predictions are better than linear ones. Accordingly, the Student-t distribution should be used as the target distribution in the estimated models. Conclusion: The huge demand for poultry requires the creation of infrastructure to respond to it. The results showed that poultry price volatility changes over time and may intensify at any time. The asymmetric effect of good and bad news on the poultry price leads to consumer reaction: good news had significant effects on the poultry market and created positive change in the poultry price, but bad news did not have significant effects. In fact, because poultry is an essential product in the household portfolio, it should not …
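    A hedged sketch of the kind of ARCH fit the abstract describes, assuming the third-party Python arch package and a synthetic price series standing in for the (unavailable) poultry data; the AR(1) mean, ARCH(1) volatility and Student-t distribution mirror the abstract's setup.

    ```python
    import numpy as np
    import pandas as pd
    from arch import arch_model

    rng = np.random.default_rng(0)
    prices = pd.Series(100.0 + np.cumsum(rng.normal(0.0, 1.0, 500)))  # stand-in series
    returns = 100 * np.log(prices).diff().dropna()    # one-lag differences, as in the abstract

    # AR(1) mean, ARCH(1) volatility, Student-t errors
    am = arch_model(returns, mean='AR', lags=1, vol='ARCH', p=1, dist='t')
    res = am.fit(disp='off')
    print(res.summary())
    fc = res.forecast(horizon=5)                      # conditional variance forecasts
    print(fc.variance.iloc[-1])
    ```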

  11. Neural and hybrid modeling: an alternative route to efficiently predict the behavior of biotechnological processes aimed at biofuels obtainment.

    Science.gov (United States)

    Curcio, Stefano; Saraceno, Alessandra; Calabrò, Vincenza; Iorio, Gabriele

    2014-01-01

    The present paper was aimed at showing that advanced modeling techniques, based either on artificial neural networks or on hybrid systems, can efficiently predict the behavior of two biotechnological processes designed for the obtainment of second-generation biofuels from waste biomasses. In particular, the enzymatic transesterification of waste-oil glycerides, the key step in the production of biodiesel, and the anaerobic digestion of agroindustry wastes to produce biogas were modeled. It was proved that the proposed modeling approaches provided very accurate predictions of system behavior. Both neural network and hybrid modeling definitely represent a valid alternative to traditional theoretical models, especially when comprehensive knowledge of the metabolic pathways, the true kinetic mechanisms, and the transport phenomena involved in biotechnological processes is difficult to achieve.

  12. Comparison of particle-wall interaction boundary conditions in the prediction of cyclone collection efficiency in computational fluid dynamics (CFD) modeling

    International Nuclear Information System (INIS)

    Valverde Ramirez, M.; Coury, J.R.; Goncalves, J.A.S.

    2009-01-01

    In recent years, many computational fluid dynamics (CFD) studies have attempted to predict cyclone pressure drop and collection efficiency. While these studies have been able to predict pressure drop well, they have been only moderately successful in predicting collection efficiency. Part of the reason for this failure has been attributed to the relatively simple wall boundary conditions implemented in commercially available CFD software, which are not capable of accurately describing the complex particle-wall interaction present in a cyclone. Accordingly, researchers have proposed a number of different boundary conditions in order to improve model performance. This work implemented the critical velocity boundary condition through a user-defined function (UDF) in the Fluent software and compared its predictions both with experimental data and with the predictions obtained using Fluent's built-in boundary conditions. Experimental data were obtained from eight laboratory-scale cyclones with varying geometric ratios. The CFD simulations were made using the software Fluent 6.3.26. (author)
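    The paper's UDF is written in C against Fluent's API, which is not reproduced here. The Python sketch below only illustrates the decision logic of a critical-velocity particle-wall boundary condition under the usual adhesion picture (impacts slower than the capture velocity deposit, faster ones rebound); v_crit and the restitution coefficients are user inputs, not values from the paper.

    ```python
    import numpy as np

    def particle_wall_bc(v, n_hat, v_crit, e_n=0.9, e_t=0.9):
        """Schematic critical-velocity particle-wall interaction.

        v: particle velocity vector at impact; n_hat: unit wall normal;
        v_crit: critical capture velocity (from any adhesion model).
        A particle whose wall-normal impact speed is below v_crit is
        trapped (collected); otherwise it rebounds with normal and
        tangential restitution coefficients e_n and e_t."""
        v_n = np.dot(v, n_hat)                 # normal velocity component
        if abs(v_n) < v_crit:
            return None                        # trapped: remove from tracking
        v_normal = v_n * n_hat
        v_tangent = v - v_normal
        return -e_n * v_normal + e_t * v_tangent   # reflected velocity
    ```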

  13. Wind Turbine Generator Efficiency Based on Powertrain Combination and Annual Power Generation Prediction

    Directory of Open Access Journals (Sweden)

    Dongmyung Kim

    2018-05-01

    Wind turbine generators are eco-friendly generators that produce electric energy from wind energy. In this study, wind turbine generator efficiency is examined for different powertrain combinations, and annual power generation is predicted using an analysis model. Performance testing was conducted in order to analyze the efficiency of the hydraulic pump and motor, which are key components, and to verify the analysis model. The annual wind speed frequency distribution for the expected installation areas was used to predict the annual power generation of the wind turbine generators. It was found that the parallel combination of the induction motors exhibited higher efficiency when the wind speed was low, and the serial combination showed higher efficiency when the wind speed was high. The results of predicting the annual power generation considering the regional characteristics showed that power generation was highest when the hydraulic motors were arranged in parallel and the induction motors in series.

  15. Modelling impacts of performance on the probability of reproducing, and thereby on productive lifespan, allow prediction of lifetime efficiency in dairy cows.

    Science.gov (United States)

    Phuong, H N; Blavy, P; Martin, O; Schmidely, P; Friggens, N C

    2016-01-01

    Reproductive success is a key component of lifetime efficiency, the ratio of energy in milk (MJ) to energy intake (MJ) over the lifespan of a cow. At the animal level, breeding and feeding management can substantially impact the milk yield, body condition and energy balance of cows, which are known to be major contributors to reproductive failure in dairy cattle. This study extended an existing lifetime performance model to incorporate the impacts that performance changes, due to changing breeding and feeding strategies, have on the probability of reproducing and thereby on the productive lifespan, thus allowing the prediction of a cow's lifetime efficiency. The model is dynamic and stochastic, with an individual cow being the unit modelled and one day being the unit of time. To evaluate the model, data from a French study including Holstein and Normande cows fed high-concentrate diets, and data from a Scottish study including Holstein cows selected for high and average genetic merit for fat plus protein and fed high- v. low-concentrate diets, were used. Generally, the model consistently simulated the productive and reproductive performance of various genotypes of cows across feeding systems. In the French data, the model adequately simulated the reproductive performance of Holsteins but significantly under-predicted that of Normande cows. In the Scottish data, conception to first service was simulated comparably well, whereas interval traits were slightly under-predicted. Selection for greater milk production impaired reproductive performance and lifespan but not lifetime efficiency. The definition of lifetime efficiency used in this model did not include associated costs or herd-level effects; further work should include such economic indicators to allow more accurate simulation of lifetime profitability in different production scenarios.

  16. Relationship between efficiency and predictability in stock price change

    Science.gov (United States)

    Eom, Cheoljun; Oh, Gabjin; Jung, Woo-Sung

    2008-09-01

    In this study, we evaluate the relationship between efficiency and predictability in the stock market. The efficiency, which is the issue addressed by the weak-form efficient market hypothesis, is calculated using the Hurst exponent and the approximate entropy (ApEn). The predictability corresponds to the hit-rate; this is the rate of consistency between the direction of the actual price change and that of the predicted price change, as calculated via the nearest neighbor prediction method. We determine that the Hurst exponent and the ApEn value are negatively correlated. However, predictability is positively correlated with the Hurst exponent.
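    The abstract does not specify its Hurst estimator; a common choice is rescaled-range (R/S) analysis, sketched below for orientation. The chunk-size schedule and the least-squares fit are conventional choices, not taken from the paper.

    ```python
    import numpy as np

    def hurst_rs(x, min_chunk=8):
        """Hurst exponent by rescaled-range (R/S) analysis.
        H ~ 0.5: random walk; H > 0.5: persistent; H < 0.5: anti-persistent."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        k_max = int(np.log2(n / min_chunk))
        sizes = np.unique((n / 2 ** np.arange(k_max + 1)).astype(int))
        log_s, log_rs = [], []
        for s in sizes:
            rs_vals = []
            for start in range(0, n - s + 1, s):
                chunk = x[start:start + s]
                z = np.cumsum(chunk - chunk.mean())     # cumulative deviation
                r, sd = z.max() - z.min(), chunk.std()
                if sd > 0:
                    rs_vals.append(r / sd)
            if rs_vals:
                log_s.append(np.log(s))
                log_rs.append(np.log(np.mean(rs_vals)))
        H, _ = np.polyfit(log_s, log_rs, 1)             # slope = Hurst exponent
        return H
    ```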

  17. Predictive models for PEM-electrolyzer performance using adaptive neuro-fuzzy inference systems

    Energy Technology Data Exchange (ETDEWEB)

    Becker, Steffen [University of Tasmania, Hobart 7001, Tasmania (Australia); Karri, Vishy [Australian College of Kuwait (Kuwait)

    2010-09-15

    Predictive models were built using neural-network-based Adaptive Neuro-Fuzzy Inference Systems for the hydrogen flow rate, electrolyzer system efficiency and stack efficiency, respectively. A comprehensive experimental database forms the foundation for the predictive models. It is argued that, due to the high costs associated with hydrogen measuring equipment, these reliable predictive models can be implemented as virtual sensors. The models can also be used on-line for the monitoring and safety of hydrogen equipment. The quantitative accuracy of the predictive models is appraised using statistical techniques. These mathematical models are found to be reliable predictive tools with an excellent accuracy of ±3% compared with experimental values. The predictive nature of these models did not show any significant bias toward either over-prediction or under-prediction. These predictive models, built on a sound mathematical and quantitative basis, can be seen as a step towards establishing hydrogen performance prediction models as generic virtual sensors for wider safety and monitoring applications. (author)

  18. Predicting acid dew point with a semi-empirical model

    International Nuclear Information System (INIS)

    Xiang, Baixiang; Tang, Bin; Wu, Yuxin; Yang, Hairui; Zhang, Man; Lu, Junfu

    2016-01-01

    Highlights: • The previous semi-empirical models are systematically studied. • An improved thermodynamic correlation is derived. • A semi-empirical prediction model is proposed. • The proposed semi-empirical model is validated. - Abstract: Decreasing the temperature of the exhaust flue gas in boilers is one of the most effective ways to further improve thermal efficiency and electrostatic precipitator efficiency and to decrease the water consumption of the desulfurization tower; however, when this temperature falls below the acid dew point, fouling and corrosion occur on the heating surfaces in the second pass of the boiler. Being able to predict the acid dew point accurately is therefore essential. By investigating the previous models for acid dew point prediction, an improved thermodynamic correlation between the acid dew point and its influencing factors is first derived. A semi-empirical prediction model is then proposed, validated against both field-test and experimental data, and compared with the previous models.
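    The paper's improved correlation is not given in the abstract. For orientation, the sketch below implements the widely cited Verhoff-Banchero correlation for the sulfuric acid dew point, with coefficients as commonly reproduced in the literature; verify against the original source before any engineering use. This is an example of the correlation family the paper builds on, not the paper's own model.

    ```python
    import math

    def acid_dew_point_verhoff(p_h2o_mmhg, p_so3_mmhg):
        """Sulfuric acid dew point (K), Verhoff-Banchero correlation, with
        H2O and SO3 partial pressures in mmHg. Coefficients as commonly
        reproduced in the literature; verify before engineering use."""
        a = math.log(p_h2o_mmhg)
        b = math.log(p_so3_mmhg)
        inv_t = 1e-3 * (2.276 - 0.02943 * a - 0.0858 * b + 0.0062 * a * b)
        return 1.0 / inv_t

    # Example: 10% H2O (76 mmHg), 10 ppm SO3 (0.0076 mmHg) -> roughly 410 K
    print(acid_dew_point_verhoff(76.0, 0.0076))
    ```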

  19. Evaluation of the energy efficiency of enzyme fermentation by mechanistic modeling.

    Science.gov (United States)

    Albaek, Mads O; Gernaey, Krist V; Hansen, Morten S; Stocks, Stuart M

    2012-04-01

    Modeling biotechnological processes is key to obtaining increased productivity and efficiency. Particularly crucial to successful modeling of such systems is the coupling of the physical transport phenomena and the biological activity in one model. We have applied a model for the expression of cellulosic enzymes by the filamentous fungus Trichoderma reesei and found excellent agreement with experimental data. The most influential factor was demonstrated to be viscosity and its influence on mass transfer. Not surprisingly, the biological model is also shown to have high influence on the model prediction. At different rates of agitation and aeration as well as headspace pressure, we can predict the energy efficiency of oxygen transfer, a key process parameter for economical production of industrial enzymes. An inverse relationship between the productivity and energy efficiency of the process was found. This modeling approach can be used by manufacturers to evaluate the enzyme fermentation process for a range of different process conditions with regard to energy efficiency. Copyright © 2011 Wiley Periodicals, Inc.

  20. Urban eco-efficiency and system dynamics modelling

    Energy Technology Data Exchange (ETDEWEB)

    Hradil, P., Email: petr.hradil@vtt.fi

    2012-06-15

    Assessment of urban development is generally based on static models of economic, social or environmental impacts. More advanced dynamic models have been used mostly for the prediction of population and employment changes, as well as for other macro-economic issues. This feasibility study was arranged to test the potential of system dynamics modelling in assessing eco-efficiency changes during urban development. (orig.)

  1. Poisson Mixture Regression Models for Heart Disease Prediction.

    Science.gov (United States)

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a Zero-Inflated Poisson Mixture Regression model turned out to be the best model for heart disease prediction over all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease component-wise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks component-wise using a Poisson mixture regression model.
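
    The mixture idea can be illustrated with an intercept-only two-component Poisson mixture fitted by EM and compared against a single Poisson fit via BIC (the paper's models additionally include regression covariates; the data below are synthetic):

    ```python
    import numpy as np
    from scipy.stats import poisson

    def poisson_mixture_em(y, n_iter=300):
        """Two-component Poisson mixture via EM; returns rates, weights, BIC."""
        lam = np.array([y.mean() * 0.5, y.mean() * 1.5])
        pi = np.array([0.5, 0.5])
        for _ in range(n_iter):
            resp = pi * poisson.pmf(y[:, None], lam)          # E-step
            resp /= resp.sum(axis=1, keepdims=True)
            pi = resp.mean(axis=0)                            # M-step
            lam = (resp * y[:, None]).sum(axis=0) / resp.sum(axis=0)
        ll = np.log((pi * poisson.pmf(y[:, None], lam)).sum(axis=1)).sum()
        return lam, pi, 3 * np.log(len(y)) - 2 * ll           # k = 3 parameters

    rng = np.random.default_rng(10)
    y = np.concatenate([rng.poisson(1.0, 300), rng.poisson(6.0, 100)])  # 2 risk groups
    bic_single = np.log(len(y)) - 2 * poisson.logpmf(y, y.mean()).sum() # k = 1
    lam, pi, bic_mix = poisson_mixture_em(y)
    print(f"single Poisson BIC = {bic_single:.1f}, mixture BIC = {bic_mix:.1f}")
    ```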

  3. Capability of the "Ball-Berry" model for predicting stomatal conductance and water use efficiency of potato leaves under different irrigation regimes

    DEFF Research Database (Denmark)

    Liu, Fulai; Andersen, Mathias N.; Jensen, Christian Richardt

    2009-01-01

    The capability of the 'Ball-Berry' model (BB-model) in predicting stomatal conductance (gs) and water use efficiency (WUE) of potato (Solanum tuberosum L.) leaves under different irrigation regimes was tested using data from two independent pot experiments in 2004 and 2007. Data obtained from 2004 was used for model parameterization, where measurements of midday leaf gas exchange of potted potatoes were done during progressive soil drying for 2 weeks at tuber initiation and earlier bulking stages. The measured photosynthetic rate (An) was used as an input for the model. To account for the effects of soil water deficits on gs, a simple equation modifying the slope (m) based on the mean soil water potential (Ψs) in the soil columns was incorporated into the original BB-model. Compared with the original BB-model, the modified BB-model showed better predictability for both gs and WUE of potato leaves.
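
    For reference, the original Ball-Berry relation is gs = m*An*hs/Cs + b; the record describes modifying the slope m as a function of mean soil water potential. A sketch with an assumed exponential modifier (the authors' fitted equation and constants are not given in the record):

    ```python
    import numpy as np

    def ball_berry_gs(An, hs, Cs, m=9.0, b=0.01):
        """Original Ball-Berry model: gs = m * An * hs / Cs + b.
        An: net photosynthesis (umol m-2 s-1), hs: relative humidity at the
        leaf surface (0-1), Cs: CO2 at the leaf surface (umol mol-1)."""
        return m * An * hs / Cs + b

    def modified_gs(An, hs, Cs, psi_s, m0=9.0, b=0.01, k=0.2):
        """Soil-water-modified slope: m shrinks as the mean soil water
        potential psi_s (MPa, <= 0) becomes more negative. The exponential
        form and the constant k are illustrative assumptions."""
        return m0 * np.exp(k * psi_s) * An * hs / Cs + b

    print(ball_berry_gs(An=15.0, hs=0.6, Cs=360.0))              # well-watered
    print(modified_gs(An=10.0, hs=0.6, Cs=360.0, psi_s=-3.0))    # drying soil
    ```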

  4. Plateletpheresis efficiency and mathematical correction of software-derived platelet yield prediction: A linear regression and ROC modeling approach.

    Science.gov (United States)

    Jaime-Pérez, José Carlos; Jiménez-Castillo, Raúl Alberto; Vázquez-Hernández, Karina Elizabeth; Salazar-Riojas, Rosario; Méndez-Ramírez, Nereida; Gómez-Almaguer, David

    2017-10-01

    Advances in automated cell separators have improved the efficiency of plateletpheresis and the possibility of obtaining double products (DP). We assessed cell processor accuracy of predicted platelet (PLT) yields with the goal of a better prediction of DP collections. This retrospective proof-of-concept study included 302 plateletpheresis procedures performed on a Trima Accel v6.0 at the apheresis unit of a hematology department. Donor variables, software-predicted yield and actual PLT yield were statistically evaluated. Software prediction was optimized by linear regression analysis and its optimal cut-off to obtain a DP assessed by receiver operating characteristic (ROC) curve modeling. Three hundred and two plateletpheresis procedures were performed; on 271 (89.7%) occasions donors were men and on 31 (10.3%) women. Pre-donation PLT count had the best direct correlation with actual PLT yield (r = 0.486), and the software tended to underestimate the actual yield. A simple correction derived from linear regression analysis accurately corrected this underestimation, and ROC analysis identified a precise cut-off to reliably predict a DP. © 2016 Wiley Periodicals, Inc.
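
    The two statistical steps are straightforward to reproduce on synthetic numbers (the dataset, the regression coefficients and the double-product threshold below are all hypothetical):

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import roc_curve

    rng = np.random.default_rng(0)
    software = rng.normal(5.5, 1.0, 302)                      # predicted yield (x1e11 PLT)
    actual = 1.08 * software + 0.3 + rng.normal(0, 0.4, 302)  # software underestimates

    # Step 1: correct the software prediction by linear regression.
    reg = LinearRegression().fit(software.reshape(-1, 1), actual)
    corrected = reg.predict(software.reshape(-1, 1))

    # Step 2: ROC analysis for the double-product (DP) decision.
    is_dp = (actual >= 6.0).astype(int)        # assumed DP threshold
    fpr, tpr, thr = roc_curve(is_dp, corrected)
    best = np.argmax(tpr - fpr)                # Youden's J statistic
    print(f"corrected-yield cut-off for a DP: {thr[best]:.2f} x 1e11 PLT")
    ```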

  5. A theoretical model for prediction of deposition efficiency in cold spraying

    International Nuclear Information System (INIS)

    Li Changjiu; Li Wenya; Wang Yuyue; Yang Guanjun; Fukanuma, H.

    2005-01-01

    The deposition behavior of a spray particle stream with a particle size distribution was theoretically examined for cold spraying in terms of deposition efficiency as a function of particle parameters and spray angle. A theoretical relation was established between the deposition efficiency and the spray angle. Experiments were conducted by measuring the deposition efficiency at different driving gas conditions and different spray angles using gas-atomized copper powder. It was found that the theoretically estimated results agreed reasonably well with the experimental ones. Based on the theoretical model and experimental results, it was revealed that the distribution of particle velocity resulting from the particle size distribution significantly influences the deposition efficiency in cold spraying. It was necessary for the majority of particles to achieve a velocity higher than the critical velocity in order to improve the deposition efficiency. The normal component of particle velocity contributed to the deposition of the particle under off-normal spray conditions. The deposition efficiency of the sprayed particles decreased owing to the decrease of the normal velocity component when spraying was performed at an off-normal angle.
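
    The core of the model, namely that only particles whose velocity component normal to the substrate exceeds the critical velocity deposit, can be sketched numerically (the velocity distribution and the critical velocity below are assumed values, not the paper's):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Assumed lognormal spread of particle velocities (m/s) caused by the
    # particle size distribution of the gas-atomized copper powder.
    v = rng.lognormal(mean=np.log(550.0), sigma=0.15, size=100_000)
    v_crit = 560.0                      # assumed critical velocity for copper

    for angle_deg in (90, 80, 70, 60):  # 90 deg = spraying normal to the substrate
        v_normal = v * np.sin(np.radians(angle_deg))
        de = (v_normal > v_crit).mean() # fraction of particles that deposit
        print(f"spray angle {angle_deg} deg -> deposition efficiency {de:.2f}")
    ```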

  6. Influence of the radial-inflow turbine efficiency prediction on the design and analysis of the Organic Rankine Cycle (ORC) system

    International Nuclear Information System (INIS)

    Song, Jian; Gu, Chun-wei; Ren, Xiaodong

    2016-01-01

    Highlights: • The efficiency prediction is based on the velocity triangle and loss models. • The efficiency selection has a big influence on the working fluid selection. • The efficiency selection has a big influence on system parameter determination. - Abstract: The radial-inflow turbine is a common choice for the power output in the Organic Rankine Cycle (ORC) system. Its efficiency is related to the working fluid property and the system operating condition. Generally, the radial-inflow turbine efficiency is assumed to be a constant value in the conventional ORC system analysis. Few studies focus on the influence of the radial-inflow turbine efficiency selection on the system design and analysis. Actually, the ORC system design and the radial-inflow turbine design are coupled with each other. Different thermal parameters of the ORC system would lead to different radial-inflow turbine design and then different turbine efficiency, and vice versa. Therefore, considering the radial-inflow turbine efficiency prediction in the ORC system design can enhance its reliability and accuracy. In this paper, a one-dimensional analysis model for the radial-inflow turbine in the ORC system is presented. The radial-inflow turbine efficiency prediction in this model is based on the velocity triangle and loss models, rather than a constant efficiency assumption. The influence of the working fluid property and the system operating condition on the turbine performance is evaluated. The thermodynamic analysis of the ORC system with a model predicted turbine efficiency and a constant turbine efficiency is conducted and the results are compared with each other. It indicates that the turbine efficiency selection has a significant influence on the working fluid selection and the system parameter determination.

  7. A Robust Model Predictive Control for efficient thermal management of internal combustion engines

    International Nuclear Information System (INIS)

    Pizzonia, Francesco; Castiglione, Teresa; Bova, Sergio

    2016-01-01

    Highlights: • A Robust Model Predictive Control for ICE thermal management was developed. • The proposed control is effective in decreasing the warm-up time. • The control system reduces coolant flow rate under fully warmed conditions. • The control strategy operates the cooling system around onset of nucleate boiling. • Little on-line computational effort is required. - Abstract: Optimal thermal management of modern internal combustion engines (ICE) is one of the key factors for reducing fuel consumption and CO2 emissions. These are measured by using standardized driving cycles, like the New European Driving Cycle (NEDC), during which the engine does not reach thermal steady state; engine efficiency and emissions are therefore penalized. Several techniques for improving ICE thermal efficiency have been proposed, ranging from the use of empirical look-up tables to pulsed pump operation. A systematic approach to the problem is however still missing, and this paper aims to bridge this gap. The paper proposes a Robust Model Predictive Control of the coolant flow rate, which makes use of a zero-dimensional model of the cooling system of an ICE. The control methodology incorporates the model uncertainties explicitly and achieves the synthesis of a state-feedback control law that minimizes the "worst case" objective function while taking into account the system constraints, as proposed by Kothare et al. (1996). The proposed control strategy is to adjust the coolant flow rate by means of an electric pump, in order to bring the cooling system to operate around the onset of nucleate boiling: across it during warm-up and above it (nucleate or saturated boiling) under fully warmed conditions. The computationally heavy optimization is carried out off-line, while during the operation of the engine the control parameters are simply picked up on-line from look-up tables. Owing to the little computational effort required, the resulting control strategy is suitable for real-time implementation.

  8. Electrostatic ion thrusters - towards predictive modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)

    2014-02-15

    The development of electrostatic ion thrusters so far has mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This resulted in considerable effort regarding prototype design, construction and testing, and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and spacecraft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules, a new quality in the description of electrostatic thrusters can be reached. These open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  9. Including model uncertainty in the model predictive control with output feedback

    Directory of Open Access Journals (Sweden)

    Rodrigues M.A.

    2002-01-01

    This paper addresses the development of an efficient numerical output feedback robust model predictive controller for open-loop stable systems. Stability of the closed loop is guaranteed by using an infinite horizon predictive controller and a stable state observer. The performance and the computational burden of this approach are compared to a robust predictive controller from the literature. The case used for this study is based on an industrial gasoline debutanizer column.

  10. Efficient modeling of vector hysteresis using fuzzy inference systems

    International Nuclear Information System (INIS)

    Adly, A.A.; Abd-El-Hafiz, S.K.

    2008-01-01

    Vector hysteresis models have always been regarded as important tools by which multi-dimensional magnetic field-media interactions may be predicted. In the past, considerable efforts have been focused on mathematical modeling methodologies of vector hysteresis. This paper presents an efficient approach based upon fuzzy inference systems for modeling vector hysteresis. The computational efficiency of the proposed approach stems from the fact that the basic non-local memory Preisach-type hysteresis model is approximated by a local memory model. The proposed low-computational-cost methodology can be easily integrated in field calculation packages involving massive multi-dimensional discretizations. Details of the modeling methodology and its experimental testing are presented.

  11. Spatial extrapolation of light use efficiency model parameters to predict gross primary production

    Directory of Open Access Journals (Sweden)

    Karsten Schulz

    2011-12-01

    To capture the spatial and temporal variability of the gross primary production as a key component of the global carbon cycle, the light use efficiency modeling approach in combination with remote sensing data has shown to be well suited. Typically, the model parameters, such as the maximum light use efficiency, are either set to a universal constant or to land class dependent values stored in look-up tables. In this study, we employ the machine learning technique support vector regression to explicitly relate the model parameters of a light use efficiency model calibrated at several FLUXNET sites to site-specific characteristics obtained by meteorological measurements, ecological estimations and remote sensing data. A feature selection algorithm extracts the relevant site characteristics in a cross-validation, and leads to an individual set of characteristic attributes for each parameter. With this set of attributes, the model parameters can be estimated at sites where a parameter calibration is not possible due to the absence of eddy covariance flux measurement data. This will finally allow a spatially continuous model application. The performance of the spatial extrapolation scheme is evaluated with a cross-validation approach, which shows the methodology to be well suited to recapture the variability of gross primary production across the study sites.
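
    The parameter-extrapolation step amounts to regressing calibrated parameters on site attributes; a minimal sketch with synthetic data (the attribute choice and all values are hypothetical):

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(1)
    # Hypothetical site attributes: [mean air temperature (degC), LAI,
    # mean precipitation (mm/d)] for 40 calibration sites.
    X_sites = rng.uniform([0.0, 0.5, 0.5], [25.0, 6.0, 6.0], size=(40, 3))
    # Calibrated maximum light use efficiency at each site (toy values).
    eps_max = 0.02 + 0.001 * X_sites[:, 1] + rng.normal(0, 0.002, 40)

    svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.001))
    svr.fit(X_sites, eps_max)
    print(svr.predict(np.array([[12.0, 3.5, 2.0]])))   # an uninstrumented site
    ```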

  12. The importance of radiation for semiempirical water-use efficiency models

    Science.gov (United States)

    Boese, Sven; Jung, Martin; Carvalhais, Nuno; Reichstein, Markus

    2017-06-01

    Water-use efficiency (WUE) is a fundamental property for the coupling of carbon and water cycles in plants and ecosystems. Existing model formulations predicting this variable differ in the type of response of WUE to the atmospheric vapor pressure deficit of water (VPD). We tested a representative WUE model on the ecosystem scale at 110 eddy covariance sites of the FLUXNET initiative by predicting evapotranspiration (ET) based on gross primary productivity (GPP) and VPD. We found that introducing an intercept term in the formulation increases model performance considerably, indicating that an additional factor needs to be considered. We demonstrate that this intercept term varies seasonally and we subsequently associate it with radiation. Replacing the constant intercept term with a linear function of global radiation was found to further improve model predictions of ET. Our new semiempirical ecosystem WUE formulation indicates that, averaged over all sites, this radiation term accounts for up to half (39-47 %) of transpiration. These empirical findings challenge the current understanding of water-use efficiency on the ecosystem scale.

  13. Probabilistic application of a fugacity model to predict triclosan fate during wastewater treatment.

    Science.gov (United States)

    Bock, Michael; Lyndall, Jennifer; Barber, Timothy; Fuchsman, Phyllis; Perruchon, Elyse; Capdevielle, Marie

    2010-07-01

    The fate and partitioning of the antimicrobial compound, triclosan, in wastewater treatment plants (WWTPs) is evaluated using a probabilistic fugacity model to predict the range of triclosan concentrations in effluent and secondary biosolids. The WWTP model predicts 84% to 92% triclosan removal, which is within the range of measured removal efficiencies (typically 70% to 98%). Triclosan is predominantly removed by sorption and subsequent settling of organic particulates during primary treatment and by aerobic biodegradation during secondary treatment. Median modeled removal efficiency due to sorption is 40% for all treatment phases and 31% in the primary treatment phase. Median modeled removal efficiency due to biodegradation is 48% for all treatment phases and 44% in the secondary treatment phase. Important factors contributing to variation in predicted triclosan concentrations in effluent and biosolids include influent concentrations, solids concentrations in settling tanks, and factors related to solids retention time. Measured triclosan concentrations in biosolids and non-United States (US) effluent are consistent with model predictions. However, median concentrations in US effluent are over-predicted with this model, suggesting that differences in some aspect of treatment practices not incorporated in the model (e.g., disinfection methods) may affect triclosan removal from effluent. Model applications include predicting changes in environmental loadings associated with new triclosan applications and supporting risk analyses for biosolids-amended land and effluent receiving waters. (c) 2010 SETAC.
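
    The probabilistic flavor of the assessment can be mimicked with a small Monte Carlo over removal fractions (the distributions below are illustrative stand-ins, chosen only to echo the reported medians):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    f_sorb = rng.triangular(0.25, 0.40, 0.55, n)   # removal by sorption/settling
    f_bio = rng.triangular(0.30, 0.48, 0.60, n)    # removal by biodegradation
    removal = np.clip(f_sorb + f_bio, 0.0, 1.0)

    influent = rng.lognormal(np.log(5.0), 0.5, n)  # ug/L, assumed influent levels
    effluent = influent * (1.0 - removal)
    print("removal  5/50/95%:", np.percentile(removal, [5, 50, 95]).round(2))
    print("effluent 5/50/95%:", np.percentile(effluent, [5, 50, 95]).round(2))
    ```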

  14. Model Predictive Control of a Wave Energy Converter

    DEFF Research Database (Denmark)

    Andersen, Palle; Pedersen, Tom Søndergård; Nielsen, Kirsten Mølgaard

    2015-01-01

    In this paper reactive control and Model Predictive Control (MPC) for a Wave Energy Converter (WEC) are compared. The analysis is based on a WEC from Wave Star A/S designed as a point absorber. The model predictive controller uses wave models based on the dominating sea states combined with a model connecting undisturbed wave sequences to sequences of torque. Losses in the conversion from mechanical to electrical power are taken into account in two ways. Conventional reactive controllers are tuned for each sea state with the assumption that the converter has the same efficiency back and forth, whereas the MPC accounts for the conversion losses directly in its optimization.

  15. Two-phased DEA-MLA approach for predicting efficiency of NBA players

    Directory of Open Access Journals (Sweden)

    Radovanović Sandro

    2014-01-01

    In sports, the calculation of efficiency is considered to be one of the most challenging tasks. In this paper, DEA is used to evaluate the efficiency of NBA players based on multiple inputs and multiple outputs. The efficiency is evaluated for 26 NBA players at the guard position based on existing data. However, to obtain the efficiency of a new player, the DEA analysis would have to be re-conducted. Therefore, to predict the efficiency of a new player, machine learning algorithms are applied. The DEA results are incorporated as an input for the learning algorithms, thereby defining an efficiency frontier function form with high reliability. In this paper, linear regression, neural networks, and support vector machines are used to predict the efficiency frontier. The results have shown that neural networks can predict the efficiency with an error of less than 1%, and linear regression with an error of less than 2%.

  16. A Predictive Distribution Model for Cooperative Braking System of an Electric Vehicle

    Directory of Open Access Journals (Sweden)

    Hongqiang Guo

    2014-01-01

    A predictive distribution model for a series cooperative braking system of an electric vehicle is proposed, which can solve the real-time problem of optimum braking force distribution. To obtain the predictive distribution model, three disciplines are first considered: the maximum regenerative energy recovery capability, the maximum generating efficiency and the optimum braking stability. An off-line process optimization stream is then designed, in which the optimal Latin hypercube design (Opt LHD) method and a radial basis function neural network (RBFNN) are utilized. In order to decouple the variables between different disciplines, a concurrent subspace design (CSD) algorithm is suggested. The established predictive distribution model is verified in a dynamic simulation. The off-line optimization results show that the proposed process optimization stream can improve the regenerative energy recovery efficiency and optimize the braking stability simultaneously. Further simulation tests demonstrate that the predictive distribution model can achieve high prediction accuracy and is very beneficial for the cooperative braking system.

  17. Predicting multiprocessing efficiency on the Cray multiprocessors in a (CTSS) time-sharing environment/application to a 3-D magnetohydrodynamics code

    International Nuclear Information System (INIS)

    Mirin, A.A.

    1988-01-01

    A formula is derived for predicting multiprocessing efficiency on Cray supercomputers equipped with the Cray Time-Sharing System (CTSS). The model is applicable to an intensive time-sharing environment. The actual efficiency estimate depends on three factors: the code size, task length, and job mix. The implementation of multitasking in a three-dimensional plasma magnetohydrodynamics (MHD) code, TEMCO, is discussed. TEMCO solves the primitive one-fluid compressible MHD equations and includes resistive and Hall effects in Ohm's law. Virtually all segments of the main time-integration loop are multitasked. The multiprocessing efficiency model is applied to TEMCO. Excellent agreement is obtained between the actual multiprocessing efficiency and the theoretical prediction

  18. Prediction and design of efficient exciplex emitters for high-efficiency, thermally activated delayed-fluorescence organic light-emitting diodes.

    Science.gov (United States)

    Liu, Xiao-Ke; Chen, Zhan; Zheng, Cai-Jun; Liu, Chuan-Lin; Lee, Chun-Sing; Li, Fan; Ou, Xue-Mei; Zhang, Xiao-Hong

    2015-04-08

    High-efficiency, thermally activated delayed-fluorescence organic light-emitting diodes based on exciplex emitters are demonstrated. The best device, based on a TAPC:DPTPCz emitter, shows a high external quantum efficiency of 15.4%. Strategies for predicting and designing efficient exciplex emitters are also provided. This approach allows the prediction and design of efficient exciplex emitters for achieving high-efficiency organic light-emitting diodes, for future use in display and lighting applications. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.

    Science.gov (United States)

    Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen

    2014-02-01

    The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards that were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., the relatively static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are first classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP), which uses the background modeled from the original input frames as the long-term reference, and the background difference prediction (BDP), which predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency using the higher-quality background as the reference, whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that the BMAP can achieve at least twice the compression ratio of AVC (MPEG-4 Advanced Video Coding) High Profile on surveillance videos, yet with only a slight additional encoding complexity. Moreover, for the foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
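
    The long-term background reference at the heart of BRP can be approximated with something as simple as a running-average background model (BMAP's actual modeling is more elaborate; this shows only the general idea):

    ```python
    import numpy as np

    def update_background(bg, frame, alpha=0.02):
        """Exponential running average: slowly blend new frames into the model."""
        return (1.0 - alpha) * bg + alpha * frame

    rng = np.random.default_rng(5)
    frames = rng.integers(0, 256, (50, 8, 8)).astype(float)  # toy frame stack
    bg = frames[0]
    for frame in frames[1:]:
        bg = update_background(bg, frame)

    # Blocks close to bg would use the background reference (BRP); blocks with
    # mixed content would be coded in the background-difference domain (BDP).
    diff = np.abs(frames[-1] - bg).mean()
    print(f"mean |frame - background| = {diff:.1f}")
    ```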

  20. Model predictive control technologies for efficient and flexible power consumption in refrigeration systems

    DEFF Research Database (Denmark)

    Hovgaard, Tobias Gybel; Larsen, Lars F. S.; Edlund, Kristian

    2012-01-01

    In this paper we describe a novel economic-optimizing Model Predictive Control (MPC) scheme that reduces operating costs by utilizing the thermal storage capabilities. A nonlinear optimization tool to handle a non-convex cost function is utilized for simulations with validated scenarios. In this way we explicitly address advantages from daily variations in outdoor temperature and electricity prices. Secondly, we formulate a new cost function that enables the refrigeration system to contribute with ancillary services to the balancing power market. This involvement can be economically beneficial. The formulation of the system models allows us to describe and handle model as well as prediction uncertainties in this framework. This means we can demonstrate means for robustifying the performance of the controller.
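
    The economics of shifting thermal load toward cheap hours can be illustrated with a convex surrogate of the scheme (the actual cost function in the paper is non-convex; the storage model, prices and bounds below are toy assumptions):

    ```python
    import numpy as np
    from scipy.optimize import linprog

    H = 24
    price = 0.20 + 0.10 * np.sin(np.linspace(0.0, 2.0 * np.pi, H))  # EUR/kWh, toy
    demand = np.full(H, 2.0)        # kWh of cooling drawn from storage per hour
    E0, Emin, Emax, umax = 5.0, 0.0, 10.0, 6.0   # storage state and charge limits

    # Storage balance E[k] = E0 + cumsum(u - demand); keep Emin <= E[k] <= Emax.
    C = np.tril(np.ones((H, H)))    # cumulative-sum operator
    d_cum = C @ demand
    A_ub = np.vstack([C, -C])
    b_ub = np.hstack([Emax - E0 + d_cum, E0 - Emin - d_cum])

    res = linprog(price, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, umax)] * H)
    print(f"cost = {res.fun:.2f} EUR")
    print("charge profile:", np.round(res.x, 1))
    ```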

  1. Design of artificial neural networks using a genetic algorithm to predict collection efficiency in venturi scrubbers.

    Science.gov (United States)

    Taheri, Mahboobeh; Mohebbi, Ali

    2008-08-30

    In this study, a new approach for the auto-design of neural networks, based on a genetic algorithm (GA), has been used to predict collection efficiency in venturi scrubbers. The experimental input data, including particle diameter, throat gas velocity, liquid to gas flow rate ratio, throat hydraulic diameter, pressure drop across the venturi scrubber and collection efficiency as an output, have been used to create a GA-artificial neural network (ANN) model. The testing results from the model are in good agreement with the experimental data. Comparison of the results of the GA optimized ANN model with the results from the trial-and-error calibrated ANN model indicates that the GA-ANN model is more efficient. Finally, the effects of operating parameters such as liquid to gas flow rate ratio, throat gas velocity, and particle diameter on collection efficiency were determined.

  2. Comparison of three types of models for the prediction of final academic achievement

    Directory of Open Access Journals (Sweden)

    Silvana Gasar

    2002-12-01

    For efficient prevention of inappropriate secondary school choices, and thereby of academic failure, school counselors need a tool for predicting an individual pupil's final academic achievement. Using data mining techniques on a database of pupils together with expert modeling, we developed several models for the prediction of final academic achievement in an individual high school educational program. For data mining, we used statistical analyses, clustering and two machine learning methods: the development of classification decision trees and hierarchical decision models. Using the expert system shell DEX, an expert system based on a hierarchical multi-attribute decision model was developed manually. All the models were validated and evaluated from the viewpoint of their applicability. The predictive accuracy of the DEX models and the decision trees was equal and very satisfying, as it reached the predictive accuracy of an experienced counselor. In view of the efficiency of and difficulties in developing the models, and the relatively rapid changes in our education system, we propose that decision trees be used in the further development of predictive models.

  3. Model Predictive Control for Connected Hybrid Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Kaijiang Yu

    2015-01-01

    This paper presents a new model predictive control system for connected hybrid electric vehicles to improve fuel economy. The new features of this study are as follows. First, the battery charge and discharge profile and the driving velocity profile are simultaneously optimized: one optimization addresses the energy management of the HEV (the battery power Pbatt), while the other addresses minimizing the energy consumption of the adaptive cruise control of two vehicles. Second, a system for connected hybrid electric vehicles has been developed considering varying drag coefficients and road gradients. Third, the fuel model of a typical hybrid electric vehicle is developed using maps of the engine efficiency characteristics. Fourth, simulations and analysis (under different parameters, i.e., road conditions, vehicle state of charge, etc.) are conducted to verify the effectiveness of the method in achieving higher fuel efficiency. The model predictive control problem is solved using a numerical computation method: the continuation and generalized minimum residual (C/GMRES) method. Computer simulation results reveal improvements in fuel economy using the proposed control method.

  4. Erratum: Probabilistic application of a fugacity model to predict triclosan fate during wastewater treatment.

    Science.gov (United States)

    Bock, Michael; Lyndall, Jennifer; Barber, Timothy; Fuchsman, Phyllis; Perruchon, Elyse; Capdevielle, Marie

    2010-10-01

    The fate and partitioning of the antimicrobial compound, triclosan, in wastewater treatment plants (WWTPs) is evaluated using a probabilistic fugacity model to predict the range of triclosan concentrations in effluent and secondary biosolids. The WWTP model predicts 84% to 92% triclosan removal, which is within the range of measured removal efficiencies (typically 70% to 98%). Triclosan is predominantly removed by sorption and subsequent settling of organic particulates during primary treatment and by aerobic biodegradation during secondary treatment. Median modeled removal efficiency due to sorption is 40% for all treatment phases and 31% in the primary treatment phase. Median modeled removal efficiency due to biodegradation is 48% for all treatment phases and 44% in the secondary treatment phase. Important factors contributing to variation in predicted triclosan concentrations in effluent and biosolids include influent concentrations, solids concentrations in settling tanks, and factors related to solids retention time. Measured triclosan concentrations in biosolids and non-United States (US) effluent are consistent with model predictions. However, median concentrations in US effluent are over-predicted with this model, suggesting that differences in some aspect of treatment practices not incorporated in the model (e.g., disinfection methods) may affect triclosan removal from effluent. Model applications include predicting changes in environmental loadings associated with new triclosan applications and supporting risk analyses for biosolids-amended land and effluent receiving waters. © 2010 SETAC.

  5. ANNIT - An Efficient Inversion Algorithm based on Prediction Principles

    Science.gov (United States)

    Růžek, B.; Kolář, P.

    2009-04-01

    Solution of inverse problems represents a meaningful task in geophysics. The amount of data is continuously increasing, methods of modeling are being improved, and computing facilities are also making great technical progress. Therefore the development of new and efficient algorithms and computer codes for both forward and inverse modeling is still timely. ANNIT contributes to this stream, since it is a tool for the efficient solution of a set of non-linear equations. Typical geophysical problems are based on a parametric approach. The system is characterized by a vector of parameters p; the response of the system is characterized by a vector of data d. The forward problem is usually represented by a unique mapping F(p)=d. The inverse problem is much more complex: the inverse mapping p=G(d) is available in an analytical or closed form only exceptionally, and generally it may not exist at all. Technically, both the forward and inverse mappings F and G are sets of non-linear equations. ANNIT handles this situation as follows: (i) joint subspaces {pD, pM} of the original data and model spaces D and M are searched for, within which the forward mapping F is sufficiently smooth that the inverse mapping G does exist; (ii) a numerical approximation of G in the subspaces {pD, pM} is found; (iii) a candidate solution is predicted by using this numerical approximation. ANNIT works iteratively in cycles. The subspaces {pD, pM} are searched for by generating suitable populations of individuals (models) covering the data and model spaces. The approximation of the inverse mapping is made using three methods: (a) linear regression, (b) the Radial Basis Function Network technique, (c) linear prediction (also known as "Kriging"). The ANNIT algorithm also has a built-in archive of already evaluated models. Archive models are re-used in a suitable way, and thus the number of forward evaluations is minimized. ANNIT is now implemented both in MATLAB and SCILAB. Numerical tests show good performance.
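
    Step (ii), approximating the inverse mapping from archived (model, data) pairs, can be sketched with a radial basis function interpolator on a toy, globally invertible forward map (everything below is a stand-in for a real geophysical F):

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def forward(p):
        """Toy forward mapping F(p) = d, invertible on the sampled box."""
        return np.column_stack([p[:, 0] + p[:, 1], p[:, 0] - 0.5 * p[:, 1] ** 2])

    rng = np.random.default_rng(6)
    P = rng.uniform(0.5, 2.0, (200, 2))   # archive of already evaluated models
    D = forward(P)

    G = RBFInterpolator(D, P)             # local approximation of p = G(d)
    d_obs = np.array([[2.2, 0.895]])      # data generated by p = (1.3, 0.9)
    print("predicted candidate model:", G(d_obs))
    ```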

  6. TACD: a transportable ant colony discrimination model for corporate bankruptcy prediction

    Science.gov (United States)

    Lalbakhsh, Pooia; Chen, Yi-Ping Phoebe

    2017-05-01

    This paper presents a transportable ant colony discrimination strategy (TACD) to predict corporate bankruptcy, a topic of vital importance that is attracting increasing interest in the field of economics. The proposed algorithm uses financial ratios to build a binary prediction model for companies with the two statuses of bankrupt and non-bankrupt. The algorithm takes advantage of an improved version of continuous ant colony optimisation (CACO) at the core, which is used to create an accurate, simple and understandable linear model for discrimination. This also enables the algorithm to work with continuous values, leading to more efficient learning and adaption by avoiding data discretisation. We conduct a comprehensive performance evaluation on three real-world data sets under a stratified cross-validation strategy. In three different scenarios, TACD is compared with 11 other bankruptcy prediction strategies. We also discuss the efficiency of the attribute selection methods used in the experiments. In addition to its simplicity and understandability, statistical significance tests prove the efficiency of TACD against the other prediction algorithms in both measures of AUC and accuracy.

  7. A burnout prediction model based around char morphology

    Energy Technology Data Exchange (ETDEWEB)

    T. Wu; E. Lester; M. Cloke [University of Nottingham, Nottingham (United Kingdom). Nottingham Energy and Fuel Centre

    2005-07-01

    Poor burnout in a coal-fired power plant has marked penalties in the form of reduced energy efficiency and elevated waste material that cannot be utilized. The prediction of coal combustion behaviour in a furnace is of great significance in providing valuable information not only for process optimization but also for coal buyers in the international market. Coal combustion models have been developed that can make predictions about burnout behaviour and burnout potential. Most of these kinetic models require standard parameters such as volatile content, particle size and assumed char porosity in order to make a burnout prediction. This paper presents a new model called the Char Burnout Model (ChB) that also uses detailed information about char morphology in its prediction. The model can use data input from one of two sources, both derived from image analysis techniques: the first from individual analysis and characterization of real char types using an automated program, the second from predicted char types based on data collected during the automated image analysis of coal particles. Modelling results were compared with a different carbon burnout kinetic model and with burnout data from re-firing the chars in a drop tube furnace operating at 1300°C and 5% oxygen across several residence times. The improved agreement between the ChB model and the DTF experimental data shows that the inclusion of char morphology in combustion models can improve model predictions. 27 refs., 4 figs., 4 tabs.

  8. Model Predictive Control of Mineral Column Flotation Process

    Directory of Open Access Journals (Sweden)

    Yahui Tian

    2018-06-01

    Column flotation is an efficient method commonly used in the mineral industry to separate useful minerals from ores of low grade and complex mineral composition. Its main purpose is to achieve maximum recovery while ensuring the desired product grade. This work addresses a model predictive control design for a mineral column flotation process modeled by a set of nonlinear coupled heterodirectional hyperbolic partial differential equations (PDEs) and ordinary differential equations (ODEs), which accounts for the interconnection of well-stirred regions represented by continuous stirred tank reactors (CSTRs) and transport systems given by heterodirectional hyperbolic PDEs, with these two regions combined through the PDEs' boundaries. The model predictive control considers both optimality of the process operations and the naturally present input and state/output constraints. For the discrete controller design, spatially varying steady-state profiles are obtained by linearizing the coupled ODE-PDE model, and then the discrete system is obtained by using the Cayley-Tustin time discretization transformation without any spatial discretization and/or model reduction. The model predictive controller is designed by solving an optimization problem with input and state/output constraints as well as input disturbance to minimize the objective function, which leads to an online-solvable finite constrained quadratic regulator problem. Finally, the controller performance in keeping the output at the steady state within the constraint range is demonstrated by simulation studies, and it is concluded that the optimal control scheme presented in this work makes this flotation process more efficient.

  9. Prediction Models for Dynamic Demand Response

    Energy Technology Data Exchange (ETDEWEB)

    Aman, Saima; Frincu, Marc; Chelmis, Charalampos; Noor, Muhammad; Simmhan, Yogesh; Prasanna, Viktor K.

    2015-11-02

    As Smart Grids move closer to dynamic curtailment programs, Demand Response (DR) events will become necessary not only on fixed time intervals and weekdays predetermined by static policies, but also during changing decision periods and weekends to react to real-time demand signals. Unique challenges arise in this context vis-a-vis demand prediction and curtailment estimation and the transformation of such tasks into an automated, efficient dynamic demand response (D2R) process. While existing work has concentrated on increasing the accuracy of prediction models for DR, there is a lack of studies on prediction models for D2R, which we address in this paper. Our first contribution is the formal definition of D2R and the description of its challenges and requirements. Our second contribution is a feasibility analysis of very-short-term prediction of electricity consumption for D2R over a diverse, large-scale dataset that includes both small residential customers and large buildings. Our third and major contribution is a set of insights into the predictability of electricity consumption in the context of D2R. Specifically, we focus on prediction models that can operate at a very small data granularity (here 15-min intervals), for both weekdays and weekends - all conditions that characterize scenarios for D2R. We find that short-term time series and simple averaging models used by Independent Service Operators and utilities achieve superior prediction accuracy. We also observe that workdays are more predictable than weekends and holidays, and that smaller customers have large variation in consumption and are less predictable than larger buildings. Key implications of our findings are that better models are required for small customers and for non-workdays, both of which are critical for D2R. Also, the prediction models require just a few days' worth of data, indicating that small amounts of historical data suffice for accurate D2R prediction.
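
    A minimal version of the averaging baseline the authors find competitive is a time-of-week mean at 15-min granularity (the load series here is synthetic):

    ```python
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(7)
    idx = pd.date_range("2015-01-05", periods=96 * 28, freq="15min")   # 4 weeks
    load = pd.Series(20 + 5 * np.sin(2 * np.pi * idx.hour / 24)
                     + rng.normal(0, 1, len(idx)), index=idx)

    # One average per 15-min slot of the week (7 days x 96 slots).
    slot = np.asarray(idx.dayofweek * 96 + idx.hour * 4 + idx.minute // 15)
    avg = load.groupby(slot).mean()

    pred = avg.loc[slot].to_numpy()
    mape = np.mean(np.abs(load.to_numpy() - pred) / load.to_numpy()) * 100
    print(f"in-sample MAPE of the averaging model: {mape:.1f}%")
    ```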

  10. Combining Microbial Enzyme Kinetics Models with Light Use Efficiency Models to Predict CO2 and CH4 Ecosystem Exchange from Flooded and Drained Peatland Systems

    Science.gov (United States)

    Oikawa, P. Y.; Jenerette, D.; Knox, S. H.; Sturtevant, C. S.; Verfaillie, J. G.; Baldocchi, D. D.

    2014-12-01

    Under California's Cap-and-Trade program, companies are looking to invest in land-use practices that will reduce greenhouse gas (GHG) emissions. The Sacramento-San Joaquin River Delta is a drained, cultivated peatland system and a large source of CO2. To slow soil subsidence and reduce CO2 emissions, there is growing interest in converting drained peatlands to wetlands. However, wetlands are large sources of CH4 that could offset CO2-based GHG reductions. The goal of our research is to provide accurate measurements and model predictions of the changes in GHG budgets that occur when drained peatlands are restored to wetland conditions. We have installed a network of eddy covariance towers across multiple land-use types in the Delta and have been measuring CO2 and CH4 ecosystem exchange for multiple years. In order to upscale these measurements through space and time, we are using these data to parameterize and validate a process-based biogeochemical model. To predict gross primary productivity (GPP), we are using a simple light use efficiency (LUE) model, which requires estimates of light, leaf area index and air temperature and can explain 90% of the observed variation in GPP in a mature wetland. To predict ecosystem respiration we have adapted the Dual Arrhenius Michaelis-Menten (DAMM) model. The LUE-DAMM model allows accurate simulation of half-hourly net ecosystem exchange (NEE) in a mature wetland (r2=0.85). We are working to expand the model to pasture, rice and alfalfa systems in the Delta. To predict methanogenesis, we again apply a modified DAMM model, using simple enzyme kinetics. However, CH4 exchange is complex, and we have thus expanded the model to predict not only microbial CH4 production, but also CH4 oxidation, CH4 storage and the physical processes regulating the release of CH4 to the atmosphere. The CH4-DAMM model allows accurate simulation of daily CH4 ecosystem exchange in a mature wetland (r2=0.55) and robust estimates of annual CH4 budgets. The LUE-DAMM framework thus provides a basis for predicting the changes in GHG budgets that accompany wetland restoration in the Delta.
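
    The DAMM kernel the authors adapt combines an Arrhenius temperature response with Michaelis-Menten limitation by substrate and oxygen; a generic sketch (the parameter values are illustrative, not the calibrated Delta values):

    ```python
    import numpy as np

    def damm_rate(T_c, S, O2, alpha=1e8, Ea=62.0, km_s=0.1, km_o2=0.1):
        """Dual Arrhenius Michaelis-Menten rate:
        V = alpha * exp(-Ea / (R*T)) * S/(km_s + S) * O2/(km_o2 + O2)."""
        R = 8.314e-3                       # kJ mol-1 K-1
        vmax = alpha * np.exp(-Ea / (R * (T_c + 273.15)))
        return vmax * S / (km_s + S) * O2 / (km_o2 + O2)

    print(damm_rate(T_c=20.0, S=0.5, O2=0.2))   # relative units
    ```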

  11. Real-Time Optimization for Economic Model Predictive Control

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Edlund, Kristian; Frison, Gianluca

    2012-01-01

    In this paper, we develop an efficient homogeneous and self-dual interior-point method for the linear programs arising in economic model predictive control. To exploit structure in the optimization problems, the algorithm employs a highly specialized Riccati iteration procedure. Simulations show...

  12. An Application on Merton Model in the Non-efficient Market

    Science.gov (United States)

    Feng, Yanan; Xiao, Qingxian

    The Merton model is one of the famous credit risk models. This model presumes that the only source of uncertainty in equity prices is the firm's net asset value. But this market condition holds only when the market is efficient, which has often been ignored in modern research. Moreover, the original Merton model is based on the assumptions that, in the event of default, absolute priority holds, renegotiation is not permitted, and liquidation of the firm is costless; in addition, in the Merton model and most of its modified versions the default boundary is assumed to be constant, which does not correspond to reality. All of these factors can affect the predictive power of the model. In this paper, we have made some extensions to some of the assumptions underlying the original model. The model is essentially a modification of Merton's model. In a non-efficient market, we use stock data to analyze this model. The result shows that the modified model can evaluate credit risk well in a non-efficient market.
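
    For orientation, the unmodified Merton machinery reduces default risk to a distance-to-default computation; the constant default boundary D that the paper relaxes appears directly in the formula:

    ```python
    from math import log, sqrt
    from scipy.stats import norm

    def merton_default_prob(V, D, mu, sigma, T=1.0):
        """Physical default probability P(V_T < D) in the standard Merton model.
        V: asset value, D: (constant) default boundary, mu: asset drift,
        sigma: asset volatility, T: horizon in years."""
        d2 = (log(V / D) + (mu - 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
        return norm.cdf(-d2)

    print(merton_default_prob(V=120.0, D=100.0, mu=0.08, sigma=0.25))
    ```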

  13. Inferential ecosystem models, from network data to prediction

    Science.gov (United States)

    James S. Clark; Pankaj Agarwal; David M. Bell; Paul G. Flikkema; Alan Gelfand; Xuanlong Nguyen; Eric Ward; Jun Yang

    2011-01-01

    Recent developments suggest that predictive modeling could begin to play a larger role not only for data analysis, but also for data collection. We address the example of efficient wireless sensor networks, where inferential ecosystem models can be used to weigh the value of an observation against the cost of data collection. Transmission costs make observations...

  14. Standardizing the performance evaluation of short-term wind prediction models

    DEFF Research Database (Denmark)

    Madsen, Henrik; Pinson, Pierre; Kariniotakis, G.

    2005-01-01

    Short-term wind power prediction is a primary requirement for efficient large-scale integration of wind generation in power systems and electricity markets. The choice of an appropriate prediction model among the numerous available models is not trivial, and has to be based on an objective evaluation of model performance. This paper proposes a standardized protocol for the evaluation of short-term wind-power prediction systems. A number of reference prediction models are also described, and their use for performance comparison is analysed. The use of the protocol is demonstrated using results from both on-shore and off-shore wind farms. The work was developed in the frame of the Anemos project (EU R&D project), where the protocol has been used to evaluate more than 10 prediction systems.
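
    The simplest of the reference models used in such evaluation protocols is persistence; comparing a candidate forecast against it (or against a climatological mean) on normalized errors is the essence of the comparison. A toy sketch:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    # Toy normalized wind power series (fraction of installed capacity).
    p = np.clip(0.4 + np.cumsum(rng.normal(0, 0.03, 500)), 0.0, 1.0)

    persistence = p[:-1]                          # p_hat(t+1) = p(t)
    climatology = np.full(len(p) - 1, p.mean())   # p_hat(t+1) = long-term mean

    for name, f in (("persistence", persistence), ("climatology", climatology)):
        nmae = np.mean(np.abs(p[1:] - f))         # MAE normalized by capacity
        print(f"{name:12s} NMAE = {nmae:.3f}")
    ```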

  15. Worm gear efficiency model considering misalignment in electric power steering systems

    Directory of Open Access Journals (Sweden)

    S. H. Kim

    2018-05-01

    This study proposes a worm gear efficiency model considering misalignment in electric power steering systems. A worm gear is used in Column-type Electric Power Steering (C-EPS) systems, and an Anti-Rattle Spring (ARS) is employed in C-EPS systems in order to prevent rattling when the vehicle goes over a bumpy road. The ARS prevents rattling by applying preload to one end of the worm shaft, but it also generates undesirable friction by causing misalignment of the worm shaft. In order to propose the worm gear efficiency model considering misalignment, geometrical and tribological analyses were performed in this study. For the geometrical analysis, the normal load on the gear teeth was calculated using the output torque, the pitch diameter of the worm wheel, the lead angle and the normal pressure angle, and this normal load was converted to normal pressure at the contact point. Contact points between the tooth flanks of the worm and worm wheel were obtained by mathematically analyzing the geometry, and Hertz's theory was employed in order to calculate the contact area at the contact point. Finally, misalignment caused by the ARS was also incorporated into the geometry. Friction coefficients between the tooth flanks were also investigated in this study. A pin-on-disk type tribometer was set up to measure friction coefficients, and friction coefficients at all conditions were measured with the tribometer. In order to validate the worm gear efficiency model, a worm gear was prepared and its efficiency was predicted by the model. As the final step of the study, a worm gear efficiency measurement system was set up, the efficiency of the worm gear was measured, and the results were compared with the predictions. The efficiency model considering misalignment gives more accurate results than the model without misalignment.
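
    The record does not give the model's equations, but the textbook worm-drive relation shows how friction (which misalignment increases) enters the efficiency:

    ```python
    import numpy as np

    def worm_efficiency(lead_angle_deg, mu, pressure_angle_deg=20.0):
        """Classical worm-drive efficiency with the worm driving:
        eta = tan(lambda) / tan(lambda + rho), rho = atan(mu / cos(alpha_n)).
        This is the standard textbook relation, not the paper's misalignment
        model; angles in degrees, mu is the tooth-flank friction coefficient."""
        lam = np.radians(lead_angle_deg)
        rho = np.arctan(mu / np.cos(np.radians(pressure_angle_deg)))
        return np.tan(lam) / np.tan(lam + rho)

    for mu in (0.03, 0.05, 0.08):   # higher friction, e.g. from ARS misalignment
        print(f"mu = {mu:.2f} -> efficiency = {worm_efficiency(10.0, mu):.3f}")
    ```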

  16. Efficient prediction of ground noise from helicopters and parametric studies based on acoustic mapping

    Directory of Open Access Journals (Sweden)

    Fei WANG

    2018-02-01

    Based on acoustic mapping, a prediction model for the ground noise radiated from an in-flight helicopter is established. To enhance calculation efficiency, a high-efficiency second-level acoustic radiation model capable of taking the influence of atmospheric absorption on noise into account is first developed by combining the point-source idea with the rotor noise radiation characteristics. A comparison between the present model and the direct computation method of noise is carried out, and the high efficiency of the model is validated. The rotor free-wake analysis method and the Ffowcs Williams-Hawkings (FW-H) equation are applied to the aerodynamics and noise prediction in the present model. Secondly, a database of noise spheres with the characteristic parameters of advance ratio and tip-path-plane angle is established by the helicopter trim model together with a parametric modeling approach. Furthermore, based on acoustic mapping, a method for rapid simulation of the ground noise radiated from an in-flight helicopter is developed. The noise footprint for the AH-1 rotor is then calculated, and the influence of parameters including advance ratio and flight path angle on ground noise is analyzed in depth using the developed model. The results suggest that with the increase of advance ratio and flight path angle, the peak noise levels on the ground first increase and then decrease; in the meantime, the maximum Sound Exposure Level (SEL) noise on the ground shifts toward the advancing side of the rotor. Besides, through the analysis of the effects of longitudinal forces on miss-distance and rotor Blade-Vortex Interaction (BVI) noise in descent flight, some meaningful results for reducing the BVI noise on the ground are obtained. Keywords: Acoustic mapping, Helicopter, Noise footprint, Rotor noise, Second-level acoustic radiation model

  17. An Intelligent Model for Stock Market Prediction

    Directory of Open Access Journals (Sweden)

    Ibrahim M. Hamed

    2012-08-01

    This paper presents an intelligent model for stock market signal prediction using Multi-Layer Perceptron (MLP) Artificial Neural Networks (ANN). A blind source separation technique from signal processing is integrated with the learning phase of the constructed baseline MLP ANN to overcome the problems of prediction accuracy and lack of generalization. Kullback-Leibler Divergence (KLD) is used as a learning algorithm because it converges fast and provides generalization in the learning mechanism. Both the accuracy and the efficiency of the proposed model were confirmed using Microsoft stock from the Wall Street market and various data sets from different sectors of the Egyptian stock market. In addition, sensitivity analysis was conducted on the various parameters of the model to ensure coverage of the generalization issue. Finally, statistical significance was examined using an ANOVA test.

  18. Prediction of strontium bromide laser efficiency using cluster and decision tree analysis

    Directory of Open Access Journals (Sweden)

    Iliev Iliycho

    2018-01-01

    The subject of investigation is a new high-powered strontium bromide (SrBr2) vapor laser emitting in a multiline region of wavelengths. The laser is an alternative to atomic strontium lasers and free-electron lasers, especially at the 6.45 μm line, which is used in surgery for medical processing of biological tissues and bones with minimal damage. In this paper the experimental data from measurements of operational and output characteristics of the laser are statistically processed by means of cluster analysis and tree-based regression techniques. The aim is to extract from the available data the more important relationships and dependences which influence the increase of the overall laser efficiency. A set of cluster models is constructed and analyzed. It is shown, using different cluster methods, that the seven investigated operational characteristics (laser tube diameter, length, supplied electrical power, and others) and the laser efficiency are combined in two clusters. The regression tree models built using the Classification and Regression Trees (CART) technique yield dependences for predicting the values of efficiency, and especially the maximum efficiency, with over 95% accuracy.
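
    A regression tree of the kind produced by CART is easy to reproduce; the operating records below are synthetic placeholders for the laser database:

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(3)
    # Hypothetical records: [tube diameter (mm), active length (cm),
    # supplied electrical power (kW)] -> laser efficiency (%).
    X = rng.uniform([20.0, 40.0, 1.0], [60.0, 120.0, 5.0], size=(80, 3))
    y = 0.005 * X[:, 0] + 0.002 * X[:, 1] + 0.08 * X[:, 2] + rng.normal(0, 0.02, 80)

    tree = DecisionTreeRegressor(max_depth=3).fit(X, y)
    print(tree.predict([[45.0, 90.0, 3.0]]))   # predicted efficiency, new setting
    ```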

  19. Specialization does not predict individual efficiency in an ant.

    Directory of Open Access Journals (Sweden)

    Anna Dornhaus

    2008-11-01

    The ecological success of social insects is often attributed to an increase in efficiency achieved through division of labor between workers in a colony. Much research has therefore focused on the mechanism by which a division of labor is implemented, i.e., on how tasks are allocated to workers. However, the important assumption that specialists are indeed more efficient at their work than generalist individuals--the "Jack-of-all-trades is master of none" hypothesis--has rarely been tested. Here, I quantify worker efficiency, measured as work completed per time, in four different tasks in the ant Temnothorax albipennis: honey and protein foraging, collection of nest-building material, and brood transports in a colony emigration. I show that individual efficiency is not predicted by how specialized workers were on the respective task. Worker efficiency is also not consistently predicted by that worker's overall activity or delay to begin the task. Even when only the worker's rank relative to nestmates in the same colony was used, specialization did not predict efficiency in three out of the four tasks, and more specialized workers actually performed worse than others in the fourth task (collection of sand grains). I also show that the above relationships, as well as median individual efficiency, do not change with colony size. My results demonstrate that in an ant species without morphologically differentiated worker castes, workers may nevertheless differ in their ability to perform different tasks. Surprisingly, this variation is not utilized by the colony--worker allocation to tasks is unrelated to their ability to perform them. What, then, are the adaptive benefits of behavioral specialization, and why do workers choose tasks without regard for whether they can perform them well? We are still far from an understanding of the adaptive benefits of division of labor in social insects.

  20. Efficient polarimetric BRDF model.

    Science.gov (United States)

    Renhorn, Ingmar G E; Hallberg, Tomas; Boreman, Glenn D

    2015-11-30

    The purpose of the present manuscript is to present a polarimetric bidirectional reflectance distribution function (BRDF) model suitable for hyperspectral and polarimetric signature modelling. The model is based on a further development of a previously published four-parameter model that has been generalized in order to account for different types of surface structures (generalized Gaussian distribution). A generalization of the Lambertian diffuse model is presented. The pBRDF-functions are normalized using numerical integration. Using directional-hemispherical reflectance (DHR) measurements, three of the four basic parameters can be determined for any wavelength. This considerably simplifies the development of multispectral polarimetric BRDF applications. The scattering parameter has to be determined from at least one BRDF measurement. The model deals with linearly polarized radiation; as in, e.g., the facet model, depolarization is not included. The model is very general and can inherently model extreme surfaces such as mirrors and Lambertian surfaces. The complex mixture of sources is described by the sum of two basic models, a generalized Gaussian/Fresnel model and a generalized Lambertian model. Although the physics-inspired model has some ad hoc features, the predictive power of the model is impressive over a wide range of angles and scattering magnitudes. The model has been applied successfully to painted surfaces, both dull and glossy, and also to metallic bead-blasted surfaces. The simple and efficient model should be attractive for polarimetric simulations and polarimetric remote sensing.

  1. A polynomial chaos ensemble hydrologic prediction system for efficient parameter inference and robust uncertainty assessment

    Science.gov (United States)

    Wang, S.; Huang, G. H.; Baetz, B. W.; Huang, W.

    2015-11-01

    This paper presents a polynomial chaos ensemble hydrologic prediction system (PCEHPS) for an efficient and robust uncertainty assessment of model parameters and predictions, in which possibilistic reasoning is infused into probabilistic parameter inference with simultaneous consideration of randomness and fuzziness. The PCEHPS is developed through a two-stage factorial polynomial chaos expansion (PCE) framework, which consists of an ensemble of PCEs to approximate the behavior of the hydrologic model, significantly speeding up the exhaustive sampling of the parameter space. Multiple hypothesis testing is then conducted to construct an ensemble of reduced-dimensionality PCEs with only the most influential terms, which is meaningful for achieving uncertainty reduction and further accelerating parameter inference. The PCEHPS is applied to the Xiangxi River watershed in China to demonstrate its validity and applicability. A detailed comparison between the HYMOD hydrologic model, the ensemble of PCEs, and the ensemble of reduced PCEs is performed in terms of accuracy and efficiency. Results reveal temporal and spatial variations in parameter sensitivities due to the dynamic behavior of hydrologic systems, and the effects (magnitude and direction) of parametric interactions depending on different hydrological metrics. The case study demonstrates that the PCEHPS is capable not only of capturing both expert knowledge and probabilistic information in the calibration process, but also of achieving a speed-up of more than 10 times over the hydrologic model without compromising predictive accuracy.
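
The core speed-up mechanism, replacing expensive model runs with a polynomial chaos surrogate that can then be sampled exhaustively, can be sketched in one dimension with Hermite polynomials (the orthogonal basis for Gaussian parameters). The "model" below is a toy stand-in, not HYMOD, and the degree and sample counts are arbitrary.

```python
# 1-D toy of the PCE-surrogate idea: fit a Hermite expansion to a few expensive
# model runs, then Monte Carlo sample the cheap surrogate instead.
import numpy as np
from numpy.polynomial import hermite_e as He

def expensive_model(theta):               # placeholder for a hydrologic model run
    return np.exp(0.3 * theta) + 0.1 * theta ** 2

rng = np.random.default_rng(2)
theta_train = rng.normal(size=40)         # sparse runs of the "real" model
y_train = expensive_model(theta_train)

coef = He.hermefit(theta_train, y_train, deg=4)   # least-squares PCE coefficients

theta_mc = rng.normal(size=100_000)       # exhaustive sampling is now cheap
y_surr = He.hermeval(theta_mc, coef)
print("surrogate mean/std:", y_surr.mean().round(3), y_surr.std().round(3))
# near-zero coefficients flag terms to drop in a reduced-dimensionality PCE
print("PCE coefficients:", np.round(coef, 3))
```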

  2. Efficient Actor-Critic Algorithm with Hierarchical Model Learning and Planning

    Science.gov (United States)

    Fu, QiMing

    2016-01-01

    To improve the convergence rate and the sample efficiency, two efficient learning methods, AC-HMLP and RAC-HMLP (AC-HMLP with ℓ2-regularization), are proposed by combining the actor-critic algorithm with hierarchical model learning and planning. The hierarchical models consist of a local and a global model, which are learned simultaneously with the value function and the policy and are approximated by local linear regression (LLR) and linear function approximation (LFA), respectively. Both the local model and the global model are applied to generate samples for planning; the former is used only if the state-prediction error does not surpass the threshold at each time step, while the latter is utilized at the end of each episode. The purpose of taking both models is to improve the sample efficiency and accelerate the convergence rate of the whole algorithm through fully utilizing the local and global information. Experimentally, AC-HMLP and RAC-HMLP are compared with three representative algorithms on two Reinforcement Learning (RL) benchmark problems. The results demonstrate that they perform best in terms of convergence rate and sample efficiency. PMID:27795704

  3. SHMF: Interest Prediction Model with Social Hub Matrix Factorization

    Directory of Open Access Journals (Sweden)

    Chaoyuan Cui

    2017-01-01

    Full Text Available With the development of social networks, the microblog has become a major social communication tool. Microblogs carry a lot of valuable information, such as personal preferences, public opinion, and marketing signals. Consequently, research on user interest prediction in microblogs has practical significance. However, extracting information associated with user interest orientation from constantly updated posts is not easy. Existing prediction approaches based on probabilistic factor analysis use posts published by a user to predict that user's interest. These methods are not very effective, however, for users who post little but browse a lot. In this paper, we propose a new prediction model, called SHMF, using social hub matrix factorization. SHMF constructs the interest prediction model by combining the information of posts published by both the user and the direct neighbors in the user’s social hub. Our proposed model predicts user interest by integrating the user’s historical behavior and temporal factors as well as the user’s friendships, thus achieving accurate forecasts of the user’s future interests. The experimental results on Sina Weibo show the efficiency and effectiveness of our proposed model.

  4. Bayesian based Prognostic Model for Predictive Maintenance of Offshore Wind Farms

    DEFF Research Database (Denmark)

    Asgarpour, Masoud; Sørensen, John Dalsgaard

    2018-01-01

    The operation and maintenance costs of offshore wind farms can be significantly reduced if existing corrective actions are performed as efficiently as possible and if future corrective actions are avoided by performing sufficient preventive actions. In this paper a prognostic model for degradation monitoring, fault prediction and predictive maintenance of offshore wind components is defined. The diagnostic model defined in this paper is based on degradation, remaining useful lifetime and hybrid inspection threshold models. The defined degradation model is based on an exponential distribution...

  6. Bootstrap prediction and Bayesian prediction under misspecified models

    OpenAIRE

    Fushiki, Tadayoshi

    2005-01-01

    We consider a statistical prediction problem under misspecified models. In a sense, Bayesian prediction is an optimal prediction method when an assumed model is true. Bootstrap prediction is obtained by applying Breiman's `bagging' method to a plug-in prediction. Bootstrap prediction can be considered to be an approximation to the Bayesian prediction under the assumption that the model is true. However, in applications, there are frequently deviations from the assumed model. In this paper, bo...

  7. Efficient Prediction of Progesterone Receptor Interactome Using a Support Vector Machine Model

    Directory of Open Access Journals (Sweden)

    Ji-Long Liu

    2015-03-01

    Full Text Available Protein-protein interaction (PPI) is essential for almost all cellular processes and identification of PPI is a crucial task for biomedical researchers. So far, most computational studies of PPI are intended for pair-wise prediction. Theoretically, predicting protein partners for a single protein is likely a simpler problem. Given enough data for a particular protein, the results can be more accurate than those of general PPI predictors. In the present study, we assessed the potential of using the support vector machine (SVM) model with selected features centered on a particular protein for PPI prediction. As a proof-of-concept study, we applied this method to identify the interactome of progesterone receptor (PR), a protein which is essential for coordinating female reproduction in mammals by mediating the actions of ovarian progesterone. We achieved an accuracy of 91.9%, sensitivity of 92.8% and specificity of 91.2%. Our method is generally applicable to any other protein and therefore may be of help in guiding biomedical experiments.
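
A proof-of-concept of this classification setup, an SVM deciding whether a candidate protein interacts with one fixed target, might look like the sketch below. Features and labels are randomly generated placeholders rather than real progesterone receptor data; only the reported metrics mirror the paper's evaluation quantities.

```python
# Hedged sketch: SVM predicting interaction (1) vs no interaction (0) with a
# single fixed target protein; data are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 20))            # e.g. sequence/annotation features
y = (X[:, :3].sum(axis=1) + rng.normal(0, 0.5, 300) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print("accuracy   :", round((tp + tn) / (tp + tn + fp + fn), 3))
print("sensitivity:", round(tp / (tp + fn), 3))
print("specificity:", round(tn / (tn + fp), 3))
```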

  8. A Novel Modelling Approach for Predicting Forest Growth and Yield under Climate Change.

    Directory of Open Access Journals (Sweden)

    M Irfan Ashraf

    Full Text Available Global climate is changing due to increasing anthropogenic emissions of greenhouse gases. Forest managers need growth and yield models that can be used to predict future forest dynamics during the transition period of present-day forests under a changing climatic regime. In this study, we developed a forest growth and yield model that can be used to predict individual-tree growth under current and projected future climatic conditions. The model was constructed by integrating historical tree growth records with predictions from an ecological process-based model using neural networks. The new model predicts basal area (BA) and volume growth for individual trees in pure or mixed species forests. For model development, tree-growth data under current climatic conditions were obtained using over 3000 permanent sample plots from the Province of Nova Scotia, Canada. Data to reflect tree growth under a changing climatic regime were projected with JABOWA-3 (an ecological process-based model). Model validation with designated data produced model efficiencies of 0.82 and 0.89 in predicting individual-tree BA and volume growth. Model efficiency is a relative index of model performance, where 1 indicates an ideal fit, while values lower than zero mean the predictions are no better than the average of the observations. Overall mean prediction error (BIAS) of basal area and volume growth predictions was nominal (i.e., for BA: -0.0177 cm² 5-year⁻¹ and volume: 0.0008 m³ 5-year⁻¹). Model variability described by root mean squared error (RMSE) was 40.53 cm² 5-year⁻¹ in basal area prediction and 0.0393 m³ 5-year⁻¹ in volume prediction. The new modelling approach has potential to reduce uncertainties in growth and yield predictions under different climate change scenarios. This novel approach provides an avenue for forest managers to generate required information for the management of forests in transitional periods of climate change. Artificial intelligence

  9. Predictive information speeds up visual awareness in an individuation task by modulating threshold setting, not processing efficiency.

    Science.gov (United States)

    De Loof, Esther; Van Opstal, Filip; Verguts, Tom

    2016-04-01

    Theories on visual awareness claim that predicted stimuli reach awareness faster than unpredicted ones. In the current study, we disentangle whether prior information about the upcoming stimulus affects visual awareness of stimulus location (i.e., individuation) by modulating processing efficiency or threshold setting. Analogous research on stimulus identification revealed that prior information modulates threshold setting. However, as identification and individuation are two functionally and neurally distinct processes, the mechanisms underlying identification cannot simply be extrapolated directly to individuation. The goal of this study was therefore to investigate how individuation is influenced by prior information about the upcoming stimulus. To do so, a drift diffusion model was fitted to estimate the processing efficiency and threshold setting for predicted versus unpredicted stimuli in a cued individuation paradigm. Participants were asked to locate a picture, following a cue that was congruent, incongruent or neutral with respect to the picture's identity. Pictures were individuated faster in the congruent and neutral condition compared to the incongruent condition. In the diffusion model analysis, the processing efficiency was not significantly different across conditions. However, the threshold setting was significantly higher following an incongruent cue compared to both congruent and neutral cues. Our results indicate that predictive information about the upcoming stimulus influences visual awareness by shifting the threshold for individuation rather than by enhancing processing efficiency. Copyright © 2016 Elsevier Ltd. All rights reserved.
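
The distinction the authors draw can be made concrete with a simulation: in a drift diffusion model, prior information could either raise the drift rate v (processing efficiency) or the decision boundary a (threshold setting). The sketch below, with arbitrary parameter values, shows how raising the boundary alone lengthens decision times, the pattern reported for incongruent cues.

```python
# Toy drift diffusion simulation; parameters are arbitrary, not fitted values.
import numpy as np

def mean_decision_time(v, a, n_trials=5000, dt=0.001, noise=1.0, seed=4):
    rng = np.random.default_rng(seed)
    x = np.zeros(n_trials)                 # accumulated evidence per trial
    t = np.zeros(n_trials)
    active = np.ones(n_trials, dtype=bool)
    while active.any():                    # accumulate until a boundary +/- a is hit
        x[active] += v * dt + noise * np.sqrt(dt) * rng.normal(size=active.sum())
        t[active] += dt
        active &= np.abs(x) < a
    return t.mean()

print(f"baseline (v=1.5, a=1.0): {mean_decision_time(1.5, 1.0):.3f} s")
print(f"raised threshold  a=1.4: {mean_decision_time(1.5, 1.4):.3f} s")
```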

  10. A new computationally-efficient two-dimensional model for boron implantation into single-crystal silicon

    International Nuclear Information System (INIS)

    Klein, K.M.; Park, C.; Yang, S.; Morris, S.; Do, V.; Tasch, F.

    1992-01-01

    We have developed a new computationally-efficient two-dimensional model for boron implantation into single-crystal silicon. This paper reports that this new model is based on the dual Pearson semi-empirical implant depth profile model and the UT-MARLOWE Monte Carlo boron ion implantation model. This new model can predict with very high computational efficiency two-dimensional as-implanted boron profiles as a function of energy, dose, tilt angle, rotation angle, masking edge orientation, and masking edge thickness

  11. Energy efficiency optimisation for distillation column using artificial neural network models

    International Nuclear Information System (INIS)

    Osuolale, Funmilayo N.; Zhang, Jie

    2016-01-01

    This paper presents a neural network based strategy for the modelling and optimisation of energy efficiency in distillation columns incorporating the second law of thermodynamics. Real-time optimisation of distillation columns based on mechanistic models is often infeasible due to the effort in model development and the large computation effort associated with mechanistic model computation. This issue can be addressed by using neural network models which can be quickly developed from process operation data. The computation time in neural network model evaluation is very short, making them ideal for real-time optimisation. Bootstrap aggregated neural networks are used in this study for enhanced model accuracy and reliability. Aspen HYSYS is used for the simulation of the distillation systems. Neural network models for exergy efficiency and product compositions are developed from simulated process operation data and are used to maximise exergy efficiency while satisfying product quality constraints. Applications to binary systems of methanol-water and benzene-toluene separations culminate in reductions of utility consumption of 8.2% and 28.2%, respectively. Application to multi-component separation columns also demonstrates the effectiveness of the proposed method, with a 32.4% improvement in the exergy efficiency. - Highlights: • Neural networks can accurately model exergy efficiency in distillation columns. • Bootstrap aggregated neural network offers improved model prediction accuracy. • Improved exergy efficiency is obtained through model based optimisation. • Reductions of utility consumption by 8.2% and 28.2% were achieved for binary systems. • The exergy efficiency for multi-component distillation is increased by 32.4%.
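
The bootstrap-aggregated neural network component can be sketched as follows: several MLPs are trained on bootstrap resamples of the operating data and their predictions are averaged, with the spread serving as a rough reliability indicator. Inputs and the "exergy efficiency" response here are synthetic placeholders, not Aspen HYSYS simulation data.

```python
# Hedged sketch of bootstrap-aggregated neural networks (bagged MLPs).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
X = rng.uniform(size=(400, 5))            # e.g. reflux ratio, feed rate, ... (hypothetical)
y = 0.6 * X[:, 0] - 0.2 * X[:, 1] ** 2 + 0.05 * rng.normal(size=400)

ensemble = []
for seed in range(10):                    # 10 bootstrap replicates
    idx = rng.integers(0, len(X), size=len(X))   # resample with replacement
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=seed)
    ensemble.append(net.fit(X[idx], y[idx]))

x_new = np.array([[0.8, 0.3, 0.5, 0.5, 0.5]])
preds = np.array([net.predict(x_new)[0] for net in ensemble])
print(f"aggregated prediction: {preds.mean():.3f} +/- {preds.std():.3f}")
```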

  12. Risk assessment and remedial policy evaluation using predictive modeling

    International Nuclear Information System (INIS)

    Linkov, L.; Schell, W.R.

    1996-01-01

    As a result of nuclear industry operation and accidents, large areas of natural ecosystems have been contaminated by radionuclides and toxic metals. Extensive societal pressure has been exerted to decrease the radiation dose to the population and to the environment. Thus, in making abatement and remediation policy decisions, not only economic costs but also human and environmental risk assessments are desired. This paper introduces a general framework for risk assessment and remedial policy evaluation using predictive modeling. Ecological risk assessment requires evaluation of the radionuclide distribution in ecosystems. The FORESTPATH model is used for predicting the radionuclide fate in forest compartments after deposition as well as for evaluating the efficiency of remedial policies. The time of intervention and the radionuclide deposition profile were predicted to be crucial for remediation efficiency. A risk assessment conducted for a critical group of forest users in Belarus shows that consumption of forest products (berries and mushrooms) leads to about a 0.004% risk of fatal cancer annually. A cost-benefit analysis for forest cleanup suggests that complete removal of the organic layer is too expensive for application in Belarus and that a better methodology is required. In conclusion, the FORESTPATH modeling framework could have wide applications in the environmental remediation of radionuclides and toxic metals, as well as in dose reconstruction and risk assessment.

  13. Predictive Modelling and Time: An Experiment in Temporal Archaeological Predictive Models

    OpenAIRE

    David Ebert

    2006-01-01

    One of the most common criticisms of archaeological predictive modelling is that it fails to account for temporal or functional differences in sites. However, a practical solution to temporal or functional predictive modelling has proven to be elusive. This article discusses temporal predictive modelling, focusing on the difficulties of employing temporal variables, then introduces and tests a simple methodology for the implementation of temporal modelling. The temporal models thus created ar...

  14. Predictive modeling of coupled multi-physics systems: I. Theory

    International Nuclear Information System (INIS)

    Cacuci, Dan Gabriel

    2014-01-01

    Highlights: • We developed “predictive modeling of coupled multi-physics systems (PMCMPS)”. • PMCMPS reduces uncertainties in predicted model responses and parameters. • PMCMPS treats very large coupled systems efficiently. - Abstract: This work presents an innovative mathematical methodology for “predictive modeling of coupled multi-physics systems (PMCMPS).” This methodology takes fully into account the coupling terms between the systems but requires only the computational resources that would be needed to perform predictive modeling on each system separately. The PMCMPS methodology uses the maximum entropy principle to construct an optimal approximation of the unknown a priori distribution based on a priori known mean values and uncertainties characterizing the parameters and responses of both multi-physics models. This “maximum entropy” approximate a priori distribution is combined, using Bayes’ theorem, with the “likelihood” provided by the multi-physics simulation models. Subsequently, the posterior distribution thus obtained is evaluated using the saddle-point method to obtain analytical expressions for the optimally predicted values of the multi-physics models’ parameters and responses, along with correspondingly reduced uncertainties. Notably, the predictive modeling methodology for the coupled systems is constructed such that the systems can be considered sequentially rather than simultaneously, while preserving exactly the same results as if the systems were treated simultaneously. Consequently, very large coupled systems, which could perhaps exceed available computational resources if treated simultaneously, can be treated with the PMCMPS methodology presented in this work sequentially and without any loss of generality or information, requiring just the resources that would be needed if the systems were treated sequentially.

  15. Predictable quantum efficient detector based on n-type silicon photodiodes

    Science.gov (United States)

    Dönsberg, Timo; Manoocheri, Farshid; Sildoja, Meelis; Juntunen, Mikko; Savin, Hele; Tuovinen, Esa; Ronkainen, Hannu; Prunnila, Mika; Merimaa, Mikko; Tang, Chi Kwong; Gran, Jarle; Müller, Ingmar; Werner, Lutz; Rougié, Bernard; Pons, Alicia; Smîd, Marek; Gál, Péter; Lolli, Lapo; Brida, Giorgio; Rastello, Maria Luisa; Ikonen, Erkki

    2017-12-01

    The predictable quantum efficient detector (PQED) consists of two custom-made induced junction photodiodes that are mounted in a wedged trap configuration for the reduction of reflectance losses. Until now, all manufactured PQED photodiodes have been based on a structure where a SiO2 layer is thermally grown on top of p-type silicon substrate. In this paper, we present the design, manufacturing, modelling and characterization of a new type of PQED, where the photodiodes have an Al2O3 layer on top of n-type silicon substrate. Atomic layer deposition is used to deposit the layer to the desired thickness. Two sets of photodiodes with varying oxide thicknesses and substrate doping concentrations were fabricated. In order to predict recombination losses of charge carriers, a 3D model of the photodiode was built into Cogenda Genius semiconductor simulation software. It is important to note that a novel experimental method was developed to obtain values for the 3D model parameters. This makes the prediction of the PQED responsivity a completely autonomous process. Detectors were characterized for temperature dependence of dark current, spatial uniformity of responsivity, reflectance, linearity and absolute responsivity at the wavelengths of 488 nm and 532 nm. For both sets of photodiodes, the modelled and measured responsivities were generally in agreement within the measurement and modelling uncertainties of around 100 parts per million (ppm). There is, however, an indication that the modelled internal quantum deficiency may be underestimated by a similar amount. Moreover, the responsivities of the detectors were spatially uniform within 30 ppm peak-to-peak variation. The results obtained in this research indicate that the n-type induced junction photodiode is a very promising alternative to the existing p-type detectors, and thus give additional credibility to the concept of modelled quantum detector serving as a primary standard. Furthermore, the manufacturing of

  16. Efficiency model of Russian banks

    OpenAIRE

    Pavlyuk, Dmitry

    2006-01-01

    The article deals with problems related to the stochastic frontier model of bank efficiency measurement. The model is used to study the efficiency of the banking sector of The Russian Federation. It is based on the stochastic approach both to the efficiency frontier location and to individual bank efficiency values. The model allows estimating bank efficiency values, finding relations with different macro- and microeconomic factors and testing some economic hypotheses.

  17. Birth weight predicted baseline muscular efficiency, but not response of energy expenditure to calorie restriction: An empirical test of the predictive adaptive response hypothesis.

    Science.gov (United States)

    Workman, Megan; Baker, Jack; Lancaster, Jane B; Mermier, Christine; Alcock, Joe

    2016-07-01

    Aiming to test the evolutionary significance of relationships linking prenatal growth conditions to adult phenotypes, this study examined whether birth size predicts energetic savings during fasting. We specifically tested a Predictive Adaptive Response (PAR) model that predicts greater energetic saving among adults who were born small. Data were collected from a convenience sample of young adults living in Albuquerque, NM (n = 34). Indirect calorimetry quantified changes in resting energy expenditure (REE) and active muscular efficiency that occurred in response to a 29-h fast. Multiple regression analyses linked birth weight to baseline and postfast metabolic values while controlling for appropriate confounders (e.g., sex, body mass). Birth weight did not moderate the relationship between body size and energy expenditure, nor did it predict the magnitude change in REE or muscular efficiency observed from baseline to after fasting. Alternative indicators of birth size were also examined (e.g., low v. normal birth weight, comparison of tertiles), with no effects found. However, baseline muscular efficiency improved by 1.1% per 725 g (S.D.) increase in birth weight (P = 0.037). Birth size did not influence the sensitivity of metabolic demands to fasting-neither at rest nor during activity. Moreover, small birth size predicted a reduction in the efficiency with which muscles convert energy expended into work accomplished. These results do not support the ascription of adaptive function to phenotypes associated with small birth size. © 2015 Wiley Periodicals, Inc. Am. J. Hum. Biol. 28:484-492, 2016.

  18. Investigating market efficiency through a forecasting model based on differential equations

    Science.gov (United States)

    de Resende, Charlene C.; Pereira, Adriano C. M.; Cardoso, Rodrigo T. N.; de Magalhães, A. R. Bosco

    2017-05-01

    A new differential equation based model for stock price trend forecast is proposed as a tool to investigate efficiency in an emerging market. Its predictive power was shown statistically to be higher than that of a completely random model, signaling the presence of arbitrage opportunities. Conditions under which accuracy is enhanced are investigated, and application of the model as part of a trading strategy is discussed.
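
As an illustration of the general approach, one might posit logistic price dynamics dp/dt = r p (1 - p/K), fit the parameters over a past window, and extrapolate the trend. The series below is synthetic and the logistic form is only an example; the paper's actual differential equation may differ.

```python
# Toy ODE-based trend forecast: fit a logistic solution to noisy prices.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, p0, r, K):                # closed-form solution of dp/dt = r p (1 - p/K)
    return K / (1 + (K / p0 - 1) * np.exp(-r * t))

t = np.arange(60, dtype=float)            # 60 trading days (synthetic)
price = logistic(t, 10, 0.15, 40) + np.random.default_rng(6).normal(0, 0.6, t.size)

popt, _ = curve_fit(logistic, t, price, p0=(price[0], 0.1, price.max() * 1.5))
forecast = logistic(np.arange(60, 70, dtype=float), *popt)
print("fitted (p0, r, K):", np.round(popt, 3))
print("10-day-ahead trend:", np.round(forecast, 2))
```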

  19. An efficient descriptor model for designing materials for solar cells

    Science.gov (United States)

    Alharbi, Fahhad H.; Rashkeev, Sergey N.; El-Mellouhi, Fedwa; Lüthi, Hans P.; Tabet, Nouar; Kais, Sabre

    2015-11-01

    An efficient descriptor model for fast screening of potential materials for solar cell applications is presented. It works for both excitonic and non-excitonic solar cell materials, and in addition to the energy gap it includes the absorption spectrum (α(E)) of the material. The charge transport properties of the explored materials are modelled using the characteristic diffusion length (Ld) determined for the respective family of compounds. The presented model surpasses the widely used Scharber model developed for bulk heterojunction solar cells. Using published experimental data, we show that the presented model is more accurate in predicting the achievable efficiencies. To model both excitonic and non-excitonic systems, two different sets of parameters are used to account for the different modes of operation. The analysis of the presented descriptor model clearly shows the benefit of including α(E) and Ld in view of improved screening results.
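
Numerically, the benefit of including α(E) and Ld can be illustrated by bounding the short-circuit current with a collection-limited absorptance 1 - exp(-α Ld). The spectrum, band gap, and diffusion length below are toy values, not the descriptor model's calibrated parameters.

```python
# Toy estimate of a photocurrent upper bound from alpha(E) and diffusion length.
import numpy as np

E = np.linspace(0.5, 4.0, 500)            # photon energy grid [eV]
E_gap = 1.4                               # assumed band gap [eV]
alpha = np.where(E > E_gap, 1e4 * np.sqrt(np.maximum(E - E_gap, 0)), 0.0)  # [1/cm]
phi = 1e17 * np.exp(-((E - 1.8) / 0.8) ** 2)   # toy solar photon flux [1/(cm^2 s eV)]

Ld = 2e-4                                 # diffusion length: 2 um in cm
absorptance = 1.0 - np.exp(-alpha * Ld)   # collection-limited absorption
q = 1.602e-19                             # elementary charge [C]
J_sc = q * np.sum(phi * absorptance) * (E[1] - E[0])   # rectangle-rule integral [A/cm^2]
print(f"estimated J_sc upper bound: {J_sc * 1e3:.1f} mA/cm^2")
```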

  20. Efficient multi-scenario Model Predictive Control for water resources management with ensemble streamflow forecasts

    Science.gov (United States)

    Tian, Xin; Negenborn, Rudy R.; van Overloop, Peter-Jules; María Maestre, José; Sadowska, Anna; van de Giesen, Nick

    2017-11-01

    Model Predictive Control (MPC) is one of the most advanced real-time control techniques and has been widely applied to Water Resources Management (WRM). MPC can manage the water system in a holistic manner and has a flexible structure to incorporate specific elements, such as setpoints and constraints. Therefore, MPC has shown versatile performance in many branches of WRM. Nonetheless, with the in-depth understanding of stochastic hydrology in recent studies, MPC also faces the challenge of how to cope with hydrological uncertainty in its decision-making process. A possible way to embed the uncertainty is to generate an Ensemble Forecast (EF) of hydrological variables, rather than a deterministic one. The combination of MPC and EF results in a more comprehensive approach: Multi-scenario MPC (MS-MPC). In this study, we first assess the model performance of MS-MPC, considering an ensemble streamflow forecast. Notably, computational inefficiency may be a critical obstacle that hinders the applicability of MS-MPC. In fact, as more scenarios are taken into account, the computational burden of solving the optimization problem in MS-MPC increases accordingly. To deal with this challenge, we propose the Adaptive Control Resolution (ACR) approach as a computationally efficient scheme to practically reduce the number of control variables in MS-MPC. In brief, the ACR approach uses a mixed-resolution control time step from the near future to the distant future. The ACR-MPC approach is tested on a real-world case study: an integrated flood control and navigation problem in the North Sea Canal of the Netherlands. The approach reduces the computation time by 18% or more in our case study. At the same time, the model performance of ACR-MPC remains close to that of conventional MPC.
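
The Adaptive Control Resolution idea itself is simple to sketch: keep a fine control time step over the near future and coarser steps toward the horizon, shrinking the number of decision variables per scenario. The step sizes and horizons below are illustrative, not the North Sea Canal configuration.

```python
# Sketch of an ACR time grid: fine steps near-term, coarse steps far-term.
def acr_time_grid(fine_step_min=10, fine_horizon_h=6,
                  coarse_step_min=60, total_horizon_h=48):
    grid, t = [], 0
    while t < total_horizon_h * 60:
        step = fine_step_min if t < fine_horizon_h * 60 else coarse_step_min
        grid.append((t, step))            # (start time [min], step length [min])
        t += step
    return grid

grid = acr_time_grid()
uniform_steps = 48 * 60 // 10             # steps if the fine resolution were kept throughout
print(f"ACR control steps: {len(grid)} vs uniform: {uniform_steps}")
# fewer decision variables per scenario keeps multi-scenario MPC tractable
```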

  1. Selecting Optimal Random Forest Predictive Models: A Case Study on Predicting the Spatial Distribution of Seabed Hardness

    Science.gov (United States)

    Li, Jin; Tran, Maggie; Siwabessy, Justy

    2016-01-01

    Spatially continuous predictions of seabed hardness are important baseline environmental information for sustainable management of Australia’s marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy and can be inferred from underwater video footage at limited locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (i.e., hard90 and hard70). We developed optimal predictive models to predict seabed hardness using random forest (RF) based on the point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods, namely variable importance (VI), averaged variable importance (AVI), knowledge-informed AVI (KIAVI), Boruta, and regularized RF (RRF), were tested based on predictive accuracy. Effects of highly correlated, important and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach used to pre-select predictive variables by excluding highly correlated variables needs to be re-examined; 4) the identification of the important and unimportant predictors provides useful guidelines for further improving predictive models; 5) FS methods select the most accurate predictive model(s) instead of the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to ‘small p and large n’ problems in environmental sciences. Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency and
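
Of the feature selection methods listed, averaged variable importance (AVI) is the most direct to sketch: average the random forest importances over repeated fits with different seeds and keep the top-ranked predictors. The data, repeat count, and cut-off below are illustrative placeholders, not the multibeam backscatter setup.

```python
# Hedged sketch of AVI-style feature selection with random forests.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
X = rng.normal(size=(500, 12))            # 12 candidate predictors (synthetic)
y = (X[:, 0] + X[:, 1] - X[:, 2] + rng.normal(0, 0.5, 500) > 0).astype(int)

n_repeats = 20
avi = np.zeros(X.shape[1])
for seed in range(n_repeats):             # average importances over repeated fits
    rf = RandomForestClassifier(n_estimators=200, random_state=seed).fit(X, y)
    avi += rf.feature_importances_
avi /= n_repeats

keep = np.argsort(avi)[::-1][:5]          # retain the 5 top-ranked predictors
print("averaged importances:", np.round(avi, 3))
print("selected predictor indices:", sorted(keep.tolist()))
```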

  2. Prediction Model of Battery State of Charge and Control Parameter Optimization for Electric Vehicle

    Directory of Open Access Journals (Sweden)

    Bambang Wahono

    2015-07-01

    Full Text Available This paper presents the construction of a battery state of charge (SOC) prediction model and an optimization method for that model to appropriately control a number of parameters with SOC as the battery output objective. The Research Centre for Electrical Power and Mechatronics, Indonesian Institute of Sciences, has tested its electric vehicle research prototype on the road, monitoring its voltage, current, temperature, time, vehicle velocity, motor speed, and SOC during operation. Using this experimental data, the prediction model of battery SOC was built. A stepwise method accounting for multicollinearity efficiently developed the battery prediction model, describing the multiple control parameters in relation to characteristic values such as SOC. It was demonstrated that particle swarm optimization (PSO) successfully and efficiently calculated optimal control parameters to optimize evaluation items such as SOC based on the model.
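
The optimization step can be illustrated with a bare-bones particle swarm loop searching for control parameters that minimize a predicted-SOC error. The quadratic objective stands in for the paper's fitted regression model, and the swarm settings are generic defaults.

```python
# Minimal PSO loop; the objective is a stand-in for the fitted SOC model.
import numpy as np

def objective(p):                         # placeholder "predicted SOC error"
    return np.sum((p - np.array([0.3, -1.2, 2.0])) ** 2, axis=1)

rng = np.random.default_rng(8)
n, dim = 30, 3
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), objective(pos)
gbest = pbest[pbest_val.argmin()]

for _ in range(200):
    r1, r2 = rng.uniform(size=(2, n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    val = objective(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmin()]

print("optimal control parameters:", np.round(gbest, 3))
```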

  3. Impact of modellers' decisions on hydrological a priori predictions

    Science.gov (United States)

    Holländer, H. M.; Bormann, H.; Blume, T.; Buytaert, W.; Chirico, G. B.; Exbrayat, J.-F.; Gustafsson, D.; Hölzel, H.; Krauße, T.; Kraft, P.; Stoll, S.; Blöschl, G.; Flühler, H.

    2014-06-01

    added information. In this qualitative analysis of a statistically small number of predictions we learned (i) that soft information such as the modeller's system understanding is as important as the model itself (hard information), (ii) that the sequence of modelling steps matters (field visit, interactions between differently experienced experts, choice of model, selection of available data, and methods for parameter guessing), and (iii) that added process understanding can be as efficient as adding data for improving parameters needed to satisfy model requirements.

  4. An efficient model for predicting mixing lengths in serial pumping of petroleum products

    Energy Technology Data Exchange (ETDEWEB)

    Baptista, Renan Martins [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil). Centro de Pesquisas. Div. de Explotacao]. E-mail: renan@cenpes.petrobras.com.br; Rachid, Felipe Bastos de Freitas [Universidade Federal Fluminense, Niteroi, RJ (Brazil). Dept. de Engenharia Mecanica]. E-mail: rachid@mec.uff.br; Araujo, Jose Henrique Carneiro de [Universidade Federal Fluminense, Niteroi, RJ (Brazil). Dept. de Ciencia da Computacao]. E-mail: jhca@dcc.ic.uff.br

    2000-07-01

    This paper presents a new model for estimating the mixing volumes which arise in batch transfers in multiproduct pipelines. The novel features of the model are the incorporation of the variation of flow rate with time and the use of a more precise effective dispersion coefficient, which is considered to depend on the concentration. The governing equation of the model forms a nonlinear initial value problem that is solved by using a predictor-corrector finite difference method. A comparison among the theoretical predictions of the proposed model, a field test and other classical procedures shows that it exhibits the best estimate over the whole range of admissible concentrations investigated. (author)

  5. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP.

    Science.gov (United States)

    Deng, Li; Wang, Guohua; Chen, Bo

    2015-01-01

    In view of the evaluation and decision-making problem of human-machine interface layout design for cabins, an operating comfort prediction model is proposed based on GEP (Gene Expression Programming), using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as independent variables to establish the comfort model of operating posture. Factor analysis is adopted to decrease the variable dimension; the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. The Chinese virtual human body model is built with CATIA software and is used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best-fitting function between the joint angles and operating comfort; operating comfort can then be predicted quantitatively. The operating comfort prediction result for the human-machine interface layout of a driller control room shows that the GEP-based prediction model is fast and efficient, gives good predictions, and can improve design efficiency.

  6. Economic model predictive control theory, formulations and chemical process applications

    CERN Document Server

    Ellis, Matthew; Christofides, Panagiotis D

    2017-01-01

    This book presents general methods for the design of economic model predictive control (EMPC) systems for broad classes of nonlinear systems that address key theoretical and practical considerations including recursive feasibility, closed-loop stability, closed-loop performance, and computational efficiency. Specifically, the book proposes: Lyapunov-based EMPC methods for nonlinear systems; two-tier EMPC architectures that are highly computationally efficient; and EMPC schemes handling explicitly uncertainty, time-varying cost functions, time-delays and multiple-time-scale dynamics. The proposed methods employ a variety of tools ranging from nonlinear systems analysis, through Lyapunov-based control techniques to nonlinear dynamic optimization. The applicability and performance of the proposed methods are demonstrated through a number of chemical process examples. The book presents state-of-the-art methods for the design of economic model predictive control systems for chemical processes. In addition to being...

  7. Efficient first-principles prediction of solid stability: Towards chemical accuracy

    Science.gov (United States)

    Zhang, Yubo; Kitchaev, Daniil A.; Yang, Julia; Chen, Tina; Dacek, Stephen T.; Sarmiento-Pérez, Rafael A.; Marques, Miguel A. L.; Peng, Haowei; Ceder, Gerbrand; Perdew, John P.; Sun, Jianwei

    2018-03-01

    The question of material stability is of fundamental importance to any analysis of system properties in condensed matter physics and materials science. The ability to evaluate chemical stability, i.e., whether a stoichiometry will persist in some chemical environment, and structure selection, i.e. what crystal structure a stoichiometry will adopt, is critical to the prediction of materials synthesis, reactivity and properties. Here, we demonstrate that density functional theory, with the recently developed strongly constrained and appropriately normed (SCAN) functional, has advanced to a point where both facets of the stability problem can be reliably and efficiently predicted for main group compounds, while transition metal compounds are improved but remain a challenge. SCAN therefore offers a robust model for a significant portion of the periodic table, presenting an opportunity for the development of novel materials and the study of fine phase transformations even in largely unexplored systems with little to no experimental data.

  8. Data-driven modeling and real-time distributed control for energy efficient manufacturing systems

    International Nuclear Information System (INIS)

    Zou, Jing; Chang, Qing; Arinez, Jorge; Xiao, Guoxian

    2017-01-01

    As manufacturers face the challenges of increasing global competition and energy saving requirements, it is imperative to seek out opportunities to reduce energy waste and overall cost. In this paper, a novel data-driven stochastic manufacturing system modeling method is proposed to identify and predict energy saving opportunities and their impact on production. A real-time distributed feedback production control policy, which integrates the current and predicted system performance, is established to improve the overall profit and energy efficiency. A case study is presented to demonstrate the effectiveness of the proposed control policy. - Highlights: • A data-driven stochastic manufacturing system model is proposed. • Real-time system performance and energy saving opportunity identification method is developed. • Prediction method for future potential system performance and energy saving opportunity is developed. • A real-time distributed feedback control policy is established to improve energy efficiency and overall system profit.

  9. Higher energy efficiency and better water quality by using model predictive flow control at water supply systems

    NARCIS (Netherlands)

    Bakker, M.; Verberk, J.Q.J.C.; Palmen, L.J.; Sperber, V.; Bakker, G.

    2011-01-01

    Half of all water supply systems in the Netherlands are controlled by model predictive flow control; the other half are controlled by conventional level based control. The differences between conventional level based control and model predictive control were investigated in experiments at five full

  10. Predicting Power Outages Using Multi-Model Ensemble Forecasts

    Science.gov (United States)

    Cerrai, D.; Anagnostou, E. N.; Yang, J.; Astitha, M.

    2017-12-01

    Power outages affect millions of people in the United States every year, impacting the economy and everyday life. An Outage Prediction Model (OPM) has been developed at the University of Connecticut to help utilities quickly restore outages and limit their adverse consequences for the population. The OPM, operational since 2015, combines several non-parametric machine learning (ML) models that use historical weather storm simulations and high-resolution weather forecasts, satellite remote sensing data, and infrastructure and land cover data to predict the number and spatial distribution of power outages. This study presents a new methodology, developed to improve the outage model performance by combining weather- and soil-related variables from three different weather models (WRF 3.7, WRF 3.8 and RAMS/ICLAMS). First, we present a performance evaluation of each model variable, comparing historical weather analyses with station data or reanalysis over the entire storm data set. Each variable of the new outage model version is then extracted from the best performing weather model for that variable, and sensitivity tests are performed to investigate the most efficient variable combination for outage prediction purposes. Although the final variable combination is drawn from different weather models, this ensemble power outage prediction, based on multiple weather forcings and multiple statistical models, outperforms the currently operational OPM version based on a single weather forcing (WRF 3.7), because each model component is closest to the actual atmospheric state.

  11. Hierarchical Neural Regression Models for Customer Churn Prediction

    Directory of Open Access Journals (Sweden)

    Golshan Mohammadi

    2013-01-01

    Full Text Available As customers are the main assets of each industry, customer churn prediction is becoming a major task for companies to remain competitive. In the literature, the better applicability and efficiency of hierarchical data mining techniques has been reported. This paper considers three hierarchical models built by combining four different data mining techniques for churn prediction: backpropagation artificial neural networks (ANN), self-organizing maps (SOM), alpha-cut fuzzy c-means (α-FCM), and the Cox proportional hazards regression model. The hierarchical models are ANN + ANN + Cox, SOM + ANN + Cox, and α-FCM + ANN + Cox. In particular, the first component of each model clusters data into churner and nonchurner groups and also filters out unrepresentative data or outliers. Then, the clustered data are used by the second technique to assign customers to churner and nonchurner groups. Finally, the correctly classified data are used to create the Cox proportional hazards model. To evaluate the performance of the hierarchical models, an Iranian mobile dataset is considered. The experimental results show that the hierarchical models outperform the single Cox regression baseline model in terms of prediction accuracy, Type I and II errors, RMSE, and MAD metrics. In addition, the α-FCM + ANN + Cox model performs significantly better than the two other hierarchical models.

  12. Modelling water uptake efficiency of root systems

    Science.gov (United States)

    Leitner, Daniel; Tron, Stefania; Schröder, Natalie; Bodner, Gernot; Javaux, Mathieu; Vanderborght, Jan; Vereecken, Harry; Schnepf, Andrea

    2016-04-01

    Water uptake is crucial for plant productivity. Trait-based breeding for more water-efficient crops will enable sustainable agricultural management under specific pedoclimatic conditions and can increase the drought resistance of plants. Mathematical modelling can be used to find suitable root system traits for better water uptake efficiency, defined as the amount of water taken up per unit of root biomass. This approach requires long simulation times and a large number of simulation runs, since we test different root systems under different pedoclimatic conditions. In this work, we model water movement by the 1-dimensional Richards equation with the soil hydraulic properties described according to the van Genuchten model. Climatic conditions serve as the upper boundary condition. The root system grows during the simulation period and water uptake is calculated via a sink term (after Tron et al. 2015). The goal of this work is to compare different free software tools based on different numerical schemes to solve the model. We compare implementations using DUMUX (based on finite volumes), Hydrus 1D (based on finite elements), and a Matlab implementation of Van Dam & Feddes 2000 (based on finite differences). We analyse the methods for accuracy, speed and flexibility. Using this model case study, we can clearly show the impact of various root system traits on water uptake efficiency. Furthermore, we can quantify frequent simplifications that are introduced in the modelling step, like considering a static root system instead of a growing one, or considering a sink term based on root density instead of the full root hydraulic model (Javaux et al. 2008). References Tron, S., Bodner, G., Laio, F., Ridolfi, L., & Leitner, D. (2015). Can diversity in root architecture explain plant water use efficiency? A modeling study. Ecological modelling, 312, 200-210. Van Dam, J. C., & Feddes, R. A. (2000). Numerical simulation of infiltration, evaporation and shallow

  13. Quantifying the predictive consequences of model error with linear subspace analysis

    Science.gov (United States)

    White, Jeremy T.; Doherty, John E.; Hughes, Joseph D.

    2014-01-01

    All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loève parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process.

  14. Model Predictive Control with Constraints of a Wind Turbine

    DEFF Research Database (Denmark)

    Henriksen, Lars Christian; Poulsen, Niels Kjølstad

    2007-01-01

    Model predictive control of wind turbines offers a more systematic approach to constructing controllers that handle constraints while focusing on the main control objective. In this article several controllers are designed for different wind conditions, and appropriate switching conditions ensure an efficient control of the wind turbine over the entire range of wind speeds. Both onshore and floating offshore wind turbines are tested with the controllers.

  15. DASPfind: new efficient method to predict drug–target interactions

    KAUST Repository

    Ba Alawi, Wail

    2016-03-16

    Background Identification of novel drug–target interactions (DTIs) is important for drug discovery. Experimental determination of such DTIs is costly and time consuming; hence, it necessitates the development of efficient computational methods for the accurate prediction of potential DTIs. To date, many computational methods have been proposed for this purpose, but they suffer the drawback of a high rate of false positive predictions. Results Here, we developed a novel computational DTI prediction method, DASPfind. DASPfind uses simple paths of particular lengths inferred from a graph that describes DTIs, similarities between drugs, and similarities between the protein targets of drugs. We show that on average, over the four gold standard DTI datasets, DASPfind significantly outperforms other existing methods when the single top-ranked predictions are considered, resulting in 46.17 % of these predictions being correct, and it achieves 49.22 % correct single top ranked predictions when the set of all DTIs for a single drug is tested. Furthermore, we demonstrate that our method is best suited for predicting DTIs in cases of drugs with no known targets or with few known targets. We also show the practical use of DASPfind by generating novel predictions for the Ion Channel dataset and validating them manually. Conclusions DASPfind is a computational method for finding reliable new interactions between drugs and proteins. We show over six different DTI datasets that DASPfind outperforms other state-of-the-art methods when the single top-ranked predictions are considered, or when a drug with no known targets or with few known targets is considered. We illustrate the usefulness and practicality of DASPfind by predicting novel DTIs for the Ion Channel dataset. The validated predictions suggest that DASPfind can be used as an efficient method to identify correct DTIs, thus reducing the cost of necessary experimental verifications in the process of drug discovery. DASPfind
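
The path-based scoring can be illustrated on a tiny heterogeneous graph of drug-drug similarities, known DTIs, and target-target similarities. The damped product-of-weights score below is a simplification for illustration, not the exact DASPfind formula.

```python
# Hedged sketch: score a drug-target pair by its simple paths in a DTI graph.
import networkx as nx

G = nx.Graph()
G.add_edge("drugA", "drugB", weight=0.8)      # drug-drug similarity
G.add_edge("drugB", "targetX", weight=1.0)    # known interaction
G.add_edge("targetX", "targetY", weight=0.6)  # target-target similarity
G.add_edge("drugA", "drugC", weight=0.4)
G.add_edge("drugC", "targetY", weight=1.0)

def score(graph, drug, target, max_len=3, damping=0.5):
    total = 0.0
    for path in nx.all_simple_paths(graph, drug, target, cutoff=max_len):
        w = 1.0
        for u, v in zip(path, path[1:]):      # product of edge weights along the path
            w *= graph[u][v]["weight"]
        total += w * damping ** (len(path) - 1)   # penalize longer paths
    return total

print("score(drugA, targetY):", round(score(G, "drugA", "targetY"), 4))
```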

  16. Seismic attenuation relationship with homogeneous and heterogeneous prediction-error variance models

    Science.gov (United States)

    Mu, He-Qing; Xu, Rong-Rong; Yuen, Ka-Veng

    2014-03-01

    Peak ground acceleration (PGA) estimation is an important task in earthquake engineering practice. One of the most well-known models is the Boore-Joyner-Fumal formula, which estimates the PGA using the moment magnitude, the site-to-fault distance and the site foundation properties. In the present study, the complexity for this formula and the homogeneity assumption for the prediction-error variance are investigated and an efficiency-robustness balanced formula is proposed. For this purpose, a reduced-order Monte Carlo simulation algorithm for Bayesian model class selection is presented to obtain the most suitable predictive formula and prediction-error model for the seismic attenuation relationship. In this approach, each model class (a predictive formula with a prediction-error model) is evaluated according to its plausibility given the data. The one with the highest plausibility is robust since it possesses the optimal balance between the data fitting capability and the sensitivity to noise. A database of strong ground motion records in the Tangshan region of China is obtained from the China Earthquake Data Center for the analysis. The optimal predictive formula is proposed based on this database. It is shown that the proposed formula with heterogeneous prediction-error variance is much simpler than the attenuation model suggested by Boore, Joyner and Fumal (1993).

  17. Numerical prediction of Pelton turbine efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Jošt, D; Mežnar, P; Lipej, A, E-mail: dragicajost@turboinstitut.s [Turboinstitut, Rovtnikova 7, Ljubljana, 1210 (Slovenia)

    2010-08-15

    This paper presents a numerical analysis of flow in a 2 jet Pelton turbine with horizontal axis. The analysis was done for the model at several operating points in different operating regimes. The results were compared to the results of a test of the model. Analysis was performed using ANSYS CFX-12.1 computer code. A k-ω SST turbulent model was used. Free surface flow was modelled by two-phase homogeneous model. At first, a steady state analysis of flow in the distributor with two injectors was performed for several needle strokes. This provided us with data on flow energy losses in the distributor and the shape and velocity of jets. The second step was an unsteady analysis of the runner with jets. Torque on the shaft was then calculated from pressure distribution data. Averaged torque values are smaller than measured ones. Consequently, calculated turbine efficiency is also smaller than the measured values, the difference is about 4 %. The shape of the efficiency diagram conforms well to the measurements.

  18. Numerical prediction of Pelton turbine efficiency

    Science.gov (United States)

    Jošt, D.; Mežnar, P.; Lipej, A.

    2010-08-01

    This paper presents a numerical analysis of flow in a 2 jet Pelton turbine with horizontal axis. The analysis was done for the model at several operating points in different operating regimes. The results were compared to the results of a test of the model. Analysis was performed using ANSYS CFX-12.1 computer code. A k-ω SST turbulent model was used. Free surface flow was modelled by two-phase homogeneous model. At first, a steady state analysis of flow in the distributor with two injectors was performed for several needle strokes. This provided us with data on flow energy losses in the distributor and the shape and velocity of jets. The second step was an unsteady analysis of the runner with jets. Torque on the shaft was then calculated from pressure distribution data. Averaged torque values are smaller than measured ones. Consequently, calculated turbine efficiency is also smaller than the measured values, the difference is about 4 %. The shape of the efficiency diagram conforms well to the measurements.

  19. Numerical prediction of Pelton turbine efficiency

    International Nuclear Information System (INIS)

    Jošt, D; Mežnar, P; Lipej, A

    2010-01-01

    This paper presents a numerical analysis of flow in a 2 jet Pelton turbine with horizontal axis. The analysis was done for the model at several operating points in different operating regimes. The results were compared to the results of a test of the model. Analysis was performed using ANSYS CFX-12.1 computer code. A k-ω SST turbulent model was used. Free surface flow was modelled by two-phase homogeneous model. At first, a steady state analysis of flow in the distributor with two injectors was performed for several needle strokes. This provided us with data on flow energy losses in the distributor and the shape and velocity of jets. The second step was an unsteady analysis of the runner with jets. Torque on the shaft was then calculated from pressure distribution data. Averaged torque values are smaller than measured ones. Consequently, calculated turbine efficiency is also smaller than the measured values, the difference is about 4 %. The shape of the efficiency diagram conforms well to the measurements.

  20. IBM SPSS modeler essentials effective techniques for building powerful data mining and predictive analytics solutions

    CERN Document Server

    McCormick, Keith; Wei, Bowen

    2017-01-01

    IBM SPSS Modeler allows quick, efficient predictive analytics and insight building from your data, and is a popular data mining tool. This book will guide you through the data mining process, and presents relevant statistical methods which are used to build predictive models and conduct other analytic tasks using IBM SPSS Modeler. From ...

  1. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP

    Directory of Open Access Journals (Sweden)

    Li Deng

    2015-01-01

    Full Text Available To address the evaluation and decision-making problem in human-machine interface layout design for cabins, an operating comfort prediction model based on GEP (Gene Expression Programming) is proposed, using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as independent variables to establish the comfort model of operating posture. Factor analysis is adopted to reduce the variable dimension: the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built in CATIA software and used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best-fitting function between the joint angles and operating comfort, so that operating comfort can be predicted quantitatively. The operating comfort prediction results for the human-machine interface layout of a driller control room show that the GEP-based prediction model is fast and efficient, has good prediction performance, and can improve design efficiency.

  2. Use of Neural Networks for modeling and predicting boiler's operating performance

    International Nuclear Information System (INIS)

    Kljajić, Miroslav; Gvozdenac, Dušan; Vukmirović, Srdjan

    2012-01-01

    The need for high boiler operating performance requires the application of improved techniques for the rational use of energy. The analysis presented is guided by an effort to identify ways in which energy resources can be used wisely to secure a more efficient final energy supply. The biggest challenges, however, are related to the variety and stochastic nature of the influencing factors. The paper presents a method for modeling, assessing, and predicting the efficiency of boilers based on measured operating performance. The method utilizes a neural network approach to analyze and predict boiler efficiency and also to discover possibilities for enhancing efficiency. The analysis is based on energy surveys of 65 randomly selected boilers located at over 50 sites in the northern province of Serbia. These surveys included a representative range of industrial, public and commercial users of steam and hot water. The sample covered approximately 25% of all boilers in the province and yielded reliable and relevant results. A database combined with soft-computing assistance creates a wide range of possibilities for identifying and assessing factors of influence and for critically evaluating supply-side practices as sources of identified inefficiency. -- Highlights: ► We develop a model for assessing and predicting the efficiency of boilers. ► The method implies the use of an Artificial Neural Network approach for analysis. ► The results obtained correspond well to collected and measured data. ► Findings confirm and present good abilities of a preventive or proactive approach. ► Analysis reveals and specifies opportunities for increasing the efficiency of boilers.
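
    As a rough illustration of the approach described above, the sketch below fits a small feed-forward neural network to synthetic boiler operating data with scikit-learn. The feature names, data, and network size are invented placeholders, not the survey data or architecture used in the study.

```python
# Minimal sketch of a neural-network boiler-efficiency model, assuming
# hypothetical operating features; not the study's data or topology.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Assumed features: load factor, excess-air ratio, flue-gas temp (degC), fuel code
X = rng.uniform([0.3, 1.0, 120, 0], [1.0, 2.0, 260, 3], size=(65, 4))
# Synthetic efficiency target: falls with excess air and flue-gas temperature
y = 95 - 8 * (X[:, 1] - 1.0) - 0.05 * (X[:, 2] - 120) + rng.normal(0, 1, 65)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                   random_state=0))
model.fit(X_tr, y_tr)
print("R^2 on held-out boilers:", model.score(X_te, y_te))
```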

  3. Predicting Equity Markets with Digital Online Media Sentiment: Evidence from Markov-switching Models

    NARCIS (Netherlands)

    Nooijen, S.J.; Broda, S.A.

    2016-01-01

    The authors examine the predictive capabilities of online investor sentiment for the returns and volatility of MSCI U.S. Equity Sector Indices by including exogenous variables in the mean and volatility specifications of a Markov-switching model. As predicted by the semistrong efficient market ...

  4. Energy saving and prediction modeling of petrochemical industries: A novel ELM based on FAHP

    International Nuclear Information System (INIS)

    Geng, ZhiQiang; Qin, Lin; Han, YongMing; Zhu, QunXiong

    2017-01-01

    Extreme learning machine (ELM), a simple single-hidden-layer feed-forward neural network with fast implementation, has been widely applied in many engineering fields. However, it is difficult to enhance the modeling ability of extreme learning when handling high-dimensional noisy data. A predictive modeling method based on the ELM integrated with fuzzy C-means and the analytic hierarchy process (FAHP-ELM) is therefore proposed. The fuzzy C-means algorithm is used to cluster the input attributes of the high-dimensional data. The analytic hierarchy process (AHP) based on entropy weights is used to filter redundant information and extract characteristic components. The fused data is then used as the input of the ELM. Compared with the back-propagation (BP) neural network and the plain ELM, the proposed model shows better performance in terms of convergence speed, generalization and modeling accuracy on University of California Irvine (UCI) benchmark datasets. Finally, the proposed method was applied to build energy-saving and predictive models of the purified terephthalic acid (PTA) solvent system and the ethylene production system. The experimental results demonstrated the validity of the proposed method; it can enhance the efficiency of energy utilization and achieve energy conservation and emission reduction. - Highlights: • The ELM integrated FAHP approach is proposed. • The FAHP-ELM prediction model is effectively verified through UCI datasets. • The energy saving and prediction model of petrochemical industries is obtained. • The method is efficient in improvement of energy efficiency and emission reduction.
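
    The ELM core is compact enough to sketch directly: input weights and biases are drawn at random and never trained, and only the output weights are solved by least squares. The sketch below is a minimal NumPy version on toy data; the FAHP preprocessing (fuzzy C-means plus entropy-weighted AHP) from the paper is not reproduced.

```python
# Minimal extreme learning machine (ELM): random hidden layer, output weights
# by least squares. Toy data only; not the paper's FAHP-ELM pipeline.
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (untrained)
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights via least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy check on a noisy nonlinear function
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 3))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.normal(size=200)
W, b, beta = elm_fit(X[:150], y[:150])
resid = y[150:] - elm_predict(X[150:], W, b, beta)
print("test RMSE:", np.sqrt(np.mean(resid ** 2)))
```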

  5. Development of a thermal control algorithm using artificial neural network models for improved thermal comfort and energy efficiency in accommodation buildings

    International Nuclear Information System (INIS)

    Moon, Jin Woo; Jung, Sung Kwon

    2016-01-01

    Highlights: • An ANN model for predicting the optimal start moment of the cooling system was developed. • An ANN model for predicting the amount of cooling energy consumption was developed. • An optimal control algorithm was developed employing the two ANN models. • The algorithm showed improved thermal comfort and energy efficiency. - Abstract: The aim of this study was to develop a control algorithm to demonstrate improved thermal comfort and building energy efficiency in accommodation buildings during the cooling season. For this, two artificial neural network (ANN)-based predictive and adaptive models were developed and employed in the algorithm. One model predicted the cooling energy consumption during the unoccupied period for different setback temperatures, and the other predicted the time required for restoring the current indoor temperature to the normal set-point temperature. Using numerical simulation methods, the prediction accuracy of the two ANN models and the performance of the algorithm were tested. The test results showed that both ANN models achieved acceptable prediction error rates when applied in the control algorithm. In addition, the algorithm based on the two ANN models provided a more comfortable and energy-efficient indoor thermal environment than the two conventional control methods, which employed, respectively, a fixed set-point temperature for the entire day and a setback temperature during the unoccupied period; the resulting operating range was 23–26 °C during the occupied period and 25–28 °C during the unoccupied period. Based on the analysis, it can be concluded that the optimal algorithm with two predictive and adaptive ANN models can be used to design a more comfortable and energy-efficient indoor thermal environment for accommodation buildings in a comprehensive manner.

  6. Optimal model-free prediction from multivariate time series

    Science.gov (United States)

    Runge, Jakob; Donner, Reik V.; Kurths, Jürgen

    2015-05-01

    Forecasting a time series from multivariate predictors constitutes a challenging problem, especially using model-free approaches. Most techniques, such as nearest-neighbor prediction, quickly suffer from the curse of dimensionality and overfitting for more than a few predictors, which has limited their application mostly to the univariate case. Therefore, selection strategies are needed that harness the available information as efficiently as possible. Since often the right combination of predictors matters, ideally all subsets of possible predictors should be tested for their predictive power, but the exponentially growing number of combinations makes such an approach computationally prohibitive. Here a prediction scheme that overcomes this strong limitation is introduced, utilizing a causal preselection step which drastically reduces the number of possible predictors to the most predictive set of causal drivers, making a globally optimal search scheme tractable. The information-theoretic optimality is derived and practical selection criteria are discussed. As demonstrated for multivariate nonlinear stochastic delay processes, the optimal scheme can even be less computationally expensive than commonly used suboptimal schemes like forward selection. The method suggests a general framework to apply the optimal model-free approach to select variables and subsequently fit a model to further improve a prediction or learn statistical dependencies. The performance of this framework is illustrated on a climatological index of the El Niño Southern Oscillation.

  7. Bayesian calibration of power plant models for accurate performance prediction

    International Nuclear Information System (INIS)

    Boksteen, Sowande Z.; Buijtenen, Jos P. van; Pecnik, Rene; Vecht, Dick van der

    2014-01-01

    Highlights: • Bayesian calibration is applied to power plant performance prediction. • Measurements from a plant in operation are used for model calibration. • A gas turbine performance model and steam cycle model are calibrated. • An integrated plant model is derived. • Part load efficiency is accurately predicted as a function of ambient conditions. - Abstract: Gas turbine combined cycles are expected to play an increasingly important role in the balancing of supply and demand in future energy markets. Thermodynamic modeling of these energy systems is frequently applied to assist in decision making processes related to the management of plant operation and maintenance. In most cases, model inputs, parameters and outputs are treated as deterministic quantities and plant operators make decisions with limited or no regard of uncertainties. As the steady integration of wind and solar energy into the energy market induces extra uncertainties, part load operation and reliability are becoming increasingly important. In the current study, methods are proposed to not only quantify various types of uncertainties in measurements and plant model parameters using measured data, but to also assess their effect on various aspects of performance prediction. The authors aim to account for model parameter and measurement uncertainty, and for systematic discrepancy of models with respect to reality. For this purpose, the Bayesian calibration framework of Kennedy and O’Hagan is used, which is especially suitable for high-dimensional industrial problems. The article derives a calibrated model of the plant efficiency as a function of ambient conditions and operational parameters, which is also accurate in part load. The article shows that complete statistical modeling of power plants not only enhances process models, but can also increase confidence in operational decisions.

  8. Models for efficient integration of solar energy

    DEFF Research Database (Denmark)

    Bacher, Peder

    Efficient operation of energy systems with a substantial amount of renewable energy production is becoming increasingly important. Renewables are dependent on the weather conditions and are therefore by nature volatile and uncontrollable, as opposed to traditional energy production based on combustion. The "smart grid" is a broad term for the technology for addressing the challenge of operating the grid with a large share of renewables. The "smart" part is formed by technologies which model the properties of the systems and efficiently adapt the load to the volatile energy production by using the available flexibility in the system. In the present thesis, methods related to the operation of solar energy systems and to optimal energy use in buildings are presented. Two approaches for forecasting of solar power based on numerical weather predictions (NWPs) are presented; they are applied to forecast ...

  9. Nanotoxicity modelling and removal efficiencies of ZnONP.

    Science.gov (United States)

    Fikirdeşici Ergen, Şeyda; Üçüncü Tunca, Esra

    2018-01-02

    This paper aims to investigate the toxic effect of zinc oxide nanoparticles (ZnONPs) and to analyze the removal of ZnONP in aqueous medium by a consortium consisting of Daphnia magna and Lemna minor. Three separate test groups are formed: L. minor ([Formula: see text]), D. magna ([Formula: see text]), and L. minor + D. magna ([Formula: see text]), and all test groups are exposed to three different nanoparticle concentrations ([Formula: see text]). Time-dependent, concentration-dependent, and group-dependent removal efficiencies are statistically compared by the non-parametric Mann-Whitney U test, and statistically significant differences are observed. The optimum removal values are observed at the highest concentration [Formula: see text] for [Formula: see text], [Formula: see text] for [Formula: see text] and [Formula: see text] for [Formula: see text], and are realized at [Formula: see text] for all test groups [Formula: see text]. There are no statistically significant differences in removal at low concentrations [Formula: see text] in terms of groups, but [Formula: see text] test groups are more efficient than [Formula: see text] test groups in removal of ZnONP at the [Formula: see text] concentration. Regression analysis is also performed for all prediction models. Different models are tested, and cubic models show the highest coefficients of determination (R2). In the toxicity models, R2 values fall in the interval (0.892, 0.997). A simple solution-phase method is used to synthesize the ZnO nanoparticles, and Dynamic Light Scattering and X-Ray Diffraction (XRD) are used to determine their particle size.
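
    The two statistical steps named above are standard and easy to reproduce. The sketch below runs a Mann-Whitney U test between two groups and fits a cubic polynomial with an R2 score; the removal values and time points are made up for illustration, not the study's measurements.

```python
# Mann-Whitney U comparison of two groups plus a cubic fit with R^2,
# on invented removal-efficiency data (not the paper's measurements).
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical removal efficiencies (%) for two test groups
group_a = np.array([62.0, 65.5, 63.8, 66.1, 64.9])   # e.g. one species alone
group_b = np.array([71.2, 73.5, 70.8, 74.0, 72.6])   # e.g. the consortium
stat, p = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"Mann-Whitney U={stat:.1f}, p={p:.4f}")

# Cubic (third-order polynomial) model of removal vs. time, with R^2
t = np.array([0, 12, 24, 48, 72, 96], dtype=float)        # hours
removal = np.array([0, 18, 34, 55, 66, 70], dtype=float)  # %
coeffs = np.polyfit(t, removal, deg=3)
fitted = np.polyval(coeffs, t)
ss_res = np.sum((removal - fitted) ** 2)
ss_tot = np.sum((removal - removal.mean()) ** 2)
print("cubic R^2:", 1 - ss_res / ss_tot)
```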

  10. Energy efficiency of selected OECD countries: A slacks based model with undesirable outputs

    International Nuclear Information System (INIS)

    Apergis, Nicholas; Aye, Goodness C.; Barros, Carlos Pestana; Gupta, Rangan; Wanke, Peter

    2015-01-01

    This paper presents an efficiency assessment of selected OECD countries using a Slacks Based Model with undesirable or bad outputs (SBM-Undesirable). In this research, SBM-Undesirable is used first in a two-stage approach to assess the relative efficiency of OECD countries using the indicators most frequently adopted in the energy efficiency literature. In the second stage, GLMM–MCMC methods are combined with the SBM-Undesirable results to produce a model for energy performance with effective predictive ability. The results reveal different impacts of contextual variables, such as economic blocks and the capital–labor ratio, on energy efficiency levels. - Highlights: • We analyze the energy efficiency of selected OECD countries. • SBM-Undesirable and MCMC–GLMM are combined for this purpose. • Find that efficiency levels are high but declining over time. • Analysis with contextual variables shows varying efficiency levels across groups. • Capital-intensive countries are more energy efficient than labor-intensive countries.

  11. An adaptive grid to improve the efficiency and accuracy of modelling underwater noise from shipping

    Science.gov (United States)

    Trigg, Leah; Chen, Feng; Shapiro, Georgy; Ingram, Simon; Embling, Clare

    2017-04-01

    Underwater noise from shipping is becoming a significant concern and has been listed as a pollutant under Descriptor 11 of the Marine Strategy Framework Directive. Underwater noise models are an essential tool to assess and predict noise levels for regulatory procedures such as environmental impact assessments and ship noise monitoring. There are generally two approaches to noise modelling. The first is based on simplified energy flux models, assuming either spherical or cylindrical propagation of sound energy. These models are very quick, but they ignore important water column and seabed properties and produce significant errors in areas subject to temperature stratification (Shapiro et al., 2014). The second type of model (e.g. ray-tracing and parabolic equation) is based on an advanced physical representation of sound propagation. However, these acoustic propagation models are computationally expensive to execute. Shipping noise modelling requires spatial discretization in order to group noise sources together using a grid. A uniform grid size is often selected to achieve either the greatest efficiency (i.e. speed of computations) or the greatest accuracy. In contrast, this work aims to produce efficient and accurate noise level predictions by presenting an adaptive grid where cell size varies with distance from the receiver. The spatial range over which a certain cell size is suitable was determined by calculating the distance from the receiver at which propagation loss becomes uniform across a grid cell. The computational efficiency and accuracy of the resulting adaptive grid were tested by comparing it to uniform 1 km and 5 km grids, which represent an accurate and a computationally efficient grid, respectively. For a case study of the Celtic Sea, an application of the adaptive grid over an area of 160×160 km reduced the number of model executions required from 25600 for a 1 km grid to 5356 in December and to between 5056 and 13132 in August.
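
    The core idea, fine cells near the receiver where propagation loss varies quickly and coarser cells further away, can be sketched as a simple distance-to-cell-size rule. The thresholds and sizes below are invented for illustration, not the values derived in the study.

```python
# Toy distance-dependent cell sizing behind an adaptive noise grid.
# Thresholds and cell sizes are assumptions, not the study's values.
def cell_size_km(distance_km: float) -> float:
    """Pick a grid cell size that grows with distance from the receiver."""
    if distance_km < 10:
        return 1.0    # near field: propagation loss varies quickly
    elif distance_km < 50:
        return 2.5
    else:
        return 5.0    # far field: loss is nearly uniform across a cell

for d in (2, 20, 80):
    print(f"{d:>3} km from receiver -> {cell_size_km(d)} km cells")
```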

  12. Modelling bankruptcy prediction models in Slovak companies

    Directory of Open Access Journals (Sweden)

    Kovacova Maria

    2017-01-01

    Full Text Available Intensive research has been carried out by academics and practitioners on models for bankruptcy prediction and credit risk management. In spite of numerous studies on forecasting bankruptcy using traditional statistical techniques (e.g. discriminant analysis and logistic regression) and early artificial intelligence models (e.g. artificial neural networks), there is a trend toward machine learning models (support vector machines, bagging, boosting, and random forests) to predict bankruptcy one year prior to the event. Comparing the performance of these unconventional approaches with results obtained by discriminant analysis, logistic regression, and neural networks, it has been found that bagging, boosting, and random forest models outperform the other techniques, and that prediction accuracy in the testing sample improves when additional variables are included. On the other hand, the prediction accuracy of older, well-known bankruptcy prediction models is still quite high. We therefore aim to analyse these older models on a dataset of Slovak companies to validate their prediction ability under specific conditions. Furthermore, these models are modified in line with new trends by calculating the influence of eliminating selected variables on their overall prediction ability.
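
    The classical-versus-machine-learning comparison described above is straightforward to sketch with scikit-learn. The snippet below compares logistic regression with a random forest on synthetic, imbalanced "financial ratio" data; it stands in for the idea only and uses none of the Slovak company data.

```python
# Logistic regression vs. random forest on synthetic bankruptcy-like data,
# a stand-in for the comparison described in the abstract.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Imbalanced classes: bankruptcies are the rare (15%) outcome
X, y = make_classification(n_samples=1000, n_features=12, n_informative=6,
                           weights=[0.85, 0.15], random_state=0)
models = [("logistic regression", LogisticRegression(max_iter=1000)),
          ("random forest", RandomForestClassifier(n_estimators=300,
                                                   random_state=0))]
for name, clf in models:
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean ROC AUC = {auc:.3f}")
```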

  13. Estimation and prediction under local volatility jump-diffusion model

    Science.gov (United States)

    Kim, Namhyoung; Lee, Younhee

    2018-02-01

    Volatility is an important factor in operating a company and managing risk. In portfolio optimization and in risk hedging with options, the value of the option is evaluated using a volatility model. Various attempts have been made to predict option values. Recent studies have shown that stochastic volatility models and jump-diffusion models reflect stock price movements accurately. However, these models have practical limitations. Combining them with the local volatility model, which is widely used among practitioners, may lead to better performance. In this study, we propose a more effective and efficient method of estimating option prices by combining the local volatility model with the jump-diffusion model, and we apply it using both artificial and actual market data to evaluate its performance. The calibration process for estimating the jump parameters and local volatility surfaces is divided into three stages. We apply the local volatility model, the stochastic volatility model, and the local volatility jump-diffusion model estimated by the proposed method to KOSPI 200 index option pricing. The proposed method displays good estimation and prediction performance.

  14. MULTIPLE LINEAR REGRESSION ANALYSIS FOR PREDICTION OF BOILER LOSSES AND BOILER EFFICIENCY

    OpenAIRE

    Chayalakshmi C.L

    2018-01-01

    Calculation of boiler efficiency is essential if its parameters need to be controlled for either maintaining or enhancing its efficiency. However, determining boiler efficiency using the conventional method is time consuming and very expensive, so it is not practical to determine boiler efficiency frequently. The work presented in this paper deals with establishing the statistical mo...

  15. Experimental Evaluation of Balance Prediction Models for Sit-to-Stand Movement in the Sagittal Plane

    Directory of Open Access Journals (Sweden)

    Oscar David Pena Cabra

    2013-01-01

    Full Text Available Evaluation of balance control ability is becoming important in rehabilitation training. In this paper, in order to clarify the usefulness and limitations of a traditional simple inverted pendulum model for balance prediction in sit-to-stand movements, the traditional simple model was compared, in its balance predictions, to an inertia-variable (rotational-radius-variable) inverted pendulum model that includes multiple-joint influence. The predictions were tested experimentally with six healthy subjects. The evaluation showed that the multiple-joint-influence model is more accurate in predicting balance under demanding sit-to-stand conditions. On the other hand, the evaluation also showed that the traditionally used simple inverted pendulum model is still reliable in predicting balance during sit-to-stand movement under non-demanding (normal) conditions. In particular, the simple model was shown to be effective for sit-to-stand movements with low center-of-mass velocity at seat-off. Moreover, almost all trajectories under the normal condition seemed to follow the same control strategy, in which the subjects used more energy than the minimum necessary for standing up. This suggests that safety considerations come before energy efficiency during a sit-to-stand, since the most energy-efficient trajectory is close to the backward-fall boundary.

  16. Driving Green: Toward the Prediction and Influence of Efficient Driving Behavior

    Science.gov (United States)

    Newsome, William D.

    Sub-optimal efficiency in activities involving the consumption of fossil fuels, such as driving, contributes to a miscellany of negative environmental, political, economic and social externalities. Demonstrations of the effectiveness of feedback interventions can be found in countless organizational settings, as can demonstrations of individual differences in sensitivity to feedback interventions. Mechanisms providing feedback to drivers about fuel economy are becoming standard equipment in most new vehicles, but vary considerably in their constitution. A keystone of Radical Behaviorism is the acknowledgement that verbal behavior appears to play a role in mediating apparent susceptibility to influence by contingencies of varying delay. In the current study, samples of verbal behavior (rules) were collected in the context of a feedback intervention to improve driving efficiency. In an analysis of differences in individual responsiveness to the feedback intervention, the rate of novel rules generated per week by drivers was found to account for a substantial proportion of the variability in relative efficiency gains across participants. These findings bolster the predictive utility of conceptual tools such as the basic distinction between contingency-shaped and rule-governed behavior, the elaboration of direct-acting and indirect-acting contingencies, and the psychological flexibility model.

  17. Is the Langevin phase equation an efficient model for oscillating neurons?

    Science.gov (United States)

    Ota, Keisuke; Tsunoda, Takamasa; Omori, Toshiaki; Watanabe, Shigeo; Miyakawa, Hiroyoshi; Okada, Masato; Aonishi, Toru

    2009-12-01

    The Langevin phase model is an important canonical model for capturing coherent oscillations of neural populations. However, little attention has been given to verifying its applicability. In this paper, we demonstrate that the Langevin phase equation is an efficient model for neural oscillators by using the machine learning method in two steps: (a) Learning of the Langevin phase model. We estimated the parameters of the Langevin phase equation, i.e., a phase response curve and the intensity of white noise from physiological data measured in the hippocampal CA1 pyramidal neurons. (b) Test of the estimated model. We verified whether a Fokker-Planck equation derived from the Langevin phase equation with the estimated parameters could capture the stochastic oscillatory behavior of the same neurons disturbed by periodic perturbations. The estimated model could predict the neural behavior, so we can say that the Langevin phase equation is an efficient model for oscillating neurons.
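
    The Langevin phase equation itself is compact enough to simulate directly. The sketch below integrates an assumed phase oscillator dφ/dt = ω + Z(φ)·I(t) + ξ(t) with the Euler-Maruyama scheme; the sinusoidal phase response curve, noise intensity, and periodic stimulus are invented placeholders, not the parameters estimated from the CA1 recordings.

```python
# Euler-Maruyama simulation of a Langevin phase equation
#   dphi/dt = omega + Z(phi) * I(t) + xi(t)
# with an assumed sinusoidal phase response curve Z and white noise xi.
# All parameters are illustrative, not the paper's estimates.
import numpy as np

omega = 2 * np.pi * 8.0                           # intrinsic frequency (rad/s)
D = 0.5                                           # white-noise intensity
Z = lambda phi: 1.0 + np.sin(phi)                 # assumed phase response curve
I = lambda t: 0.3 * np.sin(2 * np.pi * 7.5 * t)   # periodic perturbation

dt, T = 1e-3, 5.0
n = int(T / dt)
phi = np.zeros(n)
rng = np.random.default_rng(0)
for k in range(n - 1):
    drift = omega + Z(phi[k]) * I(k * dt)
    phi[k + 1] = phi[k] + drift * dt + np.sqrt(2 * D * dt) * rng.normal()

print("final phase (mod 2*pi):", phi[-1] % (2 * np.pi))
```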

  18. Is the Langevin phase equation an efficient model for oscillating neurons?

    International Nuclear Information System (INIS)

    Ota, Keisuke; Tsunoda, Takamasa; Aonishi, Toru; Omori, Toshiaki; Okada, Masato; Watanabe, Shigeo; Miyakawa, Hiroyoshi

    2009-01-01

    The Langevin phase model is an important canonical model for capturing coherent oscillations of neural populations. However, little attention has been given to verifying its applicability. In this paper, we demonstrate that the Langevin phase equation is an efficient model for neural oscillators by using the machine learning method in two steps: (a) Learning of the Langevin phase model. We estimated the parameters of the Langevin phase equation, i.e., a phase response curve and the intensity of white noise from physiological data measured in the hippocampal CA1 pyramidal neurons. (b) Test of the estimated model. We verified whether a Fokker-Planck equation derived from the Langevin phase equation with the estimated parameters could capture the stochastic oscillatory behavior of the same neurons disturbed by periodic perturbations. The estimated model could predict the neural behavior, so we can say that the Langevin phase equation is an efficient model for oscillating neurons.

  19. Hurst exponent and prediction based on weak-form efficient market hypothesis of stock markets

    Science.gov (United States)

    Eom, Cheoljun; Choi, Sunghoon; Oh, Gabjin; Jung, Woo-Sung

    2008-07-01

    We empirically investigated the relationships between the degree of efficiency and the predictability in financial time-series data. The Hurst exponent was used as the measurement of the degree of efficiency, and the hit rate calculated from the nearest-neighbor prediction method was used for the prediction of the directions of future price changes. We used 60 market indexes of various countries. We empirically discovered that the relationship between the degree of efficiency (the Hurst exponent) and the predictability (the hit rate) is strongly positive. That is, a market index with a higher Hurst exponent tends to have a higher hit rate. These results suggested that the Hurst exponent is useful for predicting future price changes. Furthermore, we also discovered that the Hurst exponent and the hit rate are useful as standards that can distinguish emerging capital markets from mature capital markets.
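
    A classical way to estimate the Hurst exponent is rescaled-range (R/S) analysis: compute the range of cumulative deviations over windows of increasing size, normalize by the window standard deviation, and read H off the log-log slope. The sketch below implements this in NumPy on i.i.d. noise, for which H should come out near 0.5; it is the textbook estimator, not necessarily the exact one used in the study.

```python
# Rescaled-range (R/S) estimate of the Hurst exponent. For an i.i.d. series
# H ~ 0.5 (weak-form efficiency); persistent series give H > 0.5.
import numpy as np

def hurst_rs(x, window_sizes=(8, 16, 32, 64, 128)):
    x = np.asarray(x, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            dev = np.cumsum(w - w.mean())     # cumulative deviations
            r = dev.max() - dev.min()         # range of the deviations
            s = w.std(ddof=1)                 # window standard deviation
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(log_n, log_rs, 1)   # H is the log-log slope
    return slope

returns = np.random.default_rng(0).normal(size=2048)
print("Hurst exponent:", round(hurst_rs(returns), 3))
```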

  20. 4K Video Traffic Prediction using Seasonal Autoregressive Modeling

    Directory of Open Access Journals (Sweden)

    D. R. Marković

    2017-06-01

    Full Text Available From the perspective of the average viewer, high definition video streams such as HD (High Definition) and UHD (Ultra HD) are increasing their internet presence year over year. This is not surprising, given the expansion of HD streaming services such as YouTube, Netflix, etc. High definition video streams are therefore starting to challenge network resource allocation with their bandwidth requirements and statistical characteristics. Analysis and modeling of this demanding video traffic are essential for better quality-of-service and quality-of-experience support. In this paper we use an easy-to-apply statistical model for the prediction of 4K video traffic. Namely, seasonal autoregressive modeling is applied to the prediction of 4K video traffic encoded with HEVC (High Efficiency Video Coding). Analysis and modeling were performed within the R programming environment using over 17,000 high definition video frames. It is shown that the proposed methodology provides good accuracy in high definition video traffic modeling.
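
    Although the paper works in R, the same seasonal autoregressive idea is easy to sketch in Python with statsmodels' SARIMAX. The series, the 12-frame season, and the (p,d,q)(P,D,Q,s) orders below are placeholders, not the orders fitted to the HEVC traces in the paper.

```python
# Seasonal AR sketch with statsmodels' SARIMAX on a synthetic frame-size
# series; orders and seasonality are assumptions, not the paper's fit.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
n, season = 600, 12                       # e.g. a 12-frame GOP pattern
t = np.arange(n)
frames = 50 + 10 * np.sin(2 * np.pi * t / season) + rng.normal(0, 2, n)

model = SARIMAX(frames[:500], order=(1, 0, 0),
                seasonal_order=(1, 0, 0, season))
res = model.fit(disp=False)
forecast = res.forecast(steps=100)
rmse = np.sqrt(np.mean((frames[500:] - forecast) ** 2))
print("100-step forecast RMSE:", round(rmse, 2))
```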

  1. Efficient Calibration of Distributed Catchment Models Using Perceptual Understanding and Hydrologic Signatures

    Science.gov (United States)

    Hutton, C.; Wagener, T.; Freer, J. E.; Duffy, C.; Han, D.

    2015-12-01

    Distributed models offer the potential to resolve catchment systems in more detail, and therefore simulate the hydrological impacts of spatial changes in catchment forcing (e.g. landscape change). Such models may contain a large number of model parameters which are computationally expensive to calibrate. Even when calibration is possible, insufficient data can result in model parameter and structural equifinality. In order to help reduce the space of feasible models and supplement traditional outlet discharge calibration data, semi-quantitative information (e.g. knowledge of relative groundwater levels), may also be used to identify behavioural models when applied to constrain spatially distributed predictions of states and fluxes. The challenge is to combine these different sources of information together to identify a behavioural region of state-space, and efficiently search a large, complex parameter space to identify behavioural parameter sets that produce predictions that fall within this behavioural region. Here we present a methodology to incorporate different sources of data to efficiently calibrate distributed catchment models. Metrics of model performance may be derived from multiple sources of data (e.g. perceptual understanding and measured or regionalised hydrologic signatures). For each metric, an interval or inequality is used to define the behaviour of the catchment system, accounting for data uncertainties. These intervals are then combined to produce a hyper-volume in state space. The state space is then recast as a multi-objective optimisation problem, and the Borg MOEA is applied to first find, and then populate the hyper-volume, thereby identifying acceptable model parameter sets. We apply the methodology to calibrate the PIHM model at Plynlimon, UK by incorporating perceptual and hydrologic data into the calibration problem. Furthermore, we explore how to improve calibration efficiency through search initialisation from shorter model runs.

  2. Medium- and Long-term Prediction of LOD Change by the Leap-step Autoregressive Model

    Science.gov (United States)

    Wang, Qijie

    2015-08-01

    The accuracy of medium- and long-term prediction of length-of-day (LOD) change based on the combined least-squares and autoregressive (LS+AR) model deteriorates gradually. The leap-step autoregressive (LSAR) model can significantly reduce the edge effect of the observation sequence; in particular, it greatly improves the resolution of the signal's low-frequency components and can therefore improve prediction performance. In this work, LSAR is used to forecast LOD change. The LOD series from EOP 08 C04 provided by the IERS is modeled by both the LSAR and AR models, and the results of the two models are analyzed and compared. When the prediction length is between 10 and 30 days, the accuracy improvement is less than 10%. When the prediction length exceeds 30 days, the accuracy improves markedly, with a maximum improvement of around 19%. The results show that the LSAR model has higher prediction accuracy and stability in medium- and long-term prediction.
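
    The conventional AR baseline against which LSAR is compared can be sketched with statsmodels. The snippet below fits an AR model to a synthetic LOD-residual-like series and forecasts 30 days ahead; the leap-step lag sampling that defines LSAR is not reproduced, and the series and lag order are invented.

```python
# Baseline AR forecast of a synthetic LOD-residual-like series; the
# leap-step (LSAR) lag structure itself is not implemented here.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(0)
n = 1000
t = np.arange(n)
# Stand-in residuals: slow oscillation plus noise (as if LS removed trend/tides)
lod = 0.3 * np.sin(2 * np.pi * t / 180) + 0.05 * rng.normal(size=n)

train, test = lod[:970], lod[970:]
res = AutoReg(train, lags=30).fit()
pred = res.predict(start=len(train), end=len(train) + len(test) - 1)
print("30-day forecast RMSE:", np.sqrt(np.mean((test - pred) ** 2)))
```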

  3. An analytical model for droplet separation in vane separators and measurements of grade efficiency and pressure drop

    International Nuclear Information System (INIS)

    Koopman, Hans K.; Köksoy, Çağatay; Ertunç, Özgür; Lienhart, Hermann; Hedwig, Heinz; Delgado, Antonio

    2014-01-01

    Highlights: • An analytical model for efficiency is extended with additional geometrical features. • A simplified and a novel vane separator design are investigated experimentally. • Experimental results are significantly affected by re-entrainment effects. • Outlet droplet size spectra are accurately predicted by the model. • The improved grade efficiency doubles the pressure drop. - Abstract: This study investigates the predictive power of analytical models for the droplet separation efficiency of vane separators and compares experimental results of two different vane separator geometries. The ability to predict the separation efficiency of vane separators simplifies their design process, especially when analytical research allows the identification of the most important physical and geometrical parameters and can quantify their contribution. In this paper, an extension of a classical analytical model for separation efficiency is proposed that accounts for the contributions provided by straight wall sections. The extension of the analytical model is benchmarked against experiments performed by Leber (2003) on a single-stage straight vane separator. The model is in very reasonable agreement with the experimental values. Results from the analytical model are also compared with experiments performed on a vane separator of simplified geometry (VS-1). The experimental separation efficiencies, computed from the measured liquid mass balances, are significantly below the model predictions, which lie arbitrarily close to unity. This difference is attributed to re-entrainment through film detachment from the last stage of the vane separators. After adjustment for re-entrainment effects, by applying a cut-off filter to the outlet droplet size spectra, the experimental and theoretical outlet Sauter mean diameters show very good agreement. A novel vane separator geometry of patented design (VS-2) is also investigated, comparing experimental results with VS-1.

  4. Wake-Model Effects on Induced Drag Prediction of Staggered Boxwings

    Directory of Open Access Journals (Sweden)

    Julian Schirra

    2018-01-01

    Full Text Available For staggered boxwings the predictions of induced drag that rely on common potential-flow methods can be of limited accuracy. For example, linear, freestream-fixed wake models cannot resolve effects related to wake deflection and roll-up, which can have significant effects on the induced drag projection of these systems. The present work investigates the principal impact of wake modelling on the accuracy of induced drag prediction of boxwings with stagger. The study compares induced drag predictions of a higher-order potential-flow method that uses fixed and relaxed wake models, and of an Euler-flow method. Positive-staggered systems at positive angles of attack are found to be particularly prone to higher-order wake effects due to vertical contraction of wake trajectories, which results in smaller effective height-to-span ratios than with negative stagger and thus in closer interactions between trailing wakes and lifting surfaces. Therefore, when trying to predict the induced drag of positive-staggered boxwings, only a potential-flow method with a fully relaxed wake model will provide the high degree of accuracy that rivals that of an Euler method while being computationally significantly more efficient.

  5. Prediction models and control algorithms for predictive applications of setback temperature in cooling systems

    International Nuclear Information System (INIS)

    Moon, Jin Woo; Yoon, Younju; Jeon, Young-Hoon; Kim, Sooyoung

    2017-01-01

    Highlights: • An initial ANN model was developed for predicting the time to reach the setback temperature. • The initial model was optimized to produce accurate output. • The optimized model proved its prediction accuracy. • ANN-based algorithms were developed and their performance tested. • The ANN-based algorithms provided superior thermal comfort or energy efficiency. - Abstract: In this study, a temperature control algorithm was developed to apply a setback temperature predictively for the cooling system of a residential building during periods occupied by residents. An artificial neural network (ANN) model was developed to determine the time required for increasing the current indoor temperature to the setback temperature. This study involved three phases: development of the initial ANN-based prediction model, optimization and testing of the initial model, and development and testing of three control algorithms. The development and performance testing of the model and algorithms were conducted using TRNSYS and MATLAB. Through the development and optimization process, the final ANN model employed the indoor temperature and the difference between the current and target setback temperatures as its two input neurons. The optimal number of hidden layers, number of neurons, learning rate, and momentum were determined to be 4, 9, 0.6, and 0.9, respectively. The tangent-sigmoid and pure-linear transfer functions were used in the hidden and output neurons, respectively. The ANN model used 100 training data sets managed with a sliding-window method, and the Levenberg-Marquardt training method was employed for model training. The optimized model showed a root mean square error of 0.9097 when compared with the simulated results. Employing the ANN model, the ANN-based algorithms maintained indoor temperatures within the target ranges better than the conventional algorithm and reduced the duration of time in which the indoor temperature ...

  6. Limits of Risk Predictability in a Cascading Alternating Renewal Process Model.

    Science.gov (United States)

    Lin, Xin; Moussawi, Alaa; Korniss, Gyorgy; Bakdash, Jonathan Z; Szymanski, Boleslaw K

    2017-07-27

    Most risk analysis models systematically underestimate the probability and impact of catastrophic events (e.g., economic crises, natural disasters, and terrorism) by not taking into account interconnectivity and interdependence of risks. To address this weakness, we propose the Cascading Alternating Renewal Process (CARP) to forecast interconnected global risks. However, assessments of the model's prediction precision are limited by lack of sufficient ground truth data. Here, we establish prediction precision as a function of input data size by using alternative long ground truth data generated by simulations of the CARP model with known parameters. We illustrate the approach on a model of fires in artificial cities assembled from basic city blocks with diverse housing. The results confirm that parameter recovery variance exhibits power law decay as a function of the length of available ground truth data. Using CARP, we also demonstrate estimation using a disparate dataset that also has dependencies: real-world prediction precision for the global risk model based on the World Economic Forum Global Risk Report. We conclude that the CARP model is an efficient method for predicting catastrophic cascading events with potential applications to emerging local and global interconnected risks.

  7. Building predictive models of soil particle-size distribution

    Directory of Open Access Journals (Sweden)

    Alessandro Samuel-Rosa

    2013-04-01

    Full Text Available Is it possible to build predictive models (PMs) of soil particle-size distribution (psd) in a region with complex geology and a young and unstable land-surface? The main objective of this study was to answer this question. A set of 339 soil samples from a small slope catchment in Southern Brazil was used to build PMs of psd in the surface soil layer. Multiple linear regression models were constructed using terrain attributes (elevation, slope, catchment area, convergence index, and topographic wetness index). The PMs explained more than half of the data variance. This performance is similar to (or even better than) that of the conventional soil mapping approach. For some size fractions, the PM performance can reach 70%. The largest uncertainties were observed in geologically more complex areas. Therefore, significant improvements in the predictions can only be achieved if accurate geological data is made available. Meanwhile, PMs built on terrain attributes are efficient in predicting the particle-size distribution (psd) of soils in regions of complex geology.
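
    The modelling step above is a plain multiple linear regression of a size fraction on the five terrain attributes. The sketch below mirrors that setup with scikit-learn on synthetic stand-in data; the attribute ranges and the clay-content relationship are invented, not the Brazilian catchment data.

```python
# Multiple linear regression of a particle-size fraction on terrain
# attributes; data are synthetic stand-ins for the study's 339 samples.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 339
terrain = np.column_stack([
    rng.uniform(100, 500, n),   # elevation (m)
    rng.uniform(0, 30, n),      # slope (degrees)
    rng.uniform(1e2, 1e5, n),   # catchment area (m^2)
    rng.uniform(-1, 1, n),      # convergence index
    rng.uniform(2, 12, n),      # topographic wetness index
])
# Assumed clay content loosely tied to wetness and slope
clay = 15 + 1.5 * terrain[:, 4] - 0.3 * terrain[:, 1] + rng.normal(0, 3, n)

r2 = cross_val_score(LinearRegression(), terrain, clay,
                     cv=10, scoring="r2").mean()
print("mean cross-validated R^2:", round(r2, 2))
```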

  8. A system identification approach for developing model predictive controllers of antibody quality attributes in cell culture processes.

    Science.gov (United States)

    Downey, Brandon; Schmitt, John; Beller, Justin; Russell, Brian; Quach, Anthony; Hermann, Elizabeth; Lyon, David; Breit, Jeffrey

    2017-11-01

    As the biopharmaceutical industry evolves to include more diverse protein formats and processes, more robust control of Critical Quality Attributes (CQAs) is needed to maintain processing flexibility without compromising quality. Active control of CQAs has been demonstrated using model predictive control techniques, which allow development of processes which are robust against disturbances associated with raw material variability and other potentially flexible operating conditions. Wide adoption of model predictive control in biopharmaceutical cell culture processes has been hampered, however, in part due to the large amount of data and expertise required to make a predictive model of controlled CQAs, a requirement for model predictive control. Here we developed a highly automated perfusion apparatus to systematically and efficiently generate predictive models using system identification approaches. We successfully created a predictive model of %galactosylation using data obtained by manipulating galactose concentration in the perfusion apparatus in serialized step-change experiments. We then demonstrated the use of the model in a model predictive controller in a simulated control scenario to successfully achieve a %galactosylation set point in a simulated fed-batch culture. The automated model identification approach demonstrated here can potentially be generalized to many CQAs, and could be a more efficient, faster, and highly automated alternative to batch experiments for developing predictive models in cell culture processes, allowing the wider adoption of model predictive control in biopharmaceutical processes. © 2017 The Authors Biotechnology Progress published by Wiley Periodicals, Inc. on behalf of American Institute of Chemical Engineers Biotechnol. Prog., 33:1647-1661, 2017.

  9. On a turbulent wall model to predict hemolysis numerically in medical devices

    Science.gov (United States)

    Lee, Seunghun; Chang, Minwook; Kang, Seongwon; Hur, Nahmkeon; Kim, Wonjung

    2017-11-01

    Analyzing the degradation of red blood cells is very important for medical devices with blood flows. Blood shear stress has been recognized as the most dominant factor for hemolysis in medical devices. Compared to laminar flows, turbulent flows have higher shear stress values in the regions near the wall; when predicting hemolysis numerically, this can require a very fine mesh and large computational resources. To resolve this issue, the purpose of this study is to develop a turbulent wall model that predicts hemolysis more efficiently. In order to decrease the numerical error of hemolysis prediction at coarse grid resolution, we divided the computational domain into two regions and applied a different approach to each. In the near-wall region, with its steep velocity gradient, an analytic approach using a modeled velocity profile is applied to reduce the numerical error and allow a coarse grid resolution; we adopt the Van Driest law as the model for the mean velocity profile. In the region far from the wall, a regular numerical discretization is applied. The proposed turbulent wall model is evaluated for several turbulent flows inside a cannula and centrifugal pumps. The results show that the proposed turbulent wall model improves the computational efficiency of hemolysis prediction significantly for engineering applications.

  10. Predictive modeling of complications.

    Science.gov (United States)

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.

  11. Prediction of greenhouse gas reduction potential in Japanese residential sector by residential energy end-use model

    International Nuclear Information System (INIS)

    Shimoda, Yoshiyuki; Yamaguchi, Yukio; Okamura, Tomo; Taniguchi, Ayako; Yamaguchi, Yohei

    2010-01-01

    A model is developed that simulates the nationwide energy consumption of the residential sector by considering the diversity of household and building types. Since this model can simulate the energy consumption of each household and building category through dynamic energy use based on occupants' activity schedules and a heating and cooling load calculation model, various kinds of energy-saving policies can be evaluated with considerable accuracy. In addition, the average energy efficiency of major electric appliances used in the residential sector and the percentages of existing houses at each insulation level are predicted by a 'stock transition model.' In this paper, energy consumption and CO 2 emissions in the Japanese residential sector until 2025 are predicted. For example, in the business-as-usual (BAU) case, CO 2 emissions will be reduced by 7% from the 1990 level. Also evaluated are mitigation measures such as energy efficiency standards for home electric appliances, the thermal insulation code, reduction of standby power, high-efficiency water heaters, energy-efficient behavior of occupants, and dissemination of photovoltaic panels.

  12. DASPfind: new efficient method to predict drug–target interactions

    KAUST Repository

    Ba Alawi, Wail; Soufan, Othman; Essack, Magbubah; Kalnis, Panos; Bajic, Vladimir B.

    2016-01-01

    DASPfind is a computational method for finding reliable new interactions between drugs and proteins. We show over six different DTI datasets that DASPfind outperforms other state-of-the-art methods when the single top-ranked predictions are considered, or when a drug with no known targets or with few known targets is considered. We illustrate the usefulness and practicality of DASPfind by predicting novel DTIs for the Ion Channel dataset. The validated predictions suggest that DASPfind can be used as an efficient method to identify correct DTIs, thus reducing the cost of necessary experimental verifications in the process of drug discovery. DASPfind can be accessed online at: http://www.cbrc.kaust.edu.sa/daspfind.

  13. Traffic Flow Prediction Model for Large-Scale Road Network Based on Cloud Computing

    Directory of Open Access Journals (Sweden)

    Zhaosheng Yang

    2014-01-01

    Full Text Available To increase the efficiency and precision of large-scale road network traffic flow prediction, a genetic algorithm-support vector machine (GA-SVM) model based on cloud computing is proposed in this paper, building on an analysis of the characteristics and shortcomings of the genetic algorithm and the support vector machine. In the cloud computing environment, SVM parameters are first optimized by a parallel genetic algorithm, and this optimized parallel SVM model is then used to predict traffic flow. Using traffic flow data from Haizhu District in Guangzhou City, the proposed model was verified and compared with the serial GA-SVM model and a parallel GA-SVM model based on MPI (message passing interface). The results demonstrate that the parallel GA-SVM model based on cloud computing has higher prediction accuracy, shorter running time, and higher speedup.
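
    The serial core of the GA-SVM idea, a genetic algorithm searching over SVM hyperparameters, can be sketched compactly. The toy GA below tunes an SVR's C and gamma by selection, crossover and mutation on synthetic data; the cloud/MPI parallelisation, population settings, and data are all assumptions for illustration.

```python
# Toy genetic algorithm tuning SVR hyperparameters (C, gamma); a serial
# sketch of the GA-SVM idea, not the paper's parallel implementation.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(300, 4))            # stand-in traffic features
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=300)

def fitness(genes):                              # genes = (log10 C, log10 gamma)
    svr = SVR(C=10.0 ** genes[0], gamma=10.0 ** genes[1])
    return cross_val_score(svr, X, y, cv=3, scoring="r2").mean()

pop = rng.uniform([-1, -3], [3, 1], size=(12, 2))        # initial population
for generation in range(10):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-6:]]               # keep the best half
    pairs = parents[rng.integers(0, 6, size=(6, 2))]     # crossover: average two parents
    children = pairs.mean(axis=1) + rng.normal(0, 0.2, (6, 2))  # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(g) for g in pop])]
print(f"best C=10^{best[0]:.2f}, gamma=10^{best[1]:.2f}")
```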

  14. Genomic value prediction for quantitative traits under the epistatic model

    Directory of Open Access Journals (Sweden)

    Xu Shizhong

    2011-01-01

    Full Text Available Abstract Background Most quantitative traits are controlled by multiple quantitative trait loci (QTL). The contribution of each locus may be negligible but the collective contribution of all loci is usually significant. Genome selection, which uses markers of the entire genome to predict the genomic values of individual plants or animals, can be more efficient than selection on phenotypic values and pedigree information alone for genetic improvement. When a quantitative trait is influenced by epistatic effects, using all markers (main effects) and marker pairs (epistatic effects) to predict the genomic values of plants can achieve the maximum efficiency for genetic improvement. Results In this study, we created 126 recombinant inbred lines of soybean and genotyped 80 markers across the genome. We applied the genome selection technique to predict the genomic value of somatic embryo number (a quantitative trait) for each line. Cross-validation analysis showed that the squared correlation coefficient between the observed and predicted embryo numbers was 0.33 when only main (additive) effects were used for prediction. When the interaction (epistatic) effects were also included in the model, the squared correlation coefficient reached 0.78. Conclusions This study provides an excellent example of the application of genome selection to plant breeding.
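
    The additive-versus-epistatic comparison can be sketched with a penalized regression: one model uses only the marker main effects, the other adds all pairwise marker products. The snippet below uses ridge regression on simulated genotypes as a stand-in; the paper's Bayesian machinery and soybean data are not reproduced.

```python
# Genomic prediction sketch: additive-only vs. additive + pairwise
# epistatic features, via ridge regression on simulated genotypes.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
n_lines, n_markers = 126, 80
X = rng.integers(0, 2, size=(n_lines, n_markers)).astype(float)  # inbred 0/1 genotypes
# Simulated trait: one additive QTL plus one epistatic marker pair
y = 2.0 * X[:, 3] + 3.0 * X[:, 10] * X[:, 25] + rng.normal(0, 1, n_lines)

epi = PolynomialFeatures(degree=2, interaction_only=True,
                         include_bias=False).fit_transform(X)
for name, Z in [("additive only", X), ("additive + epistatic", epi)]:
    r2 = cross_val_score(Ridge(alpha=10.0), Z, y, cv=5, scoring="r2").mean()
    print(f"{name}: CV R^2 = {r2:.2f}")
```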

  15. Improving actuation efficiency through variable recruitment hydraulic McKibben muscles: modeling, orderly recruitment control, and experiments.

    Science.gov (United States)

    Meller, Michael; Chipka, Jordan; Volkov, Alexander; Bryant, Matthew; Garcia, Ephrahim

    2016-11-03

    Hydraulic control systems have become increasingly popular as the means of actuation for human-scale legged robots and assistive devices. One of the biggest limitations to these systems is their run time untethered from a power source. One way to increase endurance is by improving actuation efficiency. We investigate reducing servovalve throttling losses by using a selective recruitment artificial muscle bundle comprised of three motor units. Each motor unit is made up of a pair of hydraulic McKibben muscles connected to one servovalve. The pressure and recruitment state of the artificial muscle bundle can be adjusted to match the load in an efficient manner, much like the firing rate and total number of recruited motor units are adjusted in skeletal muscle. A volume-based effective initial braid angle is used in the model of each recruitment level. This semi-empirical model is utilized to predict the efficiency gains of the proposed variable recruitment actuation scheme versus a throttling-only approach. A real-time orderly recruitment controller with pressure-based thresholds is developed. This controller is used to experimentally validate the model-predicted efficiency gains of recruitment on a robot arm. The results show that utilizing variable recruitment allows for much higher efficiencies over a broader operating envelope.

  16. Scaling predictive modeling in drug development with cloud computing.

    Science.gov (United States)

    Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola

    2015-01-26

    Growing data sets with increased time for analysis is hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compare with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investments makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.

  17. Assessing the predictive capability of randomized tree-based ensembles in streamflow modelling

    Science.gov (United States)

    Galelli, S.; Castelletti, A.

    2013-07-01

    Combining randomization methods with ensemble prediction is emerging as an effective option to balance accuracy and computational efficiency in data-driven modelling. In this paper, we investigate the prediction capability of extremely randomized trees (Extra-Trees), in terms of accuracy, explanation ability and computational efficiency, in a streamflow modelling exercise. Extra-Trees are a totally randomized tree-based ensemble method that (i) alleviates the poor generalisation property and tendency to overfitting of traditional standalone decision trees (e.g. CART); (ii) is computationally efficient; and (iii) allows one to infer the relative importance of the input variables, which might help in the ex-post physical interpretation of the model. The Extra-Trees potential is analysed on two real-world case studies - Marina catchment (Singapore) and Canning River (Western Australia) - representing two different morphoclimatic contexts. The evaluation is performed against other tree-based methods (CART and M5) and parametric data-driven approaches (ANNs and multiple linear regression). Results show that Extra-Trees perform comparably to the best of the benchmarks (i.e. M5) in both watersheds, while outperforming the other approaches in terms of computational requirements when applied to large datasets. In addition, the ranking of the input variables that the method provides can be given a physically meaningful interpretation.
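
    Extra-Trees and their variable importances are available off the shelf in scikit-learn. The sketch below fits the ensemble to a synthetic rainfall-runoff-like dataset and prints the importance ranking; the features and the flow relationship are invented, not the Marina or Canning data.

```python
# Extra-Trees on a synthetic rainfall-runoff-like dataset, with the
# variable-importance ranking the abstract refers to.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1500
rain_t = rng.gamma(2, 2, n)          # rainfall at time t
rain_lag = np.roll(rain_t, 1)        # rainfall at t-1
temp = rng.normal(25, 4, n)          # temperature (weakly relevant here)
flow = 0.6 * rain_t + 0.3 * rain_lag + 0.02 * temp + rng.normal(0, 0.3, n)

X = np.column_stack([rain_t, rain_lag, temp])
model = ExtraTreesRegressor(n_estimators=300, random_state=0)
print("CV R^2:", cross_val_score(model, X, flow, cv=5, scoring="r2").mean())
model.fit(X, flow)
for name, imp in zip(["rain(t)", "rain(t-1)", "temp"],
                     model.feature_importances_):
    print(f"{name}: importance {imp:.2f}")
```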

  18. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...... the possibilities w.r.t. different numerical weather predictions actually available to the project....

  19. An Efficient Deterministic Approach to Model-based Prediction Uncertainty

    Data.gov (United States)

    National Aeronautics and Space Administration — Prognostics deals with the prediction of the end of life (EOL) of a system. EOL is a random variable, due to the presence of process noise and uncertainty in the...

  20. Bayesian based Prognostic Model for Predictive Maintenance of Offshore Wind Farms

    DEFF Research Database (Denmark)

    Asgarpour, Masoud

    2017-01-01

    The operation and maintenance costs of offshore wind farms can be significantly reduced if existing corrective actions are performed as efficiently as possible and if future corrective actions are avoided by performing sufficient preventive actions. In this paper a prognostic model for degradation monitoring, fault detection and predictive maintenance of offshore wind components is defined. The diagnostic model defined in this paper is based on degradation, remaining useful lifetime and hybrid inspection threshold models. The defined degradation model is based on an exponential distribution...

  1. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
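    A minimal sketch of the kind of stochastic simulation described is given below, assuming an AR(1) driver for hour-to-hour correlation and a Weibull marginal for the speed distribution; the shape, scale and autocorrelation values are illustrative, not the Goldstone statistics.

```python
import numpy as np
from scipy import stats

# Sketch: an AR(1) process supplies hourly correlation, and a probability
# transform imposes a Weibull wind-speed distribution (values illustrative).
rng = np.random.default_rng(1)
n_hours, rho = 24 * 365, 0.9              # assumed lag-1 autocorrelation
shape_k, scale_c = 2.0, 7.0               # assumed Weibull shape and scale (m/s)

z = np.empty(n_hours)
z[0] = rng.standard_normal()
for t in range(1, n_hours):               # correlated standard-normal driver
    z[t] = rho * z[t - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()

u = stats.norm.cdf(z)                     # map to uniform, keeping rank correlation
speeds = stats.weibull_min.ppf(u, c=shape_k, scale=scale_c)  # Weibull marginal
print(speeds.mean(), speeds.std())
```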

  2. Feed Forward Artificial Neural Network Model to Estimate the TPH Removal Efficiency in Soil Washing Process

    Directory of Open Access Journals (Sweden)

    Hossein Jafari Mansoorian

    2017-01-01

    Background & Aims of the Study: A feed-forward artificial neural network (FFANN) was developed to predict the efficiency of total petroleum hydrocarbon (TPH) removal from a contaminated soil, using a soil washing process with Tween 80. The main objective of this study was to assess the performance of the developed FFANN model for the estimation of TPH removal. Materials and Methods: Several independent regressors, including pH, shaking speed, surfactant concentration and contact time, were used to describe the removal of TPH as a dependent variable in the FFANN model. Approximately 85% of the data set observations were used for training the model and the remaining 15% for testing. The performance of the model was compared with linear regression and assessed using the Root Mean Square Error (RMSE) as the goodness-of-fit measure. Results: For the prediction of TPH removal efficiency, an FFANN model with a three-layer 4-3-1 structure and a learning rate of 0.01 showed the best predictive results. The RMSE and R2 for the training and testing steps of the model were 2.596, 0.966, 10.70 and 0.78, respectively. Conclusion: About 80% of the TPH removal efficiency can be described by the assessed regressors in the developed model. Thus, focusing on the optimization of the soil washing process with regard to shaking speed, contact time, surfactant concentration and pH can improve TPH removal performance from polluted soils. The results of this study could be the basis for the application of FFANN to the assessment of soil washing processes and the control of petroleum hydrocarbon emissions into the environment.
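    As an illustration of the 4-3-1 feed-forward topology, the sketch below fits scikit-learn's MLPRegressor with one hidden layer of three neurons; the synthetic data, response surface and training settings are assumptions, not the study's data set.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the four regressors (pH, shaking speed, surfactant
# concentration, contact time) and the TPH removal efficiency target (%).
rng = np.random.default_rng(0)
X = rng.uniform([4, 100, 0.1, 10], [9, 300, 2.0, 120], size=(200, 4))
y = 20 + 8 * X[:, 2] + 0.1 * X[:, 3] + rng.normal(0, 2, 200)   # toy response

X_std = StandardScaler().fit_transform(X)
# One hidden layer of 3 neurons gives a 4-3-1 topology; learning rate 0.01.
net = MLPRegressor(hidden_layer_sizes=(3,), learning_rate_init=0.01,
                   max_iter=5000, random_state=0).fit(X_std, y)
rmse = np.sqrt(np.mean((net.predict(X_std) - y) ** 2))
print(f"training RMSE: {rmse:.3f}")
```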

  3. Archaeological predictive model set.

    Science.gov (United States)

    2015-03-01

    This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to : develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...

  4. A resource allocation model to support efficient air quality management in South Africa

    Directory of Open Access Journals (Sweden)

    U Govender

    2009-06-01

    Research into management interventions that create the required enabling environment for growth and development in South Africa is both timely and appropriate. In the research reported in this paper, the authors investigated the level of efficiency of the Air Quality Units within the three spheres of government, viz. the National, Provincial, and Local Departments of Environmental Management in South Africa, with a view to developing a resource allocation model. The inputs to the model were calculated from the actual man-hours spent on twelve selected activities relating to project management, knowledge management and change management. The outputs assessed were aligned to the requirements of the mandates of these Departments. Several models were explored using multiple regression and stepwise techniques, and the model that best explained the efficiency of the organisations from the input data was selected. Logistic regression analysis was identified as the most appropriate tool. This model is used to predict the required resources per Air Quality Unit in the different spheres of government in an attempt to support and empower the air quality regime to achieve improved output efficiency.

  5. A Pareto-optimal moving average multigene genetic programming model for daily streamflow prediction

    Science.gov (United States)

    Danandeh Mehr, Ali; Kahya, Ercan

    2017-06-01

    Genetic programming (GP) is able to systematically explore alternative model structures of different accuracy and complexity from observed input and output data. The effectiveness of GP in hydrological system identification has been recognized in recent studies. However, selecting a parsimonious (accurate and simple) model from such alternatives still remains a question. This paper proposes a Pareto-optimal moving average multigene genetic programming (MA-MGGP) approach to develop a parsimonious model for single-station streamflow prediction. The three main components of the approach that take us from observed data to a validated model are: (1) data pre-processing, (2) system identification and (3) system simplification. The data pre-processing ingredient uses a simple moving average filter to diminish the lagged prediction effect of stand-alone data-driven models. The multigene ingredient of the model tends to identify the underlying nonlinear system with expressions simpler than those of classical monolithic GP and, finally, the simplification component exploits a Pareto front plot to select a parsimonious model through an interactive complexity-efficiency trade-off. The approach was tested using the daily streamflow records from a station on Senoz Stream, Turkey. Compared with the efficiency results of stand-alone GP, MGGP and conventional multiple linear regression prediction models as benchmarks, the proposed Pareto-optimal MA-MGGP model put forward a parsimonious solution of noteworthy practical value. In addition, the approach allows the user to bring human insight into the problem, to examine evolved models and to pick the best-performing programs out for further analysis.
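    The moving-average pre-filter is straightforward to reproduce; a minimal sketch follows, with an illustrative window length rather than the one tuned in the study.

```python
import numpy as np

def moving_average(flow, window=3):
    """Simple moving-average pre-filter to damp the lagged-prediction effect
    of stand-alone data-driven streamflow models (window is illustrative)."""
    kernel = np.ones(window) / window
    # 'valid' avoids edge values computed from incomplete windows
    return np.convolve(flow, kernel, mode="valid")

daily_flow = np.array([5.1, 4.8, 6.3, 9.7, 8.2, 7.0, 6.4])   # toy record, m3/s
print(moving_average(daily_flow))
```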

  6. Fast integration-based prediction bands for ordinary differential equation models.

    Science.gov (United States)

    Hass, Helge; Kreutz, Clemens; Timmer, Jens; Kaschek, Daniel

    2016-04-15

    To gain a deeper understanding of biological processes and their relevance in disease, mathematical models are built upon experimental data. Uncertainty in the data leads to uncertainties in the model's parameters and in turn to uncertainties in predictions. Mechanistic dynamic models of biochemical networks are frequently based on nonlinear differential equation systems and feature a large number of parameters, sparse observations of the model components and lack of information in the available data. Due to the curse of dimensionality, classical and sampling approaches for propagating parameter uncertainties to predictions are hardly feasible and insufficient. However, for experimental design and to discriminate between competing models, prediction and confidence bands are essential. To circumvent the hurdles of the former methods, an approach to calculate a profile likelihood on arbitrary observations for a specific time point has been introduced, which provides accurate confidence and prediction intervals for nonlinear models and is computationally feasible for high-dimensional models. In this article, reliable and smooth point-wise prediction and confidence bands to assess the model's uncertainty on the whole time-course are achieved via explicit integration with elaborate correction mechanisms. The corresponding system of ordinary differential equations is derived and tested on three established models for cellular signalling. An efficiency analysis is performed to illustrate the computational benefit compared with repeated profile likelihood calculations at multiple time points. The integration framework and the examples used in this article are provided with the software package Data2Dynamics, which is based on MATLAB and freely available at http://www.data2dynamics.org. Contact: helge.hass@fdm.uni-freiburg.de. Supplementary data are available at Bioinformatics online.

  7. Modeling and prediction of human word search behavior in interactive machine translation

    Science.gov (United States)

    Ji, Duo; Yu, Bai; Ma, Bin; Ye, Na

    2017-12-01

    As a computer-aided translation method, interactive machine translation reduces the repetitive and mechanical operations of manual translation through a variety of methods, improving translation efficiency, and it plays an important role in practical translation work. In this paper, we take users' frequent word-searching behavior during the translation process as the research object and recast it as a translation selection problem under the current translation. We present a prediction model that jointly exploits an alignment model, a translation model and a language model of the word-searching behavior. The model achieves highly accurate prediction of word-searching behavior and reduces the switching between mouse and keyboard operations in the users' translation process.

  8. Global parameterization and validation of a two-leaf light use efficiency model for predicting gross primary production across FLUXNET sites

    DEFF Research Database (Denmark)

    Zhou, Yanlian; Wu, Xiaocui; Ju, Weimin

    2015-01-01

    Light use efficiency (LUE) models are widely used to simulate gross primary production (GPP). However, the treatment of the plant canopy as a big leaf by these models can introduce large uncertainties in simulated GPP. Recently, a two-leaf light use efficiency (TL-LUE) model was developed...... to simulate GPP separately for sunlit and shaded leaves and has been shown to outperform the big-leaf MOD17 model at six FLUX sites in China. In this study we investigated the performance of the TL-LUE model for a wider range of biomes. For this we optimized the parameters and tested the TL-LUE model using...... data from 98 FLUXNET sites which are distributed across the globe. The results showed that the TL-LUE model performed in general better than the MOD17 model in simulating 8 day GPP. Optimized maximum light use efficiency of shaded leaves (epsilon(msh)) was 2.63 to 4.59 times that of sunlit leaves...

  9. Cell survival in carbon beams - comparison of amorphous track model predictions

    DEFF Research Database (Denmark)

    Grzanka, L.; Greilich, S.; Korcyl, M.

    Introduction: Predictions of the radiobiological effectiveness (RBE) play an essential role in treatment planning with heavy charged particles. Amorphous track models ([1], [2], also referred to as track structure models) currently provide the most suitable description of cell survival under ion.... In addition, a new approach based on microdosimetric distributions is presented and investigated [3]. Material and methods: A suitable software library embracing the mentioned amorphous track models, including numerous submodels with respect to delta-electron range models, radial dose... Reference fragments: Amorphous track modelling of luminescence detector efficiency in proton and carbon beams. 4. Tsuruoka C, Suzuki M, Kanai T, et al. LET and ion species dependence for cell killing in normal human skin fibroblasts. Radiat Res. 2005;163:494-500.

  10. Novel Intermode Prediction Algorithm for High Efficiency Video Coding Encoder

    Directory of Open Access Journals (Sweden)

    Chan-seob Park

    2014-01-01

    The Joint Collaborative Team on Video Coding (JCT-VC) is developing the next-generation video coding standard, called High Efficiency Video Coding (HEVC). In HEVC, there are three units in the block structure: the coding unit (CU), the prediction unit (PU), and the transform unit (TU). The CU is the basic unit of region splitting, like the macroblock (MB). Each CU is recursively split into four equally sized blocks, starting from the tree block. In this paper, we propose a fast CU depth decision algorithm for HEVC to reduce its computational complexity. In the 2N×2N PU, the proposed method compares the rate-distortion (RD) costs and determines the depth using the compared information. Moreover, to speed up the encoding, an efficient merge SKIP detection method is additionally developed based on the contextual mode information of neighboring CUs. Experimental results show that the proposed algorithm achieves an average time saving of 44.84% in the random access (RA) Main profile configuration with the HEVC test model (HM) 10.0 reference software. Compared to the HM 10.0 encoder, only a small BD-bitrate loss of 0.17% is observed, without significant loss of image quality.

  11. Determination of a suitable set of loss models for centrifugal compressor performance prediction

    Directory of Open Access Journals (Sweden)

    Elkin I. GUTIÉRREZ VELÁSQUEZ

    2017-10-01

    Performance prediction in the preliminary design stages of turbomachinery components is a critical task for bringing the design process of these devices to a successful conclusion. In this paper, the major loss mechanisms and loss models used to determine the efficiency of a single-stage centrifugal compressor are reviewed and analysed, followed by an examination to determine an appropriate loss correlation set for estimating the isentropic efficiency in the preliminary design stages of centrifugal compressors. Several semi-empirical correlations commonly used to predict the efficiency of centrifugal compressors were implemented in FORTRAN code and compared with experimental results in order to establish a loss correlation set that determines, with good approximation, the isentropic efficiency of a single-stage compressor. The aim of this study is to provide a suitable loss correlation set for determining the isentropic efficiency of a single-stage centrifugal compressor because, with the large number of loss mechanisms and correlations available in the literature, it is difficult to ascertain how many, and which, correlations to employ for correct prediction of the efficiency in the preliminary design stage of a centrifugal compressor. As a result of this study, a set of correlations comprising nine loss mechanisms is specified for single-stage centrifugal compressors consisting of a rotor and a diffuser.

  12. Quantitative property-property relationship (QPPR) approach in predicting flotation efficiency of chelating agents as mineral collectors.

    Science.gov (United States)

    Natarajan, R; Nirdosh, I; Venuvanalingam, P; Ramalingam, M

    2002-07-01

    The QPPR approach has been used to model cupferrons as mineral collectors. The separation efficiencies (Es) of these chelating agents have been correlated with property parameters, namely log P, log Koc, the substituent constant sigma, and Mulliken and ESP-derived charges, using multiple regression analysis. The Es of substituted cupferrons in the flotation of a uranium ore could be predicted within experimental error from either log P or log Koc together with an electronic parameter. However, when a halo, methoxy or phenyl substituent was para to the chelating group, the experimental Es was greater than the predicted value. Inclusion of a Boolean-type indicator parameter significantly improved the predictive power. This approach was extended to the 2-aminothiophenols that were used to float a zinc ore, and the correlations were found to be reasonably good.

  13. Virtual-view PSNR prediction based on a depth distortion tolerance model and support vector machine.

    Science.gov (United States)

    Chen, Fen; Chen, Jiali; Peng, Zongju; Jiang, Gangyi; Yu, Mei; Chen, Hua; Jiao, Renzhi

    2017-10-20

    Quality prediction of virtual views is important for free viewpoint video systems, and can be used as feedback to improve the performance of depth video coding and virtual-view rendering. In this paper, an efficient virtual-view peak signal-to-noise ratio (PSNR) prediction method is proposed. First, the effect of depth distortion on virtual-view quality is analyzed in detail, and a depth distortion tolerance (DDT) model that determines the DDT range is presented. Next, the DDT model is used to predict the virtual-view quality. Finally, a support vector machine (SVM) is utilized to train and obtain the virtual-view quality prediction model. Experimental results show that the Spearman's rank correlation coefficient and root mean square error between the actual PSNR and the PSNR predicted by the DDT model are 0.8750 and 0.6137 on average, and those for the SVM prediction model are 0.9109 and 0.5831. The computational complexity of the SVM method is lower than that of the DDT model and the state-of-the-art methods.
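    A minimal sketch of the SVM regression step follows, assuming an RBF-kernel SVR from scikit-learn and two illustrative depth-distortion features standing in for the paper's feature set.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Sketch: regress virtual-view PSNR on depth-distortion features. The two
# features here (mean depth distortion, fraction of pixels outside the DDT
# range) are illustrative stand-ins, not the paper's exact inputs.
rng = np.random.default_rng(0)
X = rng.random((300, 2))
psnr = 38 - 6 * X[:, 1] - 2 * X[:, 0] + rng.normal(0, 0.3, 300)   # toy PSNR, dB

svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
svr.fit(X, psnr)
print(svr.predict(X[:5]))   # predicted PSNR for the first few views
```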

  14. Applying a Global Sensitivity Analysis Workflow to Improve the Computational Efficiencies in Physiologically-Based Pharmacokinetic Modeling

    Directory of Open Access Journals (Sweden)

    Nan-Hung Hsieh

    2018-06-01

    Traditionally, the solution to reducing parameter dimensionality in a physiologically-based pharmacokinetic (PBPK) model is expert judgment. However, this approach may lead to bias in parameter estimates and model predictions if important parameters are fixed at uncertain or inappropriate values. The purpose of this study was to explore the application of global sensitivity analysis (GSA) to ascertain which parameters in the PBPK model are non-influential, and can therefore be assigned fixed values in Bayesian parameter estimation with minimal bias. We compared the elementary effect-based Morris method and three variance-based Sobol indices in their ability to distinguish “influential” parameters to be estimated from “non-influential” parameters to be fixed. We illustrated this approach using a published human PBPK model for acetaminophen (APAP) and its two primary metabolites, APAP-glucuronide and APAP-sulfate. We first applied GSA to the original published model, comparing Bayesian model calibration results using all 21 originally calibrated model parameters (OMP, determined by the “expert judgment”-based approach) vs. the subset of original influential parameters (OIP, determined by GSA from the OMP). We then applied GSA to all the PBPK parameters, including those fixed in the published model, comparing the model calibration results using this full set of 58 model parameters (FMP) vs. the full set of influential parameters (FIP, determined by GSA from the FMP). We also examined the impact of different cut-off points used to distinguish the influential and non-influential parameters. We found that Sobol indices calculated by eFAST provided the best combination of reliability (consistency with other variance-based methods) and efficiency (lowest computational cost to achieve convergence) in identifying influential parameters. We identified several originally calibrated parameters that were not influential, and could be fixed to improve computational
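    Variance-based GSA of this kind can be reproduced with the SALib package, assuming it is available; the sketch below uses its extended-FAST (eFAST) sampler and analyzer on a toy function standing in for a PBPK simulation, with illustrative parameter names and bounds.

```python
import numpy as np
from SALib.sample import fast_sampler
from SALib.analyze import fast

# Sketch of variance-based GSA with eFAST (SALib's extended FAST).
# The "model" is a toy function; names and bounds are illustrative only.
problem = {
    "num_vars": 3,
    "names": ["clearance", "partition_coeff", "absorption_rate"],
    "bounds": [[0.1, 1.0], [0.5, 5.0], [0.01, 0.5]],
}
X = fast_sampler.sample(problem, 1000)              # eFAST sampling design
Y = X[:, 0] ** 2 + 0.1 * X[:, 1] + 0.01 * X[:, 2]   # toy model output

Si = fast.analyze(problem, Y)
# First-order (S1) and total-order (ST) indices; parameters with small ST
# are candidates to be fixed before Bayesian calibration.
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: S1={s1:.3f} ST={st:.3f}")
```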

  15. A Combined Cooperative Braking Model with a Predictive Control Strategy in an Electric Vehicle

    Directory of Open Access Journals (Sweden)

    Hongqiang Guo

    2013-12-01

    Cooperative braking with regenerative braking and mechanical braking plays an important role in electric vehicles for energy-saving control. Based on the parallel and series cooperative braking models, a combined model with a predictive control strategy is presented to obtain better cooperative braking performance. The balance problem between maximum regenerative energy recovery efficiency and optimum braking stability is solved through an off-line process optimization stream with the collaborative optimization (CO) algorithm. To carry out the process optimization stream, the optimal Latin hypercube design (Opt LHD) is used to discretize the continuous design space. To address the poor real-time performance of the optimization, a high-precision predictive model based on the off-line optimization data of the combined model is built, and a predictive control strategy is proposed and verified through simulation. The simulation results demonstrate that the predictive control strategy and the combined model are reasonable and effective.

  16. Predictive modeling of coupled multi-physics systems: II. Illustrative application to reactor physics

    International Nuclear Information System (INIS)

    Cacuci, Dan Gabriel; Badea, Madalina Corina

    2014-01-01

    Highlights: • We applied the PMCMPS methodology to a paradigm neutron diffusion model. • We underscore the main steps in applying PMCMPS to treat very large coupled systems. • PMCMPS reduces the uncertainties in the optimally predicted responses and model parameters. • PMCMPS is for sequentially treating coupled systems that cannot be treated simultaneously. - Abstract: This work presents paradigm applications to reactor physics of the innovative mathematical methodology for “predictive modeling of coupled multi-physics systems (PMCMPS)” developed by Cacuci (2014). This methodology enables the assimilation of experimental and computational information and computes optimally predicted responses and model parameters with reduced predicted uncertainties, taking fully into account the coupling terms between the multi-physics systems, but using only the computational resources that would be needed to perform predictive modeling on each system separately. The paradigm examples presented in this work are based on a simple neutron diffusion model, chosen so as to enable closed-form solutions with clear physical interpretations. These paradigm examples also illustrate the computational efficiency of the PMCMPS, which enables the assimilation of additional experimental information, with a minimal increase in computational resources, to reduce the uncertainties in predicted responses and best-estimate values for uncertain model parameters, thus illustrating how very large systems can be treated without loss of information in a sequential rather than simultaneous manner

  17. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    In medical statistics, many alternative strategies are available for building a prediction model based on training data. Prediction models are routinely compared by means of their prediction performance in independent validation data. If only one data set is available for training and validation,...

  18. Bioavailability of particulate metal to zebra mussels: Biodynamic modelling shows that assimilation efficiencies are site-specific

    Energy Technology Data Exchange (ETDEWEB)

    Bourgeault, Adeline, E-mail: bourgeault@ensil.unilim.fr [Cemagref, Unite de Recherche Hydrosystemes et Bioprocedes, 1 rue Pierre-Gilles de Gennes, 92761 Antony (France); FIRE, FR-3020, 4 place Jussieu, 75005 Paris (France); Gourlay-France, Catherine, E-mail: catherine.gourlay@cemagref.fr [Cemagref, Unite de Recherche Hydrosystemes et Bioprocedes, 1 rue Pierre-Gilles de Gennes, 92761 Antony (France); FIRE, FR-3020, 4 place Jussieu, 75005 Paris (France); Priadi, Cindy, E-mail: cindy.priadi@eng.ui.ac.id [LSCE/IPSL CEA-CNRS-UVSQ, Avenue de la Terrasse, 91198 Gif-sur-Yvette (France); Ayrault, Sophie, E-mail: Sophie.Ayrault@lsce.ipsl.fr [LSCE/IPSL CEA-CNRS-UVSQ, Avenue de la Terrasse, 91198 Gif-sur-Yvette (France); Tusseau-Vuillemin, Marie-Helene, E-mail: Marie-helene.tusseau@ifremer.fr [IFREMER Technopolis 40, 155 rue Jean-Jacques Rousseau, 92138 Issy-Les-Moulineaux (France)

    2011-12-15

    This study investigates the ability of the biodynamic model to predict the trophic bioaccumulation of cadmium (Cd), chromium (Cr), copper (Cu), nickel (Ni) and zinc (Zn) in a freshwater bivalve. Zebra mussels were transplanted to three sites along the Seine River (France) and collected monthly for 11 months. Measurements of the metal body burdens in mussels were compared with the predictions from the biodynamic model. The exchangeable fraction of metal particles did not account for the bioavailability of particulate metals, since it did not capture the differences between sites. The assimilation efficiency (AE) parameter is necessary to take into account biotic factors influencing particulate metal bioavailability. The biodynamic model, applied with AEs from the literature, overestimated the measured concentrations in zebra mussels, the extent of overestimation being site-specific. Therefore, an original methodology was proposed for in situ AE measurements for each site and metal. - Highlights: > Exchangeable fraction of metal particles did not account for the bioavailability of particulate metals. > Need for site-specific biodynamic parameters. > Field-determined AE provide a good fit between the biodynamic model predictions and bioaccumulation measurements. - The interpretation of metal bioaccumulation in transplanted zebra mussels with biodynamic modelling highlights the need for site-specific assimilation efficiencies of particulate metals.

  19. Automated Irrigation System using Weather Prediction for Efficient Usage of Water Resources

    Science.gov (United States)

    Susmitha, A.; Alakananda, T.; Apoorva, M. L.; Ramesh, T. K.

    2017-08-01

    In agriculture, the major problem farmers face is water scarcity, so improving water usage matters; one drip-irrigation approach that has been implemented is the “Automated irrigation system with partition facility for effective irrigation of small scale farms” (AISPF). This method, however, has some drawbacks, which the method presented here, “Automated irrigation system using weather prediction for efficient usage of water resources” (AISWP), addresses. The AISWP method helps us use the available water resources more efficiently by sensing the moisture present in the soil and, in addition, predicting the weather by sensing two parameters, temperature and humidity, processing the measured values through an algorithm and releasing the water accordingly, an added feature of AISWP that allows water to be used efficiently.

  20. Predictive event modelling in multicenter clinical trials with waiting time to response.

    Science.gov (United States)

    Anisimov, Vladimir V

    2011-01-01

    A new analytic statistical technique for predictive event modeling in ongoing multicenter clinical trials with waiting time to response is developed. It allows for the predictive mean and predictive bounds for the number of events to be constructed over time, accounting for the newly recruited patients and patients already at risk in the trial, and for different recruitment scenarios. For modeling patient recruitment, an advanced Poisson-gamma model is used, which accounts for the variation in recruitment over time, the variation in recruitment rates between different centers and the opening or closing of some centers in the future. A few models for event appearance allowing for 'recurrence', 'death' and 'lost-to-follow-up' events and using finite Markov chains in continuous time are considered. To predict the number of future events over time for an ongoing trial at some interim time, the parameters of the recruitment and event models are estimated using current data and then the predictive recruitment rates in each center are adjusted using individual data and Bayesian re-estimation. For a typical scenario (continue to recruit during some time interval, then stop recruitment and wait until a particular number of events happens), the closed-form expressions for the predictive mean and predictive bounds of the number of events at any future time point are derived under the assumptions of Markovian behavior of the event progression. The technique is efficiently applied to modeling different scenarios for some ongoing oncology trials. Case studies are considered.
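    A minimal simulation sketch of the Poisson-gamma recruitment idea follows, with illustrative shape, rate and horizon values; the paper's analytic technique goes further by deriving closed-form predictive bounds.

```python
import numpy as np

# Sketch of Poisson-gamma recruitment: each centre's rate is drawn from a
# gamma distribution (between-centre variation) and arrivals in each centre
# are Poisson with that rate. All values are illustrative.
rng = np.random.default_rng(42)
n_centres, horizon_days = 50, 180
shape, rate = 2.0, 4.0                                  # gamma prior, patients/day

centre_rates = rng.gamma(shape, 1.0 / rate, n_centres)
one_draw = rng.poisson(centre_rates * horizon_days)     # one predictive draw
print("total recruitment in one draw:", one_draw.sum())

# Predictive bounds by simulation, resampling the centre rates each replicate.
reps = [rng.poisson(rng.gamma(shape, 1.0 / rate, n_centres) * horizon_days).sum()
        for _ in range(1000)]
print("90% predictive interval:", np.percentile(reps, [5, 95]))
```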

  1. Steam injection for heavy oil recovery: Modeling of wellbore heat efficiency and analysis of steam injection performance

    International Nuclear Information System (INIS)

    Gu, Hao; Cheng, Linsong; Huang, Shijun; Li, Bokai; Shen, Fei; Fang, Wenchao; Hu, Changhao

    2015-01-01

    Highlights: • A comprehensive mathematical model was established to estimate wellbore heat efficiency of steam injection wells. • A simplified approach of predicting steam pressure in wellbores was proposed. • High wellhead injection rate and wellhead steam quality can improve wellbore heat efficiency. • High wellbore heat efficiency does not necessarily mean good performance of heavy oil recovery. • Using excellent insulation materials is a good way to save water and fuels. - Abstract: The aims of this work are to present a comprehensive mathematical model for estimating wellbore heat efficiency and to analyze performance of steam injection for heavy oil recovery. In this paper, we firstly introduce steam injection process briefly. Secondly, a simplified approach of predicting steam pressure in wellbores is presented and a complete expression for steam quality is derived. More importantly, both direct and indirect methods are adopted to determine the wellbore heat efficiency. Then, the mathematical model is solved using an iterative technique. After the model is validated with measured field data, we study the effects of wellhead injection rate and wellhead steam quality on steam injection performance reflected in wellbores. Next, taking cyclic steam stimulation as an example, we analyze steam injection performance reflected in reservoirs with numerical reservoir simulation method. Finally, the significant role of improving wellbore heat efficiency in saving water and fuels is discussed in detail. The results indicate that we can improve the wellbore heat efficiency by enhancing wellhead injection rate or steam quality. However, high wellbore heat efficiency does not necessarily mean satisfactory steam injection performance reflected in reservoirs or good performance of heavy oil recovery. Moreover, the paper shows that using excellent insulation materials is a good way to save water and fuels due to enhancement of wellbore heat efficiency

  2. Valuing improvements in comfort from domestic energy-efficiency retrofits using a trade-off simulation model

    International Nuclear Information System (INIS)

    Clinch, J. Peter; Healy, John D.

    2003-01-01

    There are a number of stimuli behind energy efficiency, not least the Kyoto Protocol. The domestic sector has been highlighted as a key potential area. Improving energy efficiency in this sector also assists alleviating fuel poverty, for research is now demonstrating the strong relationship between poor domestic thermal efficiency, high fuel poverty and poor health and comfort status. Previous research has modelled the energy consumption and technical potential for energy saving resulting from energy-efficiency upgrades in this sector. However, there is virtually no work evaluating the economic benefit of improving households' thermal comfort post-retrofit. This paper does this for Ireland using a computer-simulation program. A dynamic modelling process is employed which projects into the future predicting the extent to which energy savings are forgone for improvements in comfort

  3. An efficient training scheme for supermodels

    Science.gov (United States)

    Schevenhoven, Francine J.; Selten, Frank M.

    2017-06-01

    Weather and climate models have improved steadily over time as witnessed by objective skill scores, although significant model errors remain. Given these imperfect models, predictions might be improved by combining them dynamically into a so-called supermodel. In this paper a new training scheme to construct such a supermodel is explored using a technique called cross pollination in time (CPT). In the CPT approach the models exchange states during the prediction. The number of possible predictions grows quickly with time, and a strategy to retain only a small number of predictions, called pruning, needs to be developed. The method is explored using low-order dynamical systems and applied to a global atmospheric model. The results indicate that the CPT training is efficient and leads to a supermodel with improved forecast quality as compared to the individual models. Due to its computational efficiency, the technique is suited for application to state-of-the art high-dimensional weather and climate models.
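    A minimal sketch of CPT with pruning is given below, using two imperfect Lorenz-63 models and a synthetic truth; the parameter perturbations, step sizes and keep-one pruning rule are illustrative simplifications of the scheme described above.

```python
import numpy as np

# Cross pollination in time (CPT), toy version: candidate trajectories may
# switch between two imperfect Lorenz-63 models at fixed exchange times, and
# pruning keeps only the candidate closest to the (synthetic) training truth.

def lorenz_step(x, sigma, rho, beta, dt=0.01):
    dx = np.array([sigma * (x[1] - x[0]),
                   x[0] * (rho - x[2]) - x[1],
                   x[0] * x[1] - beta * x[2]])
    return x + dt * dx                        # forward Euler for brevity

TRUTH = (10.0, 28.0, 8.0 / 3.0)               # "perfect" model
MODELS = [(9.0, 30.0, 8.0 / 3.0),             # imperfect model A (illustrative)
          (11.0, 26.0, 8.0 / 3.0)]            # imperfect model B (illustrative)

rng = np.random.default_rng(0)
x_true = rng.standard_normal(3)
candidates = [x_true.copy()]
steps_per_exchange, n_exchanges = 20, 50

for _ in range(n_exchanges):
    branched = []
    for c in candidates:
        for m in MODELS:                      # cross pollination: branch models
            x = c.copy()
            for _ in range(steps_per_exchange):
                x = lorenz_step(x, *m)
            branched.append(x)
    for _ in range(steps_per_exchange):       # advance the synthetic truth
        x_true = lorenz_step(x_true, *TRUTH)
    # pruning: retain the single candidate nearest the training truth
    candidates = [min(branched, key=lambda b: np.linalg.norm(b - x_true))]

print("distance to truth after CPT training:",
      np.linalg.norm(candidates[0] - x_true))
```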

  4. Genomic-Enabled Prediction Kernel Models with Random Intercepts for Multi-environment Trials

    Science.gov (United States)

    Cuevas, Jaime; Granato, Italo; Fritsche-Neto, Roberto; Montesinos-Lopez, Osval A.; Burgueño, Juan; Bandeira e Sousa, Massaine; Crossa, José

    2018-01-01

    In this study, we compared the prediction accuracy of the main genotypic effect model (MM) without G×E interactions, the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe), where the random genetic effects of the lines are modeled with the markers (or pedigree). With the objective of further modeling the genetic residual of the lines, we incorporated the random intercepts of the lines (l) and generated another three models. Each of these 6 models was fitted with a linear kernel method (Genomic Best Linear Unbiased Predictor, GB) and a Gaussian kernel (GK) method. We compared these 12 model-method combinations with another two multi-environment G×E interaction models with unstructured variance-covariances (MUC) using GB and GK kernels (4 model-method combinations). Thus, we compared the genomic-enabled prediction accuracy of a total of 16 model-method combinations on two maize data sets with positive phenotypic correlations among environments, and on two wheat data sets with complex G×E that includes some negative and close-to-zero phenotypic correlations among environments. The two models (MDs and MDe) with the random intercept of the lines and the GK method were computationally efficient and gave high prediction accuracy in the two maize data sets. Regarding the more complex G×E wheat data sets, the model-method combinations with G×E (MDs and MDe) including the random intercepts of the lines with the GK method had important savings in computing time as compared with the G×E interaction multi-environment models with unstructured variance-covariances, but with lower genomic prediction accuracy. PMID:29476023

  5. Accurate and dynamic predictive model for better prediction in medicine and healthcare.

    Science.gov (United States)

    Alanazi, H O; Abdullah, A H; Qureshi, K N; Ismail, A S

    2018-05-01

    Information and communication technologies (ICTs) have brought new integrated operations and methods to all fields of life. The health sector has also adopted new technologies to improve its systems and provide better services to customers. Predictive models in health care have likewise been influenced by new technologies for predicting different disease outcomes. However, existing predictive models still suffer from limitations in predictive performance. In order to improve predictive model performance, this paper proposes a predictive model that classifies disease predictions into different categories. To achieve this, the paper uses traumatic brain injury (TBI) datasets. TBI is one of the most serious diseases worldwide and needs more attention due to its grave impact on human life. The proposed predictive model improves the predictive performance for TBI. The TBI data set was developed and approved by neurologists to set its features. The experimental results show that the proposed model achieves significant results in accuracy, sensitivity, and specificity.

  6. Can We Predict Patient Wait Time?

    Science.gov (United States)

    Pianykh, Oleg S; Rosenthal, Daniel I

    2015-10-01

    The importance of patient wait-time management and predictability can hardly be overestimated: For most hospitals, it is the patient queues that drive and define every bit of clinical workflow. The objective of this work was to study the predictability of patient wait time and identify its most influential predictors. To solve this problem, we developed a comprehensive list of 25 wait-related parameters, suggested in earlier work and observed in our own experiments. All parameters were chosen as derivable from a typical Hospital Information System dataset. The parameters were fed into several time-predicting models, and the best parameter subsets, discovered through exhaustive model search, were applied to a large sample of actual patient wait data. We were able to discover the most efficient wait-time prediction factors and models, such as the line-size models introduced in this work. Moreover, these models proved to be equally accurate and computationally efficient. Finally, the selected models were implemented in our patient waiting areas, displaying predicted wait times on the monitors located at the front desks. The limitations of these models are also discussed. Optimal regression models based on wait-line sizes can provide accurate and efficient predictions for patient wait time.

  7. Efficient finite element modelling for the investigation of the dynamic behaviour of a structure with bolted joints

    Science.gov (United States)

    Omar, R.; Rani, M. N. Abdul; Yunus, M. A.; Mirza, W. I. I. Wan Iskandar; Zin, M. S. Mohd

    2018-04-01

    A simple structure with bolted joints consists of the structural components, bolts and nuts. There are several methods to model structures with bolted joints; however, there is no reliable, efficient and economical modelling method that can accurately predict their dynamic behaviour. This paper explains an investigation conducted to obtain an appropriate modelling method for bolted joints. This was carried out by evaluating four different finite element (FE) models of the assembled plates and bolts, namely the solid plates-bolts model, the plates without bolt model, the hybrid plates-bolts model and the simplified plates-bolts model. FE modal analysis was conducted for all four initial FE models of the bolted joints. Results of the FE modal analysis were compared with the experimental modal analysis (EMA) results. EMA was performed to extract the natural frequencies and mode shapes of the physical test structure with bolted joints. The evaluation compared the number of nodes, the number of elements, the elapsed central processing unit (CPU) time, and the total percentage error of each initial FE model against the EMA results. The evaluation showed that the simplified plates-bolts model could most accurately predict the dynamic behaviour of the structure with bolted joints. This study proved that reliable, efficient and economical modelling of bolted joints, mainly the representation of the bolting, plays a crucial role in ensuring the accuracy of the dynamic behaviour prediction.

  8. PARAMO: a PARAllel predictive MOdeling platform for healthcare analytic research using electronic health records.

    Science.gov (United States)

    Ng, Kenney; Ghoting, Amol; Steinhubl, Steven R; Stewart, Walter F; Malin, Bradley; Sun, Jimeng

    2014-04-01

    Healthcare analytics research increasingly involves the construction of predictive models for disease targets across varying patient cohorts using electronic health records (EHRs). To facilitate this process, it is critical to support a pipeline of tasks: (1) cohort construction, (2) feature construction, (3) cross-validation, (4) feature selection, and (5) classification. To develop an appropriate model, it is necessary to compare and refine models derived from a diversity of cohorts, patient-specific features, and statistical frameworks. The goal of this work is to develop and evaluate a predictive modeling platform that can be used to simplify and expedite this process for health data. To support this goal, we developed a PARAllel predictive MOdeling (PARAMO) platform which (1) constructs a dependency graph of tasks from specifications of predictive modeling pipelines, (2) schedules the tasks in a topological ordering of the graph, and (3) executes those tasks in parallel. We implemented this platform using Map-Reduce to enable independent tasks to run in parallel in a cluster computing environment. Different task scheduling preferences are also supported. We assess the performance of PARAMO on various workloads using three datasets derived from the EHR systems in place at Geisinger Health System and Vanderbilt University Medical Center and an anonymous longitudinal claims database. We demonstrate significant gains in computational efficiency against a standard approach. In particular, PARAMO can build 800 different models on a 300,000 patient data set in 3 hours in parallel compared to 9 days if running sequentially. This work demonstrates that an efficient parallel predictive modeling platform can be developed for EHR data. This platform can facilitate large-scale modeling endeavors and speed-up the research workflow and reuse of health information. This platform is only a first step and provides the foundation for our ultimate goal of building analytic pipelines
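    The dependency-graph scheduling idea can be sketched with the Python standard library alone; the task names and bodies below are illustrative placeholders, not PARAMO's Map-Reduce implementation.

```python
from concurrent.futures import ThreadPoolExecutor
from graphlib import TopologicalSorter

# Sketch: build a dependency graph of pipeline tasks, then execute tasks in
# parallel as soon as all of their predecessors have finished.
def run(task):
    print(f"running {task}")
    return task

# task -> set of tasks it depends on (illustrative pipeline stages)
pipeline = {
    "cohort":   set(),
    "features": {"cohort"},
    "cv_split": {"features"},
    "select":   {"cv_split"},
    "classify": {"select"},
}

ts = TopologicalSorter(pipeline)
ts.prepare()
with ThreadPoolExecutor(max_workers=4) as pool:
    while ts.is_active():
        ready = list(ts.get_ready())          # tasks whose deps are all done
        for task, _ in zip(ready, pool.map(run, ready)):
            ts.done(task)                     # unlock dependent tasks
```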

  9. Incorporating photon recycling into the analytical drift-diffusion model of high efficiency solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Lumb, Matthew P. [The George Washington University, 2121 I Street NW, Washington, DC 20037 (United States); Naval Research Laboratory, Washington, DC 20375 (United States); Steiner, Myles A.; Geisz, John F. [National Renewable Energy Laboratory, Golden, Colorado 80401 (United States); Walters, Robert J. [Naval Research Laboratory, Washington, DC 20375 (United States)

    2014-11-21

    The analytical drift-diffusion formalism is able to accurately simulate a wide range of solar cell architectures and was recently extended to include those with back surface reflectors. However, as solar cells approach the limits of material quality, photon recycling effects become increasingly important in predicting the behavior of these cells. In particular, the minority carrier diffusion length is significantly affected by the photon recycling, with consequences for the solar cell performance. In this paper, we outline an approach to account for photon recycling in the analytical Hovel model and compare analytical model predictions to GaAs-based experimental devices operating close to the fundamental efficiency limit.

  10. Navigational efficiency in a biased and correlated random walk model of individual animal movement.

    Science.gov (United States)

    Bailey, Joseph D; Wallis, Jamie; Codling, Edward A

    2018-01-01

    Understanding how an individual animal is able to navigate through its environment is a key question in movement ecology that can give insight into observed movement patterns and the mechanisms behind them. Efficiency of navigation is important for behavioral processes at a range of different spatio-temporal scales, including foraging and migration. Random walk models provide a standard framework for modeling individual animal movement and navigation. Here we consider a vector-weighted biased and correlated random walk (BCRW) model for directed movement (taxis), where external navigation cues are balanced with forward persistence. We derive a mathematical approximation of the expected navigational efficiency for any BCRW of this form and confirm the model predictions using simulations. We demonstrate how the navigational efficiency is related to the weighting given to forward persistence and external navigation cues, and highlight the counter-intuitive result that for low (but realistic) levels of error on forward persistence, a higher navigational efficiency is achieved by giving more weighting to this indirect navigation cue rather than direct navigational cues. We discuss and interpret the relevance of these results for understanding animal movement and navigation strategies.
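    A minimal simulation sketch of a vector-weighted BCRW follows, assuming von Mises errors on the compass cue and on forward persistence; the weights and concentrations are illustrative. Sweeping the cue weight reproduces the qualitative effect described above when persistence error is low.

```python
import numpy as np

rng = np.random.default_rng(0)

def unit(angle):
    return np.array([np.cos(angle), np.sin(angle)])

def bcrw_efficiency(w, kappa_cue=2.0, kappa_persist=20.0, n_steps=10_000):
    """w weights the external cue against forward persistence (0..1)."""
    prev = unit(rng.vonmises(0.0, kappa_cue))            # initial heading
    eff = 0.0
    for _ in range(n_steps):
        cue = unit(rng.vonmises(0.0, kappa_cue))         # noisy compass to +x
        persist = unit(rng.vonmises(np.arctan2(prev[1], prev[0]), kappa_persist))
        step = w * cue + (1.0 - w) * persist             # vector-weighted BCRW
        step /= np.linalg.norm(step)
        eff += step[0] / n_steps                         # component to target
        prev = step
    return eff

for w in (0.2, 0.5, 0.8):
    print(f"cue weight {w}: efficiency ~ {bcrw_efficiency(w):.3f}")
```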

  11. A cluster expansion model for predicting activation barrier of atomic processes

    International Nuclear Information System (INIS)

    Rehman, Tafizur; Jaipal, M.; Chatterjee, Abhijit

    2013-01-01

    We introduce a procedure based on cluster expansion models for predicting the activation barrier of atomic processes encountered while studying the dynamics of a material system using the kinetic Monte Carlo (KMC) method. Starting with an interatomic potential description, a mathematical derivation is presented to show that the local environment dependence of the activation barrier can be captured using cluster interaction models. Next, we develop a systematic procedure for training the cluster interaction model on-the-fly, which involves: (i) obtaining activation barriers for a handful of local environments using nudged elastic band (NEB) calculations, (ii) identifying the local environment by analyzing the NEB results, and (iii) estimating the cluster interaction model parameters from the activation barrier data. Once a cluster expansion model has been trained, it is used to predict activation barriers without requiring any additional NEB calculations. Numerical studies are performed to validate the cluster expansion model by studying hop processes in Ag/Ag(100). We show that the use of the cluster expansion model with KMC enables efficient generation of an accurate process rate catalog

  12. A Computationally-Efficient Numerical Model to Characterize the Noise Behavior of Metal-Framed Walls

    Directory of Open Access Journals (Sweden)

    Arun Arjunan

    2015-08-01

    Architects, designers, and engineers are making great efforts to design acoustically efficient metal-framed walls that minimize acoustic bridging. Efficient simulation models that predict acoustic insulation in compliance with ISO 10140 are therefore needed at the design stage. To achieve this, a numerical model consisting of two fluid-filled reverberation chambers, partitioned by a metal-framed wall, is simulated at one-third-octave bands. This produces a large simulation model consisting of several million nodes and elements, so efficient meshing procedures are necessary to obtain better solution times and to use computational resources effectively. Such models should also demonstrate effective fluid-structure interaction (FSI) along with acoustic-fluid coupling to simulate a realistic scenario. In this contribution, the development of a finite element frequency-dependent mesh model that can characterize the sound insulation of metal-framed walls is presented. Preliminary results on the application of the proposed model to study the geometric contribution of stud frames to the overall acoustic performance of metal-framed walls are also presented. It is considered that the presented numerical model can be used to effectively visualize the noise behaviour of advanced materials and multi-material structures.

  13. A Scalable Version of the Navy Operational Global Atmospheric Prediction System Spectral Forecast Model

    Directory of Open Access Journals (Sweden)

    Thomas E. Rosmond

    2000-01-01

    The Navy Operational Global Atmospheric Prediction System (NOGAPS) includes a state-of-the-art spectral forecast model similar to models run at several major operational numerical weather prediction (NWP) centers around the world. The model, developed by the Naval Research Laboratory (NRL) in Monterey, California, has run operationally at the Fleet Numerical Meteorology and Oceanography Center (FNMOC) since 1982, and most recently has been run on a Cray C90 in a multi-tasked configuration. Typically the multi-tasked code runs on 10 to 15 processors with an overall parallel efficiency of about 90%. The operational resolution is T159L30, but other operational and research applications run at significantly lower resolutions. A scalable NOGAPS forecast model has been developed by NRL in anticipation of a FNMOC C90 replacement in about 2001, as well as for current NOGAPS research requirements to run on DOD High-Performance Computing (HPC) scalable systems. The model is designed to run with message passing (MPI). Model design criteria include bit reproducibility for different processor numbers and reasonably efficient performance on fully shared memory, distributed memory, and distributed shared memory systems for a wide range of model resolutions. Results for a wide range of processor numbers, model resolutions, and different vendor architectures are presented. Single-node performance has been disappointing on RISC-based systems, at least compared to vector processor performance. This is a common complaint, and will require careful re-examination of traditional NWP model software design and data organization to fully exploit future scalable architectures.

  14. Identifying influential data points in hydrological model calibration and their impact on streamflow predictions

    Science.gov (United States)

    Wright, David; Thyer, Mark; Westra, Seth

    2015-04-01

    Highly influential data points are those that have a disproportionately large impact on model performance, parameters and predictions. However, in current hydrological modelling practice the relative influence of individual data points on hydrological model calibration is not commonly evaluated. This presentation illustrates and evaluates several influence diagnostic tools that hydrological modellers can use to assess the relative influence of data. The feasibility and importance of including influence detection diagnostics as a standard tool in hydrological model calibration is discussed. Two classes of influence diagnostics are evaluated: (1) computationally demanding numerical "case deletion" diagnostics; and (2) computationally efficient analytical diagnostics based on Cook's distance. These diagnostics are compared against hydrologically oriented diagnostics that describe changes in the model parameters (measured through the Mahalanobis distance), performance (objective function displacement) and predictions (mean and maximum streamflow). The influence diagnostics are applied to two case studies: a stage/discharge rating curve model, and a conceptual rainfall-runoff model (GR4J). Removing a single data point from the calibration resulted in differences in mean flow predictions of up to 6% for the rating curve model, and differences in mean and maximum flow predictions of up to 10% and 17%, respectively, for the hydrological model. When using the Nash-Sutcliffe efficiency in calibration, the computationally cheaper Cook's distance metrics produce results similar to the case-deletion metrics at a fraction of the computational cost. However, Cook's distance is adapted from linear regression, with inherent assumptions on the data, and is therefore less flexible than case deletion. Influential point detection diagnostics show great potential to improve current hydrological modelling practices by identifying highly influential data points. The findings of this
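    For linear (or linearised) models the analytical diagnostic is directly available; a minimal sketch using the Cook's distance exposed by statsmodels on a toy log-linearised rating curve follows. The data and model are illustrative, and a nonlinear model such as GR4J would require case-deletion re-runs instead.

```python
import numpy as np
import statsmodels.api as sm

# Toy stage/discharge data following a power-law rating curve with noise.
rng = np.random.default_rng(0)
stage = rng.uniform(0.5, 3.0, 40)
discharge = 5.0 * stage ** 1.6 * np.exp(rng.normal(0, 0.05, 40))

# Log-log linearisation makes the rating curve an ordinary linear regression.
X = sm.add_constant(np.log(stage))
fit = sm.OLS(np.log(discharge), X).fit()

# Cook's distance per calibration point: large values flag influential data.
cooks_d, _ = fit.get_influence().cooks_distance
print("most influential points:", np.argsort(cooks_d)[-3:])
```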

  15. A Case Study Using Modeling and Simulation to Predict Logistics Supply Chain Issues

    Science.gov (United States)

    Tucker, David A.

    2007-01-01

    Optimization of critical supply chains to deliver thousands of parts, materials, sub-assemblies, and vehicle structures as needed is vital to the success of the Constellation Program. Thorough analysis needs to be performed on the integrated supply chain processes to plan, source, make, deliver, and return critical items efficiently. Process modeling provides simulation technology-based, predictive solutions for supply chain problems which enable decision makers to reduce costs, accelerate cycle time and improve business performance. For example, United Space Alliance, LLC utilized this approach in late 2006 to build simulation models that recreated shuttle orbiter thruster failures and predicted the potential impact of thruster removals on logistics spare assets. The main objective was the early identification of possible problems in providing thruster spares for the remainder of the Shuttle Flight Manifest. After extensive analysis the model results were used to quantify potential problems and led to improvement actions in the supply chain. Similarly the proper modeling and analysis of Constellation parts, materials, operations, and information flows will help ensure the efficiency of the critical logistics supply chains and the overall success of the program.

  16. Modeling international trends in energy efficiency

    International Nuclear Information System (INIS)

    Stern, David I.

    2012-01-01

    I use a stochastic production frontier to model energy efficiency trends in 85 countries over a 37-year period. Differences in energy efficiency across countries are modeled as a stochastic function of explanatory variables and I estimate the model using the cross-section of time-averaged data, so that no structure is imposed on technological change over time. Energy efficiency is measured using a new energy distance function approach. The country using the least energy per unit output, given its mix of outputs and inputs, defines the global production frontier. A country's relative energy efficiency is given by its distance from the frontier—the ratio of its actual energy use to the minimum required energy use, ceteris paribus. Energy efficiency is higher in countries with, inter alia, higher total factor productivity, undervalued currencies, and smaller fossil fuel reserves and it converges over time across countries. Globally, technological change was the most important factor counteracting the energy-use and carbon-emissions increasing effects of economic growth.

  17. An integrated numerical model for the prediction of Gaussian and billet shapes

    DEFF Research Database (Denmark)

    Hattel, Jesper; Pryds, Nini; Pedersen, Trine Bjerre

    2004-01-01

    Separate models for the atomisation and the deposition stages were recently integrated by the authors to form a unified model describing the entire spray-forming process. In the present paper, the focus is on describing the shape of the deposited material during the spray-forming process, obtained...... by this model. After a short review of the models and their coupling, the important factors which influence the resulting shape, i.e. Gaussian or billet, are addressed. The key parameters, which are utilized to predict the geometry and dimension of the deposited material, are the sticking efficiency...

  18. Supplementary Material for: DASPfind: new efficient method to predict drug–target interactions

    KAUST Repository

    Ba Alawi, Wail

    2016-01-01

    Abstract Background Identification of novel drug–target interactions (DTIs) is important for drug discovery. Experimental determination of such DTIs is costly and time consuming, hence it necessitates the development of efficient computational methods for the accurate prediction of potential DTIs. To date, many computational methods have been proposed for this purpose, but they suffer the drawback of a high rate of false positive predictions. Results Here, we developed a novel computational DTI prediction method, DASPfind. DASPfind uses simple paths of particular lengths inferred from a graph that describes DTIs, similarities between drugs, and similarities between the protein targets of drugs. We show that on average, over the four gold standard DTI datasets, DASPfind significantly outperforms other existing methods when the single top-ranked predictions are considered, resulting in 46.17% of these predictions being correct, and it achieves 49.22% correct single top-ranked predictions when the set of all DTIs for a single drug is tested. Furthermore, we demonstrate that our method is best suited for predicting DTIs in cases of drugs with no known targets or with few known targets. We also show the practical use of DASPfind by generating novel predictions for the Ion Channel dataset and validating them manually. Conclusions DASPfind is a computational method for finding reliable new interactions between drugs and proteins. We show over six different DTI datasets that DASPfind outperforms other state-of-the-art methods when the single top-ranked predictions are considered, or when a drug with no known targets or with few known targets is considered. We illustrate the usefulness and practicality of DASPfind by predicting novel DTIs for the Ion Channel dataset. The validated predictions suggest that DASPfind can be used as an efficient method to identify correct DTIs, thus reducing the cost of necessary experimental verifications in the process of drug discovery.
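
    The path-based scoring idea lends itself to a compact sketch. The toy version below uses networkx; the miniature graph, the edge weights, and the geometric length-damping factor alpha are illustrative assumptions rather than the authors' exact scoring function.

        import networkx as nx

        # Toy heterogeneous graph: drug-drug and target-target similarity edges
        # plus known drug-target interaction edges, all weighted in [0, 1].
        G = nx.Graph()
        G.add_edge("drugA", "drugB", weight=0.8)   # drug similarity
        G.add_edge("t1", "t2", weight=0.6)         # target similarity
        G.add_edge("drugB", "t1", weight=1.0)      # known interaction
        G.add_edge("drugA", "t2", weight=1.0)      # known interaction

        def path_score(G, drug, target, max_len=3, alpha=0.5):
            """Sum over simple paths of the product of edge weights,
            damped by alpha per edge so that shorter paths dominate."""
            score = 0.0
            for path in nx.all_simple_paths(G, drug, target, cutoff=max_len):
                w = 1.0
                for u, v in zip(path, path[1:]):
                    w *= G[u][v]["weight"]
                score += alpha ** (len(path) - 1) * w
            return score

        # Rank a candidate interaction that is absent from the graph
        print(path_score(G, "drugA", "t1"))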

  19. Multi-model analysis in hydrological prediction

    Science.gov (United States)

    Lanthier, M.; Arsenault, R.; Brissette, F.

    2017-12-01

    Hydrologic modelling is, by nature, a simplification of the real-world hydrologic system. Ensemble hydrological predictions thus obtained do not present the full range of possible streamflow outcomes, thereby producing ensembles which demonstrate errors in variance such as under-dispersion. Past studies show that lumped models used in prediction mode can return satisfactory results, especially when there is not enough information available on the watershed to run a distributed model. But all lumped models greatly simplify the complex processes of the hydrologic cycle. To generate more spread in the hydrologic ensemble predictions, multi-model ensembles have been considered. In this study, the aim is to propose and analyse a method that gives an ensemble streamflow prediction that properly represents the forecast probabilities and has reduced ensemble bias. To achieve this, three simple lumped models are used to generate an ensemble. These will also be combined using multi-model averaging techniques, which generally generate a more accurate hydrograph than the best of the individual models in simulation mode. This new predictive combined hydrograph is added to the ensemble, thus creating a large ensemble which may improve the variability while also improving the ensemble mean bias. The quality of the predictions is then assessed over different periods: 2 weeks, 1 month, 3 months and 6 months, using a PIT histogram of the percentiles of the real observation volumes with respect to the volumes of the ensemble members. Initially, the models were run using historical weather data to generate synthetic flows. This worked for individual models, but not for the multi-model and for the large ensemble. Consequently, by performing data assimilation at each prediction period and thus adjusting the initial states of the models, the PIT histogram could be constructed using the observed flows while allowing the use of the multi-model predictions. The under-dispersion has been
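
    The PIT-based verification used above is easy to reproduce. A minimal sketch, assuming the PIT value is taken as the fraction of ensemble members falling below each observation; the synthetic observations and ensemble are placeholders for real flow volumes.

        import numpy as np

        def pit_values(ensembles, observations):
            """Empirical PIT: fraction of ensemble members below each
            observation; uniform on [0, 1] for a reliable ensemble."""
            ensembles = np.asarray(ensembles)          # (n_times, n_members)
            observations = np.asarray(observations)
            return (ensembles < observations[:, None]).mean(axis=1)

        rng = np.random.default_rng(1)
        obs = rng.gamma(2.0, 50.0, size=200)                    # "observed" volumes
        ens = obs[:, None] + rng.normal(0, 15, size=(200, 30))  # synthetic ensemble

        pit = pit_values(ens, obs)
        counts, _ = np.histogram(pit, bins=10, range=(0.0, 1.0))
        print(counts)   # flat histogram => reliable; U-shape => under-dispersion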

  20. A Bayesian Hierarchical Modeling Approach to Predicting Flow in Ungauged Basins

    Science.gov (United States)

    Gronewold, A.; Alameddine, I.; Anderson, R. M.

    2009-12-01

    Recent innovative approaches to identifying and applying regression-based relationships between land use patterns (such as increasing impervious surface area and decreasing vegetative cover) and rainfall-runoff model parameters represent novel and promising improvements to predicting flow from ungauged basins. In particular, these approaches allow for predicting flows under uncertain and potentially variable future conditions due to rapid land cover changes, variable climate conditions, and other factors. Despite the broad range of literature on estimating rainfall-runoff model parameters, however, the absence of a robust set of modeling tools for identifying and quantifying uncertainties in (and correlation between) rainfall-runoff model parameters represents a significant gap in current hydrological modeling research. Here, we build upon a series of recent publications promoting novel Bayesian and probabilistic modeling strategies for quantifying rainfall-runoff model parameter estimation uncertainty. Our approach applies alternative measures of rainfall-runoff model parameter joint likelihood (including Nash-Sutcliffe efficiency, among others) to simulate samples from the joint parameter posterior probability density function. We then use these correlated samples as response variables in a Bayesian hierarchical model with land use coverage data as predictor variables in order to develop a robust land use-based tool for forecasting flow in ungauged basins while accounting for, and explicitly acknowledging, parameter estimation uncertainty. We apply this modeling strategy to low-relief coastal watersheds of Eastern North Carolina, an area representative of coastal resource waters throughout the world because of its sensitive embayments and because of the abundant (but currently threatened) natural resources it hosts. Consequently, this area is the subject of several ongoing studies and large-scale planning initiatives, including those conducted through the United

  1. Predictive simulation of bidirectional Glenn shunt using a hybrid blood vessel model.

    Science.gov (United States)

    Li, Hao; Leow, Wee Kheng; Chiu, Ing-Sh

    2009-01-01

    This paper proposes a method for performing predictive simulation of cardiac surgery. It applies a hybrid approach to model the deformation of blood vessels. The hybrid blood vessel model consists of a reference Cosserat rod and a surface mesh. The reference Cosserat rod models the blood vessel's global bending, stretching, twisting and shearing in a physically correct manner, and the surface mesh models the surface details of the blood vessel. In this way, the deformation of blood vessels can be computed efficiently and accurately. Our predictive simulation system can produce complex surgical results given a small amount of user input. It allows the surgeon to easily explore various surgical options and evaluate them. Tests of the system using the bidirectional Glenn shunt (BDG) as an application example show that the results produced by the system are similar to real surgical results.

  2. Unified model to predict flexural shear behavior of externally bonded RC beams

    International Nuclear Information System (INIS)

    Colotti, V.; Spadea, G.; Swamy, R.N.

    2006-01-01

    Structural strengthening with externally bonded reinforcement is now recognized as a cost-effective, structurally sound and practically efficient method of rehabilitating deteriorating and damaged reinforced concrete beams. There is now an urgent need to develop a sound engineering basis which can predict the failure loads of all such strengthened beams in a reliable and consistent manner. Existing models to predict the behavior at ultimate of strengthened beams suffer from many limitations and weaknesses. This paper presents a unified global model, based on the Strut-and-Tie approach, to predict the failure loads of reinforced concrete beams strengthened for flexure and/or shear. This structural model is based on rational engineering principles, considers all the possible failure modes, and incorporates the bond load-transfer mechanism to reflect the debonding phenomena, which have a dominant influence on the failure process of plated beams. The model is validated against about 200 strengthened beam tests reported in the literature, failing in flexure and/or shear, involving a large number of structural variables and steel, carbon and glass fiber reinforced polymer laminates as the reinforcing medium. (author)

  3. Power maximization of a point absorber wave energy converter using improved model predictive control

    Science.gov (United States)

    Milani, Farideh; Moghaddam, Reihaneh Kardehi

    2017-08-01

    This paper considers controlling and maximizing the absorbed power of wave energy converters for irregular waves. With respect to the physical constraints of the system, model predictive control is applied. The behavior of irregular waves is predicted by a Kalman filter. Owing to the great influence of the controller parameters on the absorbed power, these parameters are optimized by the imperialist competitive algorithm. The results illustrate the method's efficiency in maximizing the extracted power in the presence of an unknown excitation force, which is predicted by the Kalman filter.
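
    As a rough sketch of the prediction step feeding such a controller, the following implements a textbook linear Kalman filter with an assumed harmonic-oscillator model for the excitation force. The state model, noise covariances and dominant wave frequency are illustrative assumptions, since the abstract does not specify them.

        import numpy as np

        dt, omega = 0.1, 1.2                 # time step, assumed dominant frequency
        A = np.array([[1.0, dt],             # force / force-rate state model
                      [-omega**2 * dt, 1.0]])
        H = np.array([[1.0, 0.0]])           # only the force is measured
        Q = 1e-2 * np.eye(2)                 # process noise (irregular component)
        R = np.array([[0.5]])                # measurement noise

        def kalman_step(x, P, z):
            x, P = A @ x, A @ P @ A.T + Q                    # predict
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # gain
            x = x + (K @ (z - H @ x)).ravel()                # update
            P = (np.eye(2) - K @ H) @ P
            return x, P

        rng = np.random.default_rng(2)
        x, P = np.zeros(2), np.eye(2)
        for k in range(100):
            z = np.array([np.sin(omega * k * dt) + 0.2 * rng.normal()])
            x, P = kalman_step(x, P, z)
        print("one-step-ahead excitation force for the MPC:", (A @ x)[0])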

  4. Efficiency of economic development models

    Directory of Open Access Journals (Sweden)

    Oana Camelia Iacob

    2013-03-01

    Full Text Available The world economy is becoming increasingly integrated. Integrating emerging economies of Asia, such as China and India, increases competition on the world stage, putting pressure on the "actors" already existing. These developments have raised questions about the effectiveness of the European development model, which focuses on a high level of equity, insurance and social protection. According to analysts, the world today faces three models of economic development with significant weight in the world: the European, the American and the Asian. This study will focus on analyzing the European development model, with a brief comparison with the United States. In addition, this study aims to highlight the relationship between efficiency and social equity that occurs in each submodel of the European model, given that social and economic performances in the EU are not homogeneous. To achieve this, it is necessary to analyze different indicators related to social equity and efficiency, respectively, to observe the performance of each submodel individually. The article analyzes data to determine submodel performance according to social equity and economic efficiency.

  5. Modeling the evolution of channel shape: Balancing computational efficiency with hydraulic fidelity

    Science.gov (United States)

    Wobus, C.W.; Kean, J.W.; Tucker, G.E.; Anderson, R. Scott

    2008-01-01

    The cross-sectional shape of a natural river channel controls the capacity of the system to carry water off a landscape, to convey sediment derived from hillslopes, and to erode its bed and banks. Numerical models that describe the response of a landscape to changes in climate or tectonics therefore require formulations that can accommodate evolution of channel cross-sectional geometry. However, fully two-dimensional (2-D) flow models are too computationally expensive to implement in large-scale landscape evolution models, while available simple empirical relationships between width and discharge do not adequately capture the dynamics of channel adjustment. We have developed a simplified 2-D numerical model of channel evolution in a cohesive, detachment-limited substrate subject to steady, unidirectional flow. Erosion is assumed to be proportional to boundary shear stress, which is calculated using an approximation of the flow field in which log-velocity profiles are assumed to apply along vectors that are perpendicular to the local channel bed. Model predictions of the velocity structure, peak boundary shear stress, and equilibrium channel shape compare well with predictions of a more sophisticated but more computationally demanding ray-isovel model. For example, the mean velocities computed by the two models are consistent to within ∼3%, and the predicted peak shear stress is consistent to within ∼7%. Furthermore, the shear stress distributions predicted by our model compare favorably with available laboratory measurements for prescribed channel shapes. A modification to our simplified code in which the flow includes a high-velocity core allows the model to be extended to estimate shear stress distributions in channels with large width-to-depth ratios. Our model is efficient enough to incorporate into large-scale landscape evolution codes and can be used to examine how channels adjust both cross-sectional shape and slope in response to tectonic and climatic

  6. An efficiency correction model

    NARCIS (Netherlands)

    Francke, M.K.; de Vos, A.F.

    2009-01-01

    We analyze a dataset containing costs and outputs of 67 American local exchange carriers in a period of 11 years. This data has been used to judge the efficiency of BT and KPN using static stochastic frontier models. We show that these models are dynamically misspecified. As an alternative we

  7. Development of an empirical model of turbine efficiency using the Taylor expansion and regression analysis

    International Nuclear Information System (INIS)

    Fang, Xiande; Xu, Yu

    2011-01-01

    The empirical model of turbine efficiency is necessary for the control- and/or diagnosis-oriented simulation and useful for the simulation and analysis of dynamic performances of the turbine equipment and systems, such as air cycle refrigeration systems, power plants, turbine engines, and turbochargers. Existing empirical models of turbine efficiency are insufficient because there is no suitable form available for air cycle refrigeration turbines. This work performs a critical review of empirical models (called mean value models in some literature) of turbine efficiency and develops an empirical model in the desired form for air cycle refrigeration, the dominant cooling approach in aircraft environmental control systems. The Taylor series and regression analysis are used to build the model, with the Taylor series being used to expand functions with the polytropic exponent and the regression analysis to finalize the model. The measured data of a turbocharger turbine and two air cycle refrigeration turbines are used for the regression analysis. The proposed model is compact and able to present the turbine efficiency map. Its predictions agree with the measured data very well, with the corrected coefficient of determination R_c^2 ≥ 0.96 and the mean absolute percentage deviation = 1.19% for the three turbines. -- Highlights: → Performed a critical review of empirical models of turbine efficiency. → Developed an empirical model in the desired form for air cycle refrigeration, using the Taylor expansion and regression analysis. → Verified the method for developing the empirical model. → Verified the model.
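
    The regression step can be illustrated with ordinary least squares on a small polynomial basis. The basis below (pressure ratio, corrected speed, and their low-order products) and the synthetic efficiency map are assumptions standing in for the paper's Taylor-expanded terms and measured turbine data.

        import numpy as np

        rng = np.random.default_rng(3)
        pi = rng.uniform(1.5, 4.0, 120)         # pressure ratio samples
        n = rng.uniform(0.4, 1.0, 120)          # corrected-speed samples
        eta = (0.82 - 0.30 * (n - 0.75)**2      # synthetic "measured" efficiency map
               - 0.05 * (pi - 2.5)**2 + rng.normal(0, 0.005, 120))

        # Regression basis loosely mimicking a truncated Taylor expansion
        B = np.column_stack([np.ones_like(pi), pi, n, pi * n, pi**2, n**2])
        coef, *_ = np.linalg.lstsq(B, eta, rcond=None)

        eta_hat = B @ coef
        r2 = 1 - ((eta - eta_hat)**2).sum() / ((eta - eta.mean())**2).sum()
        mape = 100 * np.abs((eta - eta_hat) / eta).mean()
        print(f"R^2 = {r2:.3f}, MAPE = {mape:.2f}%")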

  8. A model for efficient management of electrical assets

    International Nuclear Information System (INIS)

    Alonso Guerreiro, A.

    2008-01-01

    At the same time that energy demand grows faster than the investments in electrical installations, the older capacity is reaching the end of its useful life. The need to run all that capacity without interruptions and to maintain its assets efficiently are the two current key points for power generation, transmission and distribution systems. This paper presents a management model which makes an effective management of assets possible under strict cost control, and which addresses those key points. It is centred on predictive techniques, involves all the departments of the organization, and goes beyond treating maintenance as the simple repair or replacement of broken-down units. A model with three basic lines is therefore needed: supply guarantee, quality of service and competitiveness, in order to allow companies to meet the current demands which characterize the power supply. (Author) 5 refs

  9. Predictive Modeling in Race Walking

    Directory of Open Access Journals (Sweden)

    Krzysztof Wiktorowicz

    2015-01-01

    Full Text Available This paper presents the use of linear and nonlinear multivariable models as tools to support the training process of race walkers. These models are calculated using data collected from race walkers' training events and are used to predict the result over a 3 km race based on training loads. The material consists of 122 training plans for 21 athletes. In order to choose the best model, the leave-one-out cross-validation method is used. The main contribution of the paper is to propose nonlinear modifications of linear models in order to achieve a smaller prediction error. It is shown that the best model is a modified LASSO regression with quadratic terms in the nonlinear part. This model has the smallest prediction error and a simplified structure, obtained by eliminating some of the predictors.
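
    A compact approximation of the winning model family, assuming scikit-learn is acceptable: LASSO on quadratic polynomial features, scored by leave-one-out cross-validation. The synthetic training-load matrix is a placeholder for the 122 real training plans.

        import numpy as np
        from sklearn.linear_model import Lasso
        from sklearn.model_selection import LeaveOneOut, cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import PolynomialFeatures, StandardScaler

        rng = np.random.default_rng(4)
        X = rng.uniform(0, 1, size=(122, 5))        # training-load predictors
        y = (12.0 + X @ rng.normal(0, 1, 5)         # synthetic 3 km results (min)
             + 0.8 * X[:, 0]**2 + rng.normal(0, 0.2, 122))

        model = make_pipeline(
            PolynomialFeatures(degree=2, include_bias=False),  # adds quadratic terms
            StandardScaler(),
            Lasso(alpha=0.01, max_iter=50_000),                # drops weak predictors
        )
        scores = cross_val_score(model, X, y, cv=LeaveOneOut(),
                                 scoring="neg_mean_absolute_error")
        print("leave-one-out mean absolute error:", -scores.mean())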

  10. Stock management in hospital pharmacy using chance-constrained model predictive control.

    Science.gov (United States)

    Jurado, I; Maestre, J M; Velarde, P; Ocampo-Martinez, C; Fernández, I; Tejera, B Isla; Prado, J R Del

    2016-05-01

    One of the most important problems in the pharmacy department of a hospital is stock management. The clinical need for drugs must be satisfied with limited labor while minimizing the use of economic resources. The complexity of the problem resides in the random nature of the drug demand and the multiple constraints that must be taken into account in every decision. In this article, chance-constrained model predictive control is proposed to deal with this problem. The flexibility of model predictive control allows taking into account explicitly the different objectives and constraints involved in the problem, while the use of chance constraints provides a trade-off between conservativeness and efficiency. The proposed solution is assessed to study its implementation in two Spanish hospitals. Copyright © 2015 Elsevier Ltd. All rights reserved.
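
    The chance-constraint mechanics can be sketched via the usual Gaussian deterministic equivalent: demanding a stock-out probability below epsilon per day tightens the expected-stock constraint by the (1 - epsilon) normal quantile times the accumulated demand standard deviation. The cvxpy formulation below, with invented demand figures and costs, illustrates that reformulation and is not the paper's model.

        import numpy as np
        import cvxpy as cp
        from scipy.stats import norm

        T = 14                               # planning horizon (days)
        d_mean = np.full(T, 20.0)            # forecast daily demand for one drug
        d_std = np.full(T, 4.0)
        z = norm.ppf(0.95)                   # 5% allowed stock-out probability

        u = cp.Variable(T, nonneg=True)      # order quantities (decision)
        s = 30.0 + cp.cumsum(u - d_mean)     # expected stock trajectory
        sigma = d_std * np.sqrt(np.arange(1, T + 1))   # accumulated demand std

        cost = cp.sum(u) + 0.1 * cp.sum(s)   # purchasing plus holding cost
        prob = cp.Problem(cp.Minimize(cost),
                          [s >= z * sigma,   # chance constraint, reformulated
                           u <= 60])         # labor / delivery capacity
        prob.solve()
        print("first move to apply (receding horizon):", u.value[0])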

  11. Adding propensity scores to pure prediction models fails to improve predictive performance

    Directory of Open Access Journals (Sweden)

    Amy S. Nowacki

    2013-08-01

    Full Text Available Background. Propensity score usage seems to be growing in popularity, leading researchers to question the possible role of propensity scores in prediction modeling, despite the lack of a theoretical rationale. It is suspected that such requests are due to the lack of differentiation regarding the goals of predictive modeling versus causal inference modeling. Therefore, the purpose of this study is to formally examine the effect of propensity scores on predictive performance. Our hypothesis is that a multivariable regression model that adjusts for all covariates will perform as well as or better than those models utilizing propensity scores with respect to model discrimination and calibration. Methods. The most commonly encountered statistical scenarios for medical prediction (logistic and proportional hazards regression) were used to investigate this research question. Random cross-validation was performed 500 times to correct for optimism. The multivariable regression models adjusting for all covariates were compared with models that included adjustment for or weighting with the propensity scores. The methods were compared based on three predictive performance measures: (1) concordance indices; (2) Brier scores; and (3) calibration curves. Results. Multivariable models adjusting for all covariates had the highest average concordance index, the lowest average Brier score, and the best calibration. Propensity score adjustment and inverse probability weighting models without adjustment for all covariates performed worse than full models and failed to improve predictive performance with full covariate adjustment. Conclusion. Propensity score techniques did not improve prediction performance measures beyond multivariable adjustment. Propensity scores are not recommended if the analytical goal is pure prediction modeling.

  12. A Riccati Based Homogeneous and Self-Dual Interior-Point Method for Linear Economic Model Predictive Control

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Frison, Gianluca; Edlund, Kristian

    2013-01-01

    In this paper, we develop an efficient interior-point method (IPM) for the linear programs arising in economic model predictive control of linear systems. The novelty of our algorithm is that it combines a homogeneous and self-dual model, and a specialized Riccati iteration procedure. We test...

  13. Model-free and model-based reward prediction errors in EEG.

    Science.gov (United States)

    Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy

    2018-05-24

    Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain. Copyright © 2018 Elsevier Inc. All rights reserved.
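
    The model-free half of this picture reduces to a delta-rule update, sketched below with assumed reward probabilities and an epsilon-greedy policy. The quantity rpe is the reward prediction error whose scalp correlate the EEG study tracks; a model-based learner would additionally carry a learned transition model, giving rise to the state prediction errors mentioned above.

        import numpy as np

        rng = np.random.default_rng(5)
        q = np.zeros(2)                   # model-free values of two actions
        alpha = 0.1                       # learning rate
        p_reward = np.array([0.8, 0.2])   # true (unknown) reward probabilities

        for trial in range(500):
            a = rng.integers(2) if rng.random() < 0.1 else int(q.argmax())
            r = float(rng.random() < p_reward[a])
            rpe = r - q[a]                # reward prediction error
            q[a] += alpha * rpe           # value update driven by the RPE

        print("learned action values:", q)   # approach the true reward rates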

  14. Evaluation of intradural stimulation efficiency and selectivity in a computational model of spinal cord stimulation.

    Directory of Open Access Journals (Sweden)

    Bryan Howell

    Full Text Available Spinal cord stimulation (SCS) is an alternative or adjunct therapy to treat chronic pain, a prevalent and clinically challenging condition. Although SCS has substantial clinical success, the therapy is still prone to failures, including lead breakage, lead migration, and poor pain relief. The goal of this study was to develop a computational model of SCS and use the model to compare activation of neural elements during intradural and extradural electrode placement. We constructed five patient-specific models of SCS. Stimulation thresholds predicted by the model were compared to stimulation thresholds measured intraoperatively, and we used these models to quantify the efficiency and selectivity of intradural and extradural SCS. Intradural placement dramatically increased stimulation efficiency and reduced the power required to stimulate the dorsal columns by more than 90%. Intradural placement also increased selectivity, allowing activation of a greater proportion of dorsal column fibers before spread of activation to dorsal root fibers, as well as more selective activation of individual dermatomes at different lateral deviations from the midline. Further, the results suggest that current electrode designs used for extradural SCS are not optimal for intradural SCS, and a novel azimuthal tripolar design increased stimulation selectivity, even beyond that achieved with an intradural paddle array. Increased stimulation efficiency is expected to increase the battery life of implantable pulse generators, increase the recharge interval of rechargeable implantable pulse generators, and potentially reduce stimulator volume. The greater selectivity of intradural stimulation may improve the success rate of SCS by mitigating the sensitivity of pain relief to malpositioning of the electrode. The outcome of this effort is a better quantitative understanding of how intradural electrode placement can potentially increase the selectivity and efficiency of SCS

  15. Nonlinear chaotic model for predicting storm surges

    Directory of Open Access Journals (Sweden)

    M. Siek

    2010-09-01

    Full Text Available This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by the adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables. We implemented the univariate and multivariate chaotic models with direct and multi-steps prediction techniques and optimized these models using an exhaustive search method. The built models were tested for predicting storm surge dynamics for different stormy conditions in the North Sea, and are compared to neural network models. The results show that the chaotic models can generally provide reliable and accurate short-term storm surge predictions.
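
    A minimal version of such an adaptive local model: reconstruct the phase space by time-delay embedding, then average the one-step evolution of the nearest dynamical neighbours of the current state. The embedding dimension, delay, neighbour count and the toy signal below are arbitrary illustrative choices standing in for optimized settings and surge observations.

        import numpy as np

        def embed(x, dim, tau):
            """Rows are delay vectors [x(t), x(t+tau), ..., x(t+(dim-1)tau)]."""
            n = len(x) - (dim - 1) * tau
            return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

        rng = np.random.default_rng(6)
        t = np.arange(2000) * 0.05
        x = np.sin(t) + 0.5 * np.sin(2.2 * t) + 0.05 * rng.normal(size=t.size)

        dim, tau, k = 4, 5, 10
        Z = embed(x, dim, tau)
        y = x[(dim - 1) * tau + 1:]         # next value for each delay vector
        query = Z[-1]                       # current reconstructed state

        d = np.linalg.norm(Z[:-1] - query, axis=1)
        neighbours = np.argsort(d)[:k]      # dynamical neighbours in phase space
        print("one-step forecast:", y[neighbours].mean())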

  16. A carbon risk prediction model for Chinese heavy-polluting industrial enterprises based on support vector machine

    International Nuclear Information System (INIS)

    Zhou, Zhifang; Xiao, Tian; Chen, Xiaohong; Wang, Chang

    2016-01-01

    Chinese heavy-polluting industrial enterprises, especially in the petrochemical and chemical industries, characterized by low carbon efficiency and high emission loads, are facing tremendous pressure to reduce emissions against the background of a global shortage of energy supply and constraints on carbon emissions. However, due to the limited amount of theoretical and practical research in this field, problems such as the lack of prediction indicators or models and of a quantified standard of carbon risk remain unsolved. In this paper, the connotation of carbon risk and an assessment index system for Chinese heavy-polluting industrial enterprises (e.g., coal enterprises, petrochemical enterprises, chemical enterprises, etc.) based on a support vector machine are presented. By using several heavy-polluting industrial enterprises' related data, the SVM model is trained to predict the carbon risk level of a specific enterprise, which allows the enterprise to identify and manage its carbon risks. The result shows that this method can predict an enterprise's carbon risk level in an efficient, accurate way with high practical application and generalization value.
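
    Operationally this amounts to a supervised SVM classifier over the assessment indicators. A hedged scikit-learn sketch with hypothetical indicator columns and synthetic labels, since the actual index system and enterprise data are not given in the abstract:

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(7)
        # Hypothetical indicators per enterprise: emission intensity, energy
        # mix, abatement cost, regulatory exposure, and so on.
        X = rng.normal(size=(120, 6))
        risk = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 120) > 0).astype(int)

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        print("cross-validated accuracy:", cross_val_score(clf, X, risk, cv=5).mean())
        clf.fit(X, risk)
        print("predicted risk level:", clf.predict(X[:1]))   # 1 = high carbon risk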

  17. Predicting High or Low Transfer Efficiency of Photovoltaic Systems Using a Novel Hybrid Methodology Combining Rough Set Theory, Data Envelopment Analysis and Genetic Programming

    Directory of Open Access Journals (Sweden)

    Lee-Ing Tong

    2012-02-01

    Full Text Available Solar energy has become an important energy source in recent years as it generates less pollution than other energies. A photovoltaic (PV) system, which typically has many components, converts solar energy into electrical energy. With the development of advanced engineering technologies, the transfer efficiency of a PV system has been increased from low to high. The combination of components in a PV system influences its transfer efficiency. Therefore, when predicting the transfer efficiency of a PV system, one must consider the relationship among system components. This work accurately predicts whether the transfer efficiency of a PV system is high or low using a novel hybrid model that combines rough set theory (RST), data envelopment analysis (DEA), and genetic programming (GP). Finally, real data-sets are utilized to demonstrate the accuracy of the proposed method.

  18. Temperature modelling and prediction for activated sludge systems.

    Science.gov (United States)

    Lippi, S; Rosso, D; Lubello, C; Canziani, R; Stenstrom, M K

    2009-01-01

    Temperature is an important factor affecting biomass activity, which is critical to maintaining efficient biological wastewater treatment, as well as physicochemical properties of mixed liquor such as dissolved oxygen saturation and settling velocity. Controlling temperature is not normally possible for treatment systems, but incorporating factors impacting temperature in the design process, such as the aeration system, surface-to-volume ratio, and tank geometry, can reduce the range of temperature extremes and improve the overall process performance. Determining how much these design or upgrade options affect the tank temperature requires a temperature model that can be used with existing design methodologies. This paper presents a new steady-state temperature model developed by incorporating the best aspects of previously published models, introducing new functions for selected heat exchange paths and improving the method for predicting the effects of covering aeration tanks. Numerical improvements with embedded reference data provide simpler formulation, faster execution, and easier sensitivity analyses, using an ordinary spreadsheet. The paper presents several cases to validate the model.

  19. Improving the Prediction of Total Surgical Procedure Time Using Linear Regression Modeling

    Directory of Open Access Journals (Sweden)

    Eric R. Edelman

    2017-06-01

    Full Text Available For efficient utilization of operating rooms (ORs), accurate schedules of assigned block time and sequences of patient cases need to be made. The quality of these planning tools is dependent on the accurate prediction of total procedure time (TPT) per case. In this paper, we attempt to improve the accuracy of TPT predictions by using linear regression models based on estimated surgeon-controlled time (eSCT) and other variables relevant to TPT. We extracted data from a Dutch benchmarking database of all surgeries performed in six academic hospitals in The Netherlands from 2012 till 2016. The final dataset consisted of 79,983 records, describing 199,772 h of total OR time. Potential predictors of TPT that were included in the subsequent analysis were eSCT, patient age, type of operation, American Society of Anesthesiologists (ASA) physical status classification, and type of anesthesia used. First, we computed the predicted TPT based on a previously described fixed ratio model for each record, multiplying eSCT by 1.33. This number is based on the research performed by van Veen-Berkx et al., which showed that 33% of SCT is generally a good approximation of anesthesia-controlled time (ACT). We then systematically tested all possible linear regression models to predict TPT using eSCT in combination with the other available independent variables. In addition, all regression models were again tested without eSCT as a predictor to predict ACT separately (which leads to TPT by adding SCT). TPT was most accurately predicted using a linear regression model based on the independent variables eSCT, type of operation, ASA classification, and type of anesthesia. This model performed significantly better than the fixed ratio model and the method of predicting ACT separately. Making use of these more accurate predictions in planning and sequencing algorithms may enable an increase in utilization of ORs, leading to significant financial and productivity related benefits.
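
    To make the comparison concrete, the toy sketch below contrasts the fixed-ratio benchmark (TPT = 1.33 x eSCT) with a linear regression on eSCT plus categorical covariates, using synthetic data in which anesthesia-controlled time is deliberately not a fixed share of surgeon-controlled time. All coefficients and distributions are invented for illustration.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import mean_absolute_error

        rng = np.random.default_rng(8)
        n = 2000
        esct = rng.gamma(4.0, 30.0, n)     # estimated surgeon-controlled time (min)
        op_type = rng.integers(0, 3, n)    # coded operation type
        asa = rng.integers(1, 4, n)        # ASA physical status class
        # Synthetic "true" TPT: ACT varies with ASA class and operation type
        tpt = esct * (1.25 + 0.05 * asa) + 10 * op_type + rng.normal(0, 15, n)

        fixed_ratio = 1.33 * esct          # benchmark model
        X = np.column_stack([esct, op_type, asa])
        reg = LinearRegression().fit(X, tpt)

        print("fixed-ratio MAE:", mean_absolute_error(tpt, fixed_ratio))
        print("regression  MAE:", mean_absolute_error(tpt, reg.predict(X)))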

  20. Improving the Prediction of Total Surgical Procedure Time Using Linear Regression Modeling.

    Science.gov (United States)

    Edelman, Eric R; van Kuijk, Sander M J; Hamaekers, Ankie E W; de Korte, Marcel J M; van Merode, Godefridus G; Buhre, Wolfgang F F A

    2017-01-01

    For efficient utilization of operating rooms (ORs), accurate schedules of assigned block time and sequences of patient cases need to be made. The quality of these planning tools is dependent on the accurate prediction of total procedure time (TPT) per case. In this paper, we attempt to improve the accuracy of TPT predictions by using linear regression models based on estimated surgeon-controlled time (eSCT) and other variables relevant to TPT. We extracted data from a Dutch benchmarking database of all surgeries performed in six academic hospitals in The Netherlands from 2012 till 2016. The final dataset consisted of 79,983 records, describing 199,772 h of total OR time. Potential predictors of TPT that were included in the subsequent analysis were eSCT, patient age, type of operation, American Society of Anesthesiologists (ASA) physical status classification, and type of anesthesia used. First, we computed the predicted TPT based on a previously described fixed ratio model for each record, multiplying eSCT by 1.33. This number is based on the research performed by van Veen-Berkx et al., which showed that 33% of SCT is generally a good approximation of anesthesia-controlled time (ACT). We then systematically tested all possible linear regression models to predict TPT using eSCT in combination with the other available independent variables. In addition, all regression models were again tested without eSCT as a predictor to predict ACT separately (which leads to TPT by adding SCT). TPT was most accurately predicted using a linear regression model based on the independent variables eSCT, type of operation, ASA classification, and type of anesthesia. This model performed significantly better than the fixed ratio model and the method of predicting ACT separately. Making use of these more accurate predictions in planning and sequencing algorithms may enable an increase in utilization of ORs, leading to significant financial and productivity related benefits.

  1. Extracting falsifiable predictions from sloppy models.

    Science.gov (United States)

    Gutenkunst, Ryan N; Casey, Fergal P; Waterfall, Joshua J; Myers, Christopher R; Sethna, James P

    2007-12-01

    Successful predictions are among the most compelling validations of any model. Extracting falsifiable predictions from nonlinear multiparameter models is complicated by the fact that such models are commonly sloppy, possessing sensitivities to different parameter combinations that range over many decades. Here we discuss how sloppiness affects the sorts of data that best constrain model predictions, makes linear uncertainty approximations dangerous, and introduces computational difficulties in Monte-Carlo uncertainty analysis. We also present a useful test problem and suggest refinements to the standards by which models are communicated.

  2. Highly predictive hologram QSAR models of nitrile-containing cruzain inhibitors.

    Science.gov (United States)

    Silva, Daniel Gedder; Rocha, Josmar Rodrigues; Sartori, Geraldo Rodrigues; Montanari, Carlos Alberto

    2017-11-01

    The HQSAR, molecular docking, and ROCS methods were applied to a data-set of 57 cruzain inhibitors. The best HQSAR model (q² = .70, r² = .95, [Formula: see text] = .62, [Formula: see text] = .09 and [Formula: see text] = .26), employing well-balanced, diverse training (40) and test (17) sets, was obtained using atoms (A), bonds (B), and hydrogen (H) as fragment distinctions and 6-9 as fragment sizes. This model was then used to predict the unknown potencies of 121 compounds (the V1 database), giving rise to a satisfactory predictive r² value of .65 (external validation). By employing an extra external data-set comprising 1223 compounds (the V3 database), either retrieved from the ChEMBL or CDD databases, an overall ROC AUC score well over .70 was obtained. The contribution maps obtained with the best HQSAR model (model 3.4) are in agreement with the predicted binding mode and with the biological potencies of the studied compounds. We also screened these compounds using the ROCS method, a Gaussian-shape volume filter able to quickly identify the shapes that match a query molecule. The area under the curve (AUC) obtained with the ROC curves (ROC AUC) was .72, indicating that the method was very efficient in distinguishing between active and inactive cruzain inhibitors. This set of information guided us to propose novel cruzain inhibitors to be synthesized. Then, the best HQSAR model obtained was used to predict the pIC50 values of these new compounds. Some compounds identified using this method have shown calculated potencies higher than those which originated them.

  3. Efficient and robust estimation for longitudinal mixed models for binary data

    DEFF Research Database (Denmark)

    Holst, René

    2009-01-01

    This paper proposes a longitudinal mixed model for binary data. The model extends the classical Poisson trick, in which a binomial regression is fitted by switching to a Poisson framework. A recent estimating equations method for generalized linear longitudinal mixed models, called GEEP, is used...... as a vehicle for fitting the conditional Poisson regressions, given a latent process of serial correlated Tweedie variables. The regression parameters are estimated using a quasi-score method, whereas the dispersion and correlation parameters are estimated by use of bias-corrected Pearson-type estimating...... equations, using second moments only. Random effects are predicted by BLUPs. The method provides a computationally efficient and robust approach to the estimation of longitudinal clustered binary data and accommodates linear and non-linear models. A simulation study is used for validation and finally...

  4. The viscoelastic standard nonlinear solid model: predicting the response of the lumbar intervertebral disk to low-frequency vibrations.

    Science.gov (United States)

    Groth, Kevin M; Granata, Kevin P

    2008-06-01

    Due to the mathematical complexity of current musculoskeletal spine models, there is a need for computationally efficient models of the intervertebral disk (IVD). The aim of this study is to develop a mathematical model that will adequately describe the motion of the IVD under axial cyclic loading as well as maintain computational efficiency for use in future musculoskeletal spine models. Several studies have successfully modeled the creep characteristics of the IVD using the three-parameter viscoelastic standard linear solid (SLS) model. However, when the SLS model is subjected to cyclic loading, it underestimates the load relaxation, the cyclic modulus, and the hysteresis of the human lumbar IVD. A viscoelastic standard nonlinear solid (SNS) model was used to predict the response of the human lumbar IVD subjected to low-frequency vibration. Nonlinear behavior of the SNS model was simulated by a strain-dependent elastic modulus on the SLS model. Parameters of the SNS model were estimated from experimental load deformation and stress-relaxation curves obtained from the literature. The SNS model was able to predict the cyclic modulus of the IVD at frequencies of 0.01 Hz, 0.1 Hz, and 1 Hz. Furthermore, the SNS model was able to quantitatively predict the load relaxation at a frequency of 0.01 Hz. However, model performance was unsatisfactory when predicting load relaxation and hysteresis at higher frequencies (0.1 Hz and 1 Hz). The SLS model of the lumbar IVD may require strain-dependent elastic and viscous behavior to represent the dynamic response to compressive strain.
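
    For reference, one common parameterization of the three-parameter standard linear solid (a spring E_1 in parallel with a Maxwell arm of spring E_2 and dashpot eta) gives the constitutive relation below, in LaTeX notation; the abstract does not state the authors' exact parameterization, so this form and the strain-dependent modification are assumptions.

        \sigma + \frac{\eta}{E_2}\,\dot{\sigma}
          = E_1\,\varepsilon + \frac{\eta\,(E_1 + E_2)}{E_2}\,\dot{\varepsilon}

    The SNS variant then replaces the long-time modulus with a strain-dependent one, e.g. E_1(\varepsilon) = E_0 + k\,\varepsilon (a hypothetical form), so that both the instantaneous modulus E_1(\varepsilon) + E_2 and the relaxation behavior stiffen with compressive strain.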

  5. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models 1: repeating earthquakes

    Science.gov (United States)

    Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki

    2012-01-01

    The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events is predicted quite well by fixed slip and fixed recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.

  6. A Subject-Specific Kinematic Model to Predict Human Motion in Exoskeleton-Assisted Gait

    Science.gov (United States)

    Torricelli, Diego; Cortés, Camilo; Lete, Nerea; Bertelsen, Álvaro; Gonzalez-Vargas, Jose E.; del-Ama, Antonio J.; Dimbwadyo, Iris; Moreno, Juan C.; Florez, Julian; Pons, Jose L.

    2018-01-01

    The relative motion between human and exoskeleton is a crucial factor that has remarkable consequences on the efficiency, reliability and safety of human-robot interaction. Unfortunately, its quantitative assessment has been largely overlooked in the literature. Here, we present a methodology that allows predicting the motion of the human joints from the knowledge of the angular motion of the exoskeleton frame. Our method combines a subject-specific skeletal model with a kinematic model of a lower limb exoskeleton (H2, Technaid), imposing specific kinematic constraints between them. To calibrate the model and validate its ability to predict the relative motion in a subject-specific way, we performed experiments on seven healthy subjects during treadmill walking tasks. We demonstrate a prediction accuracy lower than 3.5° globally, and around 1.5° at the hip level, which represent an improvement up to 66% compared to the traditional approach assuming no relative motion between the user and the exoskeleton. PMID:29755336

  7. A Subject-Specific Kinematic Model to Predict Human Motion in Exoskeleton-Assisted Gait.

    Science.gov (United States)

    Torricelli, Diego; Cortés, Camilo; Lete, Nerea; Bertelsen, Álvaro; Gonzalez-Vargas, Jose E; Del-Ama, Antonio J; Dimbwadyo, Iris; Moreno, Juan C; Florez, Julian; Pons, Jose L

    2018-01-01

    The relative motion between human and exoskeleton is a crucial factor that has remarkable consequences on the efficiency, reliability and safety of human-robot interaction. Unfortunately, its quantitative assessment has been largely overlooked in the literature. Here, we present a methodology that allows predicting the motion of the human joints from the knowledge of the angular motion of the exoskeleton frame. Our method combines a subject-specific skeletal model with a kinematic model of a lower limb exoskeleton (H2, Technaid), imposing specific kinematic constraints between them. To calibrate the model and validate its ability to predict the relative motion in a subject-specific way, we performed experiments on seven healthy subjects during treadmill walking tasks. We demonstrate a prediction accuracy lower than 3.5° globally, and around 1.5° at the hip level, which represent an improvement up to 66% compared to the traditional approach assuming no relative motion between the user and the exoskeleton.

  8. Model-predictive control based on Takagi-Sugeno fuzzy model for electrical vehicles delayed model

    DEFF Research Database (Denmark)

    Khooban, Mohammad-Hassan; Vafamand, Navid; Niknam, Taher

    2017-01-01

    Electric vehicles (EVs) play a significant role in different applications, such as commuter vehicles and short distance transport applications. This study presents a new structure of model-predictive control based on the Takagi-Sugeno fuzzy model, linear matrix inequalities, and a non-quadratic Lyapunov function for the speed control of EVs including time-delay states and parameter uncertainty. Experimental data, using the Federal Test Procedure (FTP-75), is applied to test the performance and robustness of the suggested controller in the presence of time-varying parameters. Besides, a comparison is made between the results of the suggested robust strategy and those obtained from some of the most recent studies on the same topic, to assess the efficiency of the suggested controller. Finally, the experimental results based on a TMS320F28335 DSP are performed on a direct current motor. Simulation...

  9. Explicit Nonlinear Model Predictive Control Theory and Applications

    CERN Document Server

    Grancharova, Alexandra

    2012-01-01

    Nonlinear Model Predictive Control (NMPC) has become the accepted methodology to solve complex control problems related to process industries. The main motivation behind explicit NMPC is that an explicit state feedback law avoids the need for executing a numerical optimization algorithm in real time. The benefits of an explicit solution, in addition to the efficient on-line computations, include also verifiability of the implementation and the possibility to design embedded control systems with low software and hardware complexity. This book considers the multi-parametric Nonlinear Programming (mp-NLP) approaches to explicit approximate NMPC of constrained nonlinear systems, developed by the authors, as well as their applications to various NMPC problem formulations and several case studies. The following types of nonlinear systems are considered, resulting in different NMPC problem formulations: nonlinear systems described by first-principles models and nonlinear systems described by black-box models; ...

  10. Novel Predictive Model of the Debonding Strength for Masonry Members Retrofitted with FRP

    Directory of Open Access Journals (Sweden)

    Iman Mansouri

    2016-11-01

    Full Text Available Strengthening of masonry members using externally bonded (EB) fiber-reinforced polymer (FRP) composites has become a popular structural strengthening method over the past decade due to the well-known advantages of FRP composites, including their high strength-to-weight ratio and excellent corrosion resistance. In this study, gene expression programming (GEP), as a novel tool, has been used to predict the debonding strength of retrofitted masonry members. The predictions of the new debonding resistance model, as well as several other models, are evaluated by comparing their estimates with experimental results of a large test database. The results indicate that the new model has the best efficiency among the models examined and represents an improvement over the other models. The root mean square errors (RMSE) of the best empirical Kashyap model on the training and test data were reduced by 51.7% and 41.3%, respectively, using the GEP model in estimating debonding strength.

  11. Hourly predictive Levenberg-Marquardt ANN and multi linear regression models for predicting of dew point temperature

    Science.gov (United States)

    Zounemat-Kermani, Mohammad

    2012-08-01

    In this study, the ability of two models, multi linear regression (MLR) and a Levenberg-Marquardt (LM) feed-forward neural network, was examined to estimate the hourly dew point temperature. Dew point temperature is the temperature at which water vapor in the air condenses into liquid. This temperature can be useful in estimating meteorological variables such as fog, rain, snow, dew, and evapotranspiration and in investigating agronomical issues such as stomatal closure in plants. The availability of hourly records of climatic data (air temperature, relative humidity and pressure) which could be used to predict dew point temperature initiated the practice of modeling. Additionally, the wind vector (wind speed magnitude and direction) and a conceptual input of weather condition were employed as other input variables. Three quantitative standard statistical performance evaluation measures, i.e. the root mean squared error, mean absolute error, and absolute logarithmic Nash-Sutcliffe efficiency coefficient (|Log(NS)|), were employed to evaluate the performances of the developed models. The results showed that applying the wind vector and weather condition as input vectors along with meteorological variables could slightly increase the ANN and MLR predictive accuracy. The results also revealed that LM-NN was superior to the MLR model and the best performance was obtained by considering all potential input variables in terms of different evaluation criteria.

  12. Spatial Economics Model Predicting Transport Volume

    Directory of Open Access Journals (Sweden)

    Lu Bo

    2016-10-01

    Full Text Available It is extremely important to predict logistics requirements in a scientific and rational way. However, in recent years improvements to prediction methods have not been very significant, and traditional statistical prediction methods suffer from low precision and poor interpretability, so they can neither guarantee the generalization ability of the prediction model theoretically nor explain the models effectively. Therefore, in combination with the theories of spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, the study identifies the leading industries that can produce a large volume of cargo, and further predicts the static logistics generation of Zhuanghe and its hinterland. By integrating various factors that can affect regional logistics requirements, this study establishes a logistics requirements potential model based on spatial economic principles, and expands the approach to logistics requirements prediction from purely statistical principles to the new area of spatial and regional economics.

  13. High Precision Clock Bias Prediction Model in Clock Synchronization System

    Directory of Open Access Journals (Sweden)

    Zan Liu

    2016-01-01

    Full Text Available Time synchronization is a fundamental requirement for many services provided by a distributed system. Clock calibration through a time signal is the usual way to realize synchronization among the clocks used in a distributed system. Interference to time signal transmission or equipment failures may cause time synchronization to fail. To solve this problem, a clock bias prediction module is run in parallel with the clock calibration system. To improve the precision of clock bias prediction, the first-order grey model with one variable (GM(1,1)) is proposed. In the traditional GM(1,1) model, the combination of parameters determined by the least squares criterion is not optimal; therefore, particle swarm optimization (PSO) is used to optimize the GM(1,1) model. At the same time, in order to avoid PSO getting stuck at a local optimum and to improve its efficiency, mechanisms of double subgroups and nonlinearly decreasing inertia weight are proposed. In order to test the precision of the improved model, we designed clock calibration experiments, where the time signal is transferred via radio and wired channels, respectively. The improved model is built on the basis of clock biases acquired in the experiments. The results show that the improved model is superior to other models both in precision and in stability. The precision of the improved model increased by 66.4%-76.7%.
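
    The GM(1,1) backbone of this scheme is compact enough to state in full: accumulate the series, fit the whitened first-order equation by least squares, and difference the fitted response back. The sketch below implements that baseline on invented clock-bias numbers; the paper's PSO tuning and the double-subgroup mechanism are omitted.

        import numpy as np

        def gm11_predict(x0, n_ahead=1):
            """Classical GM(1,1): x0(k) + a*z1(k) = b fitted by least squares."""
            x0 = np.asarray(x0, dtype=float)
            x1 = np.cumsum(x0)                      # accumulated series (AGO)
            z1 = 0.5 * (x1[:-1] + x1[1:])           # mean background values
            B = np.column_stack([-z1, np.ones_like(z1)])
            a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
            k = np.arange(len(x0) + n_ahead)
            x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
            x0_hat = np.diff(x1_hat, prepend=0.0)   # inverse AGO restores the series
            return x0_hat[-n_ahead:]

        bias = [12.1, 12.4, 12.8, 13.1, 13.5, 13.8]   # toy clock-bias series (ns)
        print("next-epoch prediction:", gm11_predict(bias))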

  14. Modeling of alpha mass-efficiency curve

    International Nuclear Information System (INIS)

    Semkow, T.M.; Jeter, H.W.; Parsa, B.; Parekh, P.P.; Haines, D.K.; Bari, A.

    2005-01-01

    We present a model for efficiency of a detector counting gross α radioactivity from both thin and thick samples, corresponding to low and high sample masses in the counting planchette. The model includes self-absorption of α particles in the sample, energy loss in the absorber, range straggling, as well as detector edge effects. The surface roughness of the sample is treated in terms of fractal geometry. The model reveals a linear dependence of the detector efficiency on the sample mass, for low masses, as well as a power-law dependence for high masses. It is, therefore, named the linear-power-law (LPL) model. In addition, we consider an empirical power-law (EPL) curve, and an exponential (EXP) curve. A comparison is made of the LPL, EPL, and EXP fits to the experimental α mass-efficiency data from gas-proportional detectors for selected radionuclides: 238 U, 230 Th, 239 Pu, 241 Am, and 244 Cm. Based on this comparison, we recommend working equations for fitting mass-efficiency data. Measurement of α radioactivity from a thick sample can determine the fractal dimension of its surface

  15. PockDrug: A Model for Predicting Pocket Druggability That Overcomes Pocket Estimation Uncertainties.

    Science.gov (United States)

    Borrel, Alexandre; Regad, Leslie; Xhaard, Henri; Petitjean, Michel; Camproux, Anne-Claude

    2015-04-27

    Predicting protein druggability is a key interest in the target identification phase of drug discovery. Here, we assess the pocket estimation methods' influence on druggability predictions by comparing statistical models constructed from pockets estimated using different pocket estimation methods: a proximity of either 4 or 5.5 Å to a cocrystallized ligand or DoGSite and fpocket estimation methods. We developed PockDrug, a robust pocket druggability model that copes with uncertainties in pocket boundaries. It is based on a linear discriminant analysis from a pool of 52 descriptors combined with a selection of the most stable and efficient models using different pocket estimation methods. PockDrug retains the best combinations of three pocket properties which impact druggability: geometry, hydrophobicity, and aromaticity. It results in an average accuracy of 87.9% ± 4.7% using a test set and exhibits higher accuracy (∼5-10%) than previous studies that used an identical apo set. In conclusion, this study confirms the influence of pocket estimation on pocket druggability prediction and proposes PockDrug as a new model that overcomes pocket estimation variability.

  16. Development and Validation of a Multidisciplinary Tool for Accurate and Efficient Rotorcraft Noise Prediction (MUTE)

    Science.gov (United States)

    Liu, Yi; Anusonti-Inthra, Phuriwat; Diskin, Boris

    2011-01-01

    A physics-based, systematically coupled, multidisciplinary prediction tool (MUTE) for rotorcraft noise was developed and validated with a wide range of flight configurations and conditions. MUTE is an aggregation of multidisciplinary computational tools that accurately and efficiently model the physics of the source of rotorcraft noise, and predict the noise at far-field observer locations. It uses systematic coupling approaches among multiple disciplines including Computational Fluid Dynamics (CFD), Computational Structural Dynamics (CSD), and high fidelity acoustics. Within MUTE, advanced high-order CFD tools are used around the rotor blade to predict the transonic flow (shock wave) effects, which generate the high-speed impulsive noise. Predictions of the blade-vortex interaction noise in low speed flight are also improved by using the Particle Vortex Transport Method (PVTM), which preserves the wake flow details required for blade/wake and fuselage/wake interactions. The accuracy of the source noise prediction is further improved by utilizing a coupling approach between CFD and CSD, so that the effects of key structural dynamics, elastic blade deformations, and trim solutions are correctly represented in the analysis. The blade loading information and/or the flow field parameters around the rotor blade predicted by the CFD/CSD coupling approach are used to predict the acoustic signatures at far-field observer locations with a high-fidelity noise propagation code (WOPWOP3). The predicted results from the MUTE tool for rotor blade aerodynamic loading and far-field acoustic signatures are compared and validated with a variety of experimental data sets, such as UH60-A data, DNW test data and HART II test data.

  17. Toward an Efficient Prediction of Solar Flares: Which Parameters, and How?

    Directory of Open Access Journals (Sweden)

    Manolis K. Georgoulis

    2013-11-01

    Full Text Available Solar flare prediction has become a forefront topic in contemporary solar physics, with numerous published methods relying on numerous predictive parameters, which can even be divided into parameter classes. Attempting further insight, we focus on two popular classes of flare-predictive parameters, namely multiscale (i.e., fractal and multifractal) and proxy (i.e., morphological) parameters, and we complement our analysis with a study of the predictive capability of fundamental physical parameters (i.e., magnetic free energy and relative magnetic helicity). Rather than applying the studied parameters to a comprehensive statistical sample of flaring and non-flaring active regions, which was the subject of our previous studies, the novelty of this work is their application to an exceptionally long and high-cadence time series of the intensely eruptive National Oceanic and Atmospheric Administration (NOAA) active region (AR) 11158, observed by the Helioseismic and Magnetic Imager on board the Solar Dynamics Observatory. Aiming for a detailed study of the temporal evolution of each parameter, we seek distinctive patterns that could be associated with the four largest flares in the AR in the course of its five-day observing interval. We find that only proxy parameters tend to show preflare impulses that are practical enough to warrant subsequent investigation with sufficient statistics. Combining these findings with previous results, we conclude that: (i) carefully constructed, physically intuitive proxy parameters may be our best asset toward efficient future flare forecasting; and (ii) the time series of promising parameters may be as important as their instantaneous values. Value-based prediction is the only approach followed so far. Our results call for novel signal and/or image processing techniques to efficiently utilize combined amplitude and temporal-profile information to optimize the inferred solar-flare probabilities.

  18. Prediction and Validation of Heat Release of a Direct Injection Diesel Engine Using a Multi-Zone Model

    Science.gov (United States)

    Anang Nugroho, Bagus; Sugiarto, Bambang; Prawoto; Shalahuddin, Lukman

    2014-04-01

    The objective of this study is to develop a simulation model capable of accurately predicting the heat release of diesel combustion in efficient computation time. A multi-zone packet model has been applied to solve the combustion phenomena inside the diesel cylinder. The model formulations are presented first, and then the numerical results are validated on a single-cylinder direct injection diesel engine at various engine speeds and injection timings. The model was found to be promising in fulfilling the objective above.

  19. A Prediction Mechanism of Energy Consumption in Residential Buildings Using Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Israr Ullah

    2018-02-01

    Full Text Available The Internet of Things (IoT) is considered one of the future disruptive technologies, with the potential to bring positive change in human lifestyle and uplift living standards. Many IoT-based applications have been designed in various fields, e.g., security, health, education, manufacturing, transportation, etc. IoT has transformed conventional homes into smart homes. By attaching small IoT devices to various appliances, we can not only monitor but also control the indoor environment as per user demand. Intelligent IoT devices can also be used for optimal energy utilization by operating the associated equipment only when it is needed. In this paper, we propose a Hidden Markov Model based algorithm to predict energy consumption in Korean residential buildings using data collected through smart meters. We use energy consumption data collected from four multi-storied buildings located in Seoul, South Korea, for model validation and results analysis. The proposed model's prediction results are compared with three well-known prediction algorithms, i.e., Support Vector Machine (SVM), Artificial Neural Network (ANN) and Classification and Regression Trees (CART). Comparative analysis shows that our proposed model achieves a root mean square error 2.96% lower than ANN, 6.09% lower than SVM and 9.03% lower than CART. To further establish and validate the prediction results of our proposed model, we performed a temporal granularity analysis, evaluating the model on hourly, daily and weekly data aggregations. Prediction accuracy in terms of root mean square error for hourly, daily and weekly data is 2.62, 1.54 and 0.46, respectively, showing that prediction accuracy improves for coarse-grained data. The higher prediction accuracy gives us confidence to further explore its application in building control systems for achieving better energy efficiency.
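
    A rough sketch of Hidden-Markov-Model-based consumption prediction under stated assumptions: it uses the hmmlearn package, synthetic stand-ins for smart-meter readings, and a naive one-step-ahead rule (predict the mean of the most likely successor state). The paper's exact algorithm is not reproduced here.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(1)
# Synthetic consumption readings (kWh); stand-in for smart-meter data
consumption = np.abs(rng.normal(2.0, 0.5, size=500)).reshape(-1, 1)

hmm = GaussianHMM(n_components=4, covariance_type="diag",
                  n_iter=100, random_state=1)
hmm.fit(consumption)

states = hmm.predict(consumption)            # decoded hidden states
succ = np.argmax(hmm.transmat_, axis=1)      # most likely successor per state
pred = hmm.means_[succ[states[:-1]], 0]      # naive one-step-ahead forecasts

rmse = float(np.sqrt(np.mean((consumption[1:, 0] - pred) ** 2)))
print("in-sample one-step RMSE (kWh):", round(rmse, 3))
```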

  20. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    Full Text Available We propose a weather prediction model in this article based on a neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the "fuzzy rule-based neural network", which simulates sequential relations among fuzzy sets using an artificial neural network; and the second part is the "neural fuzzy inference system", which is based on the first part but can learn new fuzzy rules from the previous ones according to the algorithm we propose. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. Accurate weather prediction is clearly beneficial; however, the excessive pursuit of accuracy makes some "accurate" prediction results meaningless, and numerical prediction models are often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we make the predicted outcomes of precipitation more accurate and the prediction methods simpler than with a complex numerical forecasting model that would occupy large computation resources, be time-consuming and have a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than with traditional artificial neural networks that have low predictive accuracy.

  1. Energy technologies and energy efficiency in economic modelling

    DEFF Research Database (Denmark)

    Klinge Jacobsen, Henrik

    1998-01-01

    This paper discusses different approaches to incorporating energy technologies and technological development in energy-economic models. Technological development is a very important issue in long-term energy demand projections and in environmental analyses. Different assumptions on technological ...... of renewable energy and especially wind power will increase the rate of efficiency improvement. A technologically based model in this case indirectly makes the energy efficiency endogenous in the aggregate energy-economy model....... technological development. This paper examines the effect on aggregate energy efficiency of using technological models to describe a number of specific technologies and of incorporating these models in an economic model. Different effects from the technology representation are illustrated. Vintage effects...... illustrates the dependence of average efficiencies and productivity on capacity utilisation rates. In the long run regulation induced by environmental policies are also very important for the improvement of aggregate energy efficiency in the energy supply sector. A Danish policy to increase the share...

  2. Modeling adaptation of carbon use efficiency in microbial communities

    Directory of Open Access Journals (Sweden)

    Steven D Allison

    2014-10-01

    Full Text Available In new microbial-biogeochemical models, microbial carbon use efficiency (CUE) is often assumed to decline with increasing temperature. Under this assumption, soil carbon losses under warming are small because microbial biomass declines. Yet there is also empirical evidence that CUE may adapt (i.e., become less sensitive) to warming, thereby mitigating negative effects on microbial biomass. To analyze potential mechanisms of CUE adaptation, I used two theoretical models to implement a tradeoff between microbial uptake rate and CUE. This rate-yield tradeoff is based on thermodynamic principles and suggests that microbes with greater investment in resource acquisition should have lower CUE. Microbial communities or individuals could adapt to warming by reducing investment in enzymes and uptake machinery. Consistent with this idea, a simple analytical model predicted that adaptation can offset 50% of the warming-induced decline in CUE. To assess the ecosystem implications of the rate-yield tradeoff, I quantified CUE adaptation in a spatially-structured simulation model with 100 microbial taxa and 12 soil carbon substrates. This model predicted much lower CUE adaptation, likely due to additional physiological and ecological constraints on microbes. In particular, specific resource acquisition traits are needed to maintain stoichiometric balance, and taxa with high CUE and low enzyme investment rely on low-yield, high-enzyme neighbors to catalyze substrate degradation. In contrast to published microbial models, simulations with greater CUE adaptation also showed greater carbon storage under warming. This pattern occurred because microbial communities with stronger CUE adaptation produced fewer degradative enzymes, despite increases in biomass. Thus the rate-yield tradeoff prevents CUE adaptation from driving ecosystem carbon loss under climate warming.

  3. Incorporating uncertainty in predictive species distribution modelling.

    Science.gov (United States)

    Beale, Colin M; Lennon, Jack J

    2012-01-19

    Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.

  4. Comparison of strategies for model predictive control for home heating in future energy systems

    DEFF Research Database (Denmark)

    Vogler-Finck, Pierre Jacques Camille; Popovski, Petar; Wisniewski, Rafal

    2017-01-01

    Model predictive control is seen as one of the key future enablers for increasing energy efficiency in buildings. This paper presents a comparison of the performance of the control for different formulations of the objective function. This comparison is made in a simulation study on a single buildi...

  5. Development of Prediction Model and Experimental Validation in Predicting the Curcumin Content of Turmeric (Curcuma longa L.).

    Science.gov (United States)

    Akbar, Abdul; Kuanar, Ananya; Joshi, Raj K; Sandeep, I S; Mohanty, Sujata; Naik, Pradeep K; Mishra, Antaryami; Nayak, Sanghamitra

    2016-01-01

    The drug yielding potential of turmeric (Curcuma longa L.) is largely due to the presence of the phyto-constituent 'curcumin.' Curcumin has been found to possess a myriad of therapeutic activities ranging from anti-inflammatory to neuroprotective. The lack of requisite high-curcumin-containing genotypes and the variation in the curcumin content of turmeric across different agro-climatic regions are the major stumbling blocks in commercial production of turmeric. The curcumin content of turmeric is greatly influenced by environmental factors. Hence, a prediction model based on an artificial neural network (ANN) was developed to map the genome-environment interaction, based on curcumin content and soil and climatic factors from different agro-climatic regions, for prediction of the maximum curcumin content at various sites, to facilitate the selection of suitable regions for commercial cultivation of turmeric. The ANN model was developed and tested using a data set of 119 samples generated by collecting samples from 8 different agro-climatic regions of Odisha. The curcumin content measured from these samples varied from 0.4% to 7.2%. The ANN model was trained with 11 parameters of soil and climatic factors as input and curcumin content as output. The results showed that a feed-forward ANN model with 8 nodes (MLFN-8) was the most suitable one, with an R² value of 0.91. Sensitivity analysis revealed that minimum relative humidity, altitude, soil nitrogen content and soil pH had the greatest effect on curcumin content. This ANN model has shown proven efficiency for predicting and optimizing the curcumin content at a specific site.
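
    A hedged sketch of the network architecture described above: a feed-forward net with 11 inputs and one hidden layer of 8 nodes (MLFN-8), here via scikit-learn's MLPRegressor on random placeholder data rather than the Odisha measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
X = rng.normal(size=(119, 11))        # 11 soil/climate factors (placeholder)
w = rng.normal(size=11)
y = 3.0 + 0.3 * (X @ w) + rng.normal(scale=0.2, size=119)  # "curcumin content"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
mlfn8 = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
mlfn8.fit(X_tr, y_tr)
print("test R^2:", r2_score(y_te, mlfn8.predict(X_te)))
```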

  6. Development of prediction model and experimental validation in predicting the curcumin content of turmeric (Curcuma longa L.

    Directory of Open Access Journals (Sweden)

    Abdul Akbar

    2016-10-01

    Full Text Available The drug yielding potential of turmeric (Curcuma longa L. is largely due to the presence of phyto-constituent ‘curcumin’. Curcumin has been found to possess a myriad of therapeutic activities ranging from anti-inflammatory to neuroprotective. Lack of requisite high curcumin containing genotypes and variation in the curcumin content of turmeric at different agro climatic regions are the major stumbling blocks in commercial production of turmeric. Curcumin content of turmeric is greatly influenced by environmental factors. Hence, a prediction model based on artificial neural network (ANN was developed to map genome environment interaction basing on curcumin content, soli and climatic factors from different agroclimatic regions for prediction of maximum curcumin content at various sites to facilitate the selection of suitable region for commercial cultivation of turmeric. The ANN model was developed and tested using a data set of 119 generated by collecting samples from 8 different agroclimatic regions of Odisha. The curcumin content from these samples was measured that varied from 7.2% to 0.4%. The ANN model was trained with 11 parameters of soil and climatic factors as input and curcumin content as output. The results showed that feed-forward ANN model with 8 nodes (MLFN-8 was the most suitable one with R2 value of 0.91. Sensitivity analysis revealed that minimum relative humidity, altitude, soil nitrogen content and soil pH had greater effect on curcumin content. This ANN model has shown proven efficiency for predicting and optimizing the curcumin content at a specific site.

  7. A Genomics-Based Model for Prediction of Severe Bioprosthetic Mitral Valve Calcification.

    Science.gov (United States)

    Ponasenko, Anastasia V; Khutornaya, Maria V; Kutikhin, Anton G; Rutkovskaya, Natalia V; Tsepokina, Anna V; Kondyukova, Natalia V; Yuzhalin, Arseniy E; Barbarash, Leonid S

    2016-08-31

    Severe bioprosthetic mitral valve calcification is a significant problem in cardiovascular surgery. Unfortunately, clinical markers have not demonstrated efficacy in predicting severe bioprosthetic mitral valve calcification. Here, we examined whether a genomics-based approach is efficient in predicting the risk of severe bioprosthetic mitral valve calcification. A total of 124 consecutive Russian patients who underwent mitral valve replacement surgery were recruited. We investigated the associations of inherited variation in innate immunity, lipid metabolism and calcium metabolism genes with severe bioprosthetic mitral valve calcification. Genotyping was conducted utilizing the TaqMan assay. Eight gene polymorphisms were significantly associated with severe bioprosthetic mitral valve calcification and were therefore included in a stepwise logistic regression, which identified male gender, the T/T genotype of the rs3775073 polymorphism within the TLR6 gene, the C/T genotype of the rs2229238 polymorphism within the IL6R gene, and the A/A genotype of the rs10455872 polymorphism within the LPA gene as independent predictors of severe bioprosthetic mitral valve calcification. The developed genomics-based model had fair predictive value, with an area under the receiver operating characteristic (ROC) curve of 0.73. In conclusion, our genomics-based approach is efficient for the prediction of severe bioprosthetic mitral valve calcification.
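
    A minimal sketch of the final modelling step, assuming binary-coded predictors (gender plus three risk genotypes) and simulated outcomes: logistic regression scored by area under the ROC curve, as in the abstract. Effect sizes and data are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
# Binary-coded predictors: male gender and three risk genotypes
X = rng.integers(0, 2, size=(124, 4)).astype(float)
logit = 0.9 * X[:, 0] + 0.7 * X[:, 1] + 0.6 * X[:, 2] + 0.8 * X[:, 3] - 1.5
y = rng.random(124) < 1.0 / (1.0 + np.exp(-logit))   # simulated calcification

clf = LogisticRegression().fit(X, y)
print("AUC:", roc_auc_score(y, clf.predict_proba(X)[:, 1]))
```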

  8. Predictive user modeling with actionable attributes

    NARCIS (Netherlands)

    Zliobaite, I.; Pechenizkiy, M.

    2013-01-01

    Different machine learning techniques have been proposed and used for modeling individual and group user needs, interests and preferences. In traditional predictive modeling, instances are described by observable variables, called attributes. The goal is to learn a model for predicting the target

  9. A Performance-Prediction Model for PIC Applications on Clusters of Symmetric MultiProcessors: Validation with Hierarchical HPF+OpenMP Implementation

    Directory of Open Access Journals (Sweden)

    Sergio Briguglio

    2003-01-01

    Full Text Available A performance-prediction model is presented, which describes different hierarchical workload decomposition strategies for particle-in-cell (PIC) codes on Clusters of Symmetric MultiProcessors. The devised workload decomposition is hierarchically structured: a higher-level decomposition among the computational nodes, and a lower-level one among the processors of each computational node. Several decomposition strategies are evaluated by means of the prediction model, with respect to the memory occupancy, the parallelization efficiency and the required programming effort. Such strategies have been implemented by integrating the high-level languages High Performance Fortran (at the inter-node stage) and OpenMP (at the intra-node one). The details of these implementations are presented, and the experimental values of parallelization efficiency are compared with the predicted results.

  10. A Traffic Prediction Algorithm for Street Lighting Control Efficiency

    Directory of Open Access Journals (Sweden)

    POPA Valentin

    2013-01-01

    Full Text Available This paper presents the development of a traffic prediction algorithm that can be integrated in a street lighting monitoring and control system. The prediction algorithm must enable the reduction of energy costs and improve energy efficiency by decreasing the light intensity depending on the traffic level. The algorithm analyses and processes the information received at the command center based on the traffic level at different moments. The data is collected by means of the Doppler vehicle detection sensors integrated within the system. Two methods are used for the implementation of the algorithm: a neural network and a k-NN (k-Nearest Neighbor) prediction algorithm. For 500 training cycles, the mean square error of the neural network is 9.766, and for 500,000 training cycles the error falls to 0.877. In the case of the k-NN algorithm, the error increases from 8.24 for k=5 to 12.27 for k=50 neighbors. In terms of root mean square error, the neural network ensures the highest performance level and can be integrated in a street lighting control system.
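
    A sketch of the k-NN branch under stated assumptions: synthetic traffic levels predicted from (hour, weekday) features, evaluated at k=5 and k=50 to mirror the comparison above. It is not the authors' implementation.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
hours = rng.integers(0, 24, size=1000)
weekday = rng.integers(0, 7, size=1000)
# Synthetic traffic level with a daily cycle and a weekday effect
traffic = (50 + 30 * np.sin(hours / 24 * 2 * np.pi)
           + 5 * (weekday < 5) + rng.normal(0, 5, 1000))

X = np.column_stack([hours, weekday]).astype(float)
for k in (5, 50):
    knn = KNeighborsRegressor(n_neighbors=k).fit(X[:800], traffic[:800])
    mse = mean_squared_error(traffic[800:], knn.predict(X[800:]))
    print(f"k={k}: test MSE = {mse:.2f}")
```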

  11. Genome-wide prediction of traits with different genetic architecture through efficient variable selection.

    Science.gov (United States)

    Wimmer, Valentin; Lehermeier, Christina; Albrecht, Theresa; Auinger, Hans-Jürgen; Wang, Yu; Schön, Chris-Carolin

    2013-10-01

    In genome-based prediction there is considerable uncertainty about the statistical model and method required to maximize prediction accuracy. For traits influenced by a small number of quantitative trait loci (QTL), predictions are expected to benefit from methods performing variable selection [e.g., BayesB or the least absolute shrinkage and selection operator (LASSO)] compared to methods distributing effects across the genome [ridge regression best linear unbiased prediction (RR-BLUP)]. We investigate the assumptions underlying successful variable selection by combining computer simulations with large-scale experimental data sets from rice (Oryza sativa L.), wheat (Triticum aestivum L.), and Arabidopsis thaliana (L.). We demonstrate that variable selection can be successful when the number of phenotyped individuals is much larger than the number of causal mutations contributing to the trait. We show that the sample size required for efficient variable selection increases dramatically with decreasing trait heritabilities and increasing extent of linkage disequilibrium (LD). We contrast and discuss contradictory results from simulation and experimental studies with respect to superiority of variable selection methods over RR-BLUP. Our results demonstrate that due to long-range LD, medium heritabilities, and small sample sizes, superiority of variable selection methods cannot be expected in plant breeding populations even for traits like FRIGIDA gene expression in Arabidopsis and flowering time in rice, assumed to be influenced by a few major QTL. We extend our conclusions to the analysis of whole-genome sequence data and infer upper bounds for the number of causal mutations which can be identified by LASSO. Our results have major impact on the choice of statistical method needed to make credible inferences about genetic architecture and prediction accuracy of complex traits.
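
    A toy illustration of the central comparison: LASSO (variable selection) versus ridge regression (a stand-in for RR-BLUP, spreading effects across all markers) on a simulated trait controlled by a few QTL. Marker counts, effect sizes and noise levels are arbitrary choices, not the rice, wheat or Arabidopsis data.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(11)
n, p, n_qtl = 300, 1000, 10                # many markers, few causal QTL
X = rng.integers(0, 3, size=(n, p)).astype(float)   # SNP dosages 0/1/2
beta = np.zeros(p)
beta[rng.choice(p, n_qtl, replace=False)] = rng.normal(size=n_qtl)
y = X @ beta + rng.normal(size=n)

for name, model in [("LASSO", Lasso(alpha=0.1, max_iter=10000)),
                    ("ridge", Ridge(alpha=100.0))]:
    r2 = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV R^2 = {r2:.2f}")
```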

  12. Efficient estimation of feedback effects with application to climate models

    International Nuclear Information System (INIS)

    Cacugi, D.G.; Hall, M.C.G.

    1984-01-01

    This work presents an efficient method for calculating the sensitivity of a mathematical model's result to feedback. Feedback is defined in terms of an operator acting on the model's dependent variables. The sensitivity to feedback is defined as a functional derivative, and a method is presented to evaluate this derivative using adjoint functions. Typically, this method allows the individual effect of many different feedbacks to be estimated with a total additional computing time comparable to only one recalculation. The effects on a CO₂-doubling experiment of actually incorporating surface albedo and water vapor feedbacks in a radiative-convective model are compared with sensitivities calculated using adjoint functions. These sensitivities predict the actual effects of feedback with at least the correct sign and order of magnitude. It is anticipated that this method of estimating the effect of feedback will be useful for more complex models, where extensive recalculation for each of a variety of different feedbacks is impractical.

  13. Application of Pareto-efficient combustion modeling framework to large eddy simulations of turbulent reacting flows

    Science.gov (United States)

    Wu, Hao; Ihme, Matthias

    2017-11-01

    The modeling of turbulent combustion requires the consideration of different physico-chemical processes, involving a vast range of time and length scales as well as a large number of scalar quantities. To reduce the computational complexity, various combustion models have been developed. Many of them can be abstracted using a lower-dimensional manifold representation. A key issue in using such lower-dimensional combustion models is the assessment of whether a particular combustion model is adequate in representing a certain flame configuration. The Pareto-efficient combustion (PEC) modeling framework was developed to perform dynamic combustion model adaptation based on various existing manifold models. In this work, the PEC model is applied to a turbulent flame simulation, in which a computationally efficient flamelet-based combustion model is used together with a high-fidelity finite-rate chemistry model. The combination of these two models achieves high accuracy in predicting pollutant species at a relatively low computational cost. The relevant numerical methods and parallelization techniques are also discussed in this work.

  14. Probabilistic Forecasting of Photovoltaic Generation: An Efficient Statistical Approach

    DEFF Research Database (Denmark)

    Wan, Can; Lin, Jin; Song, Yonghua

    2017-01-01

    This letter proposes a novel efficient probabilistic forecasting approach to accurately quantify the variability and uncertainty of the power production from photovoltaic (PV) systems. Distinguished from most existing models, a linear programming based prediction interval construction model for PV power generation is proposed based on extreme learning machine and quantile regression, featuring high reliability and computational efficiency. The proposed approach is validated through numerical studies on PV data from Denmark.
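
    An illustrative prediction-interval construction in the spirit of the letter. The original pairs an extreme learning machine with linear-programming-based quantile regression; here a gradient-boosted quantile regressor stands in, on synthetic irradiance/power data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(5)
irradiance = rng.uniform(0, 1000, size=(500, 1))            # W/m^2
pv_power = (0.15 * irradiance[:, 0]
            + rng.normal(0, 10 + 0.02 * irradiance[:, 0]))  # heteroscedastic

bounds = {}
for q in (0.05, 0.95):
    gbr = GradientBoostingRegressor(loss="quantile", alpha=q, random_state=0)
    bounds[q] = gbr.fit(irradiance, pv_power).predict(irradiance)

inside = (pv_power >= bounds[0.05]) & (pv_power <= bounds[0.95])
print("empirical coverage of nominal 90% interval:", inside.mean())
```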

  15. A Coupled Probabilistic Wake Vortex and Aircraft Response Prediction Model

    Science.gov (United States)

    Gloudemans, Thijs; Van Lochem, Sander; Ras, Eelco; Malissa, Joel; Ahmad, Nashat N.; Lewis, Timothy A.

    2016-01-01

    Wake vortex spacing standards, along with weather and runway occupancy time, restrict terminal area throughput and impose major constraints on the overall capacity and efficiency of the National Airspace System (NAS). For more than two decades, the National Aeronautics and Space Administration (NASA) has been conducting research on characterizing wake vortex behavior in order to develop fast-time wake transport and decay prediction models. It is expected that the models can be used in the systems-level design of advanced air traffic management (ATM) concepts that safely increase the capacity of the NAS. It is also envisioned that at a later stage of maturity, these models could potentially be used operationally, in ground-based spacing and scheduling systems as well as on the flight deck.

  16. Predictive control strategy of a gas turbine for improvement of combined cycle power plant dynamic performance and efficiency.

    Science.gov (United States)

    Mohamed, Omar; Wang, Jihong; Khalil, Ashraf; Limhabrash, Marwan

    2016-01-01

    This paper presents a novel strategy for implementing model predictive control (MPC) on a large gas turbine power plant, as part of our research progress towards improving plant thermal efficiency and load-frequency control performance. A generalized state-space model of a large gas turbine covering the whole steady operational range is designed according to the subspace identification method, with closed-loop data as input to the identification algorithm. The model is then used in developing an MPC and integrated into the plant's existing control strategy. The principle of the strategy is to feed the reference signals of the pilot valve, natural gas valve, and compressor pressure ratio controller with the optimized decisions given by the MPC instead of applying the control signals directly. If the set points for the compressor controller and turbine valves are sent in a timely manner, more kinetic energy is available in the plant to deliver faster output responses, and the overall system efficiency is improved. Simulation results illustrate the feasibility of the proposed application, which achieves significant improvement in frequency variations and load following capability, translating into an improvement in the overall combined cycle thermal efficiency of around 1.1% compared to the existing strategy.
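
    A bare-bones sketch of the control idea: a discrete linear state-space model (of the kind subspace identification produces) driving an unconstrained finite-horizon MPC whose first move is applied each step. The matrices, horizon and weights below are illustrative, not the identified gas turbine model.

```python
import numpy as np

A = np.array([[0.9, 0.1], [0.0, 0.8]])    # assumed identified plant dynamics
B = np.array([[0.0], [0.5]])
C = np.array([[1.0, 0.0]])
N = 10                                     # prediction horizon

# Stacked prediction matrices: y = F x0 + G u over the horizon
F = np.vstack([C @ np.linalg.matrix_power(A, k + 1) for k in range(N)])
G = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1):
        G[i, j] = (C @ np.linalg.matrix_power(A, i - j) @ B)[0, 0]

x0 = np.array([0.0, 0.0])                  # current state estimate
r = np.ones(N)                             # load/frequency reference
R = 0.01 * np.eye(N)                       # control-effort penalty
# Unconstrained finite-horizon least-squares solution; apply only u[0]
u = np.linalg.solve(G.T @ G + R, G.T @ (r - F @ x0))
print("first optimized control move:", u[0])
```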

  17. Predicting Lymph Node Metastasis in Endometrial Cancer Using Serum CA125 Combined with Immunohistochemical Markers PR and Ki67, and a Comparison with Other Prediction Models.

    Directory of Open Access Journals (Sweden)

    Bingyi Yang

    Full Text Available We aimed to evaluate the value of immunohistochemical markers and serum CA125 in predicting the risk of lymph node metastasis (LNM) in women with endometrial cancer, and to identify a low-risk group for LNM. The medical records of 370 patients with endometrial endometrioid adenocarcinoma who underwent surgical staging in the Obstetrics & Gynecology Hospital of Fudan University were collected and retrospectively reviewed. Immunohistochemical markers were screened. A model using serum cancer antigen 125 (CA125) level and the immunohistochemical markers progesterone receptor (PR) and Ki67 was created for prediction of LNM. A predicted probability below 4% among these patients was defined as low risk. The developed model was externally validated in 200 patients from Shanghai Cancer Center. The efficiency of the model was compared with three other reported prediction models. Patients with low serum CA125, PR > 50% and Ki67 < 40% in the cancer lesion were defined as low risk for LNM. The model showed good discrimination, with an area under the receiver operating characteristic curve of 0.82. The model classified 61.9% (229/370) of patients as being at low risk for LNM. Among these 229 patients, 6 (2.6%) had LNM, and the negative predictive value was 97.4% (223/229). The sensitivity and specificity of the model were 84.6% and 67.4%, respectively. In the validation cohort, the model classified 59.5% (119/200) of patients as low risk; 3 of these 119 patients (2.5%) had LNM. Our model showed predictive power similar to those of two previously reported prediction models. The prediction model using serum CA125 and the immunohistochemical markers PR and Ki67 is useful to identify patients with a low risk of LNM and has the potential to provide valuable guidance to clinicians in the treatment of patients with endometrioid endometrial cancer.

  18. Dinucleotide controlled null models for comparative RNA gene prediction.

    Science.gov (United States)

    Gesell, Tanja; Washietl, Stefan

    2008-05-27

    Comparative prediction of RNA structures can be used to identify functional noncoding RNAs in genomic screens. It was shown recently by Babak et al. [BMC Bioinformatics. 8:33] that RNA gene prediction programs can be biased by the genomic dinucleotide content, in particular those programs using a thermodynamic folding model including stacking energies. As a consequence, there is need for dinucleotide-preserving control strategies to assess the significance of such predictions. While there have been randomization algorithms for single sequences for many years, the problem has remained challenging for multiple alignments and there is currently no algorithm available. We present a program called SISSIz that simulates multiple alignments of a given average dinucleotide content. Meeting additional requirements of an accurate null model, the randomized alignments are on average of the same sequence diversity and preserve local conservation and gap patterns. We make use of a phylogenetic substitution model that includes overlapping dependencies and site-specific rates. Using fast heuristics and a distance based approach, a tree is estimated under this model which is used to guide the simulations. The new algorithm is tested on vertebrate genomic alignments and the effect on RNA structure predictions is studied. In addition, we directly combined the new null model with the RNAalifold consensus folding algorithm giving a new variant of a thermodynamic structure based RNA gene finding program that is not biased by the dinucleotide content. SISSIz implements an efficient algorithm to randomize multiple alignments preserving dinucleotide content. It can be used to get more accurate estimates of false positive rates of existing programs, to produce negative controls for the training of machine learning based programs, or as standalone RNA gene finding program. Other applications in comparative genomics that require randomization of multiple alignments can be considered. SISSIz

  19. Dinucleotide controlled null models for comparative RNA gene prediction

    Directory of Open Access Journals (Sweden)

    Gesell Tanja

    2008-05-01

    Full Text Available Background: Comparative prediction of RNA structures can be used to identify functional noncoding RNAs in genomic screens. It was shown recently by Babak et al. [BMC Bioinformatics. 8:33] that RNA gene prediction programs can be biased by the genomic dinucleotide content, in particular those programs using a thermodynamic folding model including stacking energies. As a consequence, there is need for dinucleotide-preserving control strategies to assess the significance of such predictions. While there have been randomization algorithms for single sequences for many years, the problem has remained challenging for multiple alignments and there is currently no algorithm available. Results: We present a program called SISSIz that simulates multiple alignments of a given average dinucleotide content. Meeting additional requirements of an accurate null model, the randomized alignments are on average of the same sequence diversity and preserve local conservation and gap patterns. We make use of a phylogenetic substitution model that includes overlapping dependencies and site-specific rates. Using fast heuristics and a distance based approach, a tree is estimated under this model which is used to guide the simulations. The new algorithm is tested on vertebrate genomic alignments and the effect on RNA structure predictions is studied. In addition, we directly combined the new null model with the RNAalifold consensus folding algorithm giving a new variant of a thermodynamic structure based RNA gene finding program that is not biased by the dinucleotide content. Conclusion: SISSIz implements an efficient algorithm to randomize multiple alignments preserving dinucleotide content. It can be used to get more accurate estimates of false positive rates of existing programs, to produce negative controls for the training of machine learning based programs, or as standalone RNA gene finding program. Other applications in comparative genomics that require

  20. High-Efficiency Multiscale Modeling of Cell Deformations in Confined Microenvironments in Microcirculation and Microfluidic Devices

    Science.gov (United States)

    Lu, Huijie; Peng, Zhangli

    2017-11-01

    Our goal is to develop a high-efficiency multiscale modeling method to predict the stress and deformation of cells during the interactions with their microenvironments in microcirculation and microfluidic devices, including red blood cells (RBCs) and circulating tumor cells (CTCs). There are more than 1 billion people in the world suffering from RBC diseases, e.g. anemia, sickle cell diseases, and malaria. The mechanical properties of RBCs are changed in these diseases due to molecular structure alternations, which is not only important for understanding the disease pathology but also provides an opportunity for diagnostics. On the other hand, the mechanical properties of cancer cells are also altered compared to healthy cells. This can lead to acquired ability to cross the narrow capillary networks and endothelial gaps, which is crucial for metastasis, the leading cause of cancer mortality. Therefore, it is important to predict the deformation and stress of RBCs and CTCs in microcirculations. We are developing a high-efficiency multiscale model of cell-fluid interaction to study these two topics.

  1. An efficient mathematical model for air-breathing PEM fuel cells

    International Nuclear Information System (INIS)

    Ismail, M.S.; Ingham, D.B.; Hughes, K.J.; Ma, L.; Pourkashanian, M.

    2014-01-01

    Graphical abstract: The effects of the ambient humidity on the performance of air-breathing PEM fuel cells become more pronounced as the ambient temperature increases. The polarisation curves have been generated using the in-house developed MATLAB® application, Polarisation Curve Generator, which is available in the supplementary data. - Highlights: • An efficient mathematical model has been developed for an air-breathing PEM fuel cell. • The fuel cell performance is significantly over-predicted if the Joule and entropic heats are neglected. • The fuel cell performance is highly sensitive to the state of water at thermodynamic equilibrium. • The cell potential dictates the favourable ambient conditions for the fuel cell. - Abstract: A simple and efficient mathematical model for air-breathing proton exchange membrane (PEM) fuel cells has been built. One of the major objectives of this study is to investigate the effects of the Joule and entropic heat sources, which are often neglected, on the performance of air-breathing PEM fuel cells. It is found that the fuel cell performance is significantly over-predicted if one or both of these heat sources are not incorporated into the model. Also, it is found that the performance of the fuel cell is highly sensitive to the assumed state of water at thermodynamic equilibrium, as both the entropic heat and the Nernst potential increase considerably if water is assumed to be produced in liquid form rather than in vapour form. Further, the heat of condensation is shown to be small and therefore, under single-phase modelling, has a negligible effect on the performance of the fuel cell. Finally, the favourable ambient conditions depend on the operating cell potential. At intermediate cell potentials, a mild ambient temperature and low humidity are favoured to maintain high membrane conductivity and mitigate water flooding. At low cell potentials, low ambient temperature and high humidity are favoured to

  2. MJO prediction skill of the subseasonal-to-seasonal (S2S) prediction models

    Science.gov (United States)

    Son, S. W.; Lim, Y.; Kim, D.

    2017-12-01

    The Madden-Julian Oscillation (MJO), the dominant mode of tropical intraseasonal variability, provides the primary source of tropical and extratropical predictability on subseasonal to seasonal timescales. To better understand its predictability, this study conducts a quantitative evaluation of MJO prediction skill in the state-of-the-art operational models participating in the subseasonal-to-seasonal (S2S) prediction project. Based on a bivariate correlation coefficient of 0.5, the S2S models exhibit MJO prediction skill ranging from 12 to 36 days. These prediction skills are affected by both MJO amplitude and phase errors, the latter becoming more important with increasing forecast lead time. Consistent with previous studies, MJO events with stronger initial amplitude are typically better predicted. However, essentially no sensitivity to the initial MJO phase is observed. Overall MJO prediction skill and its inter-model spread are further related to model mean biases in moisture fields and longwave cloud-radiation feedbacks. In most models, a dry bias quickly builds up in the deep tropics, especially across the Maritime Continent, weakening the horizontal moisture gradient. This likely dampens the organization and propagation of the MJO. Most S2S models also underestimate the longwave cloud-radiation feedbacks in the tropics, which may affect the maintenance of the MJO convective envelope. In general, the models with smaller biases in horizontal moisture gradient and longwave cloud-radiation feedbacks show higher MJO prediction skill, suggesting that improving those processes would enhance MJO prediction skill.

  3. Evaluating Predictive Uncertainty of Hyporheic Exchange Modelling

    Science.gov (United States)

    Chow, R.; Bennett, J.; Dugge, J.; Wöhling, T.; Nowak, W.

    2017-12-01

    Hyporheic exchange is the interaction of water between rivers and groundwater, and is difficult to predict. One of the largest contributions to predictive uncertainty for hyporheic fluxes has been attributed to the representation of heterogeneous subsurface properties. This research aims to evaluate which aspect of the subsurface representation - the spatial distribution of hydrofacies or the model for local-scale (within-facies) heterogeneity - most influences the predictive uncertainty. We also seek to identify data types that best help reduce this uncertainty. For this investigation, we conduct a modelling study of the Steinlach River meander, in Southwest Germany, an experimental site established in 2010 to monitor hyporheic exchange at the meander scale. We use HydroGeoSphere, a fully integrated surface water-groundwater model, to model hyporheic exchange and to assess the predictive uncertainty of hyporheic exchange transit times (HETT). A highly parameterized complex model is built and treated as 'virtual reality', which is in turn modelled with simpler subsurface parameterization schemes (Figure). We then conduct Monte-Carlo simulations with these models to estimate the predictive uncertainty. Results indicate that: uncertainty in HETT is relatively small for early times and increases with transit times; uncertainty from local-scale heterogeneity is negligible compared to uncertainty in the hydrofacies distribution; introducing more data to a poor model structure may reduce predictive variance, but does not reduce predictive bias; and hydraulic head observations alone cannot constrain the uncertainty of HETT, whereas an estimate of hyporheic exchange flux proves more effective at reducing this uncertainty. Figure: Approach for evaluating predictive model uncertainty. A conceptual model is first developed from the field investigations. A complex model ('virtual reality') is then developed based on that conceptual model

  4. A cycle simulation model for predicting the performance of a diesel engine fuelled by diesel and biodiesel blends

    International Nuclear Information System (INIS)

    Gogoi, T.K.; Baruah, D.C.

    2010-01-01

    Among alternative fuels, biodiesel and its blends are considered suitable and the most promising fuels for diesel engines. The properties of biodiesel are similar to those of diesel. Many researchers have experimentally evaluated the performance characteristics of conventional diesel engines fuelled by biodiesel and its blends. However, experiments require enormous effort, money and time. Hence, a cycle simulation model incorporating a thermodynamics-based single-zone combustion model is developed to predict the performance of a diesel engine. The effects of engine speed and compression ratio on brake power and brake thermal efficiency are analysed through the model. The fuels considered for the analysis are diesel and 20%, 40% and 60% blends of diesel with biodiesel derived from Karanja oil (Pongamia glabra). The model predicts similar performance for diesel and the 20% and 40% blends. However, with the 60% blend, it reveals better performance in terms of brake power and brake thermal efficiency.
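
    A sketch of the kind of single-zone heat-release calculation such cycle models perform, using a standard Wiebe burn-fraction law; the parameters and fuel properties below are generic textbook-style values, not those of the Karanja biodiesel study.

```python
import numpy as np

def wiebe_burn_fraction(theta, theta0=-5.0, dtheta=60.0, a=5.0, m=2.0):
    """Cumulative mass fraction burned vs. crank angle (degrees)."""
    x = np.clip((theta - theta0) / dtheta, 0.0, None)
    return 1.0 - np.exp(-a * x ** (m + 1))

theta = np.linspace(-20, 80, 201)     # crank angle, deg
lhv_fuel = 42.5e6                     # J/kg, assumed diesel-like heating value
m_fuel = 2.0e-5                       # kg of fuel injected per cycle (assumed)

xb = wiebe_burn_fraction(theta)
heat_release_rate = lhv_fuel * m_fuel * np.gradient(xb, theta)   # J/deg
print("peak heat release: %.1f J/deg at %.1f deg" % (
    heat_release_rate.max(), theta[np.argmax(heat_release_rate)]))
```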

  5. Modeling, robust and distributed model predictive control for freeway networks

    NARCIS (Netherlands)

    Liu, S.

    2016-01-01

    In Model Predictive Control (MPC) for traffic networks, traffic models are crucial since they are used as prediction models for determining the optimal control actions. In order to reduce the computational complexity of MPC for traffic networks, macroscopic traffic models are often used instead of

  6. Model predictive control of an air suspension system with damping multi-mode switching damper based on hybrid model

    Science.gov (United States)

    Sun, Xiaoqiang; Yuan, Chaochun; Cai, Yingfeng; Wang, Shaohua; Chen, Long

    2017-09-01

    This paper presents the hybrid modeling and model predictive control of an air suspension system with a damping multi-mode switching damper. Unlike a traditional damper with continuously adjustable damping, in this study a new damper with four discrete damping modes is applied to a vehicle semi-active air suspension. The new damper can achieve different damping modes by controlling only the on-off statuses of two solenoid valves, which makes its damping adjustment more efficient and more reliable. However, since the damping mode switching induces different modes of operation, the air suspension system with the new damper poses a challenging hybrid control problem. To model both the continuous/discrete dynamics and the switching between different damping modes, the framework of mixed logical dynamical (MLD) systems is used to establish the system hybrid model. Based on the resulting hybrid dynamical model, the system control problem is recast as a model predictive control (MPC) problem, which allows us to optimize the switching sequences of the damping modes by taking into account the suspension performance requirements. Numerical simulation results demonstrate the efficacy of the proposed control method.
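
    A toy illustration of the hybrid-control idea: with only four discrete damping modes, a short-horizon controller can enumerate mode sequences and pick the one minimizing a comfort-oriented cost. The quarter-car-style dynamics and cost weights are simplifications assumed for this sketch, not the paper's MLD formulation.

```python
import itertools
import numpy as np

dampings = [800.0, 1400.0, 2000.0, 2600.0]    # N*s/m, four discrete modes
m, k, dt, N = 300.0, 20000.0, 0.01, 3          # sprung mass, spring, step, horizon

def step(x, c):
    """One Euler step of sprung-mass position/velocity under damping c."""
    pos, vel = x
    acc = (-k * pos - c * vel) / m
    return np.array([pos + dt * vel, vel + dt * acc])

x0 = np.array([0.02, 0.0])                     # initial deflection of 2 cm
best_cost, best_seq = np.inf, None
for seq in itertools.product(range(4), repeat=N):
    x, cost = x0.copy(), 0.0
    for mode in seq:
        x = step(x, dampings[mode])
        cost += x[0] ** 2 + 0.1 * x[1] ** 2    # penalize deflection and velocity
    if cost < best_cost:
        best_cost, best_seq = cost, seq
print("best damping-mode sequence:", best_seq)
```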

  7. The string prediction models as invariants of time series in the forex market

    Science.gov (United States)

    Pincak, R.

    2013-12-01

    In this paper we apply a new approach from string theory to the real financial market. The models are constructed around the idea of prediction models based on string invariants (PMBSI). The performance of PMBSI is compared to support vector machines (SVM) and artificial neural networks (ANN) on an artificial and a financial time series. A brief overview of the results and analysis is given. The first model is based on the correlation function as an invariant, and the second is an application based on deviations from the closed string/pattern form (PMBCS). We found a clear difference between these two approaches: the first model cannot predict the behavior of the forex market with good efficiency, whereas the second one is, in addition, able to make a relevant profit per year. The presented string models could be useful for portfolio creation and financial risk management in the banking sector, as well as for a nonlinear statistical approach to data optimization.

  8. Staying Power of Churn Prediction Models

    NARCIS (Netherlands)

    Risselada, Hans; Verhoef, Peter C.; Bijmolt, Tammo H. A.

    In this paper, we study the staying power of various churn prediction models. Staying power is defined as the predictive performance of a model in a number of periods after the estimation period. We examine two methods, logit models and classification trees, both with and without applying a bagging

  9. Product unit neural network models for predicting the growth limits of Listeria monocytogenes.

    Science.gov (United States)

    Valero, A; Hervás, C; García-Gimeno, R M; Zurera, G

    2007-08-01

    A new approach to predicting the growth/no growth interface of Listeria monocytogenes as a function of storage temperature, pH, citric acid (CA) and ascorbic acid (AA) is presented. A linear logistic regression procedure was performed, and a non-linear model was obtained by adding new variables by means of a neural network model based on product units (PUNN). The classification efficiency on the training data set and the generalization data of the new logistic regression PUNN model (LRPU) were compared with those of linear logistic regression (LLR) and polynomial logistic regression (PLR) models. 92% of the total cases from the LRPU model were correctly classified, an improvement on the percentage obtained using the PLR model (90%) and significantly higher than the results obtained with the LLR model (80%). On the other hand, the predictions of LRPU were closer to the observed data, which permits the design of proper formulations in minimally processed foods. This novel methodology can be applied in predictive microbiology to describe the growth/no growth interface of food-borne microorganisms such as L. monocytogenes. The optimal balance lies in finding models with acceptable interpretability and a good ability to fit the data on the boundaries of the variable range. The results obtained lead to the conclusion that these kinds of models might well be a very valuable tool for mathematical modeling.

  10. A prediction model of compressor with variable-geometry diffuser based on elliptic equation and partial least squares.

    Science.gov (United States)

    Li, Xu; Yang, Chuanlei; Wang, Yinyan; Wang, Hechun

    2018-01-01

    To achieve a much more extensive intake air flow range of the diesel engine, a variable-geometry compressor (VGC) is introduced into a turbocharged diesel engine. However, due to the variable diffuser vane angle (DVA), predicting the performance of the VGC is more difficult than for a normal compressor. In the present study, a prediction model comprising an elliptical equation and a PLS (partial least-squares) model is proposed to predict the performance of the VGC. The speed lines of the pressure ratio map and the efficiency map were fitted with the elliptical equation, and the coefficients of the elliptical equation were introduced into the PLS model to build the polynomial relationship between the coefficients, the relative speed and the DVA. Further, the maximal order of the polynomial was investigated in detail to reduce the number of sub-coefficients while achieving acceptable fit accuracy. The prediction model was validated with sample data, and, to demonstrate its superiority in compressor performance prediction, its results were compared with those of a look-up table and back-propagation neural networks (BPNNs). The validation and comparison results show that the prediction accuracy of the newly developed model is acceptable, and that this model is much more suitable than the look-up table and BPNN methods under the same conditions for VGC performance prediction. Moreover, the newly developed prediction model provides a novel and effective prediction solution for the VGC and can be used to improve the accuracy of thermodynamic models for turbocharged diesel engines in the future.
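
    A sketch of the first stage of such a model, assuming a normalized elliptical form for one compressor speed line and invented map points; in the full model the fitted coefficients would then be regressed (via PLS polynomials) on relative speed and diffuser vane angle.

```python
import numpy as np
from scipy.optimize import curve_fit

def speed_line(mass, a, b):
    """Pressure ratio on an assumed ellipse through (a, 0) and (0, b)."""
    return b * np.sqrt(np.clip(1.0 - (mass / a) ** 2, 0.0, None))

# Invented points along one speed line: normalized corrected flow vs. PR
mass_flow = np.array([0.2, 0.4, 0.6, 0.8, 0.9])
press_ratio = np.array([2.35, 2.30, 2.18, 1.95, 1.70])

(a, b), _ = curve_fit(speed_line, mass_flow, press_ratio, p0=(1.2, 2.4))
print(f"fitted ellipse semi-axes: a={a:.3f}, b={b:.3f}")
# In the full model, a and b would themselves be modelled as polynomials in
# relative speed and DVA to interpolate unseen operating points.
```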

  11. The ability of in vitro antioxidant assays to predict the efficiency of a cod protein hydrolysate and brown seaweed extract to prevent oxidation in marine food model systems.

    Science.gov (United States)

    Jónsdóttir, Rósa; Geirsdóttir, Margrét; Hamaguchi, Patricia Y; Jamnik, Polona; Kristinsson, Hordur G; Undeland, Ingrid

    2016-04-01

    The ability of different in vitro antioxidant assays to predict the efficiency of cod protein hydrolysate (CPH) and Fucus vesiculosus ethyl acetate extract (EA) towards lipid oxidation in haemoglobin-fortified washed cod mince and iron-containing cod liver oil emulsion was evaluated. The progression of oxidation was followed by sensory analysis, lipid hydroperoxides and thiobarbituric acid-reactive substances (TBARS) in both systems, as well as loss of redness and protein carbonyls in the cod system. The in vitro tests revealed high reducing capacity, high DPPH radical scavenging properties and a high oxygen radical absorbance capacity (ORAC) value of the EA which also inhibited lipid and protein oxidation in the cod model system. The CPH had a high metal chelating capacity and was efficient against oxidation in the cod liver oil emulsion. The results indicate that the F. vesiculosus extract has a potential as an excellent natural antioxidant against lipid oxidation in fish muscle foods while protein hydrolysates are more promising for fish oil emulsions. The usefulness of in vitro assays to predict the antioxidative properties of new natural ingredients in foods thus depends on the knowledge about the food systems, particularly the main pro-oxidants present. © 2015 Society of Chemical Industry.

  12. Integration of Tuyere, Raceway and Shaft Models for Predicting Blast Furnace Process

    Science.gov (United States)

    Fu, Dong; Tang, Guangwu; Zhao, Yongfu; D'Alessio, John; Zhou, Chenn Q.

    2018-06-01

    A novel modeling strategy is presented for simulating the blast furnace iron making process. Such physical and chemical phenomena are taking place across a wide range of length and time scales, and three models are developed to simulate different regions of the blast furnace, i.e., the tuyere model, the raceway model and the shaft model. This paper focuses on the integration of the three models to predict the entire blast furnace process. Mapping output and input between models and an iterative scheme are developed to establish communications between models. The effects of tuyere operation and burden distribution on blast furnace fuel efficiency are investigated numerically. The integration of different models provides a way to realistically simulate the blast furnace by improving the modeling resolution on local phenomena and minimizing the model assumptions.

  13. REALIGNED MODEL PREDICTIVE CONTROL OF A PROPYLENE DISTILLATION COLUMN

    Directory of Open Access Journals (Sweden)

    A. I. Hinojosa

    Full Text Available In the process industry, advanced controllers usually aim at an economic objective, which typically requires closed-loop stability and constraint satisfaction. In this paper, the application of MPC within the optimization structure of an industrial propylene/propane (PP) splitter is tested with a controller based on a state-space model, which is suitable for heavily disturbed environments. The simulation platform is based on the integration of the commercial dynamic simulator Dynsim® and the rigorous steady-state optimizer ROMeo® with the real-time facilities of Matlab. The predictive controller is the Infinite Horizon Model Predictive Control (IHMPC), based on a state-space model that does not require the use of a state observer because the non-minimal state is built from past inputs and outputs. The controller considers the existence of zone control of the outputs and optimizing targets for the inputs. We verify that the controller is efficient in controlling the propylene distillation system in a disturbed scenario, when compared with a conventional controller based on a state observer. The simulation results show good performance in terms of controller stability and rejection of large disturbances in the composition of the feed of the propylene distillation column.

  14. Estimating confidence intervals in predicted responses for oscillatory biological models.

    Science.gov (United States)

    St John, Peter C; Doyle, Francis J

    2013-07-29

    The dynamics of gene regulation play a crucial role in cellular control, allowing the cell to express the right proteins to meet changing needs. Some needs, such as correctly anticipating the day-night cycle, require complicated oscillatory features. In the analysis of gene regulatory networks, mathematical models are frequently used to understand how a network's structure enables it to respond appropriately to external inputs. These models typically consist of a set of ordinary differential equations, describing a network of biochemical reactions, and unknown kinetic parameters, chosen such that the model best captures experimental data. However, since a model's parameter values are uncertain, and since dynamic responses to inputs are highly parameter-dependent, it is difficult to assess the confidence associated with these in silico predictions. In particular, models with complex dynamics - such as oscillations - must be fit with computationally expensive global optimization routines, and cannot take advantage of existing measures of identifiability. Despite their difficulty to model mathematically, limit cycle oscillations play a key role in many biological processes, including cell cycling, metabolism, neuron firing, and circadian rhythms. In this study, we employ an efficient parameter estimation technique to enable a bootstrap uncertainty analysis for limit cycle models. Since the primary role of systems biology models is the insight they provide on responses to rate perturbations, we extend our uncertainty analysis to include first-order sensitivity coefficients. Using a literature model of circadian rhythms, we show how predictive precision is degraded with decreasing sample points and increasing relative error. Additionally, we show how this method can be used for model discrimination by comparing the output identifiability of two candidate model structures to published literature data. Our method permits modellers of oscillatory systems to confidently

  15. Genomic prediction of complex human traits: relatedness, trait architecture and predictive meta-models

    Science.gov (United States)

    Spiliopoulou, Athina; Nagy, Reka; Bermingham, Mairead L.; Huffman, Jennifer E.; Hayward, Caroline; Vitart, Veronique; Rudan, Igor; Campbell, Harry; Wright, Alan F.; Wilson, James F.; Pong-Wong, Ricardo; Agakov, Felix; Navarro, Pau; Haley, Chris S.

    2015-01-01

    We explore the prediction of individuals' phenotypes for complex traits using genomic data. We compare several widely used prediction models, including Ridge Regression, LASSO and Elastic Nets estimated from cohort data, and polygenic risk scores constructed using published summary statistics from genome-wide association meta-analyses (GWAMA). We evaluate the interplay between relatedness, trait architecture and optimal marker density, by predicting height, body mass index (BMI) and high-density lipoprotein level (HDL) in two data cohorts, originating from Croatia and Scotland. We empirically demonstrate that dense models are better when all genetic effects are small (height and BMI) and target individuals are related to the training samples, while sparse models predict better in unrelated individuals and when some effects have moderate size (HDL). For HDL sparse models achieved good across-cohort prediction, performing similarly to the GWAMA risk score and to models trained within the same cohort, which indicates that, for predicting traits with moderately sized effects, large sample sizes and familial structure become less important, though still potentially useful. Finally, we propose a novel ensemble of whole-genome predictors with GWAMA risk scores and demonstrate that the resulting meta-model achieves higher prediction accuracy than either model on its own. We conclude that although current genomic predictors are not accurate enough for diagnostic purposes, performance can be improved without requiring access to large-scale individual-level data. Our methodologically simple meta-model is a means of performing predictive meta-analysis for optimizing genomic predictions and can be easily extended to incorporate multiple population-level summary statistics or other domain knowledge. PMID:25918167
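
    The comparison of dense and sparse whole-genome predictors, and the ensemble with an external risk score, can be sketched as follows on simulated genotypes. All data, effect sizes and the "GWAMA-like" score are invented; the stacking regression stands in for the paper's meta-model.

```python
# Sketch: dense (ridge) vs. sparse (lasso) genomic predictors plus a simple
# meta-model blending a predictor with an external score. Data are simulated.
import numpy as np
from sklearn.linear_model import RidgeCV, LassoCV, LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n, p = 600, 2000                       # individuals x SNPs (illustrative)
X = rng.binomial(2, 0.3, size=(n, p)).astype(float)
beta = np.zeros(p); beta[:20] = rng.normal(0, 0.5, 20)   # few moderate effects
y = X @ beta + rng.normal(0, 1.0, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ridge = RidgeCV().fit(X_tr, y_tr)
lasso = LassoCV(cv=5).fit(X_tr, y_tr)

# Hypothetical external "GWAMA" score: a noisy linear score on causal SNPs.
gwama = X_te[:, :20] @ (beta[:20] + rng.normal(0, 0.1, 20))

# Meta-model: regress the phenotype on the two predictors' outputs
# (fit here on the test set only for brevity; use a held-out set in practice).
stack = np.column_stack([lasso.predict(X_te), gwama])
meta = LinearRegression().fit(stack, y_te)
for name, score in [("ridge", ridge.score(X_te, y_te)),
                    ("lasso", lasso.score(X_te, y_te)),
                    ("meta", meta.score(stack, y_te))]:
    print(name, round(score, 3))
```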

  16. Evaluation of Deep Learning Models for Predicting CO2 Flux

    Science.gov (United States)

    Halem, M.; Nguyen, P.; Frankel, D.

    2017-12-01

    Artificial neural networks have been employed to calculate surface flux measurements from station data because they are able to fit highly nonlinear relations between input and output variables without knowing the detailed relationships between the variables. However, the accuracy of neural net estimates of CO2 flux from observations of CO2 and other atmospheric variables is influenced by the architecture of the neural model, the availability of data, and the complexity of interactions between physical variables such as wind and temperature and indirect variables like latent heat and sensible heat. We evaluate two deep learning models, feedforward and recurrent neural networks, to learn how each responds to the physical measurements and to the time dependency of measurements of CO2 concentration, humidity, pressure, temperature, wind speed, etc., when predicting CO2 flux. In this paper, we focus on a) building neural network models for estimating CO2 flux based on DOE Atmospheric Radiation Measurement tower data; b) evaluating the impact of the choice of surface variables and model hyper-parameters on the accuracy of surface flux predictions; c) assessing the applicability of the neural network models for estimating CO2 flux from OCO-2 satellite data; d) studying the efficiency of GPU acceleration for neural network performance using IBM PowerAI deep learning software and packages on an IBM Minsky system.

  17. Accuracy assessment of landslide prediction models

    International Nuclear Information System (INIS)

    Othman, A N; Mohd, W M N W; Noraini, S

    2014-01-01

    The increasing population and expansion of settlements over hilly areas has greatly increased the impact of natural disasters such as landslides. It is therefore important to develop models that can accurately predict landslide hazard zones. Over the years, various techniques and models have been developed to predict landslide hazard zones. The aim of this paper is to assess the accuracy of landslide prediction models developed by the authors. The methodology involved selection of the study area, data acquisition, data processing, model development and data analysis. The models are based on nine landslide-inducing parameters, i.e. slope, land use, lithology, soil properties, geomorphology, flow accumulation, aspect, proximity to river and proximity to road. Rank sum, rating, pairwise comparison and AHP techniques are used to determine the weights for each of the parameters. Four (4) different models, which consider different parameter combinations, are developed by the authors. Results obtained are compared to the landslide history, and the accuracies for Model 1, Model 2, Model 3 and Model 4 are 66.7%, 66.7%, 60% and 22.9% respectively. From the results, rank sum, rating and pairwise comparison can be useful techniques for predicting landslide hazard zones.
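
    One of the weighting techniques named above, pairwise comparison (AHP), can be sketched as follows: weights are the normalized principal eigenvector of a comparison matrix, with a consistency check. The 3x3 matrix below is illustrative, not the authors' data.

```python
# Sketch of AHP weighting: derive parameter weights from a pairwise-comparison
# matrix via the principal eigenvector, plus a consistency-ratio check.
import numpy as np

# A[i, j] = judged importance of factor i relative to factor j (Saaty scale)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                  # normalized AHP weights

# Consistency ratio (random index RI = 0.58 for n = 3)
n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)
print("weights:", np.round(w, 3), "CR:", round(CI / 0.58, 3))
```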

  18. Machine Learning Approach for Prediction and Understanding of Glass-Forming Ability.

    Science.gov (United States)

    Sun, Y T; Bai, H Y; Li, M Z; Wang, W H

    2017-07-20

    The prediction of the glass-forming ability (GFA) by varying the composition of alloys is a challenging problem in glass physics, as well as a problem for industry, with enormous financial ramifications. Although different empirical guides for the prediction of GFA have been established over decades, a comprehensive model or approach that is able to deal with as many variables as possible simultaneously for efficiently predicting good glass formers is still highly desirable. Here, by applying the support vector classification method, we develop models for predicting the GFA of binary metallic alloys from random compositions. The effect of different input descriptors on GFA was evaluated, and the best prediction model was selected, which shows that the information related to liquidus temperatures plays a key role in the GFA of alloys. On the basis of this model, good glass formers can be predicted with high efficiency. The prediction efficiency can be further enhanced by a larger database and refined input descriptor selection. Our findings suggest that machine learning is very powerful and efficient and has great potential for discovering new metallic glasses with good GFA.
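
    A minimal sketch of the approach follows, assuming synthetic alloy descriptors in place of the paper's real composition features (the abstract indicates liquidus-temperature information is the key input).

```python
# Sketch of a support-vector classifier for glass-forming ability from alloy
# descriptors. Descriptors and labels are synthetic stand-ins.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 400
# Hypothetical descriptors: mixing enthalpy, atomic size mismatch, and a
# liquidus-related feature (the kind of input the paper found most informative).
X = rng.normal(size=(n, 3))
y = (0.2 * X[:, 0] - 0.5 * X[:, 1] + 1.5 * X[:, 2] + rng.normal(0, 0.5, n)) > 0

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
```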

  19. Efficient Generation and Selection of Virtual Populations in Quantitative Systems Pharmacology Models.

    Science.gov (United States)

    Allen, R J; Rieger, T R; Musante, C J

    2016-03-01

    Quantitative systems pharmacology models mechanistically describe a biological system and the effect of drug treatment on system behavior. Because these models rarely are identifiable from the available data, the uncertainty in physiological parameters may be sampled to create alternative parameterizations of the model, sometimes termed "virtual patients." In order to reproduce the statistics of a clinical population, virtual patients are often weighted to form a virtual population that reflects the baseline characteristics of the clinical cohort. Here we introduce a novel technique to efficiently generate virtual patients and, from this ensemble, demonstrate how to select a virtual population that matches the observed data without the need for weighting. This approach improves confidence in model predictions by mitigating the risk that spurious virtual patients become overrepresented in virtual populations.
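
    The selection step can be sketched as acceptance/rejection: each virtual patient is kept with probability proportional to the ratio of the observed (target) density to the density of the generated ensemble, so the selected virtual population matches the data without weighting. The distributions below are illustrative.

```python
# Sketch of selecting a virtual population from an ensemble of virtual
# patients by acceptance/rejection rather than weighting. Illustrative only.
import numpy as np
from scipy.stats import norm, gaussian_kde

rng = np.random.default_rng(3)

# Plausible virtual patients: model outputs from sampled parameterizations.
outputs = rng.normal(loc=1.0, scale=1.5, size=5000)

target = norm(loc=0.5, scale=0.8)       # observed clinical distribution
proposal = gaussian_kde(outputs)        # density of the generated ensemble

# Accept each virtual patient with probability proportional to target/proposal.
ratio = target.pdf(outputs) / proposal(outputs)
keep = rng.random(outputs.size) < ratio / ratio.max()
vpop = outputs[keep]
print(f"selected {vpop.size} patients; mean={vpop.mean():.2f}, sd={vpop.std():.2f}")
```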

  20. QoS prediction for web services based on user-trust propagation model

    Science.gov (United States)

    Thinh, Le-Van; Tu, Truong-Dinh

    2017-10-01

    Web service providers and users play an important online role; however, the rapidly growing number of service providers and users means that many web services offer similar functionality. This is an active area of research, and researchers seek to propose solutions for delivering the best service to users. Collaborative filtering (CF) algorithms are widely used in recommendation systems, although these are less effective for cold-start users. Recently, some recommender systems have been developed based on social network models, and the results show that social network models perform better than plain CF, especially for cold-start users. However, most social network-based recommendations do not consider the user's mood. This is a hidden source of information, and is very useful in improving prediction efficiency. In this paper, we introduce a new model called User-Trust Propagation (UTP). The model uses a combination of trust and the mood of users to predict QoS values; matrix factorisation (MF) is used to train the model. The experimental results show that the proposed model gives better accuracy than other models, especially for the cold-start problem.
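
    The matrix-factorisation component can be sketched as plain SGD on observed user x service QoS entries; the trust and mood propagation that distinguishes UTP is not reproduced here, and all data are synthetic.

```python
# Minimal matrix-factorization sketch for QoS prediction: learn user and
# service latent factors by SGD on observed QoS entries (synthetic data).
import numpy as np

rng = np.random.default_rng(4)
n_users, n_services, k = 50, 40, 5
R = np.full((n_users, n_services), np.nan)
obs = [(rng.integers(n_users), rng.integers(n_services)) for _ in range(800)]
for u, s in obs:
    R[u, s] = rng.uniform(0.1, 2.0)          # e.g. observed response times

P = rng.normal(0, 0.1, (n_users, k))         # user factors
Q = rng.normal(0, 0.1, (n_services, k))      # service factors
lr, reg = 0.01, 0.05
for epoch in range(50):
    for u, s in obs:
        err = R[u, s] - P[u] @ Q[s]
        P[u] += lr * (err * Q[s] - reg * P[u])
        Q[s] += lr * (err * P[u] - reg * Q[s])

u, s = obs[0]
print("observed:", round(R[u, s], 3), "predicted:", round(float(P[u] @ Q[s]), 3))
```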

  1. Numerical model for predicting thermodynamic cycle and thermal efficiency of a beta-type Stirling engine with rhombic-drive mechanism

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Chin-Hsiang; Yu, Ying-Ju [Department of Aeronautics and Astronautics, National Cheng Kung University, No. 1, Ta-Shieh Road, Tainan 70101, Taiwan (China)

    2010-11-15

    This study is aimed at development of a numerical model for a beta-type Stirling engine with rhombic-drive mechanism. By taking into account the non-isothermal effects, the effectiveness of the regenerative channel, and the thermal resistance of the heating head, the energy equations for the control volumes in the expansion chamber, the compression chamber, and the regenerative channel can be derived and solved. Meanwhile, a fully developed flow velocity profile in the regenerative channel, in terms of the reciprocating velocity of the displacer and the instantaneous pressure difference between the expansion and the compression chambers, is derived for calculation of the mass flow rate through the regenerative channel. In this manner, the internal irreversibility caused by pressure difference in the two chambers and the viscous shear effects due to the motion of the reciprocating displacer on the fluid flow in the regenerative channel gap are included. Periodic variation of pressures, volumes, temperatures, masses, and heat transfers in the expansion and the compression chambers are predicted. A parametric study of the dependence of the power output and thermal efficiency on the geometrical and physical parameters, involving regenerative gap, distance between two gears, offset distance from the crank to the center of gear, and the heat source temperature, has been performed. (author)

  2. Mental models accurately predict emotion transitions.

    Science.gov (United States)

    Thornton, Mark A; Tamir, Diana I

    2017-06-06

    Successful social interactions depend on people's ability to predict others' future actions and emotions. People possess many mechanisms for perceiving others' current emotional states, but how might they use this information to predict others' future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others' emotional dynamics. People could then use these mental models of emotion transitions to predict others' future emotions from currently observable emotions. To test this hypothesis, studies 1-3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants' ratings of emotion transitions predicted others' experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation-valence, social impact, rationality, and human mind-inform participants' mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants' accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.

  3. Mental models accurately predict emotion transitions

    Science.gov (United States)

    Thornton, Mark A.; Tamir, Diana I.

    2017-01-01

    Successful social interactions depend on people’s ability to predict others’ future actions and emotions. People possess many mechanisms for perceiving others’ current emotional states, but how might they use this information to predict others’ future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others’ emotional dynamics. People could then use these mental models of emotion transitions to predict others’ future emotions from currently observable emotions. To test this hypothesis, studies 1–3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants’ ratings of emotion transitions predicted others’ experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation—valence, social impact, rationality, and human mind—inform participants’ mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants’ accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone. PMID:28533373

  4. Introducing etch kernels for efficient pattern sampling and etch bias prediction

    Science.gov (United States)

    Weisbuch, François; Lutich, Andrey; Schatz, Jirka

    2018-01-01

    Successful patterning requires good control of the photolithography and etch processes. While compact litho models, mainly based on rigorous physics, can predict very well the contours printed in photoresist, pure empirical etch models are less accurate and more unstable. Compact etch models are based on geometrical kernels to compute the litho-etch biases that measure the distance between litho and etch contours. The definition of the kernels, as well as the choice of calibration patterns, is critical to obtaining a robust etch model. This work proposes to define a set of independent and anisotropic etch kernels ("internal, external, curvature, Gaussian, z_profile") designed to represent the finest details of the resist geometry and to characterize precisely the etch bias at any point along a resist contour. By evaluating the etch kernels on various structures, it is possible to map their etch signatures in a multidimensional space and analyze them to find an optimal sampling of structures. The etch kernels evaluated on these structures were combined with experimental etch bias derived from scanning electron microscope contours to train artificial neural networks to predict etch bias. The method, applied to contact and line/space layers, shows an improvement in etch model prediction accuracy over the standard etch model. This work emphasizes the importance of the etch kernel definition for characterizing and predicting complex etch effects.

  5. Modeling heterogeneous (co)variances from adjacent-SNP groups improves genomic prediction for milk protein composition traits

    DEFF Research Database (Denmark)

    Gebreyesus, Grum; Lund, Mogens Sandø; Buitenhuis, Albert Johannes

    2017-01-01

    Accurate genomic prediction requires a large reference population, which is problematic for traits that are expensive to measure. Traits related to milk protein composition are not routinely recorded due to costly procedures and are considered to be controlled by a few quantitative trait loci of large effect. The amount of variation explained may vary between regions leading to heterogeneous (co)variance patterns across the genome. Genomic prediction models that can efficiently take such heterogeneity of (co)variances into account can result in improved prediction reliability. In this study, we developed and implemented novel univariate and bivariate Bayesian prediction models, based on estimates of heterogeneous (co)variances for genome segments (BayesAS). Available data consisted of milk protein composition traits measured on cows and de-regressed proofs of total protein yield derived for bulls...

  6. Development and Analysis of Group Contribution Plus Models for Property Prediction of Organic Chemical Systems

    DEFF Research Database (Denmark)

    Mustaffa, Azizul Azri

    Prediction of properties is important in chemical process-product design. Reliable property models are needed for an increasingly complex and wider range of chemicals. Group-contribution methods provide a useful tool, but there is a need to validate them and improve their accuracy when complex chemicals are present in the mixtures. In accordance with that, a combined group-contribution and atom connectivity approach that is able to extend the application range of property models has been developed for mixture properties. This so-called Group-ContributionPlus (GCPlus) approach is a hybrid model which combines group contributions with atom connectivity indices to estimate group-interaction parameters (GIPs); the estimates for the GIPs are then used in the UNIFAC model to calculate activity coefficients. This approach can increase the application range of any "host" UNIFAC model by providing a reliable predictive model towards fast and efficient product development. This PhD project is focused on the analysis and further...

  7. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Full Text Available Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with low prediction accuracy, which leads to costly maintenance. Although many researchers have developed performance prediction models, prediction accuracy has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Three models, a multivariate nonlinear regression (MNLR) model, an artificial neural network (ANN) model, and a Markov Chain (MC) model, are then tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems to be a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and not explicitly related to quantitative physical parameters. This paper then suggests that the further direction for developing performance prediction models is to combine the advantages of the different models to obtain better accuracy.
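
    The MC model's appeal with limited data can be seen in a small sketch: once a transition matrix between faulting-severity states is estimated from inspections, future condition distributions follow by repeated multiplication. The matrix below is illustrative, not calibrated to any survey data.

```python
# Sketch of a Markov-chain pavement condition model: faulting-severity states
# evolve by an (illustrative) transition matrix estimated from inspections.
import numpy as np

# States: 0 = low, 1 = medium, 2 = high faulting severity.
P = np.array([[0.85, 0.12, 0.03],
              [0.00, 0.80, 0.20],
              [0.00, 0.00, 1.00]])     # deterioration is one-way

state = np.array([1.0, 0.0, 0.0])      # new pavement starts in state 0
for year in range(1, 16):
    state = state @ P
    if year % 5 == 0:
        print(f"year {year}: P(state) = {np.round(state, 3)}")
```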

  8. Sensitivity analysis of predictive models with an automated adjoint generator

    International Nuclear Information System (INIS)

    Pin, F.G.; Oblow, E.M.

    1987-01-01

    The adjoint method is a well established sensitivity analysis methodology that is particularly efficient in large-scale modeling problems. The coefficients of sensitivity of a given response with respect to every parameter involved in the modeling code can be calculated from the solution of a single adjoint run of the code. Sensitivity coefficients provide a quantitative measure of the importance of the model data in calculating the final results. The major drawback of the adjoint method is the requirement for calculations of very large numbers of partial derivatives to set up the adjoint equations of the model. ADGEN is a software system that has been designed to eliminate this drawback and automatically implement the adjoint formulation in computer codes. The ADGEN system will be described and its use for improving performance assessments and predictive simulations will be discussed. 8 refs., 1 fig

  9. Flight Test Maneuvers for Efficient Aerodynamic Modeling

    Science.gov (United States)

    Morelli, Eugene A.

    2011-01-01

    Novel flight test maneuvers for efficient aerodynamic modeling were developed and demonstrated in flight. Orthogonal optimized multi-sine inputs were applied to aircraft control surfaces to excite aircraft dynamic response in all six degrees of freedom simultaneously while keeping the aircraft close to chosen reference flight conditions. Each maneuver was designed for a specific modeling task that cannot be adequately or efficiently accomplished using conventional flight test maneuvers. All of the new maneuvers were first described and explained, then demonstrated on a subscale jet transport aircraft in flight. Real-time and post-flight modeling results obtained using equation-error parameter estimation in the frequency domain were used to show the effectiveness and efficiency of the new maneuvers, as well as the quality of the aerodynamic models that can be identified from the resultant flight data.
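
    The core construction, orthogonal multi-sines, can be sketched by assigning each control surface a disjoint set of harmonics of one base frequency, which makes the inputs mutually orthogonal over the maneuver length. Phases are random here; in practice they are optimized to reduce peak amplitude.

```python
# Sketch of orthogonal multi-sine excitation: each surface gets disjoint
# harmonics of the base frequency 1/T, so inputs are mutually orthogonal.
import numpy as np

T, fs = 20.0, 50                          # maneuver length (s), sample rate (Hz)
t = np.arange(0, T, 1 / fs)
rng = np.random.default_rng(5)

def multisine(harmonics):
    """Sum of unit-amplitude sinusoids at the given harmonics of 1/T."""
    u = sum(np.cos(2 * np.pi * k / T * t + rng.uniform(0, 2 * np.pi))
            for k in harmonics)
    return u / np.max(np.abs(u))          # normalize to unit peak

elevator = multisine([2, 5, 8, 11])       # interleaved harmonic assignments
aileron  = multisine([3, 6, 9, 12])
rudder   = multisine([4, 7, 10, 13])

# Orthogonality over the record length: normalized inner products are ~0.
print("elevator.aileron =", round(float(elevator @ aileron) / t.size, 6))
```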

  10. Lower hybrid current drive: an overview of simulation models, benchmarking with experiment, and predictions for future devices

    International Nuclear Information System (INIS)

    Bonoli, P.T.; Barbato, E.; Imbeaux, F.

    2003-01-01

    This paper reviews the status of lower hybrid current drive (LHCD) simulation and modeling. We first discuss modules used for wave propagation, absorption, and current drive, with particular emphasis placed on comparing exact numerical solutions of the Fokker-Planck equation in two dimensions with solution methods that employ one-dimensional and adjoint approaches. We also survey model predictions for LHCD in past and present experiments, showing detailed comparisons between simulated and observed current drive efficiencies and hard X-ray profiles. Finally, we discuss several model predictions for lower hybrid current profile control in proposed next-step reactor options. (authors)

  11. Modeling the Efficiency of a Germanium Detector

    Science.gov (United States)

    Hayton, Keith; Prewitt, Michelle; Quarles, C. A.

    2006-10-01

    We are using the Monte Carlo Program PENELOPE and the cylindrical geometry program PENCYL to develop a model of the detector efficiency of a planar Ge detector. The detector is used for x-ray measurements in an ongoing experiment to measure electron bremsstrahlung. While we are mainly interested in the efficiency up to 60 keV, the model ranges from 10.1 keV (below the Ge absorption edge at 11.1 keV) to 800 keV. Measurements of the detector efficiency have been made in a well-defined geometry with calibrated radioactive sources: Co-57, Se-75, Ba-133, Am-241 and Bi-207. The model is compared with the experimental measurements and is expected to provide a better interpolation formula for the detector efficiency than simply using x-ray absorption coefficients for the major constituents of the detector. Using PENELOPE, we will discuss several factors, such as Ge dead layer, surface ice layer and angular divergence of the source, that influence the efficiency of the detector.

  12. Unreachable Setpoints in Model Predictive Control

    DEFF Research Database (Denmark)

    Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp

    2008-01-01

    In this work, a new model predictive controller is developed that handles unreachable setpoints better than traditional model predictive control methods. The new controller induces an interesting fast/slow asymmetry in the tracking response of the system. Nominal asymptotic stability of the optimal steady state is established for terminal constraint model predictive control (MPC). The region of attraction is the steerable set. Existing analysis methods for closed-loop properties of MPC are not applicable to this new formulation, and a new analysis method is developed. It is shown how to extend...

  13. Energy Efficiency Modelling of Residential Air Source Heat Pump Water Heater

    Directory of Open Access Journals (Sweden)

    Cong Toan Tran

    2016-03-01

    Full Text Available The heat pump water heater is one of the most energy-efficient technologies for heating water for household use. The present work proposes a simplified model of the coefficient of performance and examines its predictive capability. The model is based on polynomial functions where the variables are temperatures and the coefficients are derived from the Australian standard test data, using regression techniques. The model makes it possible to estimate the coefficient of performance of the same heat pump water heater under other test standards (i.e. the US, Japanese, European and Korean standards). The resulting estimations over a heat-up phase and a full test cycle including a draw-off pattern are in close agreement with the measured data. Thus the model allows manufacturers to avoid the need to carry out physical tests for some standards and to reduce product cost. The limitations of the proposed methodology are also discussed.
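
    A minimal version of the idea, fitting COP as a polynomial in the relevant temperatures and then evaluating it at another standard's rating condition, might look as follows (synthetic data; degree-1 terms for brevity, where the paper's model uses higher-order polynomials).

```python
# Sketch: fit COP as a polynomial in air and water temperatures from one
# standard's test data, then predict at another standard's rating condition.
import numpy as np

rng = np.random.default_rng(6)
t_air = rng.uniform(5, 35, 40)            # ambient air temperature (C)
t_water = rng.uniform(15, 55, 40)         # tank water temperature (C)
cop = 6.0 + 0.06 * t_air - 0.05 * t_water + rng.normal(0, 0.1, 40)

# Least-squares regression on polynomial terms [1, t_air, t_water].
X = np.column_stack([np.ones_like(t_air), t_air, t_water])
coef, *_ = np.linalg.lstsq(X, cop, rcond=None)

# Predict COP at another standard's (hypothetical) condition: 20 C air, 45 C water.
print("predicted COP:", round(coef @ [1.0, 20.0, 45.0], 2))
```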

  14. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere

    International Nuclear Information System (INIS)

    Ma, Denglong; Zhang, Zaoxiao

    2016-01-01

    Highlights: • Intelligent network models were built to predict contaminant gas concentrations. • Improved network models coupled with the Gaussian dispersion model are presented. • The new model has high efficiency and accuracy for concentration prediction. • The new models were applied to identify the leakage source with satisfactory results. - Abstract: A gas dispersion model is important for predicting gas concentrations when contaminant gas leakage occurs. Intelligent network models such as the radial basis function (RBF), back propagation (BP) neural network and support vector machine (SVM) model can be used for gas dispersion prediction. However, the prediction results from these network models, with too many inputs based on original monitoring parameters, are not in good agreement with the experimental data. Therefore, a new series of machine learning algorithm (MLA) models, combining the classic Gaussian model with MLA algorithms, is presented. The prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identifying the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, and network models based on original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method for predicting contaminant gas dispersion, as well as a good forward model for the emission source parameter identification problem.
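
    The hybrid idea can be sketched by feeding the classic Gaussian plume estimate to a regressor as a physics feature alongside the raw inputs. The dispersion-coefficient growth, source parameters and "measurements" below are all illustrative.

```python
# Sketch of the Gaussian-MLA idea: use the Gaussian plume solution as a
# physics feature and let an SVM regressor correct it toward measurements.
import numpy as np
from sklearn.svm import SVR

def gaussian_plume(q, u, x, y, z, h, a=0.08, b=0.06):
    """Gaussian plume concentration for a point source (crude sigma growth)."""
    sy, sz = a * x, b * x
    return (q / (2 * np.pi * u * sy * sz)
            * np.exp(-y**2 / (2 * sy**2))
            * (np.exp(-(z - h)**2 / (2 * sz**2))
               + np.exp(-(z + h)**2 / (2 * sz**2))))

rng = np.random.default_rng(7)
n = 300
x = rng.uniform(50, 500, n); y = rng.uniform(-50, 50, n)
u = rng.uniform(1, 8, n)                       # wind speed
g = gaussian_plume(1.0, u, x, y, z=1.5, h=10.0)
c_meas = g * rng.lognormal(0, 0.3, n)          # "measurements" with model error

# Hybrid: predict measured concentration from the plume estimate + raw inputs.
X = np.column_stack([g, x, y, u])
model = SVR(kernel="rbf", C=10.0).fit(X, c_meas)
print("R^2 on training data:", round(model.score(X, c_meas), 3))
```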

  15. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Denglong [Fuli School of Food Equipment Engineering and Science, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); Zhang, Zaoxiao, E-mail: zhangzx@mail.xjtu.edu.cn [State Key Laboratory of Multiphase Flow in Power Engineering, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); School of Chemical Engineering and Technology, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China)

    2016-07-05

    Highlights: • Intelligent network models were built to predict contaminant gas concentrations. • Improved network models coupled with the Gaussian dispersion model are presented. • The new model has high efficiency and accuracy for concentration prediction. • The new models were applied to identify the leakage source with satisfactory results. - Abstract: A gas dispersion model is important for predicting gas concentrations when contaminant gas leakage occurs. Intelligent network models such as the radial basis function (RBF), back propagation (BP) neural network and support vector machine (SVM) model can be used for gas dispersion prediction. However, the prediction results from these network models, with too many inputs based on original monitoring parameters, are not in good agreement with the experimental data. Therefore, a new series of machine learning algorithm (MLA) models, combining the classic Gaussian model with MLA algorithms, is presented. The prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identifying the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, and network models based on original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method for predicting contaminant gas dispersion, as well as a good forward model for the emission source parameter identification problem.

  16. Demand Management Based on Model Predictive Control Techniques

    Directory of Open Access Journals (Sweden)

    Yasser A. Davizón

    2014-01-01

    Full Text Available Demand management (DM) is the process that helps companies sell the right product to the right customer, at the right time, and for the right price. The challenge for any company is therefore to determine how much to sell, at what price, and to which market segment, while maximizing its profits. DM also helps managers efficiently allocate undifferentiated units of capacity to the available demand with the goal of maximizing revenue. This paper introduces a control system approach to demand management with dynamic pricing (DP) using the model predictive control (MPC) technique. In addition, we present a dynamical system analogy based on active suspension, and a stability analysis is provided via the Lyapunov direct method.

  17. A-DROP: A predictive model for the formation of oil particle aggregates (OPAs)

    Science.gov (United States)

    Zhao, Lin; Boufadel, Michel C.; Geng, Xiaolong; Lee, Kenneth; King, Thomas; Robinson, Brian; Fitzpatrick, Faith A.

    2016-01-01

    Oil–particle interactions play a major role in the removal of free oil from the water column. We present a new conceptual–numerical model, A-DROP, to predict the amount of oil trapped in oil–particle aggregates. A new conceptual formulation of oil–particle coagulation efficiency is introduced to account for the effects of oil stabilization by particles, particle hydrophobicity, and oil–particle size ratio on OPA formation. A-DROP was able to closely reproduce the oil trapping efficiency reported in experimental studies. The model was then used to simulate OPA formation in a typical nearshore environment. Modeling results indicate that an increase of particle concentration in the swash zone would speed up the oil–particle interaction process, but the amount of oil trapped in OPAs did not increase in proportion to particle concentration. The developed A-DROP model could become an important tool in understanding the natural removal of oil and in developing oil spill countermeasures by means of oil–particle aggregation.

  18. Can phenological models predict tree phenology accurately under climate change conditions?

    Science.gov (United States)

    Chuine, Isabelle; Bonhomme, Marc; Legave, Jean Michel; García de Cortázar-Atauri, Inaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry

    2014-05-01

    ...or compromise dormancy break at the species' equatorward range limits, leading to a delay or even an impossibility to flower or set new leaves. These models are classically parameterized with flowering or budburst dates only, with no information on the dormancy break date, because this information is very scarce. We evaluated the efficiency of a set of process-based phenological models in accurately predicting the dormancy break dates of four fruit trees. Our results show that models calibrated solely with flowering or budburst dates do not accurately predict the dormancy break date. Providing the dormancy break date for model parameterization results in much more accurate simulation of the latter, though with a higher error than that on flowering or budburst dates. Most importantly, we also show that models not calibrated with dormancy break dates can generate significant differences in forecasted flowering or budburst dates when using climate scenarios. Our results call for urgent, large-scale measurement of dormancy break dates in forest and fruit trees to yield more robust projections of phenological changes in the near future.

  19. A multi-scale modeling framework for individualized, spatiotemporal prediction of drug effects and toxicological risk

    Directory of Open Access Journals (Sweden)

    Juan Guillermo eDiaz Ochoa

    2013-01-01

    Full Text Available In this study, we focus on a novel multi-scale modeling approach for spatiotemporal prediction of the distribution of substances and the resulting hepatotoxicity by combining cellular models, a 2D liver model, and a whole-body model. As a case study, we focused on predicting human hepatotoxicity upon treatment with acetaminophen based on in vitro toxicity data and potential inter-individual variability in gene expression and enzyme activities. By aggregating mechanistic, genome-based in silico cells into a novel 2D liver model and eventually into a whole-body model, we predicted pharmacokinetic properties, metabolism, and the onset of hepatotoxicity in an in silico patient. Depending on the concentration of acetaminophen in the liver and the accumulation of toxic metabolites, cell integrity in the liver as a function of space and time, as well as changes in the elimination rate of substances, were estimated. We show that the variations in elimination rates also influence the distribution of acetaminophen and its metabolites in the whole body. Our results are in agreement with experimental results. Moreover, the integrated model also predicted variations in drug toxicity depending on alterations of metabolic enzyme activities. Variations in enzyme activity, in turn, reflect genetic characteristics or diseases of individuals. In conclusion, this framework presents an important basis for efficiently integrating inter-individual variability data into models, paving the way for personalized or stratified predictions of drug toxicity and efficacy.

  20. Modeling the distribution of white spruce (Picea glauca) for Alaska with high accuracy: an open access role-model for predicting tree species in last remaining wilderness areas

    Science.gov (United States)

    Bettina Ohse; Falk Huettmann; Stefanie M. Ickert-Bond; Glenn P. Juday

    2009-01-01

    Most wilderness areas still lack accurate distribution information on tree species. We met this need with a predictive GIS modeling approach, using freely available digital data and computer programs to efficiently obtain high-quality species distribution maps. Here we present a digital map with the predicted distribution of white spruce (Picea glauca...

  1. Efficiency of metabolizable energy utilization for maintenance and gain and evaluation of Small Ruminant Nutrition System model in Santa Ines sheep

    Directory of Open Access Journals (Sweden)

    José Gilson Louzada Regadas Filho

    2011-11-01

    Full Text Available This study was carried out to estimate the efficiencies of metabolizable energy utilization for maintenance (km) and weight gain (kg) and to evaluate the Small Ruminant Nutrition System (SRNS) model in predicting dry matter intake and average daily gain of growing Santa Ines sheep. Twenty-four non-castrated Santa Ines sheep, at 50 days of age and with an average body weight of 13.00 ± 0.56 kg, were used. After a 10-day adaptation period, four animals were slaughtered to be used as a reference for estimating the initial empty body weight and body composition of the other animals. The remaining animals were distributed in a random block design, with the treatments consisting of diets containing different levels of metabolizable energy (2.08, 2.28, 2.47 and 2.69 Mcal/kg of DM), with five replicates. The metabolizable energy use efficiencies for maintenance and for weight gain were calculated from the relationship between the dietary net energy for maintenance and gain and the ME concentration in the diets. Evaluation of the SRNS model was performed by fitting a simple linear regression model between the predicted (independent variable) and observed (dependent variable) values. The estimated energy use efficiency for maintenance (km) was 0.70, and that for weight gain (kg) was inversely proportional to the increase in metabolizable energy concentration in the diet. The dry matter intake predicted by the SRNS model did not statistically differ from that observed, but the model overestimated the average daily gain by 5.18%. These results can contribute to the construction of a database, which could be combined with others into a predictive model of performance and feed planning for sheep reared in Brazil.

  2. Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?

    Science.gov (United States)

    Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander

    2016-01-01

    Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random-effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
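
    Written out, the decomposition described above is (notation assumed, following the abstract):

```latex
\mathrm{MSEP}_{\mathrm{uncertain}}(X)
  \;=\; \underbrace{\mathrm{bias}^{2}}_{\text{estimated from hindcasts}}
  \;+\; \underbrace{\operatorname{Var}_{\mathrm{model}}}_{\text{estimated from a simulation experiment}}
```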

  3. Risk terrain modeling predicts child maltreatment.

    Science.gov (United States)

    Daley, Dyann; Bachmann, Michael; Bachmann, Brittany A; Pedigo, Christian; Bui, Minh-Thuy; Coffman, Jamye

    2016-12-01

    As indicated by research on the long-term effects of adverse childhood experiences (ACEs), maltreatment has far-reaching consequences for affected children. Effective prevention measures have been elusive, partly due to difficulty in identifying vulnerable children before they are harmed. This study employs Risk Terrain Modeling (RTM), an analysis of the cumulative effect of environmental factors thought to be conducive for child maltreatment, to create a highly accurate prediction model for future substantiated child maltreatment cases in the City of Fort Worth, Texas. The model is superior to commonly used hotspot predictions and more beneficial in aiding prevention efforts in a number of ways: 1) it identifies the highest risk areas for future instances of child maltreatment with improved precision and accuracy; 2) it aids the prioritization of risk-mitigating efforts by informing about the relative importance of the most significant contributing risk factors; 3) since predictions are modeled as a function of easily obtainable data, practitioners do not have to undergo the difficult process of obtaining official child maltreatment data to apply it; 4) the inclusion of a multitude of environmental risk factors creates a more robust model with higher predictive validity; and, 5) the model does not rely on a retrospective examination of past instances of child maltreatment, but adapts predictions to changing environmental conditions. The present study introduces and examines the predictive power of this new tool to aid prevention efforts seeking to improve the safety, health, and wellbeing of vulnerable children. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  4. Case studies in archaeological predictive modelling

    NARCIS (Netherlands)

    Verhagen, Jacobus Wilhelmus Hermanus Philippus

    2007-01-01

    In this thesis, a collection of papers is put together dealing with various quantitative aspects of predictive modelling and archaeological prospection. Among the issues covered are the effects of survey bias on the archaeological data used for predictive modelling, and the complexities of testing

  5. Nonlinear Fuzzy Model Predictive Control for a PWR Nuclear Power Plant

    Directory of Open Access Journals (Sweden)

    Xiangjie Liu

    2014-01-01

    Full Text Available Reliable power and temperature control in a pressurized water reactor (PWR) nuclear power plant is necessary to guarantee high efficiency and plant safety. Since nuclear plants are quite nonlinear, this paper presents nonlinear fuzzy model predictive control (MPC), incorporating realistic constraints, to realize plant optimization. T-S fuzzy modeling of the nuclear power plant is utilized to approximate the nonlinear plant, based on which the nonlinear MPC controller is devised via a parallel distributed compensation (PDC) scheme in order to solve the nonlinear constrained optimization problem. Improved performance compared to the traditional PID controller for a TMI-type PWR is obtained in the simulation.

  6. Methodology for predicting market transformation due to implementation of energy efficiency standards and labels

    International Nuclear Information System (INIS)

    Mahlia, T.M.I.

    2004-01-01

    There are many papers that have been published on energy efficiency standards and labels. However, a very limited number of articles on the subject have discussed the transformation of appliance energy efficiency in the market after the programs are implemented. This paper is an attempt to investigate the market transformation due to the implementation of minimum energy efficiency standards and energy labels. Even though the paper only investigates room air conditioners as a case study, the method is also applicable to predicting market transformation for other household electrical appliances.

  7. A Model Predictive Control Approach for Fuel Economy Improvement of a Series Hydraulic Hybrid Vehicle

    Directory of Open Access Journals (Sweden)

    Tri-Vien Vu

    2014-10-01

    Full Text Available This study applied a model predictive control (MPC) framework to solve the cruising control problem of a series hydraulic hybrid vehicle (SHHV). The controller regulates not only vehicle velocity, but also engine torque, engine speed, and accumulator pressure to their corresponding reference values. At each time step, a quadratic programming problem is solved within a predictive horizon to obtain the optimal control inputs. The objective is to minimize the output error. This approach ensures that the components operate at high efficiency, thereby improving the total efficiency of the system. The proposed SHHV control system was evaluated under urban and highway driving conditions. By handling constraints and input-output interactions, the MPC-based control system ensures that the system operates safely and efficiently. The fuel economy of the proposed control scheme shows a noticeable improvement in comparison with the PID-based system, in which three Proportional-Integral-Derivative (PID) controllers are used for cruising control.

  8. Structure model of energy efficiency indicators and applications

    International Nuclear Information System (INIS)

    Wu, Li-Ming; Chen, Bai-Sheng; Bor, Yun-Chang; Wu, Yin-Chin

    2007-01-01

    For the purposes of energy conservation and environmental protection, the government of Taiwan has instigated long-term policies to continuously encourage and assist industry in improving the efficiency of energy utilization. While multiple actions have led to practical energy savings to a limited extent, no strong evidence of improvement in energy efficiency was observed from the energy efficiency indicators (EEI) system, according to the annual national energy statistics and survey. A structural analysis of EEI is needed in order to understand the role that energy efficiency plays in the EEI system. This work uses a Taylor series expansion to develop a structure model for EEI at the level of the process sector of industry. The model is developed on the premise that the design parameters of the process are compared with the operational parameters to obtain energy differences. The utilization index of production capability and the variation index of energy utilization are formulated in the model to describe the differences between EEIs. Both qualitative and quantitative methods for the analysis of energy efficiency and energy savings are derived from the model. Through structural analysis, the model showed that, while the performance of EEI is proportional to the process utilization index of production capability, it is possible that energy use may develop in a direction opposite to that of EEI. This helps to explain, at least in part, the inconsistency between EEI and energy savings. An energy-intensive steel plant in Taiwan was selected to show the application of the model. The energy utilization efficiency of the plant was evaluated and the amount of energy that had been saved or over-used in the production process was estimated. Some insights gained from the model outcomes are helpful for further enhancing energy efficiency in the plant.

  9. Gstat: a program for geostatistical modelling, prediction and simulation

    Science.gov (United States)

    Pebesma, Edzer J.; Wesseling, Cees G.

    1998-01-01

    Gstat is a computer program for variogram modelling, and geostatistical prediction and simulation. It provides a generic implementation of the multivariable linear model with trends modelled as a linear function of coordinate polynomials or of user-defined base functions, and independent or dependent, geostatistically modelled, residuals. Simulation in gstat comprises conditional or unconditional (multi-) Gaussian sequential simulation of point values or block averages, or (multi-) indicator sequential simulation. Besides many of the popular options found in other geostatistical software packages, gstat offers the unique combination of (i) an interactive user interface for modelling variograms and generalized covariances (residual variograms), that uses the device-independent plotting program gnuplot for graphical display, (ii) support for several ascii and binary data and map file formats for input and output, (iii) a concise, intuitive and flexible command language, (iv) user customization of program defaults, (v) no built-in limits, and (vi) free, portable ANSI-C source code. This paper describes the class of problems gstat can solve, and addresses aspects of efficiency and implementation, managing geostatistical projects, and relevant technical details.

  10. Predictive modelling for startup and investor relationship based on crowdfunding platform data

    Science.gov (United States)

    Alamsyah, Andry; Buono Asto Nugroho, Tri

    2018-03-01

    Full Text Available A crowdfunding platform is a place where startups publicly present their ideas in order to get their projects funded. Crowdfunding platforms such as Kickstarter are becoming popular today: they provide an efficient way for startups to get funded without liabilities, and they offer a variety of project categories to participate in. A safety procedure is in place to ensure a low-risk environment: the promoted project must reach its funding goal, and if it fails to reach the target, no investment activity takes place. This motivates startups to actively promote and disseminate their project ideas, and it also protects investors from losing money. The objective of this study is to predict the success of proposed projects and to map investor trends using a data mining framework. To achieve this objective, we propose three models. The first model predicts whether a project is going to be successful or will fail, using K-Nearest Neighbour (KNN). The second model predicts the number of successful projects using an Artificial Neural Network (ANN). The third model maps investor trends in project investment using the K-Means clustering algorithm. KNN gives 99.04% model accuracy, the best ANN configuration has 16-14-1 neuron layers and a 0.2 learning rate, and K-Means gives six best-separated clusters. The results of these models can help startups or investors make decisions regarding startup investment.
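
    The first model can be sketched with scikit-learn's KNN classifier on campaign-level features; the features and labels below are synthetic stand-ins for Kickstarter data.

```python
# Sketch of KNN classification of project success from campaign features
# (synthetic data, not actual crowdfunding platform records).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(8)
n = 1000
goal = rng.lognormal(9, 1, n)                 # funding goal
backers = rng.poisson(120, n)                 # number of backers
duration = rng.uniform(10, 60, n)             # campaign days
pledged = backers * rng.lognormal(3.5, 0.4, n)
success = (pledged >= goal).astype(int)       # funded-goal rule

X = np.column_stack([goal, backers, duration])
X_tr, X_te, y_tr, y_te = train_test_split(X, success, random_state=0)
knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
print("accuracy:", round(knn.fit(X_tr, y_tr).score(X_te, y_te), 3))
```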

  11. Fingerprint verification prediction model in hand dermatitis.

    Science.gov (United States)

    Lee, Chew K; Chang, Choong C; Johor, Asmah; Othman, Puwira; Baba, Roshidah

    2015-07-01

    Hand dermatitis-associated fingerprint changes are a significant problem and affect fingerprint verification processes. This study was done to develop a clinically useful prediction model for fingerprint verification in patients with hand dermatitis. A case-control study involving 100 patients with hand dermatitis was conducted. All patients verified their thumbprints against their identity cards. Registered fingerprints were randomized into a model derivation and a model validation group. The predictive model was derived using multiple logistic regression. Validation was done using the goodness-of-fit test. The fingerprint verification prediction model consists of a major criterion (fingerprint dystrophy area of ≥ 25%) and two minor criteria (long horizontal lines and long vertical lines). The presence of the major criterion predicts that verification will almost always fail, while the presence of both minor criteria or of one minor criterion predicts a high or low risk of fingerprint verification failure, respectively. When none of the criteria are met, the fingerprint almost always passes verification. The area under the receiver operating characteristic curve was 0.937, and the goodness-of-fit test showed agreement between the observed and expected numbers (P = 0.26). The derived fingerprint verification failure prediction model is validated and highly discriminatory in predicting the risk of fingerprint verification failure in patients with hand dermatitis. © 2014 The International Society of Dermatology.

  12. A heuristic model for computational prediction of human branch point sequence.

    Science.gov (United States)

    Wen, Jia; Wang, Jue; Zhang, Qing; Guo, Dianjing

    2017-10-24

    Pre-mRNA splicing is the removal of introns from precursor mRNAs (pre-mRNAs) and the concurrent ligation of the flanking exons to generate mature mRNA. This process is catalyzed by the spliceosome, where splicing factor 1 (SF1) specifically recognizes the seven-nucleotide branch point sequence (BPS) and the U2 snRNP later displaces SF1 and binds to the BPS. In mammals, the degeneracy of BPS motifs, together with the lack of a large set of experimentally verified BPSs, complicates the task of BPS prediction in silico. In this paper, we develop a simple and yet efficient heuristic model for human BPS prediction based on a novel scoring scheme, which quantifies the splicing strength of putative BPSs. Candidate BPSs are restricted exclusively to a defined BPS search region to avoid the influence of other elements in the intron, and the prediction accuracy is thereby improved. Moreover, using two types of relative frequencies for human BPS prediction, we demonstrate that our model outperforms other current implementations on experimentally verified human introns. We propose that the binding energy contributes to the molecular recognition involved in human pre-mRNA splicing. In addition, a genome-wide human BPS prediction was carried out. The characteristics of the predicted BPSs are in accordance with experimentally verified human BPSs, and branch site positions relative to the 3'ss and the 5' end of the shortened AGEZ are consistent with the results of published papers. A web server for the BPS predictor is freely available at http://biocomputer.bio.cuhk.edu.hk/BPS .
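
    A stripped-down version of BPS scoring, a position-weight-matrix log-odds score over 7-mers inside a restricted search region, is sketched below. The frequencies are invented, and the paper's binding-energy terms and exact scoring scheme are omitted.

```python
# Sketch: score 7-mer branch point candidates in a search region with a
# position weight matrix (PWM). The PWM counts below are invented.
import numpy as np

bases = "ACGT"
# Hypothetical position frequencies for a 7-nt BPS (branch-point A assumed
# at position 6); rows = positions, columns = A, C, G, T.
pwm = np.array([[.1, .4, .1, .4], [.1, .1, .1, .7], [.2, .3, .2, .3],
                [.1, .6, .1, .2], [.1, .1, .1, .7], [.9, .03, .04, .03],
                [.2, .3, .2, .3]])
background = np.array([.25, .25, .25, .25])
logodds = np.log2(pwm / background)

def best_bps(region):
    """Return the best-scoring 7-mer and its log-odds score in the region."""
    scores = []
    for i in range(len(region) - 6):
        w = region[i:i + 7]
        scores.append((sum(logodds[j, bases.index(b)] for j, b in enumerate(w)), w))
    return max(scores)

# Search only within a window upstream of the 3' splice site (trim the 3' AG).
intron_end = "CCTGACCTTCTCTAACTTGTTTTTCAG"
score, mer = best_bps(intron_end[:-3])
print(f"best BPS candidate: {mer} (score {score:.2f})")
```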

  13. An efficient link prediction index for complex military organization

    Science.gov (United States)

    Fan, Changjun; Liu, Zhong; Lu, Xin; Xiu, Baoxin; Chen, Qing

    2017-03-01

    The quality of information is crucial for decision-makers to judge battlefield situations and design the best operation plans; however, real intelligence data are often incomplete and noisy. Missing-link prediction methods and spurious-link identification algorithms can be applied if the complex military organization is modeled as a complex network in which nodes represent functional units and edges denote communication links. Traditional link prediction methods usually work well on homogeneous networks, but few do so for heterogeneous ones, and a military network is a typical heterogeneous network with different types of nodes and edges. In this paper, we propose a combined link prediction index considering both node-type effects and nodes' structural similarities, and demonstrate that it is remarkably superior to all 25 existing similarity-based methods, both in predicting missing links and in identifying spurious links, on real military network data. We also investigated the algorithms' robustness in noisy environments, and found that mistaken information is more misleading than incomplete information in military settings, which differs from recommendation systems; our method maintained the best performance under conditions of small noise. Since real military network intelligence must be carefully checked first because of its significance, with link prediction methods then adopted to purify the network of the remaining latent noise, the method proposed here is applicable in real situations. Finally, as the FINC-E model, used here to describe complex military organizations, is also suitable for many other social organizations, such as criminal networks and business organizations, our method has prospects in these areas for many tasks, such as detecting underground relationships between terrorists and predicting potential business markets for decision-makers.
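
    A toy version of a type-aware similarity index follows: common-neighbour counts scaled by how compatible the endpoint node types are. The graph, the types, and the compatibility table are invented; the paper's combined index over FINC-E networks is more elaborate.

```python
# Sketch of a type-aware link prediction score: common neighbours weighted
# by an (invented) compatibility of the endpoint node types.
import itertools
import networkx as nx

G = nx.Graph()
G.add_edges_from([("HQ", "S1"), ("HQ", "S2"), ("S1", "F1"),
                  ("S2", "F1"), ("S1", "F2"), ("S2", "F2")])
ntype = {"HQ": "decision", "S1": "sensor", "S2": "sensor",
         "F1": "influence", "F2": "influence"}

# Hypothetical compatibility for each sorted pair of endpoint types.
compat = {("decision", "influence"): 1.0,
          ("decision", "sensor"): 0.8,
          ("decision", "decision"): 0.5,
          ("influence", "influence"): 0.2,
          ("influence", "sensor"): 0.9,
          ("sensor", "sensor"): 0.3}

def score(u, v):
    """Common-neighbour count scaled by endpoint type compatibility."""
    cn = len(set(G[u]) & set(G[v]))
    return cn * compat[tuple(sorted((ntype[u], ntype[v])))]

for u, v in itertools.combinations(G.nodes, 2):
    if not G.has_edge(u, v):
        print(f"{u}-{v}: {score(u, v):.2f}")
```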

  14. Estimation of uncertainties in predictions of environmental transfer models: evaluation of methods and application to CHERPAC

    International Nuclear Information System (INIS)

    Koch, J.; Peterson, S-R.

    1995-10-01

    Models used to simulate environmental transfer of radionuclides typically include many parameters, the values of which are uncertain. An estimation of the uncertainty associated with the predictions is therefore essential. Different methods to quantify the uncertainty in predictions due to parameter uncertainties are reviewed. A statistical approach using random sampling techniques is recommended for complex models with many uncertain parameters. In this approach, the probability density function of the model output is obtained from multiple realizations of the model according to a multivariate random sample of the different input parameters. Sampling efficiency can be improved by using a stratified scheme (Latin Hypercube Sampling). Sample size can also be restricted when statistical tolerance limits need to be estimated. Methods to rank parameters according to their contribution to uncertainty in the model prediction are also reviewed. Recommended are measures of sensitivity, correlation and regression coefficients that can be calculated on values of input and output variables generated during the propagation of uncertainties through the model. A parameter uncertainty analysis is performed for the CHERPAC food chain model, which estimates subjective confidence limits and intervals on the predictions at a 95% confidence level. A sensitivity analysis is also carried out using partial rank correlation coefficients. This identifies and ranks the parameters which are the main contributors to uncertainty in the predictions, thereby guiding further research efforts. (author). 44 refs., 2 tabs., 4 figs.
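
    The recommended stratified scheme can be sketched with SciPy's Latin Hypercube sampler: stratified uniforms are mapped through each parameter's distribution, the model is run per sample, and confidence limits are read off the output quantiles. The toy transfer model and lognormal parameters below are illustrative, not CHERPAC's.

```python
# Sketch of parameter uncertainty propagation with Latin Hypercube Sampling.
import numpy as np
from scipy.stats import qmc, lognorm

def transfer_model(deposition, transfer_factor, loss_rate):
    """Toy food-chain prediction: activity reaching the receptor."""
    return deposition * transfer_factor / loss_rate

sampler = qmc.LatinHypercube(d=3, seed=0)
u = sampler.random(n=1000)                    # stratified uniforms in [0,1)^3

# Map uniforms through each parameter's (assumed lognormal) distribution.
deposition = lognorm(s=0.5, scale=1.0).ppf(u[:, 0])
tf = lognorm(s=0.8, scale=0.02).ppf(u[:, 1])
loss = lognorm(s=0.3, scale=0.1).ppf(u[:, 2])

pred = transfer_model(deposition, tf, loss)
lo, hi = np.percentile(pred, [2.5, 97.5])
print(f"95% confidence interval on the prediction: [{lo:.3f}, {hi:.3f}]")
```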

  15. Estimation of uncertainties in predictions of environmental transfer models: evaluation of methods and application to CHERPAC

    Energy Technology Data Exchange (ETDEWEB)

    Koch, J. [Israel Atomic Energy Commission, Yavne (Israel). Soreq Nuclear Research Center; Peterson, S-R.

    1995-10-01

    Models used to simulate environmental transfer of radionuclides typically include many parameters, the values of which are uncertain. An estimation of the uncertainty associated with the predictions is therefore essential. Different methods to quantify the uncertainty in predictions arising from parameter uncertainties are reviewed. A statistical approach using random sampling techniques is recommended for complex models with many uncertain parameters. In this approach, the probability density function of the model output is obtained from multiple realizations of the model according to a multivariate random sample of the different input parameters. Sampling efficiency can be improved by using a stratified scheme (Latin Hypercube Sampling). Sample size can also be restricted when statistical tolerance limits need to be estimated. Methods to rank parameters according to their contribution to uncertainty in the model prediction are also reviewed. Recommended are measures of sensitivity, correlation and regression coefficients that can be calculated on values of input and output variables generated during the propagation of uncertainties through the model. A parameter uncertainty analysis is performed for the CHERPAC food chain model, estimating subjective confidence limits and intervals on the predictions at a 95% confidence level. A sensitivity analysis is also carried out using partial rank correlation coefficients. This identifies and ranks the parameters which are the main contributors to uncertainty in the predictions, thereby guiding further research efforts. (author). 44 refs., 2 tabs., 4 figs.

  16. Finding Furfural Hydrogenation Catalysts via Predictive Modelling.

    Science.gov (United States)

    Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi

    2010-09-10

    We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes was synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (k(H):k(D)=1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R(2)=0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model's predictions, demonstrating the validity and value of predictive modelling in catalyst optimization.

  17. Model Predictive Control for Smart Energy Systems

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus

    … pumps, heat tanks, electrical vehicle battery charging/discharging, wind farms, power plants). 2. Embed forecasting methodologies for the weather (e.g. temperature, solar radiation), the electricity consumption, and the electricity price in a predictive control system. 3. Develop optimization algorithms. … Chapter 3 introduces Model Predictive Control (MPC), including state estimation, filtering and prediction for linear models. Chapter 4 simulates the models from Chapter 2 with the certainty-equivalent MPC from Chapter 3. An economic MPC minimizes the costs of consumption based on real electricity prices … that determined the flexibility of the units. A predictive control system easily handles constraints, e.g. limitations in power consumption, and predicts the future behavior of a unit by integrating predictions of electricity prices, consumption, and weather variables. The simulations demonstrate the expected …

  18. Developing a local least-squares support vector machines-based neuro-fuzzy model for nonlinear and chaotic time series prediction.

    Science.gov (United States)

    Miranian, A; Abdollahzade, M

    2013-02-01

    Local modeling approaches, owing to their ability to model different operating regimes of nonlinear systems and processes by independent local models, seem appealing for modeling, identification, and prediction applications. In this paper, we propose a local neuro-fuzzy (LNF) approach based on least-squares support vector machines (LSSVMs). The proposed LNF approach employs LSSVMs, which are powerful in modeling and predicting time series, as local models, and uses a hierarchical binary tree (HBT) learning algorithm for fast and efficient estimation of its parameters. The HBT algorithm heuristically partitions the input space into smaller subdomains by axis-orthogonal splits. In each partition, the validity functions automatically form a unity partition, and therefore normalization side effects, e.g., reactivation, are prevented. Integration of LSSVMs into the LNF network as local models, along with the HBT learning algorithm, yields a high-performance approach for modeling and prediction of complex nonlinear time series. The proposed approach is applied to the modeling and prediction of different nonlinear and chaotic real-world and hand-designed systems and time series. Analysis of the prediction results and comparisons with recent and earlier studies demonstrate the promising performance of the proposed LNF approach with the HBT learning algorithm for modeling and prediction of nonlinear and chaotic systems and time series.
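    The LSSVM building block reduces training to a single linear system, which is what makes it cheap enough to serve as a local model. The sketch below fits one global LSSVM with an RBF kernel for one-step prediction of a logistic-map series; the paper's hierarchical binary-tree partitioning into local models is not reproduced, and the hyperparameters are arbitrary.

```python
import numpy as np

def rbf(X1, X2, sigma=1.0):
    """RBF kernel matrix between two sets of row vectors."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Least-squares SVM regression: one linear system instead of a QP."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]          # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    return rbf(X_new, X_train, sigma) @ alpha + b

# One-step-ahead prediction of a toy chaotic series (logistic map).
x = np.empty(300)
x[0] = 0.3
for t in range(299):
    x[t + 1] = 3.9 * x[t] * (1 - x[t])

X, y = x[:-1].reshape(-1, 1), x[1:]     # single-lag embedding for simplicity
b, alpha = lssvm_fit(X[:200], y[:200])
pred = lssvm_predict(X[:200], b, alpha, X[200:])
print("test RMSE:", float(np.sqrt(np.mean((pred - y[200:]) ** 2))))
```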

  19. Prediction skill of rainstorm events over India in the TIGGE weather prediction models

    Science.gov (United States)

    Karuna Sagar, S.; Rajeevan, M.; Vijaya Bhaskara Rao, S.; Mitra, A. K.

    2017-12-01

    Extreme rainfall events pose a serious threat of severe flooding in many countries worldwide. Advance prediction of their occurrence and spatial distribution is therefore essential. In this paper, an analysis has been made to assess the skill of numerical weather prediction models in predicting rainstorms over India. Using a gridded daily rainfall data set and objective criteria, 15 rainstorms were identified during the monsoon season (June to September). The analysis was made using three TIGGE (The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble) models: the European Centre for Medium-Range Weather Forecasts (ECMWF), the National Centers for Environmental Prediction (NCEP) and the UK Met Office (UKMO). Verification of the TIGGE models for 43 observed rainstorm days from the 15 rainstorm events has been made for the period 2007-2015. The comparison reveals that rainstorm events are predictable up to 5 days in advance, although with a bias in spatial distribution and intensity. Statistical parameters such as the mean error (ME) or bias, root mean square error (RMSE) and correlation coefficient (CC) have been computed over the rainstorm region using the multi-model ensemble (MME) mean. The study reveals that the spread is large in ECMWF and UKMO, followed by the NCEP model. Though the ensemble spread is quite small in NCEP, the ensemble member averages are not well predicted. The rank histograms suggest that the forecasts under-predict the rainfall. The modified Contiguous Rain Area (CRA) technique was used to verify the spatial as well as the quantitative skill of the TIGGE models. Overall, the contribution from the displacement and pattern errors to the total RMSE is found to dominate. The volume error increases from the 24 h forecast to the 48 h forecast in all three models.
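    The verification scores named in the record (mean error/bias, RMSE, correlation coefficient) are straightforward to compute over a rainstorm region, as sketched below on synthetic fields. The member fields and their biases are random stand-ins, and the multi-model ensemble (MME) mean is simply the average of the member forecasts.

```python
import numpy as np

def verify(forecast, observed):
    """Mean error (bias), RMSE and correlation coefficient over a region."""
    f, o = np.ravel(forecast), np.ravel(observed)
    return (float(np.mean(f - o)),
            float(np.sqrt(np.mean((f - o) ** 2))),
            float(np.corrcoef(f, o)[0, 1]))

# Synthetic stand-in rainfall fields; three "members" with different biases.
rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 20.0, size=(50, 50))
members = [obs + rng.normal(m, 15.0, obs.shape) for m in (5.0, -3.0, 1.0)]
mme = np.mean(members, axis=0)          # multi-model ensemble mean

for name, fc in zip(["model A", "model B", "model C", "MME"], members + [mme]):
    me, rmse, cc = verify(fc, obs)
    print(f"{name}: ME={me:+.2f} RMSE={rmse:.2f} CC={cc:.3f}")
```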

  20. Predicting climate-induced range shifts: model differences and model reliability.

    Science.gov (United States)

    Joshua J. Lawler; Denis White; Ronald P. Neilson; Andrew R. Blaustein

    2006-01-01

    Predicted changes in the global climate are likely to cause large shifts in the geographic ranges of many plant and animal species. To date, predictions of future range shifts have relied on a variety of modeling approaches with different levels of model accuracy. Using a common data set, we investigated the potential implications of alternative modeling approaches for...

  1. Atlas : A library for numerical weather prediction and climate modelling

    Science.gov (United States)

    Deconinck, Willem; Bauer, Peter; Diamantakis, Michail; Hamrud, Mats; Kühnlein, Christian; Maciel, Pedro; Mengaldo, Gianmarco; Quintino, Tiago; Raoult, Baudouin; Smolarkiewicz, Piotr K.; Wedi, Nils P.

    2017-11-01

    The algorithms underlying numerical weather prediction (NWP) and climate models that have been developed in the past few decades face an increasing challenge caused by the paradigm shift imposed by hardware vendors towards more energy-efficient devices. In order to provide a sustainable path to exascale High Performance Computing (HPC), applications become increasingly restricted by energy consumption. As a result, the emerging diverse and complex hardware solutions have a large impact on the programming models traditionally used in NWP software, triggering a rethink of design choices for future massively parallel software frameworks. In this paper, we present Atlas, a new software library that is currently being developed at the European Centre for Medium-Range Weather Forecasts (ECMWF), with the aim of handling data structures required for NWP applications in a flexible and massively parallel way. Atlas provides a versatile framework for the future development of efficient NWP and climate applications on emerging HPC architectures. The applications range from full Earth system models to specific tools required for post-processing weather forecast products. The Atlas library thus constitutes a step towards affordable exascale high-performance simulations by providing the necessary abstractions that facilitate deployment in heterogeneous HPC environments while promoting the co-design of NWP algorithms with the underlying hardware.

  2. Predictive Modeling of a Paradigm Mechanical Cooling Tower Model: II. Optimal Best-Estimate Results with Reduced Predicted Uncertainties

    Directory of Open Access Journals (Sweden)

    Ruixian Fang

    2016-09-01

    Full Text Available This work uses the adjoint sensitivity model of the counter-flow cooling tower derived in the accompanying PART I to obtain the expressions and relative numerical rankings of the sensitivities, to all model parameters, of the following model responses: (i outlet air temperature; (ii outlet water temperature; (iii outlet water mass flow rate; and (iv air outlet relative humidity. These sensitivities are subsequently used within the “predictive modeling for coupled multi-physics systems” (PM_CMPS methodology to obtain explicit formulas for the predicted optimal nominal values for the model responses and parameters, along with reduced predicted standard deviations for the predicted model parameters and responses. These explicit formulas embody the assimilation of experimental data and the “calibration” of the model’s parameters. The results presented in this work demonstrate that the PM_CMPS methodology reduces the predicted standard deviations to values that are smaller than either the computed or the experimentally measured ones, even for responses (e.g., the outlet water flow rate for which no measurements are available. These improvements stem from the global characteristics of the PM_CMPS methodology, which combines all of the available information simultaneously in phase-space, as opposed to combining it sequentially, as in current data assimilation procedures.

  3. Model predictive control classical, robust and stochastic

    CERN Document Server

    Kouvaritakis, Basil

    2016-01-01

    For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered and the state of the art in computationally tractable methods based on uncertainty tubes presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...

  4. An integrated numerical model for the prediction of Gaussian and billet shapes

    International Nuclear Information System (INIS)

    Hattel, J.H.; Pryds, N.H.; Pedersen, T.B.

    2004-01-01

    Separate models for the atomisation and the deposition stages were recently integrated by the authors to form a unified model describing the entire spray-forming process. In the present paper, the focus is on describing the shape of the deposited material during the spray-forming process, obtained by this model. After a short review of the models and their coupling, the important factors which influence the resulting shape, i.e. Gaussian or billet, are addressed. The key parameters, which are utilized to predict the geometry and dimension of the deposited material, are the sticking efficiency and the shading effect for Gaussian and billet shape, respectively. From the obtained results, the effect of these parameters on the final shape is illustrated

  5. Neuro-fuzzy modelling of hydro unit efficiency

    International Nuclear Information System (INIS)

    Iliev, Atanas; Fushtikj, Vangel

    2003-01-01

    This paper presents a neuro-fuzzy method for modeling hydro unit efficiency. The proposed method exploits the characteristics of fuzzy systems as universal function approximators, as well as the ability of neural networks to adapt the parameters of the membership functions and the rules in the consequent part of the developed fuzzy system. The developed method is applied in practice to model the efficiency of a unit to be installed in the Kozjak hydro power plant. A comparison of the performance of the derived neuro-fuzzy method with several classical polynomial models is also performed. (Author)

  6. Identifying the Gene Signatures from Gene-Pathway Bipartite Network Guarantees the Robust Model Performance on Predicting the Cancer Prognosis

    Directory of Open Access Journals (Sweden)

    Li He

    2014-01-01

    Full Text Available For the purpose of improving the prediction of cancer prognosis in clinical research, various algorithms have been developed to construct predictive models with the gene signatures detected by DNA microarrays. Due to the heterogeneity of clinical samples, the list of differentially expressed genes (DEGs) generated by statistical methods or machine learning algorithms often involves a number of false positive genes, which are not associated with the phenotypic differences between the compared clinical conditions, and this subsequently impacts the reliability of the predictive models. In this study, we proposed a strategy, which combined a statistical algorithm with gene-pathway bipartite networks, to generate reliable lists of cancer-related DEGs, and constructed models using support vector machines for predicting the prognosis of three types of cancers, namely breast cancer, acute myeloid leukemia, and glioblastoma. Our results demonstrated that, combined with the gene-pathway bipartite networks, our proposed strategy can efficiently generate reliable cancer-related DEG lists for constructing predictive models. In addition, the model performance in the swap analysis was similar to that in the original analysis, indicating the robustness of the models in predicting cancer outcomes.

  7. Analytical model for vibration prediction of two parallel tunnels in a full-space

    Science.gov (United States)

    He, Chao; Zhou, Shunhua; Guo, Peijun; Di, Honggui; Zhang, Xiaohui

    2018-06-01

    This paper presents a three-dimensional analytical model for the prediction of ground vibrations from two parallel tunnels embedded in a full-space. The two tunnels are modelled as cylindrical shells of infinite length, and the surrounding soil is modelled as a full-space with two cylindrical cavities. A virtual interface is introduced to divide the soil into a right layer and a left layer. By transforming the cylindrical waves into plane waves, the solution of wave propagation in the full-space with two cylindrical cavities is obtained. The transformations from plane waves back to cylindrical waves are then used to satisfy the boundary conditions on the tunnel-soil interfaces. The proposed model provides a highly efficient tool to predict the ground vibration induced by underground railways, which accounts for the dynamic interaction between neighbouring tunnels. An analysis of the vibration fields produced over a range of frequencies and soil properties is conducted. When the distance between the two tunnels is smaller than three times the tunnel diameter, the interaction between neighbouring tunnels is highly significant, at times of the order of 20 dB. It is therefore necessary to consider the interaction between neighbouring tunnels when predicting ground vibrations induced by underground railways.

  8. Model predictive Controller for Mobile Robot

    OpenAIRE

    Alireza Rezaee

    2017-01-01

    This paper proposes a Model Predictive Controller (MPC) for the control of a P2AT mobile robot. MPC refers to a group of controllers that employ an explicit model of the process to predict its future behavior over an extended prediction horizon. The design of an MPC is formulated as an optimal control problem. This problem is then cast as a linear quadratic regulator (LQR) problem and solved by making use of the Riccati equation. To show the effectiveness of the proposed method this controller is...
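    A compact sketch of the LQR step the abstract mentions: iterate the discrete-time Riccati equation to obtain the state-feedback gain. The lateral-tracking linearization, weights and time step below are assumptions for illustration, not the paper's P2AT model.

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via fixed-point iteration of the Riccati equation."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Hypothetical linearization: state = [lateral error, heading error],
# input = steering rate, forward speed v = 1 m/s, sample time dt = 0.1 s.
dt, v = 0.1, 1.0
A = np.array([[1.0, v * dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.diag([1.0, 0.1])
R = np.array([[0.01]])

K = dlqr(A, B, Q, R)
x = np.array([0.5, 0.2])              # initial tracking error
for _ in range(50):
    x = A @ x + B @ (-K @ x)          # closed-loop update
print("gain:", np.round(K, 3))
print("error norm after 5 s:", round(float(np.linalg.norm(x)), 4))
```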

  9. Deep Predictive Models in Interactive Music

    OpenAIRE

    Martin, Charles P.; Ellefsen, Kai Olav; Torresen, Jim

    2018-01-01

    Automatic music generation is a compelling task where much recent progress has been made with deep learning models. In this paper, we ask how these models can be integrated into interactive music systems; how can they encourage or enhance the music making of human users? Musical performance requires prediction to operate instruments, and perform in groups. We argue that predictive models could help interactive systems to understand their temporal context, and ensemble behaviour. Deep learning...

  10. Risk prediction model: Statistical and artificial neural network approach

    Science.gov (United States)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand the suitable approaches for developing and validating them. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was carried out. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, artificial neural network approaches to developing prediction models were more accurate than statistical approaches. However, only limited published literature currently discusses which approach is more accurate for risk prediction model development.

  11. A review of model predictive control: moving from linear to nonlinear design methods

    International Nuclear Information System (INIS)

    Nandong, J.; Samyudia, Y.; Tade, M.O.

    2006-01-01

    Linear model predictive control (LMPC) is now considered an industrial control standard in the process industry. Its extension to nonlinear cases, however, has not yet gained wide acceptance for many reasons, e.g. the excessively heavy computational load and effort, which prevent its practical implementation in real-time control. The application of nonlinear MPC (NMPC) is advantageous for processes with strong nonlinearity or when the operating points are frequently moved from one set point to another due to, for instance, changes in market demands. Much effort has been dedicated towards improving the computational efficiency of NMPC as well as its stability analysis. This paper provides a review of alternative ways of extending linear MPC to the nonlinear case. We also highlight the critical issues pertinent to the applications of NMPC and discuss possible solutions to address these issues. In addition, we outline the future research trend in the area of model predictive control by emphasizing the potential applications of multi-scale process models within NMPC

  12. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.

  13. A predictive thermal dynamic model for parameter generation in the laser assisted direct write process

    International Nuclear Information System (INIS)

    Shang Shuo; Fearon, Eamonn; Wellburn, Dan; Sato, Taku; Edwardson, Stuart; Dearden, G; Watkins, K G

    2011-01-01

    The laser assisted direct write (LADW) method can be used to generate electrical circuitry on a substrate by depositing metallic ink and curing the ink thermally by a laser. Laser curing has emerged over recent years as a novel yet efficient alternative to oven curing. This method can be used in situ, over complicated 3D contours of large parts (e.g. aircraft wings) and selectively cure over heat sensitive substrates, with little or no thermal damage. In previous studies, empirical methods have been used to generate processing windows for this technique, relating to the several interdependent processing parameters on which the curing quality and efficiency strongly depend. Incorrect parameters can result in a track that is cured in some areas and uncured in others, or in damaged substrates. This paper addresses the strong need for a quantitative model which can systematically output the processing conditions for a given combination of ink, substrate and laser source; transforming the LADW technique from a purely empirical approach, to a simple, repeatable, mathematically sound, efficient and predictable process. The method comprises a novel and generic finite element model (FEM) that for the first time predicts the evolution of the thermal profile of the ink track during laser curing and thus generates a parametric map which indicates the most suitable combination of parameters for process optimization. Experimental data are compared with simulation results to verify the accuracy of the model.

  14. Predictive models of moth development

    Science.gov (United States)

    Degree-day models link ambient temperature to insect life-stages, making such models valuable tools in integrated pest management. These models increase management efficacy by predicting pest phenology. In Wisconsin, the top insect pest of cranberry production is the cranberry fruitworm, Acrobasis v...
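    Degree-day models reduce to simple arithmetic: accumulate daily heat units above a base temperature until a stage-specific threshold is crossed. The base temperature, daily readings and threshold below are illustrative assumptions, not cranberry fruitworm parameters.

```python
def degree_days(t_min, t_max, base=10.0):
    """Daily degree-days by the simple average method."""
    return max(0.0, (t_min + t_max) / 2.0 - base)

# Accumulate degree-days and flag when a (hypothetical) phenological
# threshold for a pest life stage is crossed.
daily = [(8, 18), (10, 22), (12, 25), (11, 24), (9, 20)]   # (min, max) in C
threshold = 25.0    # illustrative value only

total = 0.0
for day, (lo, hi) in enumerate(daily, start=1):
    total += degree_days(lo, hi)
    if total >= threshold:
        print(f"threshold reached on day {day} ({total:.1f} DD)")
        break
```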

  15. Model Prediction Control For Water Management Using Adaptive Prediction Accuracy

    NARCIS (Netherlands)

    Tian, X.; Negenborn, R.R.; Van Overloop, P.J.A.T.M.; Mostert, E.

    2014-01-01

    In the field of operational water management, Model Predictive Control (MPC) has gained popularity owing to its versatility and flexibility. The MPC controller, which takes predictions, time delay and uncertainties into account, can be designed for multi-objective management problems and for

  16. Predicting water main failures using Bayesian model averaging and survival modelling approach

    International Nuclear Information System (INIS)

    Kabir, Golam; Tesfamariam, Solomon; Sadiq, Rehan

    2015-01-01

    To develop an effective preventive or proactive repair and replacement action plan, water utilities often rely on water main failure prediction models. However, in predicting the failure of water mains, uncertainty is inherent regardless of the quality and quantity of data used in the model. To improve the understanding of water main failure, a Bayesian framework is developed for predicting the failure of water mains while considering uncertainties. In this study, the Bayesian model averaging method (BMA) is presented to identify the influential pipe-dependent and time-dependent covariates considering model uncertainties, whereas the Bayesian Weibull Proportional Hazard Model (BWPHM) is applied to develop the survival curves and to predict the failure rates of water mains. To validate the proposed framework, it is implemented to predict the failure of cast iron (CI) and ductile iron (DI) pipes of the water distribution network of the City of Calgary, Alberta, Canada. Results indicate that the predicted 95% uncertainty bounds of the proposed BWPHMs effectively capture the observed breaks for both CI and DI water mains. Moreover, the proposed BWPHMs perform better than the Cox Proportional Hazard Model (Cox-PHM), as they consider a Weibull distribution for the baseline hazard function and account for model uncertainties. - Highlights: • Prioritize rehabilitation and replacement (R/R) strategies for water mains. • Consider the uncertainties in the failure prediction. • Improve the prediction capability of water main failure models. • Identify the influential and appropriate covariates for different models. • Determine the effects of the covariates on failure
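    The survival-curve side of the record is compact enough to sketch: a Weibull proportional-hazards model scales a Weibull baseline hazard by exp(x·beta). All numbers below (shape, scale, covariates, coefficients) are invented placeholders, not the Calgary network's fitted values.

```python
import numpy as np

def weibull_ph_survival(t, shape, scale, x, beta):
    """Survival curve of a Weibull proportional-hazards model:
    S(t|x) = exp(-(t/scale)**shape * exp(x . beta))."""
    return np.exp(-((t / scale) ** shape) * np.exp(np.dot(x, beta)))

# Hypothetical fitted values for two pipe cohorts (all numbers invented).
t = np.linspace(0, 80, 9)                  # pipe age, years
beta = np.array([0.04, -0.30])             # [length_km, diameter_m] effects
ci_pipe = np.array([1.2, 0.15])            # cast iron covariates
di_pipe = np.array([0.8, 0.30])            # ductile iron covariates

for name, x, (shape, scale) in [("CI", ci_pipe, (1.8, 60.0)),
                                ("DI", di_pipe, (1.4, 90.0))]:
    print(name, np.round(weibull_ph_survival(t, shape, scale, x, beta), 3))
```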

  17. Modeling of Failure Prediction Bayesian Network with Divide-and-Conquer Principle

    Directory of Open Access Journals (Sweden)

    Zhiqiang Cai

    2014-01-01

    Full Text Available For system failure prediction, automatic modeling from historical failure datasets is one of the challenges in practical engineering fields. In this paper, an effective algorithm is proposed to build a failure prediction Bayesian network (FPBN) model with data mining technology. First, the concept of the FPBN is introduced to describe the states of components and systems and the cause-effect relationships among them. The types of network nodes, the directions of network edges, and the conditional probability distributions (CPDs) of nodes in the FPBN are discussed in detail. According to the characteristics of nodes and edges in the FPBN, a divide-and-conquer principle based algorithm (FPBN-DC) is introduced to build the best FPBN network structures for different types of nodes separately. Then, the CPDs of nodes in the FPBN are calculated by the maximum likelihood estimation method based on the built network. Finally, a simulation study of a helicopter convertor model is carried out to demonstrate the application of FPBN-DC. According to the simulation results, the FPBN-DC algorithm achieves a better fitness value with fewer iterations, which verifies its effectiveness and efficiency compared with the traditional algorithm.

  18. Epidemiology of Mild Traumatic Brain Injury with Intracranial Hemorrhage: Focusing Predictive Models for Neurosurgical Intervention.

    Science.gov (United States)

    Orlando, Alessandro; Levy, A Stewart; Carrick, Matthew M; Tanner, Allen; Mains, Charles W; Bar-Or, David

    2017-11-01

    To outline differences in neurosurgical intervention (NI) rates between intracranial hemorrhage (ICH) types in mild traumatic brain injuries and help identify which ICH types are most likely to benefit from creation of predictive models for NI. A multicenter retrospective study of adult patients spanning 3 years at 4 U.S. trauma centers was performed. Patients were included if they presented with mild traumatic brain injury (Glasgow Coma Scale score 13-15) with head CT scan positive for ICH. Patients were excluded for skull fractures, "unspecified hemorrhage," or coagulopathy. Primary outcome was NI. Stepwise multivariable logistic regression models were built to analyze the independent association between ICH variables and outcome measures. The study comprised 1876 patients. NI rate was 6.7%. There was a significant difference in rate of NI by ICH type. Subdural hematomas had the highest rate of NI (15.5%) and accounted for 78% of all NIs. Isolated subarachnoid hemorrhages had the lowest, nonzero, NI rate (0.19%). Logistic regression models identified ICH type as the most influential independent variable when examining NI. A model predicting NI for isolated subarachnoid hemorrhages would require 26,928 patients, but a model predicting NI for isolated subdural hematomas would require only 328 patients. This study highlighted disparate NI rates among ICH types in patients with mild traumatic brain injury and identified mild, isolated subdural hematomas as most appropriate for construction of predictive NI models. Increased health care efficiency will be driven by accurate understanding of risk, which can come only from accurate predictive models. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. ARA and ARI imperfect repair models: Estimation, goodness-of-fit and reliability prediction

    International Nuclear Information System (INIS)

    Toledo, Maria Luíza Guerra de; Freitas, Marta A.; Colosimo, Enrico A.; Gilardoni, Gustavo L.

    2015-01-01

    An appropriate maintenance policy is essential to reduce expenses and risks related to equipment failures. A fundamental aspect to be considered when specifying such policies is the ability to predict the reliability of the systems under study, based on a well-fitted model. In this paper, the model classes Arithmetic Reduction of Age and Arithmetic Reduction of Intensity are explored. Likelihood functions for such models are derived, and a graphical method is proposed for model selection. A real data set involving failures in trucks used by a Brazilian mining company is analyzed, considering models with different memories. The parameters of the best fitted model, namely the shape and scale of the Power Law Process and the repair efficiency, were estimated. Estimation of the model parameters allowed us to derive reliability estimators to predict the behavior of the failure process. These results are valuable information for the mining company and can be used to support decision making regarding preventive maintenance policy. - Highlights: • Likelihood functions for imperfect repair models are derived. • A goodness-of-fit technique is proposed as a tool for model selection. • Failures in trucks owned by a Brazilian mining company are modeled. • Estimation allowed deriving reliability predictors to forecast the future failure process of the trucks

  20. Testing the predictive power of nuclear mass models

    International Nuclear Information System (INIS)

    Mendoza-Temis, J.; Morales, I.; Barea, J.; Frank, A.; Hirsch, J.G.; Vieyra, J.C. Lopez; Van Isacker, P.; Velazquez, V.

    2008-01-01

    A number of tests are introduced which probe the ability of nuclear mass models to extrapolate. Three models are analyzed in detail: the liquid drop model, the liquid drop model plus empirical shell corrections and the Duflo-Zuker mass formula. If predicted nuclei are close to the fitted ones, average errors in predicted and fitted masses are similar. However, the challenge of predicting nuclear masses in a region stabilized by shell effects (e.g., the lead region) is far more difficult. The Duflo-Zuker mass formula emerges as a powerful predictive tool

  1. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...

  2. Foundation Settlement Prediction Based on a Novel NGM Model

    Directory of Open Access Journals (Sweden)

    Peng-Yu Chen

    2014-01-01

    Full Text Available Prediction of foundation or subgrade settlement is very important during engineering construction. Since many settlement-time sequences exhibit a nonhomogeneous index trend, a novel grey forecasting model called the NGM (1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM (1,1,k,c) model has the property of white exponential law coincidence and can precisely predict a pure nonhomogeneous index sequence. We used two case studies to verify the predictive performance of the NGM (1,1,k,c) model for settlement prediction. The results show that this model can achieve excellent prediction accuracy; thus, the model is quite suitable for simulation and prediction of approximately nonhomogeneous index sequences and has excellent application value in settlement prediction.

  3. DDR: Efficient computational method to predict drug–target interactions using graph mining and machine learning approaches

    KAUST Repository

    Olayan, Rawan S.

    2017-11-23

    Motivation: Finding drug-target interactions (DTIs) computationally is a convenient strategy to identify new DTIs at low cost with reasonable accuracy. However, current DTI prediction methods suffer from a high false positive prediction rate. Results: We developed DDR, a novel method that improves the DTI prediction accuracy. DDR is based on the use of a heterogeneous graph that contains known DTIs with multiple similarities between drugs and multiple similarities between target proteins. DDR applies a non-linear similarity fusion method to combine different similarities. Before fusion, DDR performs a pre-processing step where a subset of similarities is selected in a heuristic process to obtain an optimized combination of similarities. Then, DDR applies a random forest model using different graph-based features extracted from the DTI heterogeneous graph. Using five repeats of 10-fold cross-validation, three testing setups, and the weighted average of area under the precision-recall curve (AUPR) scores, we show that DDR significantly reduces the AUPR score error relative to the next best state-of-the-art method for predicting DTIs by 34% when the drugs are new, by 23% when the targets are new, and by 34% when the drugs and the targets are known but not all DTIs between them are known. Using independent sources of evidence, we verify as correct 22 out of the top 25 DDR novel predictions. This suggests that DDR can be used as an efficient method to identify correct DTIs.
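    The final stage of the pipeline this record describes is a random forest over graph-based features of drug-target pairs. The sketch below trains such a classifier on random stand-in features and scores it with average precision (an AUPR proxy) under cross-validation; none of DDR's actual feature extraction or similarity fusion is reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical setup: each drug-target pair is described by graph-based
# features from a similarity-fused heterogeneous network. The features and
# labels below are random stand-ins, not DDR's data.
rng = np.random.default_rng(7)
n_pairs, n_feats = 600, 12
X = rng.normal(size=(n_pairs, n_feats))
w = rng.normal(size=n_feats)
y = (X @ w + rng.normal(scale=2.0, size=n_pairs) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=10, scoring="average_precision")
print("10-fold AUPR (approx.): %.3f +/- %.3f" % (scores.mean(), scores.std()))
```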

  4. Prediction Model of Photovoltaic Module Temperature for Power Performance of Floating PVs

    Directory of Open Access Journals (Sweden)

    Waithiru Charles Lawrence Kamuyu

    2018-02-01

    Full Text Available The rapid reduction in the price of photovoltaic (solar PV) cells and modules has resulted in a rapid increase in solar system deployments, with annual capacity expected to reach 200 GW by 2020. Achieving high PV cell and module efficiency is necessary for many solar manufacturers to break even. In addition, new innovative installation methods are emerging to complement the drive to lower the $/W PV system price. The floating PV (FPV) solar market has emerged as a way of exploiting the cool ambient environment near the water surface, following successful FPV module (FPVM) reliability studies that showed degradation rates below 0.5% p.a. with new encapsulation material. PV module temperature analysis is another critical area, governing the efficiency performance of solar cells and modules. In this paper, data collected at five-minute intervals from a PV system over a year are analyzed. We use MATLAB to derive equation coefficients for the predictive environmental variables, yielding the first FPVM module-temperature operation models. When comparing the theoretical predictions to real-field PV module operating temperatures, the corresponding model errors range between 2% and 4%, depending on the number of equation coefficients incorporated. This study is useful for validating the results of other studies showing that FPV systems produce 10% more energy than land-based systems.
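    The module-temperature model the record derives is, in essence, a regression of module temperature on ambient temperature, irradiance and wind speed. The sketch below fits such a linear model by least squares on synthetic five-minute records; the generating coefficients are invented, not the paper's fitted values.

```python
import numpy as np

# Synthetic five-minute records: ambient temperature (C), plane-of-array
# irradiance (W/m2) and wind speed (m/s). Generating coefficients are
# illustrative assumptions only.
rng = np.random.default_rng(3)
n = 2000
t_amb = rng.uniform(5, 35, n)
irr = rng.uniform(0, 1000, n)
wind = rng.uniform(0, 8, n)
t_mod = 2.0 + 0.95 * t_amb + 0.025 * irr - 1.2 * wind + rng.normal(0, 1.5, n)

# Least-squares fit of T_mod = c0 + c1*T_amb + c2*G + c3*v_wind.
X = np.column_stack([np.ones(n), t_amb, irr, wind])
coef, *_ = np.linalg.lstsq(X, t_mod, rcond=None)
pred = X @ coef

print("coefficients:", np.round(coef, 3))
print("RMSE (C):", round(float(np.sqrt(np.mean((pred - t_mod) ** 2))), 2))
```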

  5. Tailored high-resolution numerical weather forecasts for energy efficient predictive building control

    Science.gov (United States)

    Stauch, V. J.; Gwerder, M.; Gyalistras, D.; Oldewurtel, F.; Schubiger, F.; Steiner, P.

    2010-09-01

    The high proportion of the total primary energy consumption by buildings has increased the public interest in the optimisation of buildings' operation and is also driving the development of novel control approaches for the indoor climate. In this context, the use of weather forecasts presents an interesting and - thanks to advances in information and predictive control technologies and the continuous improvement of numerical weather prediction (NWP) models - an increasingly attractive option for improved building control. Within the research project OptiControl (www.opticontrol.ethz.ch) predictive control strategies for a wide range of buildings, heating, ventilation and air conditioning (HVAC) systems, and representative locations in Europe are being investigated with the aid of newly developed modelling and simulation tools. Grid point predictions for radiation, temperature and humidity of the high-resolution limited area NWP model COSMO-7 (see www.cosmo-model.org) and local measurements are used as disturbances and inputs into the building system. The control task considered consists in minimizing energy consumption whilst maintaining occupant comfort. In this presentation, we use the simulation-based OptiControl methodology to investigate the impact of COSMO-7 forecasts on the performance of predictive building control and the resulting energy savings. For this, we have selected building cases that were shown to benefit from a prediction horizon of up to 3 days and therefore, are particularly suitable for the use of numerical weather forecasts. We show that the controller performance is sensitive to the quality of the weather predictions, most importantly of the incident radiation on differently oriented façades. However, radiation is characterised by a high temporal and spatial variability in part caused by small scale and fast changing cloud formation and dissolution processes being only partially represented in the COSMO-7 grid point predictions. On the

  6. Predictive validation of an influenza spread model.

    Directory of Open Access Journals (Sweden)

    Ayaz Hyder

    Full Text Available BACKGROUND: Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. METHODS AND FINDINGS: We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998-1999. Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type. Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability and depended on the method of forecasting-static or dynamic. CONCLUSIONS: Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve

  7. Predictive Validation of an Influenza Spread Model

    Science.gov (United States)

    Hyder, Ayaz; Buckeridge, David L.; Leung, Brian

    2013-01-01

    Background Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. Methods and Findings We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998–1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and the epidemic peak week several weeks in advance with reasonable reliability, depending on the method of forecasting (static or dynamic). Conclusions Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive

  8. A nonlinear efficient layerwise finite element model for smart piezolaminated composites under strong applied electric field

    International Nuclear Information System (INIS)

    Kapuria, S; Yaqoob Yasin, M

    2013-01-01

    In this work, we present an electromechanically coupled efficient layerwise finite element model for the static response of piezoelectric laminated composite and sandwich plates, considering the nonlinear behavior of piezoelectric materials under strong electric field. The nonlinear model is developed consistently using a variational principle, considering a rotationally invariant second order nonlinear constitutive relationship, and full electromechanical coupling. In the piezoelectric layer, the electric potential is approximated to have a quadratic variation across the thickness, as observed from exact three dimensional solutions, and the equipotential condition of electroded piezoelectric surfaces is modeled using the novel concept of an electric node. The results predicted by the nonlinear model compare very well with the experimental data available in the literature. The effect of the piezoelectric nonlinearity on the static response and deflection/stress control is studied for piezoelectric bimorph as well as hybrid laminated plates with isotropic, angle-ply composite and sandwich substrates. For high electric fields, the difference between the nonlinear and linear predictions is large, and cannot be neglected. The error in the prediction of the smeared counterpart of the present theory with the same number of primary displacement unknowns is also examined. (paper)

  9. Integrating geophysics and hydrology for reducing the uncertainty of groundwater model predictions and improved prediction performance

    DEFF Research Database (Denmark)

    Christensen, Nikolaj Kruse; Christensen, Steen; Ferre, Ty

    … the integration of geophysical data in the construction of a groundwater model increases the prediction performance. We suggest that modelers should perform a hydrogeophysical “test-bench” analysis of the likely value of geophysics data for improving groundwater model prediction performance before actually … and the resulting predictions can be compared with predictions from the ‘true’ model. By performing this analysis we expect to give the modeler insight into how the uncertainty of model-based prediction can be reduced. … A major purpose of groundwater modeling is to help decision-makers in efforts to manage the natural environment. Increasingly, it is recognized that both the predictions of interest and their associated uncertainties should be quantified to support robust decision making. In particular, decision …

  10. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  11. NOx PREDICTION FOR FBC BOILERS USING EMPIRICAL MODELS

    Directory of Open Access Journals (Sweden)

    Jiří Štefanica

    2014-02-01

    Full Text Available Reliable prediction of NOx emissions can provide useful information for boiler design and fuel selection. Recently used kinetic prediction models for FBC boilers are overly complex and require large computing capacity. Even so, there are many uncertainties in the case of FBC boilers. An empirical modeling approach for NOx prediction has been used exclusively for PCC boilers. No reference is available for modifying this method for FBC conditions. This paper presents possible advantages of empirical modeling based prediction of NOx emissions for FBC boilers, together with a discussion of its limitations. Empirical models are reviewed, and are applied to operation data from FBC boilers used for combusting Czech lignite coal or coal-biomass mixtures. Modifications to the model are proposed in accordance with theoretical knowledge and prediction accuracy.

  12. Efficient Iris Localization via Optimization Model

    Directory of Open Access Journals (Sweden)

    Qi Wang

    2017-01-01

    Full Text Available Iris localization is one of the most important processes in iris recognition. Because of the various kinds of noise in iris images, the localization result may be wrong. Besides this, the localization process is time-consuming. To solve these problems, this paper develops an efficient iris localization algorithm via an optimization model. First, the localization problem is formulated as an optimization model. Then SIFT features are selected to represent the characteristic information of the iris outer boundary and the eyelids for localization, and the SDM (Supervised Descent Method) algorithm is employed to solve for the final points of the outer boundary and the eyelids. Finally, IRLS (Iterative Reweighted Least Squares) is used to obtain the parameters of the outer boundary and the upper and lower eyelids. Experimental results indicate that the proposed algorithm is efficient and effective.
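    IRLS, the final step in the record's pipeline, can be illustrated on a generic robust fit: reweight residuals on each pass so that outliers (for instance stray eyelash points near an eyelid) lose influence. This is a line fit under an L1-like loss, not the paper's eyelid-curve parameterization.

```python
import numpy as np

def irls_line_fit(x, y, iters=20, eps=1e-6):
    """Iteratively reweighted least squares for a robust line fit,
    down-weighting points with large residuals (approximates an L1 loss)."""
    A = np.column_stack([x, np.ones_like(x)])
    w = np.ones_like(y)
    for _ in range(iters):
        W = w[:, None] * A                         # diag(w) @ A
        theta = np.linalg.solve(A.T @ W, A.T @ (w * y))
        r = y - A @ theta
        w = 1.0 / np.maximum(np.abs(r), eps)       # reweight by residuals
    return theta                                   # slope, intercept

# Noisy boundary samples with gross outliers.
rng = np.random.default_rng(2)
x = np.linspace(0, 10, 40)
y = 0.8 * x + 3 + rng.normal(0, 0.1, 40)
y[::8] += 5.0                                      # inject outliers

print("robust fit:", np.round(irls_line_fit(x, y), 3))
print("plain LSQ :", np.round(np.linalg.lstsq(
    np.column_stack([x, np.ones_like(x)]), y, rcond=None)[0], 3))
```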

  13. PREDICTING THE EFFECTIVENESS OF WEB INFORMATION SYSTEMS USING NEURAL NETWORKS MODELING: FRAMEWORK & EMPIRICAL TESTING

    Directory of Open Access Journals (Sweden)

    Dr. Kamal Mohammed Alhendawi

    2018-02-01

    Full Text Available Information systems (IS) assessment studies still commonly rely on traditional tools such as questionnaires to evaluate dependent variables, especially system effectiveness. Artificial neural networks (ANNs) have recently been accepted as an effective alternative tool for modeling complicated systems and are widely used for forecasting, yet very little is known about employing ANNs to predict IS effectiveness. For this reason, this study is among the few to investigate the efficiency and capability of ANNs for forecasting user perceptions of IS effectiveness, with MATLAB utilized for building and training the neural network model. A dataset of 175 subjects collected from an international organization is used for ANN learning, where each subject consists of 6 features (5 quality factors as inputs and one Boolean output). 75% of the subjects are used in the training phase. The results provide evidence that ANN models achieve reasonable accuracy in forecasting IS effectiveness. For prediction, ANNs with the PURELIN (ANNP) and TANSIG (ANNTS) transfer functions are used. Both models give reasonable predictions; however, the accuracy of the ANNTS model is better than that of the ANNP model (88.6% and 70.4%, respectively). As the study proposes a new model for predicting IS dependent variables, it could save the considerably high cost of sample data collection in quantitative studies in science, management, education, the arts and other fields.

  14. Statistical modelling for ship propulsion efficiency

    DEFF Research Database (Denmark)

    Petersen, Jóan Petur; Jacobsen, Daniel J.; Winther, Ole

    2012-01-01

    This paper presents a state-of-the-art systems approach to statistical modelling of fuel efficiency in ship propulsion, and also a novel and publicly available data set of high quality sensory data. Two statistical model approaches are investigated and compared: artificial neural networks...

  15. Prediction of pipeline corrosion rate based on grey Markov models

    International Nuclear Information System (INIS)

    Chen Yonghong; Zhang Dafa; Peng Guichu; Wang Yuemin

    2009-01-01

    Based on a model combining the grey model and the Markov model, the prediction of the corrosion rate of nuclear power pipelines was studied. The grey model was improved, yielding an optimized unbiased grey model. This new model was used to predict the trend of the corrosion rate, and the Markov model was used to predict the residual errors. To improve the prediction precision, a rolling operation method was used in these prediction processes. The results indicate that the improvement to the grey model is effective, that the prediction precision of the new model combining the optimized unbiased grey model and the Markov model is better, and that the rolling operation method may improve the prediction precision further. (authors)
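    The grey-model half of the combination builds on the classic GM(1,1): accumulate the series, fit the whitenization equation by least squares, and predict through the exponential response. The sketch below uses invented corrosion-rate numbers and implements plain GM(1,1) only, omitting both the unbiased optimization and the Markov residual correction the record describes.

```python
import numpy as np

def gm11_fit(x0):
    """Fit GM(1,1): dx1/dt + a*x1 = b on the accumulated series x1."""
    x1 = np.cumsum(x0)
    z1 = 0.5 * (x1[1:] + x1[:-1])                  # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    (a, b), *_ = np.linalg.lstsq(B, x0[1:], rcond=None)
    return a, b

def gm11_predict(x0, a, b, steps):
    """Restore predictions of the original series from the AGO response."""
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate(([x1_hat[0]], np.diff(x1_hat)))

# Hypothetical corrosion-rate series (mm/yr); not plant data.
rates = np.array([0.30, 0.34, 0.39, 0.45, 0.52])
a, b = gm11_fit(rates)
x0_hat = gm11_predict(rates, a, b, steps=2)
print("fitted + 2-step forecast:", np.round(x0_hat, 3))
print("residuals (input to a Markov state model):",
      np.round(rates - x0_hat[:len(rates)], 4))
```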

  16. Sweat loss prediction using a multi-model approach.

    Science.gov (United States)

    Xu, Xiaojiang; Santee, William R

    2011-07-01

    A new multi-model approach (MMA) for sweat loss prediction is proposed to improve prediction accuracy. MMA was computed as the average of sweat loss predicted by two existing thermoregulation models: i.e., the rational model SCENARIO and the empirical model Heat Strain Decision Aid (HSDA). Three independent physiological datasets, a total of 44 trials, were used to compare predictions by MMA, SCENARIO, and HSDA. The observed sweat losses were collected under different combinations of uniform ensembles, environmental conditions (15-40°C, RH 25-75%), and exercise intensities (250-600 W). Root mean square deviation (RMSD), residual plots, and paired t tests were used to compare predictions with observations. Overall, MMA reduced RMSD by 30-39% in comparison with either SCENARIO or HSDA, and increased the prediction accuracy to 66% from 34% or 55%. Of the MMA predictions, 70% fell within the range of mean observed value ± SD, while only 43% of SCENARIO and 50% of HSDA predictions fell within the same range. Paired t tests showed that differences between observations and MMA predictions were not significant, but differences between observations and SCENARIO or HSDA predictions were significantly different for two datasets. Thus, MMA predicted sweat loss more accurately than either of the two single models for the three datasets used. Future work will be to evaluate MMA using additional physiological data to expand the scope of populations and conditions.
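    The MMA itself is simply the mean of the two single-model predictions, with RMSD as the comparison metric. The toy numbers below (not the study's data) show how averaging two oppositely biased models can cut the RMSD.

```python
import numpy as np

def mma(pred_a, pred_b):
    """Multi-model approach: average the two single-model predictions."""
    return 0.5 * (np.asarray(pred_a) + np.asarray(pred_b))

def rmsd(pred, obs):
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2)))

# Illustrative sweat-loss numbers (g); invented for the sketch.
obs      = np.array([420, 510, 610, 700])
scenario = np.array([380, 470, 660, 760])   # stand-in rational model output
hsda     = np.array([470, 560, 580, 650])   # stand-in empirical model output

for name, p in [("SCENARIO", scenario), ("HSDA", hsda),
                ("MMA", mma(scenario, hsda))]:
    print(name, round(rmsd(p, obs), 1))
```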

  17. A computational model of pattern separation efficiency in the dentate gyrus with implications in schizophrenia

    Science.gov (United States)

    Faghihi, Faramarz; Moustafa, Ahmed A.

    2015-01-01

    Information processing in the hippocampus begins by transferring spiking activity of the entorhinal cortex (EC) into the dentate gyrus (DG). The activity pattern in the EC is separated by the DG such that it plays an important role in hippocampal functions, including memory. The structural and physiological parameters of these neural networks enable the hippocampus to efficiently encode the large number of inputs that animals receive and process in their lifetime. The neural encoding capacity of the DG depends on the encoding efficiency of its single neurons and on its pattern separation efficiency. In this study, encoding by the DG is modeled such that single-neuron and pattern separation efficiency are measured using simulations over different parameter values. For this purpose, a probabilistic model of single-neuron efficiency is presented to study the role of structural and physiological parameters. The known numbers of neurons in the EC and the DG are used to construct a neural network based on the electrophysiological features of DG granule cells. Separated inputs, as activated neurons in the EC with different firing probabilities, are presented to the DG. For different connectivity rates between the EC and DG, the pattern separation efficiency of the DG is measured. The results show that in the absence of feedback inhibition on the DG neurons, the DG demonstrates low separation efficiency and high firing frequency. Feedback inhibition can increase separation efficiency while resulting in very low single-neuron encoding efficiency in the DG and very low firing frequency of DG neurons (sparse spiking). This work presents a mechanistic explanation for experimental observations in the hippocampus, in combination with theoretical measures. Moreover, the model predicts a critical role for impaired inhibitory neurons in schizophrenia, where a deficiency in pattern separation of the DG has been observed. PMID:25859189

  18. A Computational Model of Pattern Separation Efficiency in the Dentate Gyrus with Implications in Schizophrenia

    Directory of Open Access Journals (Sweden)

    Faramarz eFaghihi

    2015-03-01

    Information processing in the hippocampus begins with the transfer of spiking activity from the entorhinal cortex (EC) into the dentate gyrus (DG). Activity patterns arriving from the EC are separated by the DG, which therefore plays an important role in hippocampal functions including memory. The structural and physiological parameters of these neural networks enable the hippocampus to efficiently encode the large number of inputs that animals receive and process over their lifetimes. The neural encoding capacity of the DG depends on the encoding efficiency of its single neurons and on its pattern separation efficiency. In this study, encoding by the DG is modelled such that single-neuron and pattern separation efficiency are measured in simulations across different parameter values. For this purpose, a probabilistic model of single-neuron efficiency is presented to study the role of structural and physiological parameters. The known numbers of neurons in the EC and the DG are used to construct a neural network based on the electrophysiological features of granule cells of the DG. Separated inputs, represented as activated EC neurons with different firing probabilities, are presented to the DG. Pattern separation efficiency of the DG is measured for different connectivity rates between the EC and DG. The results show that in the absence of feedback inhibition on the DG neurons, the DG demonstrates low separation efficiency and high firing frequency. Feedback inhibition can increase separation efficiency while resulting in very low single-neuron encoding efficiency and very low firing frequency in the DG (sparse spiking). This work presents a mechanistic explanation for experimental observations in the hippocampus, in combination with theoretical measures. Moreover, the model predicts a critical role for impaired inhibitory neurons in schizophrenia, where deficient pattern separation in the DG has been observed.

  19. A computationally efficient method for full-core conjugate heat transfer modeling of sodium fast reactors

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Rui, E-mail: rhu@anl.gov; Yu, Yiqi

    2016-11-15

    Highlights: • Developed a computationally efficient method for full-core conjugate heat transfer modeling of sodium fast reactors. • Applied a fully-coupled JFNK solution scheme to avoid operator-splitting errors. • The accuracy and efficiency of the method are confirmed with a 7-assembly test problem. • The effects of different spatial discretization schemes are investigated and compared to RANS-based CFD simulations. - Abstract: For efficient and accurate temperature predictions of sodium fast reactor structures, a 3-D full-core conjugate heat transfer modeling capability is developed for an advanced system analysis tool, SAM. The hexagon lattice core is modeled with 1-D parallel channels representing the subassembly flow, and 2-D duct walls and inter-assembly gaps. The six sides of the hexagon duct wall and the near-wall coolant region are modeled separately to account for different temperatures and heat transfer between the coolant flow and each side of the duct wall. The Jacobian-free Newton-Krylov (JFNK) solution method is applied to solve the fluid and solid fields simultaneously in a fully coupled fashion. The 3-D full-core conjugate heat transfer modeling capability in SAM has been demonstrated on a verification test problem with 7 fuel assemblies in a hexagon lattice layout. Additionally, the SAM simulation results are compared with RANS-based CFD simulations. Very good agreement has been achieved between the results of the two approaches.
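
    The fully coupled JFNK idea can be tried out with SciPy, whose newton_krylov solver approximates Jacobian-vector products by finite differences instead of assembling the Jacobian. The toy problem below (1-D steady conduction with a nonlinear sink) is only a sketch of the solution scheme, not the SAM formulation:

        import numpy as np
        from scipy.optimize import newton_krylov

        n = 50
        h = 1.0 / (n - 1)             # 1-D rod discretization
        beta, q = 5.0e-4, 1.0         # nonlinear sink strength and heat source

        def residual(u):
            """Steady conduction with a nonlinear sink: u'' - beta*u**4 + q = 0."""
            r = np.zeros_like(u)
            r[1:-1] = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2 - beta * u[1:-1]**4 + q
            r[0] = u[0] - 1.0         # fixed-temperature boundary conditions
            r[-1] = u[-1] - 1.0
            return r

        # JFNK: Newton outer iterations, Krylov (here LGMRES) inner solves; the
        # Jacobian is never formed, J@v is approximated by finite differences.
        u = newton_krylov(residual, np.ones(n), method="lgmres", f_tol=1e-8)
        print("peak temperature:", round(u.max(), 4))

    Because fluid and solid unknowns live in one residual vector, the same pattern extends naturally to a fully coupled conjugate heat transfer problem.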

  20. Data-Reconciliation Based Fault-Tolerant Model Predictive Control for a Biomass Boiler

    Directory of Open Access Journals (Sweden)

    Palash Sarkar

    2017-02-01

    This paper presents a novel, effective method to handle critical sensor faults affecting a control system devised to operate a biomass boiler. In particular, the proposed method integrates a data reconciliation algorithm into a model predictive control loop, so as to neutralize the effects of faults occurring in the flue gas oxygen concentration sensor by feeding the controller with reconciled measurements. Indeed, the oxygen content in flue gas is a key variable in the control of biomass boilers due to its close connection with both combustion efficiency and polluting emissions. The main benefit of including the data reconciliation algorithm in the loop as a fault-tolerant component, with respect to applying standard fault-tolerant methods, is that controller reconfiguration is no longer required, since the original controller operates on the restored, reliable data. The integrated data reconciliation-model predictive control (MPC) strategy has been validated by running simulations on a specific type of biomass boiler, the KPA Unicon BioGrate boiler.
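
    For linear balance constraints, the data-reconciliation step has a classical closed form: given raw measurements y with covariance V and constraints A x = 0, the reconciled estimate is x_hat = y - V Aᵀ (A V Aᵀ)⁻¹ A y. The sketch below uses a toy oxygen balance, not the KPA Unicon boiler model:

        import numpy as np

        # Toy balance for x = [O2 in air feed, O2 consumed, O2 in flue gas]:
        # the three quantities must satisfy A @ x = 0.
        A = np.array([[1.0, -1.0, -1.0]])

        y = np.array([21.0, 16.2, 5.6])       # raw measurements (flue O2 sensor drifting)
        V = np.diag([0.1, 0.4, 0.9]) ** 2     # variances: flue O2 is trusted least

        # Weighted least-squares projection onto the constraint set.
        x_hat = y - V @ A.T @ np.linalg.solve(A @ V @ A.T, A @ y)

        print("raw measurements :", y, " imbalance:", A @ y)
        print("reconciled values:", x_hat, " imbalance:", A @ x_hat)

    Feeding x_hat rather than y to the MPC is what lets the original controller keep running through a sensor fault without reconfiguration.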

  1. Evaluating Energy Efficiency Policies with Energy-Economy Models

    Energy Technology Data Exchange (ETDEWEB)

    Mundaca, Luis; Neij, Lena; Worrell, Ernst; McNeil, Michael A.

    2010-08-01

    The growing complexities of energy systems, environmental problems and technology markets are driving and testing most energy-economy models to their limits. To further advance bottom-up models from a multidisciplinary energy efficiency policy evaluation perspective, we review and critically analyse bottom-up energy-economy models and corresponding evaluation studies on energy efficiency policies to induce technological change. We use the household sector as a case study. Our analysis focuses on decision frameworks for technology choice, the type of evaluation being carried out, the treatment of market and behavioural failures, the evaluated policy instruments, and the key determinants used to mimic policy instruments. Although the review confirms criticism related to energy-economy models (e.g. unrealistic representation of decision-making by consumers when choosing technologies), such models still provide valuable guidance for policy evaluation related to energy efficiency. Several areas for further advancing the models remain open, particularly relating to modelling issues, techno-economic and environmental aspects, behavioural determinants, and policy considerations.

  2. Finding Furfural Hydrogenation Catalysts via Predictive Modelling

    Science.gov (United States)

    Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi

    2010-01-01

    Abstract We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes were synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (kH:kD=1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R2=0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model’s predictions, demonstrating the validity and value of predictive modelling in catalyst optimization. PMID:23193388
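
    The modelling workflow (fit on 13 complexes, validate on the remaining 5, then score new candidates before synthesis) is straightforward to mirror with scikit-learn. The descriptors and response below are random placeholders, since the paper's 2D/3D descriptor set is not reproduced here:

        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.metrics import r2_score

        rng = np.random.default_rng(1)
        X = rng.normal(size=(18, 6))                        # 18 catalysts x 6 mock descriptors
        w = np.array([8.0, -5.0, 3.0, 0.0, 0.0, 1.0])
        y = 80.0 + X @ w + rng.normal(scale=2.0, size=18)   # mock conversion (%)

        train, test = np.arange(13), np.arange(13, 18)      # 13 to fit, 5 held out
        model = Ridge(alpha=1.0).fit(X[train], y[train])
        print("held-out R^2:", round(r2_score(y[test], model.predict(X[test])), 3))

        # New candidate complexes are scored before anyone enters the lab:
        candidates = rng.normal(size=(4, 6))
        print("predicted conversion (%):", model.predict(candidates).round(1))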

  3. Alcator C-Mod predictive modeling

    International Nuclear Information System (INIS)

    Pankin, Alexei; Bateman, Glenn; Kritz, Arnold; Greenwald, Martin; Snipes, Joseph; Fredian, Thomas

    2001-01-01

    Predictive simulations for the Alcator C-mod tokamak [I. Hutchinson et al., Phys. Plasmas 1, 1511 (1994)] are carried out using the BALDUR integrated modeling code [C. E. Singer et al., Comput. Phys. Commun. 49, 275 (1988)]. The results are obtained for temperature and density profiles using the Multi-Mode transport model [G. Bateman et al., Phys. Plasmas 5, 1793 (1998)] as well as the mixed-Bohm/gyro-Bohm transport model [M. Erba et al., Plasma Phys. Controlled Fusion 39, 261 (1997)]. The simulated discharges are characterized by very high plasma density in both low and high modes of confinement. The predicted profiles for each of the transport models match the experimental data about equally well in spite of the fact that the two models have different dimensionless scalings. Average relative rms deviations are less than 8% for the electron density profiles and 16% for the electron and ion temperature profiles

  4. Rigid-beam model of a high-efficiency magnicon

    International Nuclear Information System (INIS)

    Rees, D.E.; Tallerico, P.J.; Humphries, S.J. Jr.

    1993-01-01

    The magnicon is a new type of high-efficiency deflection-modulated amplifier developed at the Institute of Nuclear Physics in Novosibirsk, Russia. The prototype pulsed magnicon achieved an output power of 2.4 MW and an efficiency of 73% at 915 MHz. This paper presents the results of a rigid-beam model for a 700-MHz, 2.5-MW 82%-efficient magnicon. The rigid-beam model allows for characterization of the beam dynamics by tracking only a single electron. The magnicon design presented consists of a drive cavity; passive cavities; a pi-mode, coupled-deflection cavity; and an output cavity. It represents an optimized design. The model is fully self-consistent, and this paper presents the details of the model and calculated performance of a 2.5-MW magnicon

  5. Clinical Predictive Modeling Development and Deployment through FHIR Web Services.

    Science.gov (United States)

    Khalilia, Mohammed; Choi, Myung; Henderson, Amelia; Iyengar, Sneha; Braunstein, Mark; Sun, Jimeng

    2015-01-01

    Clinical predictive modeling involves two challenging tasks: model development and model deployment. In this paper we demonstrate a software architecture for developing and deploying clinical predictive models using web services via the Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standard. The services enable model development using electronic health records (EHRs) stored in OMOP CDM databases and model deployment for scoring individual patients through FHIR resources. The MIMIC2 ICU dataset and a synthetic outpatient dataset were transformed into OMOP CDM databases for predictive model development. The resulting predictive models are deployed as FHIR resources, which receive requests of patient information, perform prediction against the deployed predictive model and respond with prediction scores. To assess the practicality of this approach we evaluated the response and prediction time of the FHIR modeling web services. We found the system to be reasonably fast with one second total response time per patient prediction.
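
    The scoring interaction can be pictured as a plain HTTP exchange. The endpoint URL, operation name and parameter names below are hypothetical placeholders (the paper's service interface is not reproduced); only the FHIR Parameters wrapper is standard:

        import requests

        # Hypothetical scoring endpoint exposing a deployed model as a FHIR operation.
        url = "https://example.org/fhir/RiskModel/$score"

        payload = {
            "resourceType": "Parameters",
            "parameter": [
                {"name": "patientId", "valueString": "12345"},
                {"name": "age", "valueInteger": 67},
                {"name": "heartRate", "valueDecimal": 96.0},
            ],
        }

        resp = requests.post(url, json=payload,
                             headers={"Content-Type": "application/fhir+json"},
                             timeout=5)
        resp.raise_for_status()
        print(resp.json())   # expected: a Parameters resource carrying the risk score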

  6. Personalization of models with many model parameters: an efficient sensitivity analysis approach.

    Science.gov (United States)

    Donders, W P; Huberts, W; van de Vosse, F N; Delhaas, T

    2015-10-01

    Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of individual input parameters or their interactions, are considered the gold standard. The variance portions are called the Sobol sensitivity indices and can be estimated by a Monte Carlo (MC) approach (e.g., Saltelli's method [1]) or by employing a metamodel (e.g., the (generalized) polynomial chaos expansion (gPCE) [2, 3]). All these methods require a large number of model evaluations when estimating the Sobol sensitivity indices for models with many parameters [4]. To reduce the computational cost, we introduce a two-step approach. In the first step, a subset of important parameters is identified for each output of interest using the screening method of Morris [5]. In the second step, a quantitative variance-based sensitivity analysis is performed using gPCE. Efficient sampling strategies are introduced to minimize the number of model runs required to obtain the sensitivity indices for models considering multiple outputs. The approach is tested using a model that was developed for predicting post-operative flows after creation of a vascular access for renal failure patients. We compare the sensitivity indices obtained with the novel two-step approach with those obtained from a reference analysis that applies Saltelli's MC method. The two-step approach was found to yield accurate estimates of the sensitivity indices at two orders of magnitude lower computational cost. Copyright © 2015 John Wiley & Sons, Ltd.
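
    The two-step pattern (cheap screening, then variance-based indices on the survivors) can be sketched with the SALib package. The toy model and thresholds are illustrative, and Saltelli/Sobol estimation stands in for the gPCE metamodel used in the paper:

        import numpy as np
        from SALib.sample.morris import sample as morris_sample
        from SALib.analyze.morris import analyze as morris_analyze
        from SALib.sample.saltelli import sample as saltelli_sample
        from SALib.analyze.sobol import analyze as sobol_analyze

        problem = {"num_vars": 4, "names": ["k1", "k2", "k3", "k4"],
                   "bounds": [[0.0, 1.0]] * 4}

        def model(x):                       # toy model: k4 is nearly inert
            return x[:, 0] + 2.0 * x[:, 1] * x[:, 2] + 0.01 * x[:, 3]

        # Step 1: Morris screening discards unimportant parameters cheaply.
        X = morris_sample(problem, N=200, num_levels=4)
        Si = morris_analyze(problem, X, model(X), num_levels=4)
        keep = [n for n, mu in zip(problem["names"], Si["mu_star"]) if mu > 0.05]
        print("retained after screening:", keep)

        # Step 2: variance-based indices on the reduced set (the paper uses a
        # gPCE metamodel here; Saltelli's estimator is used below for brevity).
        reduced = {"num_vars": len(keep), "names": keep,
                   "bounds": [[0.0, 1.0]] * len(keep)}
        X2 = saltelli_sample(reduced, 1024)
        full = np.column_stack([X2, np.full(len(X2), 0.5)])  # screened-out k4 held fixed
        S2 = sobol_analyze(reduced, model(full))
        print("first-order Sobol indices:", dict(zip(keep, S2["S1"].round(2))))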

  7. Modeling and energy efficiency optimization of belt conveyors

    International Nuclear Information System (INIS)

    Zhang, Shirong; Xia, Xiaohua

    2011-01-01

    Highlights: → We take an optimization approach to improve the operation efficiency of belt conveyors. → An analytical energy model, originating from ISO 5048, is proposed. → Off-line and on-line parameter estimation schemes are then investigated. → In a case study, six optimization problems are formulated, with solutions in simulation. - Abstract: The improvement of the energy efficiency of belt conveyor systems can be achieved at the equipment and operation levels. Specifically, variable speed control, an equipment-level intervention, is recommended to improve the operation efficiency of belt conveyors. However, current implementations mostly focus on lower-level control loops without operational considerations at the system level. This paper takes a model-based optimization approach to improve the efficiency of belt conveyors at the operational level. An analytical energy model, originating from ISO 5048, is first proposed, which lumps all the parameters into four coefficients. Subsequently, both an off-line and an on-line parameter estimation scheme are applied to identify the new energy model. Simulation results are presented for the estimates of the four coefficients. Finally, optimization is performed to achieve the best operation efficiency of belt conveyors under various constraints. Six optimization problems for a typical belt conveyor system are formulated, with solutions in simulation for a case study.
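
    Once the model structure is fixed, the off-line identification step is a single linear least-squares fit. The four regressors below are one plausible lumping for illustration only; the exact ISO 5048-derived terms in the paper may differ:

        import numpy as np

        rng = np.random.default_rng(2)
        V = rng.uniform(1.5, 4.0, 200)        # belt speed (m/s), operating records
        T = rng.uniform(300.0, 1500.0, 200)   # feed rate (t/h)

        # Assumed model structure: P = a1*V + a2*T + a3*T^2/V + a4*V^3.
        truth = np.array([1.8e3, 4.0, 9.0e-3, 30.0])
        Phi = np.column_stack([V, T, T**2 / V, V**3])
        P = Phi @ truth + rng.normal(scale=200.0, size=V.size)   # measured power (W)

        # Off-line estimation of the four lumped coefficients.
        theta, *_ = np.linalg.lstsq(Phi, P, rcond=None)
        print("estimated coefficients:", theta.round(3))

    An on-line variant would update theta recursively (e.g., recursive least squares) as new operating data arrive.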

  8. Heat Transfer Characteristics and Prediction Model of Supercritical Carbon Dioxide (SC-CO2) in a Vertical Tube

    Directory of Open Access Journals (Sweden)

    Can Cai

    2017-11-01

    Due to its distinct capability to improve the efficiency of shale gas production, supercritical carbon dioxide (SC-CO2) fracturing has attracted increased attention in recent years. Heat transfer occurs in the transportation and fracture processes. To better predict and understand the heat transfer of SC-CO2 near the critical region, numerical simulations focusing on a vertical flow pipe were performed. Various turbulence models and turbulent Prandtl numbers (Prt) were evaluated for their ability to capture the heat transfer deterioration (HTD). The simulations show that the turbulent Prandtl number (TWL) model combined with the Shear Stress Transport (SST) k-ω turbulence model accurately predicts the HTD in the critical region. It was found that Prt has a strong effect on the heat transfer prediction. The HTD occurred under larger heat flux density conditions, and an acceleration process was observed. Gravity also affects the HTD through buoyancy, and HTD did not occur under zero-gravity conditions.

  9. Multi-omics facilitated variable selection in Cox-regression model for cancer prognosis prediction.

    Science.gov (United States)

    Liu, Cong; Wang, Xujun; Genchev, Georgi Z; Lu, Hui

    2017-07-15

    New developments in high-throughput genomic technologies have enabled the measurement of diverse types of omics biomarkers in a cost-efficient and clinically-feasible manner. Developing computational methods and tools for the analysis and translation of such genomic data into clinically-relevant information is an ongoing and active area of investigation. For example, several studies have utilized an unsupervised learning framework to cluster patients by integrating omics data. Despite such recent advances, predicting cancer prognosis using integrated omics biomarkers remains a challenge. There is also a shortage of computational tools for predicting cancer prognosis using supervised learning methods. The current standard approach is to fit a Cox regression model by concatenating the different types of omics data in a linear manner, while a penalty can be added for feature selection. A more powerful approach, however, is to incorporate the data by considering the relationships among omics datatypes. Here we developed two methods, SKI-Cox and wLASSO-Cox, to incorporate the associations among different types of omics data. Both methods fit the Cox proportional hazards model and predict a risk score based on mRNA expression profiles. SKI-Cox borrows the information generated by the additional types of omics data to guide variable selection, while wLASSO-Cox incorporates this information as a penalty factor during model fitting. We show that SKI-Cox and wLASSO-Cox select more true variables than a LASSO-Cox model in simulation studies. We assess the performance of SKI-Cox and wLASSO-Cox using TCGA glioblastoma multiforme and lung adenocarcinoma data. In each case, mRNA expression, methylation, and copy number variation data are integrated to predict the overall survival time of cancer patients. Our methods achieve better performance in predicting patients' survival in glioblastoma and lung adenocarcinoma. Copyright © 2017. Published by Elsevier
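
    The wLASSO-Cox idea of down-weighting the penalty on features supported by other omics layers can be sketched with the lifelines package, which (in recent versions, an assumption here) accepts a per-coefficient penalizer array. The data and penalty factors are synthetic:

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(3)
        n, p = 300, 5
        X = rng.normal(size=(n, p))
        hazard = np.exp(0.8 * X[:, 0] - 0.6 * X[:, 1])   # g0, g1 truly prognostic
        df = pd.DataFrame(X, columns=[f"g{i}" for i in range(p)])
        df["T"] = rng.exponential(1.0 / hazard)          # survival times
        df["E"] = (rng.random(n) < 0.8).astype(int)      # ~20% censoring

        # Per-feature penalty factors standing in for cross-omics support:
        # well-supported genes (g0, g1) are penalized less, so they survive selection.
        penalty = np.array([0.01, 0.01, 0.5, 0.5, 0.5])

        # Array-valued penalizer assumed supported by the installed lifelines version.
        cph = CoxPHFitter(penalizer=penalty, l1_ratio=1.0)   # LASSO-type penalty
        cph.fit(df, duration_col="T", event_col="E")
        print(cph.params_.round(3))     # weakly supported coefficients shrink toward 0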

  10. Towards social autonomous vehicles: Efficient collision avoidance scheme using Richardson's arms race model.

    Science.gov (United States)

    Riaz, Faisal; Niazi, Muaz A

    2017-01-01

    This paper presents the concept of a social autonomous agent to conceptualize Autonomous Vehicles (AVs) that interact with other AVs using social manners similar to human behavior. The presented AVs also have the capability of predicting intentions (mentalizing) and copying the actions of each other (mirroring). The Exploratory Agent Based Modeling (EABM) level of the Cognitive Agent Based Computing (CABC) framework has been utilized to design the proposed social agent. Furthermore, to emulate the functionality of the mentalizing and mirroring modules of the proposed social agent, a tailored mathematical version of Richardson's arms race model has also been presented. The performance of the proposed social agent has been validated at two levels: first it has been simulated using NetLogo, a standard agent-based modeling tool, and then, at a practical level, using a prototype AV. The simulation results confirm that the proposed social-agent-based collision avoidance strategy is 78.52% more efficient than a random-walk-based collision avoidance strategy in congested flock-like topologies, while the practical results confirm that the proposed scheme can avoid rear-end and lateral collisions with an efficiency of 99.876%, as compared with an IEEE 802.11n-based state-of-the-art mirroring-neuron-based collision avoidance scheme.
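
    Richardson's arms race model itself is a pair of coupled linear ODEs, dx/dt = a*y - m*x + g and dy/dt = b*x - n*y + h; the paper tailors it to the mutual mentalizing/mirroring of two vehicles. A plain integration with illustrative coefficients:

        import numpy as np
        from scipy.integrate import solve_ivp

        # Reaction (a, b), fatigue (m, n) and standing grievance (g, h) terms.
        a, b, m, n, g, h = 0.9, 0.8, 1.2, 1.1, 0.1, 0.1

        def richardson(t, z):
            x, y = z                  # e.g. threat levels of two interacting AVs
            return [a * y - m * x + g, b * x - n * y + h]

        sol = solve_ivp(richardson, (0.0, 20.0), [1.0, 0.0])
        print(f"steady state: x={sol.y[0, -1]:.3f}, y={sol.y[1, -1]:.3f}")
        # Converges because m*n > a*b (the mutual-restraint condition).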

  11. Towards social autonomous vehicles: Efficient collision avoidance scheme using Richardson's arms race model.

    Directory of Open Access Journals (Sweden)

    Faisal Riaz

    This paper presents the concept of a social autonomous agent to conceptualize Autonomous Vehicles (AVs) that interact with other AVs using social manners similar to human behavior. The presented AVs also have the capability of predicting intentions (mentalizing) and copying the actions of each other (mirroring). The Exploratory Agent Based Modeling (EABM) level of the Cognitive Agent Based Computing (CABC) framework has been utilized to design the proposed social agent. Furthermore, to emulate the functionality of the mentalizing and mirroring modules of the proposed social agent, a tailored mathematical version of Richardson's arms race model has also been presented. The performance of the proposed social agent has been validated at two levels: first it has been simulated using NetLogo, a standard agent-based modeling tool, and then, at a practical level, using a prototype AV. The simulation results confirm that the proposed social-agent-based collision avoidance strategy is 78.52% more efficient than a random-walk-based collision avoidance strategy in congested flock-like topologies, while the practical results confirm that the proposed scheme can avoid rear-end and lateral collisions with an efficiency of 99.876%, as compared with an IEEE 802.11n-based state-of-the-art mirroring-neuron-based collision avoidance scheme.

  12. Predictive Modelling of Heavy Metals in Urban Lakes

    OpenAIRE

    Lindström, Martin

    2000-01-01

    Heavy metals are well-known environmental pollutants. In this thesis predictive models for heavy metals in urban lakes are discussed and new models presented. The base of predictive modelling is empirical data from field investigations of many ecosystems covering a wide range of ecosystem characteristics. Predictive models focus on the variabilities among lakes and processes controlling the major metal fluxes. Sediment and water data for this study were collected from ten small lakes in the ...

  13. Efficient estimation of semiparametric copula models for bivariate survival data

    KAUST Repository

    Cheng, Guang

    2014-01-01

    A semiparametric copula model for bivariate survival data is characterized by a parametric copula model of dependence and nonparametric models of two marginal survival functions. Efficient estimation for the semiparametric copula model has been recently studied for the complete data case. When the survival data are censored, semiparametric efficient estimation has only been considered for some specific copula models such as the Gaussian copulas. In this paper, we obtain the semiparametric efficiency bound and efficient estimation for general semiparametric copula models for possibly censored data. We construct an approximate maximum likelihood estimator by approximating the log baseline hazard functions with spline functions. We show that our estimates of the copula dependence parameter and the survival functions are asymptotically normal and efficient. Simple consistent covariance estimators are also provided. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2013 Elsevier Inc.

  14. Stage-specific predictive models for breast cancer survivability.

    Science.gov (United States)

    Kate, Rohit J; Nadig, Ramya

    2017-01-01

    Survivability rates vary widely among the various stages of breast cancer. Although machine learning models built in the past to predict breast cancer survivability were given stage as one of the features, they were not trained or evaluated separately for each stage. To investigate whether there are differences in the performance of machine learning models trained and evaluated across different stages for predicting breast cancer survivability, we used three different machine learning methods to build models that predict breast cancer survivability separately for each stage, and compared them with the traditional joint models built for all the stages. We also evaluated the models separately for each stage and together for all the stages. Our results show that the most suitable model to predict survivability for a specific stage is the model trained for that particular stage. In our experiments, using additional examples of other stages during training did not help; in fact, it made performance worse in some cases. The most important features for predicting survivability were also found to differ between stages. By evaluating the models separately on different stages we found that performance varied widely across them. We also demonstrate that evaluating predictive models for survivability on all the stages together, as was done in the past, is misleading because it overestimates performance. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  15. An efficient iterative model reduction method for aeroviscoelastic panel flutter analysis in the supersonic regime

    Science.gov (United States)

    Cunha-Filho, A. G.; Briend, Y. P. J.; de Lima, A. M. G.; Donadon, M. V.

    2018-05-01

    The flutter boundary prediction of complex aeroelastic systems is not an easy task. In some cases, these analyses may become prohibitive due to the high computational cost and time associated with the large number of degrees of freedom of the aeroelastic models, particularly when the aeroelastic model incorporates a control strategy with the aim of suppressing the flutter phenomenon, such as the use of viscoelastic treatments. In this situation, the use of a model reduction method is essential. However, the construction of a modal reduction basis for aeroviscoelastic systems is still a challenge, owing to the inherent frequency- and temperature-dependent behavior of the viscoelastic materials. Thus, the main contribution intended for the present study is to propose an efficient and accurate iterative enriched Ritz basis to deal with aeroviscoelastic systems. The main features and capabilities of the proposed model reduction method are illustrated in the prediction of flutter boundary for a thin three-layer sandwich flat panel and a typical aeronautical stiffened panel, both under supersonic flow.

  16. Energy efficiency resource modeling in generation expansion planning

    International Nuclear Information System (INIS)

    Ghaderi, A.; Parsa Moghaddam, M.; Sheikh-El-Eslami, M.K.

    2014-01-01

    Energy efficiency plays an important role in mitigating energy security risks and emission problems. In this paper, energy efficiency resources are modeled as efficiency power plants (EPP) to evaluate their impacts on generation expansion planning (GEP). The supply curve of an EPP is proposed using the production function of electricity consumption. A decision-making framework is also presented to include EPP in the GEP problem from an investor's point of view. The revenue of the EPP investor is obtained from the energy cost reduction of consumers; the EPP does not earn any income from the electricity market. In each stage of GEP, a bi-level model for the operation problem is suggested: the upper level represents profit maximization of the EPP investor and the lower level corresponds to maximizing social welfare. To solve the bi-level problem, a fixed-point iteration algorithm known as the diagonalization method is used. An energy efficiency feed-in tariff is investigated as a regulatory support scheme to encourage the investor. Results pertaining to a case study are simulated and discussed. - Highlights: • An economic model for energy efficiency programs is presented. • A framework is provided to model energy efficiency resources in the GEP problem. • FIT is investigated as a regulatory support scheme to encourage the EPP investor. • Capacity expansion is delayed and reduced when EPP is considered in GEP. • FIT-II can increase energy saving more effectively than FIT-I

  17. Predictability of Exchange Rates in Sri Lanka: A Test of the Efficient Market Hypothesis

    OpenAIRE

    Guneratne B Wickremasinghe

    2007-01-01

    This study examined the validity of the weak and semi-strong forms of the efficient market hypothesis (EMH) for the foreign exchange market of Sri Lanka. Monthly exchange rates for four currencies during the floating exchange rate regime were used in the empirical tests. Using a battery of tests, empirical results indicate that the current values of the four exchange rates can be predicted from their past values. Further, the tests of semi-strong form efficiency indicate that exchange rate pa...

  18. Predicting Statistical Response and Extreme Events in Uncertainty Quantification through Reduced-Order Models

    Science.gov (United States)

    Qi, D.; Majda, A.

    2017-12-01

    A low-dimensional reduced-order statistical closure model is developed for quantifying the uncertainty in statistical sensitivity and intermittency in principal model directions with largest variability in high-dimensional turbulent system and turbulent transport models. Imperfect model sensitivity is improved through a recent mathematical strategy for calibrating model errors in a training phase, where information theory and linear statistical response theory are combined in a systematic fashion to achieve the optimal model performance. The idea in the reduced-order method is from a self-consistent mathematical framework for general systems with quadratic nonlinearity, where crucial high-order statistics are approximated by a systematic model calibration procedure. Model efficiency is improved through additional damping and noise corrections to replace the expensive energy-conserving nonlinear interactions. Model errors due to the imperfect nonlinear approximation are corrected by tuning the model parameters using linear response theory with an information metric in a training phase before prediction. A statistical energy principle is adopted to introduce a global scaling factor in characterizing the higher-order moments in a consistent way to improve model sensitivity. Stringent models of barotropic and baroclinic turbulence are used to display the feasibility of the reduced-order methods. Principal statistical responses in mean and variance can be captured by the reduced-order models with accuracy and efficiency. Besides, the reduced-order models are also used to capture crucial passive tracer field that is advected by the baroclinic turbulent flow. It is demonstrated that crucial principal statistical quantities like the tracer spectrum and fat-tails in the tracer probability density functions in the most important large scales can be captured efficiently with accuracy using the reduced-order tracer model in various dynamical regimes of the flow field with

  19. A multivariate model for predicting segmental body composition.

    Science.gov (United States)

    Tian, Simiao; Mioche, Laurence; Denis, Jean-Baptiste; Morio, Béatrice

    2013-12-01

    The aims of the present study were to propose a multivariate model for simultaneously predicting body, trunk and appendicular fat and lean masses from easily measured variables, and to compare its predictive capacity with that of the available univariate models that predict body fat percentage (BF%). The dual-energy X-ray absorptiometry (DXA) dataset (52% men and 48% women) with White, Black and Hispanic ethnicities (1999-2004, National Health and Nutrition Examination Survey) was randomly divided into three sub-datasets: a training dataset (TRD), a test dataset (TED) and a validation dataset (VAD), comprising 3835, 1917 and 1917 subjects, respectively. For each sex, several multivariate prediction models were fitted from the TRD using age, weight, height and possibly waist circumference. The most accurate model was selected using the TED and then applied to the VAD and to a French DXA dataset (French DB) (526 men and 529 women) to assess prediction accuracy in comparison with five published univariate models, for which adjusted formulas were re-estimated using the TRD. Waist circumference was found to improve prediction accuracy, especially in men. For BF%, the standard error of prediction (SEP) values were 3.26 (3.75)% for men and 3.47 (3.95)% for women in the VAD (French DB), as good as those of the adjusted univariate models. Moreover, the SEP values for the prediction of body and appendicular lean masses ranged from 1.39 to 2.75 kg for both sexes. Prediction accuracy was best for age < 65 years, BMI < 30 kg/m2 and Hispanic ethnicity. The application of our multivariate model to large populations could be useful for addressing various public health issues.
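
    A multivariate (multi-output) linear model of this kind amounts to one least-squares solve with a coefficient column per outcome. A compact sketch on synthetic data, with SEP computed per outcome as in the study:

        import numpy as np

        rng = np.random.default_rng(4)
        n = 500
        age = rng.uniform(20.0, 80.0, n)
        weight = rng.uniform(50.0, 110.0, n)
        height = rng.uniform(1.50, 1.95, n)
        waist = rng.uniform(60.0, 120.0, n)

        # Mock outcomes: [body fat %, trunk fat %, appendicular lean mass (kg)].
        Y = np.column_stack([
            0.10 * age + 0.50 * waist - 20.0 * height + rng.normal(0, 3, n),
            0.08 * age + 0.45 * waist - 15.0 * height + rng.normal(0, 3, n),
            0.25 * weight + 10.0 * height - 0.05 * age + rng.normal(0, 2, n),
        ])

        X = np.column_stack([np.ones(n), age, weight, height, waist])
        B, *_ = np.linalg.lstsq(X, Y, rcond=None)   # one coefficient column per outcome

        resid = Y - X @ B
        sep = np.sqrt((resid ** 2).mean(axis=0))    # standard error of prediction
        print("SEP per outcome:", sep.round(2))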

  20. Ionization efficiency calculations for cavity thermoionization ion source

    International Nuclear Information System (INIS)

    Turek, M.; Pyszniak, K.; Drozdziel, A.; Sielanko, J.; Maczka, D.; Yuskevich, Yu.V.; Vaganov, Yu.A.

    2009-01-01

    A numerical model of ionization in a thermoionization ion source is presented. A review of ion source ionization efficiency calculation results for various kinds of extraction fields is given. The dependence of ionization efficiency on working parameters such as ionizer length and extraction voltage is discussed. Numerical simulation results are compared with theoretical predictions obtained from a simplified ionization model

  1. Two stage neural network modelling for robust model predictive control.

    Science.gov (United States)

    Patan, Krzysztof

    2018-01-01

    The paper proposes a novel robust model predictive control scheme realized by means of artificial neural networks. The neural networks are used in two ways: to design the so-called fundamental model of the plant and to capture the uncertainty associated with the plant model. In order to simplify the optimization process carried out within the framework of predictive control, an instantaneous linearization is applied, which makes it possible to define the optimization problem in the form of constrained quadratic programming. Stability of the proposed control system is also investigated by showing that the cost function is monotonically decreasing with respect to time. The derived robust model predictive control is tested and validated on the example of a pneumatic servomechanism working in different operating regimes. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  2. Hybrid Corporate Performance Prediction Model Considering Technical Capability

    Directory of Open Access Journals (Sweden)

    Joonhyuck Lee

    2016-07-01

    Many studies have tried to predict corporate performance and stock prices to enhance investment profitability using qualitative approaches such as the Delphi method. However, developments in data processing technology and machine-learning algorithms have resulted in efforts to develop quantitative prediction models in various managerial subject areas. We propose a quantitative corporate performance prediction model that applies the support vector regression (SVR) algorithm, which mitigates the overfitting of training data and can be applied to regression problems. The proposed model optimizes the SVR training parameters based on the training data, using a genetic algorithm to achieve sustainable predictability in changeable markets and managerial environments. Technology-intensive companies represent an increasing share of the total economy. The performance and stock prices of these companies are affected by their financial standing and their technological capabilities. Therefore, we apply both financial indicators and technical indicators to establish the proposed prediction model. Here, we use time series data, including financial, patent, and corporate performance information of 44 electronics and IT companies. Then, we predict the performance of these companies as an empirical verification of the prediction performance of the proposed model.
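
    The GA-over-SVR-hyperparameters loop can be condensed to a few lines with scikit-learn; the encoding (log-space parameters), population size and mutation scale are illustrative choices, and synthetic data replaces the financial/patent indicators:

        import numpy as np
        from sklearn.datasets import make_regression
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVR

        rng = np.random.default_rng(5)
        X, y = make_regression(n_samples=120, n_features=8, noise=10.0, random_state=0)

        def fitness(log_params):
            C, gamma, eps = np.exp(log_params)
            return cross_val_score(SVR(C=C, gamma=gamma, epsilon=eps),
                                   X, y, cv=3, scoring="r2").mean()

        # Minimal genetic algorithm over (log C, log gamma, log epsilon).
        pop = rng.uniform(low=[-2, -6, -3], high=[6, 0, 2], size=(20, 3))
        for generation in range(15):
            scores = np.array([fitness(p) for p in pop])
            elite = pop[np.argsort(scores)[-5:]]                 # keep the best five
            children = (elite[rng.integers(0, 5, size=15)]
                        + rng.normal(scale=0.3, size=(15, 3)))   # mutated copies
            pop = np.vstack([elite, children])

        best = pop[np.argmax([fitness(p) for p in pop])]
        print("best (C, gamma, epsilon):", np.exp(best).round(4))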

  3. Development of a modified equilibrium model for biomass pilot-scale fluidized bed gasifier performance predictions

    International Nuclear Information System (INIS)

    Rodriguez-Alejandro, David A.; Nam, Hyungseok; Maglinao, Amado L.; Capareda, Sergio C.; Aguilera-Alvarado, Alberto F.

    2016-01-01

    The objective of this work is to develop a thermodynamic model considering non-stoichiometric restrictions. The model was validated against experimental work using a bench-scale fluidized bed gasifier with wood chips, dairy manure, and sorghum. The model was then used for a further parametric study to predict the performance of a pilot-scale fluidized biomass gasifier. Gibbs free energy minimization was applied to the modified equilibrium model, considering heat loss to the surroundings, carbon efficiency, and two non-equilibrium factors based on empirical correlations of ER and gasification temperature. The model was in good agreement with the measurements, with RMS <4 for the produced gas. The parametric study ranges were 0.01 < ER < 0.99 and 500 °C < T < 900 °C, to predict syngas concentrations and the lower heating value (LHV) for the optimization. Higher aromatics in tar were found for WC gasification compared to manure gasification. A wood gasification tar simulation was produced to predict the amount of tars at specific conditions. The operating conditions for the highest quality syngas were reconciled experimentally with three biomass wastes using a fluidized bed gasifier. The thermodynamic model was used to predict the gasification performance at conditions beyond the actual operation. - Highlights: • Syngas from experimental gasification was used to create a non-equilibrium model. • Different types of biomass (HTS, DM, and WC) were used for gasification modelling. • Different tar compositions were identified with a simulation of tar yields. • The optimum operating conditions were found through the developed model.

  4. Predictive model of nicotine dependence based on mental health indicators and self-concept

    Directory of Open Access Journals (Sweden)

    Hamid Kazemi Zahrani

    2014-12-01

    Background: The purpose of this research was to investigate the predictive power of anxiety, depression, stress and self-concept dimensions (mental ability, job efficiency, physical attractiveness, social skills, and deficiencies and merits) as predictors of nicotine dependency among university students in Isfahan. Methods: In this correlational study, 110 male nicotine-dependent students at Isfahan University were selected by convenience sampling. All participants were assessed with the Depression Anxiety Stress Scale (DASS), a self-concept test and the Nicotine Dependence Syndrome Scale. Data were analyzed by Pearson correlation and stepwise regression. Results: The results showed that anxiety had the highest power to predict nicotine dependence. In addition, self-concept and its dimensions predicted only 12% of the variance in nicotine dependence, which was not significant. Conclusion: Emotional processing variables involved in mental health play a more important role than identity variables, such as the dimensions of self-concept, in modeling students' dependence on nicotine.

  5. A study of modelling simplifications in ground vibration predictions for railway traffic at grade

    Science.gov (United States)

    Germonpré, M.; Degrande, G.; Lombaert, G.

    2017-10-01

    Accurate computational models are required to predict ground-borne vibration due to railway traffic. Such models generally require a substantial computational effort. Therefore, much research has focused on developing computationally efficient methods, by either exploiting the regularity of the problem geometry in the direction along the track or assuming a simplified track structure. This paper investigates the modelling errors caused by commonly made simplifications of the track geometry. A case study is presented investigating a ballasted track in an excavation. The soil underneath the ballast is stiffened by a lime treatment. First, periodic track models with different cross sections are analyzed, revealing that a prediction of the rail receptance only requires an accurate representation of the soil layering directly underneath the ballast. A much more detailed representation of the cross sectional geometry is required, however, to calculate vibration transfer from track to free field. Second, simplifications in the longitudinal track direction are investigated by comparing 2.5D and periodic track models. This comparison shows that the 2.5D model slightly overestimates the track stiffness, while the transfer functions between track and free field are well predicted. Using a 2.5D model to predict the response during a train passage leads to an overestimation of both train-track interaction forces and free field vibrations. A combined periodic/2.5D approach is therefore proposed in this paper. First, the dynamic axle loads are computed by solving the train-track interaction problem with a periodic model. Next, the vibration transfer to the free field is computed with a 2.5D model. This combined periodic/2.5D approach only introduces small modelling errors compared to an approach in which a periodic model is used in both steps, while significantly reducing the computational cost.

  6. Dynamic Simulation of Human Gait Model With Predictive Capability.

    Science.gov (United States)

    Sun, Jinming; Wu, Shaoli; Voglewede, Philip A

    2018-03-01

    In this paper, it is proposed that the central nervous system (CNS) controls human gait using a predictive control approach in conjunction with classical feedback control instead of exclusive classical feedback control theory that controls based on past error. To validate this proposition, a dynamic model of human gait is developed using a novel predictive approach to investigate the principles of the CNS. The model developed includes two parts: a plant model that represents the dynamics of human gait and a controller that represents the CNS. The plant model is a seven-segment, six-joint model that has nine degrees-of-freedom (DOF). The plant model is validated using data collected from able-bodied human subjects. The proposed controller utilizes model predictive control (MPC). MPC uses an internal model to predict the output in advance, compare the predicted output to the reference, and optimize the control input so that the predicted error is minimal. To decrease the complexity of the model, two joints are controlled using a proportional-derivative (PD) controller. The developed predictive human gait model is validated by simulating able-bodied human gait. The simulation results show that the developed model is able to simulate the kinematic output close to experimental data.

  7. Massive Predictive Modeling using Oracle R Enterprise

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...

  8. Prediction of residential radon exposure of the whole Swiss population: comparison of model-based predictions with measurement-based predictions.

    Science.gov (United States)

    Hauri, D D; Huss, A; Zimmermann, F; Kuehni, C E; Röösli, M

    2013-10-01

    Radon plays an important role in human exposure to natural sources of ionizing radiation. The aim of this article is to compare two approaches to estimating the mean radon exposure of the Swiss population: model-based predictions at the individual level and measurement-based predictions based on measurements aggregated at the municipality level. A nationwide model was used to predict the radon level in each household and for each individual based on the corresponding tectonic unit, building age, building type, soil texture, degree of urbanization, and floor. Measurement-based predictions were carried out within a health impact assessment on residential radon and lung cancer. Mean measured radon levels were corrected for the average floor distribution and weighted by the population size of each municipality. Model-based predictions yielded a mean radon exposure of the Swiss population of 84.1 Bq/m3. Measurement-based predictions yielded an average exposure of 78 Bq/m3. This study demonstrates that the model- and measurement-based predictions provide similar results. The advantage of the measurement-based approach is its simplicity, which is sufficient for assessing the exposure distribution in a population. The model-based approach allows radon levels to be predicted at specific sites, which is needed in an epidemiological study, and its results do not depend on how the measurement sites were selected. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  9. Prediction of Hydraulic Performance of a Scaled-Down Model of SMART Reactor Coolant Pump

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Sun Guk; Park, Jin Seok; Yu, Je Yong; Lee, Won Jae [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2010-08-15

    An analysis was conducted to predict the hydraulic performance of a reactor coolant pump (RCP) of SMART at off-design as well as design points. In order to reduce the analysis time, a single passage containing an impeller and a diffuser was considered as the computational domain. A stage scheme was used to perform circumferential averaging of the flux on the impeller-diffuser interface. The pressure difference between the inlet and outlet of the pump was determined and used to compute the head, efficiency, and brake horsepower (BHP) of a scaled-down model under conditions of steady-state incompressible flow. The predicted hydraulic performance curves of the RCP were similar to the typical characteristic curves of a conventional mixed-flow pump. The complex internal fluid flow of the pump, including internal recirculation loss due to reverse flow, was observed at low flow rates.

  10. Application of GIS based data driven evidential belief function model to predict groundwater potential zonation

    Science.gov (United States)

    Nampak, Haleh; Pradhan, Biswajeet; Manap, Mohammad Abd

    2014-05-01

    The objective of this paper is to exploit the potential of an evidential belief function (EBF) model for spatial prediction of groundwater productivity in the Langat basin area, Malaysia, using geographic information system (GIS) techniques. About 125 groundwater yield data were collected from well locations. The wells were divided into high-yield (⩾11 m3/h) and low-yield (<11 m3/h) groups. The high-yield wells were then divided into a training dataset of 70% (42 wells) for training the model, with the remaining 30% (18 wells) used for validation. To perform cross-validation, the frequency ratio (FR) approach was applied to the remaining low-yield groundwater wells to show the spatial correlation with low-potential zones of groundwater productivity. A total of twelve groundwater conditioning factors affecting the storage of groundwater occurrences were derived from various data sources such as satellite-based imagery, topographic maps and the associated database: elevation, slope, curvature, stream power index (SPI), topographic wetness index (TWI), drainage density, lithology, lineament density, land use, normalized difference vegetation index (NDVI), soil and rainfall. Subsequently, the Dempster-Shafer theory of evidence model was applied to prepare the groundwater potential map. Finally, the groundwater potential map derived from the belief map was validated using the testing data. Furthermore, to benchmark the EBF result, a logistic regression (LR) model was applied. Success-rate and prediction-rate curves were computed to estimate the efficiency of the EBF model compared to the LR method. The validation results demonstrated that the success-rates for the EBF and LR methods were 83% and 82%, respectively, and the areas under the curve for the prediction-rates of the EBF and LR methods were 78% and 72%, respectively. The outputs achieved from the current research prove the efficiency of EBF in groundwater potential mapping

  11. Clinical Prediction Models for Cardiovascular Disease: Tufts Predictive Analytics and Comparative Effectiveness Clinical Prediction Model Database.

    Science.gov (United States)

    Wessler, Benjamin S; Lai Yh, Lana; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S; Kent, David M

    2015-07-01

    Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease, there are numerous CPMs available although the extent of this literature is not well described. We conducted a systematic review for articles containing CPMs for cardiovascular disease published between January 1990 and May 2012. Cardiovascular disease includes coronary heart disease, heart failure, arrhythmias, stroke, venous thromboembolism, and peripheral vascular disease. We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year is increasing steadily over time. Seven hundred seventeen (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions, including 215 CPMs for patients with coronary artery disease, 168 CPMs for population samples, and 79 models for patients with heart failure. There are 77 distinct index/outcome pairings. Of the de novo models in this database, 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. There is an abundance of CPMs available for a wide assortment of cardiovascular disease conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models and the actual and potential clinical impact of this body of literature is poorly understood. © 2015 American Heart Association, Inc.

  12. Prediction of resource volumes at untested locations using simple local prediction models

    Science.gov (United States)

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2006-01-01

    This paper shows how local spatial nonparametric prediction models can be applied to estimate volumes of recoverable gas resources at individual undrilled sites and at multiple sites on a regional scale, and to compute confidence bounds for regional volumes based on the distribution of those estimates. An approach that combines cross-validation, the jackknife, and bootstrap procedures is used to accomplish this task. Simulation experiments show that cross-validation can be applied beneficially to select an appropriate prediction model. The cross-validation procedure worked well for a wide range of different states of nature and levels of information. Jackknife procedures are used to compute individual prediction estimation errors at undrilled locations. The jackknife replicates are also used with a bootstrap resampling procedure to compute confidence bounds for the total volume. The method was applied to data (partitioned into a training set and target set) from the Devonian Antrim Shale continuous-type gas play in the Michigan Basin in Otsego County, Michigan. The analysis showed that the model estimate of total recoverable volumes at prediction sites is within 4 percent of the total observed volume. The model predictions also provide frequency distributions of the cell volumes at the production unit scale. Such distributions are the basis for subsequent economic analyses. © Springer Science+Business Media, LLC 2007.
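
    The cross-validation/jackknife/bootstrap combination can be miniaturized as follows; a k-nearest-site mean stands in for the local nonparametric predictor, and all locations and volumes are synthetic:

        import numpy as np

        rng = np.random.default_rng(6)
        xy = rng.uniform(0.0, 10.0, size=(80, 2))           # drilled well locations
        vol = 50.0 + 5.0 * xy[:, 0] + rng.normal(0, 8, 80)  # observed gas volumes

        def local_predict(train_xy, train_v, q, k=8):
            """Local nonparametric prediction: mean of the k nearest drilled sites."""
            d = np.linalg.norm(train_xy - q, axis=1)
            return train_v[np.argsort(d)[:k]].mean()

        # Jackknife (leave-one-out) prediction errors at the drilled sites.
        loo = np.array([local_predict(np.delete(xy, i, 0), np.delete(vol, i), xy[i])
                        for i in range(len(vol))])
        resid = vol - loo
        print("jackknife RMSE:", np.sqrt(np.mean(resid ** 2)).round(2))

        # Predict undrilled cells, then bootstrap the jackknife residuals to put
        # confidence bounds on the regional total.
        targets = rng.uniform(0.0, 10.0, size=(200, 2))
        pred = np.array([local_predict(xy, vol, t) for t in targets])
        totals = [np.sum(pred + rng.choice(resid, size=len(pred), replace=True))
                  for _ in range(2000)]
        lo, hi = np.percentile(totals, [5, 95])
        print(f"regional total: {pred.sum():.0f} (90% bounds: {lo:.0f}-{hi:.0f})")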

  13. Optoelectronic engineering of colloidal quantum-dot solar cells beyond the efficiency black hole: a modeling approach

    Science.gov (United States)

    Mahpeykar, Seyed Milad; Wang, Xihua

    2017-02-01

    Colloidal quantum dot (CQD) solar cells have been under the spotlight in recent years mainly due to their potential for low-cost solution-processed fabrication and efficient light harvesting through multiple exciton generation (MEG) and tunable absorption spectrum via the quantum size effect. Despite the impressive advances achieved in charge carrier mobility of quantum dot solids and the cells' light trapping capabilities, the recent progress in CQD solar cell efficiencies has been slow, leaving them behind other competing solar cell technologies. In this work, using comprehensive optoelectronic modeling and simulation, we demonstrate the presence of a strong efficiency loss mechanism, here called the "efficiency black hole", that can significantly hold back the improvements achieved by any efficiency enhancement strategy. We prove that this efficiency black hole is the result of sole focus on enhancement of either light absorption or charge extraction capabilities of CQD solar cells. This means that for a given thickness of CQD layer, improvements accomplished exclusively in optic or electronic aspect of CQD solar cells do not necessarily translate into tangible enhancement in their efficiency. The results suggest that in order for CQD solar cells to come out of the mentioned black hole, incorporation of an effective light trapping strategy and a high quality CQD film at the same time is an essential necessity. Using the developed optoelectronic model, the requirements for this incorporation approach and the expected efficiencies after its implementation are predicted as a roadmap for CQD solar cell research community.

  14. A burnout prediction model based around char morphology

    Energy Technology Data Exchange (ETDEWEB)

    Tao Wu; Edward Lester; Michael Cloke [University of Nottingham, Nottingham (United Kingdom). School of Chemical, Environmental and Mining Engineering

    2006-05-15

    Several combustion models have been developed that can make predictions about coal burnout and burnout potential. Most of these kinetic models require standard parameters such as volatile content and particle size to make a burnout prediction. This article presents a new model called the char burnout (ChB) model, which also uses detailed information about char morphology in its prediction. The input data to the model are based on information derived from two different image analysis techniques. One technique generates characterization data from real char samples, and the other predicts char types based on characterization data from image analysis of coal particles. The pyrolyzed chars in this study were created in a drop tube furnace operating at 1300°C, 200 ms, and 1% oxygen. Modeling results were compared with a different carbon burnout kinetic model as well as the actual burnout data from refiring the same chars in a drop tube furnace operating at 1300°C, 5% oxygen, and residence times of 200, 400, and 600 ms. Good agreement between the ChB model and experimental data indicates that the inclusion of char morphology in combustion models could well improve model predictions. 38 refs., 5 figs., 6 tabs.

  15. Comparative Study of Bankruptcy Prediction Models

    Directory of Open Access Journals (Sweden)

    Isye Arieshanti

    2013-09-01

    Early indication of bankruptcy is important for a company. If a company is aware of its potential bankruptcy, it can take preventive action to anticipate it. In order to detect the potential for bankruptcy, a company can utilize a bankruptcy prediction model. Such a prediction model can be built using machine learning methods. However, the choice of machine learning method should be made carefully, because the suitability of a model depends on the specific problem. Therefore, in this paper we perform a comparative study of several machine learning methods for bankruptcy prediction. The comparison of several machine-learning-based models (k-NN, fuzzy k-NN, SVM, Bagging Nearest Neighbour SVM, Multilayer Perceptron (MLP), and a hybrid of MLP + Multiple Linear Regression) shows that the fuzzy k-NN method achieves the best performance, with 77.5% accuracy.

  16. A grey NGM(1,1, k) self-memory coupling prediction model for energy consumption prediction.

    Science.gov (United States)

    Guo, Xiaojun; Liu, Sifeng; Wu, Lifeng; Tang, Lingling

    2014-01-01

    Energy consumption prediction is an important issue for governments, energy sector investors, and other related corporations. Although several prediction techniques exist, selection of the most appropriate technique is of vital importance. For the approximately nonhomogeneous exponential data sequences that often emerge in energy systems, a novel grey NGM(1,1, k) self-memory coupling prediction model is put forward to improve predictive performance. It achieves an organic integration of the self-memory principle of dynamic systems with the grey NGM(1,1, k) model. The traditional grey model's weakness of sensitivity to initial values is overcome by the self-memory principle. In this study, the total energy, coal, and electricity consumption of China are used for demonstration with the proposed coupling prediction technique. The results show the superiority of the NGM(1,1, k) self-memory coupling prediction model when compared with results from the literature. Its excellent prediction performance stems from the coupling model's ability to take full advantage of systematic multi-time historical data and to capture the stochastic fluctuation tendency. This work also makes a significant contribution to the enrichment of grey prediction theory and the extension of its application span.
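
    The NGM(1,1, k) self-memory equations are not given in the abstract, so as a minimal, hedged illustration of the grey-modelling family the paper builds on, the sketch below implements the classic GM(1,1) forecaster (accumulated generation plus a least-squares fit of the whitened equation); the function name and the sample series are hypothetical.

```python
import numpy as np

def gm11_forecast(x, n_ahead=3):
    """Classic GM(1,1) grey forecast for a short positive series."""
    x = np.asarray(x, dtype=float)
    x1 = np.cumsum(x)                        # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])            # mean sequence of consecutive AGO terms
    # Least-squares estimate of development coefficient a and grey input b
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]
    k = np.arange(len(x) + n_ahead)
    x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a   # time-response function
    return np.diff(x1_hat, prepend=0.0)                # inverse AGO -> level values

# Hypothetical annual energy-consumption series (arbitrary units)
print(gm11_forecast([2.1, 2.3, 2.6, 2.8, 3.1], n_ahead=2))
```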

  17. Risk predictive modelling for diabetes and cardiovascular disease.

    Science.gov (United States)

    Kengne, Andre Pascal; Masconi, Katya; Mbanya, Vivian Nchanchou; Lekoubou, Alain; Echouffo-Tcheugui, Justin Basile; Matsha, Tandi E

    2014-02-01

    Absolute risk models or clinical prediction models have been incorporated in guidelines and are increasingly advocated as tools to assist risk stratification and guide prevention and treatment decisions relating to common health conditions such as cardiovascular disease (CVD) and diabetes mellitus. We have reviewed the historical development and principles of prediction research, including their statistical underpinning, as well as implications for routine practice, with a focus on predictive modelling for CVD and diabetes. Predictive modelling for CVD risk, which has developed over the last five decades, has been largely influenced by the Framingham Heart Study investigators, while it is only ∼20 years ago that similar efforts were started in the field of diabetes. Identification of predictive factors is an important preliminary step which provides the knowledge base on potential predictors to be tested for inclusion during the statistical derivation of the final model. The derived models must then be tested both on the development sample (internal validation) and on other populations in different settings (external validation). Updating procedures (e.g. recalibration) should be used to improve the performance of models that fail the tests of external validation. Ultimately, the effect of introducing validated models in routine practice on the process and outcomes of care, as well as their cost-effectiveness, should be tested in impact studies before wide dissemination of models beyond the research context. Several prediction models have been developed for CVD or diabetes, but very few have been externally validated or tested in impact studies, and their comparative performance has yet to be fully assessed. A shift of focus from developing new CVD or diabetes prediction models to validating the existing ones will improve their adoption in routine practice.

  18. Model-based uncertainty in species range prediction

    DEFF Research Database (Denmark)

    Pearson, R. G.; Thuiller, Wilfried; Bastos Araujo, Miguel

    2006-01-01

    Aim Many attempts to predict the potential range of species rely on environmental niche (or 'bioclimate envelope') modelling, yet the effects of using different niche-based methodologies require further investigation. Here we investigate the impact that the choice of model can have on predictions...

  19. Survival prediction model for postoperative hepatocellular carcinoma patients.

    Science.gov (United States)

    Ren, Zhihui; He, Shasha; Fan, Xiaotang; He, Fangping; Sang, Wei; Bao, Yongxing; Ren, Weixin; Zhao, Jinming; Ji, Xuewen; Wen, Hao

    2017-09-01

    This study aims to establish a predictive index (PI) model of the 5-year survival rate for patients with hepatocellular carcinoma (HCC) after radical resection and to evaluate its prediction sensitivity, specificity, and accuracy. Patients who underwent HCC surgical resection were enrolled and randomly divided into a prediction model group (101 patients) and a model evaluation group (100 patients). A Cox regression model was used for univariate and multivariate survival analysis. A PI model was established based on the multivariate analysis and a receiver operating characteristic (ROC) curve was drawn accordingly. The area under the ROC curve (AUROC) and the PI cutoff value were identified. Multivariate Cox regression analysis of the prediction model group showed that neutrophil to lymphocyte ratio (NLR), histological grade (HG), microvascular invasion (MVI), positive resection margin (PRM), number of tumors (NT), and postoperative transcatheter arterial chemoembolization (TACE) treatment were independent predictors of the 5-year survival rate for HCC patients. The model was PI = 0.377 × NLR + 0.554 × HG + 0.927 × PRM + 0.778 × MVI + 0.740 × NT − 0.831 × TACE. In the prediction model group, the AUROC was 0.832 and the PI cutoff value was 3.38. The sensitivity, specificity, and accuracy were 78.0%, 80%, and 79.2%, respectively. In the model evaluation group, the AUROC was 0.822, and the PI cutoff value corresponded well to that of the prediction model group, with sensitivity, specificity, and accuracy of 85.0%, 83.3%, and 84.0%, respectively. The PI model can quantify the mortality risk of hepatitis B related HCC with high sensitivity, specificity, and accuracy.
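
    Since the abstract states the index explicitly, a small sketch can show how it would be applied in code; the covariate values below and their numeric encodings (e.g., 0/1 indicators for PRM, MVI, and TACE) are illustrative assumptions.

```python
# Minimal sketch of the reported predictive index (PI). The coefficients
# and the 3.38 cutoff come from the abstract; coding each covariate as a
# number (e.g., 0/1 indicators for PRM, MVI, TACE) is an assumption here.
def hcc_predictive_index(nlr, hg, prm, mvi, nt, tace):
    return (0.377 * nlr + 0.554 * hg + 0.927 * prm
            + 0.778 * mvi + 0.740 * nt - 0.831 * tace)

def high_risk(pi, cutoff=3.38):
    return pi > cutoff

pi = hcc_predictive_index(nlr=3.2, hg=2, prm=0, mvi=1, nt=1, tace=0)
print(round(pi, 2), high_risk(pi))
```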

  20. Methodology for Designing Models Predicting Success of Infertility Treatment

    OpenAIRE

    Alireza Zarinara; Mohammad Mahdi Akhondi; Hojjat Zeraati; Koorsh Kamali; Kazem Mohammad

    2016-01-01

    Abstract Background: Prediction models for infertility treatment success have been presented for the past 25 years. There are scientific principles for designing and applying prediction models, which are also used to predict the success rate of infertility treatment. The purpose of this study is to provide basic principles for designing models to predict infertility treatment success. Materials and Methods: In this paper, the principles for developing predictive models are explained and...

  1. Development of a high-fidelity numerical model for hazard prediction in the urban environment

    International Nuclear Information System (INIS)

    Lien, F.S.; Yee, E.; Ji, H.; Keats, A.; Hsieh, K.J.

    2005-01-01

    The release of chemical, biological, radiological, or nuclear (CBRN) agents by terrorists or rogue states in a North American city (densely populated urban centre) and the subsequent exposure, deposition, and contamination are emerging threats in an uncertain world. The transport, dispersion, deposition, and fate of a CBRN agent released in an urban environment is an extremely complex problem that encompasses potentially multiple space and time scales. The availability of high-fidelity, time-dependent models for the prediction of a CBRN agent's movement and fate in a complex urban environment can provide the strongest technical and scientific foundation for support of Canada's more broadly based effort at advancing counter-terrorism planning and operational capabilities. The objective of this paper is to report the progress of developing and validating an integrated, state-of-the-art, high-fidelity multi-scale, multi-physics modeling system for the accurate and efficient prediction of urban flow and dispersion of CBRN materials. Development of this proposed multi-scale modeling system will provide the real-time modeling and simulation tool required to predict injuries, casualties, and contamination and to make relevant decisions (based on the strongest technical and scientific foundations) in order to minimize the consequences of a CBRN incident based on a pre-determined decision making framework. (author)

  2. An Entropy-Based Kernel Learning Scheme toward Efficient Data Prediction in Cloud-Assisted Network Environments

    Directory of Open Access Journals (Sweden)

    Xiong Luo

    2016-07-01

    Full Text Available With the recent emergence of wireless sensor networks (WSNs) in the cloud computing environment, it is now possible to monitor and gather physical information via large numbers of sensor nodes to meet the requirements of cloud services. Generally, those sensor nodes collect data and send the data to a sink node where end-users can query all the information and run cloud applications. Currently, one of the main disadvantages of sensor nodes is their limited physical resources, namely little storage memory and a limited power source. Therefore, to work around this limitation, it is necessary to develop an efficient data prediction method in WSNs. To serve this purpose, by reducing the redundant data transmission between sensor nodes and the sink node while maintaining acceptable errors, this article proposes an entropy-based learning scheme for data prediction using the kernel least mean square (KLMS) algorithm. The proposed scheme, called E-KLMS, develops a mechanism to keep the predicted data synchronous on both sides. Specifically, the kernel-based method is able to adjust the coefficients adaptively in accordance with every input, which achieves better performance with smaller prediction errors, while information entropy is employed to remove the data which may cause relatively large errors. E-KLMS can effectively solve the tradeoff between prediction accuracy and computational effort while greatly simplifying the training structure compared with some other data prediction approaches. Moreover, the kernel-based method and the entropy technique ensure the prediction effect by both improving accuracy and reducing errors. Experiments with real data sets have been carried out to validate the efficiency and effectiveness of the E-KLMS learning scheme, and the experimental results show the advantages of our method in prediction accuracy and computational time.
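
    The abstract names the kernel least mean square (KLMS) algorithm at the core of E-KLMS. The sketch below is a plain KLMS online learner with a Gaussian kernel, without the entropy-based pruning; the step size, kernel width, and toy signal are illustrative assumptions.

```python
import numpy as np

class KLMS:
    """Plain kernel least-mean-square online learner (Gaussian kernel)."""

    def __init__(self, step=0.5, width=1.0):
        self.step, self.width = step, width
        self.centers, self.coeffs = [], []

    def _kernel(self, a, b):
        return np.exp(-np.sum((a - b) ** 2) / (2 * self.width ** 2))

    def predict(self, x):
        return sum(c * self._kernel(u, x) for c, u in zip(self.coeffs, self.centers))

    def update(self, x, desired):
        err = desired - self.predict(x)      # prediction error drives the update
        self.centers.append(np.asarray(x, dtype=float))
        self.coeffs.append(self.step * err)  # new kernel unit weighted by the error
        return err

# Online one-step-ahead prediction of a toy sensor signal
model, signal = KLMS(), np.sin(0.3 * np.arange(200))
for t in range(5, 199):
    model.update(signal[t - 5:t], signal[t])
print("next-step prediction:", model.predict(signal[194:199]))
```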

  3. Sentinel node status prediction by four statistical models: results from a large bi-institutional series (n = 1132).

    Science.gov (United States)

    Mocellin, Simone; Thompson, John F; Pasquali, Sandro; Montesco, Maria C; Pilati, Pierluigi; Nitti, Donato; Saw, Robyn P; Scolyer, Richard A; Stretch, Jonathan R; Rossi, Carlo R

    2009-12-01

    To improve selection for sentinel node (SN) biopsy (SNB) in patients with cutaneous melanoma using statistical models predicting SN status. About 80% of patients currently undergoing SNB are node negative. In the absence of conclusive evidence of an SNB-associated survival benefit, these patients may be over-treated. Here, we tested the efficiency of 4 different models in predicting SN status. The clinicopathologic data (age, gender, tumor thickness, Clark level, regression, ulceration, histologic subtype, and mitotic index) of 1132 melanoma patients who had undergone SNB at institutions in Italy and Australia were analyzed. Logistic regression, classification tree, random forest, and support vector machine models were fitted to the data. The predictive models were built with the aim of maximizing the negative predictive value (NPV) and reducing the rate of SNB procedures through minimizing the error rate. After cross-validation, the logistic regression, classification tree, random forest, and support vector machine predictive models obtained clinically relevant NPVs (93.6%, 94.0%, 97.1%, and 93.0%, respectively), SNB reductions (27.5%, 29.8%, 18.2%, and 30.1%, respectively), and error rates (1.8%, 1.8%, 0.5%, and 2.1%, respectively). Using commonly available clinicopathologic variables, predictive models can preoperatively identify a proportion of patients (approximately 25%) who might be spared SNB, with an acceptable (1%-2%) error. If validated in large prospective series, these models might be implemented in the clinical setting for improved patient selection, which ultimately would lead to better quality of life for patients and optimization of resource allocation for the health care system.
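
    A hedged sketch of how such models can be scored on the three quantities the authors report (NPV, SNB reduction, and error rate); the random forest settings and the placeholder X, y arrays are assumptions, not the study's fitted models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

def npv_reduction_error(X, y, model=None, folds=10):
    """Score a SN-status classifier as in the abstract.
    y: 1 = positive sentinel node, 0 = negative (numpy array)."""
    model = model or RandomForestClassifier(n_estimators=500, random_state=0)
    pred = cross_val_predict(model, X, y, cv=folds)
    neg = pred == 0
    npv = np.mean(y[neg] == 0)        # predicted-negatives that are truly negative
    reduction = np.mean(neg)          # share of patients who could be spared SNB
    error = np.mean((y == 1) & neg)   # positive nodes the model would miss
    return npv, reduction, error
```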

  4. Development of a disease risk prediction model for downy mildew (Peronospora sparsa) in boysenberry.

    Science.gov (United States)

    Kim, Kwang Soo; Beresford, Robert M; Walter, Monika

    2014-01-01

    Downy mildew caused by Peronospora sparsa has resulted in serious production losses in boysenberry (Rubus hybrid), blackberry (Rubus fruticosus), and rose (Rosa sp.) in New Zealand, Mexico, and the United States and the United Kingdom, respectively. Development of a model to predict downy mildew risk would facilitate the development and implementation of a disease warning system for efficient fungicide spray application in the crops affected by this disease. Because detailed disease observation data were not available, a two-step approach was applied to develop an empirical risk prediction model for P. sparsa. To identify the weather patterns associated with a high incidence of downy mildew berry infections (dryberry disease) and derive parameters for the empirical model, classification and regression tree (CART) analysis was performed. Then, fuzzy sets were applied to develop a simple model to predict the disease risk based on the parameters derived from the CART analysis. High-risk seasons with a boysenberry downy mildew incidence >10% coincided with months in which the number of hours per day with temperature of 15 to 20°C averaged >9.8 over the month and the proportion of days with rainfall in the month was >38.7%. The Fuzzy Peronospora Sparsa (FPS) model, developed using fuzzy sets, defined relationships among high-risk events, temperature, and rainfall conditions. In a validation study, the FPS model correctly identified both seasons with high downy mildew risk for boysenberry, blackberry, and rose and low risk in seasons when no disease was observed. As a result, the FPS model had a significant degree of agreement between predicted and observed risks of downy mildew for those crops (P = 0.002).
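
    A toy sketch of an FPS-style fuzzy rule built from the two thresholds reported above; the ramp-shaped memberships and their lower anchors are assumptions, not the published membership functions.

```python
def ramp(x, lo, hi):
    """Linear fuzzy membership rising from 0 at lo to 1 at hi."""
    return max(0.0, min(1.0, (x - lo) / (hi - lo)))

def downy_mildew_risk(mean_hours_15_20c, rain_day_fraction):
    temp_m = ramp(mean_hours_15_20c, 8.0, 9.8)     # saturates at the >9.8 h/day threshold
    rain_m = ramp(rain_day_fraction, 0.30, 0.387)  # saturates at the >38.7% threshold
    return min(temp_m, rain_m)                     # fuzzy AND of both conditions

print(downy_mildew_risk(10.5, 0.45))  # a clearly high-risk month -> 1.0
```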

  5. Global parameterization and validation of a two-leaf light use efficiency model for predicting gross primary production across FLUXNET sites

    Czech Academy of Sciences Publication Activity Database

    Zhou, Y.; Wu, X.; Weiming, J.; Chen, J.; Wang, S.; Wang, H.; Wenping, Y.; Black, T. A.; Jassal, R.; Ibrom, A.; Han, S.; Yan, J.; Margolis, H.; Roupsard, O.; Li, Y.; Zhao, F.; Kiely, G.; Starr, G.; Pavelka, Marian; Montagnani, L.; Wohlfahrt, G.; D'Odorico, P.; Cook, D.; Altaf Arain, M.; Bonal, D.; Beringer, J.; Blanken, P. D.; Loubet, B.; Leclerc, M. Y.; Matteucci, G.; Nagy, Z.; Olejnik, Janusz; U., K. T. P.; Varlagin, A.

    2016-01-01

    Vol. 36, No. 7 (2016), pp. 2743-2760. ISSN 2169-8953. Institutional support: RVO:67179843. Keywords: global parameterization * prediction model * FLUXNET. Subject RIV: EH - Ecology, Behaviour. Impact factor: 3.395, year: 2016

  6. Hidden Semi-Markov Models for Predictive Maintenance

    Directory of Open Access Journals (Sweden)

    Francesco Cartella

    2015-01-01

    Full Text Available Realistic predictive maintenance approaches are essential for condition monitoring and predictive maintenance of industrial machines. In this work, we propose Hidden Semi-Markov Models (HSMMs) with (i) no constraints on the state duration density function and (ii) applicability to continuous or discrete observations. To deal with this type of HSMM, we also propose modifications to the learning, inference, and prediction algorithms. Finally, automatic model selection is made possible using the Akaike Information Criterion. This paper describes the theoretical formalization of the model as well as several experiments performed on simulated and real data with the aim of validating the methodology. In all experiments performed, the model is able to correctly estimate the current state and to effectively predict the time to a predefined event with a low overall average absolute error. As a consequence, its applicability to real world settings can be beneficial, especially where the Remaining Useful Lifetime (RUL) of the machine must be calculated in real time.

  7. Using Patient Demographics and Statistical Modeling to Predict Knee Tibia Component Sizing in Total Knee Arthroplasty.

    Science.gov (United States)

    Ren, Anna N; Neher, Robert E; Bell, Tyler; Grimm, James

    2018-06-01

    Preoperative planning is important for achieving successful implantation in primary total knee arthroplasty (TKA). However, traditional TKA templating techniques are not accurate enough to predict the component size within a very close range. With the goal of developing a general predictive statistical model using patient demographic information, ordinal logistic regression was applied to build a proportional odds model to predict the tibia component size. The study retrospectively collected data from 1992 primary Persona Knee System TKA procedures. Of these, 199 procedures were randomly selected as testing data, and the rest of the data were randomly partitioned between model training data and model evaluation data with a ratio of 7:3. Different models were trained and evaluated on the training and validation data sets after data exploration. The final model had patient gender, age, weight, and height as independent variables and predicted the tibia size within 1 size difference 96% of the time on the validation data, 94% of the time on the testing data, and 92% of the time on a prospective cadaver data set. The study results indicated that the statistical model built by ordinal logistic regression can increase the accuracy of tibia sizing information for Persona Knee preoperative templating. This research shows that statistical modeling may be used with radiographs to dramatically enhance templating accuracy, efficiency, and quality. In general, this methodology can be applied to other TKA products when the data are applicable. Copyright © 2018 Elsevier Inc. All rights reserved.
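
    A sketch of a proportional-odds fit in the spirit of the abstract, using statsmodels' OrderedModel; the file name and column names are hypothetical placeholders, and categorical inputs such as gender are assumed to be numerically encoded already.

```python
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical data: one row per TKA case with the four predictors from
# the abstract and the implanted tibia component size as the outcome.
df = pd.read_csv("tka_cases.csv")          # placeholder file
X = df[["gender", "age", "weight", "height"]]
y = pd.Categorical(df["tibia_size"], ordered=True)

fit = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)
probs = fit.predict(X)                      # per-case probabilities over sizes
predicted_size_index = probs.argmax(axis=1) # most likely component size
```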

  8. Measurement Of Technical Efficiency In Irrigated Vegetable ...

    African Journals Online (AJOL)

    This study measured technical efficiency and identified its determinants in irrigated vegetable production in Nasarawa State of Nigeria using a stochastic frontier model. A complete enumeration of 193 NADP-registered vegetable farmers was done. The predicted farm technical efficiency ranges from 25.94 to 96.24 per cent ...

  9. Accurate Holdup Calculations with Predictive Modeling & Data Integration

    Energy Technology Data Exchange (ETDEWEB)

    Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States). Dept. of Nuclear Engineering; Cacuci, Dan [Univ. of South Carolina, Columbia, SC (United States). Dept. of Mechanical Engineering

    2017-04-03

    In facilities that process special nuclear material (SNM) it is important to account accurately for the fissile material that enters and leaves the plant. Although there are many stages and processes through which materials must be traced and measured, the focus of this project is material that is “held-up” in equipment, pipes, and ducts during normal operation and that can accumulate over time into significant quantities. Accurately estimating the holdup is essential for proper SNM accounting (vis-à-vis nuclear non-proliferation), criticality and radiation safety, waste management, and efficient plant operation. Usually it is not possible to directly measure the holdup quantity and location, so these must be inferred from measured radiation fields, primarily gamma rays and less frequently neutrons. Current methods to quantify holdup, i.e. Generalized Geometry Holdup (GGH), primarily rely on simple source configurations and crude radiation transport models aided by ad hoc correction factors. This project seeks an alternate method of performing measurement-based holdup calculations using a predictive model that employs state-of-the-art radiation transport codes capable of accurately simulating such situations. Inverse and data assimilation methods use the forward transport model to search for a source configuration that best matches the measured data and simultaneously provide an estimate of the level of confidence in the correctness of such configuration. In this work the holdup problem is re-interpreted as an inverse problem that is under-determined, hence may permit multiple solutions. A probabilistic approach is applied to solving the resulting inverse problem. This approach rates possible solutions according to their plausibility given the measurements and initial information. This is accomplished through the use of Bayes’ Theorem, which resolves the issue of multiple solutions by giving an estimate of the probability of observing each possible solution.
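
    A toy numerical illustration of the Bayesian step described above: a grid of candidate holdup quantities is scored against a measured count via Bayes' theorem. The linear forward model and all numbers are assumptions standing in for a real transport code.

```python
import numpy as np

def forward_counts(holdup_grams, geometry_factor=0.02):
    """Stand-in for a transport-code prediction of detector counts."""
    return holdup_grams * geometry_factor

candidates = np.linspace(0.0, 1000.0, 201)          # candidate holdup masses (g)
prior = np.full(candidates.size, 1.0 / candidates.size)

measured = 9                                        # observed gamma counts
expected = forward_counts(candidates)
log_like = measured * np.log(expected + 1e-12) - expected  # Poisson log-likelihood

posterior = prior * np.exp(log_like - log_like.max())
posterior /= posterior.sum()
print("posterior mean holdup (g):", round(float((candidates * posterior).sum()), 1))
```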

  10. 5D Modelling: An Efficient Approach for Creating Spatiotemporal Predictive 3D Maps of Large-Scale Cultural Resources

    Science.gov (United States)

    Doulamis, A.; Doulamis, N.; Ioannidis, C.; Chrysouli, C.; Grammalidis, N.; Dimitropoulos, K.; Potsiou, C.; Stathopoulou, E.-K.; Ioannides, M.

    2015-08-01

    Outdoor large-scale cultural sites are mostly sensitive to environmental, natural and human-made factors, implying an imminent need for spatio-temporal assessment to identify regions of potential cultural interest (material degradation, structuring, conservation). On the other hand, quite different actors are involved in Cultural Heritage research (archaeologists, curators, conservators, simple users), each with diverse needs. All these statements advocate that 5D modelling (3D geometry plus time plus levels of detail) is ideally required for the preservation and assessment of outdoor large-scale cultural sites, which is currently implemented as a simple aggregation of 3D digital models at different times and levels of detail. The main bottleneck of such an approach is its complexity, making 5D modelling impossible to validate in real-life conditions. In this paper, a cost-effective and affordable framework for 5D modelling is proposed, based on a spatial-temporal dependent aggregation of 3D digital models, incorporating a predictive assessment procedure to indicate which regions (surfaces) of an object should be reconstructed at higher levels of detail at the next time instances and which at lower ones. In this way, dynamic change history maps are created, indicating spatial probabilities of regions needing further 3D modelling at forthcoming instances. Using these maps, predictive assessment can be made, that is, surfaces within the objects where a high-accuracy reconstruction process needs to be activated at the forthcoming time instances can be localized. The proposed 5D Digital Cultural Heritage Model (5D-DCHM) is implemented using open interoperable standards based on the CityGML framework, which also allows the description of additional semantic metadata information. Visualization aspects are also supported to allow easy manipulation, interaction and representation of the 5D-DCHM geometry and the respective semantic information. The open source 3DCity

  11. Calcite Phase Conversion Prediction Model for CaO-Al2O3-SiO2 Slag: An Aqueous Carbonation Process at Ambient Pressure

    Science.gov (United States)

    Zhang, Huining; Dong, Jianhong; Li, Hui; Xiong, Huihui; Xu, Anjun

    2018-06-01

    To evaluate the effect of the mineralogical phase on carbonation efficiency for CaO-Al2O3-SiO2 slag, a calcite phase conversion prediction model is proposed. This model combines carbon dioxide solubility with carbonation reaction kinetic analysis to improve the prediction capability. The effect of temperature and carbonation time on the carbonation degree is studied in detail. Results show that the reaction rate constant ranges from 0.0135 h⁻¹ to 0.0458 h⁻¹ and that the mineralogical phase contribution sequence for the carbonation degree is C2S, CaO, C3A and CS. The model accurately predicts the effect of temperature and carbonation time on the simulated calcite conversion, and the results agree with the experimental data. The optimal carbonation temperature and reaction time are 333 K and 90 min, respectively. The maximum carbonation efficiency is about 184.3 g/kg slag, and the simulation result of the calcite phase content in carbonated slag is about 20%.
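
    As a hedged sketch, the reported rate constants can be plugged into a first-order conversion law X(t) = 1 − exp(−kt); the exponential form is an assumption for illustration only, since the paper's model also couples CO2 solubility into the kinetics.

```python
import numpy as np

def calcite_conversion(t_hours, k_per_hour):
    """First-order conversion X(t) = 1 - exp(-k t) (assumed form)."""
    return 1.0 - np.exp(-k_per_hour * t_hours)

t = 1.5  # the 90 min optimum reported in the abstract
for k in (0.0135, 0.0458):
    print(f"k = {k:.4f} 1/h -> conversion after {t} h: {calcite_conversion(t, k):.3f}")
```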

  12. Efficiently adapting graphical models for selectivity estimation

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2013-01-01

    cardinality estimation without making the independence assumption. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution over all the attributes in the database into small, usually two-dimensional distributions, without a significant loss...... in estimation accuracy. We show how to efficiently construct such a graphical model from the database using only two-way join queries, and we show how to perform selectivity estimation in a highly efficient manner. We integrate our algorithms into the PostgreSQL DBMS. Experimental results indicate...

  13. Modeling and Control of CSTR using Model based Neural Network Predictive Control

    OpenAIRE

    Shrivastava, Piyush

    2012-01-01

    This paper presents a predictive control strategy based on a neural network model of the plant, applied to a Continuous Stirred Tank Reactor (CSTR). This system is a highly nonlinear process; therefore, a nonlinear predictive method, e.g., neural network predictive control, can be a better match to govern the system dynamics. In the paper, the NN model and the way in which it can be used to predict the behavior of the CSTR process over a certain prediction horizon are described, and some commen...

  14. Predicting Efficient Antenna Ligands for Tb(III) Emission

    Energy Technology Data Exchange (ETDEWEB)

    Samuel, Amanda P.S.; Xu, Jide; Raymond, Kenneth

    2008-10-06

    A series of highly luminescent Tb(III) complexes of para-substituted 2-hydroxyisophthalamide ligands (5LI-IAM-X) has been prepared (X = H, CH₃, (C=O)NHCH₃, SO₃⁻, NO₂, OCH₃, F, Cl, Br) to probe the effect of substituting the isophthalamide ring on ligand and Tb(III) emission in order to establish a method for predicting the effects of chromophore modification on Tb(III) luminescence. The energies of the ligand singlet and triplet excited states are found to increase linearly with the π-withdrawing ability of the substituent. The experimental results are supported by time-dependent density functional theory (TD-DFT) calculations performed on model systems, which predict ligand singlet and triplet energies within ≈5% of the experimental values. The quantum yield (Φ) values of the Tb(III) complexes increase with the triplet energy of the ligand, which is in part due to the decreased non-radiative deactivation caused by thermal repopulation of the triplet. Together, the experimental and theoretical results serve as a predictive tool that can be used to guide the synthesis of ligands used to sensitize lanthanide luminescence.

  15. Robust Distributed Model Predictive Load Frequency Control of Interconnected Power System

    Directory of Open Access Journals (Sweden)

    Xiangjie Liu

    2013-01-01

    Full Text Available Considering the load frequency control (LFC) of a large-scale power system, a robust distributed model predictive control (RDMPC) is presented. The system uncertainty according to power system parameter variation, along with the generation rate constraints (GRC), is included in the synthesis procedure. The entire power system is composed of several control areas, and the problem is formulated as a convex optimization problem with linear matrix inequalities (LMI) that can be solved efficiently. It minimizes an upper bound on a robust performance objective for each subsystem. Simulation results show good dynamic response and robustness in the presence of power system dynamic uncertainties.
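
    A bare-bones sketch of the finite-horizon MPC problem underlying such controllers, written with CVXPY; the two-state single-area model, weights, and rate-style input limit are toy assumptions, not the paper's multi-area LMI synthesis.

```python
import numpy as np
import cvxpy as cp

# Toy 2-state model (frequency deviation, mechanical power) for one area
A = np.array([[0.95, 0.10], [0.0, 0.80]])
B = np.array([[0.0], [0.15]])
N, x0 = 10, np.array([0.5, 0.0])

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
cost, cons = 0, [x[:, 0] == x0]
for k in range(N):
    cost += cp.sum_squares(x[:, k + 1]) + 0.1 * cp.sum_squares(u[:, k])
    cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
             cp.abs(u[:, k]) <= 0.3]          # generation-rate-style limit
cp.Problem(cp.Minimize(cost), cons).solve()
print("first control move:", u.value[:, 0])
```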

  16. Consensus models to predict endocrine disruption for all ...

    Science.gov (United States)

    Humans are potentially exposed to tens of thousands of man-made chemicals in the environment. It is well known that some environmental chemicals mimic natural hormones and thus have the potential to be endocrine disruptors. Most of these environmental chemicals have never been tested for their ability to disrupt the endocrine system, in particular, their ability to interact with the estrogen receptor. EPA needs tools to prioritize thousands of chemicals, for instance in the Endocrine Disruptor Screening Program (EDSP). Collaborative Estrogen Receptor Activity Prediction Project (CERAPP) was intended to be a demonstration of the use of predictive computational models on HTS data including ToxCast and Tox21 assays to prioritize a large chemical universe of 32464 unique structures for one specific molecular target – the estrogen receptor. CERAPP combined multiple computational models for prediction of estrogen receptor activity, and used the predicted results to build a unique consensus model. Models were developed in collaboration between 17 groups in the U.S. and Europe and applied to predict the common set of chemicals. Structure-based techniques such as docking and several QSAR modeling approaches were employed, mostly using a common training set of 1677 compounds provided by U.S. EPA, to build a total of 42 classification models and 8 regression models for binding, agonist and antagonist activity. All predictions were evaluated on ToxCast data and on an exte

  17. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy based principle are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA) as well as more elaborated...... principles as, e.g., wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy based prediction models are discussed and critically reviewed. Special attention is placed...... on underlying basic assumptions, such as diffuse fields, high modal overlap, resonant field being dominant, etc., and the consequences of these in terms of limitations in the theory and in the practical use of the models....

  18. Comparison of Simple Versus Performance-Based Fall Prediction Models

    Directory of Open Access Journals (Sweden)

    Shekhar K. Gadkaree BS

    2015-05-01

    Full Text Available Objective: To compare the predictive ability of standard falls prediction models based on physical performance assessments with more parsimonious prediction models based on self-reported data. Design: We developed a series of fall prediction models progressing in complexity and compared the area under the receiver operating characteristic curve (AUC) across models. Setting: National Health and Aging Trends Study (NHATS), which surveyed a nationally representative sample of Medicare enrollees (age ≥65) at baseline (Round 1: 2011-2012) and 1-year follow-up (Round 2: 2012-2013). Participants: In all, 6,056 community-dwelling individuals participated in Rounds 1 and 2 of NHATS. Measurements: Primary outcomes were 1-year incidence of "any fall" and "recurrent falls." Prediction models were compared and validated in development and validation sets, respectively. Results: A prediction model that included demographic information, self-reported problems with balance and coordination, and previous fall history was the most parsimonious model that optimized the AUC for both any fall (AUC = 0.69, 95% confidence interval [CI] = [0.67, 0.71]) and recurrent falls (AUC = 0.77, 95% CI = [0.74, 0.79]) in the development set. Physical performance testing provided marginal additional predictive value. Conclusion: A simple clinical prediction model that does not include physical performance testing could facilitate routine, widespread falls risk screening in the ambulatory care setting.

  19. Preclinical models used for immunogenicity prediction of therapeutic proteins.

    Science.gov (United States)

    Brinks, Vera; Weinbuch, Daniel; Baker, Matthew; Dean, Yann; Stas, Philippe; Kostense, Stefan; Rup, Bonita; Jiskoot, Wim

    2013-07-01

    All therapeutic proteins are potentially immunogenic. Antibodies formed against these drugs can decrease efficacy, leading to drastically increased therapeutic costs and, in rare cases, to serious and sometimes life-threatening side-effects. Many efforts are therefore undertaken to develop therapeutic proteins with minimal immunogenicity. For this, immunogenicity prediction of candidate drugs during early drug development is essential. Several in silico, in vitro and in vivo models are used to predict the immunogenicity of drug leads, to modify potentially immunogenic properties and to continue development of drug candidates with expected low immunogenicity. Despite the extensive use of these predictive models, their actual predictive value varies. Important reasons for this uncertainty are the limited/insufficient knowledge of the immune mechanisms underlying the immunogenicity of therapeutic proteins, the fact that different predictive models explore different components of the immune system and the lack of an integrated clinical validation. In this review, we discuss the predictive models in use, summarize aspects of immunogenicity that these models predict and explore the merits and the limitations of each of the models.

  20. Input-constrained model predictive control via the alternating direction method of multipliers

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Frison, Gianluca; Andersen, Martin S.

    2014-01-01

    This paper presents an algorithm, based on the alternating direction method of multipliers, for the convex optimal control problem arising in input-constrained model predictive control. We develop an efficient implementation of the algorithm for the extended linear quadratic control problem (LQCP......) with input and input-rate limits. The algorithm alternates between solving an extended LQCP and a highly structured quadratic program. These quadratic programs are solved using a Riccati iteration procedure, and a structure-exploiting interior-point method, respectively. The computational cost per iteration...... is quadratic in the dimensions of the controlled system, and linear in the length of the prediction horizon. Simulations show that the approach proposed in this paper is more than an order of magnitude faster than several state-of-the-art quadratic programming algorithms, and that the difference in computation...
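
    The abstract's algorithm alternates between structured subproblems; the sketch below shows the generic ADMM pattern on the simplest related problem, a box-constrained QP (min ½uᵀHu + gᵀu with lb ≤ u ≤ ub). The matrices and penalty value are illustrative assumptions, not the paper's Riccati-based solver.

```python
import numpy as np

def admm_box_qp(H, g, lb, ub, rho=1.0, iters=200):
    """ADMM for min 0.5 u'Hu + g'u subject to lb <= u <= ub."""
    n = g.size
    u = z = lam = np.zeros(n)
    K = np.linalg.cholesky(H + rho * np.eye(n))        # factor once, reuse
    for _ in range(iters):
        rhs = -g + rho * z - lam
        u = np.linalg.solve(K.T, np.linalg.solve(K, rhs))  # unconstrained step
        z = np.clip(u + lam / rho, lb, ub)                 # projection onto the box
        lam = lam + rho * (u - z)                          # dual ascent
    return z

H = np.array([[2.0, 0.5], [0.5, 1.0]])
g = np.array([-1.0, 1.0])
print(admm_box_qp(H, g, lb=np.array([-0.5, -0.5]), ub=np.array([0.5, 0.5])))
```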

  2. Bayesian Predictive Models for Rayleigh Wind Speed

    DEFF Research Database (Denmark)

    Shahirinia, Amir; Hajizadeh, Amin; Yu, David C

    2017-01-01

    predictive model of the wind speed aggregates the non-homogeneous distributions into a single continuous distribution. Therefore, the result is able to capture the variation among the probability distributions of the wind speeds at the turbines’ locations in a wind farm. More specifically, instead of using...... a wind speed distribution whose parameters are known or estimated, the parameters are considered as random whose variations are according to probability distributions. The Bayesian predictive model for a Rayleigh which only has a single model scale parameter has been proposed. Also closed-form posterior...... and predictive inferences under different reasonable choices of prior distribution in sensitivity analysis have been presented....
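
    A Monte-Carlo sketch of the idea: treat the Rayleigh scale as random, use the conjugate inverse-gamma posterior on σ², and average Rayleigh densities over posterior draws to obtain the predictive density. The prior values and the synthetic data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.rayleigh(scale=7.0, size=100)           # observed wind speeds (m/s)

a0, b0 = 2.0, 10.0                              # inverse-gamma prior on sigma^2
a_n = a0 + len(v)                               # posterior shape
b_n = b0 + 0.5 * np.sum(v ** 2)                 # posterior rate

sigma2 = b_n / rng.gamma(a_n, 1.0, size=5000)   # posterior draws of sigma^2
grid = np.linspace(0.1, 25.0, 200)
dens = np.mean(grid[None, :] / sigma2[:, None]
               * np.exp(-grid[None, :] ** 2 / (2 * sigma2[:, None])), axis=0)
print("predictive mode near:", grid[dens.argmax()], "m/s")
```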

  3. Modeling and Prediction Using Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp

    2016-01-01

    Pharmacokinetic/pharmakodynamic (PK/PD) modeling for a single subject is most often performed using nonlinear models based on deterministic ordinary differential equations (ODEs), and the variation between subjects in a population of subjects is described using a population (mixed effects) setup...... deterministic and can predict the future perfectly. A more realistic approach would be to allow for randomness in the model due to e.g., the model be too simple or errors in input. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs...

  4. Prediction of hourly solar radiation with multi-model framework

    International Nuclear Information System (INIS)

    Wu, Ji; Chan, Chee Keong

    2013-01-01

    Highlights: • A novel approach to predict solar radiation through the use of clustering paradigms. • Development of prediction models based on the intrinsic pattern observed in each cluster. • Prediction based on proper clustering and selection of the model for the current time provides better results than other methods. • Experiments were conducted on actual solar radiation data obtained from a weather station in Singapore. - Abstract: In this paper, a novel multi-model prediction framework for the prediction of solar radiation is proposed. The framework starts with the assumption that there are several patterns embedded in the solar radiation series. To extract the underlying patterns, the solar radiation series is first segmented into smaller subsequences, and the subsequences are further grouped into different clusters. For each cluster, an appropriate prediction model is trained. A procedure for pattern identification is then developed to identify the pattern that fits the current period. Based on this pattern, the corresponding prediction model is applied to obtain the prediction value. The prediction results of the proposed framework are then compared with those of other techniques. It is shown that the proposed framework provides superior performance as compared to others
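
    A compact sketch of the framework's skeleton: segment the series into windows, cluster the windows, fit one predictor per cluster, then route each new window to its cluster's model. The window length, cluster count, and ridge learners are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

def fit_multi_model(series, window=24, n_clusters=4):
    series = np.asarray(series, dtype=float)
    X = np.lib.stride_tricks.sliding_window_view(series[:-1], window)
    y = series[window:]                      # value following each window
    clusters = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    models = {c: Ridge().fit(X[clusters.labels_ == c], y[clusters.labels_ == c])
              for c in range(n_clusters)}
    return clusters, models

def predict_next(clusters, models, recent_window):
    w = np.asarray(recent_window, dtype=float).reshape(1, -1)
    c = clusters.predict(w)[0]               # identify the current pattern
    return models[c].predict(w)[0]

t = np.arange(2000)
series = np.sin(2 * np.pi * t / 96) + 0.1 * np.random.default_rng(0).normal(size=t.size)
clusters, models = fit_multi_model(series)
print(predict_next(clusters, models, series[-24:]))
```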

  5. Revised predictive equations for salt intrusion modelling in estuaries

    NARCIS (Netherlands)

    Gisen, J.I.A.; Savenije, H.H.G.; Nijzink, R.C.

    2015-01-01

    For one-dimensional salt intrusion models to be predictive, we need predictive equations to link model parameters to observable hydraulic and geometric variables. The one-dimensional model of Savenije (1993b) made use of predictive equations for the Van der Burgh coefficient K and the dispersion

  6. Preprocedural Prediction Model for Contrast-Induced Nephropathy Patients.

    Science.gov (United States)

    Yin, Wen-Jun; Yi, Yi-Hu; Guan, Xiao-Feng; Zhou, Ling-Yun; Wang, Jiang-Lin; Li, Dai-Yang; Zuo, Xiao-Cong

    2017-02-03

    Several models have been developed for prediction of contrast-induced nephropathy (CIN); however, they only contain patients receiving intra-arterial contrast media for coronary angiographic procedures, which represent a small proportion of all contrast procedures. In addition, most of them evaluate radiological interventional procedure-related variables. It is therefore necessary to develop a model that predicts CIN before radiological procedures among patients administered contrast media. A total of 8800 patients undergoing contrast administration were randomly assigned in a 4:1 ratio to development and validation data sets. CIN was defined as an increase of 25% and/or 0.5 mg/dL in serum creatinine within 72 hours above the baseline value. Preprocedural clinical variables were used to develop the prediction model from the training data set by the machine learning method of random forest, and 5-fold cross-validation was used to evaluate the prediction accuracies of the model. Finally, we tested this model in the validation data set. The incidence of CIN was 13.38%. We built a prediction model with 13 preprocedural variables selected from 83 variables. The model obtained an area under the receiver-operating characteristic (ROC) curve (AUC) of 0.907 and gave a prediction accuracy of 80.8%, sensitivity of 82.7%, specificity of 78.8%, and Matthews correlation coefficient of 61.5%. For the first time, 3 new factors are included in the model: the decreased sodium concentration, the INR value, and the preprocedural glucose level. The newly established model shows excellent predictive ability for CIN development and thereby supports preventative measures for CIN. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
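
    A sketch of the modeling step described above, a random forest on preprocedural variables scored by 5-fold cross-validated AUC; the synthetic X and y below merely stand in for the study's 13 selected predictors and CIN labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 13))                   # stand-in predictor matrix
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 1.2).astype(int)

rf = RandomForestClassifier(n_estimators=400, random_state=0)
aucs = cross_val_score(rf, X, y, cv=5, scoring="roc_auc")
print("mean 5-fold AUC:", aucs.mean().round(3))
```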

  7. Time dependent patient no-show predictive modelling development.

    Science.gov (United States)

    Huang, Yu-Li; Hanauer, David A

    2016-05-09

    Purpose - The purpose of this paper is to develop evidence-based predictive no-show models that consider each of a patient's past appointment statuses, a time-dependent component, as an independent predictor to improve predictability. Design/methodology/approach - A ten-year retrospective data set was extracted from a pediatric clinic. It consisted of 7,291 distinct patients who had at least two visits, along with their appointment characteristics, patient demographics, and insurance information. Logistic regression was adopted to develop no-show models using two-thirds of the data for training and the remaining data for validation. The no-show threshold was then determined based on minimizing the misclassification of show/no-show assignments. A total of 26 predictive models were developed based on the number of available past appointments. Simulation was employed to test the effectiveness of each model on the costs of patient wait time, physician idle time, and overtime. Findings - The results demonstrated that the misclassification rate and the area under the curve of the receiver operating characteristic gradually improved as more appointment history was included, until around the 20th predictive model. The overbooking method with no-show predictive models suggested incorporating up to the 16th model and outperformed other overbooking methods by as much as 9.4 per cent in the cost per patient while allowing two additional patients in a clinic day. Research limitations/implications - The challenge now is to actually implement the no-show predictive model systematically to further demonstrate its robustness and simplicity in various scheduling systems. Originality/value - This paper provides examples of how to build no-show predictive models with time-dependent components to improve the overbooking policy. Accurately identifying scheduled patients' show/no-show status allows clinics to proactively schedule patients to reduce the negative impact of patient no-shows.

  8. Bioavailability of particulate metal to zebra mussels: biodynamic modelling shows that assimilation efficiencies are site-specific.

    Science.gov (United States)

    Bourgeault, Adeline; Gourlay-Francé, Catherine; Priadi, Cindy; Ayrault, Sophie; Tusseau-Vuillemin, Marie-Hélène

    2011-12-01

    This study investigates the ability of the biodynamic model to predict the trophic bioaccumulation of cadmium (Cd), chromium (Cr), copper (Cu), nickel (Ni) and zinc (Zn) in a freshwater bivalve. Zebra mussels were transplanted to three sites along the Seine River (France) and collected monthly for 11 months. Measurements of the metal body burdens in mussels were compared with the predictions from the biodynamic model. The exchangeable fraction of metal particles did not account for the bioavailability of particulate metals, since it did not capture the differences between sites. The assimilation efficiency (AE) parameter is necessary to take into account biotic factors influencing particulate metal bioavailability. The biodynamic model, applied with AEs from the literature, overestimated the measured concentrations in zebra mussels, the extent of overestimation being site-specific. Therefore, an original methodology was proposed for in situ AE measurements for each site and metal. Copyright © 2011 Elsevier Ltd. All rights reserved.
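
    The biodynamic model referred to here is commonly written as dC/dt = k_u·C_w + AE·IR·C_f − (k_e + g)·C; the sketch below integrates that standard form with forward Euler, and all parameter values are illustrative assumptions rather than the paper's site-specific estimates.

```python
def biodynamic(C0, Cw, Cf, ku, AE, IR, ke, g, days, dt=0.1):
    """Forward-Euler integration of the standard biodynamic model:
    dC/dt = ku*Cw + AE*IR*Cf - (ke + g)*C."""
    C, t = C0, 0.0
    while t < days:
        C += dt * (ku * Cw + AE * IR * Cf - (ke + g) * C)
        t += dt
    return C

# e.g., predicted body burden after a 60-day transplantation period
print(biodynamic(C0=0.0, Cw=0.02, Cf=50.0, ku=1.5, AE=0.3,
                 IR=0.2, ke=0.05, g=0.0, days=60))
```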

  9. Model Predictive Control of the Exit Part Temperature for an Austenitization Furnace

    Directory of Open Access Journals (Sweden)

    Hari S. Ganesh

    2016-12-01

    Full Text Available Quench hardening is the process of strengthening and hardening ferrous metals and alloys by heating the material to a specific temperature to form austenite (austenitization), followed by rapid cooling (quenching) in water, brine or oil to introduce a hardened phase called martensite. The material is then often tempered to increase toughness, as toughness may decrease from the quench hardening process. The austenitization process is highly energy-intensive, and many industrial austenitization furnaces were built and equipped prior to the advent of advanced control strategies and thus use large, sub-optimal amounts of energy. The model computes the energy usage of the furnace and the part temperature profile as a function of time and position within the furnace under temperature feedback control. In this paper, the aforementioned model is used to simulate the furnace for a batch of forty parts under heuristic temperature set points suggested by the operators of the plant. A model predictive control (MPC) system is then developed and deployed to control the part temperature at the furnace exit, thereby preventing the parts from overheating. An energy efficiency gain of 5.3% was obtained under model predictive control compared to operation under heuristic temperature set points tracked by a regulatory control layer.

  10. Development of a 3D finite element acoustic model to predict the sound reduction index of stud based double-leaf walls

    Science.gov (United States)

    Arjunan, A.; Wang, C. J.; Yahiaoui, K.; Mynors, D. J.; Morgan, T.; Nguyen, V. B.; English, M.

    2014-11-01

    Building standards incorporating quantitative acoustical criteria to ensure adequate sound insulation are now being implemented. Engineers are making great efforts to design acoustically efficient double-wall structures. Accordingly, efficient simulation models to predict the acoustic insulation of double-leaf wall structures are needed. This paper presents the development of a numerical tool that can predict the frequency dependent sound reduction index R of stud based double-leaf walls over the one-third-octave band frequency range. A fully vibro-acoustic 3D model, consisting of two rooms partitioned by a double-leaf wall, accounting for structure-acoustic fluid coupling and incorporating the existing fluid and structural solvers, is presented. The validity of the finite element (FE) model is assessed by comparison with experimental test results carried out in a certified laboratory. Accurate representation of the structural damping matrix to effectively predict the R values is studied. The possibility of minimising the simulation time using a frequency-dependent mesh model was also investigated. The FEA model presented in this work is capable of predicting the weighted sound reduction index Rw along with A-weighted pink noise C and A-weighted urban noise Ctr within an error of 1 dB. The model developed can also be used to analyse the acoustically induced frequency-dependent geometrical behaviour of the double-leaf wall components to optimise them for best acoustic performance. The FE modelling procedure reported in this paper can be extended to other building components undergoing fluid-structure interaction (FSI) to evaluate their acoustic insulation.

  11. Model predictive control using fuzzy decision functions

    NARCIS (Netherlands)

    Kaymak, U.; Costa Sousa, da J.M.

    2001-01-01

    Fuzzy predictive control integrates conventional model predictive control with techniques from fuzzy multicriteria decision making, translating the goals and the constraints to predictive control in a transparent way. The information regarding the (fuzzy) goals and the (fuzzy) constraints of the

  12. Predicting and Modelling of Survival Data when Cox's Regression Model does not hold

    DEFF Research Database (Denmark)

    Scheike, Thomas H.; Zhang, Mei-Jie

    2002-01-01

    Aalen model; additive risk model; counting processes; competing risk; Cox regression; flexible modeling; goodness of fit; prediction of survival; survival analysis; time-varying effects

  13. Active load management in an intelligent building using model predictive control strategy

    DEFF Research Database (Denmark)

    Zong, Yi; Kullmann, Daniel; Thavlov, Anders

    2011-01-01

    This paper introduces PowerFlexHouse, a research facility for exploring the technical potential of active load management in a distributed power system (SYSLAB) with a high penetration of renewable energy, and presents in detail how to implement a thermal model predictive controller for load shifting in the PowerFlexHouse heaters' power consumption scheme. With this demand side control study, it is expected that this method of demand response can dramatically raise energy efficiency and improve grid reliability, when there is a high penetration of intermittent energy resources in the power

  14. Coordinated Voltage Control of a Wind Farm based on Model Predictive Control

    DEFF Research Database (Denmark)

    Zhao, Haoran; Wu, Qiuwei; Guo, Qinglai

    2016-01-01

    This paper presents an autonomous wind farm voltage controller based on Model Predictive Control (MPC). The reactive power compensation and voltage regulation devices of the wind farm include Static Var Compensators (SVCs), Static Var Generators (SVGs), Wind Turbine Generators (WTGs) and On...... are calculated based on an analytical method to improve the computation efficiency and overcome the convergence problem. Two control modes are designed for both voltage violated and normal operation conditions. A wind farm with 20 wind turbines was used to conduct case studies to verify the proposed coordinated...

  15. Evaluating the Predictive Value of Growth Prediction Models

    Science.gov (United States)

    Murphy, Daniel L.; Gaertner, Matthew N.

    2014-01-01

    This study evaluates four growth prediction models--projection, student growth percentile, trajectory, and transition table--commonly used to forecast (and give schools credit for) middle school students' future proficiency. Analyses focused on vertically scaled summative mathematics assessments, and two performance standards conditions (high…

  16. Validation of AquaCrop Model for Simulation of Winter Wheat Yield and Water Use Efficiency under Simultaneous Salinity and Water Stress

    OpenAIRE

    M. Mohammadi; B. Ghahraman; K. Davary; H. Ansari; A. Shahidi

    2016-01-01

    Introduction: The FAO AquaCrop model (Raes et al., 2009a; Steduto et al., 2009) is a user-friendly and practitioner-oriented model, because it maintains an optimal balance between accuracy, robustness, and simplicity, and it requires a relatively small number of model input parameters. The FAO AquaCrop model predicts crop productivity, water requirement, and water use efficiency under water-limiting and saline water conditions. This model has been tested and validated for different crops ...

  17. Efficient Implementation of Solvers for Linear Model Predictive Control on Embedded Devices

    DEFF Research Database (Denmark)

    Frison, Gianluca; Kwame Minde Kufoalor, D.; Imsland, Lars

    2014-01-01

    This paper proposes a novel approach for the efficient implementation of solvers for linear MPC on embedded devices. The main focus is to explain in detail the approach used to optimize the linear algebra for selected low-power embedded devices, and to show how the high-performance implementation...

  18. Ozonolysis of Model Olefins-Efficiency of Antiozonants

    NARCIS (Netherlands)

    Huntink, N.M.; Datta, Rabin; Talma, Auke; Noordermeer, Jacobus W.M.

    2006-01-01

    In this study, the efficiency of several potential long lasting antiozonants was studied by ozonolysis of model olefins. 2-Methyl-2-pentene was selected as a model for natural rubber (NR) and 5-phenyl-2-hexene as a model for styrene butadiene rubber (SBR). A comparison was made between the

  19. Uncertainties in model-based outcome predictions for treatment planning

    International Nuclear Information System (INIS)

    Deasy, Joseph O.; Chao, K.S. Clifford; Markman, Jerry

    2001-01-01

    Purpose: Model-based treatment-plan-specific outcome predictions (such as normal tissue complication probability [NTCP] or the relative reduction in salivary function) are typically presented without reference to underlying uncertainties. We provide a method to assess the reliability of treatment-plan-specific dose-volume outcome model predictions. Methods and Materials: A practical method is proposed for evaluating model prediction based on the original input data together with bootstrap-based estimates of parameter uncertainties. The general framework is applicable to continuous variable predictions (e.g., prediction of long-term salivary function) and dichotomous variable predictions (e.g., tumor control probability [TCP] or NTCP). Using bootstrap resampling, a histogram of the likelihood of alternative parameter values is generated. For a given patient and treatment plan we generate a histogram of alternative model results by computing the model predicted outcome for each parameter set in the bootstrap list. Residual uncertainty ('noise') is accounted for by adding a random component to the computed outcome values. The residual noise distribution is estimated from the original fit between model predictions and patient data. Results: The method is demonstrated using a continuous-endpoint model to predict long-term salivary function for head-and-neck cancer patients. Histograms represent the probabilities for the level of posttreatment salivary function based on the input clinical data, the salivary function model, and the three-dimensional dose distribution. For some patients there is significant uncertainty in the prediction of xerostomia, whereas for other patients the predictions are expected to be more reliable. In contrast, TCP and NTCP endpoints are dichotomous, and parameter uncertainties should be folded directly into the estimated probabilities, thereby improving the accuracy of the estimates. Using bootstrap parameter estimates, competing treatment
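
    A self-contained sketch of the bootstrap recipe the authors describe: resample the fit data, refit a simple dose-response model, and histogram the plan-specific prediction. The synthetic data and the two-parameter logistic form are assumptions, not the paper's salivary-function model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
dose = rng.uniform(20, 70, 80)                       # synthetic per-patient doses (Gy)
p_true = 1 / (1 + np.exp(-(dose - 45) / 5))
outcome = (rng.random(80) < p_true).astype(float)    # synthetic binary complications

def nll(theta, d, y):
    """Negative log-likelihood of a 2-parameter logistic dose-response."""
    p = 1 / (1 + np.exp(-(theta[0] + theta[1] * d)))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

preds = []
for _ in range(200):                                 # bootstrap resamples of the fit
    idx = rng.integers(0, len(dose), len(dose))
    theta = minimize(nll, x0=[0.0, 0.0], args=(dose[idx], outcome[idx])).x
    preds.append(1 / (1 + np.exp(-(theta[0] + theta[1] * 50.0))))  # NTCP at 50 Gy

print("predicted NTCP at 50 Gy: mean %.2f, 95%% CI (%.2f, %.2f)"
      % (np.mean(preds), np.quantile(preds, 0.025), np.quantile(preds, 0.975)))
```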

  20. Development and validation of a preoperative prediction model for colorectal cancer T-staging based on MDCT images and clinical information.

    Science.gov (United States)

    Sa, Sha; Li, Jing; Li, Xiaodong; Li, Yongrui; Liu, Xiaoming; Wang, Defeng; Zhang, Huimao; Fu, Yu

    2017-08-15

    This study aimed to establish and evaluate the efficacy of a prediction model for colorectal cancer T-staging. T-staging was positively correlated with the level of carcinoembryonic antigen (CEA), expression of carbohydrate antigen 19-9 (CA19-9), wall deformity, blurred outer edges, fat infiltration, infiltration into the surrounding tissue, tumor size and wall thickness. Age, location, enhancement rate and enhancement homogeneity were negatively correlated with T-staging. The predictive results of the model were consistent with the pathological gold standard, with a kappa value of 0.805. The total accuracy of staging improved from 51.04% to 86.98% with the proposed model. The clinical, imaging and pathological data of 611 patients with colorectal cancer (419 patients in the training group and 192 patients in the validation group) were collected. A Spearman correlation analysis was used to validate the relationship among these factors and pathological T-staging. A prediction model was trained with the random forest algorithm. T-staging of the patients in the validation group was predicted by both the prediction model and the traditional method. The consistency, accuracy, sensitivity, specificity and area under the curve (AUC) were used to compare the efficacy of the two methods. The newly established comprehensive model can improve the predictive efficiency of preoperative colorectal cancer T-staging.
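
    As an illustration of the modeling step only, the following sketch trains a random-forest stager and reports accuracy and kappa on a held-out validation group; the features and labels are synthetic stand-ins for the clinical and MDCT variables named above:

```python
# Random-forest staging classifier with kappa agreement, in the spirit of the
# study; data are simulated, not the 611-patient cohort.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(1)
n = 611
X = rng.normal(size=(n, 8))          # e.g. CEA, CA19-9, wall thickness, ...
y = (X[:, 0] + X[:, 7] + rng.normal(0, 0.8, n) > 0).astype(int) + 1  # toy T-stage labels

X_train, y_train = X[:419], y[:419]  # training group
X_val, y_val = X[419:], y[419:]      # validation group

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
pred = clf.predict(X_val)
print("accuracy:", accuracy_score(y_val, pred))
print("kappa vs. pathological standard:", cohen_kappa_score(y_val, pred))
```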

  1. Prediction error, ketamine and psychosis: An updated model.

    Science.gov (United States)

    Corlett, Philip R; Honey, Garry D; Fletcher, Paul C

    2016-11-01

    In 2007, we proposed an explanation of delusion formation as aberrant prediction error-driven associative learning. Further, we argued that the NMDA receptor antagonist ketamine provided a good model for this process. Subsequently, we validated the model in patients with psychosis, relating aberrant prediction error signals to delusion severity. During the ensuing period, we have developed these ideas, drawing on the simple principle that brains build a model of the world and refine it by minimising prediction errors, as well as using it to guide perceptual inferences. While previously we focused on the prediction error signal per se, an updated view takes into account its precision, as well as the precision of prior expectations. With this expanded perspective, we see several possible routes to psychotic symptoms - which may explain the heterogeneity of psychotic illness, as well as the fact that other drugs, with different pharmacological actions, can produce psychotomimetic effects. In this article, we review the basic principles of this model and highlight specific ways in which prediction errors can be perturbed, in particular considering the reliability and uncertainty of predictions. The expanded model explains hallucinations as perturbations of the uncertainty mediated balance between expectation and prediction error. Here, expectations dominate and create perceptions by suppressing or ignoring actual inputs. Negative symptoms may arise due to poor reliability of predictions in service of action. By mapping from biology to belief and perception, the account proffers new explanations of psychosis. However, challenges remain. We attempt to address some of these concerns and suggest future directions, incorporating other symptoms into the model, building towards better understanding of psychosis. © The Author(s) 2016.

  2. A physiological foundation for the nutrition-based efficiency wage model

    DEFF Research Database (Denmark)

    Dalgaard, Carl-Johan Lars; Strulik, Holger

    2011-01-01

    Drawing on recent research on allometric scaling and energy consumption, the present paper develops a nutrition-based efficiency wage model from first principles. The biologically micro-founded model allows us to address empirical criticism of the original nutrition-based efficiency wage model...

  3. Energy models for commercial energy prediction and substitution of renewable energy sources

    International Nuclear Information System (INIS)

    Iniyan, S.; Suganthi, L.; Samuel, Anand A.

    2006-01-01

    In this paper, three models are presented, namely the Modified Econometric Mathematical (MEM) model, the Mathematical Programming Energy-Economy-Environment (MPEEE) model, and the Optimal Renewable Energy Mathematical (OREM) model. The actual demand for coal, oil and electricity is predicted using the MEM model based on economic, technological and environmental factors. The results were used in the MPEEE model, which determines the optimum allocation of commercial energy sources based on environmental limitations. The gap between the actual energy demand from the MEM model and the optimal energy use from the MPEEE model has to be met by renewable energy sources. The study develops an OREM model that would facilitate effective utilization of renewable energy sources in India, based on cost, efficiency, social acceptance, reliability, potential and demand. The economic variations in solar energy systems and the inclusion of environmental constraints are also analyzed with the OREM model. The OREM model will help policy makers in the formulation and implementation of strategies concerning renewable energy sources in India for the next two decades

  4. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.

  5. Modeling of Methods to Control Heat-Consumption Efficiency

    Science.gov (United States)

    Tsynaeva, E. A.; Tsynaeva, A. A.

    2016-11-01

    In this work, consideration has been given to thermophysical processes in automated heat consumption control systems (AHCCSs) of buildings, flow diagrams of these systems, and mathematical models describing the thermophysical processes during the systems' operation; an analysis of adequacy of the mathematical models has been presented. A comparison has been made of the operating efficiency of the systems and the methods to control the efficiency. It has been determined that the operating efficiency of an AHCCS depends on its diagram and the temperature chart of central quality control (CQC) and also on the temperature of a low-grade heat source for the system with a heat pump.

  6. Model complexity control for hydrologic prediction

    NARCIS (Netherlands)

    Schoups, G.; Van de Giesen, N.C.; Savenije, H.H.G.

    2008-01-01

    A common concern in hydrologic modeling is overparameterization of complex models given limited and noisy data. This leads to problems of parameter nonuniqueness and equifinality, which may negatively affect prediction uncertainties. A systematic way of controlling model complexity is therefore

  7. Predictive Model of Systemic Toxicity (SOT)

    Science.gov (United States)

    In an effort to ensure chemical safety in light of regulatory advances away from reliance on animal testing, USEPA and L’Oréal have collaborated to develop a quantitative systemic toxicity prediction model. Prediction of human systemic toxicity has proved difficult and remains a ...

  8. Experimental Analysis and Full Prediction Model of a 5-DOF Motorized Spindle

    Directory of Open Access Journals (Sweden)

    Weiyu Zhang

    2017-01-01

    The cost and power consumption of DC power amplifiers are much greater than those of AC power converters. Compared to a motorized spindle supported with DC magnetic bearings, a motorized spindle supported with AC magnetic bearings is inexpensive and more efficient. This paper studies a five-degrees-of-freedom (5-DOF) motorized spindle supported with AC hybrid magnetic bearings (HMBs). Most models of suspension forces, except a "switching model", are quite accurate, but only in a particular operating area rather than over the full operating range. If a "switching model" is applied to a 5-DOF motorized spindle, the real-time performance of the control system can be significantly decreased due to the large amount of data processing for both displacement and current. To address this defect, experiments based on the "switching model" are performed, and the resulting data are analyzed. Using the data analysis results, a "full prediction model" based on the operating state is proposed to improve real-time performance and precision. Finally, comparative, verification and stiffness tests are conducted to verify the improvement of the proposed model. Results of the tests indicate that the rotor has excellent characteristics, such as good real-time performance, superior anti-interference performance under load and model accuracy over the full operating zone. The satisfactory experimental results demonstrate the effectiveness of the "full prediction model" applied to the control system under different operating stages. Therefore, the results of the experimental analysis and the proposed full prediction model can provide a control system of a 5-DOF motorized spindle with the most suitable mathematical models of the suspension force.

  9. Using Pareto points for model identification in predictive toxicology

    Science.gov (United States)

    2013-01-01

    Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design or food protection to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature associated with the best practice of model generation and data integration but management and automated identification of relevant models from available collections of models is still an open problem. Currently, the decision on which model should be used for a new chemical compound is left to users. This paper intends to initiate the discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show a great potential for automated model identification methods in predictive toxicology. PMID:23517649
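
    The core of a Pareto-based selection can be sketched in a few lines. The two criteria below (estimated prediction error and applicability-domain distance, both minimized) are assumed for illustration; the paper's actual criteria may differ:

```python
# Identify Pareto-optimal models from a collection scored on two criteria.
import numpy as np

def pareto_front(scores: np.ndarray) -> np.ndarray:
    """Return indices of non-dominated rows (all criteria minimized)."""
    idx = []
    for i, s in enumerate(scores):
        # A row dominates s if it is no worse everywhere and strictly better somewhere
        dominated = np.any(np.all(scores <= s, axis=1) & np.any(scores < s, axis=1))
        if not dominated:
            idx.append(i)
    return np.array(idx)

# One row per candidate model: [estimated error, domain distance]
scores = np.array([[0.20, 1.5], [0.15, 2.5], [0.30, 0.8], [0.25, 2.0]])
print("Pareto-optimal models:", pareto_front(scores))   # models 0, 1 and 2
```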

  10. Computationally efficient models of neuromuscular recruitment and mechanics

    Science.gov (United States)

    Song, D.; Raphael, G.; Lan, N.; Loeb, G. E.

    2008-06-01

    We have improved the stability and computational efficiency of a physiologically realistic, virtual muscle (VM 3.*) model (Cheng et al 2000 J. Neurosci. Methods 101 117-30) by a simpler structure of lumped fiber types and a novel recruitment algorithm. In the new version (VM 4.0), the mathematical equations are reformulated into state-space representation and structured into a CMEX S-function in SIMULINK. A continuous recruitment scheme approximates the discrete recruitment of slow and fast motor units under physiological conditions. This makes it possible to predict force output during smooth recruitment and derecruitment without having to simulate explicitly a large number of independently recruited units. We removed the intermediate state variable, effective length (Leff), which had been introduced to model the delayed length dependency of the activation-frequency relationship, but which had little effect and could introduce instability under physiological conditions of use. Both of these changes greatly reduce the number of state variables with little loss of accuracy compared to the original VM. The performance of VM 4.0 was validated by comparison with VM 3.1.5 for both single-muscle force production and a multi-joint task. The improved VM 4.0 model is more suitable for the analysis of neural control of movements and for design of prosthetic systems to restore lost or impaired motor functions. VM 4.0 is available via the internet and includes options to use the original VM model, which remains useful for detailed simulations of single motor unit behavior.

  12. Towards social autonomous vehicles: Efficient collision avoidance scheme using Richardson’s arms race model

    Science.gov (United States)

    Niazi, Muaz A.

    2017-01-01

    This paper presents the concept of a social autonomous agent to conceptualize such Autonomous Vehicles (AVs), which interact with other AVs using social manners similar to human behavior. The presented AVs also have the capability of predicting intentions, i.e. mentalizing, and copying the actions of each other, i.e. mirroring. The Exploratory Agent Based Modeling (EABM) level of the Cognitive Agent Based Computing (CABC) framework has been utilized to design the proposed social agent. Furthermore, to emulate the functionality of the mentalizing and mirroring modules of the proposed social agent, a tailored mathematical model of Richardson's arms race model has also been presented. The performance of the proposed social agent has been validated at two levels: first, it has been simulated using NetLogo, a standard agent-based modeling tool, and second, at a practical level using a prototype AV. The simulation results confirm that the proposed social agent-based collision avoidance strategy is 78.52% more efficient than a random-walk-based collision avoidance strategy in congested flock-like topologies. The practical results confirm that the proposed scheme can avoid rear-end and lateral collisions with an efficiency of 99.876%, compared with the existing state-of-the-art IEEE 802.11n-based mirror-neuron-based collision avoidance scheme. PMID:29040294

  13. Physical and JIT Model Based Hybrid Modeling Approach for Building Thermal Load Prediction

    Science.gov (United States)

    Iino, Yutaka; Murai, Masahiko; Murayama, Dai; Motoyama, Ichiro

    Energy conservation in buildings is a key environmental issue, as it is in the industrial, transportation and residential sectors. HVAC (Heating, Ventilating and Air Conditioning) systems account for about half of the total energy consumption of a building. In order to realize energy conservation in HVAC systems, a thermal load prediction model for the building is required. This paper proposes a hybrid modeling approach combining a physical and a Just-in-Time (JIT) model for building thermal load prediction. The proposed method has the following features and benefits: (1) it is applicable to cases in which past operation data for load prediction model learning are scarce; (2) it has a self-checking function, which continually supervises whether the data-driven load prediction and the physics-based one are consistent, so it can detect when something is wrong in the load prediction procedure; (3) it can adjust the load prediction in real time against sudden changes of model parameters and environmental conditions. The proposed method is evaluated with real operation data of an existing building, and the improvement of load prediction performance is illustrated.

  14. Learning receptive fields using predictive feedback.

    Science.gov (United States)

    Jehee, Janneke F M; Rothkopf, Constantin; Beck, Jeffrey M; Ballard, Dana H

    2006-01-01

    Previously, it was suggested that feedback connections from higher- to lower-level areas carry predictions of lower-level neural activities, whereas feedforward connections carry the residual error between the predictions and the actual lower-level activities [Rao, R.P.N., Ballard, D.H., 1999. Nature Neuroscience 2, 79-87.]. A computational model implementing the hypothesis learned simple cell receptive fields when exposed to natural images. Here, we use predictive feedback to explain tuning properties in medial superior temporal area (MST). We implement the hypothesis using a new, biologically plausible, algorithm based on matching pursuit, which retains all the features of the previous implementation, including its ability to efficiently encode input. When presented with natural images, the model developed receptive field properties as found in primary visual cortex. In addition, when exposed to visual motion input resulting from movements through space, the model learned receptive field properties resembling those in MST. These results corroborate the idea that predictive feedback is a general principle used by the visual system to efficiently encode natural input.
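
    A toy predictive-feedback loop in the Rao-Ballard spirit illustrates the division of labor described: feedback carries the prediction, the feedforward signal carries the residual error. This is a plain gradient version, not the paper's matching-pursuit implementation, and the random input is merely a stand-in for natural image patches:

```python
# Minimal predictive-coding sketch: settle hidden causes r on each input via
# the residual error, then slowly learn the generative weights U.
import numpy as np

rng = np.random.default_rng(2)
n_input, n_hidden = 64, 16
U = rng.normal(0, 0.1, (n_input, n_hidden))   # generative weights (receptive fields)

for _ in range(2000):
    x = rng.normal(size=n_input)              # stand-in for a natural image patch
    r = np.zeros(n_hidden)
    for _ in range(30):                       # settle hidden causes on this input
        err = x - U @ r                       # feedforward: residual error
        r += 0.05 * (U.T @ err - 0.1 * r)     # update estimate from error + decay
    U += 0.01 * np.outer(x - U @ r, r)        # slow learning of receptive fields
    U /= np.maximum(np.linalg.norm(U, axis=0), 1e-8)  # keep basis vectors bounded

print("learned basis shape:", U.shape)
```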

  15. A voxel-based finite element model for the prediction of bladder deformation

    Energy Technology Data Exchange (ETDEWEB)

    Xiangfei, Chai; Herk, Marcel van; Hulshof, Maarten C. C. M.; Bel, Arjan [Radiation Oncology Department, Academic Medical Center, University of Amsterdam, 1105 AZ Amsterdam (Netherlands); Radiation Oncology Department, Netherlands Cancer Institute, 1066 CX Amsterdam (Netherlands)]

    2012-01-15

    Purpose: A finite element (FE) bladder model was previously developed to predict bladder deformation caused by bladder filling change. However, two factors prevent a wide application of FE models: (1) the labor required to construct a FE model with high quality mesh and (2) long computation time needed to construct the FE model and solve the FE equations. In this work, we address these issues by constructing a low-resolution voxel-based FE bladder model directly from the binary segmentation images and compare the accuracy and computational efficiency of the voxel-based model used to simulate bladder deformation with those of a classical FE model with a tetrahedral mesh. Methods: For ten healthy volunteers, a series of MRI scans of the pelvic region was recorded at regular intervals of 10 min over 1 h. For this series of scans, the bladder volume gradually increased while rectal volume remained constant. All pelvic structures were defined from a reference image for each volunteer, including bladder wall, small bowel, prostate (male), uterus (female), rectum, pelvic bone, spine, and the rest of the body. Four separate FE models were constructed from these structures: one with a tetrahedral mesh (used in previous study), one with a uniform hexahedral mesh, one with a nonuniform hexahedral mesh, and one with a low-resolution nonuniform hexahedral mesh. Appropriate material properties were assigned to all structures and uniform pressure was applied to the inner bladder wall to simulate bladder deformation from urine inflow. Performance of the hexahedral meshes was evaluated against the performance of the standard tetrahedral mesh by comparing the accuracy of bladder shape prediction and computational efficiency. Results: FE model with a hexahedral mesh can be quickly and automatically constructed. No substantial differences were observed between the simulation results of the tetrahedral mesh and hexahedral meshes (<1% difference in mean dice similarity coefficient to

  16. A voxel-based finite element model for the prediction of bladder deformation

    International Nuclear Information System (INIS)

    Chai Xiangfei; Herk, Marcel van; Hulshof, Maarten C. C. M.; Bel, Arjan

    2012-01-01

    Purpose: A finite element (FE) bladder model was previously developed to predict bladder deformation caused by bladder filling change. However, two factors prevent a wide application of FE models: (1) the labor required to construct a FE model with high quality mesh and (2) long computation time needed to construct the FE model and solve the FE equations. In this work, we address these issues by constructing a low-resolution voxel-based FE bladder model directly from the binary segmentation images and compare the accuracy and computational efficiency of the voxel-based model used to simulate bladder deformation with those of a classical FE model with a tetrahedral mesh. Methods: For ten healthy volunteers, a series of MRI scans of the pelvic region was recorded at regular intervals of 10 min over 1 h. For this series of scans, the bladder volume gradually increased while rectal volume remained constant. All pelvic structures were defined from a reference image for each volunteer, including bladder wall, small bowel, prostate (male), uterus (female), rectum, pelvic bone, spine, and the rest of the body. Four separate FE models were constructed from these structures: one with a tetrahedral mesh (used in previous study), one with a uniform hexahedral mesh, one with a nonuniform hexahedral mesh, and one with a low-resolution nonuniform hexahedral mesh. Appropriate material properties were assigned to all structures and uniform pressure was applied to the inner bladder wall to simulate bladder deformation from urine inflow. Performance of the hexahedral meshes was evaluated against the performance of the standard tetrahedral mesh by comparing the accuracy of bladder shape prediction and computational efficiency. Results: FE model with a hexahedral mesh can be quickly and automatically constructed. No substantial differences were observed between the simulation results of the tetrahedral mesh and hexahedral meshes (<1% difference in mean dice similarity coefficient to

  17. Logistic regression modelling: procedures and pitfalls in developing and interpreting prediction models

    Directory of Open Access Journals (Sweden)

    Nataša Šarlija

    2017-01-01

    This study sheds light on the most common issues related to applying logistic regression in prediction models for company growth. The purpose of the paper is (1) to provide a detailed demonstration of the steps in developing a growth prediction model based on logistic regression analysis, (2) to discuss common pitfalls and methodological errors in developing a model, and (3) to provide solutions and possible ways of overcoming these issues. Special attention is devoted to the questions of satisfying logistic regression assumptions, selecting and defining dependent and independent variables, using classification tables and ROC curves for reporting model strength, interpreting odds ratios as effect measures, and evaluating the performance of the prediction model. Development of a logistic regression model in this paper focuses on a prediction model of company growth. The analysis is based on predominantly financial data from a sample of 1471 small and medium-sized Croatian companies active between 2009 and 2014. The financial data are presented in the form of financial ratios divided into nine main groups depicting the following areas of business: liquidity, leverage, activity, profitability, research and development, investing and export. The growth prediction model indicates aspects of a business critical for achieving high growth. In that respect, the contribution of this paper is twofold: first, methodological, in terms of pointing out pitfalls and potential solutions in logistic regression modelling, and second, theoretical, in terms of identifying factors responsible for high growth of small and medium-sized companies.
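
    A condensed sketch of that workflow, with synthetic stand-ins for the financial ratios and the growth label (the variable construction is illustrative, not the study's data):

```python
# Fit a logistic growth model, then report odds ratios, AUC and the
# classification table, mirroring the reporting steps described above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 1471
X = rng.normal(size=(n, 9))                     # nine groups of financial ratios
y = (0.8 * X[:, 0] - 0.5 * X[:, 3] + rng.normal(0, 1, n) > 0).astype(int)  # high growth

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

print("odds ratios:", np.round(np.exp(model.coef_[0]), 2))   # effect measures
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
print("classification table:\n", confusion_matrix(y_te, model.predict(X_te)))
```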

  18. EFFICIENCY AND COST MODELLING OF THERMAL POWER PLANTS

    Directory of Open Access Journals (Sweden)

    Péter Bihari

    2010-01-01

    The proper characterization of energy suppliers is one of the most important components in the modelling of the supply/demand relations of the electricity market. Power generation capacity, i.e. power plants, constitutes the supply side of the relation in the electricity market. The supply of power stations develops as the power stations attempt to achieve the greatest profit possible with the given prices and other limitations. The cost of operation and the cost of load increment are thus the most important characteristics of their behaviour on the market. In most electricity market models, however, it is not taken into account that the efficiency of a power station also depends on the level of the load, on the type and age of the power plant, and on environmental considerations. The trade in electricity on the free market cannot rely on models where these essential parameters are omitted. Such an incomplete model could lead to a situation where a particular power station would be run either only at its full capacity or else be entirely deactivated, depending on the prices prevailing on the free market. The reality is rather that the marginal cost of power generation might also be described by a function using the efficiency function. The derived marginal cost function gives the supply curve of the power station. The load-level-dependent efficiency function can be used not only for market modelling, but also for determining the pollutant and CO2 emissions of the power station, as well as shedding light on the conditions for successfully entering the market. Based on the measurement data, our paper presents mathematical models that might be used for the determination of the load-dependent efficiency functions of coal, oil, or gas fuelled power stations (steam turbine, gas turbine, combined cycle and IC engine based combined heat and power stations). These efficiency functions could also contribute to modelling market conditions and determining the

  19. A Pedestrian Approach to Indoor Temperature Distribution Prediction of a Passive Solar Energy Efficient House

    Directory of Open Access Journals (Sweden)

    Golden Makaka

    2015-01-01

    With the increase in energy consumed by buildings to keep the indoor environment within comfort levels, and with ever-increasing energy prices, there is a need to design buildings that require minimal energy to keep the indoor environment within comfort levels. This creates a need to predict the indoor temperature during the design stage. In this paper a statistical indoor temperature prediction model was developed. A passive solar house was constructed; its thermal behaviour was simulated using the ECOTECT and DOE computer software. The thermal behaviour of the house was monitored for a year. The indoor temperature was observed to be within the comfort level for 85% of the total time monitored. The simulation results were compared with the measured results and with those from the prediction model. The statistical prediction model was found to agree (95%) with the measured results. Simulation results were observed to agree (96%) with the statistical prediction model. Modelled indoor temperature was most sensitive to outdoor temperature variations. The daily mean peaks were found to be more pronounced in summer (5%) than in winter (4%). The developed model can be used to predict the instantaneous indoor temperature for a specific house design.

  20. Model output statistics applied to wind power prediction

    Energy Technology Data Exchange (ETDEWEB)

    Joensen, A; Giebel, G; Landberg, L [Risoe National Lab., Roskilde (Denmark); Madsen, H; Nielsen, H A [The Technical Univ. of Denmark, Dept. of Mathematical Modelling, Lyngby (Denmark)

    1999-03-01

    Being able to predict the output of a wind farm online for a day or two in advance has significant advantages for utilities, such as a better possibility to schedule fossil-fuelled power plants and a better position on electricity spot markets. In this paper prediction methods based on Numerical Weather Prediction (NWP) models are considered. The spatial resolution used in NWP models implies that these predictions are not valid locally at a specific wind farm. Furthermore, due to the non-stationary nature and complexity of the processes in the atmosphere, and occasional changes of NWP models, the deviation between the predicted and the measured wind will be time dependent. If observational data are available, and if the deviation between the predictions and the observations exhibits systematic behavior, this should be corrected for; if statistical methods are used, this approach is usually referred to as MOS (Model Output Statistics). The influence of atmospheric turbulence intensity, topography, prediction horizon length and auto-correlation of wind speed and power is considered, and to take the time-variations into account, adaptive estimation methods are applied. Three estimation techniques are considered and compared: Extended Kalman Filtering, recursive least squares and a new modified recursive least squares algorithm. (au) EU-JOULE-3. 11 refs.
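
    Of the techniques named, recursive least squares with a forgetting factor is the simplest to sketch; the mapping below (NWP wind speed to observed farm power via an adaptive linear correction) is a toy stand-in for the MOS setup described:

```python
# Adaptive MOS correction via recursive least squares with forgetting.
import numpy as np

class RLS:
    def __init__(self, n, lam=0.99, delta=100.0):
        self.w = np.zeros(n)          # regression coefficients
        self.P = np.eye(n) * delta    # inverse correlation matrix
        self.lam = lam                # forgetting factor < 1 tracks drift

    def update(self, x, y):
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)              # gain vector
        self.w += k * (y - x @ self.w)            # correct with prediction error
        self.P = (self.P - np.outer(k, Px)) / self.lam

rng = np.random.default_rng(4)
mos = RLS(n=2)
for t in range(500):
    u_nwp = rng.uniform(3, 15)                      # NWP-predicted wind speed
    power = 1.2 * u_nwp - 2.0 + rng.normal(0, 0.5)  # "observed" power (toy)
    x = np.array([1.0, u_nwp])                      # intercept + predictor
    mos.update(x, power)

print("adapted MOS coefficients:", np.round(mos.w, 2))   # ~[-2.0, 1.2]
```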

  1. Neuro-fuzzy modeling in bankruptcy prediction

    Directory of Open Access Journals (Sweden)

    Vlachos D.

    2003-01-01

    For the past 30 years the problem of bankruptcy prediction has been thoroughly studied. From the paper of Altman in 1968 to the recent papers in the '90s, the progress in prediction accuracy was not satisfactory. This paper investigates an alternative modeling of the system (the firm), combining neural networks and fuzzy controllers, i.e. using neuro-fuzzy models. Classical modeling is based on mathematical models that describe the behavior of the firm under consideration. The main idea of fuzzy control, on the other hand, is to build a model of a human control expert who is capable of controlling the process without thinking in terms of a mathematical model. This control expert specifies his control actions in the form of linguistic rules. These control rules are translated into the framework of fuzzy set theory, providing a calculus which can simulate the behavior of the control expert and enhance its performance. The accuracy of the model is studied using datasets from previous research papers.

  2. Short-Term Wind Speed Prediction Using EEMD-LSSVM Model

    Directory of Open Access Journals (Sweden)

    Aiqing Kang

    2017-01-01

    A hybrid Ensemble Empirical Mode Decomposition (EEMD) and Least Squares Support Vector Machine (LSSVM) model is proposed to improve short-term wind speed forecasting precision. The EEMD is first utilized to decompose the original wind speed time series into a set of subseries. LSSVM models are then established to forecast these subseries. The partial autocorrelation function is adopted to analyze the inner relationships within the historical wind speed series in order to determine the input variables of the LSSVM models for the prediction of every subseries. Finally, the superposition principle is employed to sum the predicted values of every subseries as the final wind speed prediction. The performance of the hybrid model is evaluated based on six metrics. Compared with LSSVM, Back Propagation Neural Networks (BP), Auto-Regressive Integrated Moving Average (ARIMA), a combination of Empirical Mode Decomposition (EMD) with LSSVM, and hybrid EEMD with ARIMA models, the wind speed forecasting results show that the proposed hybrid model outperforms these models in terms of all six metrics. Furthermore, scatter diagrams of predicted versus actual wind speed and histograms of prediction errors are presented to verify the superiority of the hybrid model in short-term wind speed prediction.
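
    A hedged sketch of the decompose-forecast-superpose pipeline, assuming the PyEMD package ("EMD-signal") for EEMD and using scikit-learn's KernelRidge as a stand-in for LSSVM (the two are closely related least-squares kernel methods); the hourly series and lag order are illustrative:

```python
# Decompose the series with EEMD, forecast each subseries one step ahead,
# then sum the subseries forecasts (superposition principle).
import numpy as np
from PyEMD import EEMD
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(5)
t = np.arange(600)
wind = 8 + 2 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.8, t.size)  # toy series

imfs = EEMD(trials=50).eemd(wind)        # decompose into subseries (IMFs)

def one_step_forecast(series, n_lags=6):
    # Embed: predict s[t] from s[t-n_lags..t-1]
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = series[n_lags:]
    model = KernelRidge(kernel="rbf", alpha=0.1, gamma=0.1).fit(X, y)
    return model.predict(series[-n_lags:].reshape(1, -1))[0]

pred = sum(one_step_forecast(imf) for imf in imfs)   # superposition of subseries
print("one-step-ahead wind speed forecast:", round(pred, 2))
```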

  3. PREDICTED PERCENTAGE DISSATISFIED (PPD) MODEL ...

    African Journals Online (AJOL)

    … their low power requirements, are relatively cheap and are environmentally friendly. … The performance of direct evaporative coolers is a …

  4. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference while making as few assumptions about restrictive parametric models as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to how the covariates are incorporated. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models by one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.

  5. Modeling the prediction of business intelligence system effectiveness.

    Science.gov (United States)

    Weng, Sung-Shun; Yang, Ming-Hsien; Koo, Tian-Lih; Hsiao, Pei-I

    2016-01-01

    Although business intelligence (BI) technologies are continually evolving, the capability to apply BI technologies has become an indispensable resource for enterprises running in today's complex, uncertain and dynamic business environment. This study performed pioneering work by constructing models and rules for the prediction of business intelligence system effectiveness (BISE) in relation to the implementation of BI solutions. For enterprises, effectively managing the critical attributes that determine BISE by developing prediction models with a set of rules for self-evaluation of the effectiveness of BI solutions is necessary to improve BI implementation and ensure its success. The main findings identified the critical prediction indicators of BISE that are important for forecasting BI performance, and highlighted five classification and prediction rules of BISE derived from decision tree structures, as well as a refined regression prediction model with four critical prediction indicators constructed by logistic regression analysis. These results can enable enterprises to improve BISE while effectively managing BI solution implementation, and offer theory of relevance to academics.

  6. [Application of ARIMA model on prediction of malaria incidence].

    Science.gov (United States)

    Jing, Xia; Hua-Xun, Zhang; Wen, Lin; Su-Jian, Pei; Ling-Cong, Sun; Xiao-Rong, Dong; Mu-Min, Cao; Dong-Ni, Wu; Shunxiang, Cai

    2016-01-29

    This study aimed to predict the incidence of local malaria in Hubei Province by applying the Autoregressive Integrated Moving Average (ARIMA) model. SPSS 13.0 software was applied to construct the ARIMA model based on the monthly local malaria incidence in Hubei Province from 2004 to 2009. The local malaria incidence data of 2010 were used for model validation and evaluation. The ARIMA (1, 1, 1)(1, 1, 0)12 model was selected as relatively optimal, with an AIC of 76.085 and SBC of 84.395. All the actual incidence data were within the 95% CI of the predicted values of the model. The prediction performance of the model was acceptable. The ARIMA model could effectively fit and predict the incidence of local malaria in Hubei Province.
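
    The same model class can be reproduced outside SPSS; a sketch with statsmodels and a synthetic monthly series (the order and seasonal order match the reported specification, the data do not):

```python
# Fit ARIMA (1,1,1)(1,1,0)12 and forecast the following year with 95% CIs.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
months = 72                                   # 2004-2009, monthly
seasonal = np.tile([5, 4, 3, 2, 2, 3, 6, 9, 10, 8, 6, 5], months // 12)
incidence = seasonal + rng.poisson(1, months)

model = sm.tsa.SARIMAX(incidence, order=(1, 1, 1),
                       seasonal_order=(1, 1, 0, 12)).fit(disp=False)
print("AIC:", round(model.aic, 3), " BIC:", round(model.bic, 3))

forecast = model.get_forecast(steps=12)       # predict the following year
print("predicted incidence:", np.round(forecast.predicted_mean, 1))
print("95% CI:\n", np.round(forecast.conf_int(), 1))
```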

  7. A Structurally Simplified Hybrid Model of Genetic Algorithm and Support Vector Machine for Prediction of Chlorophyll a in Reservoirs

    Directory of Open Access Journals (Sweden)

    Jieqiong Su

    2015-04-01

    With decreasing water availability as a result of climate change and human activities, analysis of the influential factors and variation trends of chlorophyll a has become important to prevent reservoir eutrophication and ensure water supply safety. In this paper, a structurally simplified hybrid model of the genetic algorithm (GA) and the support vector machine (SVM) was developed for the prediction of the monthly concentration of chlorophyll a in the Miyun Reservoir of northern China over the period from 2000 to 2010. Based on the influence factor analysis, the four most relevant influence factors for chlorophyll a (i.e., total phosphorus, total nitrogen, permanganate index, and reservoir storage) were extracted using the method of feature selection with the GA, which simplified the model structure, making it more practical and efficient for environmental management. The results showed that the developed simplified GA-SVM model could solve nonlinear problems of complex systems, and was suitable for the simulation and prediction of chlorophyll a, with better performance in accuracy and efficiency in the Miyun Reservoir.
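
    A compact sketch of GA-driven feature selection wrapped around an SVM shows the mechanism; the data, population size and GA operators here are illustrative choices, not the paper's configuration:

```python
# GA feature selection: evolve binary masks over candidate factors, scoring
# each mask by cross-validated SVM performance on the selected features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(7)
n, p = 120, 10                                  # samples x candidate factors
X = rng.normal(size=(n, p))
y = X[:, 0] + 0.8*X[:, 1] + 0.5*X[:, 2] + 0.3*X[:, 3] + rng.normal(0, 0.3, n)

def fitness(mask):
    if not mask.any():
        return -np.inf
    return cross_val_score(SVR(kernel="rbf"), X[:, mask], y, cv=3).mean()

pop = rng.integers(0, 2, (20, p)).astype(bool)  # random initial population
for _ in range(15):                             # generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]     # keep the fittest half
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, p)
        child = np.concatenate([a[:cut], b[cut:]])        # crossover
        child ^= rng.random(p) < 0.05                     # mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected factors:", np.where(best)[0])
```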

  8. Evolving chemometric models for predicting dynamic process parameters in viscose production

    Energy Technology Data Exchange (ETDEWEB)

    Cernuda, Carlos [Department of Knowledge-Based Mathematical Systems, Johannes Kepler University Linz (Austria); Lughofer, Edwin, E-mail: edwin.lughofer@jku.at [Department of Knowledge-Based Mathematical Systems, Johannes Kepler University Linz (Austria); Suppan, Lisbeth [Kompetenzzentrum Holz GmbH, St. Peter-Str. 25, 4021 Linz (Austria); Roeder, Thomas; Schmuck, Roman [Lenzing AG, 4860 Lenzing (Austria); Hintenaus, Peter [Software Research Center, Paris Lodron University Salzburg (Austria); Maerzinger, Wolfgang [i-RED Infrarot Systeme GmbH, Linz (Austria); Kasberger, Juergen [Recendt GmbH, Linz (Austria)

    2012-05-06

    Highlights: ► Quality assurance of process parameters in viscose production. ► Automatic prediction of spin-bath concentrations based on FT-NIR spectra. ► Evolving chemometric models for efficiently handling changing system dynamics over time (no time-intensive re-calibration needed). ► Significant reduction of huge errors produced by statistical state-of-the-art calibration methods. ► Sufficient flexibility achieved by gradual forgetting mechanisms. - Abstract: In viscose production, it is important to monitor three process parameters in order to assure a high quality of the final product: the concentrations of H2SO4, Na2SO4 and ZnSO4. During on-line production these process parameters usually show a quite high dynamics depending on the fiber type that is produced. Thus, conventional chemometric models, which are trained based on collected calibration spectra from Fourier transform near infrared (FT-NIR) measurements and kept fixed during the whole life-time of the on-line process, show a quite imprecise and unreliable behavior when predicting the concentrations of new on-line data. In this paper, we are demonstrating evolving chemometric models which are able to adapt automatically to varying process dynamics by updating their inner structures and parameters in a single-pass incremental manner. These models exploit the Takagi-Sugeno fuzzy model architecture, being able to model flexibly different degrees of non-linearities implicitly contained in the mapping between near infrared spectra (NIR) and reference values. Updating the inner structures is achieved by moving the position of already existing local regions and by evolving (increasing non-linearity) or merging (decreasing non-linearity) new local linear predictors on demand, which are guided by distance-based and similarity criteria. Gradual

  9. Pulmonary hypertension in patients with idiopathic pulmonary fibrosis - the predictive value of exercise capacity and gas exchange efficiency.

    Directory of Open Access Journals (Sweden)

    Sven Gläser

    Exercise capacity and survival of patients with IPF are potentially impaired by pulmonary hypertension. This study aims to investigate diagnostic and prognostic properties of gas exchange during exercise and lung function in IPF patients with or without pulmonary hypertension. In a multicentre setting, patients with IPF underwent right heart catheterization, cardiopulmonary exercise and lung function testing during their initial evaluation. Mortality follow-up was evaluated. Seventy-three of 135 patients [82 males; median age of 64 (56; 72) years] with IPF had pulmonary hypertension as assessed by right heart catheterization [median mean pulmonary arterial pressure 34 (27; 43) mmHg]. The presence of pulmonary hypertension was best predicted by gas exchange efficiency for carbon dioxide (cut-off ≥152% predicted; area under the curve 0.94) and peak oxygen uptake (≤56% predicted; 0.83), followed by diffusing capacity. Resting lung volumes did not predict pulmonary hypertension. Survival was best predicted by the presence of pulmonary hypertension, followed by peak oxygen uptake [HR 0.96 (0.93; 0.98)]. Pulmonary hypertension in IPF patients is best predicted by gas exchange efficiency during exercise and peak oxygen uptake. In addition to invasively measured pulmonary arterial pressure, oxygen uptake at peak exercise predicts survival in this patient population.

  10. PREDICTIVE CAPACITY OF ARCH FAMILY MODELS

    Directory of Open Access Journals (Sweden)

    Raphael Silveira Amaro

    2016-03-01

    In the last decades, a remarkable number of models, variants of the Autoregressive Conditional Heteroscedastic family, have been developed and empirically tested, making the process of choosing a particular model extremely complex. This research aims to compare the predictive capacity of five conditional heteroskedasticity models, considering eight different statistical probability distributions, using the Model Confidence Set procedure. The financial series used refer to the log-return series of the Bovespa index and the Dow Jones Industrial Index in the period between 27 October 2008 and 30 December 2014. The empirical evidence shows that, in general, the competing models have great homogeneity in making predictions, whether for the stock market of a developed country or for that of a developing country. An equivalent result can be inferred for the statistical probability distributions that were used.
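
    With the `arch` package, fitting one family member under several error distributions is straightforward; the sketch below uses simulated returns rather than the Bovespa/Dow series, and the Model Confidence Set comparison itself is beyond this fragment:

```python
# Fit GARCH(1,1) under different error distributions and compare fit metrics.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(8)
returns = rng.standard_t(df=5, size=1500) * 1.0   # toy daily % returns

for dist in ["normal", "t", "skewt", "ged"]:
    res = arch_model(returns, vol="GARCH", p=1, q=1, dist=dist).fit(disp="off")
    print(f"{dist:>7}: log-likelihood={res.loglikelihood:.1f}, AIC={res.aic:.1f}")

# Out-of-sample volatility forecast from the last fitted model
print(res.forecast(horizon=5).variance.iloc[-1])
```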

  11. Observed and Model-Derived Ozone Production Efficiency over Urban and Rural New York State

    Directory of Open Access Journals (Sweden)

    Matthew Ninneman

    2017-07-01

    This study examined the model-derived and observed ozone production efficiency (OPE = ΔOx/ΔNOz) at one rural location, Pinnacle State Park (PSP) in Addison, New York (NY), and one urban location, Queens College (QC) in Flushing, NY, in New York State (NYS) during photochemically productive hours (11 a.m.–4 p.m. Eastern Standard Time (EST)) in summer 2016. Measurement data and model predictions from the National Oceanic and Atmospheric Administration National Air Quality Forecast Capability (NOAA NAQFC)-Community Multiscale Air Quality (CMAQ) model versions 4.6 (v4.6) and 5.0.2 (v5.0.2) were used to assess the OPE at both sites. CMAQ-predicted and observed OPEs were often in poor agreement at PSP and in reasonable agreement at QC, with model-predicted and observed OPEs ranging from approximately 5–11 and 10–13, respectively, at PSP, and 4–7 and 6–8, respectively, at QC. The observed relationship between OPE and oxides of nitrogen (NOx) was studied at PSP to examine where the OPE downturn may have occurred. Summer 2016 observations at PSP did not reveal a distinct OPE downturn, but they did indicate that the OPE at PSP remained high (10 or greater) regardless of the [NOx] level. The observed OPEs at QC were determined using species-specific reactive odd nitrogen (NOy) instruments and an estimated value for nitrogen dioxide (NO2), since observed OPEs determined using non-specific NOx and NOy instruments yielded results that (1) varied from approximately 11–25, (2) sometimes had negative [NOz] concentrations, and (3) were inconsistent with CMAQ-predicted OPE. This difference in observed OPEs at QC depending on the suite of instruments used suggests that species-specific NOx and NOy instruments may be needed to obtain reliable urban OPEs.
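
    Operationally, the observed OPE is the slope of Ox (O3 + NO2) regressed on NOz (NOy - NOx) over the photochemically productive hours; a minimal estimate from paired afternoon observations (the concentrations below are synthetic):

```python
# Estimate OPE as the least-squares slope of Ox against NOz.
import numpy as np

rng = np.random.default_rng(9)
noz = rng.uniform(0.5, 6.0, 200)                    # ppb, oxidized NOx products
ox = 30 + 8.0 * noz + rng.normal(0, 3, 200)         # ppb, toy OPE of ~8

slope, intercept = np.polyfit(noz, ox, 1)
print(f"observed OPE (dOx/dNOz): {slope:.1f}")
```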

  12. Seasonal predictability of Kiremt rainfall in coupled general circulation models

    Science.gov (United States)

    Gleixner, Stephanie; Keenlyside, Noel S.; Demissie, Teferi D.; Counillon, François; Wang, Yiguo; Viste, Ellen

    2017-11-01

    The Ethiopian economy and population is strongly dependent on rainfall. Operational seasonal predictions for the main rainy season (Kiremt, June-September) are based on statistical approaches with Pacific sea surface temperatures (SST) as the main predictor. Here we analyse dynamical predictions from 11 coupled general circulation models for the Kiremt seasons from 1985-2005 with the forecasts starting from the beginning of May. We find skillful predictions from three of the 11 models, but no model beats a simple linear prediction model based on the predicted Niño3.4 indices. The skill of the individual models for dynamically predicting Kiremt rainfall depends on the strength of the teleconnection between Kiremt rainfall and concurrent Pacific SST in the models. Models that do not simulate this teleconnection fail to capture the observed relationship between Kiremt rainfall and the large-scale Walker circulation.
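
    The simple benchmark mentioned, a linear regression of seasonal rainfall on the predicted Niño3.4 index, can be sketched with leave-one-out skill estimation (all data here are synthetic placeholders for the 1985-2005 record):

```python
# Linear Nino3.4 benchmark for Kiremt rainfall with leave-one-out skill.
import numpy as np

rng = np.random.default_rng(10)
nino34 = rng.normal(size=21)                       # predicted May Nino3.4 index
rain = -0.6 * nino34 + rng.normal(0, 0.8, 21)      # JJAS rainfall anomaly (toy)

pred = np.empty_like(rain)
for i in range(len(rain)):                         # leave-one-out cross-validation
    mask = np.arange(len(rain)) != i
    b, a = np.polyfit(nino34[mask], rain[mask], 1)
    pred[i] = a + b * nino34[i]

print("LOO correlation skill:", round(np.corrcoef(pred, rain)[0, 1], 2))
```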

  13. Modeling light use efficiency in a subtropical mangrove forest equipped with CO2 eddy covariance

    Directory of Open Access Journals (Sweden)

    J. G. Barr

    2013-03-01

    Despite the importance of mangrove ecosystems in the global carbon budget, the relationships between environmental drivers and carbon dynamics in these forests remain poorly understood. This limited understanding is partly a result of the challenges associated with in situ flux studies. Tower-based CO2 eddy covariance (EC) systems are installed in only a few mangrove forests worldwide, and the longest EC record from the Florida Everglades contains less than 9 years of observations. A primary goal of the present study was to develop a methodology to estimate canopy-scale photosynthetic light use efficiency in this forest. These tower-based observations represent a basis for associating CO2 fluxes with canopy light use properties, and thus provide the means for utilizing satellite-based reflectance data for larger scale investigations. We present a model for mangrove canopy light use efficiency utilizing the enhanced green vegetation index (EVI) derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) that is capable of predicting changes in mangrove forest CO2 fluxes caused by a hurricane disturbance and changes in regional environmental conditions, including temperature and salinity. Model parameters are solved for in a Bayesian framework. The model structure requires estimates of ecosystem respiration (RE), and we present the first ever tower-based estimates of mangrove forest RE derived from nighttime CO2 fluxes. Our investigation is also the first to show the effects of salinity on mangrove forest CO2 uptake, which declines 5% per 10 parts per thousand (ppt) increase in salinity. Light use efficiency in this forest declines with increasing daily photosynthetic active radiation, which is an important departure from the assumption of constant light use efficiency typically applied in satellite-driven models. The model developed here provides a framework for estimating CO2 uptake by these forests from reflectance data and

  14. Prediction of lithium-ion battery capacity with metabolic grey model

    International Nuclear Information System (INIS)

    Chen, Lin; Lin, Weilong; Li, Junzi; Tian, Binbin; Pan, Haihong

    2016-01-01

    Given the popularity of lithium-ion batteries in EVs (electric vehicles), predicting the capacity quickly and accurately throughout a battery's full lifetime is still a challenging issue for ensuring the reliability of EVs. This paper proposes an approach to predicting how capacity varies with discharge cycles based on metabolic grey theory, considering issues from two perspectives: 1) three metabolic grey models are presented, including the MGM (metabolic grey model), MREGM (metabolic residual-error grey model), and MMREGM (metabolic Markov-residual-error grey model); 2) the universality of these models is explored under different conditions (such as various discharge rates and temperatures). The findings demonstrate good prediction performance for all three models; however, the precision of the MREGM model is inferior to that of the others. We therefore conclude that the MGM and MMREGM models perform excellently in predicting the capacity under a variety of load conditions, even when using few data points for modeling. The universality of the metabolic grey prediction theory is verified by predicting the capacity of batteries under different discharge rates and temperatures. - Highlights: • The metabolic mechanism is introduced in a grey system for capacity prediction. • Three metabolic grey models are presented and studied. • The universality of these models under different conditions is assessed. • A few data points are required for predicting the capacity with these models.
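
    The "metabolic" idea is a rolling window: each step refits a grey model on the most recent samples. A sketch of a metabolic GM(1,1) on a synthetic capacity-fade series follows (the plain MGM variant, without the residual-error or Markov extensions):

```python
# Metabolic GM(1,1): refit the grey model on a rolling window each cycle.
import numpy as np

def gm11_predict(x0):
    """Fit GM(1,1) on window x0 and return the next-step prediction."""
    x1 = np.cumsum(x0)
    z1 = 0.5 * (x1[1:] + x1[:-1])                 # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    n = len(x0)
    x1_next = (x0[0] - b / a) * np.exp(-a * n) + b / a
    x1_curr = (x0[0] - b / a) * np.exp(-a * (n - 1)) + b / a
    return x1_next - x1_curr

rng = np.random.default_rng(11)
cycles = np.arange(60)
capacity = 1.1 * np.exp(-0.004 * cycles) + rng.normal(0, 0.002, 60)  # Ah, toy fade

window = list(capacity[:6])                        # few data points suffice
preds = []
for k in range(6, 60):
    preds.append(gm11_predict(np.array(window)))
    window = window[1:] + [capacity[k]]            # metabolism: roll the window

rmse = np.sqrt(np.mean((np.array(preds) - capacity[6:]) ** 2))
print(f"rolling one-step RMSE: {rmse:.4f} Ah")
```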

  15. Comparison of joint modeling and landmarking for dynamic prediction under an illness-death model.

    Science.gov (United States)

    Suresh, Krithika; Taylor, Jeremy M G; Spratt, Daniel E; Daignault, Stephanie; Tsodikov, Alexander

    2017-11-01

    Dynamic prediction incorporates time-dependent marker information accrued during follow-up to improve personalized survival prediction probabilities. At any follow-up, or "landmark", time, the residual time distribution for an individual, conditional on their updated marker values, can be used to produce a dynamic prediction. To satisfy a consistency condition that links dynamic predictions at different time points, the residual time distribution must follow from a prediction function that models the joint distribution of the marker process and time to failure, such as a joint model. To circumvent the assumptions and computational burden associated with a joint model, approximate methods for dynamic prediction have been proposed. One such method is landmarking, which fits a Cox model at a sequence of landmark times, and thus is not a comprehensive probability model of the marker process and the event time. Considering an illness-death model, we derive the residual time distribution and demonstrate that the structure of the Cox model baseline hazard and covariate effects under the landmarking approach do not have simple form. We suggest some extensions of the landmark Cox model that should provide a better approximation. We compare the performance of the landmark models with joint models using simulation studies and cognitive aging data from the PAQUID study. We examine the predicted probabilities produced under both methods using data from a prostate cancer study, where metastatic clinical failure is a time-dependent covariate for predicting death following radiation therapy. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
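
    A minimal landmark analysis with the lifelines package illustrates the approach: at each landmark time, refit a Cox model on patients still at risk, using the marker value current at that time (all data below are simulated, not the PAQUID or prostate cancer cohorts):

```python
# Landmark Cox models: one fit per landmark time on the at-risk subset.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(12)
n = 400
marker0 = rng.normal(size=n)                   # baseline marker
slope = rng.normal(0.2, 0.05, n)               # marker drift per year
T = rng.exponential(scale=np.exp(-0.5 * marker0) * 6)   # event times (toy)
E = (T < 10).astype(int)                       # administrative censoring at 10y
T = np.minimum(T, 10)

for s in [1.0, 2.0, 3.0]:                      # landmark times (years)
    at_risk = T > s
    df = pd.DataFrame({
        "time": T[at_risk] - s,                # residual time from the landmark
        "event": E[at_risk],
        "marker": marker0[at_risk] + slope[at_risk] * s,  # value updated at s
    })
    cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
    hr = np.exp(cph.params_["marker"])
    print(f"landmark {s}: marker hazard ratio = {hr:.2f}")
```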

  16. Artificial neural networks: an efficient tool for modelling and optimization of biofuel production (a mini review)

    International Nuclear Information System (INIS)

    Sewsynker-Sukai, Yeshona; Faloye, Funmilayo; Kana, Evariste Bosco Gueguim

    2016-01-01

    In view of the looming energy crisis as a result of depleting fossil fuel resources and environmental concerns from greenhouse gas emissions, the need for sustainable energy sources has secured global attention. Research is currently focused towards renewable sources of energy due to their availability and environmental friendliness. Biofuel production like other bioprocesses is controlled by several process parameters including pH, temperature and substrate concentration; however, the improvement of biofuel production requires a robust process model that accurately relates the effect of input variables to the process output. Artificial neural networks (ANNs) have emerged as a tool for modelling complex, non-linear processes. ANNs are applied in the prediction of various processes; they are useful for virtual experimentations and can potentially enhance bioprocess research and development. In this study, recent findings on the application of ANN for the modelling and optimization of biohydrogen, biogas, biodiesel, microbial fuel cell technology and bioethanol are reviewed. In addition, comparative studies on the modelling efficiency of ANN and other techniques such as the response surface methodology are briefly discussed. The review highlights the efficiency of ANNs as a modelling and optimization tool in biofuel process development

  17. A Grey NGM(1,1,k) Self-Memory Coupling Prediction Model for Energy Consumption Prediction

    Directory of Open Access Journals (Sweden)

    Xiaojun Guo

    2014-01-01

    Energy consumption prediction is an important issue for governments, energy sector investors, and other related corporations. Although there are several prediction techniques, selection of the most appropriate technique is of vital importance. For the approximately non-homogeneous exponential data sequences that often emerge in energy systems, a novel grey NGM(1,1,k) self-memory coupling prediction model is put forward in order to improve predictive performance. It achieves an organic integration of the self-memory principle of dynamic systems and the grey NGM(1,1,k) model. The traditional grey model's weakness of being sensitive to initial values can be overcome by the self-memory principle. In this study, the total energy, coal, and electricity consumption of China is adopted for demonstration by using the proposed coupling prediction technique. The results show the superiority of the NGM(1,1,k) self-memory coupling prediction model when compared with results from the literature. Its excellent prediction performance lies in the fact that the proposed coupling model can take full advantage of the systematic multi-time historical data and capture the stochastic fluctuation tendency. This work also makes a significant contribution to the enrichment of grey prediction theory and the extension of its application span.

  18. Improving Predictive Modeling in Pediatric Drug Development: Pharmacokinetics, Pharmacodynamics, and Mechanistic Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Slikker, William; Young, John F.; Corley, Rick A.; Dorman, David C.; Conolly, Rory B.; Knudsen, Thomas; Erstad, Brian L.; Luecke, Richard H.; Faustman, Elaine M.; Timchalk, Chuck; Mattison, Donald R.

    2005-07-26

    A workshop was conducted on November 18-19, 2004, to address the issue of improving predictive models for drug delivery to developing humans. Although considerable progress has been made for adult humans, large gaps remain for predicting pharmacokinetic/pharmacodynamic (PK/PD) outcome in children because most adult models have not been tested during development. The goals of the meeting included a description of when, during development, infants/children become adultlike in handling drugs. The issue of incorporating the most recent advances into the predictive models was also addressed: both the use of imaging approaches and genomic information were considered. Disease state, as exemplified by obesity, was addressed as a modifier of drug pharmacokinetics and pharmacodynamics during development. Issues addressed in this workshop should be considered in the development of new predictive and mechanistic models of drug kinetics and dynamics in the developing human.

  19. Moderate efficiency of clinicians' predictions decreased for blurred clinical conditions and benefits from the use of BRASS index. A longitudinal study on geriatric patients' outcomes.

    Science.gov (United States)

    Signorini, Giulia; Dagani, Jessica; Bulgari, Viola; Ferrari, Clarissa; de Girolamo, Giovanni

    2016-01-01

    Accurate prognosis is an essential aspect of good clinical practice and efficient health services, particularly for chronic and disabling diseases, as in geriatric populations. This study aims to examine the accuracy of clinical prognostic predictions and to devise prediction models combining clinical variables and clinicians' prognosis for a geriatric patient sample. In a sample of 329 consecutive older patients admitted to 10 geriatric units, we evaluated the accuracy of clinicians' prognosis regarding three outcomes at discharge: global functioning, length of stay (LoS) in hospital, and destination at discharge (DD). A comprehensive set of sociodemographic, clinical, and treatment-related information were also collected. Moderate predictive performance was found for all three outcomes: area under receiver operating characteristic curve of 0.79 and 0.78 for functioning and LoS, respectively, and moderate concordance, Cohen's K = 0.45, between predicted and observed DD. Predictive models found the Blaylock Risk Assessment Screening Score together with clinicians' judgment relevant to improve predictions for all outcomes (absolute improvement in adjusted and pseudo-R(2) up to 19%). Although the clinicians' estimates were important factors in predicting global functioning, LoS, and DD, more research is needed regarding both methodological aspects and clinical measurements, to improve prognostic clinical indices.

  20. Energetics and efficiency of a molecular motor model

    International Nuclear Information System (INIS)

    Fogedby, Hans C; Svane, Axel

    2013-01-01

    The energetics and efficiency of a linear molecular motor model proposed by Mogilner et al are analyzed from an analytical point of view. The model, which is based on protein friction with a track, is described by coupled Langevin equations for the motion in combination with coupled master equations for the ATP hydrolysis. Here the energetics and efficiency of the motor are addressed using a many body scheme with focus on the efficiency at maximum power (EMP). It is found that the EMP is reduced from about 10% in a heuristic description of the motor to about 1 per mille when incorporating the full motor dynamics, owing to the strong dissipation associated with the motor action. (paper)
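
    For concreteness, the efficiency being maximized can be written in the standard molecular-motor form (this is the textbook definition; the paper's exact bookkeeping of input and output power may differ):

        \eta \;=\; \frac{P_\mathrm{out}}{P_\mathrm{in}} \;=\; \frac{F_\mathrm{ext}\, v}{r_\mathrm{ATP}\, \Delta\mu},

    where F_ext is the external load, v the mean motor velocity, r_ATP the ATP hydrolysis rate and Δμ the free energy released per hydrolysis event. The efficiency at maximum power is η evaluated at the load F* that maximizes the output power P_out = F_ext v(F_ext).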

  1. Prediction and analysis of near-road concentrations using a reduced-form emission/dispersion model

    Directory of Open Access Journals (Sweden)

    Kononowech Robert

    2010-06-01

    Background: Near-road exposures to traffic-related air pollutants have been receiving increased attention due to evidence linking emissions from high-traffic roadways to adverse health outcomes. To date, most epidemiological and risk analyses have utilized simple but crude exposure indicators, most typically proximity measures, such as the distance between freeways and residences, to represent air quality impacts from traffic. This paper derives and analyzes a simplified microscale simulation model designed to predict short-term (hourly) to long-term (annual) average pollutant concentrations near roads. Sensitivity analyses and case studies are used to highlight issues in predicting near-road exposures. Methods: Process-based simulation models using a computationally efficient reduced-form response surface structure and a minimum number of inputs integrate the major determinants of air pollution exposures: traffic volume and vehicle emissions, meteorology, and receptor location. We identify the most influential variables and then derive a set of multiplicative submodels that match predictions from the "parent" models MOBILE6.2 and CALINE4. The assembled model is applied to two case studies in the Detroit, Michigan area. The first predicts carbon monoxide (CO) concentrations at a monitoring site near a freeway. The second predicts CO and PM2.5 concentrations in a dense receptor grid over a 1 km² area around the intersection of two major roads. We analyze the spatial and temporal patterns of pollutant concentration predictions. Results: Predicted CO concentrations showed reasonable agreement with annual average and 24-hour measurements, e.g., 59% of the 24-hr predictions were within a factor of two of observations in the warmer months when CO emissions are more consistent. The highest concentrations of both CO and PM2.5 were predicted to occur near intersections and downwind of major roads during periods of unfavorable meteorology (e.g., low wind
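
    The multiplicative reduced-form structure can be pictured with a toy version: each factor is a cheap submodel of one determinant, and their product approximates the parent dispersion model. The functional forms and coefficients below are illustrative assumptions, not the fitted submodels matched to MOBILE6.2/CALINE4.

        def near_road_conc(traffic_vph, emission_g_per_vkm, distance_m, wind_ms):
            """Toy reduced-form near-road concentration (arbitrary units)."""
            source = traffic_vph * emission_g_per_vkm / 3600.0   # line-source strength
            dilution = 1.0 / max(wind_ms, 0.5)                   # inverse-wind dilution
            decay = (max(distance_m, 10.0) / 10.0) ** -0.75      # power-law distance falloff
            return source * dilution * decay

        # hourly CO proxy 50 m downwind of a 4000 veh/h road at 2 m/s wind
        print(near_road_conc(4000, 5.0, 50.0, 2.0))

    Because each factor depends on one input, the model can be evaluated over thousands of receptor-hours at negligible cost, which is what enables the dense grid and long averaging periods in the case studies.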

  2. Prediction models : the right tool for the right problem

    NARCIS (Netherlands)

    Kappen, Teus H.; Peelen, Linda M.

    2016-01-01

    PURPOSE OF REVIEW: Perioperative prediction models can help to improve personalized patient care by providing individual risk predictions to both patients and providers. However, the scientific literature on prediction model development and validation can be quite technical and challenging to

  3. Efficient Modelling and Generation of Markov Automata

    NARCIS (Netherlands)

    Koutny, M.; Timmer, Mark; Ulidowski, I.; Katoen, Joost P.; van de Pol, Jan Cornelis; Stoelinga, Mariëlle Ida Antoinette

    This paper introduces a framework for the efficient modelling and generation of Markov automata. It consists of (1) the data-rich process-algebraic language MAPA, allowing concise modelling of systems with nondeterminism, probability and Markovian timing; (2) a restricted form of the language, the

  4. An efficient ray tracing method for propagation prediction along a mobile route in urban environments

    Science.gov (United States)

    Hussain, S.; Brennan, C.

    2017-07-01

    This paper presents an efficient ray tracing algorithm for propagation prediction in urban environments. The work presented in this paper builds upon previous work in which the maximum coverage area where rays can propagate after interaction with a wall or vertical edge is described by a lit polygon. The shadow regions formed by buildings within the lit polygon are described by shadow polygons. In this paper, the lit polygons of images are mapped to a coarse grid superimposed over the coverage area. This mapping reduces the active image tree significantly for a given receiver point to accelerate the ray finding process. The algorithm also presents an efficient method of quickly determining the valid ray segments for a mobile receiver moving along a linear trajectory. The validation results show considerable computation time reduction with good agreement between the simulated and measured data for propagation prediction in large urban environments.
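
    The acceleration idea, precomputing for each image which coarse grid cells its lit region covers so that a receiver only tests the images registered in its own cell, can be sketched as follows (bounding boxes stand in for the lit/shadow polygons, which is a simplification of the paper's geometry):

        from collections import defaultdict

        def build_image_grid(lit_boxes, cell=50.0):
            """lit_boxes: image id -> (xmin, ymin, xmax, ymax) of its lit region."""
            grid = defaultdict(list)
            for img, (x0, y0, x1, y1) in lit_boxes.items():
                for i in range(int(x0 // cell), int(x1 // cell) + 1):
                    for j in range(int(y0 // cell), int(y1 // cell) + 1):
                        grid[(i, j)].append(img)    # this image may illuminate this cell
            return grid

        def candidate_images(grid, rx, ry, cell=50.0):
            return grid.get((int(rx // cell), int(ry // cell)), [])

        grid = build_image_grid({"tx": (0, 0, 400, 300), "wall_7": (120, 40, 260, 220)})
        print(candidate_images(grid, 150.0, 60.0))   # only these images need ray tests

    Moving a mobile receiver along a linear trajectory then amounts to walking through grid cells and updating the (much smaller) active image list only at cell boundaries.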

  5. Designing an artificial neural network using radial basis function to model exergetic efficiency of nanofluids in mini double pipe heat exchanger

    Science.gov (United States)

    Ghasemi, Nahid; Aghayari, Reza; Maddah, Heydar

    2018-06-01

    The present study aims at predicting and optimizing the exergetic efficiency of TiO2-Al2O3/water nanofluid at different Reynolds numbers, volume fractions and twist ratios using Artificial Neural Networks (ANN) and experimental data. Central Composite Design (CCD) and a cascade Radial Basis Function (RBF) network were used to display the significant levels of the analyzed factors on the exergetic efficiency. The size of the TiO2-Al2O3/water nanocomposite was 20-70 nm. The parameters of the ANN model were adapted by a radial basis function (RBF) training algorithm with a wide range of experimental data. Total mean square error and correlation coefficient were used to evaluate the results; the best result was obtained from a double-layer perceptron neural network with 30 neurons, for which the total mean square error (MSE) and correlation coefficient (R²) were 0.002 and 0.999, respectively. This indicated successful prediction by the network. Moreover, the proposed equation for predicting exergetic efficiency was extremely successful. According to the optimal curves, the optimum design parameters of a double pipe heat exchanger with inner twisted tape and nanofluid, under the constraint of exergetic efficiency 0.937, are found to be Reynolds number 2500, twist ratio 2.5 and volume fraction (v/v%) 0.05.
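
    A bare-bones RBF network of the sort used here, Gaussian basis functions around data-driven centres with a linear output layer, can be written in a few lines (a generic sketch: the paper's cascade-RBF training and CCD design are not reproduced, and the data are synthetic).

        import numpy as np
        from sklearn.cluster import KMeans

        def fit_rbf(X, y, n_centers=30, gamma=1.0):
            centers = KMeans(n_clusters=n_centers, n_init=10,
                             random_state=0).fit(X).cluster_centers_
            Phi = np.exp(-gamma * ((X[:, None, :] - centers[None, :, :])**2).sum(-1))
            w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # linear output weights
            return centers, w

        def predict_rbf(X, centers, w, gamma=1.0):
            Phi = np.exp(-gamma * ((X[:, None, :] - centers[None, :, :])**2).sum(-1))
            return Phi @ w

        rng = np.random.default_rng(1)
        X = rng.uniform(0.0, 1.0, size=(200, 3))   # scaled Re, volume fraction, twist ratio
        y = 0.90 + 0.05*X[:, 0] - 0.02*X[:, 2] + rng.normal(0.0, 0.005, 200)
        centers, w = fit_rbf(X, y)
        print(predict_rbf(X[:5], centers, w))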

  6. An efficient model for auxiliary diagnosis of hepatocellular carcinoma based on gene expression programming.

    Science.gov (United States)

    Zhang, Li; Chen, Jiasheng; Gao, Chunming; Liu, Chuanmiao; Xu, Kuihua

    2018-03-16

    Hepatocellular carcinoma (HCC) is a leading cause of cancer-related death worldwide. Early diagnosis of HCC greatly helps to achieve long-term disease-free survival. However, HCC is usually difficult to diagnose at an early stage. The aim of this study was to create a prediction model to diagnose HCC based on gene expression programming (GEP). GEP is an evolutionary algorithm and a domain-independent problem-solving technique. Clinical data show that six serum biomarkers, including gamma-glutamyl transferase, C-reactive protein, carcinoembryonic antigen, alpha-fetoprotein, carbohydrate antigen 153, and carbohydrate antigen 199, are related to HCC characteristics. In this study, the prediction of HCC was made based on these six biomarkers (195 HCC patients and 215 non-HCC controls) by setting up optimal joint models with GEP. The GEP model correctly discriminated 353 out of 410 subjects, with accuracies of 86.28% (283/328) and 85.37% (70/82) for the training and test sets, respectively. Compared to the results from the support vector machine, the artificial neural network, and the multilayer perceptron, GEP showed a better outcome. The results suggest that GEP modeling is a promising and excellent tool for the diagnosis of hepatocellular carcinoma and could be widely used in HCC auxiliary diagnosis.
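
    Gene expression programming evolves explicit formulas over the input biomarkers. The sketch below uses gplearn's SymbolicClassifier, which implements genetic programming, a close relative of GEP, as a stand-in; the six-biomarker data and the rule generating the labels are invented.

        import numpy as np
        from gplearn.genetic import SymbolicClassifier

        rng = np.random.default_rng(0)
        # columns: GGT, CRP, CEA, AFP, CA153, CA199 (synthetic values)
        X = rng.lognormal(mean=1.0, sigma=0.5, size=(400, 6))
        y = (X[:, 3] * X[:, 0] > 4.0).astype(int)    # toy "HCC" rule on AFP x GGT

        clf = SymbolicClassifier(population_size=500, generations=10, random_state=0)
        clf.fit(X, y)
        print(clf._program)      # the evolved symbolic expression
        print(clf.score(X, y))   # training accuracy

    The appeal over black-box learners, as the abstract notes, is that the fitted model is a readable expression joining the biomarkers.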

  7. Robust predictions of the interacting boson model

    International Nuclear Information System (INIS)

    Casten, R.F.; Koeln Univ.

    1994-01-01

    While most recognized for its symmetries and algebraic structure, the IBA model has other less-well-known but equally intrinsic properties which give unavoidable, parameter-free predictions. These predictions concern central aspects of low-energy nuclear collective structure. This paper outlines these "robust" predictions and compares them with the data.

  8. A unified tool for performance modelling and prediction

    International Nuclear Information System (INIS)

    Gilmore, Stephen; Kloul, Leila

    2005-01-01

    We describe a novel performability modelling approach, which facilitates the efficient solution of performance models extracted from high-level descriptions of systems. The notation which we use for our high-level designs is the Unified Modelling Language (UML) graphical modelling language. The technology which provides the efficient representation capability for the underlying performance model is the multi-terminal binary decision diagram (MTBDD)-based PRISM probabilistic model checker. The UML models are compiled through an intermediate language, the stochastic process algebra PEPA, before translation into MTBDDs for solution.

  9. Relationship among performance, carcass, and feed efficiency characteristics, and their ability to predict economic value in the feedlot.

    Science.gov (United States)

    Retallick, K M; Faulkner, D B; Rodriguez-Zas, S L; Nkrumah, J D; Shike, D W

    2013-12-01

    A 4-yr study was conducted using 736 steers of known Angus, Simmental, or Simmental × Angus genetics to determine the performance, carcass, and feed efficiency factors that explained variation in economic performance. Steers were pen fed, and individual DMI was recorded using a GrowSafe automated feeding system (GrowSafe Systems Ltd., Airdrie, Alberta, Canada). Steers consumed a similar diet and received similar management each year. The objectives of this study were to: 1) determine the current economic value of feed efficiency and 2) identify performance, carcass, and feed efficiency characteristics that predict carcass value, profit, cost of gain, and feed costs. Economic data used were from 2011 values. Feed efficiency values investigated were: feed conversion ratio (FCR; feed to gain), residual feed intake (RFI), residual BW gain (RG), and residual intake and BW gain (RIG). Dependent variables were carcass value ($/steer), profit ($/steer), feed costs ($/steer per day), and cost of gain ($/kg). Independent variables were year, DMI, ADG, HCW, LM area, marbling, yield grade, dam breed, and sire breed. A 10% improvement in RG significantly increased profit, as did a 10% improvement in the other feed efficiency measures. Eighty-five percent of the variation in cost of gain was explained by ADG, DMI, HCW, and year. Prediction equations were developed that excluded ADG and DMI and included feed efficiency values. Using these equations, cost of gain was explained primarily by FCR (R(2) = 0.71). Seventy-three percent of profitability was explained, with 55% being accounted for by RG and marbling. These prediction equations represent the relative importance of factors contributing to economic success in feedlot cattle based on current prices.
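
    The feed efficiency measures referenced above have standard definitions; the sketch below computes the two most common ones, FCR and RFI, on made-up steer records (RG and RIG follow the same residual logic with the regression turned around).

        import pandas as pd
        import statsmodels.api as sm

        df = pd.DataFrame({                        # illustrative per-steer test data
            "dmi": [9.8, 11.2, 10.1, 12.0, 10.7],  # dry matter intake, kg/d
            "adg": [1.5, 1.7, 1.4, 1.8, 1.6],      # average daily gain, kg/d
            "mbw": [98.0, 105.0, 100.0, 110.0, 102.0],  # metabolic BW, kg^0.75
        })
        df["fcr"] = df["dmi"] / df["adg"]          # feed conversion ratio (feed:gain)
        ols = sm.OLS(df["dmi"], sm.add_constant(df[["adg", "mbw"]])).fit()
        df["rfi"] = ols.resid                      # residual feed intake; negative = efficient
        print(df[["fcr", "rfi"]])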

  10. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the "philosophy" behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical

  11. Thermomechanical fatigue life prediction of high temperature components

    Energy Technology Data Exchange (ETDEWEB)

    Seifert, Thomas; Hartrott, Philipp von; Riedel, Hermann; Siegele, Dieter [Fraunhofer-Inst. fuer Werkstoffmechanik (IWM), Freiburg (Germany)

    2009-07-01

    The aim of the work described in this paper is to provide a computational method for fatigue life prediction of high temperature components, in which the time and temperature dependent fatigue crack growth is a relevant damage mechanism. The fatigue life prediction is based on a law for microcrack growth and a fracture mechanics estimate of the cyclic crack tip opening displacement. In addition, a powerful model for nonisothermal cyclic plasticity is employed, and an efficient laboratory test procedure is proposed for the determination of the model parameters. The models are efficiently implemented into finite element programs and are used to predict the fatigue life of a cast iron exhaust manifold and a notch in the perimeter of a turbine rotor made of a ferritic/martensitic 10%-chromium steel. (orig.)
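
    In this line of mechanism-based lifetime models, the crack growth law typically takes the form below (this is the general form from the literature on the approach; the paper's exact formulation and parameters may differ):

        \frac{da}{dN} \;=\; \beta\,\Delta\mathrm{CTOD},
        \qquad
        \Delta\mathrm{CTOD} \;\approx\; d_n\,\frac{\Delta J_\mathrm{eff}}{\sigma_\mathrm{cy}},

    and since the effective cyclic J-integral ΔJ_eff scales roughly linearly with crack length a for microcracks, da/dN ∝ a, so integrating from an initial flaw size a_0 to a critical size a_f yields a fatigue life N_f ∝ ln(a_f/a_0). The cycle-dependent quantities (β, d_n, the cyclic yield stress σ_cy and ΔJ_eff) are what the nonisothermal plasticity model and the proposed laboratory tests supply.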

  12. Predictive modeling of liquid-sodium thermal–hydraulics experiments and computations

    International Nuclear Information System (INIS)

    Arslan, Erkan; Cacuci, Dan G.

    2014-01-01

    Highlights: • We applied the predictive modeling method of Cacuci and Ionescu-Bujor (2010). • We assimilated data from sodium flow experiments. • We used computational fluid dynamics simulations of sodium experiments. • The predictive modeling method greatly reduced uncertainties in predicted results. - Abstract: This work applies the predictive modeling procedure formulated by Cacuci and Ionescu-Bujor (2010) to assimilate data from liquid-sodium thermal–hydraulics experiments in order to reduce systematically the uncertainties in the predictions of computational fluid dynamics (CFD) simulations. The predicted CFD-results for the best-estimate model parameters and results describing sodium-flow velocities and temperature distributions are shown to be significantly more precise than the original computations and experiments, in that the predicted uncertainties for the best-estimate results and model parameters are significantly smaller than both the originally computed and the experimental uncertainties

  13. Interpreting Disruption Prediction Models to Improve Plasma Control

    Science.gov (United States)

    Parsons, Matthew

    2017-10-01

    In order for the tokamak to be a feasible design for a fusion reactor, it is necessary to minimize damage to the machine caused by plasma disruptions. Accurately predicting disruptions is a critical capability for triggering any mitigative actions, and a modest amount of attention has been given to efforts that employ machine learning techniques to make these predictions. By monitoring diagnostic signals during a discharge, such predictive models look for signs that the plasma is about to disrupt. Typically these predictive models are interpreted simply to give a 'yes' or 'no' response as to whether a disruption is approaching. However, it is possible to extract further information from these models to indicate which input signals are more strongly correlated with the plasma approaching a disruption. If highly accurate predictive models can be developed, this information could be used in plasma control schemes to make better decisions about disruption avoidance. This work was supported by a Grant from the 2016-2017 Fulbright U.S. Student Program, administered by the Franco-American Fulbright Commission in France.
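
    One straightforward way to extract the per-signal information described above is permutation importance on a trained disruption classifier; the sketch below uses synthetic stand-ins for diagnostic signals (the signal names and disruption rule are invented).

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.inspection import permutation_importance

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 4))   # e.g. locked-mode amplitude, li, q95, density
        y = (X[:, 0] + 0.5*X[:, 2] + rng.normal(0.0, 0.5, 1000) > 1.0).astype(int)

        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
        imp = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
        for name, score in zip(["locked_mode", "li", "q95", "density"],
                               imp.importances_mean):
            print(f"{name}: {score:.3f}")   # larger = more disruption-relevant

    Signals ranked this way could, as suggested, feed an avoidance controller rather than serving only as a binary mitigation trigger.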

  14. SU-E-T-479: Development and Validation of Analytical Models Predicting Secondary Neutron Radiation in Proton Therapy Applications

    International Nuclear Information System (INIS)

    Farah, J; Bonfrate, A; Donadille, L; Martinetti, F; Trompier, F; Clairand, I; De Olivera, A; Delacroix, S; Herault, J; Piau, S; Vabre, I

    2014-01-01

    Purpose: Test and validation of analytical models predicting leakage neutron exposure in passively scattered proton therapy. Methods: Taking inspiration from the literature, this work attempts to build an analytical model predicting neutron ambient dose equivalents, H*(10), within the local 75 MeV ocular proton therapy facility. MC simulations were first used to model H*(10) in the beam axis plane while considering a closed final collimator and pristine Bragg peak delivery. Next, the MC-based analytical model was tested against simulation results and experimental measurements. The model was also extended in the vertical direction to enable a full 3D mapping of H*(10) inside the treatment room. Finally, the work focused on upgrading the literature model to clinically relevant configurations considering modulated beams, open collimators, patient-induced neutron fluctuations, etc. Results: The MC-based analytical model efficiently reproduced simulated H*(10) values with a maximum difference below 10%. In addition, it succeeded in predicting measured H*(10) values with differences <40%. The highest differences were registered at the closest and farthest positions from isocenter, where the analytical model failed to faithfully reproduce the high neutron fluence and energy variations. The differences remain acceptable, however, taking into account the high measurement/simulation uncertainties and the end use of this model, i.e. radiation protection. Moreover, the model was successfully extended (differences < 20% on simulations and < 45% on measurements) to predict neutrons in the vertical direction with respect to the beam line, as patients are in the upright seated position during ocular treatments. Accounting for the impact of beam modulation, collimation and the presence of a patient in the beam path is far more challenging, and conversion coefficients are currently being defined to predict stray neutrons in clinically representative treatment configurations. Conclusion

  15. Some Remarks on Prediction of Drug-Target Interaction with Network Models.

    Science.gov (United States)

    Zhang, Shao-Wu; Yan, Xiao-Ying

    2017-01-01

    System-level understanding of the relationships between drugs and targets is very important for enhancing drug research, especially for drug function repositioning. The experimental methods used to determine drug-target interactions are usually time-consuming, tedious and expensive, and sometimes lack reproducibility. Thus, it is highly desired to develop computational methods for efficiently and effectively analyzing and detecting new drug-target interaction pairs. With the explosive growth of different types of omics data, such as genome, pharmacology, phenotypic, and other kinds of molecular networks, numerous computational approaches have been developed to predict Drug-Target Interactions (DTI). In this review, we make a survey on the recent advances in predicting drug-target interaction with network-based models from the following aspects: i) Available public data sources and benchmark datasets; ii) Drug/target similarity metrics; iii) Network construction; iv) Common network algorithms; v) Performance comparison of existing network-based DTI predictors.
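
    The simplest network-based predictor in this family is a similarity-weighted profile: a drug inherits interaction scores from chemically similar drugs. A toy version (all matrices invented):

        import numpy as np

        drug_sim = np.array([[1.0, 0.8, 0.1],      # drug-drug similarity matrix
                             [0.8, 1.0, 0.2],
                             [0.1, 0.2, 1.0]])
        known = np.array([[1, 0],                  # known drug-target adjacency
                          [0, 1],
                          [0, 0]])

        # score(drug i, target j) = similarity-weighted average of known interactions
        scores = drug_sim @ known / drug_sim.sum(axis=1, keepdims=True)
        print(scores)   # row 2: predicted affinity of the unannotated drug to each target

    More elaborate predictors covered by such reviews replace this average with random walks, matrix factorization, or heterogeneous-network propagation, but the guilt-by-association principle is the same.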

  16. Exploring the predictive power of interaction terms in a sophisticated risk equalization model using regression trees.

    Science.gov (United States)

    van Veen, S H C M; van Kleef, R C; van de Ven, W P M M; van Vliet, R C J A

    2018-02-01

    This study explores the predictive power of interaction terms between the risk adjusters in the Dutch risk equalization (RE) model of 2014. Due to the sophistication of this RE-model and the complexity of the associations in the dataset (N = ~16.7 million), there are theoretically more than a million interaction terms. We used regression tree modelling, which has been applied rarely within the field of RE, to identify interaction terms that statistically significantly explain variation in observed expenses that is not already explained by the risk adjusters in this RE-model. The interaction terms identified were used as additional risk adjusters in the RE-model. We found evidence that interaction terms can improve the prediction of expenses overall and for specific groups in the population. However, the prediction of expenses for some other selective groups may deteriorate. Thus, interactions can reduce financial incentives for risk selection for some groups but may increase them for others. Furthermore, because regression trees are not robust, additional criteria are needed to decide which interaction terms should be used in practice. These criteria could be the right incentive structure for risk selection and efficiency or the opinion of medical experts.
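
    The mechanics of the approach, fitting the additive RE-model and then letting a shallow regression tree search the residuals for splits that pair up risk adjusters, can be sketched as follows (variables, effect sizes and thresholds are invented):

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.tree import DecisionTreeRegressor, export_text

        rng = np.random.default_rng(0)
        age = rng.integers(18, 90, 5000).astype(float)
        chronic = rng.integers(0, 2, 5000).astype(float)
        # true expenses contain an age x chronic interaction the linear model misses
        cost = 100 + 20*age + 3000*chronic + 50*age*chronic + rng.normal(0, 500, 5000)

        X = np.column_stack([age, chronic])
        resid = cost - LinearRegression().fit(X, cost).predict(X)
        tree = DecisionTreeRegressor(max_depth=2, min_samples_leaf=200).fit(X, resid)
        print(export_text(tree, feature_names=["age", "chronic"]))

    Leaves with large mean residuals that are defined by two different adjusters point to candidate interaction terms, which are then added back into the RE-model as extra risk adjusters, mirroring the procedure described above.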

  17. Efficient family-based model checking via variability abstractions

    DEFF Research Database (Denmark)

    Dimovski, Aleksandar; Al-Sibahi, Ahmad Salim; Brabrand, Claus

    2016-01-01

    Many software systems are variational: they can be configured to meet diverse sets of requirements. They can produce a (potentially huge) number of related systems, known as products or variants, by systematically reusing common parts. For variational models (variational systems or families of related systems), specialized family-based model checking algorithms allow efficient verification of multiple variants, simultaneously, in a single run. These algorithms, implemented in a tool Snip, scale much better than "the brute force" approach, where all individual systems are verified one by one with a single-system model checker. This work combines variability abstractions with the abstract model checking of the concrete high-level variational model. This allows the use of Spin with all its accumulated optimizations for efficient verification of variational models without any knowledge about variability. We have implemented the transformations in a prototype tool, and we illustrate the approach on case studies.

  18. In silico modeling to predict drug-induced phospholipidosis

    International Nuclear Information System (INIS)

    Choi, Sydney S.; Kim, Jae S.; Valerio, Luis G.; Sadrieh, Nakissa

    2013-01-01

    Drug-induced phospholipidosis (DIPL) is a preclinical finding during pharmaceutical drug development that has implications on the course of drug development and regulatory safety review. A principal characteristic of drugs inducing DIPL is known to be a cationic amphiphilic structure. This provides evidence for a structure-based explanation and opportunity to analyze properties and structures of drugs with the histopathologic findings for DIPL. In previous work from the FDA, in silico quantitative structure–activity relationship (QSAR) modeling using machine learning approaches has shown promise with a large dataset of drugs but included unconfirmed data as well. In this study, we report the construction and validation of a battery of complementary in silico QSAR models using the FDA's updated database on phospholipidosis, new algorithms and predictive technologies, and in particular, we address high performance with a high-confidence dataset. The results of our modeling for DIPL include rigorous external validation tests showing 80–81% concordance. Furthermore, the predictive performance characteristics include models with high sensitivity and specificity, in most cases above ≥ 80% leading to desired high negative and positive predictivity. These models are intended to be utilized for regulatory toxicology applied science needs in screening new drugs for DIPL. - Highlights: • New in silico models for predicting drug-induced phospholipidosis (DIPL) are described. • The training set data in the models is derived from the FDA's phospholipidosis database. • We find excellent predictivity values of the models based on external validation. • The models can support drug screening and regulatory decision-making on DIPL
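
    The cationic-amphiphilic (CAD) alert underlying these QSARs can be illustrated with a crude two-descriptor screen in RDKit; the logP cutoff and the basic-amine SMARTS pattern are illustrative assumptions, not the FDA models described above.

        from rdkit import Chem
        from rdkit.Chem import Descriptors

        def cad_flag(smiles, logp_cut=3.0):
            """Flag potential cationic amphiphiles: lipophilic plus a basic amine."""
            mol = Chem.MolFromSmiles(smiles)
            lipophilic = Descriptors.MolLogP(mol) >= logp_cut
            basic_amine = mol.HasSubstructMatch(
                Chem.MolFromSmarts("[NX3;!$(N=*);!$(N-C=O)]"))
            return lipophilic and basic_amine

        print(cad_flag("CN(C)CCCN1c2ccccc2CCc2ccccc21"))   # imipramine-like: True

    A validated QSAR battery replaces this single structural alert with trained statistical models over many descriptors, which is how the sensitivity/specificity figures above are achieved.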

  19. Prediction of Protein Thermostability by an Efficient Neural Network Approach

    Directory of Open Access Journals (Sweden)

    Jalal Rezaeenour

    2016-10-01

    The proposed approach significantly improves the accuracy of ELM in the prediction of thermostable enzymes. ELM tends to require more neurons in the hidden layer than conventional tuning-based learning algorithms. To overcome this, the proposed approach uses a GA which optimizes the structure and the parameters of the ELM. In summary, optimization of ELM with GA results in an efficient prediction method; numerical experiments proved that our approach yields excellent results.
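
    An extreme learning machine is short enough to write out: random hidden weights and a single least-squares solve for the output layer. The GA wrapper from the paper (which tunes the hidden-layer structure and parameters) is omitted, and the protein features and labels below are synthetic.

        import numpy as np

        def train_elm(X, y, n_hidden=40, seed=0):
            rng = np.random.default_rng(seed)
            W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (not trained)
            b = rng.normal(size=n_hidden)
            H = np.tanh(X @ W + b)                        # hidden activations
            beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output weights
            return W, b, beta

        def predict_elm(X, W, b, beta):
            return np.tanh(X @ W + b) @ beta

        rng = np.random.default_rng(1)
        X = rng.normal(size=(300, 10))               # stand-in sequence-derived features
        y = (X[:, 0] - X[:, 3] > 0).astype(float)    # toy thermostable/mesophilic label
        W, b, beta = train_elm(X, y)
        print(((predict_elm(X, W, b, beta) > 0.5) == y).mean())   # training accuracy

    A GA can then search over n_hidden and the weight draws, scoring each candidate by cross-validated accuracy, which is the optimization role it plays in the paper.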

  20. Modeling critical episodes of air pollution by particulate matter (PM10) in Santiago, Chile: Comparison of the predictive efficiency of parametric and non-parametric statistical models

    Directory of Open Access Journals (Sweden)

    Sergio A. Alvarado

    2010-12-01

    Objective: To evaluate the predictive efficiency of parametric and non-parametric statistical models in predicting next-day critical pollution episodes in which particulate matter (PM10) exceeds the daily air quality standard in Santiago, Chile. Accurate prediction of such episodes allows the health authorities to decree restrictive measures that lessen the severity of the episode and, consequently, protect the community's health. Methods: We used the PM10 concentrations registered by a station of the MACAM-2 air quality monitoring network, considering 152 daily observations of 14 variables, together with meteorological information recorded from 2001 to 2004. Parametric Gamma models were fitted using the statistical package STATA v11, and non-parametric models using a demo of the statistical software MARS v 2.0 distributed by Salford-Systems. Results: Both modeling methods show a high correlation between observed and predicted values. The Gamma models show better predictive accuracy than MARS for PM10 concentrations with values
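
    The parametric half of the comparison is a Gamma regression; a minimal sketch with statsmodels (synthetic data and invented coefficients; only the model family matches the study):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        pm10_today = rng.uniform(30.0, 180.0, 400)    # today's max 24-h PM10
        wind = rng.uniform(0.5, 6.0, 400)             # forecast wind speed, m/s
        mu = np.exp(2.0 + 0.012*pm10_today - 0.15*wind)
        pm10_next = rng.gamma(shape=8.0, scale=mu / 8.0)   # next-day PM10, Gamma-distributed

        X = sm.add_constant(np.column_stack([pm10_today, wind]))
        gamma_glm = sm.GLM(pm10_next, X,
                           family=sm.families.Gamma(link=sm.families.links.Log()))
        print(gamma_glm.fit().summary())

    An episode alert is then issued when the predicted mean (or an upper quantile) exceeds the daily standard; MARS plays the same role non-parametrically.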