WorldWideScience

Sample records for model prediction efficiency

  1. EFFICIENT PREDICTIVE MODELLING FOR ARCHAEOLOGICAL RESEARCH

    OpenAIRE

    Balla, A.; Pavlogeorgatos, G.; Tsiafakis, D.; Pavlidis, G.

    2014-01-01

    The study presents a general methodology for designing, developing and implementing predictive modelling for identifying areas of archaeological interest. The methodology is based on documented archaeological data and geographical factors, geospatial analysis and predictive modelling, and has been applied to the identification of possible Macedonian tombs’ locations in Northern Greece. The model was tested extensively and the results were validated using a commonly used predictive gain,...

  2. A Computationally Efficient Aggregation Optimization Strategy of Model Predictive Control

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Model Predictive Control (MPC) is a popular technique that has been successfully used in various industrial applications. However, the major drawback of MPC, its formidable on-line computational effort, limits its applicability to relatively slow and/or small processes with a moderate number of inputs. This paper develops an aggregation optimization strategy for MPC that improves its computational efficiency. For the regulation problem, an input-decaying aggregation optimization algorithm is presented that aggregates all the original optimized variables on the control horizon into a decaying sequence with respect to the current control action.
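    The aggregation idea can be illustrated with a minimal sketch (the scalar dynamics, weights and decay factor below are hypothetical, not from the paper): tying future moves to the current one via a decaying sequence u_i = lam**i * u collapses the horizon-long optimization to a single quadratic in u, which has a closed-form minimizer.

```python
# Hedged sketch of input-decaying aggregation for MPC (illustrative
# scalar system; not the paper's formulation). Future moves are tied to
# the current one, u_i = lam**i * u, so the horizon-long optimization
# collapses to a single quadratic in u with a closed-form minimizer.

def aggregated_mpc_move(x0, a=0.9, b=0.5, lam=0.6, N=10, q=1.0, r=0.1):
    """Optimal current move u for x_{i+1} = a*x_i + b*u_i, u_i = lam**i * u."""
    # x_i = a**i * x0 + g_i * u, with g_0 = 0 and g_{i+1} = a*g_i + b*lam**i.
    # Cost J(u) = sum_i q*x_i**2 + r*u_i**2 = alpha*u**2 + 2*beta*u + const.
    g, alpha, beta = 0.0, 0.0, 0.0
    for i in range(N):
        x_free = a ** i * x0                  # free response (u = 0)
        alpha += q * g * g + r * (lam ** i) ** 2
        beta += q * x_free * g
        g = a * g + b * lam ** i              # coefficient of u in x_{i+1}
    return -beta / alpha                      # minimizer of the quadratic
```

    A sanity check on the sketch: a zero state yields a zero move, and a positive state yields a negative (regulating) move.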

  3. An Efficient Deterministic Approach to Model-based Prediction Uncertainty

    Data.gov (United States)

    National Aeronautics and Space Administration — Prognostics deals with the prediction of the end of life (EOL) of a system. EOL is a random variable, due to the presence of process noise and uncertainty in the...

  4. Multiple regression models for the prediction of the maximum obtainable thermal efficiency of organic Rankine cycles

    DEFF Research Database (Denmark)

    Larsen, Ulrik; Pierobon, Leonardo; Wronski, Jorrit;

    2014-01-01

    to power. In this study we propose four linear regression models to predict the maximum obtainable thermal efficiency for simple and recuperated ORCs. A previously derived methodology is able to determine the maximum thermal efficiency among many combinations of fluids and processes, given the boundary...

  5. Model Predictive Vibration Control Efficient Constrained MPC Vibration Control for Lightly Damped Mechanical Structures

    CERN Document Server

    Takács, Gergely

    2012-01-01

    Real-time model predictive controller (MPC) implementation in active vibration control (AVC) is often rendered difficult by fast sampling speeds and extensive actuator-deformation asymmetry. If the control of lightly damped mechanical structures is assumed, the region of attraction containing the set of allowable initial conditions requires a large prediction horizon, making the already computationally demanding on-line process even more complex. Model Predictive Vibration Control provides insight into the predictive control of lightly damped vibrating structures by exploring computationally efficient algorithms which are capable of low frequency vibration control with guaranteed stability and constraint feasibility. In addition to a theoretical primer on active vibration damping and model predictive control, Model Predictive Vibration Control provides a guide through the necessary steps in understanding the founding ideas of predictive control applied in AVC, such as the implementation of ...

  6. Numerical flow simulation and efficiency prediction for axial turbines by advanced turbulence models

    Science.gov (United States)

    Jošt, D.; Škerlavaj, A.; Lipej, A.

    2012-11-01

    Numerical prediction of the efficiency of a 6-blade Kaplan turbine is presented. First, the results of steady-state analysis performed with different turbulence models for different operating regimes are compared to measurements. For small and optimal runner blade angles the efficiency was predicted quite accurately, but for the maximal blade angle the discrepancy between calculated and measured values was quite large. Transient analysis, especially with the Scale Adaptive Simulation Shear Stress Transport (SAS SST) model with zonal Large Eddy Simulation (ZLES) in the draft tube, improved the predicted efficiency significantly. Improvement was seen at all operating points but was largest at maximal discharge, owing to better flow simulation in the draft tube. Details of the turbulent structure in the draft tube obtained with SST, SAS SST and SAS SST with ZLES are illustrated in order to explain the differences in flow energy losses obtained with the different turbulence models.

  7. Quantitative Regression Models for the Prediction of Chemical Properties by an Efficient Workflow.

    Science.gov (United States)

    Yin, Yongmin; Xu, Congying; Gu, Shikai; Li, Weihua; Liu, Guixia; Tang, Yun

    2015-10-01

    Rapid safety assessment is increasingly needed for the growing number of chemicals handled by chemical industries and regulators around the world, and traditional experimental methods can no longer meet the demand. With the development of information technology and the growth of experimental data, in silico modeling has become a practical and rapid alternative for the assessment of chemical properties, especially for toxicity prediction of organic chemicals. In this study, a quantitative regression workflow was built in KNIME to predict chemical properties. With this workflow, quantitative values of chemical properties can be obtained, unlike binary- or multi-classification models that give only qualitative results. To illustrate its usage, two predictive models were constructed based on datasets of Tetrahymena pyriformis toxicity and aqueous solubility. The q² values from 5-fold cross-validation (q²cv) and external validation (q²test) for both models were greater than 0.7, which implies that the models are robust and reliable and that the workflow is convenient and efficient for predicting various chemical properties. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
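    The q² statistic used in this record can be reproduced with a small self-contained sketch (pure Python; the one-feature linear model and synthetic data are illustrative assumptions): q² = 1 − PRESS/TSS over k folds.

```python
# Hedged sketch of the q^2 (predictive squared correlation) statistic from
# k-fold cross-validation: q^2 = 1 - PRESS / TSS. The one-feature linear
# model and the interleaved fold scheme are illustrative assumptions.

def fit_line(xs, ys):
    """Ordinary least squares for y = m*x + c."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return m, my - m * mx

def q2_kfold(xs, ys, k=5):
    """Predictive squared correlation over k interleaved folds."""
    n = len(xs)
    my = sum(ys) / n
    press = 0.0
    for fold in range(k):
        test = set(range(fold, n, k))                 # held-out indices
        train = [i for i in range(n) if i not in test]
        m, c = fit_line([xs[i] for i in train], [ys[i] for i in train])
        press += sum((ys[i] - (m * xs[i] + c)) ** 2 for i in test)
    tss = sum((y - my) ** 2 for y in ys)
    return 1.0 - press / tss
```

    For noise-free linear data q² evaluates to 1 up to rounding; 0.7 is the robustness threshold the authors cite, and a model that cannot predict the held-out folds falls well below it.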

  8. Efficiency prediction for a low head bulb turbine with SAS SST and zonal LES turbulence models

    Science.gov (United States)

    Jošt, D.; Škerlavaj, A.

    2014-03-01

    A comparison between results of numerical simulations and measurements for a 3-blade bulb turbine is presented in order to determine an appropriate numerical setup for accurate and reliable simulations of flow in low head turbines. Numerical analysis was done for three angles of runner blades at two values of head. For the smallest blade angle the efficiency was quite accurately predicted, but for the optimal and maximal blade angles steady state analysis entirely failed to predict the efficiency due to underestimated torque on the shaft and incorrect results in the draft tube. Transient simulation with SST did not give satisfactory results, but with SAS and zonal LES models the prediction of efficiency was significantly improved. From the results obtained by SAS and zonal LES the interdependence between turbulence models, vortex structures in the flow, values of eddy viscosity and flow energy losses in the draft tube can be seen. The effect of using the bounded central difference scheme instead of the high resolution scheme was also evident. To test the effect of grid density, simulations were performed on four grids. While the difference between results obtained on the basic grid and on the fine grid was small, the results obtained on the coarse grids were not satisfactory.

  9. Predicting input impedance and efficiency of graphene reconfigurable dipoles using a simple circuit model

    CERN Document Server

    Tamagnone, Michele

    2014-01-01

    An analytical circuit model able to predict the input impedance of reconfigurable graphene plasmonic dipoles is presented. A suitable definition of plasmonic characteristic impedance, employing natural currents, is used for consistent modeling of the antenna-load connection in the circuit. In its purely analytical form, the model shows good agreement with full-wave simulations, and explains the remarkable tuning properties of graphene antennas. Furthermore, using a single full-wave simulation and scaling laws, additional parasitic elements can be determined for a vast parametric space, leading to very accurate modeling. Finally, we also show that the modeling approach allows fair estimation of radiation efficiency as well. The approach also applies to thin plasmonic antennas realized using noble metals or semiconductors.

  10. A branch scale analytical model for predicting the vegetation collection efficiency of ultrafine particles

    Science.gov (United States)

    Lin, M.; Katul, G. G.; Khlystov, A.

    2012-05-01

    The removal of ultrafine particles (UFP) by vegetation is now receiving significant attention given their role in cloud physics, human health and respiratory diseases. Vegetation is known to be a sink for UFP, prompting interest in their collection efficiency. A number of models have tackled the UFP collection efficiency of an isolated leaf or a flat surface; however, up-scaling these theories to the ecosystem level has resisted complete theoretical treatment. To progress on a narrower scope of this problem, simultaneous experimental and theoretical investigations were carried out at the “intermediate” branch scale. Such a scale retains the large number of leaves and their interaction with the flow without the heterogeneities and added geometric complexities encountered within ecosystems. The experiments focused on the collection efficiencies of UFP in the size range 12.6-102 nm for pine and juniper branches in a wind tunnel facility. Scanning mobility particle sizers were used to measure the concentration of each diameter class of UFP upstream and downstream of the vegetation branches, thereby allowing the determination of the UFP vegetation collection efficiencies. The UFP vegetation collection efficiency was measured at different wind speeds (0.3-1.5 m s⁻¹), packing densities (i.e. volume fraction of leaf or needle fibers; 0.017 and 0.040 for pine, 0.037 and 0.055 for juniper), and branch orientations. These measurements were then used to investigate the performance of a proposed analytical model that predicts the branch-scale collection efficiency using conventional canopy properties such as the drag coefficient and leaf area density. Despite the numerous simplifications employed, the proposed analytical model agreed with the wind tunnel measurements mostly to within 20%. This analytical tractability can benefit future air quality and climate models incorporating UFP.
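    As a hedged illustration of how single-collector results up-scale, a common exponential attenuation form (an assumption here, not the authors' derived model) treats the branch as a homogeneous filter: penetration decays with single-leaf efficiency, leaf area density and path depth.

```python
# Hedged sketch (NOT the paper's model): exponential attenuation of
# particle concentration through foliage, assuming independent,
# well-mixed leaf collectors along the flow path.
import math

def branch_collection_efficiency(eta_leaf, lad, depth):
    """E = 1 - exp(-eta_leaf * lad * depth): fraction of particles removed
    over a path of length `depth` (m) through foliage with leaf area
    density `lad` (m^2 m^-3) and single-leaf efficiency `eta_leaf`."""
    return 1.0 - math.exp(-eta_leaf * lad * depth)
```

    Note that removal saturates toward 1 with increasing depth or leaf area density, which is one reason up-scaling single-leaf efficiencies is not a simple multiplication.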

  11. An Efficient Constrained Model Predictive Control Algorithm Based on Approximate Computation

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The on-line computational burden of model predictive control (MPC) for large-scale constrained systems hampers its real-time application and limits it to slow dynamic processes with a moderate number of inputs. To avoid this, an efficient and fast algorithm based on aggregation optimization is proposed in this paper. It optimizes only the current control action at time instant k, while the future control sequence in the optimization horizon is approximated off-line by a linear feedback control sequence, so the on-line optimization can be converted into a low-dimensional quadratic programming problem. Input constraints are handled naturally in this scheme. Performance comparable to the standard model predictive control algorithm is achieved, and simulation results demonstrate its effectiveness.
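    The scheme can be sketched for a scalar system (all numbers are hypothetical illustrations): future moves are replaced by a fixed off-line feedback law, leaving only the current move as an on-line decision variable, whose one-dimensional constrained quadratic program is solved by clipping the unconstrained minimizer.

```python
# Hedged sketch of MPC with approximate computation (scalar system,
# made-up parameters): only the current move u0 is optimized on-line;
# later moves follow a precomputed feedback u_i = -k_fb * x_i. The cost
# is then a quadratic in u0, minimized in closed form and clipped to the
# input bounds (exact for a one-variable box-constrained QP).

def mpc_current_move(x0, a=1.1, b=0.8, k_fb=0.6, N=12, q=1.0, r=0.1,
                     u_min=-1.0, u_max=1.0):
    acl = a - b * k_fb                 # closed-loop dynamics after step 0
    # x_1 = a*x0 + b*u0; for i >= 1: x_{i+1} = acl*x_i, u_i = -k_fb*x_i,
    # so the tail cost is proportional to x_1**2:
    w = (q + r * k_fb ** 2) * sum(acl ** (2 * j) for j in range(N - 1))
    # J(u0) = q*x0**2 + r*u0**2 + w*(a*x0 + b*u0)**2  -> quadratic in u0
    u0 = -(w * a * b * x0) / (r + w * b * b)
    return max(u_min, min(u_max, u0))  # enforce the input constraint
```

    For large states the unconstrained minimizer saturates at the input bound, which is exactly the behaviour the constrained QP would produce in the scalar case.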

  12. Efficient estimation and prediction for the Bayesian binary spatial model with flexible link functions.

    Science.gov (United States)

    Roy, Vivekananda; Evangelou, Evangelos; Zhu, Zhengyuan

    2016-03-01

    Spatial generalized linear mixed models (SGLMMs) are popular models for spatial data with a non-Gaussian response. Binomial SGLMMs with logit or probit link functions are often used to model spatially dependent binomial random variables. It is known that for independent binomial data, the robit regression model provides a more robust (against extreme observations) alternative to the more popular logistic and probit models. In this article, we introduce a Bayesian spatial robit model for spatially dependent binomial data. Since constructing a meaningful prior on the link function parameter as well as on the spatial correlation parameters in SGLMMs is difficult, we propose an empirical Bayes (EB) approach for the estimation of these parameters and for the prediction of the random effects. The EB methodology is implemented by efficient importance sampling methods based on Markov chain Monte Carlo (MCMC) algorithms. Our simulation study shows that the robit model is robust against model misspecification, and our EB method results in estimates with less bias than a full Bayesian (FB) analysis. The methodology is applied to a Celastrus orbiculatus dataset and a Rhizoctonia root disease dataset. For the former, which is known to contain outlying observations, the robit model is shown to predict the spatial distribution of an invasive species better. For the latter, our approach performs as well as the classical models in predicting root disease severity, as the probit link is shown to be appropriate. Though for brevity this article focuses on binomial SGLMMs, the EB methodology is more general and can be applied to other types of SGLMMs. The accompanying R package geoBayes provides implementations for other SGLMMs, such as Poisson and Gamma SGLMMs.
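    The robit link replaces the normal CDF of the probit with a Student-t CDF, whose heavier tails down-weight extreme observations. A minimal sketch (numerical integration of the t density; the degrees of freedom and integration grid are illustrative choices, and a library CDF would be used in practice):

```python
import math

def t_pdf(x, nu):
    """Density of the Student-t distribution with nu degrees of freedom."""
    c = math.gamma((nu + 1) / 2) / (math.sqrt(nu * math.pi) * math.gamma(nu / 2))
    return c * (1.0 + x * x / nu) ** (-(nu + 1) / 2)

def robit_inv_link(eta, nu=7, steps=20000, lo=-40.0):
    """P(y=1) = T_nu(eta): t CDF by trapezoidal integration (a sketch;
    the lower limit `lo` truncates a negligible tail for moderate nu)."""
    h = (eta - lo) / steps
    s = 0.5 * (t_pdf(lo, nu) + t_pdf(eta, nu))
    for i in range(1, steps):
        s += t_pdf(lo + i * h, nu)
    return s * h
```

    As nu grows the robit link approaches the probit; small nu gives the robustness against outlying observations that the abstract exploits.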

  13. Incremental validity of positive orientation: predictive efficiency beyond the five-factor model

    Directory of Open Access Journals (Sweden)

    Łukasz Roland Miciuk

    2016-05-01

    Background The relation of positive orientation (a basic predisposition to think positively of oneself, one’s life and one’s future) to personality traits is still disputed. The purpose of the research was to verify the hypothesis that positive orientation has predictive efficiency beyond the five-factor model. Participants and procedure One hundred and thirty participants (mean age M = 24.84) completed the following questionnaires: the Self-Esteem Scale (SES), the Satisfaction with Life Scale (SWLS), the Life Orientation Test-Revised (LOT-R), the Positivity Scale (P-SCALE), the NEO Five Factor Inventory (NEO-FFI), the Self-Concept Clarity Scale (SCC), the Generalized Self-Efficacy Scale (GSES) and the Life Engagement Test (LET). Results The introduction of positive orientation as an additional predictor in the second step of regression analyses led to better prediction of purpose in life, self-concept clarity and generalized self-efficacy. The effect was strongest for purpose in life (a 14% increment in explained variance). Conclusions The results confirmed the hypothesis that positive orientation shows incremental validity: its inclusion in the regression model, in addition to the five main personality factors, increases the amount of explained variance. These findings provide further evidence for the legitimacy of measuring positive orientation and personality traits separately.
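    Incremental validity of this kind is assessed with hierarchical regression: fit a baseline model, add the new predictor, and report the increment in R². A self-contained sketch with synthetic stand-in data (the variables and numbers are illustrative, not the study's):

```python
# Hedged sketch of hierarchical regression delta-R^2. Pure Python via
# normal equations; `traits` stands in for a five-factor score and `po`
# for positive orientation. All data are synthetic illustrations.

def r_squared(X, y):
    """R^2 of the least-squares fit y ~ X (X includes an intercept column)."""
    n, p = len(X), len(X[0])
    A = [[sum(X[i][j] * X[i][k] for i in range(n)) for k in range(p)]
         for j in range(p)]
    b = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    for col in range(p):                       # Gaussian elimination
        piv = max(range(col, p), key=lambda r_: abs(A[r_][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r_ in range(col + 1, p):
            f = A[r_][col] / A[col][col]
            for k in range(col, p):
                A[r_][k] -= f * A[col][k]
            b[r_] -= f * b[col]
    w = [0.0] * p
    for r_ in range(p - 1, -1, -1):            # back substitution
        w[r_] = (b[r_] - sum(A[r_][k] * w[k] for k in range(r_ + 1, p))) / A[r_][r_]
    my = sum(y) / n
    ssr = sum((y[i] - sum(w[j] * X[i][j] for j in range(p))) ** 2 for i in range(n))
    tss = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ssr / tss

traits = [float(i % 7) for i in range(30)]     # baseline predictor
po = [float(i % 5) for i in range(30)]         # predictor added in step 2
y = [0.5 * t + 0.8 * p_ for t, p_ in zip(traits, po)]
base = r_squared([[1.0, t] for t in traits], y)
full = r_squared([[1.0, t, p_] for t, p_ in zip(traits, po)], y)
delta_r2 = full - base                          # incremental validity
```

    The increment `delta_r2` is the quantity reported in the abstract (e.g. the 14% figure for purpose in life).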

  14. Stochastic DEA model with undesirable outputs: An application to the prediction of anti-HIV therapy efficiency

    Institute of Scientific and Technical Information of China (English)

    BIAN Fuping; TANG Xiaoqin

    2006-01-01

    This paper proposes a stochastic prediction DEA model with undesirable outputs and simplifies it using chance-constrained techniques to obtain an equivalent linear programming formulation. The existence and stability of the optimal solutions are proved, and the model is used to describe and predict the efficiency of anti-HIV therapy in AIDS patients.

  15. An efficient numerical target strength prediction model: Validation against analytical solutions

    NARCIS (Netherlands)

    Fillinger, L.; Nijhof, M.J.J.; Jong, C.A.F. de

    2014-01-01

    A decade ago, TNO developed RASP (Rapid Acoustic Signature Prediction), a numerical model for the prediction of the target strength of immersed underwater objects. The model is based on Kirchhoff diffraction theory. It is currently being improved to model refraction, angle dependent reflection and t

  16. Improving Computational Efficiency of Model Predictive Control Genetic Algorithms for Real-Time Decision Support

    Science.gov (United States)

    Minsker, B. S.; Zimmer, A. L.; Ostfeld, A.; Schmidt, A.

    2014-12-01

    Enabling real-time decision support, particularly under conditions of uncertainty, requires computationally efficient algorithms that can rapidly generate recommendations. In this paper, a suite of model predictive control (MPC) genetic algorithms are developed and tested offline to explore their value for reducing CSOs during real-time use in a deep-tunnel sewer system. MPC approaches include the micro-GA, the probability-based compact GA, and domain-specific GA methods that reduce the number of decision variable values analyzed within the sewer hydraulic model, thus reducing algorithm search space. Minimum fitness and constraint values achieved by all GA approaches, as well as computational times required to reach the minimum values, are compared to large population sizes with long convergence times. Optimization results for a subset of the Chicago combined sewer system indicate that genetic algorithm variations with coarse decision variable representation, eventually transitioning to the entire range of decision variable values, are most efficient at addressing the CSO control problem. Although diversity-enhancing micro-GAs evaluate a larger search space and exhibit shorter convergence times, these representations do not reach minimum fitness and constraint values. The domain-specific GAs prove to be the most efficient and are used to test CSO sensitivity to energy costs, CSO penalties, and pressurization constraint values. The results show that CSO volumes are highly dependent on the tunnel pressurization constraint, with reductions of 13% to 77% possible with less conservative operational strategies. Because current management practices may not account for varying costs at CSO locations and electricity rate changes in the summer and winter, the sensitivity of the results is evaluated for variable seasonal and diurnal CSO penalty costs and electricity-related system maintenance costs, as well as different sluice gate constraint levels. 
    These findings indicate...

  17. Improving Computational Efficiency of Prediction in Model-based Prognostics Using the Unscented Transform

    Data.gov (United States)

    National Aeronautics and Space Administration — Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of...

  18. Efficient Control of Nonlinear Noise-Corrupted Systems Using a Novel Model Predictive Control Framework

    OpenAIRE

    Weissel, Florian; Huber, Marco F.; Hanebeck, Uwe D.

    2007-01-01

    Model identification and measurement acquisition are always to some degree uncertain. Therefore, a framework for Nonlinear Model Predictive Control (NMPC) is proposed that explicitly considers the noise influence on nonlinear dynamic systems with continuous state spaces and a finite set of control inputs, in order to significantly increase the control quality. Integral parts of NMPC are the prediction of system states over a finite horizon as well as the problem specific modeling of reward func...

  19. Model-based evaluation of subsurface monitoring networks for improved efficiency and predictive certainty of regional groundwater models

    Science.gov (United States)

    Gosses, M. J.; Wöhling, Th.; Moore, C. R.; Dann, R.; Scott, D. M.; Close, M.

    2012-04-01

    Groundwater resources worldwide are increasingly under pressure. Demands from different local stakeholders add to the challenge of managing this resource. In response, groundwater models have become popular to make predictions about the impact of different management strategies and to estimate possible impacts of changes in climatic conditions. These models can assist to find optimal management strategies that comply with the various stakeholder needs. Observations of the states of the groundwater system are essential for the calibration and evaluation of groundwater flow models, particularly when they are used to guide the decision making process. On the other hand, installation and maintenance of observation networks are costly. Therefore it is important to design monitoring networks carefully and cost-efficiently. In this study, we analyse the Central Plains groundwater aquifer (~ 4000 km2) between the Rakaia and Waimakariri rivers on the Eastern side of the Southern Alps in New Zealand. The large sedimentary groundwater aquifer is fed by the two alpine rivers and by recharge from the land surface. The area is mainly under agricultural land use and large areas of the land are irrigated. The other major water use is the drinking water supply for the city of Christchurch. The local authority in the region, Environment Canterbury, maintains an extensive groundwater quantity and quality monitoring programme to monitor the effects of land use and discharges on groundwater quality, and the suitability of the groundwater for various uses, especially drinking-water supply. Current and projected irrigation water demand has raised concerns about possible impacts on groundwater-dependent lowland streams. We use predictive uncertainty analysis and the Central Plains steady-state groundwater flow model to evaluate the worth of pressure head observations in the existing groundwater well monitoring network. 
    The data worth of particular observations is dependent on the problem...

  20. ARCH Models Efficiency Evaluation in Prediction and Poultry Price Process Formation

    Directory of Open Access Journals (Sweden)

    Behzad Fakari Sardehae

    2016-09-01

    This study shows that heterogeneous variance exists in the error term, as indicated by an LM test. Results and Discussion: The stationarity test showed that the poultry price has a unit root and becomes stationary after first differencing, so first differences of the poultry price were used in the study. The main results showed that ARCH is the best model for predicting fluctuations. Moreover, news has an asymmetric effect on poultry price fluctuations: good news has a stronger effect than bad news, and no leverage effect exists in the poultry price. Current fluctuations are not transmitted to the future. One of the main assumptions of time series models is constant variance in the estimated coefficients; if this assumption does not hold, the coefficients estimated from serially correlated data are biased and lead to incorrect interpretation. The results showed that ARCH effects exist in the error terms of the poultry price, so the ARCH family with a Student's t distribution should be used. Testing the normality of the error term and examining heterogeneous variance are necessary; neglecting them causes false conclusions. The results showed that ARCH models have good predictive power and that ARMA models are less efficient, indicating that non-linear predictions outperform linear ones. According to the results, the Student's t distribution should be used as the target distribution in the estimated models. Conclusion: The large demand for poultry requires infrastructure able to respond to it. The results showed that poultry price volatility changes over time and may intensify at any time. The asymmetric effect of good and bad news on the poultry price leads to consumer reactions: good news had significant effects on the poultry market and created positive change in the poultry price, but bad news did not have significant effects. In fact, because the poultry product is essential in the household portfolio, it should not...
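    The ARCH behaviour described above (volatility clustering, conditional variance driven by past shocks) can be sketched with a minimal ARCH(1) simulator; the parameters are illustrative, not estimates from the poultry data:

```python
# Hedged sketch of an ARCH(1) process with Gaussian innovations
# (the abstract recommends Student-t innovations for estimation).
import random

def simulate_arch1(n, omega=0.2, alpha=0.3, seed=42):
    """Simulate eps_t = sigma_t * z_t with sigma_t^2 = omega + alpha*eps_{t-1}^2.
    Large shocks raise next-period variance, producing volatility clusters."""
    rng = random.Random(seed)
    eps, out = 0.0, []
    for _ in range(n):
        sig2 = omega + alpha * eps * eps   # conditional variance update
        eps = rng.gauss(0.0, sig2 ** 0.5)
        out.append(eps)
    return out
```

    The unconditional variance is omega/(1 − alpha); in practice the parameters would be estimated by maximum likelihood rather than fixed by hand.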

  1. An Efficient Modelling Approach for Prediction of Porosity Severity in Composite Structures

    Science.gov (United States)

    Bedayat, Houman; Forghani, Alireza; Hickmott, Curtis; Roy, Martin; Palmieri, Frank; Grimsley, Brian; Coxon, Brian; Fernlund, Goran

    2017-01-01

    Porosity, as a manufacturing process-induced defect, strongly affects the mechanical properties of cured composites. Multiple phenomena affect the formation of porosity during the cure process. Porosity sources include entrapped air, volatiles and off-gassing, as well as bag and tool leaks. Porosity sinks are the mechanisms that reduce porosity, including gas transport, void shrinkage and collapse, and resin flow into void space. Despite significant progress in porosity research, the fundamentals of porosity in composites are not yet fully understood; its highly coupled multi-physics and multi-scale nature makes it a complicated problem to predict. Experimental evidence shows that the resin pressure history throughout the cure cycle plays an important role in the porosity of the cured part: maintaining high resin pressure promotes void shrinkage and collapse and keeps volatiles in solution, thus preventing off-gassing and bubble formation. This study summarizes the latest development of an efficient FE modeling framework to simulate the gas and resin transport mechanisms that are among the major phenomena contributing to porosity.

  2. Modeling recombination processes and predicting energy conversion efficiency of dye sensitized solar cells from first principles

    Science.gov (United States)

    Ma, Wei; Meng, Sheng

    2014-03-01

    We present a set of algorithms, based solely on first-principles calculations, to accurately compute key properties of a DSC device, including sunlight harvesting, electron injection, electron-hole recombination, and open circuit voltage. Two series of D-π-A dyes are adopted as sample dyes. The short circuit current can be predicted by calculating the dyes' photoabsorption and the electron injection and recombination lifetimes using real-time time-dependent density functional theory (TDDFT) simulations. The open circuit voltage can be reproduced by calculating the energy difference between the quasi-Fermi level of electrons in the semiconductor and the electrolyte redox potential, considering the influence of electron recombination. Based on timescales obtained from real-time TDDFT excited-state dynamics, the estimated power conversion efficiency of the DSC agrees well with experiment, with deviations below 1-2%. Light harvesting efficiency, incident photon-to-electron conversion efficiency and the current-voltage characteristics can also be well reproduced. The predicted efficiency can serve either as an ideal limit for optimizing the photovoltaic performance of a given dye, or as a virtual device closely mimicking the performance of a real device under different experimental settings.

  3. An Efficient Implementation of Partial Condensing for Nonlinear Model Predictive Control

    DEFF Research Database (Denmark)

    Frison, Gianluca; Kouzoupis, Dimitris; Jørgensen, John Bagterp

    2016-01-01

    Partial (or block) condensing is a recently proposed technique to reformulate a Model Predictive Control (MPC) problem into a form more suitable for structure-exploiting Quadratic Programming (QP) solvers. It trades off horizon length for input vector size, and this degree of freedom can be employed...
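    The trade-off can be made concrete with a sketch of condensing one block of M stages (pure-Python matrix helpers; the example system is hypothetical): a horizon of N stages becomes N/M stages whose input vector stacks M original inputs.

```python
# Hedged sketch of partial condensing for linear MPC: over a block of M
# steps, x_{k+M} = A^M x_k + [A^(M-1)B, ..., AB, B] [u_k; ...; u_{k+M-1}],
# so horizon length shrinks by M while the input dimension grows by M.

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def hstack(blocks):
    return [sum((blk[i] for blk in blocks), []) for i in range(len(blocks[0]))]

def condense_block(A, B, M):
    """Return (A_bar, B_bar) with x_{k+M} = A_bar x_k + B_bar u_block."""
    Abar = [[1.0 if i == j else 0.0 for j in range(len(A))]
            for i in range(len(A))]
    cols = []
    for _ in range(M):
        cols = [matmul(A, c) for c in cols]   # shift older inputs through A
        cols.append(B)
        Abar = matmul(A, Abar)
    return Abar, hstack(cols)
```

    With M = 1 the sparse formulation is unchanged and with M = N full condensing is recovered; intermediate M lets the QP solver balance horizon length against block size.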

  4. An efficient model for predicting mixing lengths in serial pumping of petroleum products

    Energy Technology Data Exchange (ETDEWEB)

    Baptista, Renan Martins [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil). Centro de Pesquisas. Div. de Explotacao]. E-mail: renan@cenpes.petrobras.com.br; Rachid, Felipe Bastos de Freitas [Universidade Federal Fluminense, Niteroi, RJ (Brazil). Dept. de Engenharia Mecanica]. E-mail: rachid@mec.uff.br; Araujo, Jose Henrique Carneiro de [Universidade Federal Fluminense, Niteroi, RJ (Brazil). Dept. de Ciencia da Computacao]. E-mail: jhca@dcc.ic.uff.br

    2000-07-01

    This paper presents a new model for estimating the mixing volumes arising in batch transfers in multiproduct pipelines. The novel features of the model are the incorporation of flow rate variation with time and the use of a more precise effective dispersion coefficient, which is considered to depend on the concentration. The governing equation of the model forms a nonlinear initial value problem that is solved with a predictor-corrector finite difference method. A comparison among the theoretical predictions of the proposed model, a field test and other classical procedures shows that the proposed model gives the best estimate over the whole range of admissible concentrations investigated. (author)
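    The predictor-corrector idea can be sketched with Heun's method for a generic initial value problem dy/dt = f(t, y). This is a stand-in, not the paper's scheme: their method discretizes the governing transport equation, with the concentration-dependent dispersion coefficient entering through the right-hand side.

```python
# Hedged sketch: Heun's method, an explicit predictor (Euler step)
# followed by a trapezoidal corrector, for dy/dt = f(t, y).

def heun(f, y0, t0, t1, n):
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        pred = y + h * f(t, y)                        # predictor
        y = y + 0.5 * h * (f(t, y) + f(t + h, pred))  # corrector
        t += h
    return y
```

    The corrector re-evaluates the slope at the predicted point, giving second-order accuracy at the cost of one extra function evaluation per step.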

  5. Spatial extrapolation of light use efficiency model parameters to predict gross primary production

    Directory of Open Access Journals (Sweden)

    Karsten Schulz

    2011-12-01

    To capture the spatial and temporal variability of gross primary production, a key component of the global carbon cycle, the light use efficiency modeling approach combined with remote sensing data has been shown to be well suited. Typically, the model parameters, such as the maximum light use efficiency, are either set to a universal constant or to land-class-dependent values stored in look-up tables. In this study, we employ the machine learning technique support vector regression to explicitly relate the parameters of a light use efficiency model calibrated at several FLUXNET sites to site-specific characteristics obtained from meteorological measurements, ecological estimates and remote sensing data. A feature selection algorithm extracts the relevant site characteristics in a cross-validation, leading to an individual set of characteristic attributes for each parameter. With this set of attributes, the model parameters can be estimated at sites where calibration is not possible due to the absence of eddy covariance flux measurements, finally allowing a spatially continuous model application. The performance of the spatial extrapolation scheme is evaluated with a cross-validation approach, which shows the methodology to be well suited to recapture the variability of gross primary production across the study sites.

  6. Model predictive control technologies for efficient and flexible power consumption in refrigeration systems

    DEFF Research Database (Denmark)

    Hovgaard, Tobias Gybel; Larsen, Lars F. S.; Edlund, Kristian;

    2012-01-01

    In this paper we describe a novel economic-optimizing Model Predictive Control (MPC) scheme that reduces operating costs by utilizing the thermal storage capabilities. A nonlinear optimization tool to handle a non-convex cost function is utilized for simulations with validated scenarios. In this way we ... for the system itself, while crucial services can be delivered to a future flexible and intelligent power grid (Smart Grid). Furthermore, we discuss a novel incorporation of probabilistic constraints and Second Order Cone Programming (SOCP) with economic MPC. A Finite Impulse Response (FIR) formulation of the system models allows us to describe and handle model as well as prediction uncertainties in this framework. This means we can demonstrate means for robustifying the performance of the controller.

  7. A Machine Learning based Efficient Software Reusability Prediction Model for Java Based Object Oriented Software

    Directory of Open Access Journals (Sweden)

    Surbhi Maggo

    2014-01-01

    Software reuse refers to the development of new software systems with the likelihood of completely or partially using existing components or resources with or without modification. Reusability is the measure of the ease with which previously acquired concepts and objects can be used in new contexts. It is a promising strategy for improvements in software quality, productivity and maintainability as it provides for cost effective, reliable (with the consideration that prior testing and use has eliminated bugs) and accelerated (reduced time to market) development of software products. In this paper we present an efficient automation model for the identification and evaluation of reusable software components to measure the reusability levels (high, medium or low) of procedure oriented and Java based (object oriented) software systems. The presented model uses a metric framework for the functional analysis of the object oriented software components that targets essential attributes of reusability analysis, also taking into consideration the Maintainability Index to account for partial reuse. Further, the machine learning algorithm LMNN is explored to establish relationships between the functional attributes. The model works at the functional level rather than at the structural level. The system is implemented as a tool in Java and the performance of the automation tool developed is recorded using criteria like precision, recall, accuracy and error rate. The results gathered indicate that the model can be effectively used as an efficient, accurate, fast and economic model for the identification of procedure based reusable components from the existing inventory of software resources.

  8. A coupled kinematics-energetics model for predicting energy efficient flapping flight.

    Science.gov (United States)

    Salehipour, Hesam; Willis, David J

    2013-02-07

    A new computational model based on an optimal power, wake-only aerodynamics method is presented to predict the interdependency of energetics and kinematics in bird and bat flight. The model is divided into offline, intermediate and online modules. In the offline module, a four-dimensional design space sweep is performed (lift, thrust, flapping amplitude and flapping frequency). In the intermediate stage, the physical characteristics of the animal are introduced (wing span, mass, wing area, aspect ratio, etc.), and a series of amplitude-frequency response surfaces are constructed for all viable flight speeds. In the online component, the amplitude-frequency response surfaces are mined for the specific flapping motions being considered. The method is applied to several biological examples including a medium sized fruit bat (Cynopterus brachyotis), and two birds: a thrush nightingale (Luscinia luscinia) and a budgerigar (Melopsittacus undulatus). For each of these animals, the power and kinematics predictions are compared with available experimental data. These examples demonstrate that this new method can reasonably predict animal flight energetics and kinematics.

  9. Antigen profiling analysis of vaccinia virus injected canine tumors: oncolytic virus efficiency predicted by boolean models.

    Science.gov (United States)

    Cecil, Alexander; Gentschev, Ivaylo; Adelfinger, Marion; Nolte, Ingo; Dandekar, Thomas; Szalay, Aladar A

    2014-01-01

    Virotherapy on the basis of oncolytic vaccinia virus (VACV) strains is a novel approach for cancer therapy. In this study we describe for the first time the use of dynamic Boolean modeling for tumor growth prediction of vaccinia virus GLV-1h68-injected canine tumors including canine mammary adenoma (ZMTH3), canine mammary carcinoma (MTH52c), canine prostate carcinoma (CT1258), and canine soft tissue sarcoma (STSA-1). Additionally, the STSA-1 xenografted mice were injected with either LIVP 1.1.1 or LIVP 5.1.1 vaccinia virus strains. Antigen profiling data of the four different vaccinia virus-injected canine tumors were obtained, analyzed and used to calculate differences in the tumor growth signaling network by virus strain and tumor type. Our model combines networks for apoptosis, MAPK, p53, WNT, Hedgehog, TK cell, interferon, and interleukin signaling. The in silico findings conform with in vivo findings of tumor growth. Boolean modeling describes tumor growth and remission semi-quantitatively with a good fit to the data obtained for all cancer type variants. At the same time it monitors all signaling activities as a basis for treatment planning according to antigen levels. Mitigation and elimination of VACV-susceptible tumor types as well as effects on the non-susceptible type CT1258 are predicted correctly. Thus the combination of antigen profiling and semi-quantitative modeling optimizes the therapy already before its start.
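The dynamic Boolean modeling mentioned above updates each node of a signaling network with a logic rule over the other nodes. A minimal synchronous-update sketch follows; the four-node wiring is a toy invention for illustration, not the paper's canine tumor model:

```python
def step(state, rules):
    """One synchronous update of a Boolean signaling network."""
    return {node: rule(state) for node, rule in rules.items()}

# Toy network (hypothetical wiring): the virus drives apoptosis unless a
# growth-dependent survival signal blocks it.
rules = {
    "virus":     lambda s: s["virus"],                       # external input
    "survival":  lambda s: s["growth"] and not s["virus"],
    "growth":    lambda s: s["growth"] and not s["apoptosis"],
    "apoptosis": lambda s: s["virus"] and not s["survival"],
}

state = {"virus": True, "survival": True, "growth": True, "apoptosis": False}
for _ in range(4):   # iterate the synchronous dynamics to a fixed point
    state = step(state, rules)
```

Iterating reaches a fixed point in which growth is switched off and apoptosis on, the kind of qualitative remission trajectory such models are compared against in vivo data.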

  10. Neural and Hybrid Modeling: An Alternative Route to Efficiently Predict the Behavior of Biotechnological Processes Aimed at Biofuels Obtainment

    Science.gov (United States)

    Saraceno, Alessandra; Calabrò, Vincenza; Iorio, Gabriele

    2014-01-01

    The present paper was aimed at showing that advanced modeling techniques, based either on artificial neural networks or on hybrid systems, might efficiently predict the behavior of two biotechnological processes designed for the obtainment of second-generation biofuels from waste biomasses. In particular, the enzymatic transesterification of waste-oil glycerides, the key step for the obtainment of biodiesel, and the anaerobic digestion of agroindustry wastes to produce biogas were modeled. It was proved that the proposed modeling approaches provided very accurate predictions of systems behavior. Both neural network and hybrid modeling definitely represented a valid alternative to traditional theoretical models, especially when comprehensive knowledge of the metabolic pathways, of the true kinetic mechanisms, and of the transport phenomena involved in biotechnological processes was difficult to be achieved. PMID:24516363

  11. Neural and hybrid modeling: an alternative route to efficiently predict the behavior of biotechnological processes aimed at biofuels obtainment.

    Science.gov (United States)

    Curcio, Stefano; Saraceno, Alessandra; Calabrò, Vincenza; Iorio, Gabriele

    2014-01-01

    The present paper was aimed at showing that advanced modeling techniques, based either on artificial neural networks or on hybrid systems, might efficiently predict the behavior of two biotechnological processes designed for the obtainment of second-generation biofuels from waste biomasses. In particular, the enzymatic transesterification of waste-oil glycerides, the key step for the obtainment of biodiesel, and the anaerobic digestion of agroindustry wastes to produce biogas were modeled. It was proved that the proposed modeling approaches provided very accurate predictions of systems behavior. Both neural network and hybrid modeling definitely represented a valid alternative to traditional theoretical models, especially when comprehensive knowledge of the metabolic pathways, of the true kinetic mechanisms, and of the transport phenomena involved in biotechnological processes was difficult to be achieved.

  12. New efficient optimizing techniques for Kalman filters and numerical weather prediction models

    Science.gov (United States)

    Famelis, Ioannis; Galanis, George; Liakatas, Aristotelis

    2016-06-01

    The need for accurate local environmental predictions and simulations beyond classical meteorological forecasts has been increasing in recent years due to the great number of applications that are directly or indirectly affected: renewable energy resource assessment, natural hazards early warning systems, global warming and questions on climate change can be listed among them. Within this framework, the utilization of numerical weather and wave prediction systems in conjunction with advanced statistical techniques that support the elimination of the model bias and the reduction of the error variability may successfully address the above issues. In the present work, new optimization methods are studied and tested in selected areas of Greece where the use of renewable energy sources is of critical importance. The added value of the proposed work is due to the solid mathematical background adopted, making use of Information Geometry and statistical techniques, new versions of Kalman filters and state-of-the-art numerical analysis tools.
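The bias-elimination role of the Kalman filters mentioned above can be illustrated with the simplest case: a scalar filter tracking the slowly varying systematic error of a forecast stream under a random-walk bias model. The noise variances q and r below are illustrative, not tuned values from the study:

```python
def kalman_bias(errors, q=0.01, r=1.0):
    """Scalar Kalman filter tracking a random-walk forecast bias.
    errors: stream of (observation - raw forecast); q: process noise
    variance; r: observation noise variance (both illustrative)."""
    b, p = 0.0, 1.0               # bias estimate and its variance
    for e in errors:
        p += q                    # predict: bias persists, uncertainty grows
        k = p / (p + r)           # Kalman gain
        b += k * (e - b)          # correct with the newly observed error
        p *= 1.0 - k
    return b

# a forecast stream that runs consistently 2 units below the observations
bias = kalman_bias([2.0] * 200)
corrected = 10.0 + bias           # debiased version of a raw forecast of 10.0
```

Published Kalman post-processing schemes for numerical weather prediction extend this to vector or polynomial bias states; the recursion is the same.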

  13. Ligand efficiency-based support vector regression models for predicting bioactivities of ligands to drug target proteins.

    Science.gov (United States)

    Sugaya, Nobuyoshi

    2014-10-27

    The concept of ligand efficiency (LE) indices is widely accepted throughout the drug design community and is frequently used in a retrospective manner in the process of drug development. For example, LE indices are used to investigate LE optimization processes of already-approved drugs and to re-evaluate hit compounds obtained from structure-based virtual screening methods and/or high-throughput experimental assays. However, LE indices could also be applied in a prospective manner to explore drug candidates. Here, we describe the construction of machine learning-based regression models in which LE indices are adopted as an end point and show that LE-based regression models can outperform regression models based on pIC50 values. In addition to pIC50 values traditionally used in machine learning studies based on chemogenomics data, three representative LE indices (ligand lipophilicity efficiency (LLE), binding efficiency index (BEI), and surface efficiency index (SEI)) were adopted, then used to create four types of training data. We constructed regression models by applying a support vector regression (SVR) method to the training data. In cross-validation tests of the SVR models, the LE-based SVR models showed higher correlations between the observed and predicted values than the pIC50-based models. Application tests to new data displayed that, generally, the predictive performance of SVR models follows the order SEI > BEI > LLE > pIC50. Close examination of the distributions of the activity values (pIC50, LLE, BEI, and SEI) in the training and validation data implied that the performance order of the SVR models may be ascribed to the much higher diversity of the LE-based training and validation data. In the application tests, the LE-based SVR models can offer better predictive performance of compound-protein pairs with a wider range of ligand potencies than the pIC50-based models. 
This finding strongly suggests that LE-based SVR models are better than pIC50-based models.
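The three LE indices used above have simple, commonly cited definitions: LLE = pIC50 − cLogP, BEI = pIC50 / MW (MW in kDa), and SEI = pIC50 / (PSA / 100 Å²). A direct transcription, with a hypothetical example ligand:

```python
def lle(pic50, clogp):
    """Ligand lipophilicity efficiency: potency minus lipophilicity."""
    return pic50 - clogp

def bei(pic50, mw_da):
    """Binding efficiency index: potency per kilodalton of molecular weight."""
    return pic50 / (mw_da / 1000.0)

def sei(pic50, psa_a2):
    """Surface efficiency index: potency per 100 A^2 of polar surface area."""
    return pic50 / (psa_a2 / 100.0)

# hypothetical ligand: pIC50 = 8.0, cLogP = 3.5, MW = 400 Da, PSA = 80 A^2
indices = (lle(8.0, 3.5), bei(8.0, 400.0), sei(8.0, 80.0))
```

Any of these values can replace pIC50 as the regression end point, which is exactly the swap the study evaluates.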

  14. Efficient Prediction of Progesterone Receptor Interactome Using a Support Vector Machine Model

    Directory of Open Access Journals (Sweden)

    Ji-Long Liu

    2015-03-01

    Protein-protein interaction (PPI) is essential for almost all cellular processes and identification of PPI is a crucial task for biomedical researchers. So far, most computational studies of PPI are intended for pair-wise prediction. Theoretically, predicting protein partners for a single protein is likely a simpler problem. Given enough data for a particular protein, the results can be more accurate than general PPI predictors. In the present study, we assessed the potential of using the support vector machine (SVM) model with selected features centered on a particular protein for PPI prediction. As a proof-of-concept study, we applied this method to identify the interactome of progesterone receptor (PR), a protein which is essential for coordinating female reproduction in mammals by mediating the actions of ovarian progesterone. We achieved an accuracy of 91.9%, sensitivity of 92.8% and specificity of 91.2%. Our method is generally applicable to other proteins and therefore may be of help in guiding biomedical experiments.
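The reported accuracy, sensitivity and specificity come straight from the classifier's confusion matrix. A small helper showing the definitions; the example labels are illustrative, not the PR interactome data:

```python
def confusion_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall on positives) and specificity
    (recall on negatives) from binary labels."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)
    tn = sum(1 for t, p in pairs if t == 0 and p == 0)
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)
    return ((tp + tn) / len(pairs),      # accuracy
            tp / (tp + fn),              # sensitivity
            tn / (tn + fp))              # specificity

# illustrative labels: 1 = interacting pair, 0 = non-interacting
acc, sens, spec = confusion_metrics([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1])
```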

  15. Towards 12% stabilised efficiency in single junction polymorphous silicon solar cells: experimental developments and model predictions

    Directory of Open Access Journals (Sweden)

    Abolmasov Sergey

    2016-01-01

    We have combined recent experimental developments in our laboratory with modelling to devise ways of maximising the stabilised efficiency of hydrogenated amorphous silicon (a-Si:H) PIN solar cells. The cells were fabricated using the conventional plasma enhanced chemical vapour deposition (PECVD) technique at various temperatures, pressures and gas flow ratios. A detailed electrical-optical simulator was used to examine the effect of using wide band gap P- and N-doped μc-SiOx:H layers, as well as a MgF2 anti-reflection coating (ARC) on cell performance. We find that with the best quality a-Si:H so far produced in our laboratory and optimised deposition parameters for the corresponding solar cell, we could not attain a 10% stabilised efficiency due to the high stabilised defect density of a-Si:H, although this landmark has been achieved in some laboratories. On the other hand, a close cousin of a-Si:H, hydrogenated polymorphous silicon (pm-Si:H), a nano-structured silicon thin film produced by PECVD under conditions close to powder formation, has been developed in our laboratory. This material has been shown to have a lower initial and stabilised defect density as well as higher hole mobility than a-Si:H. Modelling indicates that it is possible to attain stabilised efficiencies of 12% when pm-Si:H is incorporated in a solar cell, deposited in a NIP configuration to reduce the P/I interface defects and combined with P- and N-doped μc-SiOx:H layers and a MgF2 ARC.

  16. Towards 12% stabilised efficiency in single junction polymorphous silicon solar cells: experimental developments and model predictions

    Science.gov (United States)

    Abolmasov, Sergey; Cabarrocas, Pere Roca i.; Chatterjee, Parsathi

    2016-01-01

    We have combined recent experimental developments in our laboratory with modelling to devise ways of maximising the stabilised efficiency of hydrogenated amorphous silicon (a-Si:H) PIN solar cells. The cells were fabricated using the conventional plasma enhanced chemical vapour deposition (PECVD) technique at various temperatures, pressures and gas flow ratios. A detailed electrical-optical simulator was used to examine the effect of using wide band gap P- and N-doped μc-SiOx:H layers, as well as a MgF2 anti-reflection coating (ARC) on cell performance. We find that with the best quality a-Si:H so far produced in our laboratory and optimised deposition parameters for the corresponding solar cell, we could not attain a 10% stabilised efficiency due to the high stabilised defect density of a-Si:H, although this landmark has been achieved in some laboratories. On the other hand, a close cousin of a-Si:H, hydrogenated polymorphous silicon (pm-Si:H), a nano-structured silicon thin film produced by PECVD under conditions close to powder formation, has been developed in our laboratory. This material has been shown to have a lower initial and stabilised defect density as well as higher hole mobility than a-Si:H. Modelling indicates that it is possible to attain stabilised efficiencies of 12% when pm-Si:H is incorporated in a solar cell, deposited in a NIP configuration to reduce the P/I interface defects and combined with P- and N-doped μc-SiOx:H layers and a MgF2 ARC.

  17. An Efficient Piecewise Linear Model for Predicting Activity of Caspase-3 Inhibitors

    Directory of Open Access Journals (Sweden)

    Alireza Foroumadi

    2012-09-01

    Background and purpose of the study: Multimodal distribution of descriptors makes it more difficult to fit a single global model to the entire data set in quantitative structure activity relationship (QSAR) studies. Methods: The linear (multiple linear regression; MLR) and non-linear (artificial neural network; ANN) approaches, and an approach based on the "Extended Classifier System in Function approximation" (XCSF), were applied herein to model the biological activity of 658 caspase-3 inhibitors. Results: Various kinds of molecular descriptors were calculated to represent the molecular structures of the compounds. The original data set was partitioned into training and test sets by the K-means classification method. Prediction error on the test data set indicated that the XCSF, as a local model, estimates caspase-3 inhibition activity better than global models such as MLR and ANN. The atom-centered fragment type CR2X2, electronegativity, polarizability, atomic radius, and the lipophilicity of the molecule were the main independent factors contributing to caspase-3 inhibition activity. Conclusions: The results of this study may be exploited for further design of novel caspase-3 inhibitors.
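The local-versus-global distinction drawn above can be illustrated with the simplest possible local model: partition the descriptor axis (here by a fixed split rather than K-means) and fit a separate least-squares line in each region. The data and split point are invented for illustration:

```python
def fit_line(points):
    """Ordinary least squares for y = a*x + b."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    a = (sum((x - mx) * (y - my) for x, y in points)
         / sum((x - mx) ** 2 for x, _ in points))
    return a, my - a * mx

def fit_piecewise(points, split):
    """One local linear model per region of the descriptor axis."""
    left = [p for p in points if p[0] < split]
    right = [p for p in points if p[0] >= split]
    return fit_line(left), fit_line(right)

def predict_pw(models, split, x):
    a, b = models[0] if x < split else models[1]
    return a * x + b

# invented bimodal data: y = 2x + 1 on the left cluster, y = 20 - x on the right
data = [(0, 1), (1, 3), (2, 5), (3, 7), (10, 10), (11, 9), (12, 8), (13, 7)]
models = fit_piecewise(data, 5.0)
```

A single global line would fit this bimodal data poorly, which is the motivation the abstract gives for local models such as XCSF.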

  18. A cascaded QSAR model for efficient prediction of overall power conversion efficiency of all-organic dye-sensitized solar cells.

    Science.gov (United States)

    Li, Hongzhi; Zhong, Ziyan; Li, Lin; Gao, Rui; Cui, Jingxia; Gao, Ting; Hu, Li Hong; Lu, Yinghua; Su, Zhong-Min; Li, Hui

    2015-05-30

    A cascaded model is proposed to establish the quantitative structure-activity relationship (QSAR) between the overall power conversion efficiency (PCE) and quantum chemical molecular descriptors of all-organic dye sensitizers. The cascaded model is a two-level network in which the outputs of the first level (JSC, VOC, and FF) are the inputs of the second level, and the ultimate end-point is the overall PCE of dye-sensitized solar cells (DSSCs). The model combines quantum chemical methods and machine learning methods, further including quantum chemical calculations, data division, feature selection, regression, and validation steps. To improve the efficiency of the model and reduce the redundancy and noise of the molecular descriptors, six feature selection methods (multiple linear regression, genetic algorithms, mean impact value, forward selection, backward elimination, and +n-m algorithm) are used with the support vector machine. The best established cascaded model predicts the PCE values of DSSCs with a MAE of 0.57%, which is about 10% of the mean PCE (5.62%). The validation parameters according to the OECD principles are R² (0.75), Q² (0.77), and Q²cv (0.76), which demonstrate the good fit, predictivity, and robustness of the model. Additionally, the applicability domain of the cascaded QSAR model is defined for further application. This study demonstrates that the established cascaded model is able to effectively predict the PCE for organic dye sensitizers with very low cost and relatively high accuracy, providing a useful tool for the design of dye sensitizers with high PCE.
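The first-level outputs feed the second level through the standard photovoltaic relation PCE = Jsc · Voc · FF / Pin, with Pin = 100 mW/cm² under AM1.5 illumination; the example cell values below are illustrative:

```python
def pce_percent(jsc, voc, ff, pin=100.0):
    """Overall power conversion efficiency in percent.
    jsc: short-circuit current density [mA/cm^2]; voc: open-circuit
    voltage [V]; ff: fill factor (0-1); pin: input power [mW/cm^2]."""
    return jsc * voc * ff / pin * 100.0

# illustrative DSSC: Jsc = 14 mA/cm^2, Voc = 0.7 V, FF = 0.7
eta = pce_percent(14.0, 0.7, 0.7)
```

Because PCE is this deterministic product, a cascaded model only needs to learn the three intermediate quantities from the molecular descriptors.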

  19. Efficiency test of modeled empirical equations in predicting soil loss from ephemeral gully erosion around Mubi, Northeast Nigeria

    Directory of Open Access Journals (Sweden)

    Ijasini John Tekwa

    2016-03-01

    A field study was carried out to assess soil loss from ephemeral gully (EG) erosion at 6 different locations (Digil, Vimtim, Muvur, Gella, Lamorde and Madanya) around the Mubi area between April, 2008 and October, 2009. Each location consisted of 3 watershed sites from where data was collected. EG shape, land use, and conservation practices were noted, while EG length, width, and depth were measured. Physico-chemical properties of the soils were studied in the field and laboratory. Soil loss was both measured and predicted using modeled empirical equations. Results showed that the soils are heterogeneous and lie on flat to hilly topographies with few grasses, shrubs and tree vegetations. The soils comprised sand fractions that predominated the texture, with considerable silt and clay contents. The empirical soil loss was generally related to the measured soil loss and the predictions were widely reliable at all sites, regardless of season. The measured and empirical aggregate soil loss were more related in terms of volume of soil loss (VSL) (r2=0.93) and mass of soil loss (MSL) (r2=0.92) than area of soil loss (ASL) (r2=0.27). The empirical estimates of VSL and MSL were consistently higher at Muvur (less vegetation) and lower at Madanya and Gella (denser vegetations) in both years. The maximum efficiency (Mse) of the empirical equation in predicting ASL was between 1.41 (Digil) and 89.07 (Lamorde), while the Mse was higher at Madanya (2.56) and lowest at Vimtim (15.66) in terms of VSL prediction efficiencies. The Mse also ranged from 1.84 (Madanya) to 15.74 (Vimtim) in respect of MSL predictions. These results led to the recommendation that soil conservationists, farmers, private and/or government agencies should implement the empirical model in erosion studies around the Mubi area.

  20. Plateletpheresis efficiency and mathematical correction of software-derived platelet yield prediction: A linear regression and ROC modeling approach.

    Science.gov (United States)

    Jaime-Pérez, José Carlos; Jiménez-Castillo, Raúl Alberto; Vázquez-Hernández, Karina Elizabeth; Salazar-Riojas, Rosario; Méndez-Ramírez, Nereida; Gómez-Almaguer, David

    2017-10-01

    Advances in automated cell separators have improved the efficiency of plateletpheresis and the possibility of obtaining double products (DP). We assessed cell processor accuracy of predicted platelet (PLT) yields with the goal of a better prediction of DP collections. This retrospective proof-of-concept study included 302 plateletpheresis procedures performed on a Trima Accel v6.0 at the apheresis unit of a hematology department. Donor variables, software predicted yield and actual PLT yield were statistically evaluated. Software prediction was optimized by linear regression analysis and its optimal cut-off to obtain a DP assessed by receiver operating characteristic curve (ROC) modeling. Three hundred and two plateletpheresis procedures were performed; in 271 (89.7%) occasions, donors were men and in 31 (10.3%) women. Pre-donation PLT count had the best direct correlation with actual PLT yield (r = 0.486). Linear regression analysis accurately corrected this underestimation, and ROC analysis identified a precise cut-off to reliably predict a DP.
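One common way to derive such a cut-off from a ROC curve is to maximise Youden's J statistic (sensitivity + specificity − 1) over candidate thresholds; whether this study used Youden's J or another criterion is not stated in the abstract. A sketch with invented yield data:

```python
def best_cutoff(scores, labels):
    """Threshold on a predicted score that maximises Youden's J
    (sensitivity + specificity - 1)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        sens = sum(s >= t for s in pos) / len(pos)
        spec = sum(s < t for s in neg) / len(neg)
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# invented corrected yield predictions; label 1 = collection yielded a DP
cutoff, j = best_cutoff([4.0, 5.2, 5.5, 6.1, 6.8, 7.4], [0, 0, 0, 1, 1, 1])
```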

  1. Efficiency of neural network-based combinatorial model predicting optimal culture conditions for maximum biomass yields in hairy root cultures.

    Science.gov (United States)

    Mehrotra, Shakti; Prakash, O; Khan, Feroz; Kukreja, A K

    2013-02-01

    KEY MESSAGE: An ANN-based combinatorial model is proposed and its efficiency is assessed for the prediction of optimal culture conditions to achieve maximum productivity in a bioprocess in terms of high biomass. A neural network approach is utilized in combination with the hidden Markov concept to assess the optimal values of different environmental factors that result in maximum biomass productivity of cultured tissues after a definite culture duration. Five hidden Markov models (HMMs) were derived for five test culture conditions, i.e. pH of liquid growth medium, volume of medium per culture vessel, sucrose concentration (%w/v) in growth medium, nitrate concentration (g/l) in the medium and finally the density of initial inoculum (g fresh weight) per culture vessel, and their corresponding fresh weight biomass. The artificial neural network (ANN) model was represented as a function of these five Markov models, and the overall simulation of fresh weight biomass was done with this combinatorial ANN-HMM. The empirical results of Rauwolfia serpentina hairy roots were taken as a model and compared with simulated results obtained from pure ANN and ANN-HMMs. The stochastic testing and Cronbach's α-values of the pure and combinatorial models revealed more internal consistency and a more skewed character in the histogram of ANN-HMM (0.4635) compared to pure ANN (0.3804). The simulated results for optimal conditions of maximum fresh weight production obtained from the ANN-HMM and ANN models closely resemble the experimentally optimized culture conditions based on which the highest fresh weight was obtained. However, only 2.99% deviation from the experimental values was observed with the combinatorial model, compared to the pure ANN model (5.44%). This comparison showed a 45% better potential of the combinatorial model for the prediction of optimal culture conditions for the best growth of hairy root cultures.

  2. Deriving a light use efficiency model from eddy covariance flux data for predicting daily gross primary production across biomes

    Science.gov (United States)

    Yuan, W.; Liu, S.; Zhou, G.; Tieszen, L.L.; Baldocchi, D.; Bernhofer, C.; Gholz, H.; Goldstein, Allen H.; Goulden, M.L.; Hollinger, D.Y.; Hu, Y.; Law, B.E.; Stoy, Paul C.; Vesala, T.; Wofsy, S.C.

    2007-01-01

    The quantitative simulation of gross primary production (GPP) at various spatial and temporal scales has been a major challenge in quantifying the global carbon cycle. We developed a light use efficiency (LUE) daily GPP model from eddy covariance (EC) measurements. The model, called EC-LUE, is driven by only four variables: normalized difference vegetation index (NDVI), photosynthetically active radiation (PAR), air temperature, and the Bowen ratio of sensible to latent heat flux (used to calculate moisture stress). The EC-LUE model relies on two assumptions: First, that the fraction of absorbed PAR (fPAR) is a linear function of NDVI; Second, that the realized light use efficiency, calculated from a biome-independent invariant potential LUE, is controlled by air temperature or soil moisture, whichever is most limiting. The EC-LUE model was calibrated and validated using 24,349 daily GPP estimates derived from 28 eddy covariance flux towers from the AmeriFlux and EuroFlux networks, covering a variety of forests, grasslands and savannas. The model explained 85% and 77% of the observed variations of daily GPP for all the calibration and validation sites, respectively. A comparison with GPP calculated from the Moderate Resolution Imaging Spectroradiometer (MODIS) indicated that the EC-LUE model predicted GPP that better matched tower data across these sites. The realized LUE was predominantly controlled by moisture conditions throughout the growing season, and controlled by temperature only at the beginning and end of the growing season. The EC-LUE model is an alternative approach that makes it possible to map daily GPP over large areas because (1) the potential LUE is invariant across various land cover types and (2) all driving forces of the model can be derived from remote sensing data or existing climate observation networks.
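The model logic described above can be written down directly: GPP = PAR × fPAR × εpot × min(Ts, Ws), with fPAR linear in NDVI and the moisture scalar derived from the Bowen ratio. The coefficient values below are illustrative placeholders, not the calibrated EC-LUE values:

```python
def ec_lue_gpp(par, ndvi, t_scalar, bowen_ratio, eps_pot=2.14):
    """EC-LUE-style daily GPP sketch: PAR * fPAR * eps_pot * min(Ts, Ws).
    fPAR is assumed linear in NDVI (coefficients illustrative) and the
    moisture scalar Ws is taken as the evaporative fraction
    1 / (1 + Bowen ratio). eps_pot is an illustrative potential LUE."""
    fpar = max(0.0, min(1.0, 1.24 * ndvi - 0.168))  # assumed fPAR-NDVI relation
    ws = 1.0 / (1.0 + bowen_ratio)                  # moisture scalar
    return par * fpar * eps_pot * min(t_scalar, ws)

# growing-season example: moisture (Ws = 0.5) limits instead of temperature (Ts = 0.9)
gpp = ec_lue_gpp(par=10.0, ndvi=0.7, t_scalar=0.9, bowen_ratio=1.0)
```

The `min(t_scalar, ws)` term encodes the paper's most-limiting-factor assumption: moisture governs most of the growing season, temperature only its edges.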

  3. Efficient Implementation of Solvers for Linear Model Predictive Control on Embedded Devices

    DEFF Research Database (Denmark)

    Frison, Gianluca; Kwame Minde Kufoalor, D.; Imsland, Lars

    2014-01-01

    This paper proposes a novel approach for the efficient implementation of solvers for linear MPC on embedded devices. The main focus is to explain in detail the approach used to optimize the linear algebra for selected low-power embedded devices, and to show how the high-performance implementation of a single routine (the matrix-matrix multiplication gemm) can speed up an interior-point method for linear MPC. The results show that the high-performance MPC obtained using the proposed approach is several times faster than the current state-of-the-art IP method for linear MPC on embedded devices.

  4. An efficient model for prediction of underwater noise due to pile driving at large ranges

    NARCIS (Netherlands)

    Nijhof, M.J.J.; Binnerts, B.; Jong, C.A.F. de; Ainslie, M.A.

    2014-01-01

    Modelling the sound levels in the water column due to pile driving operations nearby and out to large distances from the pile is crucial in assessing the likely impact on marine life. Standard numerical techniques for modelling the sound radiation from mechanical structures, such as the finite element method, […]

  6. Efficiency of diagnostic model to predict recurrent suicidal incidents in diverse world communities

    Science.gov (United States)

    Vatsalya, Vatsalya; Chandras, Kan; Srivastava, Shweta; Karch, Robert

    2014-01-01

    Suicidal attempts have a very significant effect on society, and they also reflect on the efforts of the supporting health care and counseling facilities and the mental health professionals involved. The impact of suicide is further magnified by the needs of persons who attempt suicide multiple times, requiring emergency health care and rehabilitation. Preventing such activities becomes a major task for the support-providing agencies as soon as patients with such tendencies are identified. There are repetitive traits that can be observed during the entire therapeutic program among the high-risk individuals who are susceptible to this kind of activity, and such traits allow for specific profiling. The aim of the instrument is to prevent the recurrence of suicidal attempts by patients in various world regions, which may have significantly higher and concerning suicide rates. This profile has been constructed from the various parameters recognized in the statistical analysis of patient populations that have been identified with, or are under treatment for, suicidal behavior. This instrument is developed to predict the probability that segments of the population may attempt suicide repetitively, by matching the parameters of the profile with those of the patient pool. Building a profile for the purpose of predicting behavior of this kind can strengthen intervention strategies more comprehensively and reduce such incidents and the associated health care requirements and expenses. PMID:25237407

  7. Bottles as models: predicting the effects of varying swimming speed and morphology on size selectivity and filtering efficiency in fishes.

    Science.gov (United States)

    Paig-Tran, E W Misty; Bizzarro, Joseph J; Strother, James A; Summers, Adam P

    2011-05-15

    We created physical models based on the morphology of ram suspension-feeding fishes to better understand the roles morphology and swimming speed play in particle retention, size selectivity and filtration efficiency during feeding events. We varied the buccal length, flow speed and architecture of the gills slits, including the number, size, orientation and pore size/permeability, in our models. Models were placed in a recirculating flow tank with slightly negatively buoyant plankton-like particles (~20-2000 μm) collected at the simulated esophagus and gill rakers to locate the highest density of particle accumulation. Particles were captured through sieve filtration, direct interception and inertial impaction. Changing the number of gill slits resulted in a change in the filtration mechanism of particles from a bimodal filter, with very small (≤ 50 μm) and very large (>1000 μm) particles collected, to a filter that captured medium-sized particles (101-1000 μm). The number of particles collected on the gill rakers increased with flow speed and skewed the size distribution towards smaller particles (51-500 μm). Small pore sizes (105 and 200 μm mesh size) had the highest filtration efficiencies, presumably because sieve filtration played a significant role. We used our model to make predictions about the filtering capacity and efficiency of neonatal whale sharks. These results suggest that the filtration mechanics of suspension feeding are closely linked to an animal's swimming speed and the structural design of the buccal cavity and gill slits.

  8. An Efficient Deterministic Approach to Model-based Prediction Uncertainty Estimation

    Science.gov (United States)

    2012-09-01

    always choose the end-points to determine the RUL bounds; however, in this case the UT does this automatically with the added benefit of being able to...approaches for model-based prognostics. In Proceedings of the 2012 IEEE Aerospace Conference. Edwards, D., Orchard, M. E., Tang, L., Goebel, K., & Vacht

  9. Distributed model predictive control of leader-follower systems using an interior point method with efficient computations

    OpenAIRE

    Necoara, Ion; Clipici, Dragos N.; Olaru, Sorin

    2013-01-01

    Standard model predictive control strategies imply the online computation of control inputs at each sampling instance, which traditionally limits this type of control scheme to systems with slow dynamics. This paper focuses on distributed model predictive control for large-scale systems comprised of interacting linear subsystems, where the online computations required for the control input can be distributed amongst them. A model predictive controller based on a distributed interior point met...

  10. Capability of the ‘Ball-Berry' model for predicting stomatal conductance and water use efficiency of potato leaves under different irrigation regimes

    DEFF Research Database (Denmark)

    Liu, Fulai; Andersen, Mathias Neumann; Jensen, Christian Richardt

    2009-01-01

    The capability of the ‘Ball-Berry' model (BB-model) in predicting stomatal conductance (gs) and water use efficiency (WUE) of potato (Solanum tuberosum L.) leaves under different irrigation regimes was tested using data from two independent pot experiments in 2004 and 2007. Data obtained from 2004... To account for the effects of soil water deficits on gs, a simple equation modifying the slope (m) based on the mean soil water potential (Ψs) in the soil columns was incorporated into the original BB-model. Compared with the original BB-model, the modified BB-model showed better predictability for both gs and WUE of potato leaves. The simulation results showed that the modified BB-model better simulated gs for the NI and DI treatments than the original BB-model, whilst the two models performed equally well for predicting gs of the FI and PRD treatments. Although both models had poor predictability for WUE (0.47
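The BB-model's core relation and the soil-water adjustment of its slope can be sketched as follows. The exponential form of the Ψs adjustment and every parameter value here are assumptions for illustration only; the abstract does not give the exact equation.

```python
import math

def ball_berry_gs(A, rh, cs, m=9.0, g0=0.01):
    """Original Ball-Berry model: gs = g0 + m * A * rh / cs,
    with A = net assimilation, rh = relative humidity at the leaf
    surface, cs = CO2 concentration at the leaf surface."""
    return g0 + m * A * rh / cs

def soil_adjusted_slope(m, psi_s, psi_ref=0.5):
    """Hypothetical soil-water adjustment of the slope m: the slope
    decays as the mean soil water potential psi_s (MPa, <= 0) becomes
    more negative. Both the exponential form and psi_ref are assumed."""
    return m * math.exp(psi_s / psi_ref)

# Well-watered vs. drying soil (illustrative numbers only)
gs_wet = ball_berry_gs(A=20.0, rh=0.6, cs=400.0)
gs_dry = ball_berry_gs(A=20.0, rh=0.6, cs=400.0,
                       m=soil_adjusted_slope(9.0, psi_s=-0.8))
```

The modification leaves the well-watered limit (Ψs near zero) unchanged and only reduces gs as the soil dries, which is the behavior the record describes for the NI and DI treatments.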

  11. Efficient predictive algorithms for image compression

    CERN Document Server

    Rosário Lucas, Luís Filipe; Maciel de Faria, Sérgio Manuel; Morais Rodrigues, Nuno Miguel; Liberal Pagliari, Carla

    2017-01-01

    This book discusses efficient prediction techniques for the current state-of-the-art High Efficiency Video Coding (HEVC) standard, focusing on the compression of a wide range of video signals, such as 3D video, Light Fields and natural images. The authors begin with a review of the state-of-the-art predictive coding methods and compression technologies for both 2D and 3D multimedia contents, which provides a good starting point for new researchers in the field of image and video compression. New prediction techniques that go beyond the standardized compression technologies are then presented and discussed. In the context of 3D video, the authors describe a new predictive algorithm for the compression of depth maps, which combines intra-directional prediction, with flexible block partitioning and linear residue fitting. New approaches are described for the compression of Light Field and still images, which enforce sparsity constraints on linear models. The Locally Linear Embedding-based prediction method is in...

  12. Combining Microbial Enzyme Kinetics Models with Light Use Efficiency Models to Predict CO2 and CH4 Ecosystem Exchange from Flooded and Drained Peatland Systems

    Science.gov (United States)

    Oikawa, P. Y.; Jenerette, D.; Knox, S. H.; Sturtevant, C. S.; Verfaillie, J. G.; Baldocchi, D. D.

    2014-12-01

    Under California's Cap-and-Trade program, companies are looking to invest in land-use practices that will reduce greenhouse gas (GHG) emissions. The Sacramento-San Joaquin River Delta is a drained cultivated peatland system and a large source of CO2. To slow soil subsidence and reduce CO2 emissions, there is growing interest in converting drained peatlands to wetlands. However, wetlands are large sources of CH4 that could offset CO2-based GHG reductions. The goal of our research is to provide accurate measurements and model predictions of the changes in GHG budgets that occur when drained peatlands are restored to wetland conditions. We have installed a network of eddy covariance towers across multiple land use types in the Delta and have been measuring CO2 and CH4 ecosystem exchange for multiple years. In order to upscale these measurements through space and time we are using these data to parameterize and validate a process-based biogeochemical model. To predict gross primary productivity (GPP), we are using a simple light use efficiency (LUE) model which requires estimates of light, leaf area index and air temperature and can explain 90% of the observed variation in GPP in a mature wetland. To predict ecosystem respiration we have adapted the Dual Arrhenius Michaelis-Menten (DAMM) model. The LUE-DAMM model allows accurate simulation of half-hourly net ecosystem exchange (NEE) in a mature wetland (r2=0.85). We are working to expand the model to pasture, rice and alfalfa systems in the Delta. To predict methanogenesis, we again apply a modified DAMM model, using simple enzyme kinetics. However CH4 exchange is complex and we have thus expanded the model to predict not only microbial CH4 production, but also CH4 oxidation, CH4 storage and the physical processes regulating the release of CH4 to the atmosphere. The CH4-DAMM model allows accurate simulation of daily CH4 ecosystem exchange in a mature wetland (r2=0.55) and robust estimates of annual CH4 budgets. The LUE
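The two model components named above can be sketched as follows. The Gaussian temperature scalar in the LUE part and every constant are illustrative assumptions, not the parameters fitted at the Delta sites.

```python
import math

R_GAS = 8.314e-3  # gas constant, kJ mol^-1 K^-1

def lue_gpp(par, fpar, eps_max, t_air, t_opt=25.0, t_width=15.0):
    """Light-use-efficiency GPP = eps_max * f(T) * fPAR * PAR.
    The Gaussian temperature scalar f(T) is an assumed down-regulation
    term; eps_max is the maximum light use efficiency."""
    f_t = math.exp(-((t_air - t_opt) / t_width) ** 2)
    return eps_max * f_t * fpar * par

def damm_flux(substrate, t_soil, alpha=5.0e8, ea=60.0, km=1.0e-4):
    """Dual Arrhenius Michaelis-Menten (DAMM) respiration/production:
    R = alpha * exp(-Ea / (R*T)) * [S] / (kM + [S]),
    i.e. an Arrhenius temperature response times Michaelis-Menten
    substrate limitation. All parameter values are placeholders."""
    tk = t_soil + 273.15
    return alpha * math.exp(-ea / (R_GAS * tk)) * substrate / (km + substrate)
```

Net ecosystem exchange in such a coupled scheme is then respiration minus GPP, with the same DAMM form reused (with different parameters) for CH4 production.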

  13. An efficient computational model to predict protonation at the amide nitrogen and reactivity along the C-N rotational pathway.

    Science.gov (United States)

    Szostak, Roman; Aubé, Jeffrey; Szostak, Michal

    2015-04-14

    N-Protonation of amides is critical in numerous biological processes, including amide bonds proteolysis and protein folding as well as in organic synthesis as a method to activate amide bonds towards unconventional reactivity. A computational model enabling prediction of protonation at the amide bond nitrogen atom along the C-N rotational pathway is reported. Notably, this study provides a blueprint for the rational design and application of amides with a controlled degree of rotation in synthetic chemistry and biology.

  14. 5D Modelling: An Efficient Approach for Creating Spatiotemporal Predictive 3D Maps of Large-Scale Cultural Resources

    Science.gov (United States)

    Doulamis, A.; Doulamis, N.; Ioannidis, C.; Chrysouli, C.; Grammalidis, N.; Dimitropoulos, K.; Potsiou, C.; Stathopoulou, E.-K.; Ioannides, M.

    2015-08-01

    Outdoor large-scale cultural sites are mostly sensitive to environmental, natural and human made factors, implying an imminent need for a spatio-temporal assessment to identify regions of potential cultural interest (material degradation, structuring, conservation). On the other hand, in Cultural Heritage research quite different actors are involved (archaeologists, curators, conservators, simple users) each of diverse needs. All these statements advocate that a 5D modelling (3D geometry plus time plus levels of details) is ideally required for preservation and assessment of outdoor large scale cultural sites, which is currently implemented as a simple aggregation of 3D digital models at different time and levels of details. The main bottleneck of such an approach is its complexity, making 5D modelling impossible to be validated in real life conditions. In this paper, a cost effective and affordable framework for 5D modelling is proposed based on a spatial-temporal dependent aggregation of 3D digital models, by incorporating a predictive assessment procedure to indicate which regions (surfaces) of an object should be reconstructed at higher levels of details at next time instances and which at lower ones. In this way, dynamic change history maps are created, indicating spatial probabilities of regions needed further 3D modelling at forthcoming instances. Using these maps, predictive assessment can be made, that is, to localize surfaces within the objects where a high accuracy reconstruction process needs to be activated at the forthcoming time instances. The proposed 5D Digital Cultural Heritage Model (5D-DCHM) is implemented using open interoperable standards based on the CityGML framework, which also allows the description of additional semantic metadata information. Visualization aspects are also supported to allow easy manipulation, interaction and representation of the 5D-DCHM geometry and the respective semantic information. The open source 3DCity

  15. Comparison of particle-wall interaction boundary conditions in the prediction of cyclone collection efficiency in computational fluid dynamics (CFD) modeling

    Energy Technology Data Exchange (ETDEWEB)

    Valverde Ramirez, M.; Coury, J.R.; Goncalves, J.A.S., E-mail: jasgon@ufscar.br [Universidade Federal de Sao Carlos (UFSCar), Sao Carlos, SP (Brazil). Departamento de Engenharia Quimica

    2009-07-01

    In recent years, many computational fluid dynamics (CFD) studies have appeared attempting to predict cyclone pressure drop and collection efficiency. While these studies have been able to predict pressure drop well, they have been only moderately successful in predicting collection efficiency. Part of the reason for this failure has been attributed to the relatively simple wall boundary conditions implemented in the commercially available CFD software, which are not capable of accurately describing the complex particle-wall interaction present in a cyclone. Accordingly, researchers have proposed a number of different boundary conditions in order to improve the model performance. This work implemented the critical velocity boundary condition through a user defined function (UDF) in the Fluent software and compared its predictions both with experimental data and with the predictions obtained when using Fluent's built-in boundary conditions. Experimental data was obtained from eight laboratory scale cyclones with varying geometric ratios. The CFD simulations were made using the software Fluent 6.3.26. (author)
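A sketch of the critical-velocity idea in plain Python, independent of any Fluent UDF API: a particle whose incident normal velocity is below a critical value is collected at the wall, otherwise it rebounds. The threshold and restitution coefficient are assumptions for illustration.

```python
def wall_interaction(v_normal, v_tangential, v_crit, restitution=0.8):
    """Critical-velocity particle-wall boundary condition (sketch).

    v_normal is the incident normal speed, taken positive towards the
    wall. Returns None when the particle is trapped (collected at the
    wall); otherwise returns the post-rebound (normal, tangential)
    velocity components, with the normal component reversed and damped.
    """
    if v_normal < v_crit:
        return None  # collected: too little kinetic energy to escape
    return (-restitution * v_normal, v_tangential)
```

In an actual CFD implementation this logic would sit inside the discrete-phase boundary-condition routine, with v_crit typically computed from particle size and material properties rather than fixed.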

  16. Simulation and Optimization of Artificial Neural Network Modeling for Prediction of Sorption Efficiency of Nanocellulose Fibers for Removal of Cd(II) Ions from Aqueous System

    Directory of Open Access Journals (Sweden)

    Abhishek KARDAM

    2014-06-01

    Full Text Available Simulation and optimization of an Artificial Neural Network (ANN) for modeling biosorption studies of cadmium removal using nanocellulose fibers (NCFs) was carried out. Experimental studies led to the standardization of the optimum conditions for the removal of cadmium ions, i.e. biomass dosage (0.5 g), test volume (200 ml), metal concentration (25 mg/l), pH (6.5) and contact time (40 min). A single-layer ANN model was developed to simulate the process and to predict the sorption efficiency of Cd(II) ions using NCFs. Different NN architectures were tested by varying network topology, resulting in excellent agreement between experimental outputs and ANN outputs. The findings indicated that ANN provided reasonable predictive performance for training, cross-validation and testing data sets (R2 = 0.998, 0.995, 0.992). A sensitivity analysis was carried out to assess the influence of the different independent parameters on the biosorption efficiency; pH > biomass dosage > metal concentration > contact time > test volume was found to be the order of significance. Simulations based on the developed ANN model can estimate the behavior of the biosorption process under different experimental conditions.
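A minimal single-hidden-layer network trained by stochastic gradient descent, sketched in pure Python. The architecture, learning rate, and toy data below are illustrative assumptions, not the topology or data set optimized in the study.

```python
import math, random

def init_net(n_in, n_hidden, seed=1):
    rng = random.Random(seed)
    w1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hidden)]
    w2 = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden)]
    return w1, w2

def forward(x, w1, w2):
    # tanh hidden layer, linear output
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in w1]
    return h, sum(v * hi for v, hi in zip(w2, h))

def sgd_epoch(data, w1, w2, lr=0.05):
    # one pass of stochastic gradient descent on squared error
    for x, t in data:
        h, y = forward(x, w1, w2)
        err = y - t
        for i, row in enumerate(w1):            # hidden-layer update
            g = err * w2[i] * (1.0 - h[i] ** 2)  # tanh' = 1 - h^2
            for j in range(len(row)):
                row[j] -= lr * g * x[j]
        for i in range(len(w2)):                 # output-layer update
            w2[i] -= lr * err * h[i]

def mse(data, w1, w2):
    return sum((forward(x, w1, w2)[1] - t) ** 2 for x, t in data) / len(data)

# Toy 'sorption efficiency' targets from normalized (dosage, pH, concentration)
data = [((d, p, c), 0.5 * d + 0.3 * p - 0.2 * c)
        for d in (0.2, 0.5, 0.8) for p in (0.3, 0.65, 1.0) for c in (0.1, 0.5)]
w1, w2 = init_net(3, 4)
before = mse(data, w1, w2)
for _ in range(300):
    sgd_epoch(data, w1, w2)
after = mse(data, w1, w2)
```

A sensitivity analysis like the one in the record can then be approximated by perturbing each input in turn and comparing the change in the network's output.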

  17. An efficient feedback active noise control algorithm based on reduced-order linear predictive modeling of FMRI acoustic noise.

    Science.gov (United States)

    Kannan, Govind; Milani, Ali A; Panahi, Issa M S; Briggs, Richard W

    2011-12-01

    Functional magnetic resonance imaging (fMRI) acoustic noise exhibits an almost periodic nature (quasi-periodicity) due to the repetitive nature of currents in the gradient coils. Small changes occur in the waveform in consecutive periods due to the background noise and slow drifts in the electroacoustic transfer functions that map the gradient coil waveforms to the measured acoustic waveforms. The period depends on the number of slices per second, when echo planar imaging (EPI) sequencing is used. Linear predictability of fMRI acoustic noise has a direct effect on the performance of active noise control (ANC) systems targeted to cancel the acoustic noise. It is shown that by incorporating some samples from the previous period, very high linear prediction accuracy can be reached with a very low order predictor. This has direct implications on feedback ANC systems since their performance is governed by the predictability of the acoustic noise to be cancelled. The low complexity linear prediction of fMRI acoustic noise developed in this paper is used to derive an effective and low-cost feedback ANC system.
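The paper's central observation, that adding a few samples from one period earlier makes a very low-order predictor accurate for quasi-periodic noise, can be sketched with a least-squares fit on a synthetic signal. The signal, period, and predictor orders below are assumptions for illustration.

```python
import numpy as np

def fit_period_aware_lp(x, period, n_recent=2, n_periodic=2):
    """Least-squares fit of x[n] from n_recent immediately preceding
    samples plus n_periodic samples taken one period earlier.
    Returns the coefficients and the RMS prediction residual."""
    start = period + n_periodic
    rows = [[x[n - k] for k in range(1, n_recent + 1)] +
            [x[n - period - k] for k in range(n_periodic)]
            for n in range(start, len(x))]
    A = np.array(rows)
    y = np.array(x[start:])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return coef, float(np.sqrt(np.mean(resid ** 2)))

# Quasi-periodic toy 'scanner noise': sinusoid of period 64 plus weak noise
rng = np.random.default_rng(0)
n = np.arange(2048)
x = np.sin(2 * np.pi * n / 64.0) + 0.01 * rng.standard_normal(n.size)
coef, rms = fit_period_aware_lp(x, period=64)
```

With only four coefficients, the residual drops to roughly the level of the additive noise, which is the property a feedback ANC loop exploits.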

  18. An Efficient 3D Stochastic Model for Predicting the Columnar-to-Equiaxed Transition in Alloy 718

    Science.gov (United States)

    Nastac, L.

    2015-06-01

    A three-dimensional (3D) stochastic model for simulating the evolution of dendritic crystals during the solidification of alloys was developed. The model includes time-dependent computations for temperature distribution, solute redistribution in the liquid and solid phases, curvature, and growth anisotropy. The 3D model can run on PCs with reasonable amount of RAM and CPU time. 3D stochastic mesoscopic simulations at the dendrite tip length scale were performed to simulate the evolution of the columnar-to-equiaxed transition in alloy 718. Comparisons between simulated microstructures and segregation patterns obtained with 2D and 3D stochastic models are also presented.

  19. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.

  20. Efficient Prediction of Surface Roughness Using Decision Tree

    Directory of Open Access Journals (Sweden)

    Manikant Kumar

    2016-12-01

    Full Text Available Surface roughness is a parameter which determines the quality of a machined product. Nowadays the general manufacturing problem can be described as the attainment of a predefined product quality within given equipment, cost and time constraints. In recent years, extensive research has therefore been carried out on achieving a predefined surface quality of the machined product, eliminating wasteful over-machining. Response surface methodology was used initially for prediction of the surface roughness of machined parts. After the introduction of artificial intelligence techniques, many AI-based predictive models were developed, because artificial intelligence techniques are compatible with computer systems and various microcontrollers. Researchers have used fuzzy logic, artificial neural networks, adaptive neuro-fuzzy inference systems and genetic algorithms to develop predictive models of the surface roughness of different materials. Many have developed ANN-based predictive models because ANN outperforms other data mining techniques in certain scenarios, such as robustness and high learning accuracy. In this work a new predictive model based on a decision tree is proposed. ANN and ANFIS are known as black-box models, in which only the outcome is comprehensible, not the internal operations. A decision tree is known as a white-box model because its tree-like structure provides a clear view of what is happening inside the model. Decision trees have been applied successfully in cancer prediction, which suggests they are an efficient method for prediction. At the end of this work, the results obtained by the ANN-based model and the decision tree model are compared, and a prediction methodology for roughness using a decision tree along with ANN is introduced.
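As a sketch of the proposed direction (not the authors' implementation), a minimal CART-style regression tree in pure Python, with invented cutting parameters (feed rate, spindle speed) as features:

```python
def _sse(ys):
    m = sum(ys) / len(ys)
    return sum((v - m) ** 2 for v in ys)

def build_tree(X, y, depth=0, max_depth=3, min_leaf=1):
    # leaf: predict the mean target of the samples reaching this node
    if depth >= max_depth or len(y) <= min_leaf or _sse(y) == 0.0:
        return {"leaf": sum(y) / len(y)}
    best = None
    for j in range(len(X[0])):                    # try every feature
        for thr in sorted({row[j] for row in X})[1:]:  # and threshold
            li = [i for i in range(len(X)) if X[i][j] < thr]
            ri = [i for i in range(len(X)) if X[i][j] >= thr]
            if not li or not ri:
                continue
            score = _sse([y[i] for i in li]) + _sse([y[i] for i in ri])
            if best is None or score < best[0]:
                best = (score, j, thr, li, ri)
    if best is None:
        return {"leaf": sum(y) / len(y)}
    _, j, thr, li, ri = best
    return {"feat": j, "thr": thr,
            "lo": build_tree([X[i] for i in li], [y[i] for i in li],
                             depth + 1, max_depth, min_leaf),
            "hi": build_tree([X[i] for i in ri], [y[i] for i in ri],
                             depth + 1, max_depth, min_leaf)}

def predict(tree, x):
    while "leaf" not in tree:
        tree = tree["lo"] if x[tree["feat"]] < tree["thr"] else tree["hi"]
    return tree["leaf"]

# Hypothetical (feed rate mm/rev, spindle speed rpm) -> roughness Ra (um)
X = [[0.1, 200.0], [0.1, 400.0], [0.3, 200.0], [0.3, 400.0]]
y = [0.8, 1.1, 1.9, 2.4]
tree = build_tree(X, y)
```

The resulting nested dict is the white-box aspect the record stresses: each split threshold can be read off and interpreted directly.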

  1. Melanoma risk prediction models

    Directory of Open Access Journals (Sweden)

    Nikolić Jelena

    2014-01-01

    only present in melanoma patients and thus were strongly associated with melanoma. The percentage of correctly classified subjects in the LR model was 74.9%, sensitivity 71%, specificity 78.7% and AUC 0.805. For the ADT, the percentage of correctly classified instances was 71.9%, sensitivity 71.9%, specificity 79.4% and AUC 0.808. Conclusion. Application of different models for risk assessment and prediction of melanoma should provide an efficient and standardized tool in the hands of clinicians. The presented models offer effective discrimination of individuals at high risk, transparent decision making and real-time implementation suitable for clinical practice. Continuous growth of the melanoma database would allow further adjustments and enhancements in model accuracy, as well as offering a possibility for successful application of more advanced data mining algorithms.

  2. Predictive models in urology.

    Science.gov (United States)

    Cestari, Andrea

    2013-01-01

    Predictive modeling is emerging as an important knowledge-based technology in healthcare. The interest in the use of predictive modeling reflects advances on different fronts such as the availability of health information from increasingly complex databases and electronic health records, a better understanding of causal or statistical predictors of health, disease processes and multifactorial models of ill-health, and developments in nonlinear computer models using artificial intelligence or neural networks. These new computer-based forms of modeling are increasingly able to establish technical credibility in clinical contexts. The current state of knowledge is still quite young in understanding the likely future direction of how this so-called 'machine intelligence' will evolve, and therefore how current relatively sophisticated predictive models will evolve in response to improvements in technology, which is advancing along a wide front. Predictive models in urology are gaining popularity not only for academic and scientific purposes but also in clinical practice, with the introduction of several nomograms dealing with the main fields of onco-urology.

  3. Efficient temporal and interlayer parameter prediction for weighted prediction in scalable high efficiency video coding

    Science.gov (United States)

    Tsang, Sik-Ho; Chan, Yui-Lam; Siu, Wan-Chi

    2017-01-01

    Weighted prediction (WP) is an efficient video coding tool, introduced with the establishment of the H.264/AVC video coding standard, for compensating temporal illumination change in motion estimation and compensation. WP parameters, comprising a multiplicative weight and an additive offset for each reference frame, must be estimated and transmitted to the decoder in the slice header. These parameters add extra bits to the coded video bitstream. High efficiency video coding (HEVC) provides WP parameter prediction to reduce this overhead, so WP parameter prediction is crucial to research and applications related to WP. Prior work has sought to further improve WP parameter prediction through implicit prediction of image characteristics and derivation of parameters. By exploiting both temporal and interlayer redundancies, we propose three WP parameter prediction algorithms, enhanced implicit WP parameter, enhanced direct WP parameter derivation, and interlayer WP parameter, to further improve the coding efficiency of HEVC. Results show that our proposed algorithms can achieve up to 5.83% and 5.23% bitrate reduction compared to the conventional scalable HEVC in the base layer for SNR scalability and 2× spatial scalability, respectively.
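The core WP operation, one multiplicative weight plus one additive offset per reference frame, can be illustrated with a simple least-squares estimator. Note this float sketch is only illustrative: HEVC actually signals integer weights with a log2 weight denominator, and the sample values below are invented.

```python
def estimate_wp_params(ref, cur):
    """Least-squares weight w and offset o such that
    cur[i] ~= w * ref[i] + o, i.e. a global illumination change
    between a reference frame and the current frame."""
    n = len(ref)
    mr = sum(ref) / n
    mc = sum(cur) / n
    var = sum((r - mr) ** 2 for r in ref)
    cov = sum((r - mr) * (c - mc) for r, c in zip(ref, cur))
    w = cov / var          # slope of the best-fit line
    o = mc - w * mr        # intercept
    return w, o

# A fade that brightens the reference samples by 20% plus a +5 offset
ref = [10.0, 40.0, 90.0, 160.0, 250.0]
cur = [1.2 * v + 5.0 for v in ref]
w, o = estimate_wp_params(ref, cur)
```

Predicting (w, o) from previously decoded frames or from the base layer, rather than signaling them fully, is exactly the overhead the record's three algorithms target.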

  4. Impact of Thermostats on Folding and Aggregation Properties of Peptides Using the Optimized Potential for Efficient Structure Prediction Coarse-Grained Model.

    Science.gov (United States)

    Spill, Yannick G; Pasquali, Samuela; Derreumaux, Philippe

    2011-05-10

    The simulation of amyloid fibril formation is impossible if one takes into account all chemical details of the amino acids and their detailed interactions with the solvent. We investigate the folding and aggregation of two model peptides using the optimized potential for efficient structure prediction (OPEP) coarse-grained model and replica exchange molecular dynamics (REMD) simulations coupled with either the Langevin or the Berendsen thermostat. For both the monomer of blocked penta-alanine and the trimer of the 25-35 fragment of the Alzheimer's amyloid β protein, we find little variations in the equilibrium structures and heat capacity curves using the two thermostats. Despite this high similarity, we detect significant differences in the populations of the dominant conformations at low temperatures, whereas the configurational distributions remain the same in proximity of the melting temperature. Aβ25-35 trimers at 300 K have an averaged β-sheet content of 12% and are primarily characterized by fully disordered peptides or a small curved two-stranded β-sheet stabilized by a disordered peptide. In addition, OPEP molecular dynamics simulations of Aβ25-35 hexamers at 300 K with a small curved six-stranded antiparallel β-sheet do not show any extension of the β-sheet content. These data support the idea that the mechanism of Aβ25-35 amyloid formation does not result from a high fraction of extended β-sheet-rich trimers and hexamers.

  5. MODEL PREDICTIVE CONTROL FUNDAMENTALS

    African Journals Online (AJOL)

    2012-07-02

    Jul 2, 2012 ... paper, we will present an introduction to the theory and application of MPC with Matlab codes written to ... model predictive control, linear systems, discrete-time systems, ... and then compute very rapidly for this open-loop control.
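The receding-horizon loop such introductions describe can be sketched for a scalar discrete-time linear system. The record's own codes are in Matlab; this Python version, with illustrative plant numbers, horizon, and weights, solves the unconstrained finite-horizon problem in closed form and applies only the first input at each step.

```python
import numpy as np

def mpc_first_input(x0, a, b, n=10, r=0.1):
    """Unconstrained MPC for x[k+1] = a*x[k] + b*u[k]:
    minimize sum_k x[k]^2 + r*u[k]^2 over an n-step horizon,
    then (receding horizon) return only the first input."""
    # Stack predictions: x = f + G u, with x = (x1..xn), u = (u0..u_{n-1})
    f = np.array([a ** k * x0 for k in range(1, n + 1)])
    G = np.zeros((n, n))
    for k in range(1, n + 1):
        for j in range(k):
            G[k - 1, j] = a ** (k - 1 - j) * b
    # Ridge-type least squares: (G'G + r I) u = -G' f
    u = np.linalg.solve(G.T @ G + r * np.eye(n), -G.T @ f)
    return float(u[0])

# Closed loop: an open-loop-unstable plant (a > 1) is stabilized
a, b, x = 1.2, 1.0, 5.0
for _ in range(30):
    x = a * x + b * mpc_first_input(x, a, b)
```

Re-solving the open-loop problem at every sample and discarding all but the first input is what the snippet's phrase "compute very rapidly for this open-loop control" refers to.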

  6. pcrEfficiency: a Web tool for PCR amplification efficiency prediction

    Directory of Open Access Journals (Sweden)

    Mallona Izaskun

    2011-10-01

    Full Text Available Abstract Background Relative calculation of differential gene expression in quantitative PCR reactions requires comparison between amplification experiments that include reference genes and genes under study. Ignoring the differences between their efficiencies may lead to miscalculation of gene expression even with the same starting amount of template. Although there are several tools for PCR primer design, no available tool predicts PCR efficiency for a given amplicon and primer pair. Results We used a statistical approach based on 90 primer-pair combinations amplifying templates from bacteria, yeast, plants and humans, ranging in size between 74 and 907 bp, to identify the parameters that affect PCR efficiency. We developed a generalized additive model fitting the data and constructed an open-source Web interface that returns oligonucleotides optimized for PCR, with predicted amplification efficiencies, starting from a given sequence. Conclusions pcrEfficiency provides an easy-to-use web interface allowing the prediction of PCR efficiencies prior to wet-lab experiments, thus easing quantitative real-time PCR set-up. A web-based service as well as the source code are provided freely at http://srvgen.upct.es/efficiency.html under the GPL v2 license.
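The reason predicted efficiencies matter is the efficiency-corrected expression ratio (the standard Pfaffl method, sketched below, is independent of this record's GAM tool): assuming a perfect doubling (E = 2.0) for an amplicon that actually amplifies less efficiently overstates the fold change.

```python
def pfaffl_ratio(e_target, dct_target, e_ref, dct_ref):
    """Efficiency-corrected relative expression (Pfaffl method):
    ratio = E_target**dCt_target / E_ref**dCt_ref,
    where dCt = Ct(control) - Ct(sample) and E is the per-cycle
    amplification efficiency (2.0 = perfect doubling)."""
    return (e_target ** dct_target) / (e_ref ** dct_ref)

# Same dCt = 3 cycles for the target, reference unchanged (dCt = 0):
naive = pfaffl_ratio(2.0, 3.0, 2.0, 0.0)      # assumes perfect efficiency
corrected = pfaffl_ratio(1.8, 3.0, 2.0, 0.0)  # target really runs at E = 1.8
```

Here the naive calculation reports an 8-fold change where the efficiency-corrected value is about 5.8-fold, which is the miscalculation the Background paragraph warns about.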

  7. Numerical model for predicting thermodynamic cycle and thermal efficiency of a beta-type Stirling engine with rhombic-drive mechanism

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Chin-Hsiang; Yu, Ying-Ju [Department of Aeronautics and Astronautics, National Cheng Kung University, No. 1, Ta-Shieh Road, Tainan 70101, Taiwan (China)

    2010-11-15

    This study is aimed at development of a numerical model for a beta-type Stirling engine with rhombic-drive mechanism. By taking into account the non-isothermal effects, the effectiveness of the regenerative channel, and the thermal resistance of the heating head, the energy equations for the control volumes in the expansion chamber, the compression chamber, and the regenerative channel can be derived and solved. Meanwhile, a fully developed flow velocity profile in the regenerative channel, in terms of the reciprocating velocity of the displacer and the instantaneous pressure difference between the expansion and the compression chambers, is derived for calculation of the mass flow rate through the regenerative channel. In this manner, the internal irreversibility caused by pressure difference in the two chambers and the viscous shear effects due to the motion of the reciprocating displacer on the fluid flow in the regenerative channel gap are included. Periodic variation of pressures, volumes, temperatures, masses, and heat transfers in the expansion and the compression chambers are predicted. A parametric study of the dependence of the power output and thermal efficiency on the geometrical and physical parameters, involving regenerative gap, distance between two gears, offset distance from the crank to the center of gear, and the heat source temperature, has been performed. (author)

  8. SPAR Model Structural Efficiencies

    Energy Technology Data Exchange (ETDEWEB)

    John Schroeder; Dan Henry

    2013-04-01

    The Nuclear Regulatory Commission (NRC) and the Electric Power Research Institute (EPRI) are supporting initiatives aimed at improving the quality of probabilistic risk assessments (PRAs). Included in these initiatives is the resolution of key technical issues that have been judged to have the most significant influence on the baseline core damage frequency of the NRC's Standardized Plant Analysis Risk (SPAR) models and licensee PRA models. Previous work addressed issues associated with support system initiating event analysis and loss of off-site power/station blackout analysis. The key technical issues were: • Development of a standard methodology and implementation of support system initiating events • Treatment of loss of offsite power • Development of a standard approach for emergency core cooling following containment failure. Some of the related issues were not fully resolved, and this project continues the effort to resolve outstanding issues. The work scope was intended to include substantial collaboration with EPRI; however, EPRI has had other, higher-priority initiatives to support. Therefore this project has addressed SPAR modeling issues. The issues addressed are: • SPAR model transparency • Common cause failure modeling deficiencies and approaches • AC and DC modeling deficiencies and approaches • Instrumentation and control system modeling deficiencies and approaches

  9. Nominal model predictive control

    OpenAIRE

    Grüne, Lars

    2013-01-01

    5 p., to appear in Encyclopedia of Systems and Control, Tariq Samad, John Baillieul (eds.); Model Predictive Control is a controller design method which synthesizes a sampled-data feedback controller from the iterative solution of open-loop optimal control problems. We describe the basic functionality of MPC controllers, their properties regarding feasibility, stability and performance, and the assumptions needed in order to rigorously ensure these properties in a nomina...

  10. Nominal Model Predictive Control

    OpenAIRE

    Grüne, Lars

    2014-01-01

    5 p., to appear in Encyclopedia of Systems and Control, Tariq Samad, John Baillieul (eds.); Model Predictive Control is a controller design method which synthesizes a sampled-data feedback controller from the iterative solution of open-loop optimal control problems. We describe the basic functionality of MPC controllers, their properties regarding feasibility, stability and performance, and the assumptions needed in order to rigorously ensure these properties in a nomina...

  11. Modeling plasmonic efficiency enhancement in organic photovoltaics.

    Science.gov (United States)

    Taff, Y; Apter, B; Katz, E A; Efron, U

    2015-09-10

    Efficiency enhancement of bulk heterojunction (BHJ) organic solar cells by means of the plasmonic effect is investigated by using finite-difference time-domain (FDTD) optical simulations combined with analytical modeling of exciton dissociation and charge transport efficiencies. The proposed method provides an improved analysis of the cell performance compared to previous FDTD studies. The results of the simulations predict an 11.8% increase in the cell's short circuit current with the use of Ag nano-hexagons.

  12. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines the possibilities with respect to the different numerical weather predictions actually available to the project.

  13. Efficient marker data utilization in genomic prediction

    DEFF Research Database (Denmark)

    Edriss, Vahid

    Genomic prediction is a novel method to identify the best animals for breeding. The aim of this PhD thesis is to improve the accuracy of genomic prediction in dairy cattle by efficiently utilizing marker data. The thesis focuses on three aspects of improving genomic prediction, which are: criteria...

  14. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  15. Global parameterization and validation of a two-leaf light use efficiency model for predicting gross primary production across FLUXNET sites

    DEFF Research Database (Denmark)

    Zhou, Yanlian; Wu, Xiaocui; Ju, Weimin;

    2015-01-01

    Light use efficiency (LUE) models are widely used to simulate gross primary production (GPP). However, the treatment of the plant canopy as a big leaf by these models can introduce large uncertainties in simulated GPP. Recently, a two-leaf light use efficiency (TL-LUE) model was developed to simulate GPP separately for sunlit and shaded leaves and has been shown to outperform the big-leaf MOD17 model at six FLUX sites in China. In this study we investigated the performance of the TL-LUE model for a wider range of biomes. For this we optimized the parameters and tested the TL-LUE model using data from 98 FLUXNET sites which are distributed across the globe. The results showed that the TL-LUE model performed in general better than the MOD17 model in simulating 8-day GPP. Optimized maximum light use efficiency of shaded leaves (epsilon(msh)) was 2.63 to 4.59 times that of sunlit leaves...

  16. Global parameterization and validation of a two-leaf light use efficiency model for predicting gross primary production across FLUXNET sites: TL-LUE Parameterization and Validation

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Yanlian [Jiangsu Provincial Key Laboratory of Geographic Information Science and Technology, School of Geographic and Oceanographic Sciences, Nanjing University, Nanjing China; Joint Center for Global Change Studies, Beijing China; Wu, Xiaocui [International Institute for Earth System Sciences, Nanjing University, Nanjing China; Joint Center for Global Change Studies, Beijing China; Ju, Weimin [International Institute for Earth System Sciences, Nanjing University, Nanjing China; Jiangsu Center for Collaborative Innovation in Geographic Information Resource Development and Application, Nanjing China; Chen, Jing M. [International Institute for Earth System Sciences, Nanjing University, Nanjing China; Joint Center for Global Change Studies, Beijing China; Wang, Shaoqiang [Key Laboratory of Ecosystem Network Observation and Modeling, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Science, Beijing China; Wang, Huimin [Key Laboratory of Ecosystem Network Observation and Modeling, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Science, Beijing China; Yuan, Wenping [State Key Laboratory of Earth Surface Processes and Resource Ecology, Future Earth Research Institute, Beijing Normal University, Beijing China; Andrew Black, T. [Faculty of Land and Food Systems, University of British Columbia, Vancouver British Columbia Canada; Jassal, Rachhpal [Faculty of Land and Food Systems, University of British Columbia, Vancouver British Columbia Canada; Ibrom, Andreas [Department of Environmental Engineering, Technical University of Denmark (DTU), Kgs. Lyngby Denmark; Han, Shijie [Institute of Applied Ecology, Chinese Academy of Sciences, Shenyang China; Yan, Junhua [South China Botanical Garden, Chinese Academy of Sciences, Guangzhou China; Margolis, Hank [Centre for Forest Studies, Faculty of Forestry, Geography and Geomatics, Laval University, Quebec City Quebec Canada; Roupsard, Olivier [CIRAD-Persyst, UMR Ecologie Fonctionnelle and Biogéochimie des Sols et Agroécosystèmes, SupAgro-CIRAD-INRA-IRD, Montpellier France; CATIE (Tropical Agricultural Centre for Research and Higher Education), Turrialba Costa Rica; Li, Yingnian [Northwest Institute of Plateau Biology, Chinese Academy of Sciences, Xining China; Zhao, Fenghua [Key Laboratory of Ecosystem Network Observation and Modeling, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Science, Beijing China; Kiely, Gerard [Environmental Research Institute, Civil and Environmental Engineering Department, University College Cork, Cork Ireland; Starr, Gregory [Department of Biological Sciences, University of Alabama, Tuscaloosa Alabama USA; Pavelka, Marian [Laboratory of Plants Ecological Physiology, Institute of Systems Biology and Ecology AS CR, Prague Czech Republic; Montagnani, Leonardo [Forest Services, Autonomous Province of Bolzano, Bolzano Italy; Faculty of Sciences and Technology, Free University of Bolzano, Bolzano Italy; Wohlfahrt, Georg [Institute for Ecology, University of Innsbruck, Innsbruck Austria; European Academy of Bolzano, Bolzano Italy; D'Odorico, Petra [Grassland Sciences Group, Institute of Agricultural Sciences, ETH Zurich Switzerland; Cook, David [Atmospheric and Climate Research Program, Environmental Science Division, Argonne National Laboratory, Argonne Illinois USA; Arain, M. Altaf [McMaster Centre for Climate Change and School of Geography and Earth Sciences, McMaster University, Hamilton Ontario Canada; Bonal, Damien [INRA Nancy, UMR EEF, Champenoux France; Beringer, Jason [School of Earth and Environment, The University of Western Australia, Crawley Australia; Blanken, Peter D. [Department of Geography, University of Colorado Boulder, Boulder Colorado USA; Loubet, Benjamin [UMR ECOSYS, INRA, AgroParisTech, Université Paris-Saclay, Thiverval-Grignon France; Leclerc, Monique Y. [Department of Crop and Soil Sciences, College of Agricultural and Environmental Sciences, University of Georgia, Athens Georgia USA; Matteucci, Giorgio [Via San Camillo de Lellis, University of Tuscia, Viterbo Italy; Nagy, Zoltan [MTA-SZIE Plant Ecology Research Group, Szent Istvan University, Godollo Hungary; Olejnik, Janusz [Meteorology Department, Poznan University of Life Sciences, Poznan Poland; Department of Matter and Energy Fluxes, Global Change Research Center, Brno Czech Republic; Paw U, Kyaw Tha [Department of Land, Air and Water Resources, University of California, Davis California USA; Joint Program on the Science and Policy of Global Change, Massachusetts Institute of Technology, Cambridge USA; Varlagin, Andrej [A.N. Severtsov Institute of Ecology and Evolution, Russian Academy of Sciences, Moscow Russia

    2016-04-06

    Light use efficiency (LUE) models are widely used to simulate gross primary production (GPP). However, the treatment of the plant canopy as a big leaf by these models can introduce large uncertainties in simulated GPP. Recently, a two-leaf light use efficiency (TL-LUE) model was developed to simulate GPP separately for sunlit and shaded leaves and has been shown to outperform the big-leaf MOD17 model at 6 FLUX sites in China. In this study we investigated the performance of the TL-LUE model for a wider range of biomes. For this we optimized the parameters and tested the TL-LUE model using data from 98 FLUXNET sites which are distributed across the globe. The results showed that the TL-LUE model performed in general better than the MOD17 model in simulating 8-day GPP. Optimized maximum light use efficiency of shaded leaves (εmsh) was 2.63 to 4.59 times that of sunlit leaves (εmsu). Generally, the relationships of εmsh and εmsu with εmax were well described by linear equations, indicating the existence of general patterns across biomes. GPP simulated by the TL-LUE model was much less sensitive to biases in the photosynthetically active radiation (PAR) input than the MOD17 model. The results of this study suggest that the proposed TL-LUE model has the potential for simulating regional and global GPP of terrestrial ecosystems and it is more robust with regard to usual biases in input data than existing approaches which neglect the bi-modal within-canopy distribution of PAR.

  17. Prediction aluminum corrosion inhibitor efficiency using artificial neural network (ANN)

    Science.gov (United States)

    Ebrahimi, Sh; Kalhor, E. G.; Nabavi, S. R.; Alamiparvin, L.; Pogaku, R.

    2016-06-01

    In this study, the activity of several Schiff bases as aluminum corrosion inhibitors was investigated using an artificial neural network (ANN). Corrosion inhibition efficiencies of Schiff bases of various types were gathered from different references. The molecules were then drawn and optimized in the Hyperchem software. Molecular descriptor generation and descriptor selection were performed with the Dragon software and principal component analysis (PCA), respectively. These structural descriptors, along with environmental descriptors (ambient temperature, time of exposure, pH and inhibitor concentration), were used as input variables, and the aluminum corrosion inhibition efficiency was used as the output variable. Experimental data were split into three sets: a training set (for model building), a test set (for model validation) and a simulation set (for the general model). Modeling was performed by multiple linear regression (MLR) and artificial neural network methods. The linear models showed poor correlation between experimental and theoretical data, whereas the nonlinear model gave satisfactory results. The high correlation coefficient of the ANN (R > 0.9) revealed that ANNs can be successfully applied for predicting the aluminum corrosion inhibition efficiency of Schiff bases under different environmental conditions.

  18. Predicting the efficiency of deposit removal during filter backwash ...

    African Journals Online (AJOL)

    The long-term performance of granular media filters used in drinking water treatment is ultimately limited by the efficiency of the backwash process.

  19. Relationship between efficiency and predictability in stock price change

    Science.gov (United States)

    Eom, Cheoljun; Oh, Gabjin; Jung, Woo-Sung

    2008-09-01

    In this study, we evaluate the relationship between efficiency and predictability in the stock market. The efficiency, which is the issue addressed by the weak-form efficient market hypothesis, is calculated using the Hurst exponent and the approximate entropy (ApEn). The predictability corresponds to the hit-rate; this is the rate of consistency between the direction of the actual price change and that of the predicted price change, as calculated via the nearest neighbor prediction method. We determine that the Hurst exponent and the ApEn value are negatively correlated. However, predictability is positively correlated with the Hurst exponent.
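As a concrete illustration of one of the regularity measures used in this record, here is a minimal NumPy sketch of the approximate entropy (ApEn) statistic in its standard definition. The embedding dimension `m = 2` and tolerance `r = 0.2 * std` are conventional defaults, not values taken from the paper.

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """ApEn(m, r): a regularity statistic; lower values = more regular series."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * x.std()          # conventional tolerance, 20% of std dev

    def phi(m):
        # embed the series into overlapping m-length template vectors
        emb = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of template vectors
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        # fraction of templates within tolerance r (self-match included)
        c = (d <= r).mean(axis=1)
        return np.log(c).mean()

    return phi(m) - phi(m + 1)
```

A perfectly periodic series yields an ApEn near zero, while an i.i.d. noise series of the same length yields a clearly larger value, matching the intuition that lower ApEn indicates higher predictability.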

  20. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...

  1. Efficient Finite Element Modelling of Elastodynamic Scattering

    Science.gov (United States)

    Velichko, A.; Wilcox, P. D.

    2010-02-01

    A robust and efficient technique for predicting the complete scattering behavior for an arbitrarily-shaped defect is presented that can be implemented in a commercial FE package. The spatial size of the modeling domain around the defect is as small as possible to minimize computational expense and a minimum number of models are executed. Example results for 2D and 3D scattering in isotropic material and guided wave scattering are presented.

  2. Information Aggregation Efficiency of Prediction Markets

    NARCIS (Netherlands)

    S. Yang (Shengyun)

    2014-01-01

    The increased complexity of the business environment, such as globalization of the market, faster introduction of new products, more interdependencies among firms and financial crises, has reduced the forecasting accuracy of conventional prediction methods based on historical data...

  4. A study on an efficient prediction of welding deformation for T-joint laser welding of sandwich panel PART I : Proposal of a heat source model

    Science.gov (United States)

    Kim, Jae Woong; Jang, Beom Seon; Kim, Yong Tai; Chun, Kwang San

    2013-09-01

    The use of the I-Core sandwich panel has increased in cruise ship deck structures since it can provide bending strength similar to a conventional stiffened plate while keeping a lighter weight and lower web height. However, due to its thin plate thickness, i.e. about 4~6 mm at most, it is assembled by high-power CO2 laser welding to minimize the welding deformation. This research proposes a volumetric heat source model for the T-joint of the I-Core sandwich panel and a method to use a shell element model in a thermal elasto-plastic analysis to predict welding deformation. This paper, Part I, focuses on the heat source model. A circular cone type heat source model is newly suggested in the heat transfer analysis to reproduce a melting zone similar to that observed in experiment. An additional suggestion is made to consider negative defocus, which is commonly applied in T-joint laser welding since it can provide deeper penetration than zero defocus. The proposed heat source is also verified through 3D thermal elasto-plastic analysis by comparing welding deformation with experimental results. A parametric study for different welding speeds, defocus values, and welding powers is performed to investigate their effect on the melting zone and welding deformation. Part II focuses on the proposed method to employ a shell element model, instead of a solid element model, to predict welding deformation in the thermal elasto-plastic analysis.

  5. Structural network efficiency predicts conversion to dementia

    NARCIS (Netherlands)

    Tuladhar, A.M.; Uden, I.W.M. van; Rutten-Jacobs, L.C.A.; Lawrence, A.; Holst, H. van der; Norden, A.G.W. van; Laat, K.F. de; Dijk, E.J. van; Claassen, J.A.H.R.; Kessels, R.P.C.; Markus, H.S.; Norris, D.G.; Leeuw, H.F. de

    2016-01-01

    Objective: To examine whether structural network connectivity at baseline predicts incident all-cause dementia in a prospective hospital-based cohort of elderly participants with MRI evidence of small vessel disease (SVD). Methods: A total of 436 participants from the Radboud University Nijmegen Dif

  6. Disruption of Pseudomonas putida by high pressure homogenization: a comparison of the predictive capacity of three process models for the efficient release of arginine deiminase.

    Science.gov (United States)

    Patil, Mahesh D; Patel, Gopal; Surywanshi, Balaji; Shaikh, Naeem; Garg, Prabha; Chisti, Yusuf; Banerjee, Uttam Chand

    2016-12-01

    Disruption of Pseudomonas putida KT2440 by high-pressure homogenization in a French press is discussed for the release of arginine deiminase (ADI). The enzyme release response of the disruption process was modelled for the experimental factors of biomass concentration in the broth being disrupted, the homogenization pressure and the number of passes of the cell slurry through the homogenizer. For the same data, the response surface method (RSM), the artificial neural network (ANN) and the support vector machine (SVM) models were compared for their ability to predict the performance parameters of the cell disruption. The ANN model proved to be best for predicting the ADI release. The fractional disruption of the cells was best modelled by the RSM. The fraction of the cells disrupted depended mainly on the operating pressure of the homogenizer. The concentration of the biomass in the slurry was the most influential factor in determining the total protein release. Nearly 27 U/mL of ADI was released within a single pass from slurry with a biomass concentration of 260 g/L at an operating pressure of 510 bar. Using a biomass concentration of 100 g/L, the ADI release by French press was 2.7-fold greater than in a conventional high-speed bead mill. In the French press, the total protein release was 5.8-fold more than in the bead mill. The statistical analysis of the completely unseen data exhibited ANN and SVM modelling as proficient alternatives to RSM for the prediction and generalization of the cell disruption process in French press.

  7. Functional brain network efficiency predicts intelligence.

    Science.gov (United States)

    Langer, Nicolas; Pedroni, Andreas; Gianotti, Lorena R R; Hänggi, Jürgen; Knoch, Daria; Jäncke, Lutz

    2012-06-01

    The neuronal causes of individual differences in mental abilities such as intelligence are complex and profoundly important. Understanding these abilities has the potential to facilitate their enhancement. The purpose of this study was to identify the functional brain network characteristics and their relation to psychometric intelligence. In particular, we examined whether the functional network exhibits efficient small-world network attributes (high clustering and short path length) and whether these small-world network parameters are associated with intellectual performance. High-density resting state electroencephalography (EEG) was recorded in 74 healthy subjects to analyze graph-theoretical functional network characteristics at an intracortical level. Ravens advanced progressive matrices were used to assess intelligence. We found that the clustering coefficient and path length of the functional network are strongly related to intelligence. Thus, the more intelligent the subjects are the more the functional brain network resembles a small-world network. We further identified the parietal cortex as a main hub of this resting state network as indicated by increased degree centrality that is associated with higher intelligence. Taken together, this is the first study that substantiates the neural efficiency hypothesis as well as the Parieto-Frontal Integration Theory (P-FIT) of intelligence in the context of functional brain network characteristics. These theories are currently the most established intelligence theories in neuroscience. Our findings revealed robust evidence of an efficiently organized resting state functional brain network for highly productive cognitions.

  8. Computationally efficient prediction of area per lipid

    DEFF Research Database (Denmark)

    Chaban, Vitaly V.

    2014-01-01

    Area per lipid (APL) is an important property of biological and artificial membranes. Newly constructed bilayers are characterized by their APL and newly elaborated force fields must reproduce APL. Computer simulations of APL are very expensive due to slow conformational dynamics. The simulated d....... Thus, sampling times to predict accurate APL are reduced by a factor of 10. (C) 2014 Elsevier B.V. All rights reserved....

  9. Efficient prediction of (p,n) yields

    Energy Technology Data Exchange (ETDEWEB)

    Swift, D C; McNaney, J M; Higginson, D P; Beg, F

    2009-09-09

    In the continuous deceleration approximation, charged particles decelerate without any spread in energy as they traverse matter. This approximation simplifies the calculation of the yield of nuclear reactions, for which the cross-section depends on the particle energy. We calculated (p,n) yields for a LiF target, using the Bethe-Bloch relation for proton deceleration, and predicted that the maximum yield would be around 0.25% neutrons per incident proton, for an initial proton energy of 70 MeV or higher. Yield-energy relations calculated in this way can readily be used to optimize source and (p,n) converter characteristics.
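In the continuous-deceleration approximation described above, the yield reduces to a one-dimensional integral of the cross-section over the stopping power, Y = n ∫ σ(E)/(dE/dx) dE from the reaction threshold to the initial energy. The sketch below evaluates that integral numerically with a hypothetical flat cross-section and a toy 1/E stopping power; a real calculation would use Bethe-Bloch stopping and tabulated 7Li(p,n) cross-sections, so the numbers here are illustrative only.

```python
import numpy as np

def pn_yield(e0, sigma, stopping_power, n_density, e_threshold, steps=2000):
    """Neutrons per incident proton in the continuous-deceleration
    approximation: Y = n * integral_{E_th}^{E0} sigma(E) / (dE/dx) dE."""
    energies = np.linspace(e_threshold, e0, steps)          # MeV
    integrand = sigma(energies) / stopping_power(energies)  # cm^2 * cm
    de = energies[1] - energies[0]
    # trapezoidal rule over the energy grid
    integral = float(np.sum(0.5 * (integrand[:-1] + integrand[1:]) * de))
    return n_density * integral                             # neutrons / proton

# Hypothetical toy inputs (illustrative, not LiF data):
sigma = lambda e: 3e-27 * np.ones_like(e)   # flat 3 mb cross-section, cm^2
stopping = lambda e: 170.0 / e              # toy dE/dx ~ 1/E shape, MeV/cm
n_li = 6.1e22                               # assumed Li nuclei per cm^3

y30 = pn_yield(30.0, sigma, stopping, n_li, e_threshold=1.88)
y70 = pn_yield(70.0, sigma, stopping, n_li, e_threshold=1.88)
```

Because a higher-energy proton spends more path length above threshold, the yield grows monotonically with the initial energy, which is why yield-energy relations of this kind can be used to optimize source and converter characteristics.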

  10. Efficient Global Aerodynamic Modeling from Flight Data

    Science.gov (United States)

    Morelli, Eugene A.

    2012-01-01

    A method for identifying global aerodynamic models from flight data in an efficient manner is explained and demonstrated. A novel experiment design technique was used to obtain dynamic flight data over a range of flight conditions with a single flight maneuver. Multivariate polynomials and polynomial splines were used with orthogonalization techniques and statistical modeling metrics to synthesize global nonlinear aerodynamic models directly and completely from flight data alone. Simulation data and flight data from a subscale twin-engine jet transport aircraft were used to demonstrate the techniques. Results showed that global multivariate nonlinear aerodynamic dependencies could be accurately identified using flight data from a single maneuver. Flight-derived global aerodynamic model structures, model parameter estimates, and associated uncertainties were provided for all six nondimensional force and moment coefficients for the test aircraft. These models were combined with a propulsion model identified from engine ground test data to produce a high-fidelity nonlinear flight simulation very efficiently. Prediction testing using a multi-axis maneuver showed that the identified global model accurately predicted aircraft responses.

  11. Predictive Models for Music

    OpenAIRE

    Paiement, Jean-François; Grandvalet, Yves; Bengio, Samy

    2008-01-01

    Modeling long-term dependencies in time series has proved very difficult to achieve with traditional machine learning methods. This problem occurs when considering music data. In this paper, we introduce generative models for melodies. We decompose melodic modeling into two subtasks. We first propose a rhythm model based on the distributions of distances between subsequences. Then, we define a generative model for melodies given chords and rhythms based on modeling sequences of Narmour featur...

  12. Prediction of Giant Thermoelectric Efficiency in Crystals with Interlaced Nanostructure.

    Science.gov (United States)

    Puzyrev, Y S; Shen, X; Pantelides, S T

    2016-01-13

    We present a theoretical study of the thermoelectric efficiency of "interlaced crystals", recently discovered in hexagonal-CuInS2 nanoparticles. Interlaced crystals are I-III-VI2 or II-IV-V2 tetrahedrally bonded compounds. They have a perfect Bravais lattice in which the two cations have an infinite set of possible ordering patterns within the cation sublattice. The material comprises nanoscale interlaced domains and phases with corresponding boundaries. Here we employ density functional theory and large-scale molecular dynamics calculations based on model classical potentials to demonstrate that the phase and domain boundaries are effective phonon scatterers and greatly suppress thermal conductivity. However, the absence of both structural defects and strain in the interlaced material results in a minimal effect on electronic properties. We predict an increase of thermal resistivity of up to 2 orders of magnitude, which makes interlaced crystals an exceptional candidate for thermoelectric applications.

  13. Zephyr - the prediction models

    DEFF Research Database (Denmark)

    Nielsen, Torben Skov; Madsen, Henrik; Nielsen, Henrik Aalborg

    2001-01-01

    This paper briefly describes new models and methods for predicting the wind power output from wind farms. The system is being developed in a project which has the research organization Risø and the department of Informatics and Mathematical Modelling (IMM) as the modelling team and all the Dani...

  14. Intelligent Prediction of Sieving Efficiency in Vibrating Screens

    Directory of Open Access Journals (Sweden)

    Bin Zhang

    2016-01-01

    In order to effectively predict the sieving efficiency of a vibrating screen, experiments to investigate the sieving efficiency were carried out. The relation between sieving efficiency and other working parameters of a vibrating screen, such as mesh aperture size, screen length, inclination angle, vibration amplitude, and vibration frequency, was analyzed. Based on the experiments, a least squares support vector machine (LS-SVM) was established to predict the sieving efficiency, and an adaptive genetic algorithm and a cross-validation algorithm were used to optimize the parameters of the LS-SVM. On the testing points, the prediction performance of the least squares support vector machine is better than that of the existing formula and of a neural network, with an average relative error of only 4.2%.
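The LS-SVM mentioned in this record has a particularly compact formulation: training reduces to solving one linear system rather than a quadratic program. Below is a minimal regression sketch with an RBF kernel; the hyperparameters (gamma, sigma) are arbitrary illustrative choices, not the paper's genetically optimized values.

```python
import numpy as np

def rbf(a, b, s=1.0):
    """RBF (Gaussian) kernel matrix between row-vector sets a and b."""
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2.0 * a @ b.T
    return np.exp(-d2 / (2.0 * s**2))

class LSSVR:
    """Least-squares SVM regression: fit solves
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    def __init__(self, gamma=10.0, sigma=1.0):
        self.gamma, self.sigma = gamma, sigma

    def fit(self, X, y):
        n = len(y)
        K = rbf(X, X, self.sigma)
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0
        A[1:, 0] = 1.0
        A[1:, 1:] = K + np.eye(n) / self.gamma   # ridge term from 1/gamma
        rhs = np.concatenate(([0.0], y))
        sol = np.linalg.solve(A, rhs)
        self.b, self.alpha, self.X = sol[0], sol[1:], X
        return self

    def predict(self, Xnew):
        return rbf(Xnew, self.X, self.sigma) @ self.alpha + self.b
```

In the paper's setting, the analogous inputs would be the screen's working parameters and the target would be the measured sieving efficiency; here a smooth 1-D function stands in for that data.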

  15. Efficient Turbulence Modeling for CFD Wake Simulations

    DEFF Research Database (Denmark)

    van der Laan, Paul

    ...... that can accurately and efficiently simulate wind turbine wakes. The linear k-ε eddy viscosity model (EVM) is a popular turbulence model in RANS; however, it underpredicts the velocity wake deficit and cannot predict the anisotropic Reynolds-stresses in the wake. In the current work, nonlinear eddy viscosity models (NLEVM) are applied to wind turbine wakes. NLEVMs can model anisotropic turbulence through a nonlinear stress-strain relation, and they can improve the velocity deficit by the use of a variable eddy viscosity coefficient, that delays the wake recovery. Unfortunately, all tested NLEVMs show numerically unstable behavior for fine grids, which inhibits a grid dependency study for numerical verification. Therefore, a simpler EVM is proposed, labeled as the k-ε-fp EVM, that has a linear stress-strain relation, but still has a variable eddy viscosity coefficient. The k-ε-fp EVM is numerically......

  16. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    ...... modelling strategy is applied to different training sets. For each modelling strategy we estimate a confidence score based on the same repeated bootstraps. A new decomposition of the expected Brier score is obtained, as well as estimates of population-average confidence scores. The latter can be used to distinguish rival prediction models with similar prediction performances. Furthermore, on the subject level a confidence score may provide useful supplementary information for new patients who want to base a medical decision on predicted risk. The ideas are illustrated and discussed using data from cancer...

  17. Modelling, controlling, predicting blackouts

    CERN Document Server

    Wang, Chengwei; Baptista, Murilo S

    2016-01-01

    The electric power system is one of the cornerstones of modern society. One of its most serious malfunctions is the blackout, a catastrophic event that may disrupt a substantial portion of the system, playing havoc to human life and causing great economic losses. Thus, understanding the mechanisms leading to blackouts and creating a reliable and resilient power grid has been a major issue, attracting the attention of scientists, engineers and stakeholders. In this paper, we study the blackout problem in power grids by considering a practical phase-oscillator model. This model allows one to simultaneously consider different types of power sources (e.g., traditional AC power plants and renewable power sources connected by DC/AC inverters) and different types of loads (e.g., consumers connected to distribution networks and consumers directly connected to power plants). We propose two new control strategies based on our model, one for traditional power grids, and another one for smart grids. The control strategie...

  18. Melanoma Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing melanoma cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  19. Evaluation of the efficiency of artificial neural networks for genetic value prediction.

    Science.gov (United States)

    Silva, G N; Tomaz, R S; Sant'Anna, I C; Carneiro, V Q; Cruz, C D; Nascimento, M

    2016-03-28

    Artificial neural networks have shown great potential when applied to breeding programs. In this study, we propose the use of artificial neural networks as a viable alternative to conventional prediction methods. We conduct a thorough evaluation of the efficiency of these networks with respect to the prediction of breeding values. Therefore, we considered eight simulated scenarios, and for the purpose of genetic value prediction, seven statistical parameters in addition to the phenotypic mean in a network designed as a multilayer perceptron. After an evaluation of different network configurations, the results demonstrated the superiority of neural networks compared to estimation procedures based on linear models, and indicated high predictive accuracy and network efficiency.

  20. PREDICTION OF GROSS FEED EFFICIENCY IN ITALIAN HOLSTEIN FRIESIAN BULLS

    Directory of Open Access Journals (Sweden)

    Raffaella Finocchiaro

    2015-09-01

    The aim of this study was to predict the gross feed efficiency of Italian Holstein Friesian bulls selected for production, functional and type traits. A total of 12,238 bulls, from the April 2015 genetic evaluation, were used. Predicted daily gross feed efficiency (pFE) was obtained as the ratio between milk yield (MY) and predicted dry matter intake (pDMI). Phenotypic trends for MY, predicted body weight (pBW) and pFE were calculated by bull birth year. The results suggest that pFE can be successfully selected to increase the profitability of dairy cattle using the current milk recording system. Direct measurements of DMI should be considered to confirm the results for pFE obtained in the present study.
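The efficiency definition used in this record is a simple ratio of milk yield to predicted dry matter intake. A minimal sketch (the example numbers are made up for illustration, not from the study):

```python
def predicted_gross_feed_efficiency(milk_yield, predicted_dmi):
    """pFE = MY / pDMI: milk yield per unit of predicted dry matter intake."""
    if predicted_dmi <= 0:
        raise ValueError("predicted dry matter intake must be positive")
    return milk_yield / predicted_dmi

# Hypothetical example: 30 kg/day milk on 20 kg/day predicted DMI
pfe = predicted_gross_feed_efficiency(30.0, 20.0)  # -> 1.5
```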

  1. Prediction models in complex terrain

    DEFF Research Database (Denmark)

    Marti, I.; Nielsen, Torben Skov; Madsen, Henrik

    2001-01-01

    The objective of the work is to investigate the performance of HIRLAM in complex terrain when used as input to energy production forecasting models, and to develop a statistical model to adapt HIRLAM predictions to the wind farm. The features of the terrain, especially the topography, influence...... Predictions are calculated using on-line measurements of power production as well as HIRLAM predictions as input, thus taking advantage of the auto-correlation which is present in the power production for shorter prediction horizons. Statistical models are used to describe the relationship between observed energy production and HIRLAM predictions. The statistical models belong to the class of conditional parametric models. The models are estimated using local polynomial regression, but the estimation method is here extended to be adaptive in order to allow for slow changes in the system, e.g. caused by the annual variations...

  2. DASPfind: new efficient method to predict drug–target interactions

    KAUST Repository

    Ba Alawi, Wail

    2016-03-16

    Background Identification of novel drug–target interactions (DTIs) is important for drug discovery. Experimental determination of such DTIs is costly and time consuming, hence it necessitates the development of efficient computational methods for the accurate prediction of potential DTIs. To-date, many computational methods have been proposed for this purpose, but they suffer the drawback of a high rate of false positive predictions. Results Here, we developed a novel computational DTI prediction method, DASPfind. DASPfind uses simple paths of particular lengths inferred from a graph that describes DTIs, similarities between drugs, and similarities between the protein targets of drugs. We show that on average, over the four gold standard DTI datasets, DASPfind significantly outperforms other existing methods when the single top-ranked predictions are considered, resulting in 46.17 % of these predictions being correct, and it achieves 49.22 % correct single top ranked predictions when the set of all DTIs for a single drug is tested. Furthermore, we demonstrate that our method is best suited for predicting DTIs in cases of drugs with no known targets or with few known targets. We also show the practical use of DASPfind by generating novel predictions for the Ion Channel dataset and validating them manually. Conclusions DASPfind is a computational method for finding reliable new interactions between drugs and proteins. We show over six different DTI datasets that DASPfind outperforms other state-of-the-art methods when the single top-ranked predictions are considered, or when a drug with no known targets or with few known targets is considered. We illustrate the usefulness and practicality of DASPfind by predicting novel DTIs for the Ion Channel dataset. The validated predictions suggest that DASPfind can be used as an efficient method to identify correct DTIs, thus reducing the cost of necessary experimental verifications in the process of drug discovery. DASPfind

  3. Modelling water uptake efficiency of root systems

    Science.gov (United States)

    Leitner, Daniel; Tron, Stefania; Schröder, Natalie; Bodner, Gernot; Javaux, Mathieu; Vanderborght, Jan; Vereecken, Harry; Schnepf, Andrea

    2016-04-01

    Water uptake is crucial for plant productivity. Trait-based breeding for more water-efficient crops will enable sustainable agricultural management under specific pedoclimatic conditions and can increase the drought resistance of plants. Mathematical modelling can be used to find suitable root system traits for better water uptake efficiency, defined as the amount of water taken up per unit of root biomass. This approach requires long simulation times and a large number of simulation runs, since we test different root systems under different pedoclimatic conditions. In this work, we model water movement by the 1-dimensional Richards equation, with the soil hydraulic properties described according to the van Genuchten model. Climatic conditions serve as the upper boundary condition. The root system grows during the simulation period and water uptake is calculated via a sink term (after Tron et al. 2015). The goal of this work is to compare different free software tools based on different numerical schemes to solve the model. We compare implementations using DUMUX (based on finite volumes), Hydrus 1D (based on finite elements), and a Matlab implementation of Van Dam & Feddes (2000) (based on finite differences). We analyse the methods for accuracy, speed and flexibility. Using this model case study, we can clearly show the impact of various root system traits on water uptake efficiency. Furthermore, we can quantify the effect of common simplifications introduced in the modelling step, such as considering a static root system instead of a growing one, or a sink term based on root density instead of the full root hydraulic model (Javaux et al. 2008). References Tron, S., Bodner, G., Laio, F., Ridolfi, L., & Leitner, D. (2015). Can diversity in root architecture explain plant water use efficiency? A modeling study. Ecological Modelling, 312, 200-210. Van Dam, J. C., & Feddes, R. A. (2000). Numerical simulation of infiltration, evaporation and shallow

  4. Computational Efficient Upscaling Methodology for Predicting Thermal Conductivity of Nuclear Waste forms

    Energy Technology Data Exchange (ETDEWEB)

    Li, Dongsheng; Sun, Xin; Khaleel, Mohammad A.

    2011-09-28

    This study evaluated different upscaling methods for predicting the thermal conductivity of loaded nuclear waste form, a heterogeneous material system, and compared their efficiency and accuracy. Thermal conductivity of loaded nuclear waste form is an important input to the waste form Integrated Performance and Safety Code (IPSC). The effective thermal conductivity, obtained from microstructure information and the local thermal conductivity of the different components, is critical in predicting the life and performance of waste form during storage: how the heat generated during storage dissipates is directly related to thermal conductivity, which in turn determines the mechanical deformation behavior, corrosion resistance, and aging performance. Several methods, including the Taylor model, Sachs model, self-consistent model, and statistical upscaling models, were developed and implemented. In the absence of experimental data, predictions from the finite element method (FEM) were used as a reference to determine the accuracy of the different upscaling models. Micrographs from different waste loadings were used in the prediction of thermal conductivity. The results demonstrated that, in terms of efficiency, the bounding models (Taylor and Sachs) are better than the self-consistent model, the statistical upscaling method, and FEM. Balancing computational resources and accuracy, statistical upscaling is a computationally efficient method for predicting the effective thermal conductivity of nuclear waste form.
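For concreteness, the Taylor and Sachs bounding models mentioned in this record reduce, for thermal conductivity, to volume-weighted arithmetic and harmonic means of the phase conductivities. The sketch below uses made-up phase values, not measured waste-form data.

```python
# Taylor (arithmetic mean) gives an upper bound on effective conductivity;
# Sachs (harmonic mean) gives a lower bound. Phase values are illustrative.
def taylor_bound(fractions, conductivities):
    return sum(f * k for f, k in zip(fractions, conductivities))

def sachs_bound(fractions, conductivities):
    return 1.0 / sum(f / k for f, k in zip(fractions, conductivities))

phi = [0.7, 0.3]  # volume fractions: matrix phase, loaded waste phase
k = [1.1, 5.0]    # W/(m K), assumed values for illustration
print(sachs_bound(phi, k), taylor_bound(phi, k))  # lower <= effective <= upper
```

Any microstructure-aware estimate (self-consistent, statistical, or FEM) should fall between these two bounds, which makes them a cheap sanity check.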

  5. Prediction models in complex terrain

    DEFF Research Database (Denmark)

    Marti, I.; Nielsen, Torben Skov; Madsen, Henrik

    2001-01-01

    The objective of the work is to investigate the performance of HIRLAM in complex terrain when used as input to energy production forecasting models, and to develop a statistical model to adapt HIRLAM predictions to the wind farm. The features of the terrain, especially the topography, influence...

  6. Urban eco-efficiency and system dynamics modelling

    Energy Technology Data Exchange (ETDEWEB)

    Hradil, P., Email: petr.hradil@vtt.fi

    2012-06-15

    Assessment of urban development is generally based on static models of economic, social or environmental impacts. More advanced dynamic models have been used mostly for prediction of population and employment changes as well as for other macro-economic issues. This feasibility study was arranged to test the potential of system dynamic modelling in assessing eco-efficiency changes during urban development. (orig.)

  7. Predicting the efficiency of deposit removal during filter backwash

    African Journals Online (AJOL)

    The long-term performance of granular media filters used in drinking water ... Keywords: water treatment, sand filters, fluidised backwash, backwash efficiency, backwash modelling ... Filter Maintenance and Operations Guidance Manual.

  8. Specialization does not predict individual efficiency in an ant.

    Directory of Open Access Journals (Sweden)

    Anna Dornhaus

    2008-11-01

    The ecological success of social insects is often attributed to an increase in efficiency achieved through division of labor between workers in a colony. Much research has therefore focused on the mechanism by which a division of labor is implemented, i.e., on how tasks are allocated to workers. However, the important assumption that specialists are indeed more efficient at their work than generalist individuals--the "Jack-of-all-trades is master of none" hypothesis--has rarely been tested. Here, I quantify worker efficiency, measured as work completed per time, in four different tasks in the ant Temnothorax albipennis: honey and protein foraging, collection of nest-building material, and brood transports in a colony emigration. I show that individual efficiency is not predicted by how specialized workers were on the respective task. Worker efficiency is also not consistently predicted by that worker's overall activity or delay to begin the task. Even when only the worker's rank relative to nestmates in the same colony was used, specialization did not predict efficiency in three out of the four tasks, and more specialized workers actually performed worse than others in the fourth task (collection of sand grains). I also show that the above relationships, as well as median individual efficiency, do not change with colony size. My results demonstrate that in an ant species without morphologically differentiated worker castes, workers may nevertheless differ in their ability to perform different tasks. Surprisingly, this variation is not utilized by the colony--worker allocation to tasks is unrelated to their ability to perform them. What, then, are the adaptive benefits of behavioral specialization, and why do workers choose tasks without regard for whether they can perform them well? We are still far from an understanding of the adaptive benefits of division of labor in social insects.

  9. Predictive models of forest dynamics.

    Science.gov (United States)

    Purves, Drew; Pacala, Stephen

    2008-06-13

    Dynamic global vegetation models (DGVMs) have shown that forest dynamics could dramatically alter the response of the global climate system to increased atmospheric carbon dioxide over the next century. But there is little agreement between different DGVMs, making forest dynamics one of the greatest sources of uncertainty in predicting future climate. DGVM predictions could be strengthened by integrating the ecological realities of biodiversity and height-structured competition for light, facilitated by recent advances in the mathematics of forest modeling, ecological understanding of diverse forest communities, and the availability of forest inventory data.

  10. Predicting Recovery Potential for Individual Stroke Patients Increases Rehabilitation Efficiency.

    Science.gov (United States)

    Stinear, Cathy M; Byblow, Winston D; Ackerley, Suzanne J; Barber, P Alan; Smith, Marie-Claire

    2017-04-01

    Several clinical measures and biomarkers are associated with motor recovery after stroke, but none are used to guide rehabilitation for individual patients. The objective of this study was to evaluate the implementation of upper limb predictions in stroke rehabilitation, by combining clinical measures and biomarkers using the Predict Recovery Potential (PREP) algorithm. Predictions were provided for patients in the implementation group (n=110) and withheld from the comparison group (n=82). Predictions guided rehabilitation therapy focus for patients in the implementation group. The effects of predictive information on clinical practice (length of stay, therapist confidence, therapy content, and dose) were evaluated. Clinical outcomes (upper limb function, impairment and use, independence, and quality of life) were measured 3 and 6 months poststroke. The primary clinical practice outcome was inpatient length of stay. The primary clinical outcome was Action Research Arm Test score 3 months poststroke. Length of stay was 1 week shorter for the implementation group (11 days; 95% confidence interval, 9-13 days) than the comparison group (17 days; 95% confidence interval, 14-21 days; P=0.001), controlling for upper limb impairment, age, sex, and comorbidities. Therapists were more confident (P=0.004) and modified therapy content according to predictions for the implementation group. Providing predictions increased rehabilitation efficiency after stroke without compromising clinical outcome. URL: http://anzctr.org.au. Unique identifier: ACTRN12611000755932. © 2017 American Heart Association, Inc.

  11. Calibrated predictions for multivariate competing risks models.

    Science.gov (United States)

    Gorfine, Malka; Hsu, Li; Zucker, David M; Parmigiani, Giovanni

    2014-04-01

    Prediction models for time-to-event data play a prominent role in assessing the individual risk of a disease, such as cancer. Accurate disease prediction models provide an efficient tool for identifying individuals at high risk, and provide the groundwork for estimating the population burden and cost of disease and for developing patient care guidelines. We focus on risk prediction of a disease in which family history is an important risk factor that reflects inherited genetic susceptibility, shared environment, and common behavior patterns. In this work family history is accommodated using frailty models, with the main novel feature being allowing for competing risks, such as other diseases or mortality. We show through a simulation study that naively treating competing risks as independent right censoring events results in non-calibrated predictions, with the expected number of events overestimated. Discrimination performance is not affected by ignoring competing risks. Our proposed prediction methodologies correctly account for competing events, are very well calibrated, and easy to implement.

  12. Model-based control of fuel cells (2): Optimal efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Golbert, Joshua; Lewin, Daniel R. [PSE Research Group, Wolfson Department of Chemical Engineering, Technion IIT, Haifa 32000 (Israel)

    2007-11-08

    A dynamic PEM fuel cell model has been developed, taking into account spatial dependencies of voltage, current, material flows, and temperatures. The voltage, current, and therefore the efficiency depend on the temperature and other variables, which can be optimized on the fly to achieve optimal efficiency. In this paper, we demonstrate that a model predictive controller relying on a reduced-order approximation of the dynamic PEM fuel cell model can satisfy setpoint changes in the power demand while minimizing fuel consumption to maximize efficiency. The main conclusion of the paper is that by appropriate formulation of the objective function, reliable optimization of the performance of a PEM fuel cell can be performed, in which the main tunable parameters are the prediction and control horizons, V and U, respectively. We have demonstrated that increased fuel efficiency can be obtained at the expense of slower responses by increasing the values of these parameters. (author)

  13. Efficient Model-Based Exploration

    NARCIS (Netherlands)

    Wiering, M.A.; Schmidhuber, J.

    1998-01-01

    Model-Based Reinforcement Learning (MBRL) can greatly profit from using world models for estimating the consequences of selecting particular actions: an animat can construct such a model from its experiences and use it for computing rewarding behavior. We study the problem of collecting useful experiences

  14. Energy-Efficient Integration of Continuous Context Sensing and Prediction into Smartwatches

    Directory of Open Access Journals (Sweden)

    Reza Rawassizadeh

    2015-09-01

    As the availability and use of wearables increases, they are becoming a promising platform for context sensing and context analysis. Smartwatches are a particularly interesting platform for this purpose, as they offer salient advantages, such as their proximity to the human body. However, they also have limitations associated with their small form factor, such as processing power and battery life, which makes it difficult to simply transfer smartphone-based context sensing and prediction models to smartwatches. In this paper, we introduce an energy-efficient, generic, integrated framework for continuous context sensing and prediction on smartwatches. Our work extends previous approaches for context sensing and prediction on wrist-mounted wearables that perform predictive analytics outside the device. We offer a generic sensing module and a novel energy-efficient, on-device prediction module that is based on a semantic abstraction approach to convert sensor data into meaningful information objects, similar to human perception of a behavior. Through six evaluations, we analyze the energy efficiency of our framework modules, identify the optimal file structure for data access and demonstrate an increase in accuracy of prediction through our semantic abstraction method. The proposed framework is hardware independent and can serve as a reference model for implementing context sensing and prediction on small wearable devices beyond smartwatches, such as body-mounted cameras.

  15. Energy-Efficient Integration of Continuous Context Sensing and Prediction into Smartwatches.

    Science.gov (United States)

    Rawassizadeh, Reza; Tomitsch, Martin; Nourizadeh, Manouchehr; Momeni, Elaheh; Peery, Aaron; Ulanova, Liudmila; Pazzani, Michael

    2015-09-08

    As the availability and use of wearables increases, they are becoming a promising platform for context sensing and context analysis. Smartwatches are a particularly interesting platform for this purpose, as they offer salient advantages, such as their proximity to the human body. However, they also have limitations associated with their small form factor, such as processing power and battery life, which makes it difficult to simply transfer smartphone-based context sensing and prediction models to smartwatches. In this paper, we introduce an energy-efficient, generic, integrated framework for continuous context sensing and prediction on smartwatches. Our work extends previous approaches for context sensing and prediction on wrist-mounted wearables that perform predictive analytics outside the device. We offer a generic sensing module and a novel energy-efficient, on-device prediction module that is based on a semantic abstraction approach to convert sensor data into meaningful information objects, similar to human perception of a behavior. Through six evaluations, we analyze the energy efficiency of our framework modules, identify the optimal file structure for data access and demonstrate an increase in accuracy of prediction through our semantic abstraction method. The proposed framework is hardware independent and can serve as a reference model for implementing context sensing and prediction on small wearable devices beyond smartwatches, such as body-mounted cameras.

  16. An Efficient Approach for Real-Time Prediction of Rate of Penetration in Offshore Drilling

    Directory of Open Access Journals (Sweden)

    Xian Shi

    2016-01-01

    Predicting the rate of penetration (ROP) is critical for drilling optimization, because maximizing ROP can greatly reduce expensive drilling costs. In this work, the typical extreme learning machine (ELM) and an efficient learning model, upper-layer-solution-aware (USA), have been used in ROP prediction. Because formation type, rock mechanical properties, hydraulics, bit type and properties (weight on the bit and rotary speed), and mud properties are the most important parameters that affect ROP, they have been considered as the input parameters to predict ROP. The prediction model has been constructed using industrial reservoir data sets collected from an oil reservoir at Bohai Bay, China. The prediction accuracy of the model has been evaluated and compared with the commonly used conventional artificial neural network (ANN). The results indicate that the ANN, ELM, and USA models are all competent for ROP prediction, while both the ELM and USA models have the advantage of faster learning speed and better generalization performance. The simulation results show a promising prospect for ELM and USA in the field of ROP prediction in new oil and gas exploration, as they outperform the ANN model. Meanwhile, this work provides drilling engineers with more choices for ROP prediction according to their computation and accuracy demands.
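The extreme learning machine named in this record is simple to sketch: hidden-layer weights are drawn at random and only the output weights are solved for, in closed form, which is what gives ELM its fast training. The toy regression data and dimensions below are assumptions for illustration, not the paper's reservoir data.

```python
# Minimal ELM sketch: random hidden layer + least-squares output weights.
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50):
    W = rng.normal(size=(X.shape[1], n_hidden))  # random, never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None) # analytic output weights
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy stand-in for "drilling parameters -> ROP": a smooth 2-input function.
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
model = elm_fit(X, y)
err = np.mean((elm_predict(model, X) - y) ** 2)
print(err)  # small training MSE
```

Because training is a single linear solve rather than iterative backpropagation, the learning-speed advantage over a conventional ANN reported above follows directly from this structure.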

  17. Efficient Computational Model of Hysteresis

    Science.gov (United States)

    Shields, Joel

    2005-01-01

    A recently developed mathematical model of the output (displacement) versus the input (applied voltage) of a piezoelectric transducer accounts for hysteresis. For the sake of computational speed, the model is kept simple by neglecting the dynamic behavior of the transducer. Hence, the model applies to static and quasistatic displacements only. A piezoelectric transducer of the type to which the model applies is used as an actuator in a computer-based control system to effect fine position adjustments. Because the response time of the rest of such a system is usually much greater than that of a piezoelectric transducer, the model remains an acceptably close approximation for the purpose of control computations, even though the dynamics are neglected. The model (see Figure 1) represents an electrically parallel, mechanically series combination of backlash elements, each having a unique deadband width and output gain. The zeroth element in the parallel combination has zero deadband width and, hence, represents a linear component of the input/output relationship. The other elements, which have nonzero deadband widths, are used to model the nonlinear components of the hysteresis loop. The deadband widths and output gains of the elements are computed from experimental displacement-versus-voltage data. The hysteresis curve calculated by use of this model is piecewise linear beyond deadband limits.
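The parallel backlash structure described above can be sketched directly. The deadband widths and gains below are illustrative stand-ins, not the experimentally fitted values from the displacement-versus-voltage data.

```python
# A bank of backlash ("play") operators in parallel; the r=0 element is the
# linear term, and nonzero deadbands produce the hysteresis loop.
def play(u_seq, r, y0=0.0):
    """Backlash/play operator with deadband half-width r."""
    y, out = y0, []
    for u in u_seq:
        y = min(max(y, u - r), u + r)  # output follows input within +/- r
        out.append(y)
    return out

def hysteresis(u_seq, widths, gains):
    branches = [play(u_seq, r) for r in widths]
    return [sum(g * b[i] for g, b in zip(gains, branches))
            for i in range(len(u_seq))]

voltage = [0, 2, 4, 6, 4, 2, 0]  # quasistatic up-down ramp
disp = hysteresis(voltage, widths=[0.0, 1.0, 2.0], gains=[0.5, 0.3, 0.2])
print(disp)  # returning to 0 V leaves a residual displacement
```

The nonzero final value when the input returns to zero is exactly the remanence a hysteresis loop exhibits, which a purely linear model cannot reproduce.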

  18. Electrostatic ion thrusters - towards predictive modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)

    2014-02-15

    The development of electrostatic ion thrusters so far has mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This resulted in considerable effort regarding prototype design, construction and testing and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and space craft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules a new quality in the description of electrostatic thrusters can be reached. These open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  19. Caries risk assessment models in caries prediction

    Directory of Open Access Journals (Sweden)

    Amila Zukanović

    2013-11-01

    Objective. The aim of this research was to assess the efficiency of different multifactor models in caries prediction. Material and methods. Data from the questionnaire and objective examination of 109 examinees were entered into the Cariogram, PreViser and Caries-Risk Assessment Tool (CAT) multifactor risk assessment models. Caries risk was assessed with the help of all three models for each patient, classifying them as low-, medium- or high-risk patients. The development of new caries lesions over a period of three years [Decay Missing Filled Tooth (DMFT) increment = difference between Decay Missing Filled Tooth Surface (DMFTS) index at baseline and follow-up] provided for examination of the predictive capacity of the different multifactor models. Results. The data gathered showed that the different multifactor risk assessment models give significantly different results (Friedman test: Chi square = 100.073, p=0.000). Cariogram was the model which identified the majority of examinees as medium-risk patients (70%). The other two models were more radical in risk assessment, giving more unfavorable risk profiles for patients. In only 12% of the patients did the three multifactor models assess the risk in the same way. PreViser and CAT gave the same results in 63% of cases; the Wilcoxon test showed that there is no statistically significant difference in caries risk assessment between these two models (Z = -1.805, p=0.071). Conclusions. Evaluation of three different multifactor caries risk assessment models (Cariogram, PreViser and CAT) showed that only the Cariogram can successfully predict new caries development in 12-year-old Bosnian children.

  20. Explicit model predictive control accuracy analysis

    OpenAIRE

    Knyazev, Andrew; Zhu, Peizhen; Di Cairano, Stefano

    2015-01-01

    Model Predictive Control (MPC) can efficiently control constrained systems in real-time applications. MPC feedback law for a linear system with linear inequality constraints can be explicitly computed off-line, which results in an off-line partition of the state space into non-overlapped convex regions, with affine control laws associated to each region of the partition. An actual implementation of this explicit MPC in low cost micro-controllers requires the data to be "quantized", i.e. repre...

  1. An efficient link prediction index for complex military organization

    Science.gov (United States)

    Fan, Changjun; Liu, Zhong; Lu, Xin; Xiu, Baoxin; Chen, Qing

    2017-03-01

    Quality of information is crucial for decision-makers to judge battlefield situations and design the best operation plans; however, real intelligence data are often incomplete and noisy. Missing-link prediction methods and spurious-link identification algorithms can be applied if the complex military organization is modeled as a complex network, where nodes represent functional units and edges denote communication links. Traditional link prediction methods usually work well on homogeneous networks, but few do on heterogeneous ones, and a military network is a typical heterogeneous network, with different types of nodes and edges. In this paper, we propose a combined link prediction index considering both node-type effects and nodes' structural similarities, and demonstrate that it is remarkably superior to all 25 existing similarity-based methods, both in predicting missing links and in identifying spurious links, on a real military network dataset. We also investigated the algorithms' robustness under noisy environments and found that mistaken information is more misleading than incomplete information in military settings, which differs from recommendation systems; our method maintained the best performance under small noise. Since real military network intelligence must first be carefully checked, owing to its significance, and link prediction methods are then adopted to purify the network of the remaining latent noise, the method proposed here is applicable in real situations. Finally, as the FINC-E model, used here to describe complex military organizations, also suits many other social organizations, such as criminal networks and business organizations, our method has prospects in these areas for tasks such as detecting underground relationships between terrorists or predicting potential business markets for decision-makers.

  2. A Traffic Prediction Algorithm for Street Lighting Control Efficiency

    Directory of Open Access Journals (Sweden)

    POPA Valentin

    2013-01-01

    This paper presents the development of a traffic prediction algorithm that can be integrated into a street lighting monitoring and control system. The prediction algorithm must enable the reduction of energy costs and improve energy efficiency by decreasing the light intensity depending on the traffic level. The algorithm analyses and processes the information received at the command center based on the traffic level at different moments. The data are collected by means of the Doppler vehicle detection sensors integrated within the system. Two methods are used for the implementation of the algorithm: a neural network and a k-NN (k-Nearest Neighbor) prediction algorithm. For 500 training cycles, the mean square error of the neural network is 9.766, and for 500,000 training cycles the error falls to 0.877. In the case of the k-NN algorithm, the error increases from 8.24 for k=5 to 12.27 for 50 neighbors. In terms of root mean square error, the neural network ensures the highest performance level and can be integrated into a street lighting control system.
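A minimal version of the k-NN branch described above is easy to sketch. The feature encoding (day type scaled to outweigh hour differences) and the synthetic traffic levels below are assumptions for illustration only, not the paper's sensor data.

```python
# Predict the traffic level for a (day type, hour) query as the average of
# the k most similar historical observations.
import math

history = [  # (is_weekend, hour, traffic_level) -- synthetic records
    (0, 7, 70), (0, 8, 90), (0, 9, 80), (0, 22, 15),
    (1, 8, 30), (1, 9, 35), (1, 22, 20),
]

def knn_predict(is_weekend, hour, k=3):
    # Scale the day-type feature so weekday/weekend dominates hour distance.
    nearest = sorted(history,
                     key=lambda r: math.hypot(10 * (r[0] - is_weekend),
                                              r[1] - hour))[:k]
    return sum(r[2] for r in nearest) / k

print(knn_predict(0, 8))   # weekday morning rush: high predicted level
print(knn_predict(1, 22))  # weekend night: low predicted level
```

The control system would then dim the lamps when the predicted level for the coming interval falls below a threshold; the paper's finding that larger k hurts accuracy matches the intuition that averaging over dissimilar hours blurs the rush-hour peaks.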

  3. Efficient Estimation in Heteroscedastic Varying Coefficient Models

    Directory of Open Access Journals (Sweden)

    Chuanhua Wei

    2015-07-01

    This paper considers statistical inference for the heteroscedastic varying coefficient model. We propose an estimator for the coefficient functions that is more efficient than the conventional local-linear estimator. We establish asymptotic normality for the proposed estimator and conduct simulations to illustrate the performance of the proposed method.

  4. Modeling Fuel Efficiency: MPG or GPHM?

    Science.gov (United States)

    Bartkovich, Kevin G.

    2013-01-01

    The standard for measuring fuel efficiency in the U.S. has been miles per gallon (mpg). However, the Environmental Protection Agency's (EPA) switch in rating fuel efficiency from miles per gallon to gallons per hundred miles with the 2013 model-year cars leads to interesting and relevant mathematics with real-world connections. By modeling…
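The unit switch discussed above is easy to illustrate: gallons per hundred miles (GPHM) is linear in fuel consumed, so equal mpg gains do not mean equal fuel savings. A short worked example:

```python
# Convert mpg to gallons per 100 miles and compare two +10 mpg improvements.
def gphm(mpg):
    return 100.0 / mpg

print(gphm(10) - gphm(20))  # 10 -> 20 mpg saves 5.0 gallons per 100 miles
print(gphm(40) - gphm(50))  # 40 -> 50 mpg saves 0.5 gallons per 100 miles
```

The reciprocal relationship means mpg understates the value of improving inefficient vehicles, which is the real-world connection the EPA's switch makes visible.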

  5. Prediction of daily sea surface temperature using efficient neural networks

    Science.gov (United States)

    Patil, Kalpesh; Deo, Makaranad Chintamani

    2017-04-01

    Short-term prediction of sea surface temperature (SST) is commonly achieved through numerical models. Numerical approaches are more suitable for use over a large spatial domain than at a specific site, because of the difficulties involved in resolving various physical sub-processes at local levels. Therefore, for a given location, a data-driven approach such as neural networks may provide a better alternative. The application of neural networks, however, requires extensive experimentation with their architecture, training methods, and the formation of appropriate input-output pairs. A network trained in this manner can provide more attractive results if advances in network architecture are additionally considered. With this in mind, we propose the use of wavelet neural networks (WNNs) for the prediction of daily SST values. The prediction was carried out using WNNs over 5 days into the future at six different locations in the Indian Ocean. First, the accuracy of site-specific SST values predicted by a numerical model, ROMS, was assessed against in situ records; the results pointed to the need for alternative approaches. Traditional networks were then tried and, after noticing their poor performance, WNNs were used. This approach produced attractive forecasts when judged through various error statistics. When all locations were viewed together, the mean absolute error was within 0.18 to 0.32 °C for a 5-day-ahead forecast. The WNN approach was thus found to add value to the numerical method of SST prediction when location-specific information is desired.

  6. Prediction of daily sea surface temperature using efficient neural networks

    Science.gov (United States)

    Patil, Kalpesh; Deo, Makaranad Chintamani

    2017-02-01

    Short-term prediction of sea surface temperature (SST) is commonly achieved through numerical models. Numerical approaches are more suitable for use over a large spatial domain than at a specific site, because of the difficulties involved in resolving various physical sub-processes at local levels. Therefore, for a given location, a data-driven approach such as neural networks may provide a better alternative. The application of neural networks, however, requires extensive experimentation with their architecture, training methods, and the formation of appropriate input-output pairs. A network trained in this manner can provide more attractive results if advances in network architecture are additionally considered. With this in mind, we propose the use of wavelet neural networks (WNNs) for the prediction of daily SST values. The prediction was carried out using WNNs over 5 days into the future at six different locations in the Indian Ocean. First, the accuracy of site-specific SST values predicted by a numerical model, ROMS, was assessed against in situ records; the results pointed to the need for alternative approaches. Traditional networks were then tried and, after noticing their poor performance, WNNs were used. This approach produced attractive forecasts when judged through various error statistics. When all locations were viewed together, the mean absolute error was within 0.18 to 0.32 °C for a 5-day-ahead forecast. The WNN approach was thus found to add value to the numerical method of SST prediction when location-specific information is desired.

  7. Prediction of Protein Thermostability by an Efficient Neural Network Approach

    Directory of Open Access Journals (Sweden)

    Jalal Rezaeenour

    2016-10-01

    significantly improves the accuracy of ELM in the prediction of thermostable enzymes. ELM tends to require more hidden-layer neurons than conventional tuning-based learning algorithms. To overcome these drawbacks, the proposed approach uses a GA which optimizes the structure and the parameters of the ELM. In summary, optimization of ELM with GA results in an efficient prediction method; numerical experiments showed that our approach yields excellent results.
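
    The ELM idea summarized above (random hidden layer, analytic least-squares output weights) can be sketched as follows; a crude random search over hidden-layer sizes stands in for the paper's genetic algorithm, and the two-class data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-class data standing in for thermostable vs. mesophilic feature vectors.
n, d = 400, 10
X = rng.standard_normal((n, d))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
Xtr, ytr, Xte, yte = X[:300], y[:300], X[300:], y[300:]

def elm_fit(Xtr, ytr, n_hidden, seed):
    r = np.random.default_rng(seed)
    W = r.standard_normal((Xtr.shape[1], n_hidden))
    b = r.standard_normal(n_hidden)
    H = np.tanh(Xtr @ W + b)                        # random hidden layer
    beta, *_ = np.linalg.lstsq(H, ytr, rcond=None)  # analytic output weights
    return W, b, beta

def elm_predict(X, model):
    W, b, beta = model
    return (np.tanh(X @ W + b) @ beta > 0.5).astype(float)

# Stand-in for the paper's GA: a crude search over hidden-layer sizes and
# random seeds, keeping the configuration with the best training accuracy.
best, best_acc = None, -1.0
for n_hidden in (5, 20, 50):
    for seed in range(5):
        m = elm_fit(Xtr, ytr, n_hidden, seed)
        acc = (elm_predict(Xtr, m) == ytr).mean()
        if acc > best_acc:
            best, best_acc = m, acc

test_acc = (elm_predict(Xte, best) == yte).mean()
```

    A real GA would evolve the hidden weights themselves via crossover and mutation; the point here is only the two-stage structure that makes ELM cheap to retrain inside an outer optimizer.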

  8. PREDICT : model for prediction of survival in localized prostate cancer

    NARCIS (Netherlands)

    Kerkmeijer, Linda G W; Monninkhof, Evelyn M.; van Oort, Inge M.; van der Poel, Henk G.; de Meerleer, Gert; van Vulpen, Marco

    2016-01-01

    Purpose: Current models for prediction of prostate cancer-specific survival do not incorporate all present-day interventions. In the present study, a pre-treatment prediction model for patients with localized prostate cancer was developed.Methods: From 1989 to 2008, 3383 patients were treated with I

  9. Predictive Modeling of Cardiac Ischemia

    Science.gov (United States)

    Anderson, Gary T.

    1996-01-01

    The goal of the Contextual Alarms Management System (CALMS) project is to develop sophisticated models to predict the onset of clinical cardiac ischemia before it occurs. The system will continuously monitor cardiac patients and set off an alarm when they appear about to suffer an ischemic episode. The models take as inputs information from patient history and combine it with continuously updated information extracted from blood pressure, oxygen saturation and ECG lines. Expert system, statistical, neural network and rough set methodologies are then used to forecast the onset of clinical ischemia before it transpires, thus allowing early intervention aimed at preventing morbid complications from occurring. The models will differ from previous attempts by including combinations of continuous and discrete inputs. A commercial medical instrumentation and software company has invested funds in the project with a goal of commercialization of the technology. The end product will be a system that analyzes physiologic parameters and produces an alarm when myocardial ischemia is present. If proven feasible, a CALMS-based system will be added to existing heart monitoring hardware.

  10. Modelling fluidized catalytic cracking unit stripper efficiency

    Directory of Open Access Journals (Sweden)

    García-Dopico M.

    2015-01-01

    This paper presents our modelling of an FCCU stripper, following our earlier research. The model evaluates stripper efficiency against the most important variables: pressure, temperature, residence time and steam flow. Few models in the literature address the stripper, and those that do usually consider only one variable. Nevertheless, there is general agreement on the importance of the stripper in the overall process; the scarcity of models is likely due to the difficulty of obtaining a comprehensive one. The proposed model, by contrast, uses all the stripper variables, calculating efficiency on the basis of steam flow, pressure, residence time and temperature. The correctness of the model is then analysed, and we examine several possible scenarios, such as decreasing the steam flow, which is achieved by increasing the temperature in the stripper.

  11. On the Performance of the Predicted Energy Efficient Bee-Inspired Routing (PEEBR)

    Directory of Open Access Journals (Sweden)

    Imane M. A. Fahmy

    2014-05-01

    The Predictive Energy Efficient Bee Routing (PEEBR) is a swarm-intelligent reactive routing algorithm inspired by the bees' food-search behavior. PEEBR aims to optimize path selection in the Mobile Ad-hoc Network (MANET) based on energy consumption prediction. It uses the Artificial Bee Colony (ABC) optimization model and two types of bee agents: the scout bee for the exploration phase and the forager bee for the evaluation and exploitation phases. PEEBR considers the predicted residual battery power of the mobile nodes and their expected energy consumption for packet reception and relaying along each of the potential routing paths between a source and destination pair. In this research paper, the performance of the proposed and improved PEEBR algorithm is evaluated in terms of energy consumption efficiency and throughput, compared to two state-of-the-art ad-hoc routing protocols, the Ad-hoc On-demand Distance Vector (AODV) and the Destination-Sequenced Distance Vector (DSDV), for various MANET sizes.

  12. Prediction of Genomic Breeding Values for feed efficiency and related traits in pigs

    DEFF Research Database (Denmark)

    Do, Duy Ngoc; Janss, Luc; Strathe, Anders Bjerring

    Improvement of feed efficiency is essential in pig breeding and selection for reduced residual feed intake (RFI) is an option. Accuracy of genomic prediction (GP) relies on assumptions of genetic architecture of the traits. This study applied five different Bayesian Power LASSO (BPL) models...... with different power parameters to investigate genetic architecture of RFI, to predict genomic breeding values, and to partition genetic variances for different SNP groups. Data were 1272 Duroc pigs with both genotypic and phenotypic records for RFI as well as daily feed intake (DFI). The gene mapping confirmed...... and indicates their potentials for genomic prediction. Further work includes applying other GP methods for RFI and DFI as well as extending these methods to feed efficiency related traits such as feeding behaviour and body composition traits....

  13. Numerical weather prediction model tuning via ensemble prediction system

    Science.gov (United States)

    Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.

    2011-12-01

    This paper discusses a novel approach to tuning the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of the NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infrastructure and seems very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to improved forecast skill. Second, results with an ensemble prediction system based on an atmospheric general circulation model show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results of a tuning exercise with a global top-end NWP model are presented.
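
    Steps (i) and (ii) of EPPES can be illustrated on a deliberately tiny stand-in model (forecast = a·x with one tunable parameter a; all numbers invented), where each cycle's verification errors re-weight a Gaussian proposal distribution for the parameter:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for an NWP model: forecast = a * x with one tunable parameter a.
true_a, obs_noise = 2.5, 0.1

mu, sigma = 0.0, 3.0              # Gaussian proposal distribution for a
n_members = 30                    # ensemble size
for _ in range(60):               # forecast/verification cycles
    x = rng.uniform(1, 2)                         # "initial condition" of this cycle
    obs = true_a * x + obs_noise * rng.standard_normal()
    a_draws = rng.normal(mu, sigma, n_members)    # one parameter draw per member
    errors = a_draws * x - obs                    # verify each member's forecast
    logw = -0.5 * (errors / obs_noise) ** 2       # likelihood of each parameter
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Feed the relative merits back into the proposal distribution.
    mu = np.sum(w * a_draws)
    sigma = max(np.sqrt(np.sum(w * (a_draws - mu) ** 2)), 0.05)
```

    The floor on sigma keeps the proposal from collapsing prematurely; after a few dozen cycles the proposal mean settles near the true parameter value, which is the behaviour EPPES exhibits in the Lorenz-95 tests.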

  14. Predicting Efficient Antenna Ligands for Tb(III) Emission

    Energy Technology Data Exchange (ETDEWEB)

    Samuel, Amanda P.S.; Xu, Jide; Raymond, Kenneth

    2008-10-06

    A series of highly luminescent Tb(III) complexes of para-substituted 2-hydroxyisophthalamide ligands (5LI-IAM-X) has been prepared (X = H, CH₃, (C=O)NHCH₃, SO₃⁻, NO₂, OCH₃, F, Cl, Br) to probe the effect of substituting the isophthalamide ring on ligand and Tb(III) emission, in order to establish a method for predicting the effects of chromophore modification on Tb(III) luminescence. The energies of the ligand singlet and triplet excited states are found to increase linearly with the π-withdrawing ability of the substituent. The experimental results are supported by time-dependent density functional theory (TD-DFT) calculations performed on model systems, which predict ligand singlet and triplet energies within ≈5% of the experimental values. The quantum yield (Φ) of the Tb(III) complex increases with the triplet energy of the ligand, which is in part due to decreased non-radiative deactivation caused by thermal repopulation of the triplet. Together, the experimental and theoretical results serve as a predictive tool that can guide the synthesis of ligands used to sensitize lanthanide luminescence.

  15. Efficient Modelling and Generation of Markov Automata

    NARCIS (Netherlands)

    Timmer, Mark; Katoen, Joost-Pieter; Pol, van de Jaco; Stoelinga, Mariëlle; Koutny, M.; Ulidowski, I.

    2012-01-01

    This paper introduces a framework for the efficient modelling and generation of Markov automata. It consists of (1) the data-rich process-algebraic language MAPA, allowing concise modelling of systems with nondeterminism, probability and Markovian timing; (2) a restricted form of the language, the M

  16. Statistical modelling for ship propulsion efficiency

    DEFF Research Database (Denmark)

    Petersen, Jóan Petur; Jacobsen, Daniel J.; Winther, Ole

    2012-01-01

    This paper presents a state-of-the-art systems approach to statistical modelling of fuel efficiency in ship propulsion, and also a novel and publicly available data set of high quality sensory data. Two statistical model approaches are investigated and compared: artificial neural networks...

  17. Efficient modeling of vector hysteresis using fuzzy inference systems

    Energy Technology Data Exchange (ETDEWEB)

    Adly, A.A. [Electrical Power and Machines Department, Faculty of Engineering, Cairo University, Giza 12211 (Egypt)], E-mail: adlyamr@gmail.com; Abd-El-Hafiz, S.K. [Engineering Mathematics Department, Faculty of Engineering, Cairo University, Giza 12211 (Egypt)], E-mail: sabdelhafiz@gmail.com

    2008-10-01

    Vector hysteresis models have always been regarded as important tools by which multi-dimensional magnetic field-media interactions may be predicted. In the past, considerable effort has been focused on mathematical modeling methodologies for vector hysteresis. This paper presents an efficient approach based upon fuzzy inference systems for modeling vector hysteresis. The computational efficiency of the proposed approach stems from the fact that the basic non-local memory Preisach-type hysteresis model is approximated by a local-memory model. The proposed low-computational-cost methodology can be easily integrated in field calculation packages involving massive multi-dimensional discretizations. Details of the modeling methodology and its experimental testing are presented.

  18. Return Predictability, Model Uncertainty, and Robust Investment

    DEFF Research Database (Denmark)

    Lukas, Manuel

    Stock return predictability is subject to great uncertainty. In this paper we use the model confidence set approach to quantify uncertainty about expected utility from investment, accounting for potential return predictability. For monthly US data and six representative return prediction models, we...

  19. Predictive Model Assessment for Count Data

    Science.gov (United States)

    2007-09-05

    critique count regression models for patent data, and assess the predictive performance of Bayesian age-period-cohort models for larynx cancer counts in Germany, 1998-2002. We consider a recent suggestion by Baker and...

  20. Efficient numerical integrators for stochastic models

    CERN Document Server

    De Fabritiis, G; Español, P; Coveney, P V

    2006-01-01

    The efficient simulation of models defined in terms of stochastic differential equations (SDEs) depends critically on an efficient integration scheme. In this article, we investigate under which conditions the integration schemes for general SDEs can be derived using the Trotter expansion. It follows that, in the stochastic case, some care is required in splitting the stochastic generator. We test the Trotter integrators on an energy-conserving Brownian model and derive a new numerical scheme for dissipative particle dynamics. We find that the stochastic Trotter scheme provides a mathematically correct and easy-to-use method which should find wide applicability.
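
    The splitting idea can be illustrated on an Ornstein-Uhlenbeck process, where each Trotter factor (deterministic decay, stochastic kick) can be integrated exactly; this is a generic sketch of a splitting scheme, not the dissipative-particle-dynamics integrator derived in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Ornstein-Uhlenbeck process dx = -gamma*x dt + sigma dW, integrated with a
# Strang (Trotter-like) splitting: half-step of exact deterministic decay,
# full stochastic kick, half-step of decay. Stationary variance = sigma^2/(2*gamma).
gamma, sigma, h = 1.0, 1.0, 0.01
decay_half = np.exp(-gamma * h / 2)

M, steps = 4000, 1000             # many independent trajectories, vectorized
x = np.zeros(M)
for _ in range(steps):
    x *= decay_half                                    # A: deterministic half-step
    x += sigma * np.sqrt(h) * rng.standard_normal(M)   # B: stochastic kick
    x *= decay_half                                    # A: deterministic half-step

empirical_var = x.var()
exact_var = sigma**2 / (2 * gamma)
```

    Because both substeps are integrated exactly, the splitting reproduces the stationary variance to O(h^2), whereas a naive Euler-Maruyama step carries an O(h) bias; this is the kind of property the paper verifies on its energy-conserving Brownian model.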

  1. Predictive modeling of low solubility semiconductor alloys

    Science.gov (United States)

    Rodriguez, Garrett V.; Millunchick, Joanna M.

    2016-09-01

    GaAsBi is of great interest for applications in high efficiency optoelectronic devices due to its highly tunable bandgap. However, the experimental growth of high Bi content films has proven difficult. Here, we model GaAsBi film growth using a kinetic Monte Carlo simulation that explicitly takes cation and anion reactions into account. The unique behavior of Bi droplets is explored, and a sharp decrease in Bi content upon Bi droplet formation is demonstrated. The high mobility of simulated Bi droplets on GaAsBi surfaces is shown to produce phase separated Ga-Bi droplets as well as depressions on the film surface. A phase diagram for a range of growth rates that predicts both Bi content and droplet formation is presented to guide the experimental growth of high Bi content GaAsBi films.
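
    The paper's simulation explicitly tracks cation and anion reactions; a far simpler Gillespie-style kinetic Monte Carlo loop (adsorption/desorption on a toy lattice with invented rates, not the GaAsBi chemistry) shows the basic event-selection machinery such simulations build on:

```python
import numpy as np

rng = np.random.default_rng(4)

# Minimal lattice kinetic Monte Carlo: adsorption/desorption on N sites.
# Detailed balance gives an equilibrium coverage of k_ads / (k_ads + k_des).
N, k_ads, k_des = 200, 1.0, 3.0
occupied = np.zeros(N, dtype=bool)

t, t_end = 0.0, 50.0
cov_sum, n_samples = 0.0, 0
while t < t_end:
    n_occ = occupied.sum()
    r_ads = k_ads * (N - n_occ)     # total adsorption rate (over empty sites)
    r_des = k_des * n_occ           # total desorption rate (over occupied sites)
    r_tot = r_ads + r_des
    t += rng.exponential(1.0 / r_tot)        # exponential waiting time
    if rng.random() < r_ads / r_tot:         # choose the event class by its rate
        occupied[rng.choice(np.flatnonzero(~occupied))] = True
    else:
        occupied[rng.choice(np.flatnonzero(occupied))] = False
    if t > 10.0:                             # sample only after equilibration
        cov_sum += occupied.mean()
        n_samples += 1

coverage = cov_sum / n_samples
```

    A growth simulation like the paper's adds site-dependent rates (deposition, diffusion, droplet attachment), but the select-event/advance-clock loop is the same.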

  2. Nonlinear chaotic model for predicting storm surges

    Directory of Open Access Journals (Sweden)

    M. Siek

    2010-09-01

    This paper addresses the use of methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables. We implemented univariate and multivariate chaotic models with direct and multi-step prediction techniques and optimized these models using an exhaustive search method. The built models were tested on predicting storm surge dynamics for different stormy conditions in the North Sea, and compared to neural network models. The results show that the chaotic models can generally provide reliable and accurate short-term storm surge predictions.
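
    The core of such a chaotic model (phase-space reconstruction followed by an adaptive local model over dynamical neighbours) can be sketched with a logistic-map series standing in for the surge observable; the embedding dimension, lag, and neighbour count here are illustrative choices, not the paper's optimized settings:

```python
import numpy as np

# Chaotic series from the logistic map (a stand-in for surge observables).
x = np.empty(3000)
x[0] = 0.3
for i in range(1, x.size):
    x[i] = 4 * x[i - 1] * (1 - x[i - 1])

# Phase-space reconstruction: delay vectors (x_t, x_{t+1}) with unit lag.
m = 2
emb = np.column_stack([x[i:len(x) - m + i] for i in range(m)])
targets = x[m:]                      # the value that follows each delay vector

train, n_test = 2500, 100

def local_predict(q, k=5):
    # Adaptive local model: average the successors of the k dynamical neighbours.
    d = np.linalg.norm(emb[:train] - q, axis=1)
    nn = np.argsort(d)[:k]
    return targets[:train][nn].mean()

preds = np.array([local_predict(emb[train + j]) for j in range(n_test)])
truth = targets[train:train + n_test]
rmse = np.sqrt(np.mean((preds - truth) ** 2))
```

    Despite the series being chaotic, one-step predictions are accurate because nearby states in the reconstructed phase space evolve similarly over short horizons, which is exactly the property the surge models exploit.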

  3. Nonlinear chaotic model for predicting storm surges

    NARCIS (Netherlands)

    Siek, M.; Solomatine, D.P.

    This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by the adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables.

  4. Multi-prediction particle filter for efficient parallelized implementation

    Directory of Open Access Journals (Sweden)

    Chu Chun-Yuan

    2011-01-01

    Particle filter (PF) is an emerging signal processing methodology which can effectively deal with nonlinear and non-Gaussian signals through a sample-based approximation of the state probability density function. Particle generation in the PF is a data-independent procedure and can be implemented in parallel. However, the resampling procedure in the PF is sequential in nature and difficult to parallelize. By Amdahl's law, the sequential portion of a task limits the maximum speed-up of a parallelized implementation. Moreover, a large particle number is usually required to obtain an accurate estimation, and the complexity of the resampling procedure is highly related to the number of particles. In this article, we propose a multi-prediction (MP) framework with two selection approaches. The proposed MP framework can reduce the particle number required for a target estimation accuracy, and the sequential operation of the resampling can be reduced. Moreover, the overhead of the MP framework can easily be compensated by parallel implementation. The proposed MP-PF alleviates the global sequential operation by increasing the local parallel computation. In addition, the MP-PF is very suitable for the multi-core graphics processing unit (GPU) platform, a popular parallel processing architecture. We give prototypical implementations of the MP-PFs on a multi-core GPU platform. For the classic bearing-only tracking experiments, the proposed MP-PF can be 25.1 and 15.3 times faster than the sequential importance resampling PF with 10,000 and 20,000 particles, respectively. Hence, the proposed MP-PF can enhance the efficiency of parallelization.
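
    For reference, a baseline sequential importance resampling PF on a toy one-dimensional random-walk model (not the proposed MP-PF, and not the bearing-only setup) makes explicit which steps are parallelizable and which step is the sequential bottleneck:

```python
import numpy as np

rng = np.random.default_rng(5)

# Baseline SIR particle filter on a toy 1-D model: random-walk state,
# noisy observations y = x + v.
T, n_particles = 50, 2000
q_std, r_std = 0.5, 1.0

x_true = np.cumsum(q_std * rng.standard_normal(T))    # ground-truth states
y = x_true + r_std * rng.standard_normal(T)           # noisy observations

particles = rng.standard_normal(n_particles)
estimates = np.zeros(T)
for t in range(T):
    # Prediction: propagate particles through the state model (parallelizable).
    particles = particles + q_std * rng.standard_normal(n_particles)
    # Update: weight particles by the observation likelihood (parallelizable).
    logw = -0.5 * ((y[t] - particles) / r_std) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    estimates[t] = np.sum(w * particles)
    # Resampling: draw a new equally-weighted set (sequential in nature --
    # this is the step the MP framework aims to shrink).
    particles = particles[rng.choice(n_particles, n_particles, p=w)]

rmse = np.sqrt(np.mean((estimates - x_true) ** 2))
```

    On this linear-Gaussian toy the filter tracks the state with an error well below the raw observation noise; the MP framework's contribution is to reach comparable accuracy with fewer particles, reducing the cost of the resampling step.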

  5. How to Establish Clinical Prediction Models

    Directory of Open Access Journals (Sweden)

    Yong-ho Lee

    2016-03-01

    A clinical prediction model can be applied to several challenging clinical scenarios: screening high-risk individuals for asymptomatic disease, predicting future events such as disease or death, and assisting medical decision-making and health education. Despite the impact of clinical prediction models on practice, prediction modeling is a complex process requiring careful statistical analyses and sound clinical judgement. Although there is no definite consensus on the best methodology for model development and validation, a few recommendations and checklists have been proposed. In this review, we summarize five steps for developing and validating a clinical prediction model: preparation for establishing clinical prediction models; dataset selection; handling variables; model generation; and model evaluation and validation. We also review several studies that detail methods for developing clinical prediction models, with comparable examples from real practice. After model development and rigorous validation in relevant settings, possibly with evaluation of utility/usability and fine-tuning, good models can be ready for use in practice. We anticipate that this framework will revitalize predictive and prognostic research in endocrinology, leading to active applications in real clinical practice.
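
    A minimal end-to-end instance of the develop-then-validate workflow described above, using a synthetic dataset, a logistic model fitted by Newton-Raphson, and discrimination checked by AUC on a held-out set (all coefficients and sizes invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic cohort: three candidate predictors, binary outcome generated
# from an (invented) true logistic relationship.
n, d = 1000, 3
X = rng.standard_normal((n, d))
logit = 0.8 * X[:, 0] - 0.5 * X[:, 1]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Dataset selection: split into development and validation sets.
Xd, yd, Xv, yv = X[:700], y[:700], X[700:], y[700:]

# Model generation: logistic regression by Newton-Raphson (IRLS).
Xd1 = np.column_stack([np.ones(len(Xd)), Xd])
beta = np.zeros(d + 1)
for _ in range(25):
    p = 1 / (1 + np.exp(-Xd1 @ beta))
    W = p * (1 - p)
    grad = Xd1.T @ (yd - p)
    H = Xd1.T @ (Xd1 * W[:, None])
    beta += np.linalg.solve(H, grad)

# Model evaluation: AUC on the held-out set
# (probability that a case outranks a non-case).
Xv1 = np.column_stack([np.ones(len(Xv)), Xv])
scores = Xv1 @ beta
pos, neg = scores[yv == 1], scores[yv == 0]
auc = np.mean(pos[:, None] > neg[None, :]) + 0.5 * np.mean(pos[:, None] == neg[None, :])
```

    Real studies add the steps this sketch skips: variable handling (missing data, coding), calibration assessment alongside discrimination, and external validation in an independent cohort.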

  6. Efficient sampling in fragment-based protein structure prediction using an estimation of distribution algorithm.

    Directory of Open Access Journals (Sweden)

    David Simoncini

    Fragment assembly is a powerful method of protein structure prediction that builds protein models from a pool of candidate fragments taken from known structures. Stochastic sampling is subsequently used to refine the models. The structures are first represented as coarse-grained models and then as all-atom models for computational efficiency. Many models have to be generated independently due to the stochastic nature of the sampling methods used to search for the global minimum in a complex energy landscape. In this paper we present EdaFold(AA), a fragment-based approach which shares information between the generated models and steers the search towards native-like regions. A distribution over fragments is estimated from a pool of low-energy all-atom models. This iteratively refined distribution is used to guide the selection of fragments during the building of models for subsequent rounds of structure prediction. The use of an estimation of distribution algorithm enabled EdaFold(AA) to reach lower energy levels and to generate a higher percentage of near-native models. [Formula: see text] uses an all-atom energy function and produces models with atomic resolution. We observed an improvement in energy-driven blind selection of models on a benchmark of EdaFold(AA) in comparison with the [Formula: see text] AbInitioRelax protocol.

  7. MULTI MODEL DATA MINING APPROACH FOR HEART FAILURE PREDICTION

    Directory of Open Access Journals (Sweden)

    Priyanka H U

    2016-09-01

    Developing predictive modelling solutions for risk estimation is extremely challenging in health-care informatics. Risk estimation involves integrating heterogeneous clinical sources that have different representations from different health-care providers, making the task increasingly complex. Such sources are typically voluminous and diverse, and change significantly over time. Therefore, distributed and parallel computing tools, collectively termed big data tools, are needed to synthesize the data and assist the physician in making the right clinical decisions. In this work we propose a multi-model predictive architecture, a novel approach for combining the predictive ability of multiple models for better prediction accuracy. We demonstrate the effectiveness and efficiency of the proposed work on data from the Framingham Heart Study. Results show that the proposed multi-model predictive architecture is able to provide better accuracy than the best single-model approach. By modelling the error of the predictive models we are able to choose a subset of models which yields accurate results. More information was modelled into the system by multi-level mining, which resulted in enhanced predictive accuracy.
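
    The idea of modelling each model's error and weighting predictions accordingly can be sketched with synthetic base-model outputs (three stand-in models whose error levels are invented); better models receive larger inverse-error weights:

```python
import numpy as np

rng = np.random.default_rng(6)

# Multi-model combination sketch: weight each base model's prediction by its
# inverse validation error, so that modelling the error favours better models.
n_val, n_test = 200, 200
y_val = rng.standard_normal(n_val)
y_test = rng.standard_normal(n_test)

def noisy(y, s):
    # Stand-in for a trained base model: truth corrupted by model error s.
    return y + s * rng.standard_normal(y.size)

sigmas = [0.3, 0.6, 1.2]                     # invented error levels
val_preds = [noisy(y_val, s) for s in sigmas]
test_preds = [noisy(y_test, s) for s in sigmas]

mse = np.array([np.mean((p - y_val) ** 2) for p in val_preds])
w = (1 / mse) / np.sum(1 / mse)              # inverse-error weights, sum to 1

combined = sum(wi * p for wi, p in zip(w, test_preds))
combined_mse = np.mean((combined - y_test) ** 2)
single_mses = [np.mean((p - y_test) ** 2) for p in test_preds]
```

    With independent model errors, the inverse-variance combination has a lower expected error than any individual model; the paper's architecture goes further by pruning the model subset and adding multi-level mining.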

  8. Internal quantum efficiency modeling of silicon photodiodes.

    Science.gov (United States)

    Gentile, T R; Brown, S W; Lykke, K R; Shaw, P S; Woodward, J T

    2010-04-01

    Results are presented for modeling of the shape of the internal quantum efficiency (IQE) versus wavelength for silicon photodiodes in the 400 nm to 900 nm wavelength range. The IQE data are based on measurements of the external quantum efficiencies of three transmission optical trap detectors using an extensive set of laser wavelengths, along with the transmittance of the traps. We find that a simplified version of a previously reported IQE model fits the data with an accuracy of better than 0.01%. These results provide an important validation of the National Institute of Standards and Technology (NIST) spectral radiant power responsivity scale disseminated through the NIST Spectral Comparator Facility, as well as those scales disseminated by other National Metrology Institutes who have employed the same model.

  9. Model Predictive Control of a Wave Energy Converter

    DEFF Research Database (Denmark)

    Andersen, Palle; Pedersen, Tom Søndergård; Nielsen, Kirsten Mølgaard

    2015-01-01

    In this paper reactive control and Model Predictive Control (MPC) for a Wave Energy Converter (WEC) are compared. The analysis is based on a WEC from Wave Star A/S designed as a point absorber. The model predictive controller uses wave models based on the dominating sea states combined with a model...... connecting undisturbed wave sequences to sequences of torque. Losses in the conversion from mechanical to electrical power are taken into account in two ways. Conventional reactive controllers are tuned for each sea state with the assumption that the converter has the same efficiency back and forth. MPC...

  10. ACO model should encourage efficient care delivery.

    Science.gov (United States)

    Toussaint, John; Krueger, David; Shortell, Stephen M; Milstein, Arnold; Cutler, David M

    2015-09-01

    The independent Office of the Actuary for CMS certified that the Pioneer ACO model has met the stringent criteria for expansion to a larger population. Significant savings have accrued and quality targets have been met, so the program as a whole appears to be working. Ironically, 13 of the initial 32 enrollees have left. We attribute this to the design of the ACO models which inadequately support efficient care delivery. Using Bellin-ThedaCare Healthcare Partners as an example, we will focus on correctible flaws in four core elements of the ACO payment model: finance spending and targets, attribution, and quality performance.

  11. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    is a realization of a continuous-discrete multivariate stochastic transfer function model. The proposed prediction error-methods are demonstrated for a SISO system parameterized by the transfer functions with time delays of a continuous-discrete-time linear stochastic system. The simulations for this case suggest......Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which...... computational resources. The identification method is suitable for predictive control....

  12. Modeling Dynamic Systems with Efficient Ensembles of Process-Based Models.

    Science.gov (United States)

    Simidjievski, Nikola; Todorovski, Ljupčo; Džeroski, Sašo

    2016-01-01

    Ensembles are a well established machine learning paradigm, leading to accurate and robust models, predominantly applied to predictive modeling tasks. Ensemble models comprise a finite set of diverse predictive models whose combined output is expected to yield an improved predictive performance as compared to an individual model. In this paper, we propose a new method for learning ensembles of process-based models of dynamic systems. The process-based modeling paradigm employs domain-specific knowledge to automatically learn models of dynamic systems from time-series observational data. Previous work has shown that ensembles based on sampling observational data (i.e., bagging and boosting) significantly improve predictive performance of process-based models. However, this improvement comes at the cost of a substantial increase of the computational time needed for learning. To address this problem, the paper proposes a method that aims at efficiently learning ensembles of process-based models, while maintaining their accurate long-term predictive performance. This is achieved by constructing ensembles by sampling domain-specific knowledge instead of sampling data. We apply the proposed method to, and evaluate its performance on, a set of problems of automated predictive modeling in three lake ecosystems, using a library of process-based knowledge for modeling population dynamics. The experimental results identify the optimal design decisions regarding the learning algorithm. The results also show that the proposed ensembles yield significantly more accurate predictions of population dynamics as compared to individual process-based models. Finally, while their predictive performance is comparable to that of ensembles obtained with the state-of-the-art methods of bagging and boosting, they are substantially more efficient.

  13. A Stochastic Nonlinear Water Wave Model for Efficient Uncertainty Quantification

    CERN Document Server

    Bigoni, Daniele; Eskilsson, Claes

    2014-01-01

    A major challenge in next-generation industrial applications is to improve numerical analysis by quantifying uncertainties in predictions. In this work we present a stochastic formulation of a fully nonlinear and dispersive potential-flow water wave model for the probabilistic description of the evolution of waves. This model is discretized using the Stochastic Collocation Method (SCM), which provides an approximate surrogate of the model. This can be used to accurately and efficiently estimate the probability distribution of the unknown time-dependent stochastic solution after the forward propagation of uncertainties. We revisit experimental benchmarks often used for validation of deterministic water wave models. We do this using a fully nonlinear and dispersive model and show how uncertainty in the model input can influence the model output. Based on numerical experiments and assumed uncertainties in boundary data, our analysis reveals that some of the known discrepancies from deterministic simulation in compa...
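
    The Stochastic Collocation Method evaluates the deterministic model at quadrature nodes of the input distribution and reassembles the output statistics. A scalar sketch with an invented nonlinear response and a Gaussian uncertain parameter (standing in for the wave model and its uncertain boundary data):

```python
import numpy as np

# Stochastic collocation sketch: propagate a Gaussian uncertain parameter
# through a nonlinear response f using Gauss-Hermite quadrature.
mu, sigma = 1.0, 0.3

def model(a):
    # Toy nonlinear model output (stand-in for a wave-model quantity of interest).
    return np.sin(a) + a**2

# Collocation nodes/weights for E[f(a)], a ~ N(mu, sigma^2).
nodes, weights = np.polynomial.hermite_e.hermegauss(7)   # probabilists' Hermite
a_nodes = mu + sigma * nodes
norm = np.sqrt(2 * np.pi)                                # weights sum to sqrt(2*pi)
mean_sc = np.sum(weights * model(a_nodes)) / norm
var_sc = np.sum(weights * model(a_nodes) ** 2) / norm - mean_sc**2

# Monte Carlo reference: many more model evaluations for the same answer.
rng = np.random.default_rng(7)
samples = model(mu + sigma * rng.standard_normal(200_000))
mean_mc = samples.mean()
```

    Seven model evaluations match the Monte Carlo mean to within sampling error; this efficiency gain is what makes SCM attractive when each "model evaluation" is an expensive nonlinear wave simulation.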

  14. A burnout prediction model based around char morphology

    Energy Technology Data Exchange (ETDEWEB)

    T. Wu; E. Lester; M. Cloke [University of Nottingham, Nottingham (United Kingdom). Nottingham Energy and Fuel Centre

    2005-07-01

    Poor burnout in a coal-fired power plant carries marked penalties in the form of reduced energy efficiency and elevated waste material that cannot be utilized. The prediction of coal combustion behaviour in a furnace is of great significance, providing valuable information not only for process optimization but also for coal buyers in the international market. Coal combustion models have been developed that can make predictions about burnout behaviour and burnout potential. Most of these kinetic models require standard parameters such as volatile content, particle size and assumed char porosity in order to make a burnout prediction. This paper presents a new model called the Char Burnout Model (ChB) that also uses detailed information about char morphology in its prediction. The model can use data input from one of two sources, both derived from image analysis techniques: the first from individual analysis and characterization of real char types using an automated program, the second from predicted char types based on data collected during the automated image analysis of coal particles. Modelling results were compared with a different carbon burnout kinetic model and with burnout data from re-firing the chars in a drop tube furnace operating at 1300°C and 5% oxygen across several residence times. The improved agreement between the ChB model and the DTF experimental data shows that the inclusion of char morphology in combustion models can improve model predictions. 27 refs., 4 figs., 4 tabs.

  15. Case studies in archaeological predictive modelling

    NARCIS (Netherlands)

    Verhagen, Jacobus Wilhelmus Hermanus Philippus

    2007-01-01

    In this thesis, a collection of papers is put together dealing with various quantitative aspects of predictive modelling and archaeological prospection. Among the issues covered are the effects of survey bias on the archaeological data used for predictive modelling, and the complexities of testing p

  16. Childhood asthma prediction models: a systematic review.

    Science.gov (United States)

    Smit, Henriette A; Pinart, Mariona; Antó, Josep M; Keil, Thomas; Bousquet, Jean; Carlsen, Kai H; Moons, Karel G M; Hooft, Lotty; Carlsen, Karin C Lødrup

    2015-12-01

    Early identification of children at risk of developing asthma at school age is crucial, but the usefulness of childhood asthma prediction models in clinical practice is still unclear. We systematically reviewed all existing prediction models to identify preschool children with asthma-like symptoms at risk of developing asthma at school age. Studies were included if they developed a new prediction model or updated an existing model in children aged 4 years or younger with asthma-like symptoms, with assessment of asthma done between 6 and 12 years of age. 12 prediction models were identified in four types of cohorts of preschool children: those with health-care visits, those with parent-reported symptoms, those at high risk of asthma, or children in the general population. Four basic models included non-invasive, easy-to-obtain predictors only, notably family history, allergic disease comorbidities or precursors of asthma, and severity of early symptoms. Eight extended models included additional clinical tests, mostly specific IgE determination. Some models could better predict asthma development and other models could better rule out asthma development, but the predictive performance of no single model stood out in both aspects simultaneously. This finding suggests that there is a large proportion of preschool children with wheeze for which prediction of asthma development is difficult.

  17. Real-Time Optimization for Economic Model Predictive Control

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Edlund, Kristian; Frison, Gianluca

    2012-01-01

    In this paper, we develop an efficient homogeneous and self-dual interior-point method for the linear programs arising in economic model predictive control. To exploit structure in the optimization problems, the algorithm employs a highly specialized Riccati iteration procedure. Simulations show...
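
    The Riccati iteration the authors exploit is the structural core of fast MPC solvers. The sketch below is not the paper's interior-point method; it is the textbook backward Riccati recursion for an unconstrained linear-quadratic problem, with dynamics, costs and horizon invented for illustration.

```python
import numpy as np

# Backward Riccati recursion for x+ = A x + B u with stage cost
# x'Qx + u'Ru over horizon N: compute value matrices P and gains K.
# All system matrices here are invented toy values (a double integrator).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])
N = 50

P = Q.copy()
for _ in range(N):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # gain at this stage
    P = Q + A.T @ P @ (A - B @ K)                        # Riccati update

# The closed-loop matrix A - B K should be stable (spectral radius < 1).
rho = max(abs(np.linalg.eigvals(A - B @ K)))
print(round(rho, 3))
```

    Structure-exploiting interior-point solvers for MPC use recursions of this backward-forward form so that the work per iteration grows linearly, rather than cubically, with the horizon length.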

  18. A Composite Model Predictive Control Strategy for Furnaces

    Institute of Scientific and Technical Information of China (English)

    Hao Zang; Hongguang Li; Jingwen Huang; Jia Wang

    2014-01-01

    Tube furnaces are essential, energy-intensive facilities in petrochemical plants. Operational optimization of furnaces can not only improve product quality but also reduce energy consumption and exhaust emissions. Motivated by this, the paper presents a composite model predictive control (CMPC) strategy which, taking advantage of distributed model predictive control architectures, combines tracking nonlinear model predictive control and economic nonlinear model predictive control metrics to keep the process running smoothly while optimizing operating conditions. The controllers, connected by two kinds of communication networks, are easy to organize and maintain, and robust to process disturbances. A fast solution algorithm combining interior-point solvers and Newton's method is adapted to the CMPC realization, with reasonable CPU computing time suitable for online applications. Simulation of an industrial case demonstrates that the proposed approach can ensure stable operation of furnaces, improve heat efficiency, and reduce emissions effectively.

  19. Assessment of the genomic prediction accuracy for feed efficiency traits in meat-type chickens.

    Science.gov (United States)

    Liu, Tianfei; Luo, Chenglong; Wang, Jie; Ma, Jie; Shu, Dingming; Lund, Mogens Sandø; Su, Guosheng; Qu, Hao

    2017-01-01

    Feed represents the major cost of chicken production. Selection for improving feed utilization is a feasible way to reduce feed cost and greenhouse gas emissions. The objectives of this study were to investigate the efficiency of genomic prediction for feed conversion ratio (FCR), residual feed intake (RFI), average daily gain (ADG) and average daily feed intake (ADFI) and to assess the impact of selection for feed efficiency traits FCR and RFI on eviscerating percentage (EP), breast muscle percentage (BMP) and leg muscle percentage (LMP) in meat-type chickens. Genomic prediction was assessed using a 4-fold cross-validation for two validation scenarios. The first scenario was a random family sampling validation (CVF), and the second scenario was a random individual sampling validation (CVR). Variance components were estimated based on the genomic relationship built with single nucleotide polymorphism markers. Genomic estimated breeding values (GEBV) were predicted using a genomic best linear unbiased prediction model. The accuracies of GEBV were evaluated in two ways: the correlation between GEBV and corrected phenotypic value divided by the square root of heritability, i.e., the correlation-based accuracy, and model-based theoretical accuracy. Breeding values were also predicted using a conventional pedigree-based best linear unbiased prediction model in order to compare accuracies of genomic and conventional predictions. The heritability estimates of FCR and RFI were 0.29 and 0.50, respectively. The heritability estimates of ADG, ADFI, EP, BMP and LMP ranged from 0.34 to 0.53. In the CVF scenario, the correlation-based accuracy and the theoretical accuracy of genomic prediction for FCR were slightly higher than those for RFI. The correlation-based accuracies for FCR, RFI, ADG and ADFI were 0.360, 0.284, 0.574 and 0.520, respectively, and the model-based theoretical accuracies were 0.420, 0.414, 0.401 and 0.382, respectively. 
In the CVR scenario, the correlation
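
    The correlation-based accuracy described above is straightforward to compute. The sketch below uses simulated data (not the study's), with the FCR heritability of 0.29 taken from the abstract; all other numbers are invented.

```python
import numpy as np

# Correlation-based accuracy of genomic prediction, as defined in the
# abstract: corr(GEBV, corrected phenotype) / sqrt(heritability).
rng = np.random.default_rng(0)
n, h2 = 1000, 0.29                              # h2 as reported for FCR
tbv = rng.normal(0, np.sqrt(h2), n)             # true breeding values
y = tbv + rng.normal(0, np.sqrt(1 - h2), n)     # corrected phenotypes
gebv = tbv + rng.normal(0, 0.5, n)              # noisy genomic predictions

r = np.corrcoef(gebv, y)[0, 1]
accuracy = r / np.sqrt(h2)
print(round(accuracy, 3))
```

    Dividing by the square root of heritability rescales the phenotype correlation to an estimate of the correlation with the (unobservable) true breeding values.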

  1. Efficient decoding algorithms for generalized hidden Markov model gene finders

    Directory of Open Access Journals (Sweden)

    Delcher Arthur L

    2005-01-01

    Abstract Background The Generalized Hidden Markov Model (GHMM) has proven a useful framework for the task of computational gene prediction in eukaryotic genomes, due to its flexibility and probabilistic underpinnings. As the focus of the gene finding community shifts toward the use of homology information to improve prediction accuracy, extensions to the basic GHMM model are being explored as possible ways to integrate this homology information into the prediction process. Particularly prominent among these extensions are those techniques which call for the simultaneous prediction of genes in two or more genomes at once, thereby increasing significantly the computational cost of prediction and highlighting the importance of speed and memory efficiency in the implementation of the underlying GHMM algorithms. Unfortunately, the task of implementing an efficient GHMM-based gene finder is already a nontrivial one, and it can be expected that this task will only grow more onerous as our models increase in complexity. Results As a first step toward addressing the implementation challenges of these next-generation systems, we describe in detail two software architectures for GHMM-based gene finders, one comprising the common array-based approach, and the other a highly optimized algorithm which requires significantly less memory while achieving virtually identical speed. We then show how both of these architectures can be accelerated by a factor of two by optimizing their content sensors. We finish with a brief illustration of the impact these optimizations have had on the feasibility of our new homology-based gene finder, TWAIN. 
Conclusions In describing a number of optimizations for GHMM-based gene finders and making available two complete open-source software systems embodying these methods, it is our hope that others will be more enabled to explore promising extensions to the GHMM framework, thereby improving the state-of-the-art in gene prediction
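
    Decoding in a GHMM generalizes the classical Viterbi algorithm by allowing explicit state-duration models. As a minimal illustration of the underlying dynamic programming (not the paper's optimized architectures), here is a plain-HMM Viterbi decoder with invented two-state parameters:

```python
import numpy as np

# Viterbi decoding for a plain HMM: find the most likely state path for
# a sequence of observation indices, working in log space for stability.
def viterbi(obs, pi, A, B):
    n_states = len(pi)
    T = len(obs)
    logd = np.log(pi) + np.log(B[:, obs[0]])        # initial scores
    back = np.zeros((T, n_states), dtype=int)       # backpointers
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)          # prev-state x state
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):                   # trace back
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy 2-state example (loosely "exon"/"intron"-like states).
pi = np.array([0.6, 0.4])
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.7, 0.3], [0.1, 0.9]])
print(viterbi([0, 0, 1, 1, 1], pi, A, B))  # → [0, 0, 1, 1, 1]
```

    A GHMM replaces the implicit geometric state durations of this recursion with arbitrary duration distributions, which is exactly what makes memory- and speed-efficient implementations nontrivial.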

  2. Efficient Multigrid Preconditioners for Anisotropic Problems in Geophysical Modelling

    CERN Document Server

    Dedner, Andreas; Scheichl, Robert

    2014-01-01

    Many problems in geophysical modelling require the efficient solution of highly anisotropic elliptic partial differential equations (PDEs) in "flat" domains. For example, in numerical weather- and climate-prediction an elliptic PDE for the pressure correction has to be solved at every time step in a thin spherical shell representing the global atmosphere. This elliptic solve can be one of the computationally most demanding components in semi-implicit semi-Lagrangian time stepping methods which are very popular as they allow for larger model time steps and better overall performance. With increasing model resolution, algorithmically efficient and scalable algorithms are essential to run the code under tight operational time constraints. We discuss the theory and practical application of bespoke geometric multigrid preconditioners for equations of this type. The algorithms deal with the strong anisotropy in the vertical direction by using the tensor-product approach originally analysed by Börm and Hiptmair ...

  3. Impact of modellers' decisions on hydrological a priori predictions

    Science.gov (United States)

    Holländer, H. M.; Bormann, H.; Blume, T.; Buytaert, W.; Chirico, G. B.; Exbrayat, J.-F.; Gustafsson, D.; Hölzel, H.; Krauße, T.; Kraft, P.; Stoll, S.; Blöschl, G.; Flühler, H.

    2014-06-01

    added information. In this qualitative analysis of a statistically small number of predictions we learned (i) that soft information such as the modeller's system understanding is as important as the model itself (hard information), (ii) that the sequence of modelling steps matters (field visit, interactions between differently experienced experts, choice of model, selection of available data, and methods for parameter guessing), and (iii) that added process understanding can be as efficient as adding data for improving parameters needed to satisfy model requirements.

  4. Efficiency of Evolutionary Algorithms for Calibration of Watershed Models

    Science.gov (United States)

    Ahmadi, M.; Arabi, M.

    2009-12-01

    Since the promulgation of the Clean Water Act in the U.S. and similar legislation around the world over the past three decades, watershed management programs have focused on the nexus of pollution prevention and mitigation. In this context, hydrologic/water quality models have been increasingly embedded in the decision making process. Simulation models are now commonly used to investigate the hydrologic response of watershed systems under varying climatic and land use conditions, and also to study the fate and transport of contaminants at various spatiotemporal scales. Adequate calibration and corroboration of models for various outputs at varying scales is an essential component of watershed modeling. The parameter estimation process can be challenging when multiple objectives are important. For example, improving streamflow predictions of the model at a stream location may degrade model predictions for sediments and/or nutrients at the same location or other outlets. This paper aims to evaluate the applicability and efficiency of single- and multi-objective evolutionary algorithms for parameter estimation of complex watershed models. To this end, the Shuffled Complex Evolution (SCE-UA) algorithm, a single-objective genetic algorithm (GA), and a multi-objective genetic algorithm (NSGA-II) were coupled with the Soil and Water Assessment Tool (SWAT) to calibrate the model at various locations within the Wildcat Creek Watershed, Indiana. The efficiency of these methods was investigated using different error statistics including root mean square error, coefficient of determination and Nash-Sutcliffe efficiency coefficient for the output variables as well as the baseflow component of the stream discharge. A sensitivity analysis was carried out to screen model parameters that bear significant uncertainties. 
Results indicated that while flow processes can be reasonably ascertained, parameterization of nutrient and pesticide processes
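
    As a toy stand-in for the evolutionary calibration loop (SCE-UA and NSGA-II are far more sophisticated), the sketch below fits a one-parameter model to synthetic streamflow by random search, scoring candidates with the Nash-Sutcliffe efficiency mentioned above; all data and parameters are invented.

```python
import numpy as np

# Calibrate the recession constant k of a linear reservoir Q[t+1] = k*Q[t]
# by random search, scoring each candidate with Nash-Sutcliffe efficiency.
def simulate(k, q0, n):
    q = [q0]
    for _ in range(n - 1):
        q.append(k * q[-1])
    return np.array(q)

def nse(obs, sim):
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(1)
obs = simulate(0.8, 10.0, 30) + rng.normal(0, 0.1, 30)  # synthetic "observed" flow

best_k, best_nse = None, -np.inf
for k in rng.uniform(0.5, 0.99, 200):                   # random candidates
    score = nse(obs, simulate(k, 10.0, 30))
    if score > best_nse:
        best_k, best_nse = k, score
print(round(best_k, 2), round(best_nse, 2))
```

    Evolutionary algorithms replace the blind sampling above with population-based, guided search, which matters once the parameter space has more than a handful of dimensions.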

  5. Model predictive control classical, robust and stochastic

    CERN Document Server

    Kouvaritakis, Basil

    2016-01-01

    For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered and the state of the art in computationally tractable methods based on uncertainty tubes presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...

  6. Optimisation Modelling of Efficiency of Enterprise Restructuring

    Directory of Open Access Journals (Sweden)

    Yefimova Hanna V.

    2014-03-01

    The article considers the optimisation of resources allocated to the restructuring of a shipbuilding enterprise, which is the main prerequisite of restructuring efficiency. Restructuring is treated as a process of complex, interconnected change in the structure of assets, liabilities and enterprise functions, initiated by a dynamic environment, based on the strategic concept of the enterprise's development and aimed at increasing the efficiency of its activity as expressed in the growth of its value. The decision to restructure a shipbuilding enterprise and the selection of a specific restructuring project is an optimisation task of long-term planning. The enterprise resources allocated for restructuring serve as the constraints of the mathematical model, and the main optimisation criteria are maximisation of net discounted income or minimisation of expenditure on restructuring measures. The resulting optimisation model is designed to assess the volumes of own and borrowed funds to be raised for restructuring, while a simulation model generates the cash flows. The task is solved with a complex of interrelated optimisation and simulation models and procedures for forming, selecting and coordinating managerial decisions.

  7. Two-phased DEA-MLA approach for predicting efficiency of NBA players

    Directory of Open Access Journals (Sweden)

    Radovanović Sandro

    2014-01-01

    In sports, the calculation of efficiency is considered one of the most challenging tasks. In this paper, DEA is used to evaluate the efficiency of NBA players based on multiple inputs and multiple outputs. The efficiency is evaluated for 26 NBA players at the guard position based on existing data. However, to generate the efficiency of a new player, the DEA analysis would have to be re-conducted. Therefore, to predict the efficiency of a new player, machine learning algorithms are applied. The DEA results are incorporated as input for the learning algorithms, thereby defining an efficiency frontier function form with high reliability. In this paper, linear regression, neural networks, and support vector machines are used to predict the efficiency frontier. The results show that neural networks can predict the efficiency with an error of less than 1%, and linear regression with an error of less than 2%.
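
    DEA efficiency is normally computed by solving one linear program per player; in the single-input, single-output special case it collapses to a ratio against the best performer, which makes the frontier idea easy to see. The players, minutes and points below are invented, not the study's data.

```python
import numpy as np

# Degenerate DEA (CCR) illustration: with one input and one output, the
# efficiency of each unit is its output/input ratio divided by the best
# ratio, so no linear-program solver is needed for this special case.
minutes = np.array([30.0, 34.0, 28.0, 36.0])   # input per player (invented)
points = np.array([15.0, 20.4, 11.2, 18.0])    # output per player (invented)

ratio = points / minutes
efficiency = ratio / ratio.max()               # 1.0 marks the efficient frontier
print(np.round(efficiency, 2))
```

    With multiple inputs and outputs, as in the paper, each player's efficiency instead comes from an LP that chooses the most favourable input/output weights, and the machine-learning step then approximates that frontier function.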

  8. Modeling and prediction of surgical procedure times

    NARCIS (Netherlands)

    P.S. Stepaniak (Pieter); C. Heij (Christiaan); G. de Vries (Guus)

    2009-01-01

    textabstractAccurate prediction of medical operation times is of crucial importance for cost efficient operation room planning in hospitals. This paper investigates the possible dependence of procedure times on surgeon factors like age, experience, gender, and team composition. The effect of these f

  9. A Hybrid Neural Network Prediction Model of Air Ticket Sales

    Directory of Open Access Journals (Sweden)

    Han-Chen Huang

    2013-11-01

    Air ticket sales revenue is an important source of revenue for travel agencies, and if future air ticket sales revenue can be accurately forecast, travel agencies can procure a sufficient number of cost-effective tickets in advance. Therefore, this study applied Artificial Neural Networks (ANN) and Genetic Algorithms (GA) to establish a prediction model of travel agency air ticket sales revenue. Verification against empirical data showed that the established model has accurate predictive power, with a MAPE (mean absolute percentage error) of only 9.11%. The established model can provide business operators with reliable and efficient predictions as a reference for operational decisions.
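
    The MAPE criterion quoted above is defined as the mean of the absolute percentage errors between actual and forecast values; the sketch below shows the computation on invented numbers, not the study's sales data.

```python
# Mean absolute percentage error: average of |actual - forecast| / actual,
# expressed as a percentage. Values below are invented for illustration.
def mape(actual, forecast):
    return 100 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

print(round(mape([100, 120, 130], [95, 125, 128]), 2))  # → 3.57
```

    Note that MAPE is undefined when any actual value is zero and penalizes over- and under-forecasts asymmetrically, which is worth keeping in mind when comparing models by this figure alone.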

  10. Models for short term malaria prediction in Sri Lanka

    Directory of Open Access Journals (Sweden)

    Galappaththy Gawrie NL

    2008-05-01

    Abstract Background Malaria in Sri Lanka is unstable and fluctuates in intensity both spatially and temporally. Although the case counts are dwindling at present, given the past history of resurgence of outbreaks despite effective control measures, the control programmes have to stay prepared. The availability of long time series of monitored/diagnosed malaria cases allows for the study of forecasting models, with an aim to developing a forecasting system which could assist in the efficient allocation of resources for malaria control. Methods Exponentially weighted moving average models, autoregressive integrated moving average (ARIMA) models with seasonal components, and seasonal multiplicative autoregressive integrated moving average (SARIMA) models were compared on monthly time series of district malaria cases for their ability to predict the number of malaria cases one to four months ahead. The addition of covariates such as the number of malaria cases in neighbouring districts or rainfall was assessed for the ability to improve prediction of selected (seasonal) ARIMA models. Results The best model for forecasting and the forecasting error varied strongly among the districts. The addition of rainfall as a covariate improved prediction of selected (seasonal) ARIMA models modestly in some districts but worsened prediction in other districts. Improvement by adding rainfall was more frequent at larger forecasting horizons. Conclusion Heterogeneity of patterns of malaria in Sri Lanka requires regionally specific prediction models. Prediction error was large at a minimum of 22% (for one of the districts) for one month ahead predictions. The modest improvement made in short term prediction by adding rainfall as a covariate to these prediction models may not be sufficient to merit investing in a forecasting system for which rainfall data are routinely processed.
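
    The simplest family compared above, the exponentially weighted moving average, produces a one-step-ahead forecast by recursively blending the latest observation with the previous smoothed value. The sketch below uses invented monthly case counts and an assumed smoothing factor.

```python
# Exponentially weighted moving average one-step-ahead forecast:
# s[t] = alpha * x[t] + (1 - alpha) * s[t-1]; the final s is the
# forecast for the next period. Case counts and alpha are invented.
def ewma_forecast(x, alpha=0.3):
    s = x[0]
    for v in x[1:]:
        s = alpha * v + (1 - alpha) * s
    return s

cases = [120, 90, 140, 160, 130, 110]   # invented monthly case counts
print(round(ewma_forecast(cases), 1))   # → 124.9
```

    Unlike the (S)ARIMA models in the comparison, this recursion has no explicit seasonal or autoregressive structure, which is why it serves as the baseline family.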

  11. Model for performance prediction in multi-axis machining

    CERN Document Server

    Lavernhe, Sylvain; Lartigue, Claire; 10.1007/s00170-007-1001-4

    2009-01-01

    This paper deals with a predictive model of kinematical performance in 5-axis milling within the context of High Speed Machining. Indeed, 5-axis high speed milling makes it possible to improve quality and productivity thanks to the degrees of freedom brought by the tool axis orientation. The tool axis orientation can be set efficiently in terms of productivity by considering kinematical constraints resulting from the set machine-tool/NC unit. Capacities of each axis as well as some NC unit functions can be expressed as limiting constraints. The proposed model relies on each axis displacement in the joint space of the machine-tool and predicts the most limiting axis for each trajectory segment. Thus, the calculation of the tool feedrate can be performed highlighting zones for which the programmed feedrate is not reached. This constitutes an indicator for trajectory optimization. The efficiency of the model is illustrated through examples. Finally, the model could be used for optimizing process planning.
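
    The "most limiting axis" idea can be sketched simply: over a trajectory segment, each joint moves some amount per unit of tool-path length, so each axis velocity limit caps the achievable feedrate, and the smallest cap identifies the limiting axis. This is a simplified reading of the kinematical constraint; all axis limits and joint-motion ratios below are invented.

```python
import numpy as np

# For a segment where joint i moves |dq_i| per unit of tool-path length,
# the feedrate is capped by vmax_i / |dq_i/ds|; the minimum over axes is
# the achievable feedrate and identifies the most limiting axis.
vmax = np.array([30.0, 30.0, 20.0, 0.5, 0.5])    # per-axis limits (invented units)
dq_ds = np.array([0.9, 0.2, 0.1, 0.01, 0.004])   # joint motion per unit tool path

caps = vmax / np.abs(dq_ds)
limiting_axis = int(np.argmin(caps))
print(limiting_axis, round(caps.min(), 1))  # → 0 33.3
```

    Comparing this cap with the programmed feedrate, segment by segment, is what highlights the zones where the programmed feedrate cannot be reached.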

  12. Efficient prediction of co-complexed proteins based on coevolution.

    Directory of Open Access Journals (Sweden)

    Damien M de Vienne

    The prediction of the network of protein-protein interactions (PPI) of an organism is crucial for the understanding of biological processes and for the development of new drugs. Machine learning methods have been successfully applied to the prediction of PPI in yeast by the integration of multiple direct and indirect biological data sources. However, experimental data are not available for most organisms. We propose here an ensemble machine learning approach for the prediction of PPI that depends solely on features independent of experimental data. We developed new estimators of the coevolution between proteins and combined them in an ensemble learning procedure. We applied this method to a dataset of known co-complexed proteins in Escherichia coli and compared it to previously published methods. We show that our method allows prediction of PPI with an unprecedented precision of 95.5% for the first 200 sorted pairs of proteins, compared to 28.5% on the same dataset with the previous best method. A close inspection of the best predicted pairs allowed us to detect new or recently discovered interactions between chemotactic components, the flagellar apparatus and RNA polymerase complexes in E. coli.
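
    The paper develops new coevolution estimators; a classical estimator of the same family is the mirrortree score, which correlates the pairwise evolutionary distance matrices of two protein families over the same set of species. The toy distance matrices below are invented for illustration.

```python
import numpy as np

# Mirrortree-style coevolution score: Pearson correlation between the
# upper triangles of two species-by-species distance matrices, one per
# protein family. High correlation suggests correlated evolution.
def mirrortree_score(d1, d2):
    iu = np.triu_indices_from(d1, k=1)   # upper triangle, excluding diagonal
    return np.corrcoef(d1[iu], d2[iu])[0, 1]

d_a = np.array([[0, 1, 4], [1, 0, 3], [4, 3, 0]], float)
d_b = np.array([[0, 2, 8], [2, 0, 6], [8, 6, 0]], float)   # scaled copy of d_a
print(round(mirrortree_score(d_a, d_b), 2))  # → 1.0
```

    Much of the shared signal in such scores comes from the underlying species phylogeny, which is why combining several corrected estimators in an ensemble, as the paper does, outperforms any single score.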

  13. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy-based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA) as well as more elaborate principles such as wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy-based prediction models are discussed and critically reviewed. Special attention is placed

  14. Efficient Model-Based Diagnosis Engine

    Science.gov (United States)

    Fijany, Amir; Vatan, Farrokh; Barrett, Anthony; James, Mark; Mackey, Ryan; Williams, Colin

    2009-01-01

    An efficient diagnosis engine - a combination of mathematical models and algorithms - has been developed for identifying faulty components in a possibly complex engineering system. This model-based diagnosis engine embodies a twofold approach to reducing, relative to prior model-based diagnosis engines, the amount of computation needed to perform a thorough, accurate diagnosis. The first part of the approach involves a reconstruction of the general diagnostic engine to reduce the complexity of the mathematical-model calculations and of the software needed to perform them. The second part of the approach involves algorithms for computing a minimal diagnosis (the term "minimal diagnosis" is defined below). A somewhat lengthy background discussion is prerequisite to a meaningful summary of the innovative aspects of the present efficient model-based diagnosis engine. In model-based diagnosis, the function of each component and the relationships among all the components of the engineering system to be diagnosed are represented as a logical system denoted the system description (SD). Hence, the expected normal behavior of the engineering system is the set of logical consequences of the SD. Faulty components lead to inconsistencies between the observed behaviors of the system and the SD (see figure). Diagnosis - the task of finding faulty components - is reduced to finding those components, the abnormalities of which could explain all the inconsistencies. The solution of the diagnosis problem should be a minimal diagnosis, which is a minimal set of faulty components. A minimal diagnosis stands in contradistinction to the trivial solution, in which all components are deemed to be faulty, and which, therefore, always explains all inconsistencies.
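
    The notion of a minimal diagnosis can be sketched concretely: observations inconsistent with the system description yield conflict sets (sets of components that cannot all be healthy), and a diagnosis is a hitting set of all conflicts. The brute-force sketch below (simplified to minimum cardinality, with invented component names) shows the idea; the engine's algorithms are designed precisely to avoid this exponential search.

```python
from itertools import combinations

# A diagnosis is a set of components that intersects ("hits") every
# conflict set. Enumerate candidate sets by increasing size and return
# all diagnoses of the smallest size that works.
def minimal_diagnoses(components, conflicts):
    for size in range(len(components) + 1):
        found = [set(c) for c in combinations(components, size)
                 if all(set(c) & conf for conf in conflicts)]
        if found:
            return found   # all minimum-cardinality diagnoses
    return []

conflicts = [{"valve", "pump"}, {"pump", "sensor"}, {"valve", "sensor"}]
diags = minimal_diagnoses(["valve", "pump", "sensor"], conflicts)
print(sorted(sorted(d) for d in diags))
```

    Here no single component explains all three conflicts, so every minimal diagnosis names two faulty components; the trivial diagnosis (all three faulty) also hits every conflict but is not minimal.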

  15. Massive Predictive Modeling using Oracle R Enterprise

    CERN Document Server

    CERN. Geneva

    2014-01-01

    R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...

  16. Application of Improved Grey Prediction Model to Petroleum Cost Forecasting

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    The grey theory is a multidisciplinary and generic theory that deals with systems that lack adequate information and/or have only poor information. In this paper, an improved grey model using a step function was proposed. Petroleum cost forecasting for the Henan oil field was used as the case study to test the efficiency and accuracy of the proposed method. According to the experimental results, the proposed method clearly improved the prediction accuracy of the original grey model.
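
    The original grey model the paper improves on is GM(1,1): accumulate the series, fit the whitened equation dx/dt + a·x = b by least squares against the background values, then forecast on the accumulated scale and difference back. The sketch below uses invented cost figures, not the Henan oil field data.

```python
import numpy as np

# Classical GM(1,1) one-step forecast: accumulate (AGO), fit (a, b) by
# least squares on the background sequence, forecast the accumulated
# series with the exponential response, then de-accumulate.
def gm11_next(x):
    x = np.asarray(x, float)
    x1 = np.cumsum(x)                                  # accumulated series
    z = 0.5 * (x1[1:] + x1[:-1])                       # background values
    Bm = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(Bm, x[1:], rcond=None)[0]   # fit a, b
    n = len(x)
    x1_next = (x[0] - b / a) * np.exp(-a * n) + b / a
    x1_last = (x[0] - b / a) * np.exp(-a * (n - 1)) + b / a
    return x1_next - x1_last                           # next raw value

costs = [26.7, 28.1, 30.0, 31.6, 33.5]                 # invented yearly costs
print(round(gm11_next(costs), 1))
```

    The paper's improvement replaces part of this fixed exponential form with a step function; the baseline above is only the standard model it is measured against.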

  17. Statistical procedures for evaluating daily and monthly hydrologic model predictions

    Science.gov (United States)

    Coffey, M.E.; Workman, S.R.; Taraba, J.L.; Fogle, A.W.

    2004-01-01

    The overall study objective was to evaluate the applicability of different qualitative and quantitative methods for comparing daily and monthly SWAT computer model hydrologic streamflow predictions to observed data, and to recommend statistical methods for use in future model evaluations. Statistical methods were tested using daily streamflows and monthly equivalent runoff depths. The statistical techniques included linear regression, Nash-Sutcliffe efficiency, nonparametric tests, t-test, objective functions, autocorrelation, and cross-correlation. None of the methods specifically applied to the non-normal distribution and dependence between data points for the daily predicted and observed data. Of the tested methods, median objective functions, sign test, autocorrelation, and cross-correlation were most applicable for the daily data. The robust coefficient of determination (CD*) and robust modeling efficiency (EF*) objective functions were the preferred methods for daily model results due to the ease of comparing these values with a fixed ideal reference value of one. Predicted and observed monthly totals were more normally distributed, and there was less dependence between individual monthly totals than was observed for the corresponding predicted and observed daily values. More statistical methods were available for comparing SWAT model-predicted and observed monthly totals. The 1995 monthly SWAT model predictions and observed data had a regression R² of 0.70, a Nash-Sutcliffe efficiency of 0.41, and the t-test failed to reject the equal data means hypothesis. The Nash-Sutcliffe coefficient and the R² coefficient were the preferred methods for monthly results due to the ability to compare these coefficients to a set ideal value of one.

  18. Liver Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing liver cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  19. Colorectal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  20. Cervical Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing cervical cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  1. Prostate Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing prostate cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  2. Pancreatic Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing pancreatic cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  4. Bladder Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing bladder cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  5. Esophageal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing esophageal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  6. Lung Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing lung cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  7. Breast Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing breast cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  8. Ovarian Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing ovarian cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  9. Testicular Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing testicular cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.


  10. Land-ice modeling for sea-level prediction

    Energy Technology Data Exchange (ETDEWEB)

    Lipscomb, William H [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2010-06-11

    There has been major progress in ice sheet modeling since IPCC AR4. We will soon have efficient higher-order ice sheet models that can run at ~1 km resolution for entire ice sheets, either standalone or coupled to GCMs. These models should significantly reduce uncertainties in sea-level predictions. However, the least certain and potentially greatest contributions to 21st century sea-level rise may come from ice-ocean interactions, especially in West Antarctica. This is a coupled modeling problem that requires collaboration among ice, ocean and atmosphere modelers.

  11. An Occupant Behavior Model for Building Energy Efficiency and Safety

    Science.gov (United States)

    Pan, L. L.; Chen, T.; Jia, Q. S.; Yuan, R. X.; Wang, H. T.; Ding, R.

    2010-05-01

    An occupant behavior model is suggested to improve building energy efficiency and safety. This paper provides a generic outline of the model, covering occupancy behavior abstraction, the model framework and primary structure, input and output, computer simulation results, and a summary and outlook. With information technology it is now possible to collect large amounts of occupancy information; yet such data provide only a partial, historical picture, so a model is needed that gives a full view of the studied building and supports prediction. An infrared monitoring system installed at the front door of the Low Energy Demo Building (LEDB) at Tsinghua University in China provides the time variation of the total number of occupants in the building, which serves as input data for the model, while an RFID system on the 1st floor provides the time variation of the occupants' localization in each region; these collected data are used to validate the model. The simulation results show that the presented model provides a feasible framework to simulate occupants' behavior and predict the time variation of the number of occupants in the building. Further development and application of the model are also discussed.

  12. Posterior Predictive Model Checking in Bayesian Networks

    Science.gov (United States)

    Crawford, Aaron

    2014-01-01

    This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex…

  13. An accurate and efficient numerical framework for adaptive numerical weather prediction

    CERN Document Server

    Tumolo, G

    2014-01-01

    We present an accurate and efficient discretization approach for the adaptive discretization of typical model equations employed in numerical weather prediction. A semi-Lagrangian approach is combined with the TR-BDF2 semi-implicit time discretization method and with a spatial discretization based on adaptive discontinuous finite elements. The resulting method has full second order accuracy in time and can employ polynomial bases of arbitrarily high degree in space, is unconditionally stable and can effectively adapt the number of degrees of freedom employed in each element, in order to balance accuracy and computational cost. The p-adaptivity approach employed does not require remeshing, therefore it is especially suitable for applications, such as numerical weather prediction, in which a large number of physical quantities are associated with a given mesh. Furthermore, although the proposed method can be implemented on arbitrary unstructured and nonconforming meshes, even its application on simple Cartesian...
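The TR-BDF2 time discretization mentioned above is a one-step composite method: a trapezoidal stage to the intermediate time t + γΔt (with the standard γ = 2 − √2), followed by a BDF2 stage to t + Δt. A minimal sketch on the scalar linear test problem y' = λy, where both implicit stages can be solved in closed form; this setup is illustrative, not the paper's discretization of the full equations:

```python
import numpy as np

def tr_bdf2_linear(lam, y0, t_end, n_steps):
    """TR-BDF2 sketch for y' = lam * y: trapezoidal stage to t + g*h,
    then a BDF2 stage to t + h; both stages solved in closed form."""
    g = 2.0 - np.sqrt(2.0)              # standard choice of gamma
    h = t_end / n_steps
    y = y0
    for _ in range(n_steps):
        # Trapezoidal (TR) stage over g*h.
        yg = y * (1.0 + g * h * lam / 2.0) / (1.0 - g * h * lam / 2.0)
        # BDF2 stage using y_n and y_{n+g}.
        num = yg / (g * (2.0 - g)) - (1.0 - g) ** 2 / (g * (2.0 - g)) * y
        y = num / (1.0 - (1.0 - g) / (2.0 - g) * h * lam)
    return y

# Order check on y' = -y, y(0) = 1: halving h should quarter the error.
err = [abs(tr_bdf2_linear(-1.0, 1.0, 1.0, n) - np.exp(-1.0)) for n in (20, 40)]
order = np.log2(err[0] / err[1])
```

Halving the step size cuts the error by roughly a factor of four, confirming the full second-order accuracy in time claimed in the abstract.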

  14. An efficient artificial bee colony algorithm with application to nonlinear predictive control

    Science.gov (United States)

    Ait Sahed, Oussama; Kara, Kamel; Benyoucef, Abousoufyane; Laid Hadjili, Mohamed

    2016-05-01

    In this paper a constrained nonlinear predictive control algorithm, that uses the artificial bee colony (ABC) algorithm to solve the optimization problem, is proposed. The main objective is to derive a simple and efficient control algorithm that can solve the nonlinear constrained optimization problem with minimal computational time. Indeed, a modified version, enhancing the exploring and the exploitation capabilities, of the ABC algorithm is proposed and used to design a nonlinear constrained predictive controller. This version allows addressing the premature and the slow convergence drawbacks of the standard ABC algorithm, using a modified search equation, a well-known organized distribution mechanism for the initial population and a new equation for the limit parameter. A convergence statistical analysis of the proposed algorithm, using some well-known benchmark functions is presented and compared with several other variants of the ABC algorithm. To demonstrate the efficiency of the proposed algorithm in solving engineering problems, the constrained nonlinear predictive control of the model of a Multi-Input Multi-Output industrial boiler is considered. The control performances of the proposed ABC algorithm-based controller are also compared to those obtained using some variants of the ABC algorithms.
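The standard artificial bee colony algorithm that the paper modifies can be sketched in a few dozen lines: employed bees refine existing food sources (candidate solutions), onlookers reinforce good sources with probability proportional to fitness, and scouts replace sources that exceed the limit parameter without improving. This is the baseline ABC, not the paper's enhanced variant; the benchmark function and parameter values are illustrative:

```python
import numpy as np

def abc_minimize(f, bounds, n_food=20, limit=30, cycles=300, seed=0):
    """Minimal standard artificial bee colony sketch: employed, onlooker
    and scout phases over a population of food sources."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    X = rng.uniform(lo, hi, (n_food, dim))      # food sources
    fit = np.array([f(x) for x in X])
    trials = np.zeros(n_food, dtype=int)

    def try_neighbor(i):
        k = rng.integers(n_food - 1)
        k += k >= i                             # random partner != i
        j = rng.integers(dim)                   # perturb one dimension
        v = X[i].copy()
        v[j] += rng.uniform(-1, 1) * (X[i, j] - X[k, j])
        v = np.clip(v, lo, hi)
        fv = f(v)
        if fv < fit[i]:                         # greedy selection
            X[i], fit[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for _ in range(cycles):
        for i in range(n_food):                 # employed bees
            try_neighbor(i)
        p = 1.0 / (1.0 + np.abs(fit))           # fitness-proportional weights
        p = p / p.sum()
        for _ in range(n_food):                 # onlooker bees
            try_neighbor(rng.choice(n_food, p=p))
        worn = trials > limit                   # scouts replace stale sources
        X[worn] = rng.uniform(lo, hi, (int(worn.sum()), dim))
        fit[worn] = [f(x) for x in X[worn]]
        trials[worn] = 0
    best = int(fit.argmin())
    return X[best], fit[best]

x_best, f_best = abc_minimize(lambda x: float(np.sum(x**2)),
                              (np.full(2, -5.0), np.full(2, 5.0)))
```

The limit parameter and the neighbor-search equation are exactly the levers the paper modifies to address premature and slow convergence.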

  15. A Course in... Model Predictive Control.

    Science.gov (United States)

    Arkun, Yaman; And Others

    1988-01-01

    Describes a graduate engineering course which specializes in model predictive control. Lists course outline and scope. Discusses some specific topics and teaching methods. Suggests final projects for the students. (MVL)

  16. Equivalency and unbiasedness of grey prediction models

    Institute of Scientific and Technical Information of China (English)

    Bo Zeng; Chuan Li; Guo Chen; Xianjun Long

    2015-01-01

    In order to deeply research the structure discrepancy and modeling mechanism among different grey prediction models, the equivalence and unbiasedness of grey prediction models are analyzed and verified. The results show that all the grey prediction models that are strictly derived from x(0)(k) + az(1)(k) = b have the identical model structure and simulation precision. Moreover, the unbiased simulation for the homogeneous exponential sequence can be accomplished. However, the models derived from dx(1)/dt + ax(1) = b are only close to those derived from x(0)(k) + az(1)(k) = b provided that |a| < 0.1; neither could the unbiased simulation for the homogeneous exponential sequence be achieved. The above conclusions are proved and verified through some theorems and examples.
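The grey model form x(0)(k) + az(1)(k) = b quoted above is the familiar GM(1,1): accumulate the series, fit (a, b) by least squares against the background values z(1), and simulate through the whitening-equation response. A minimal sketch; the function name and test series are illustrative:

```python
import numpy as np

def gm11(x0, n_ahead=0):
    """GM(1,1) sketch: fit x0(k) + a*z1(k) = b by least squares, then
    simulate via the whitening-equation response."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                       # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])            # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + n_ahead)
    xhat1 = (x0[0] - b / a) * np.exp(-a * k) + b / a
    xhat0 = np.empty_like(xhat1)
    xhat0[0] = x0[0]
    xhat0[1:] = np.diff(xhat1)               # de-accumulate
    return xhat0, a, b

# Homogeneous exponential test series (illustrative), 5% growth rate.
x0 = 2.0 * 1.05 ** np.arange(6)
xhat0, a, b = gm11(x0)
```

For a near-homogeneous exponential input with |a| < 0.1, the simulated values track the data closely, consistent with the condition stated in the abstract.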

  17. Predictability of extreme values in geophysical models

    Directory of Open Access Journals (Sweden)

    A. E. Sterk

    2012-09-01

    Full Text Available Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical models. We study whether finite-time Lyapunov exponents are larger or smaller for initial conditions leading to extremes. General statements on whether extreme values are better or less predictable are not possible: the predictability of extreme values depends on the observable, the attractor of the system, and the prediction lead time.
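Finite-time Lyapunov exponents of the kind used in this study measure how fast nearby initial conditions separate over a fixed horizon. A minimal sketch on the Lorenz-63 system (chosen only as a standard chaotic example; the paper's models and observables differ), using perturbation renormalization in the style of Benettin's method:

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def ftle(x0, T=10.0, dt=0.001, eps=1e-8, seed=1):
    """Finite-time Lyapunov exponent sketch: track a tiny perturbation
    along the trajectory, renormalising each step (Benettin-style)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    d = rng.normal(size=3)
    d *= eps / np.linalg.norm(d)
    log_growth = 0.0
    for _ in range(int(T / dt)):
        xp = x + d
        x = x + dt * lorenz(x)          # forward Euler, base trajectory
        xp = xp + dt * lorenz(xp)       # shadow trajectory
        d = xp - x
        growth = np.linalg.norm(d) / eps
        log_growth += np.log(growth)
        d *= 1.0 / growth               # renormalise back to length eps
    return log_growth / T

lam = ftle(np.array([1.0, 1.0, 20.0]))  # positive for a chaotic trajectory
```

Comparing this exponent between initial conditions that do and do not lead to extremes is, in essence, the comparison the paper carries out.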

  18. Predicting high harmonic ion cyclotron heating efficiency in Tokamak plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Green, David L [ORNL; Jaeger, E. F. [XCEL; Berry, Lee A [ORNL; Chen, Guangye [ORNL; Ryan, Philip Michael [ORNL; Canik, John [ORNL

    2011-01-01

    Observations of improved radio frequency (RF) heating efficiency in high-confinement (H-) mode plasmas on the National Spherical Torus Experiment (NSTX) are investigated by whole-device linear simulation. We present the first full-wave simulation to couple the kinetic physics of the well confined core plasma to the poorly confined scrape-off plasma. The new simulation is used to scan the launched fast-wave spectrum and examine the steady-state electric wave field structure for experimental scenarios corresponding to both reduced and improved RF heating efficiency. We find that launching toroidal wave-numbers that allow fast-wave propagation excites large amplitude (kV m-1) coaxial standing modes in the wave electric field between the confined plasma density pedestal and the conducting vessel wall. Qualitative comparison with measurements of the stored plasma energy suggests these modes are a probable cause of degraded heating efficiency. Also, the H-mode density pedestal and fast-wave cutoff within the confined plasma allow for the excitation of whispering gallery type eigenmodes localised to the plasma edge.

  19. Satellite-based terrestrial production efficiency modeling

    Directory of Open Access Journals (Sweden)

    Obersteiner Michael

    2009-09-01

    Full Text Available Production efficiency models (PEMs) are based on the theory of light use efficiency (LUE), which states that a relatively constant relationship exists between photosynthetic carbon uptake and radiation receipt at the canopy level. Challenges remain, however, in the application of the PEM methodology to global net primary productivity (NPP) monitoring. The objectives of this review are as follows: (1) to describe the general functioning of six PEMs (CASA, GLO-PEM, TURC, C-Fix, MOD17, and BEAMS) identified in the literature; (2) to review each model to determine potential improvements to the general PEM methodology; (3) to review the related literature on satellite-based gross primary productivity (GPP) and NPP modeling for additional possibilities for improvement; and (4) based on this review, to propose items for coordinated research. This review noted a number of possibilities for improvement to the general PEM architecture - ranging from LUE to meteorological and satellite-based inputs. Current PEMs tend to treat the globe similarly in terms of physiological and meteorological factors, often ignoring unique regional aspects. Each of the existing PEMs has developed unique methods to estimate NPP, and the combination of the most successful of these could lead to improvements. It may be beneficial to develop regional PEMs that can be combined under a global framework. The results of this review suggest the creation of a hybrid PEM could bring about a significant enhancement to the PEM methodology and thus terrestrial carbon flux modeling. Key items topping the PEM research agenda identified in this review include the following: LUE should not be assumed constant, but should vary by plant functional type (PFT) or photosynthetic pathway; evidence is mounting that PEMs should consider incorporating diffuse radiation; continue to pursue relationships between satellite-derived variables and LUE, GPP and autotrophic respiration (Ra); there is an urgent need for
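The light-use-efficiency core shared by these PEMs can be sketched directly: GPP is a maximum LUE down-scaled by temperature and vapour-pressure-deficit stress scalars, multiplied by absorbed radiation. The ramp end-points and LUE value below are illustrative placeholders, not the per-biome values MOD17 reads from its lookup table:

```python
def gpp_lue(par, fapar, tmin, vpd, lue_max=0.001,
            tmin_range=(-8.0, 11.4), vpd_range=(650.0, 3100.0)):
    """MOD17-style LUE sketch: GPP = LUE_max * f(Tmin) * f(VPD) * fAPAR * PAR.
    Ramp end-points and LUE_max are illustrative, not per-biome values."""
    def ramp(x, lo, hi, rising=True):
        s = min(max((x - lo) / (hi - lo), 0.0), 1.0)
        return s if rising else 1.0 - s
    f_t = ramp(tmin, *tmin_range)                 # cold-stress scalar [0, 1]
    f_v = ramp(vpd, *vpd_range, rising=False)     # dryness scalar [0, 1]
    return lue_max * f_t * f_v * fapar * par      # carbon uptake per area
```

The review's central criticism maps directly onto this sketch: treating `lue_max` as one global constant ignores variation by plant functional type and photosynthetic pathway.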

  20. Hybrid modeling and prediction of dynamical systems

    Science.gov (United States)

    Lloyd, Alun L.; Flores, Kevin B.

    2017-01-01

    Scientific analysis often relies on the ability to make accurate predictions of a system’s dynamics. Mechanistic models, parameterized by a number of unknown parameters, are often used for this purpose. Accurate estimation of the model state and parameters prior to prediction is necessary, but may be complicated by issues such as noisy data and uncertainty in parameters and initial conditions. At the other end of the spectrum exist nonparametric methods, which rely solely on data to build their predictions. While these nonparametric methods do not require a model of the system, their performance is strongly influenced by the amount and noisiness of the data. In this article, we consider a hybrid approach to modeling and prediction which merges recent advancements in nonparametric analysis with standard parametric methods. The general idea is to replace a subset of a mechanistic model’s equations with their corresponding nonparametric representations, resulting in a hybrid modeling and prediction scheme. Overall, we find that this hybrid approach allows for more robust parameter estimation and improved short-term prediction in situations where there is a large uncertainty in model parameters. We demonstrate these advantages in the classical Lorenz-63 chaotic system and in networks of Hindmarsh-Rose neurons before application to experimentally collected structured population data. PMID:28692642

  1. Risk terrain modeling predicts child maltreatment.

    Science.gov (United States)

    Daley, Dyann; Bachmann, Michael; Bachmann, Brittany A; Pedigo, Christian; Bui, Minh-Thuy; Coffman, Jamye

    2016-12-01

    As indicated by research on the long-term effects of adverse childhood experiences (ACEs), maltreatment has far-reaching consequences for affected children. Effective prevention measures have been elusive, partly due to difficulty in identifying vulnerable children before they are harmed. This study employs Risk Terrain Modeling (RTM), an analysis of the cumulative effect of environmental factors thought to be conducive for child maltreatment, to create a highly accurate prediction model for future substantiated child maltreatment cases in the City of Fort Worth, Texas. The model is superior to commonly used hotspot predictions and more beneficial in aiding prevention efforts in a number of ways: 1) it identifies the highest risk areas for future instances of child maltreatment with improved precision and accuracy; 2) it aids the prioritization of risk-mitigating efforts by informing about the relative importance of the most significant contributing risk factors; 3) since predictions are modeled as a function of easily obtainable data, practitioners do not have to undergo the difficult process of obtaining official child maltreatment data to apply it; 4) the inclusion of a multitude of environmental risk factors creates a more robust model with higher predictive validity; and, 5) the model does not rely on a retrospective examination of past instances of child maltreatment, but adapts predictions to changing environmental conditions. The present study introduces and examines the predictive power of this new tool to aid prevention efforts seeking to improve the safety, health, and wellbeing of vulnerable children.
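The overlay at the heart of Risk Terrain Modeling can be sketched as a weighted sum of rasterized risk-factor layers. The factor names and 3x3 grid below are toy illustrations, not the study's Fort Worth data:

```python
import numpy as np

def risk_terrain(layers, weights=None):
    """RTM sketch: overlay binary risk-factor rasters (1 = factor present
    in a cell) into a weighted composite risk surface."""
    layers = np.asarray(layers, dtype=float)     # (n_factors, ny, nx)
    if weights is None:
        weights = np.ones(len(layers))
    return np.tensordot(weights, layers, axes=1)

# Toy 3x3 study area with three hypothetical environmental factors.
bars = np.array([[1, 0, 0], [0, 0, 0], [0, 0, 0]])
vacant_lots = np.array([[1, 1, 0], [0, 0, 0], [0, 0, 0]])
transit_stops = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 0]])
risk = risk_terrain([bars, vacant_lots, transit_stops])
hotspot = np.unravel_index(risk.argmax(), risk.shape)  # highest-risk cell
```

The cell where the most factors coincide ranks highest; weighting layers by their estimated effect sizes is what lets RTM report the relative importance of each contributing risk factor.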

  2. Development and application of chronic disease risk prediction models.

    Science.gov (United States)

    Oh, Sun Min; Stefani, Katherine M; Kim, Hyeon Chang

    2014-07-01

    Currently, non-communicable chronic diseases are a major cause of morbidity and mortality worldwide, and a large proportion of chronic diseases are preventable through risk factor management. However, the prevention efficacy at the individual level is not yet satisfactory. Chronic disease prediction models have been developed to assist physicians and individuals in clinical decision-making. A chronic disease prediction model assesses multiple risk factors together and estimates an absolute disease risk for the individual. Accurate prediction of an individual's future risk for a certain disease enables the comparison of benefits and risks of treatment, the costs of alternative prevention strategies, and selection of the most efficient strategy for the individual. A large number of chronic disease prediction models, especially targeting cardiovascular diseases and cancers, have been suggested, and some of them have been adopted in the clinical practice guidelines and recommendations of many countries. Although few chronic disease prediction tools have been suggested in the Korean population, their clinical utility is not as high as expected. This article reviews methodologies that are commonly used for developing and evaluating a chronic disease prediction model and discusses the current status of chronic disease prediction in Korea.

  3. Property predictions using microstructural modeling

    Energy Technology Data Exchange (ETDEWEB)

    Wang, K.G. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States)]. E-mail: wangk2@rpi.edu; Guo, Z. [Sente Software Ltd., Surrey Technology Centre, 40 Occam Road, Guildford GU2 7YG (United Kingdom); Sha, W. [Metals Research Group, School of Civil Engineering, Architecture and Planning, The Queen' s University of Belfast, Belfast BT7 1NN (United Kingdom); Glicksman, M.E. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States); Rajan, K. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States)

    2005-07-15

    Precipitation hardening in an Fe-12Ni-6Mn maraging steel during overaging is quantified. First, applying our recent kinetic model of coarsening [Phys. Rev. E, 69 (2004) 061507], and incorporating the Ashby-Orowan relationship, we link quantifiable aspects of the microstructures of these steels to their mechanical properties, including especially the hardness. Specifically, hardness measurements allow calculation of the precipitate size as a function of time and temperature through the Ashby-Orowan relationship. Second, calculated precipitate sizes and thermodynamic data determined with Thermo-Calc© are used with our recent kinetic coarsening model to extract diffusion coefficients during overaging from hardness measurements. Finally, employing more accurate diffusion parameters, we determined the hardness of these alloys independently from theory, and found agreement with experimental hardness data. Diffusion coefficients determined during overaging of these steels are notably higher than those found during the aging - an observation suggesting that precipitate growth during aging and precipitate coarsening during overaging are not controlled by the same diffusion mechanism.
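The Ashby-Orowan relationship invoked above links precipitate size to the strengthening increment. A sketch of one common form, Δτ = 0.538 G b √f / X · ln(X / 2b), with illustrative values of the shear modulus G and Burgers vector b for an iron matrix (the paper inverts this relation to recover precipitate size from hardness):

```python
import math

def ashby_orowan_stress(X, f, G=81e9, b=2.48e-10):
    """Ashby-Orowan sketch: shear strengthening from non-shearable
    precipitates of diameter X (m) at volume fraction f. G and b are
    illustrative values for an iron matrix, not the paper's fit."""
    return 0.538 * G * b * math.sqrt(f) / X * math.log(X / (2.0 * b))

s20 = ashby_orowan_stress(20e-9, 0.05)  # 20 nm precipitates
s40 = ashby_orowan_stress(40e-9, 0.05)  # coarsened to 40 nm
```

At fixed volume fraction the increment falls as particles coarsen, which is why hardness decays during overaging and why measured hardness can be mapped back to precipitate size.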

  4. Spatial Economics Model Predicting Transport Volume

    Directory of Open Access Journals (Sweden)

    Lu Bo

    2016-10-01

    Full Text Available It is extremely important to predict logistics requirements in a scientific and rational way. In recent years, however, prediction methods have improved little: traditional statistical prediction suffers from low precision and poor interpretability, so it can neither theoretically guarantee the generalization ability of the prediction model nor explain the models effectively. Therefore, combining theories from spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, the study identifies the leading industries that generate large cargo volumes, and further predicts the static logistics generation of Zhuanghe and its hinterlands. By integrating the various factors that affect regional logistics requirements, this study established a logistics requirements potential model based on spatial economic principles, expanding logistics requirements prediction from purely statistical principles to the new area of spatial and regional economics.

  5. Modeling Interconnect Variability Using Efficient Parametric Model Order Reduction

    CERN Document Server

    Li, Peng; Li, Xin; Pileggi, Lawrence T; Nassif, Sani R

    2011-01-01

    Assessing IC manufacturing process fluctuations and their impacts on IC interconnect performance has become unavoidable for modern DSM designs. However, the construction of parametric interconnect models is often hampered by the rapid increase in computational cost and model complexity. In this paper we present an efficient yet accurate parametric model order reduction algorithm for addressing the variability of IC interconnect performance. The efficiency of the approach lies in a novel combination of low-rank matrix approximation and multi-parameter moment matching. The complexity of the proposed parametric model order reduction is as low as that of a standard Krylov subspace method when applied to a nominal system. Under the projection-based framework, our algorithm also preserves the passivity of the resulting parametric models.
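The moment-matching half of the algorithm can be illustrated on a nominal (parameter-free) system: project onto a Krylov subspace built from A⁻¹b so the reduced model reproduces the leading moments of the transfer function about s = 0. This sketch omits the paper's multi-parameter moments and low-rank approximation step; the system sizes and random matrices are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 8
# Random stable SISO interconnect-like system: x' = A x + b u, y = c^T x.
A = -2.0 * np.eye(n) + 0.5 * rng.normal(size=(n, n)) / np.sqrt(n)
b = rng.normal(size=n)
c = rng.normal(size=n)

def transfer(A, b, c, s):
    """Evaluate H(s) = c^T (s I - A)^{-1} b."""
    return c @ np.linalg.solve(s * np.eye(len(b)) - A, b)

# One-sided Krylov projection: span{A^-1 b, ..., A^-k b} matches the
# first k moments of H(s) about s = 0.
V = np.empty((n, k))
v = np.linalg.solve(A, b)
for j in range(k):
    V[:, j] = v
    v = np.linalg.solve(A, v)
V, _ = np.linalg.qr(V)                  # orthonormal basis
Ar, br, cr = V.T @ A @ V, V.T @ b, V.T @ c

H0 = transfer(A, b, c, 0.1j)            # full model, near the expansion point
Hr0 = transfer(Ar, br, cr, 0.1j)        # order-8 reduced model
```

The reduced 8-state model agrees with the 50-state original near the expansion point at the cost of a few linear solves, which is the efficiency argument the abstract makes; the paper's contribution is doing this across process-variation parameters while preserving passivity.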

  6. Modeling and Prediction Using Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp

    2016-01-01

    Pharmacokinetic/pharmakodynamic (PK/PD) modeling for a single subject is most often performed using nonlinear models based on deterministic ordinary differential equations (ODEs), and the variation between subjects in a population of subjects is described using a population (mixed effects) setup...... that describes the variation between subjects. The ODE setup implies that the variation for a single subject is described by a single parameter (or vector), namely the variance (covariance) of the residuals. Furthermore the prediction of the states is given as the solution to the ODEs and hence assumed...... deterministic and can predict the future perfectly. A more realistic approach would be to allow for randomness in the model due to e.g., the model be too simple or errors in input. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs...
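The contrast the authors draw between ODEs and SDEs can be made concrete by simulating sample paths with Euler-Maruyama: randomness enters the state evolution itself rather than only the residuals. A sketch on an Ornstein-Uhlenbeck process; the process and parameter values are illustrative, not the paper's PK/PD models:

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, t_end, dt, n_paths, seed=0):
    """Euler-Maruyama sketch for dX = f(X) dt + g(X) dW: the SDE analogue
    of an explicit ODE step, so system noise enters the state itself."""
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, float(x0))
    for _ in range(int(t_end / dt)):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)   # Brownian increments
        x = x + drift(x) * dt + diffusion(x) * dw
    return x

# Ornstein-Uhlenbeck process, mean-reverting toward theta (illustrative).
theta, kappa, sigma = 1.0, 2.0, 0.3
xT = euler_maruyama(lambda x: kappa * (theta - x),
                    lambda x: sigma * np.ones_like(x),
                    x0=0.0, t_end=5.0, dt=0.01, n_paths=4000)
# Terminal values scatter around theta with std near sigma/sqrt(2*kappa).
```

Unlike the deterministic ODE solution, the predicted future state is a distribution over paths, which is exactly the more realistic predictive setup the abstract argues for.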

  7. Precision Plate Plan View Pattern Predictive Model

    Institute of Scientific and Technical Information of China (English)

    ZHAO Yang; YANG Quan; HE An-rui; WANG Xiao-chen; ZHANG Yun

    2011-01-01

    According to the rolling features of a plate mill, a 3D elastic-plastic FEM (finite element model) based on the full restart method of ANSYS/LS-DYNA was established to study the inhomogeneous plastic deformation of multipass plate rolling. By analyzing the simulation results, the difference between the head-end and tail-end predictive models was found and corrected. Based on the numerical simulation results of 120 different conditions, a precision plate plan view pattern predictive model was established. On the basis of these models, the sizing MAS (Mizushima automatic plan view pattern control system) method was designed and applied on a 2 800 mm plate mill. Comparing plates rolled with and without the PVPP (plan view pattern predictive) model, the reduced width deviation indicates that the plate plan view pattern predictive model is precise.

  8. Nonconvex model predictive control for commercial refrigeration

    Science.gov (United States)

    Gybel Hovgaard, Tobias; Boyd, Stephen; Larsen, Lars F. S.; Bagterp Jørgensen, John

    2013-08-01

    We consider the control of a commercial multi-zone refrigeration system, consisting of several cooling units that share a common compressor, and is used to cool multiple areas or rooms. In each time period we choose cooling capacity to each unit and a common evaporation temperature. The goal is to minimise the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear, and the constraints are convex. The cost function, however, is nonconvex due to the temperature dependence of thermodynamic efficiency. To handle this nonconvexity we propose a sequential convex optimisation method, which typically converges in fewer than 5 or so iterations. We employ a fast convex quadratic programming solver to carry out the iterations, which is more than fast enough to run in real time. We demonstrate our method on a realistic model, with a full year simulation and 15-minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost savings, on the order of 30%, compared to a standard thermostat-based control system. Perhaps more important, we see that the method exhibits sophisticated response to real-time variations in electricity prices. This demand response is critical to help balance real-time uncertainties in generation capacity associated with large penetration of intermittent renewable energy sources in a future smart grid.

  9. A study on an efficient prediction of welding deformation for T-joint laser welding of sandwich panel Part II : Proposal of a method to use shell element model

    Directory of Open Access Journals (Sweden)

    Kim Jae Woong

    2014-06-01

    Full Text Available The I-core sandwich panel, which has come into wider use, is assembled using high power CO₂ laser welding. Kim et al. (2013) proposed a circular cone type heat source model for the T-joint laser welding between face plate and core. It can cover the negative defocus which is commonly adopted in T-joint laser welding to provide deeper penetration. In Part I, a volumetric heat source model is proposed and verified through a comparison of the melting zone on the cross section with experimental results. The proposed model can be used for heat transfer analysis and thermal elasto-plastic analysis to predict the welding deformation that occurs during laser welding. In terms of computational time, since thermal elasto-plastic analysis using 3D solid elements is quite time consuming, shell element models with multiple layers have been employed instead. However, the conventional layered approach is not appropriate for the application of heat load at the T-joint. This paper, Part II, suggests a new method to arrange different numbers of layers for the face plate and core in order to impose the heat load only on the face plate.

  10. A study on an efficient prediction of welding deformation for T-joint laser welding of sandwich panel Part II : Proposal of a method to use shell element model

    Science.gov (United States)

    Kim, Jae Woong; Jang, Beom Seon; Kang, Sung Wook

    2014-06-01

    The I-core sandwich panel, which has come into wider use, is assembled using high power CO₂ laser welding. Kim et al. (2013) proposed a circular cone type heat source model for the T-joint laser welding between face plate and core. It can cover the negative defocus which is commonly adopted in T-joint laser welding to provide deeper penetration. In Part I, a volumetric heat source model is proposed and verified through a comparison of the melting zone on the cross section with experimental results. The proposed model can be used for heat transfer analysis and thermal elasto-plastic analysis to predict the welding deformation that occurs during laser welding. In terms of computational time, since thermal elasto-plastic analysis using 3D solid elements is quite time consuming, shell element models with multiple layers have been employed instead. However, the conventional layered approach is not appropriate for the application of heat load at the T-joint. This paper, Part II, suggests a new method to arrange different numbers of layers for the face plate and core in order to impose the heat load only on the face plate.

  11. NBC Hazard Prediction Model Capability Analysis

    Science.gov (United States)

    1999-09-01

    Puff (SCIPUFF) Model Verification and Evaluation Study, Air Resources Laboratory, NOAA, May 1998. Based on the NOAA review, the VLSTRACK developers... TO SUBSTANTIAL DIFFERENCES IN PREDICTIONS. HPAC uses a transport and dispersion (T&D) model called SCIPUFF and an associated mean wind field model... SCIPUFF is a model for atmospheric dispersion that uses the Gaussian puff method - an arbitrary time-dependent concentration field is represented

  12. Improving Environmental Model Calibration and Prediction

    Science.gov (United States)

    2011-01-18

    groundwater model calibration. Adv. Water Resour., 29(4):605–623, 2006. [9] B.E. Skahill, J.S. Baggett, S. Frankenstein, and C.W. Downer. More efficient...of Hydrology, Environmental Modelling & Software, or Water Resources Research). Skahill, B., Baggett, J., Frankenstein, S., and Downer, C.W. (2009

  13. Efficient 3D scene modeling and mosaicing

    CERN Document Server

    Nicosevici, Tudor

    2013-01-01

    This book proposes a complete pipeline for monocular (single camera) based 3D mapping of terrestrial and underwater environments. The aim is to provide a solution to large-scale scene modeling that is both accurate and efficient. To this end, we have developed a novel Structure from Motion algorithm that increases mapping accuracy by registering camera views directly with the maps. The camera registration uses a dual approach that adapts to the type of environment being mapped.   In order to further increase the accuracy of the resulting maps, a new method is presented, allowing detection of images corresponding to the same scene region (crossovers). Crossovers are then used in conjunction with global alignment methods to greatly reduce estimation errors, especially when mapping large areas. Our method is based on the Visual Bag of Words (BoW) paradigm, offering a more efficient and simpler solution by eliminating the training stage generally required by state of the art BoW algorithms.   Also, towards dev...

  14. An Efficient Interval Type-2 Fuzzy CMAC for Chaos Time-Series Prediction and Synchronization.

    Science.gov (United States)

    Lee, Ching-Hung; Chang, Feng-Yu; Lin, Chih-Min

    2014-03-01

    This paper aims to propose a more efficient control algorithm for chaos time-series prediction and synchronization. A novel type-2 fuzzy cerebellar model articulation controller (T2FCMAC) is proposed. In some special cases, this T2FCMAC can be reduced to an interval type-2 fuzzy neural network, a fuzzy neural network, and a fuzzy cerebellar model articulation controller (CMAC). Thus, the T2FCMAC is a more generalized network with better learning ability, and it is used here for chaos time-series prediction and synchronization. Moreover, the T2FCMAC realizes an un-normalized interval type-2 fuzzy logic system based on the structure of the CMAC. It can provide better capabilities for handling uncertainty and more design degrees of freedom than a traditional type-1 fuzzy CMAC. Unlike most interval type-2 fuzzy systems, the type-reduction of the T2FCMAC is bypassed due to the property of the un-normalized interval type-2 fuzzy logic system. This gives the T2FCMAC lower computational complexity and makes it more practical. For chaos time-series prediction and synchronization applications, the training architectures with corresponding convergence analyses and optimal learning rates based on a Lyapunov stability approach are introduced. Finally, two illustrative examples are presented to demonstrate the performance of the proposed T2FCMAC.

  15. An Efficient Hydrodynamic Model for Surface Waves

    Institute of Scientific and Technical Information of China (English)

    WANG Kun; JIN Sheng; LU Gang

    2009-01-01

    In the present study, a semi-implicit finite difference model for non-hydrostatic, free-surface flows is analyzed and discussed. The governing equations are the three-dimensional free-surface Reynolds-averaged Navier-Stokes equations defined on a general, irregular domain of arbitrary scale. At outflow, a combination of a sponge layer technique and a radiation boundary condition is applied to minimize wave reflection. The equations are solved with the fractional step method, where the hydrostatic pressure component is determined first, while the non-hydrostatic component of the pressure is computed from the pressure Poisson equation, in which the coefficient matrix is positive definite and symmetric. The advection and horizontal viscosity terms are discretized by use of a semi-Lagrangian approach. The resulting model is computationally efficient and unrestricted by the CFL condition. The developed model is verified against analytical solutions and experimental data, with excellent agreement.
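The non-hydrostatic pressure step described above reduces, on each grid line, to a symmetric positive-definite tridiagonal solve. As a hedged illustration (a 1D reduction with invented boundary conditions, not the model's actual discretisation), the Thomas algorithm below solves -q'' = 2 on (0, 1) with q(0) = q(1) = 0, whose exact solution is q(x) = x(1 - x):

```python
# Illustrative 1D stand-in for the pressure Poisson solve: a symmetric
# positive-definite tridiagonal system solved by the Thomas algorithm.

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c and right-hand side d (lists of length n;
    a[0] and c[-1] are unused)."""
    n = len(d)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# -q'' = f on (0,1), q(0) = q(1) = 0, f = 2  ->  exact q(x) = x(1 - x)
n = 99
h = 1.0 / (n + 1)
a = [-1.0] * n
b = [2.0] * n
c = [-1.0] * n
d = [2.0 * h * h] * n
q = thomas(a, b, c, d)
x_mid = 50 * h  # grid point closest to x = 0.5
print(abs(q[49] - x_mid * (1 - x_mid)))  # discretisation error
```

Because the exact solution is quadratic, the central difference is exact here and the printed error is at machine-precision level.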

  16. Corporate prediction models, ratios or regression analysis?

    NARCIS (Netherlands)

    Bijnen, E.J.; Wijn, M.F.C.M.

    1994-01-01

    The models developed in the literature with respect to the prediction of a company's failure are based on ratios. It has been shown before that these models should be rejected on theoretical grounds. Our study of industrial companies in the Netherlands shows that the ratios which are used in

  17. Modelling Chemical Reasoning to Predict Reactions

    CERN Document Server

    Segler, Marwin H S

    2016-01-01

    The ability to reason beyond established knowledge allows organic chemists to solve synthetic problems and to invent novel transformations. Here, we propose a model which mimics chemical reasoning and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outperforms a rule-based expert system in the reaction prediction task for 180,000 randomly selected binary reactions. We show that our data-driven model generalises even beyond known reaction types, and is thus capable of effectively (re)discovering novel transformations (even including transition-metal catalysed reactions). Our model enables computers to infer hypotheses about reactivity and reactions by only considering the intrinsic local structure of the graph, and because each single reaction prediction is typically ac...

  18. A CHAID Based Performance Prediction Model in Educational Data Mining

    Directory of Open Access Journals (Sweden)

    R. Bhaskaran

    2010-01-01

    Full Text Available The performance in higher secondary school education in India is a turning point in the academic lives of all students. As this academic performance is influenced by many factors, it is essential to develop a predictive data mining model for students' performance so as to identify the slow learners and study the influence of the dominant factors on their academic performance. In the present investigation, a survey cum experimental methodology was adopted to generate a database, which was constructed from a primary and a secondary source. While the primary data was collected from the regular students, the secondary data was gathered from the school and the office of the Chief Educational Officer (CEO). A total of 1000 datasets of the year 2006 from five different schools in three different districts of Tamilnadu were collected. The raw data was preprocessed in terms of filling up missing values, transforming values from one form into another and relevant attribute/variable selection. As a result, we had 772 student records, which were used for CHAID prediction model construction. A set of prediction rules were extracted from the CHAID prediction model and the efficiency of the generated model was determined. The accuracy of the present model was compared with other models and it has been found to be satisfactory.
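The split selection at the core of CHAID chooses, at each tree node, the attribute whose contingency table against the class label is most significant under a chi-square test. A minimal sketch of that selection step on invented records (the attribute names and values are illustrative, not from the study's database):

```python
# Sketch of CHAID's split-selection step: pick the attribute whose
# contingency table against the class label has the largest
# Pearson chi-square statistic. Toy, hypothetical data.

from collections import Counter

def chi_square(pairs):
    """Pearson chi-square statistic for a list of (attribute, label) pairs."""
    n = len(pairs)
    joint = Counter(pairs)
    rows = Counter(a for a, _ in pairs)
    cols = Counter(l for _, l in pairs)
    stat = 0.0
    for (a, l), observed in joint.items():
        expected = rows[a] * cols[l] / n
        stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical records: (study_hours_band, parent_support, outcome)
records = [
    ("high", "yes", "pass"), ("high", "yes", "pass"),
    ("high", "no", "pass"), ("low", "no", "fail"),
    ("low", "yes", "fail"), ("low", "yes", "fail"),
    ("low", "no", "pass"), ("high", "no", "fail"),
]
for name, idx in [("study_hours_band", 0), ("parent_support", 1)]:
    pairs = [(r[idx], r[2]) for r in records]
    print(name, round(chi_square(pairs), 3))
```

On this toy data the study-hours attribute yields the larger statistic and would be chosen as the split; a full CHAID also merges categories and applies a Bonferroni-adjusted p-value, which is omitted here.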

  19. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.

  20. Economic model predictive control theory, formulations and chemical process applications

    CERN Document Server

    Ellis, Matthew; Christofides, Panagiotis D

    2017-01-01

    This book presents general methods for the design of economic model predictive control (EMPC) systems for broad classes of nonlinear systems that address key theoretical and practical considerations, including recursive feasibility, closed-loop stability, closed-loop performance, and computational efficiency. Specifically, the book proposes: Lyapunov-based EMPC methods for nonlinear systems; two-tier EMPC architectures that are highly computationally efficient; and EMPC schemes that explicitly handle uncertainty, time-varying cost functions, time-delays and multiple-time-scale dynamics. The proposed methods employ a variety of tools ranging from nonlinear systems analysis, through Lyapunov-based control techniques, to nonlinear dynamic optimization. The applicability and performance of the proposed methods are demonstrated through a number of chemical process examples. The book presents state-of-the-art methods for the design of economic model predictive control systems for chemical processes. In addition to being...

  1. Genetic models of homosexuality: generating testable predictions

    OpenAIRE

    Gavrilets, Sergey; Rice, William R.

    2006-01-01

    Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality inclu...

  2. Wind farm production prediction - The Zephyr model

    Energy Technology Data Exchange (ETDEWEB)

    Landberg, L. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark); Giebel, G. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark); Madsen, H. [IMM (DTU), Kgs. Lyngby (Denmark); Nielsen, T.S. [IMM (DTU), Kgs. Lyngby (Denmark); Joergensen, J.U. [Danish Meteorologisk Inst., Copenhagen (Denmark); Lauersen, L. [Danish Meteorologisk Inst., Copenhagen (Denmark); Toefting, J. [Elsam, Fredericia (DK); Christensen, H.S. [Eltra, Fredericia (Denmark); Bjerge, C. [SEAS, Haslev (Denmark)

    2002-06-01

    This report describes a project - funded by the Danish Ministry of Energy and the Environment - which developed a next generation prediction system called Zephyr. The Zephyr system is a merging between two state-of-the-art prediction systems: Prediktor of Risoe National Laboratory and WPPT of IMM at the Danish Technical University. The numerical weather predictions were generated by DMI's HIRLAM model. Due to technical difficulties programming the system, only the computational core and a very simple version of the originally very complex system were developed. The project partners were: Risoe, DMU, DMI, Elsam, Eltra, Elkraft System, SEAS and E2. (au)

  3. Allostasis: a model of predictive regulation.

    Science.gov (United States)

    Sterling, Peter

    2012-04-12

    The premise of the standard regulatory model, "homeostasis", is flawed: the goal of regulation is not to preserve constancy of the internal milieu. Rather, it is to continually adjust the milieu to promote survival and reproduction. Regulatory mechanisms need to be efficient, but homeostasis (error-correction by feedback) is inherently inefficient. Thus, although feedbacks are certainly ubiquitous, they could not possibly serve as the primary regulatory mechanism. A newer model, "allostasis", proposes that efficient regulation requires anticipating needs and preparing to satisfy them before they arise. The advantages: (i) errors are reduced in magnitude and frequency; (ii) response capacities of different components are matched -- to prevent bottlenecks and reduce safety factors; (iii) resources are shared between systems to minimize reserve capacities; (iv) errors are remembered and used to reduce future errors. This regulatory strategy requires a dedicated organ, the brain. The brain tracks multitudinous variables and integrates their values with prior knowledge to predict needs and set priorities. The brain coordinates effectors to mobilize resources from modest bodily stores and enforces a system of flexible trade-offs: from each organ according to its ability, to each organ according to its need. The brain also helps regulate the internal milieu by governing anticipatory behavior. Thus, an animal conserves energy by moving to a warmer place - before it cools, and it conserves salt and water by moving to a cooler one before it sweats. The behavioral strategy requires continuously updating a set of specific "shopping lists" that document the growing need for each key component (warmth, food, salt, water). These appetites funnel into a common pathway that employs a "stick" to drive the organism toward filling the need, plus a "carrot" to relax the organism when the need is satisfied. 
The stick corresponds broadly to the sense of anxiety, and the carrot broadly to

  4. Maximum likelihood Bayesian model averaging and its predictive analysis for groundwater reactive transport models

    Science.gov (United States)

    Curtis, Gary P.; Lu, Dan; Ye, Ming

    2015-01-01

    While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. This study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. These reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. 
Limitations of applying MLBMA to the
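The averaging step underlying MLBMA can be sketched compactly: posterior model weights are proportional to each model's approximate marginal likelihood times its prior model probability, and predictions are weight-averaged. The log-likelihood values, priors and point predictions below are invented for illustration, not the study's calibrated values:

```python
# Sketch of Bayesian model averaging weights: p(Mk | D) is proportional to
# p(D | Mk) * p(Mk); predictions are averaged with those weights.
# All numbers below are hypothetical.

import math

def bma_weights(log_likes, priors):
    # Subtract the max log-likelihood before exponentiating, for stability.
    m = max(log_likes)
    unnorm = [math.exp(ll - m) * p for ll, p in zip(log_likes, priors)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

log_likes = [-12.0, -10.0, -15.0]   # hypothetical model log-likelihoods
priors = [1 / 3, 1 / 3, 1 / 3]      # equal prior model probabilities
preds = [4.2, 5.0, 3.1]             # each model's point prediction

w = bma_weights(log_likes, priors)
bma_pred = sum(wi * pi for wi, pi in zip(w, preds))
print([round(wi, 3) for wi in w], round(bma_pred, 3))
```

The best-supported model dominates the weights, but the averaged prediction still reflects the alternatives; assigning smaller priors to structurally correlated models, as in the study, simply changes the `priors` list.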

  5. Hierarchical Neural Regression Models for Customer Churn Prediction

    Directory of Open Access Journals (Sweden)

    Golshan Mohammadi

    2013-01-01

    Full Text Available As customers are the main assets of each industry, customer churn prediction is becoming a major task for companies to remain in competition with competitors. In the literature, the better applicability and efficiency of hierarchical data mining techniques has been reported. This paper considers three hierarchical models by combining four different data mining techniques for churn prediction, which are backpropagation artificial neural networks (ANN), self-organizing maps (SOM), alpha-cut fuzzy c-means (α-FCM), and the Cox proportional hazards regression model. The hierarchical models are ANN + ANN + Cox, SOM + ANN + Cox, and α-FCM + ANN + Cox. In particular, the first component of the models aims to cluster data into churner and nonchurner groups and also filter out unrepresentative data or outliers. Then, the clustered data as the outputs are used to assign customers to churner and nonchurner groups by the second technique. Finally, the correctly classified data are used to create the Cox proportional hazards model. To evaluate the performance of the hierarchical models, an Iranian mobile dataset is considered. The experimental results show that the hierarchical models outperform the single Cox regression baseline model in terms of prediction accuracy, Type I and II errors, RMSE, and MAD metrics. In addition, the α-FCM + ANN + Cox model performs significantly better than the two other hierarchical models.

  6. Predictive model for segmented poly(urea)

    Directory of Open Access Journals (Sweden)

    Frankl P.

    2012-08-01

    Full Text Available Segmented poly(urea) has been shown to be of significant benefit in protecting vehicles from blast and impact, and there have been several experimental studies to determine the mechanisms by which this protective function might occur. One suggested route is by mechanical activation of the glass transition. In order to enable the design of protective structures using this material, a constitutive model and equation of state are needed for numerical simulation hydrocodes. Determination of such a predictive model may also help elucidate the beneficial mechanisms that occur in polyurea during high rate loading. The tool deployed to do this has been Group Interaction Modelling (GIM), a mean field technique that has been shown to predict the mechanical and physical properties of polymers from their structure alone. The structure of polyurea has been used to characterise the parameters in the GIM scheme without recourse to experimental data, and the equation of state and constitutive model predict response over a wide range of temperatures and strain rates. The shock Hugoniot has been predicted and validated against existing data. Mechanical response in tensile tests has also been predicted and validated.

  7. Hurst exponent and prediction based on weak-form efficient market hypothesis of stock markets

    Science.gov (United States)

    Eom, Cheoljun; Choi, Sunghoon; Oh, Gabjin; Jung, Woo-Sung

    2008-07-01

    We empirically investigated the relationships between the degree of efficiency and the predictability in financial time-series data. The Hurst exponent was used as the measurement of the degree of efficiency, and the hit rate calculated from the nearest-neighbor prediction method was used for the prediction of the directions of future price changes. We used 60 market indexes of various countries. We empirically discovered that the relationship between the degree of efficiency (the Hurst exponent) and the predictability (the hit rate) is strongly positive. That is, a market index with a higher Hurst exponent tends to have a higher hit rate. These results suggested that the Hurst exponent is useful for predicting future price changes. Furthermore, we also discovered that the Hurst exponent and the hit rate are useful as standards that can distinguish emerging capital markets from mature capital markets.
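A minimal sketch of estimating the Hurst exponent by rescaled-range (R/S) analysis, the efficiency measure used above; the window sizes and estimator details are simplifications, not the authors' exact procedure. For i.i.d. noise the fitted slope should come out near 0.5:

```python
# Sketch of a rescaled-range (R/S) Hurst exponent estimate: average R/S
# over non-overlapping windows of several sizes, then fit the slope of
# log(R/S) against log(window size).

import math
import random

def rs(series):
    """Rescaled range R/S of one window."""
    n = len(series)
    mean = sum(series) / n
    devs = [x - mean for x in series]
    cum, z = [], 0.0
    for d in devs:
        z += d
        cum.append(z)
    r = max(cum) - min(cum)
    s = math.sqrt(sum(d * d for d in devs) / n)
    return r / s if s > 0 else 0.0

def hurst(series, window_sizes=(8, 16, 32, 64, 128)):
    """Least-squares slope of log(R/S) versus log(window size)."""
    xs, ys = [], []
    for w in window_sizes:
        chunks = [series[i:i + w] for i in range(0, len(series) - w + 1, w)]
        avg_rs = sum(rs(c) for c in chunks) / len(chunks)
        xs.append(math.log(w))
        ys.append(math.log(avg_rs))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

random.seed(0)
returns = [random.gauss(0.0, 1.0) for _ in range(4096)]
h = hurst(returns)
print(round(h, 2))  # i.i.d. noise: close to 0.5
```

Values persistently above 0.5 indicate long-range dependence, which the study links to higher hit rates of the nearest-neighbor direction forecasts.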

  8. Emulation Modeling with Bayesian Networks for Efficient Decision Support

    Science.gov (United States)

    Fienen, M. N.; Masterson, J.; Plant, N. G.; Gutierrez, B. T.; Thieler, E. R.

    2012-12-01

    Bayesian decision networks (BDN) have long been used to provide decision support in systems that require explicit consideration of uncertainty; applications range from ecology to medical diagnostics and terrorism threat assessments. Until recently, however, few studies have applied BDNs to the study of groundwater systems. BDNs are particularly useful for representing real-world system variability by synthesizing a range of hydrogeologic situations within a single simulation. Because BDN outputs are cast in terms of probability, the form desired by decision makers, they explicitly incorporate the uncertainty of a system. BDNs can thus serve as a more efficient alternative to other uncertainty characterization methods, such as computationally demanding Monte Carlo analyses and methods restricted to linear model analyses. We present a unique application of a BDN to a groundwater modeling analysis of the hydrologic response of Assateague Island, Maryland to sea-level rise. Using both input and output variables of the modeled groundwater response to different sea-level rise (SLR) scenarios, the BDN predicts the probability of changes in the depth to fresh water, which exerts an important influence on physical and biological island evolution. Input variables included barrier-island width, maximum island elevation, and aquifer recharge. The variability of these inputs and their corresponding outputs are sampled along cross sections in a single model run to form an ensemble of input/output pairs. The BDN outputs, which are the posterior distributions of water table conditions for the sea-level rise scenarios, are evaluated through error analysis and cross-validation to assess both fit to training data and predictive power. The key benefit of using BDNs in groundwater modeling analyses is that they provide a method for distilling complex model results into predictions with associated uncertainty, which is useful to decision makers. Future efforts incorporate
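The emulation idea can be sketched by discretising an ensemble of model input/output pairs into bins and tabulating conditional probabilities, so that a query returns a probability distribution rather than a point value. The data below are synthetic and the variable names and bin edges are invented; a real BDN also handles multiple parents and structured inference:

```python
# Sketch of the emulator idea: bin ensemble input/output pairs from model
# runs and tabulate P(output bin | input bin) by counting.
# Synthetic data; names and thresholds are illustrative only.

from collections import Counter, defaultdict
import random

random.seed(1)

def bin_width(w):          # island width (m) -> coarse category
    return "narrow" if w < 500 else "wide"

def bin_depth(d):          # depth to fresh water (m) -> coarse category
    return "shallow" if d < 2.0 else "deep"

# Synthetic "model runs": wider islands give deeper water tables
ensemble = []
for _ in range(2000):
    width = random.uniform(100, 1000)
    depth = 0.004 * width + random.gauss(0.0, 0.4)
    ensemble.append((bin_width(width), bin_depth(depth)))

# Conditional probability table P(depth_bin | width_bin)
counts = defaultdict(Counter)
for w_bin, d_bin in ensemble:
    counts[w_bin][d_bin] += 1

for w_bin in ("narrow", "wide"):
    total = sum(counts[w_bin].values())
    dist = {d: round(c / total, 2) for d, c in counts[w_bin].items()}
    print(w_bin, dist)
```

Each query answer is a distribution over outcome bins, which is exactly the probabilistic form the study highlights as useful to decision makers.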

  9. Predicting the disinfection efficiency range in chlorine contact tanks through a CFD-based approach.

    Science.gov (United States)

    Angeloudis, Athanasios; Stoesser, Thorsten; Falconer, Roger A

    2014-09-01

    In this study, three-dimensional computational fluid dynamics (CFD) models, incorporating appropriately selected kinetic models, were developed to simulate the processes of chlorine decay, pathogen inactivation and the formation of potentially carcinogenic by-products in disinfection contact tanks (CTs). Currently, the performance of CT facilities largely relies on Hydraulic Efficiency Indicators (HEIs), extracted from experimentally derived Residence Time Distribution (RTD) curves. This approach has more recently been aided by the application of CFD models, which can be calibrated to accurately predict RTDs, enabling the assessment of disinfection facilities prior to their construction. However, as long as it depends on HEIs, the CT design process does not directly take into consideration the disinfection biochemistry, which needs to be optimized. The main objective of this study is to address this issue by refining the modelling practices to simulate some reactive processes of interest, while acknowledging the uneven contact time stemming from the RTD curves. Initially, the hydraulic performances of seven CT design variations were reviewed through available experimental and computational data. In turn, the same design configurations were tested using numerical modelling techniques, featuring kinetic models that enable the quantification of disinfection operational parameters. Results highlight that the optimization of the hydrodynamic conditions facilitates a more uniform disinfectant contact time, which corresponds to greater levels of pathogen inactivation and a more controlled by-product accumulation. Copyright © 2014 Elsevier Ltd. All rights reserved.
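A hedged sketch of the kinetics such CFD models typically embed: first-order chlorine decay, C(t) = C0 exp(-kt), combined with Chick-Watson inactivation in which the log removal is driven by the accumulated Ct. The rate constants below are illustrative, not the study's calibrated values:

```python
# Sketch of first-order chlorine decay plus Chick-Watson inactivation.
# Constants are hypothetical stand-ins for calibrated kinetic parameters.

import math

def chlorine(c0, k_decay, t):
    """First-order decay: C(t) = C0 * exp(-k * t)."""
    return c0 * math.exp(-k_decay * t)

def log_inactivation(c0, k_decay, k_inact, t, steps=10000):
    """Chick-Watson: dN/N = -k_inact * C(t) dt, integrated numerically."""
    dt = t / steps
    ct = 0.0
    for i in range(steps):
        ct += chlorine(c0, k_decay, (i + 0.5) * dt) * dt   # midpoint rule
    return k_inact * ct / math.log(10)   # convert ln-units to log10

c0 = 1.5          # mg/L initial residual (hypothetical)
k_decay = 0.01    # 1/min
k_inact = 0.6     # L/(mg*min)
t = 30.0          # minutes of contact

print(round(chlorine(c0, k_decay, t), 3))                   # residual
print(round(log_inactivation(c0, k_decay, k_inact, t), 2))  # log10 removal
```

For first-order decay the accumulated Ct has the closed form (C0/k)(1 - exp(-kt)), so the numerical integral can be checked analytically; in the CFD setting the same integral is accumulated along each fluid path, which is why uneven residence times matter.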

  10. Nonlinear predictive control for durability enhancement and efficiency improvement in a fuel cell power system

    Science.gov (United States)

    Luna, Julio; Jemei, Samir; Yousfi-Steiner, Nadia; Husar, Attila; Serra, Maria; Hissel, Daniel

    2016-10-01

    In this work, a nonlinear model predictive control (NMPC) strategy is proposed to improve the efficiency and enhance the durability of a proton exchange membrane fuel cell (PEMFC) power system. The PEMFC controller is based on a distributed parameters model that describes the nonlinear dynamics of the system, considering spatial variations along the gas channels. Parasitic power from different system auxiliaries is considered, including the main parasitic losses which are those of the compressor. A nonlinear observer is implemented, based on the discretised model of the PEMFC, to estimate the internal states. This information is included in the cost function of the controller to enhance the durability of the system by means of avoiding local starvation and inappropriate water vapour concentrations. Simulation results are presented to show the performance of the proposed controller over a given case study in an automotive application (New European Driving Cycle). With the aim of representing the most relevant phenomena that affects the PEMFC voltage, the simulation model includes a two-phase water model and the effects of liquid water on the catalyst active area. The control model is a simplified version that does not consider two-phase water dynamics.
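The receding-horizon principle behind NMPC can be sketched with a scalar toy plant: at each step, evaluate candidate input sequences over a short horizon, apply only the first input of the best one, and repeat. The plant, cost weights and candidate grid below are invented stand-ins, not the PEMFC model:

```python
# Sketch of receding-horizon control: optimise over a horizon, apply only
# the first input, then re-optimise from the new state. Toy scalar plant.

def simulate(x, u_seq):
    """Toy nonlinear plant: x_next = x + 0.1 * (-x + u - 0.05 * x**2)."""
    traj = []
    for u in u_seq:
        x = x + 0.1 * (-x + u - 0.05 * x ** 2)
        traj.append(x)
    return traj

def mpc_step(x, target, horizon=5, candidates=(0.0, 0.5, 1.0, 1.5, 2.0)):
    """Pick the constant input over the horizon minimising tracking cost."""
    best_u, best_cost = None, float("inf")
    for u in candidates:
        traj = simulate(x, [u] * horizon)
        cost = sum((xi - target) ** 2 for xi in traj) + 0.01 * u ** 2
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

x, target = 0.0, 0.9
for _ in range(50):                  # closed loop: apply first input only
    u = mpc_step(x, target)
    x = simulate(x, [u])[0]
print(round(x, 2))
```

A real NMPC replaces the crude candidate enumeration with a nonlinear dynamic optimisation, and its cost would add the durability terms (starvation and water-content penalties) described in the abstract.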

  11. PREDICTIVE CAPACITY OF ARCH FAMILY MODELS

    Directory of Open Access Journals (Sweden)

    Raphael Silveira Amaro

    2016-03-01

    Full Text Available In the last decades, a remarkable number of models, variants of the Autoregressive Conditional Heteroscedastic family, have been developed and empirically tested, making the process of choosing a particular model extremely complex. This research aims to compare, using the Model Confidence Set procedure, the predictive capacity of five conditional heteroskedasticity models, considering eight different statistical probability distributions. The financial series used are the log-return series of the Bovespa index and the Dow Jones Industrial Index in the period between 27 October 2008 and 30 December 2014. The empirical evidence showed that, in general, competing models are highly homogeneous in their predictions, whether for the stock market of a developed country or for that of a developing country. An equivalent result can be inferred for the statistical probability distributions that were used.
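The building block shared by all the compared models is a conditional-variance recursion on log-returns. A minimal ARCH(1) sketch, with made-up prices and parameters (omega, alpha) rather than estimates for the Bovespa or Dow Jones series:

```python
# Sketch of log-returns plus an ARCH(1) conditional-variance recursion,
# the simplest member of the model family compared in the study.
# Prices and parameters below are illustrative only.

import math

def log_returns(prices):
    return [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]

def arch1_variances(returns, omega, alpha):
    """sigma_t^2 = omega + alpha * r_{t-1}^2, seeded with sample variance."""
    mean = sum(returns) / len(returns)
    var = sum((r - mean) ** 2 for r in returns) / len(returns)
    sigmas = [var]
    for r in returns[:-1]:
        sigmas.append(omega + alpha * r ** 2)
    return sigmas

prices = [100.0, 101.0, 99.5, 99.9, 102.0, 101.2, 103.0]
r = log_returns(prices)
s = arch1_variances(r, omega=1e-5, alpha=0.3)
print([round(x, 6) for x in s])
```

GARCH and its variants extend this recursion with lagged variance terms, and the distributional choices in the study change only the likelihood used to estimate omega and alpha.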

  12. Predictive QSAR modeling of phosphodiesterase 4 inhibitors.

    Science.gov (United States)

    Kovalishyn, Vasyl; Tanchuk, Vsevolod; Charochkina, Larisa; Semenuta, Ivan; Prokopenko, Volodymyr

    2012-02-01

    A series of diverse organic compounds, phosphodiesterase type 4 (PDE-4) inhibitors, have been modeled using a QSAR-based approach. 48 QSAR models were compared by following the same procedure with different combinations of descriptors and machine learning methods. QSAR methodologies used random forests and associative neural networks. The predictive ability of the models was tested through leave-one-out cross-validation, giving a Q² = 0.66-0.78 for regression models and total accuracies Ac=0.85-0.91 for classification models. Predictions for the external evaluation sets obtained accuracies in the range of 0.82-0.88 (for active/inactive classifications) and Q² = 0.62-0.76 for regressions. The method showed itself to be a potential tool for estimation of IC₅₀ of new drug-like candidates at early stages of drug development. Copyright © 2011 Elsevier Inc. All rights reserved.
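The leave-one-out Q² used above to rank the regression models can be sketched in a few lines: refit with each sample held out, predict the held-out response, and compare the accumulated prediction error (PRESS) with the total sum of squares. A one-descriptor linear model on invented data stands in for the real descriptor sets:

```python
# Sketch of leave-one-out cross-validated Q^2 for a regression model:
# Q^2 = 1 - PRESS / SS_tot. Data are hypothetical, not from the study.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return b, my - b * mx

def q2_loo(xs, ys):
    press = 0.0
    for i in range(len(xs)):
        xs_t = xs[:i] + xs[i + 1:]
        ys_t = ys[:i] + ys[i + 1:]
        b, a = fit_line(xs_t, ys_t)          # refit without sample i
        press += (ys[i] - (a + b * xs[i])) ** 2
    my = sum(ys) / len(ys)
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - press / ss_tot

# Hypothetical descriptor vs pIC50-like activity, roughly linear
xs = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
ys = [5.1, 5.4, 6.1, 6.3, 6.8, 7.4, 7.5, 8.2]
q = q2_loo(xs, ys)
print(round(q, 3))
```

Because every prediction is made on a held-out sample, Q² is always at most the fitted R² and is the more honest measure of predictive ability, which is why the study reports Q² ranges rather than fit statistics.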

  13. A Computationally-Efficient Numerical Model to Characterize the Noise Behavior of Metal-Framed Walls

    OpenAIRE

    Arjunan, Arun; Wang, Chang; English, Martin; Stanford, Mark; Lister, Paul

    2015-01-01

    Architects, designers, and engineers are making great efforts to design acoustically-efficient metal-framed walls, minimizing acoustic bridging. Therefore, efficient simulation models to predict the acoustic insulation complying with ISO 10140 are needed at a design stage. In order to achieve this, a numerical model consisting of two fluid-filled reverberation chambers, partitioned using a metal-framed wall, is to be simulated at one-third-octaves. This produces a large simulation model consi...

  14. Modelling the predictive performance of credit scoring

    Directory of Open Access Journals (Sweden)

    Shi-Wei Shen

    2013-02-01

    Full Text Available Orientation: The article discussed the importance of rigour in credit risk assessment. Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A test of goodness of fit demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI), micro- and also macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality of the study was that three models were developed to predict corporate firms' defaults based on different microeconomic and macroeconomic factors such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness of fits and receiver operator characteristics during the examination of the robustness of the predictive power of these factors.

  15. Large eddy simulation subgrid model for soot prediction

    Science.gov (United States)

    El-Asrag, Hossam Abd El-Raouf Mostafa

    Soot prediction in realistic systems is one of the most challenging problems in theoretical and applied combustion. Soot formation as a chemical process is very complicated and not fully understood. The major difficulty stems from the chemical complexity of the soot formation process as well as its strong coupling with the other thermochemical and fluid processes that occur simultaneously. Soot is a major byproduct of incomplete combustion, having a strong impact on the environment as well as on combustion efficiency. Therefore, innovative methods are needed to predict soot in realistic configurations in an accurate and yet computationally efficient way. In the current study, a new soot formation subgrid model is developed and reported here. The new model is designed to be used within the context of the Large Eddy Simulation (LES) framework, combined with Linear Eddy Mixing (LEM) as a subgrid combustion model. The final model can be applied equally to premixed and non-premixed flames over any required geometry and flow conditions in the free, transition, and continuum regimes. The soot dynamics is predicted using a Method of Moments approach with Lagrangian Interpolative Closure (MOMIC) for the fractional moments. Since no prior knowledge of the particle distribution is required, the model is generally applicable. The current model accounts for the basic soot transport phenomena, such as transport by molecular diffusion and thermophoretic forces. The model is first validated against experimental results for non-sooting swirling non-premixed and partially premixed flames. Next, a set of canonical premixed sooting flames are simulated, where the effects of turbulence, binary diffusivity and C/O ratio on soot formation are studied. Finally, the model is validated against a non-premixed sooting jet flame. The effect of the flame structure on the different soot formation stages as well as the particle size distribution is described. Good results are predicted with

  16. Modelling language evolution: Examples and predictions.

    Science.gov (United States)

    Gong, Tao; Shuai, Lan; Zhang, Menghan

    2014-06-01

    We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolutions of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines.

  18. Acoustic Prediction Methodology and Test Validation for an Efficient Low-Noise Hybrid Wing Body Subsonic Transport

    Science.gov (United States)

    Kawai, Ronald T. (Compiler)

    2011-01-01

    This investigation was conducted to: (1) Develop a hybrid wing body subsonic transport configuration with noise prediction methods to meet the circa 2007 NASA Subsonic Fixed Wing (SFW) N+2 noise goal of -52 dB cum relative to FAR 36 Stage 3 (-42 dB cum re: Stage 4) while achieving a 25% reduction in fuel burn compared to current transports (re: B737/B767); (2) Develop improved noise prediction methods for ANOPP2 for use in predicting FAR 36 noise; (3) Design and fabricate a wind tunnel model for testing in the LaRC 14 x 22 ft low speed wind tunnel to validate noise predictions and determine low speed aero characteristics for an efficient low noise Hybrid Wing Body configuration. A medium wide body cargo freighter was selected to represent a logical need for an initial operational capability in the 2020 time frame. The Efficient Low Noise Hybrid Wing Body (ELNHWB) configuration N2A-EXTE was evolved to meet the circa 2007 NRA N+2 fuel burn and noise goals. The noise estimates were made using improvements in jet noise shielding and noise shielding prediction methods developed by UC Irvine and MIT. From this, the Quiet Ultra Integrated Efficient Test Research Aircraft #1 (QUIET-R1) 5.8% wind tunnel model was designed and fabricated.

  19. Global Solar Dynamo Models: Simulations and Predictions

    Indian Academy of Sciences (India)

    Mausumi Dikpati; Peter A. Gilman

    2008-03-01

    Flux-transport type solar dynamos have achieved considerable success in correctly simulating many solar cycle features, and are now being used for prediction of solar cycle timing and amplitude. We first define flux-transport dynamos and demonstrate how they work. The essential added ingredient in this class of models is meridional circulation, which governs the dynamo period and also plays a crucial role in determining the Sun’s memory of its past magnetic fields. We show that flux-transport dynamo models can explain many key features of solar cycles. Then we show that a predictive tool can be built from this class of dynamo that can be used to predict mean solar cycle features by assimilating magnetic field data from previous cycles.

  20. Model Predictive Control of Sewer Networks

    Science.gov (United States)

    Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik; Poulsen, Niels K.; Falk, Anne K. V.

    2017-01-01

    The developments in solutions for management of urban drainage are of vital importance, as the amount of sewer water from urban areas continues to increase due to the growth of the world’s population and changing climate conditions. How a sewer network is structured, monitored and controlled has thus become an essential factor for the efficient performance of waste water treatment plants. This paper examines methods for simplified modelling and control of a sewer network. A practical approach to the problem is used by analysing a simplified design model, which is based on the Barcelona benchmark model. Due to the inherent constraints, the applied approach is based on Model Predictive Control.

  1. DKIST Polarization Modeling and Performance Predictions

    Science.gov (United States)

    Harrington, David

    2016-05-01

    Calibrating the Mueller matrices of large aperture telescopes and associated coude instrumentation requires astronomical sources and several modeling assumptions to predict the behavior of the system polarization with field of view, altitude, azimuth and wavelength. The Daniel K Inouye Solar Telescope (DKIST) polarimetric instrumentation requires very high accuracy calibration of a complex coude path with an off-axis f/2 primary mirror, time dependent optical configurations and substantial field of view. Polarization predictions across a diversity of optical configurations, tracking scenarios, slit geometries and vendor coating formulations are critical to both construction and continued operations efforts. Recent daytime sky based polarization calibrations of the 4m AEOS telescope and HiVIS spectropolarimeter on Haleakala have provided system Mueller matrices over full telescope articulation for a 15-reflection coude system. AEOS and HiVIS are a DKIST analog with a many-fold coude optical feed and similar mirror coatings creating 100% polarization cross-talk with altitude, azimuth and wavelength. Polarization modeling predictions using Zemax have successfully matched the altitude-azimuth-wavelength dependence on HiVIS with the few percent amplitude limitations of several instrument artifacts. Polarization predictions for coude beam paths depend greatly on modeling the angle-of-incidence dependences in powered optics and the mirror coating formulations. A 6 month HiVIS daytime sky calibration plan has been analyzed for accuracy under a wide range of sky conditions and data analysis algorithms. Predictions of polarimetric performance for the DKIST first-light instrumentation suite have been created under a range of configurations. These new modeling tools and polarization predictions have substantial impact for the design, fabrication and calibration process in the presence of manufacturing issues, science use-case requirements and ultimate system calibration

  2. Modelling Chemical Reasoning to Predict Reactions

    OpenAIRE

    Segler, Marwin H. S.; Waller, Mark P.

    2016-01-01

    The ability to reason beyond established knowledge allows Organic Chemists to solve synthetic problems and to invent novel transformations. Here, we propose a model which mimics chemical reasoning and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outpe...

  3. Predictive Modeling of the CDRA 4BMS

    Science.gov (United States)

    Coker, Robert; Knox, James

    2016-01-01

    Fully predictive models of the Four Bed Molecular Sieve of the Carbon Dioxide Removal Assembly on the International Space Station are being developed. This virtual laboratory will be used to help reduce mass, power, and volume requirements for future missions. In this paper we describe current and planned modeling developments in the area of carbon dioxide removal to support future crewed Mars missions as well as the resolution of anomalies observed in the ISS CDRA.

  4. Raman Model Predicting Hardness of Covalent Crystals

    OpenAIRE

    Zhou, Xiang-Feng; Qian, Quang-Rui; Sun, Jian; Tian, Yongjun; Wang, Hui-Tian

    2009-01-01

    Based on the fact that both hardness and the vibrational Raman spectrum depend on the intrinsic properties of chemical bonds, we propose a new theoretical model for predicting the hardness of a covalent crystal. The quantitative relationship between hardness and vibrational Raman frequencies deduced from typical zincblende covalent crystals is shown to be applicable to complex multicomponent crystals as well. This model enables us to nondestructively and indirectly characterize the hardness o...

  5. Predictive Modelling of Mycotoxins in Cereals

    NARCIS (Netherlands)

    Fels, van der H.J.; Liu, C.

    2015-01-01

    This article presents the summaries of the presentations given during the 30th meeting of the Werkgroep Fusarium. The topics are: Predictive Modelling of Mycotoxins in Cereals; Microbial degradation of DON; Exposure to green leaf volatiles primes wheat against FHB but boosts

  6. Unreachable Setpoints in Model Predictive Control

    DEFF Research Database (Denmark)

    Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp

    2008-01-01

    steady state is established for terminal constraint model predictive control (MPC). The region of attraction is the steerable set. Existing analysis methods for closed-loop properties of MPC are not applicable to this new formulation, and a new analysis method is developed. It is shown how to extend...

  7. Predictive Modelling of Mycotoxins in Cereals

    NARCIS (Netherlands)

    Fels, van der H.J.; Liu, C.

    2015-01-01

    This article presents the summaries of the presentations given during the 30th meeting of the Werkgroep Fusarium. The topics are: Predictive Modelling of Mycotoxins in Cereals; Microbial degradation of DON; Exposure to green leaf volatiles primes wheat against FHB but boosts produ

  8. Prediction modelling for population conviction data

    NARCIS (Netherlands)

    Tollenaar, N.

    2017-01-01

    In this thesis, the possibilities of using prediction models for judicial penal case data are investigated. The development and refinement of a risk taxation scale based on these data is discussed. When false positives are weighted as severely as false negatives, 70% can be classified correctly.

  9. A Predictive Model for MSSW Student Success

    Science.gov (United States)

    Napier, Angela Michele

    2011-01-01

    This study tested a hypothetical model for predicting both graduate GPA and graduation of University of Louisville Kent School of Social Work Master of Science in Social Work (MSSW) students entering the program during the 2001-2005 school years. The preexisting characteristics of demographics, academic preparedness and culture shock along with…

  10. Predictability of extreme values in geophysical models

    NARCIS (Netherlands)

    Sterk, A.E.; Holland, M.P.; Rabassa, P.; Broer, H.W.; Vitolo, R.

    2012-01-01

    Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical model

  11. A revised prediction model for natural conception

    NARCIS (Netherlands)

    Bensdorp, A.J.; Steeg, J.W. van der; Steures, P.; Habbema, J.D.; Hompes, P.G.; Bossuyt, P.M.; Veen, F. van der; Mol, B.W.; Eijkemans, M.J.; Kremer, J.A.M.; et al.,

    2017-01-01

    One of the aims in reproductive medicine is to differentiate between couples that have favourable chances of conceiving naturally and those that do not. Since the development of the prediction model of Hunault, characteristics of the subfertile population have changed. The objective of this analysis

  12. Distributed Model Predictive Control via Dual Decomposition

    DEFF Research Database (Denmark)

    Biegel, Benjamin; Stoustrup, Jakob; Andersen, Palle

    2014-01-01

    This chapter presents dual decomposition as a means to coordinate a number of subsystems coupled by state and input constraints. Each subsystem is equipped with a local model predictive controller while a centralized entity manages the subsystems via prices associated with the coupling constraints...

  13. Predictive Modelling of Mycotoxins in Cereals

    NARCIS (Netherlands)

    Fels, van der H.J.; Liu, C.

    2015-01-01

    This article presents the summaries of the presentations given during the 30th meeting of the Werkgroep Fusarium. The topics are: Predictive Modelling of Mycotoxins in Cereals; Microbial degradation of DON; Exposure to green leaf volatiles primes wheat against FHB but boosts produ

  14. Leptogenesis in minimal predictive seesaw models

    CERN Document Server

    Björkeroth, Fredrik; Varzielas, Ivo de Medeiros; King, Stephen F

    2015-01-01

    We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses with Yukawa couplings to …

  15. Prediction of aromatase inhibitory activity using the efficient linear method (ELM).

    Science.gov (United States)

    Shoombuatong, Watshara; Prachayasittikul, Veda; Prachayasittikul, Virapong; Nantasenamat, Chanin

    2015-01-01

    Aromatase inhibition is an effective treatment strategy for breast cancer. Currently, several in silico methods have been developed for the prediction of aromatase inhibitors (AIs) using artificial neural networks (ANN) or support vector machines (SVM). In spite of this, there are ample opportunities for further improvements by developing a simple and interpretable quantitative structure-activity relationship (QSAR) method. Herein, an efficient linear method (ELM) is proposed for constructing a highly predictive QSAR model containing a spontaneous feature importance estimator. Briefly, ELM is a linear-based model with optimal parameters derived from a genetic algorithm. Results showed that the simple ELM method displayed robust performance with 10-fold cross-validated MCC values of 0.64 and 0.56 for steroidal and non-steroidal AIs, respectively. Comparative analyses with other machine learning methods (i.e. ANN, SVM and decision tree) were also performed. A thorough analysis of informative molecular descriptors for both steroidal and non-steroidal AIs provided insights into the mechanism of action of compounds. Our findings suggest that the shape and polarizability of compounds may govern the inhibitory activity of both steroidal and non-steroidal types whereas the terminal primary C(sp3) functional group and electronegativity may be required for non-steroidal AIs. The R code of the ELM method is available at http://dx.doi.org/10.6084/m9.figshare.1274030.
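
    As context for the figures quoted above: MCC is the Matthews correlation coefficient, computed from the binary confusion matrix. A minimal, generic sketch (plain Python, not the authors' R code, and with no QSAR specifics assumed):

```python
import math

def mcc(y_true, y_pred):
    """Matthews correlation coefficient for binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Undefined (zero denominator) cases are conventionally reported as 0.
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom
```

    MCC ranges from -1 (total disagreement) through 0 (chance level) to +1 (perfect prediction), which puts the reported cross-validated values of 0.56-0.64 in perspective.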

  16. Evaluation of the energy efficiency of enzyme fermentation by mechanistic modeling

    DEFF Research Database (Denmark)

    Albaek, Mads O.; Gernaey, Krist V.; Hansen, Morten S.

    2012-01-01

    Modeling biotechnological processes is key to obtaining increased productivity and efficiency. Particularly crucial to successful modeling of such systems is the coupling of the physical transport phenomena and the biological activity in one model. We have applied a model for the expression … prediction. At different rates of agitation and aeration as well as headspace pressure, we can predict the energy efficiency of oxygen transfer, a key process parameter for economical production of industrial enzymes. An inverse relationship between the productivity and energy efficiency of the process … was found. This modeling approach can be used by manufacturers to evaluate the enzyme fermentation process for a range of different process conditions with regard to energy efficiency…

  17. Specialized Language Models using Dialogue Predictions

    CERN Document Server

    Popovici, C; Popovici, Cosmin; Baggia, Paolo

    1996-01-01

    This paper analyses language modeling in spoken dialogue systems for accessing a database. The use of several language models obtained by exploiting dialogue predictions gives better results than the use of a single model for the whole dialogue interaction. For this reason several models have been created, each one for a specific system question, such as the request or the confirmation of a parameter. The use of dialogue-dependent language models increases the performance both at the recognition and at the understanding level, especially on answers to system requests. Moreover, other methods to increase performance, like automatic clustering of vocabulary words or the use of better acoustic models during recognition, do not affect the improvements given by dialogue-dependent language models. The system used in our experiments is Dialogos, the Italian spoken dialogue system used for accessing railway timetable information over the telephone. The experiments were carried out on a large corpus of dialogues coll...

  18. Inverter Modeling For Accurate Energy Predictions Of Tracking HCPV Installations

    Science.gov (United States)

    Bowman, J.; Jensen, S.; McDonald, Mark

    2010-10-01

    High efficiency high concentration photovoltaic (HCPV) solar plants of megawatt scale are now operational, and opportunities for expanded adoption are plentiful. However, effective bidding for sites requires reliable prediction of energy production. HCPV module nameplate power is rated for specific test conditions; however, instantaneous HCPV power varies due to site specific irradiance and operating temperature, and is degraded by soiling, protective stowing, shading, and electrical connectivity. These factors interact with the selection of equipment typically supplied by third parties, e.g., wire gauge and inverters. We describe a time sequence model accurately accounting for these effects that predicts annual energy production, with specific reference to the impact of the inverter on energy output and interactions between system-level design decisions and the inverter. We will also show two examples, based on an actual field design, of inverter efficiency calculations and the interaction between string arrangements and inverter selection.

  19. Disease prediction models and operational readiness.

    Directory of Open Access Journals (Sweden)

    Courtney D Corley

    Full Text Available The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. We define a disease event to be a biological event with focus on the One Health paradigm. These events are characterized by evidence of infection and/or disease condition. We reviewed models that attempted to predict a disease event, not merely its transmission dynamics, and we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). We searched commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. After removal of duplications and extraneous material, a core collection of 6,524 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. As a result, we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models, classified as one or more of the following: event prediction (4), spatial (26), ecological niche (28), diagnostic or clinical (6), spread or response (9), and reviews (3). The model parameters (e.g., etiology, climatic, spatial, cultural) and data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) were recorded and reviewed. A component of this review is the identification of verification and validation (V&V) methods applied to each model, if any V&V method was reported. All models were classified as either having undergone Some Verification or Validation method, or No Verification or Validation. We close by outlining an initial set of operational readiness level guidelines for disease prediction models based upon established Technology

  20. Model Predictive Control based on Finite Impulse Response Models

    DEFF Research Database (Denmark)

    Prasath, Guru; Jørgensen, John Bagterp

    2008-01-01

    We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations...

  1. ENSO Prediction using Vector Autoregressive Models

    Science.gov (United States)

    Chapman, D. R.; Cane, M. A.; Henderson, N.; Lee, D.; Chen, C.

    2013-12-01

    A recent comparison (Barnston et al, 2012 BAMS) shows the ENSO forecasting skill of dynamical models now exceeds that of statistical models, but the best statistical models are comparable to all but the very best dynamical models. In this comparison the leading statistical model is the one based on the Empirical Model Reduction (EMR) method. Here we report on experiments with multilevel Vector Autoregressive models using only sea surface temperatures (SSTs) as predictors. VAR(L) models generalize Linear Inverse Models (LIM), which are a VAR(1) method, as well as multilevel univariate autoregressive models. Optimal forecast skill is achieved using 12 to 14 months of prior state information (i.e. 12-14 levels), which allows SSTs alone to capture the effects of other variables such as heat content as well as seasonality. The use of multiple levels allows the model advancing one month at a time to perform at least as well for a 6 month forecast as a model constructed to explicitly forecast 6 months ahead. We infer that the multilevel model has fully captured the linear dynamics (cf. Penland and Magorian, 1993 J. Climate). Finally, while VAR(L) is equivalent to L-level EMR, we show in a 150 year cross validated assessment that we can increase forecast skill by improving on the EMR initialization procedure. The greatest benefit of this change is in allowing the prediction to make effective use of information over many more months.
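
    The core machinery described here, fitting an order-L vector autoregression by least squares and advancing it one step at a time, can be sketched as follows (a generic illustration, not the authors' code; the variable names are hypothetical):

```python
import numpy as np

def fit_var(X, L):
    """Least-squares fit of a VAR(L) model x_t = sum_k A_k x_{t-k-1} + e_t.

    X: (T, n) array holding an n-variable time series.
    Returns a list of L coefficient matrices, each (n, n).
    """
    T, n = X.shape
    # Regressor at time t stacks the L previous states [x_{t-1}, ..., x_{t-L}].
    Z = np.hstack([X[L - k - 1:T - k - 1] for k in range(L)])  # (T-L, n*L)
    Y = X[L:]                                                  # (T-L, n)
    C, *_ = np.linalg.lstsq(Z, Y, rcond=None)                  # (n*L, n)
    return [C[k * n:(k + 1) * n].T for k in range(L)]

def forecast(X, A, steps):
    """Advance the fitted model one step at a time, as in the abstract."""
    L = len(A)
    hist = [x for x in X[-L:]]
    for _ in range(steps):
        hist.append(sum(A[k] @ hist[-1 - k] for k in range(L)))
    return np.array(hist[L:])
```

    With L=1 this reduces to a Linear Inverse Model; larger L lets past SSTs stand in for unobserved state such as heat content and seasonality.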

  2. Gas explosion prediction using CFD models

    Energy Technology Data Exchange (ETDEWEB)

    Niemann-Delius, C.; Okafor, E. [RWTH Aachen Univ. (Germany); Buhrow, C. [TU Bergakademie Freiberg Univ. (Germany)

    2006-07-15

    A number of CFD models are currently available to model gaseous explosions in complex geometries. Some of these tools allow the representation of complex environments within hydrocarbon production plants. In certain explosion scenarios, a correction is usually made for the presence of buildings and other complexities by using crude approximations to obtain realistic estimates of explosion behaviour as can be found when predicting the strength of blast waves resulting from initial explosions. With the advance of computational technology, and greater availability of computing power, computational fluid dynamics (CFD) tools are becoming increasingly available for solving such a wide range of explosion problems. A CFD-based explosion code - FLACS can, for instance, be confidently used to understand the impact of blast overpressures in a plant environment consisting of obstacles such as buildings, structures, and pipes. With its porosity concept representing geometry details smaller than the grid, FLACS can represent geometry well, even when using coarse grid resolutions. The performance of FLACS has been evaluated using a wide range of field data. In the present paper, the concept of computational fluid dynamics (CFD) and its application to gas explosion prediction is presented. Furthermore, the predictive capabilities of CFD-based gaseous explosion simulators are demonstrated using FLACS. Details about the FLACS-code, some extensions made to FLACS, model validation exercises, application, and some results from blast load prediction within an industrial facility are presented. (orig.)

  3. Genetic models of homosexuality: generating testable predictions.

    Science.gov (United States)

    Gavrilets, Sergey; Rice, William R

    2006-12-22

    Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality including: (i) chromosomal location, (ii) dominance among segregating alleles and (iii) effect sizes that distinguish between the two major models for their polymorphism: the overdominance and sexual antagonism models. We conclude that the measurement of the genetic characteristics of quantitative trait loci (QTLs) found in genomic screens for genes influencing homosexuality can be highly informative in resolving the form of natural selection maintaining their polymorphism.
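
    One contrast drawn above, overdominance versus sexual antagonism, rests on the textbook fact that heterozygote advantage alone can maintain a stable interior allele frequency. A minimal sketch of that prediction under the standard one-locus viability-selection recursion (illustrative only; this is not the authors' model code):

```python
def next_freq(p, w11, w12, w22):
    """One generation of viability selection at a diallelic locus under
    random mating. p is the frequency of allele A; w11, w12, w22 are the
    fitnesses of genotypes AA, Aa, aa."""
    q = 1.0 - p
    w_bar = p * p * w11 + 2 * p * q * w12 + q * q * w22  # mean fitness
    return (p * p * w11 + p * q * w12) / w_bar

def equilibrium(w11, w12, w22, p0=0.01, gens=2000):
    """Iterate the recursion to (numerical) equilibrium."""
    p = p0
    for _ in range(gens):
        p = next_freq(p, w11, w12, w22)
    return p
```

    Under overdominance (w12 exceeding both homozygote fitnesses) the iteration converges to the classical interior equilibrium p* = (w12 - w22) / (2 w12 - w11 - w22), whereas directional selection drives the allele to fixation or loss, exactly the kind of contrasting, testable pattern the models generate.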

  4. Characterizing Attention with Predictive Network Models.

    Science.gov (United States)

    Rosenberg, M D; Finn, E S; Scheinost, D; Constable, R T; Chun, M M

    2017-04-01

    Recent work shows that models based on functional connectivity in large-scale brain networks can predict individuals' attentional abilities. While being some of the first generalizable neuromarkers of cognitive function, these models also inform our basic understanding of attention, providing empirical evidence that: (i) attention is a network property of brain computation; (ii) the functional architecture that underlies attention can be measured while people are not engaged in any explicit task; and (iii) this architecture supports a general attentional ability that is common to several laboratory-based tasks and is impaired in attention deficit hyperactivity disorder (ADHD). Looking ahead, connectivity-based predictive models of attention and other cognitive abilities and behaviors may potentially improve the assessment, diagnosis, and treatment of clinical dysfunction. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. A Study On Distributed Model Predictive Consensus

    CERN Document Server

    Keviczky, Tamas

    2008-01-01

    We investigate convergence properties of a proposed distributed model predictive control (DMPC) scheme, where agents negotiate to compute an optimal consensus point using an incremental subgradient method based on primal decomposition as described in Johansson et al. [2006, 2007]. The objective of the distributed control strategy is to agree upon and achieve an optimal common output value for a group of agents in the presence of constraints on the agent dynamics using local predictive controllers. Stability analysis using a receding horizon implementation of the distributed optimal consensus scheme is performed. Conditions are given under which convergence can be obtained even if the negotiations do not reach full consensus.

  6. Optimal model-free prediction from multivariate time series.

    Science.gov (United States)

    Runge, Jakob; Donner, Reik V; Kurths, Jürgen

    2015-05-01

    Forecasting a time series from multivariate predictors constitutes a challenging problem, especially using model-free approaches. Most techniques, such as nearest-neighbor prediction, quickly suffer from the curse of dimensionality and overfitting for more than a few predictors which has limited their application mostly to the univariate case. Therefore, selection strategies are needed that harness the available information as efficiently as possible. Since often the right combination of predictors matters, ideally all subsets of possible predictors should be tested for their predictive power, but the exponentially growing number of combinations makes such an approach computationally prohibitive. Here a prediction scheme that overcomes this strong limitation is introduced utilizing a causal preselection step which drastically reduces the number of possible predictors to the most predictive set of causal drivers making a globally optimal search scheme tractable. The information-theoretic optimality is derived and practical selection criteria are discussed. As demonstrated for multivariate nonlinear stochastic delay processes, the optimal scheme can even be less computationally expensive than commonly used suboptimal schemes like forward selection. The method suggests a general framework to apply the optimal model-free approach to select variables and subsequently fit a model to further improve a prediction or learn statistical dependencies. The performance of this framework is illustrated on a climatological index of El Niño Southern Oscillation.
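
    The two-stage idea, preselect a small set of informative drivers and then fit only on that set, can be illustrated with a deliberately crude stand-in: correlation ranking in place of the paper's information-theoretic causal preselection (the function names and thresholds below are hypothetical):

```python
import numpy as np

def preselect(X, y, k):
    """Rank candidate predictors (columns of X) by absolute correlation
    with the target y and keep the top k. A toy surrogate for the causal
    preselection step; the paper uses information-theoretic criteria."""
    scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    return sorted(np.argsort(scores)[-k:])

def fit_predict(X, y, X_new, k):
    """Fit ordinary least squares on the preselected predictors only,
    then predict for the new rows X_new."""
    cols = preselect(X, y, k)
    A = np.column_stack([X[:, cols], np.ones(len(y))])  # add an intercept
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.column_stack([X_new[:, cols], np.ones(len(X_new))]) @ beta
```

    Shrinking the candidate set first is what keeps the subsequent fit (here OLS; in general any model) tractable as the number of potential predictors grows, which is the abstract's central point.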

  7. Model Predictive Control for the Operation of Building Cooling Systems

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Yudong; Borrelli, Francesco; Hencey, Brandon; Coffey, Brian; Bengea, Sorin; Haves, Philip

    2010-06-29

    A model-based predictive control (MPC) scheme is designed for optimal thermal energy storage in building cooling systems. We focus on buildings equipped with a water tank used for actively storing cold water produced by a series of chillers. Typically the chillers are operated at night to recharge the storage tank in order to meet the building demands on the following day. In this paper, we build on our previous work, improve the building load model, and present experimental results. The experiments show that MPC can achieve reduction in the central plant electricity cost and improvement of its efficiency.

  8. NONLINEAR MODEL PREDICTIVE CONTROL OF CHEMICAL PROCESSES

    Directory of Open Access Journals (Sweden)

    R. G. SILVA

    1999-03-01

    Full Text Available A new algorithm for model predictive control is presented. The algorithm utilizes a simultaneous solution and optimization strategy to solve the model's differential equations. The equations are discretized by equidistant collocation, and along with the algebraic model equations are included as constraints in a nonlinear programming (NLP problem. This algorithm is compared with the algorithm that uses orthogonal collocation on finite elements. The equidistant collocation algorithm results in simpler equations, providing a decrease in computation time for the control moves. Simulation results are presented and show a satisfactory performance of this algorithm.
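
    To make the discretization strategy concrete: collocation replaces a differential equation by algebraic equations forcing a polynomial approximation to satisfy the ODE at chosen points. A toy example for a linear ODE on equidistant points (a sketch of the idea only, far simpler than the NLP formulation in the paper):

```python
import numpy as np

def collocate_linear_ode(a=-1.0, x0=1.0, deg=8):
    """Solve x'(t) = a*x(t), x(0) = x0 on [0, 1] by collocation: represent
    x(t) as a degree-`deg` polynomial and enforce the ODE at `deg`
    equidistant points plus the initial condition, yielding a square
    linear system for the coefficients c_0..c_deg."""
    t = np.linspace(0.0, 1.0, deg)                      # collocation points
    powers = np.arange(deg + 1)
    vmat = t[:, None] ** powers                         # x(t_j) terms
    dmat = powers[1:] * t[:, None] ** (powers[1:] - 1)  # x'(t_j) terms
    A = np.zeros((deg + 1, deg + 1))
    A[:deg, 1:] = dmat
    A[:deg, :] -= a * vmat        # rows enforce x'(t_j) - a*x(t_j) = 0
    A[deg, 0] = 1.0               # last row enforces x(0) = c_0 = x0
    b = np.zeros(deg + 1)
    b[deg] = x0
    c = np.linalg.solve(A, b)
    return lambda s: np.polyval(c[::-1], s)
```

    In an MPC setting the same trick turns the model's differential equations into equality constraints of the NLP, solved simultaneously with the control optimization, which is the simultaneous strategy the abstract describes.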

  9. Modeling adaptation of carbon use efficiency in microbial communities

    Directory of Open Access Journals (Sweden)

    Steven D Allison

    2014-10-01

    Full Text Available In new microbial-biogeochemical models, microbial carbon use efficiency (CUE) is often assumed to decline with increasing temperature. Under this assumption, soil carbon losses under warming are small because microbial biomass declines. Yet there is also empirical evidence that CUE may adapt (i.e. become less sensitive) to warming, thereby mitigating negative effects on microbial biomass. To analyze potential mechanisms of CUE adaptation, I used two theoretical models to implement a tradeoff between microbial uptake rate and CUE. This rate-yield tradeoff is based on thermodynamic principles and suggests that microbes with greater investment in resource acquisition should have lower CUE. Microbial communities or individuals could adapt to warming by reducing investment in enzymes and uptake machinery. Consistent with this idea, a simple analytical model predicted that adaptation can offset 50% of the warming-induced decline in CUE. To assess the ecosystem implications of the rate-yield tradeoff, I quantified CUE adaptation in a spatially-structured simulation model with 100 microbial taxa and 12 soil carbon substrates. This model predicted much lower CUE adaptation, likely due to additional physiological and ecological constraints on microbes. In particular, specific resource acquisition traits are needed to maintain stoichiometric balance, and taxa with high CUE and low enzyme investment rely on low-yield, high-enzyme neighbors to catalyze substrate degradation. In contrast to published microbial models, simulations with greater CUE adaptation also showed greater carbon storage under warming. This pattern occurred because microbial communities with stronger CUE adaptation produced fewer degradative enzymes, despite increases in biomass. Thus the rate-yield tradeoff prevents CUE adaptation from driving ecosystem carbon loss under climate warming.

  10. Performance model to predict overall defect density

    Directory of Open Access Journals (Sweden)

    J Venkatesh

    2012-08-01

Full Text Available Management by metrics is the expectation from the IT service providers to stay as a differentiator. Given a project, the associated parameters and dynamics, the behaviour and outcome need to be predicted. There is a lot of focus on the end state and on minimizing defect leakage as much as possible. In most of the cases, the actions taken are reactive. It is too late in the life cycle. Root cause analysis and corrective actions can be implemented only to the benefit of the next project. The focus has to shift left, towards the execution phase, rather than waiting for lessons to be learnt post implementation. How do we proactively predict defect metrics and have a preventive action plan in place? This paper illustrates a process performance model to predict overall defect density based on data from projects in an organization.

  11. Neuro-fuzzy modeling in bankruptcy prediction

    Directory of Open Access Journals (Sweden)

    Vlachos D.

    2003-01-01

Full Text Available For the past 30 years the problem of bankruptcy prediction has been thoroughly studied. From the paper of Altman in 1968 to the recent papers in the '90s, the progress in prediction accuracy was not satisfactory. This paper investigates an alternative modeling of the system (the firm), combining neural networks and fuzzy controllers, i.e. using neuro-fuzzy models. Classical modeling is based on mathematical models that describe the behavior of the firm under consideration. The main idea of fuzzy control, on the other hand, is to build a model of a human control expert who is capable of controlling the process without thinking in terms of a mathematical model. This control expert specifies his control action in the form of linguistic rules. These control rules are translated into the framework of fuzzy set theory, providing a calculus which can simulate the behavior of the control expert and enhance its performance. The accuracy of the model is studied using datasets from previous research papers.

  12. Pressure prediction model for compression garment design.

    Science.gov (United States)

    Leung, W Y; Yuen, D W; Ng, Sun Pui; Shi, S Q

    2010-01-01

Based on the application of Laplace's law to compression garments, an equation for predicting garment pressure, incorporating the body circumference, the cross-sectional area of fabric, applied strain (as a function of reduction factor), and its corresponding Young's modulus, is developed. Design procedures are presented to predict garment pressure using the aforementioned parameters for clinical applications. Compression garments have been widely used in treating burn scars. Fabricating a compression garment with a required pressure is important in the healing process. A systematic and scientific design method can enable the occupational therapist and compression garment manufacturer to custom-make a compression garment with a specific pressure. The objectives of this study are 1) to develop a pressure prediction model incorporating different design factors to estimate the pressure exerted by the compression garments before fabrication; and 2) to propose design procedures for clinical applications. Three kinds of fabrics cut at different bias angles were tested under uniaxial tension, as were samples made in a double-layered structure. Sets of nonlinear force-extension data were obtained for calculating the predicted pressure. Using the value at 0° bias angle as reference, the Young's modulus can vary by as much as 29% for fabric type P11117, 43% for fabric type PN2170, and even 360% for fabric type AP85120 at a reduction factor of 20%. When comparing the predicted pressure calculated from the single-layered and double-layered fabrics, the double-layered construction provides a larger range of target pressure at a particular strain. The anisotropic and nonlinear behaviors of the fabrics have thus been determined. Compression garments can be methodically designed by the proposed analytical pressure prediction model.
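The pressure relationship this record describes can be sketched directly from Laplace's law; the function below is a minimal illustration with assumed parameter names and units, since the abstract does not reproduce the paper's exact equation:

```python
import math

def predicted_pressure_pa(youngs_modulus_pa, strain, cross_section_m2,
                          fabric_width_m, circumference_m):
    """Garment pressure estimate from Laplace's law on a cylindrical limb.

    Fabric tension per unit width:  T = E * strain * A / w   [N/m]
    Laplace's law with radius r = C / (2 pi):  P = T / r = 2 pi T / C
    """
    tension = youngs_modulus_pa * strain * cross_section_m2 / fabric_width_m
    return 2.0 * math.pi * tension / circumference_m
```

As the model is linear in the applied strain for a fixed Young's modulus, doubling the reduction-factor-induced strain doubles the predicted pressure; the fabric anisotropy reported in the study enters through the bias-angle dependence of E.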

  13. Statistical assessment of predictive modeling uncertainty

    Science.gov (United States)

    Barzaghi, Riccardo; Marotta, Anna Maria

    2017-04-01

When the results of geophysical models are compared with data, the uncertainties of the model are typically disregarded. We propose a method for defining the uncertainty of a geophysical model based on a numerical procedure that estimates the empirical auto and cross-covariances of model-estimated quantities. These empirical values are then fitted by proper covariance functions and used to compute the covariance matrix associated with the model predictions. The method is tested using a geophysical finite element model in the Mediterranean region. Using a novel χ2 analysis in which both data and model uncertainties are taken into account, the model's estimated tectonic strain pattern due to the Africa-Eurasia convergence in the area that extends from the Calabrian Arc to the Alpine domain is compared with that estimated from GPS velocities while taking into account the model uncertainty through its covariance structure and the covariance of the GPS estimates. The results indicate that including the estimated model covariance in the testing procedure leads to lower observed χ2 values with better statistical significance, and can help sharpen the identification of the best-fitting geophysical models.
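The testing procedure described above amounts to evaluating the misfit through the quadratic form χ2 = rᵀ(C_data + C_model)⁻¹ r. A sketch of that comparison (the function name is illustrative, and the additive combination of covariances is the standard assumption for independent error sources, not necessarily the authors' exact procedure):

```python
import numpy as np

def chi2_with_model_cov(data, model, cov_data, cov_model):
    """Chi-square of the data-model misfit with BOTH covariances included.

    chi2 = r^T (C_data + C_model)^{-1} r, with residual r = data - model.
    Setting cov_model to zero reproduces the usual practice the abstract
    criticizes, and yields a larger chi2 whenever the model is uncertain.
    """
    r = np.asarray(data, dtype=float) - np.asarray(model, dtype=float)
    c = np.asarray(cov_data, dtype=float) + np.asarray(cov_model, dtype=float)
    return float(r @ np.linalg.solve(c, r))
```

Because the combined covariance is larger, the observed χ2 drops relative to the data-only test, which is exactly the effect the record reports.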

  14. Predictive information speeds up visual awareness in an individuation task by modulating threshold setting, not processing efficiency.

    Science.gov (United States)

    De Loof, Esther; Van Opstal, Filip; Verguts, Tom

    2016-04-01

    Theories on visual awareness claim that predicted stimuli reach awareness faster than unpredicted ones. In the current study, we disentangle whether prior information about the upcoming stimulus affects visual awareness of stimulus location (i.e., individuation) by modulating processing efficiency or threshold setting. Analogous research on stimulus identification revealed that prior information modulates threshold setting. However, as identification and individuation are two functionally and neurally distinct processes, the mechanisms underlying identification cannot simply be extrapolated directly to individuation. The goal of this study was therefore to investigate how individuation is influenced by prior information about the upcoming stimulus. To do so, a drift diffusion model was fitted to estimate the processing efficiency and threshold setting for predicted versus unpredicted stimuli in a cued individuation paradigm. Participants were asked to locate a picture, following a cue that was congruent, incongruent or neutral with respect to the picture's identity. Pictures were individuated faster in the congruent and neutral condition compared to the incongruent condition. In the diffusion model analysis, the processing efficiency was not significantly different across conditions. However, the threshold setting was significantly higher following an incongruent cue compared to both congruent and neutral cues. Our results indicate that predictive information about the upcoming stimulus influences visual awareness by shifting the threshold for individuation rather than by enhancing processing efficiency.
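The drift-diffusion decomposition used in this study separates processing efficiency (drift rate) from threshold setting (decision bound). A toy simulation makes the distinction concrete; all parameter values and names below are illustrative, not the fitted values from the experiment:

```python
import random

def simulate_ddm_rt(drift, threshold, dt=0.001, noise=1.0, max_t=10.0, seed=None):
    """Single drift-diffusion trial: evidence starts at 0 and accumulates
    until it crosses +threshold or -threshold. `drift` plays the role of
    processing efficiency; `threshold` is the decision bound."""
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    sd = noise * dt ** 0.5  # diffusion step scales with sqrt(dt)
    while abs(x) < threshold and t < max_t:
        x += drift * dt + rng.gauss(0.0, sd)
        t += dt
    return t

def mean_rt(drift, threshold, n=200, seed=1):
    """Mean response time over n simulated trials."""
    rng = random.Random(seed)
    return sum(simulate_ddm_rt(drift, threshold, seed=rng.random())
               for _ in range(n)) / n
```

Raising the bound lengthens response times even when the drift rate is unchanged, which is the signature the authors report for incongruent cues.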

  15. Seasonal Predictability in a Model Atmosphere.

    Science.gov (United States)

    Lin, Hai

    2001-07-01

The predictability of atmospheric mean-seasonal conditions in the absence of externally varying forcing is examined. A perfect-model approach is adopted, in which a global T21 three-level quasigeostrophic atmospheric model is integrated over 21 000 days to obtain a reference atmospheric orbit. The model is driven by a time-independent forcing, so that the only source of time variability is the internal dynamics. The forcing is set to perpetual winter conditions in the Northern Hemisphere (NH) and perpetual summer in the Southern Hemisphere. A significant temporal variability in the NH 90-day mean states is observed. The component of that variability associated with the higher-frequency motions, or climate noise, is estimated using a method developed by Madden. In the polar region, and to a lesser extent in the midlatitudes, the temporal variance of the winter means is significantly greater than the climate noise, suggesting some potential predictability in those regions. Forecast experiments are performed to see whether the presence of variance in the 90-day mean states that is in excess of the climate noise leads to some skill in the prediction of these states. Ensemble forecast experiments with nine members starting from slightly different initial conditions are performed for 200 different 90-day means along the reference atmospheric orbit. The serial correlation between the ensemble means and the reference orbit shows that there is skill in the 90-day mean predictions. The skill is concentrated in those regions of the NH that have the largest variance in excess of the climate noise. An EOF analysis shows that nearly all the predictive skill in the seasonal means is associated with one mode of variability with a strong axisymmetric component.

  16. Insufficiencies of the Single Exponential Model and Efficiency of the Double Exponential Model in the Optimization of Solar Cells Efficiency

    OpenAIRE

    A. Zerga; B. Benyoucef; J.-P. Charles

    1998-01-01

Full Text Available Single and double exponential models are compared to determine the model best suited for the optimization of solar cell efficiency. It is shown that the single exponential model (SEM) presents some insufficiencies for efficiency optimization. The value of the double exponential model for optimizing the efficiency and achieving an adequate simulation of the operation of solar cells is demonstrated by means of I-V characteristic plotting.

  17. A kinetic model for predicting biodegradation.

    Science.gov (United States)

    Dimitrov, S; Pavlov, T; Nedelcheva, D; Reuschenbach, P; Silvani, M; Bias, R; Comber, M; Low, L; Lee, C; Parkerton, T; Mekenyan, O

    2007-01-01

    Biodegradation plays a key role in the environmental risk assessment of organic chemicals. The need to assess biodegradability of a chemical for regulatory purposes supports the development of a model for predicting the extent of biodegradation at different time frames, in particular the extent of ultimate biodegradation within a '10 day window' criterion as well as estimating biodegradation half-lives. Conceptually this implies expressing the rate of catabolic transformations as a function of time. An attempt to correlate the kinetics of biodegradation with molecular structure of chemicals is presented. A simplified biodegradation kinetic model was formulated by combining the probabilistic approach of the original formulation of the CATABOL model with the assumption of first order kinetics of catabolic transformations. Nonlinear regression analysis was used to fit the model parameters to OECD 301F biodegradation kinetic data for a set of 208 chemicals. The new model allows the prediction of biodegradation multi-pathways, primary and ultimate half-lives and simulation of related kinetic biodegradation parameters such as biological oxygen demand (BOD), carbon dioxide production, and the nature and amount of metabolites as a function of time. The model may also be used for evaluating the OECD ready biodegradability potential of a chemical within the '10-day window' criterion.
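Under the first-order assumption this record describes, the extent of degradation and the half-life follow directly from the rate constant. The '10-day window' check below is a simplified sketch of the OECD criterion; the pass level and window bookkeeping are illustrative, not the model's full multi-pathway treatment:

```python
import math

def extent_biodegraded(k_per_day, t_days):
    """First-order extent of ultimate biodegradation: 1 - exp(-k t)."""
    return 1.0 - math.exp(-k_per_day * t_days)

def half_life_days(k_per_day):
    """Biodegradation half-life implied by the first-order rate constant."""
    return math.log(2.0) / k_per_day

def passes_ten_day_window(k_per_day, day_of_10pct, pass_level=0.60):
    """Simplified OECD-style check: is the pass level reached within
    10 days of the day the chemical first reached 10% degradation?"""
    return extent_biodegraded(k_per_day, day_of_10pct + 10.0) >= pass_level
```

A fast degrader (large k) clears the window easily, while a slow one fails it even though it would eventually mineralize, which is why half-life estimation and the window criterion give complementary views.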

  18. Disease Prediction Models and Operational Readiness

    Energy Technology Data Exchange (ETDEWEB)

    Corley, Courtney D.; Pullum, Laura L.; Hartley, David M.; Benedum, Corey M.; Noonan, Christine F.; Rabinowitz, Peter M.; Lancaster, Mary J.

    2014-03-19

INTRODUCTION: The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. One of the primary goals of this research was to characterize the viability of biosurveillance models to provide operationally relevant information for decision makers and to identify areas for future research. Two critical characteristics differentiate this work from other infectious disease modeling reviews. First, we reviewed models that attempted to predict the disease event, not merely its transmission dynamics. Second, we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). METHODS: We searched dozens of commercial and government databases and harvested Google search results for eligible models, utilizing terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. The publication dates of the search results returned are bound by the dates of coverage of each database and the date on which the search was performed; however, all searching was completed by December 31, 2010. This returned 13,767 webpages and 12,152 citations. After de-duplication and removal of extraneous material, a core collection of 6,503 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. Next, PNNL's IN-SPIRE visual analytics software was used to cross-correlate these publications with the definition of a biosurveillance model, resulting in the selection of 54 documents that matched the criteria. Ten of these documents, however, dealt purely with disease spread models, inactivation of bacteria, or the modeling of human immune system responses to pathogens rather than predicting disease events. As a result, we systematically reviewed 44 papers and the

  19. Nonlinear model predictive control theory and algorithms

    CERN Document Server

    Grüne, Lars

    2017-01-01

    This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...
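The receding-horizon idea at the core of NMPC is: optimize the input sequence over a finite horizon, apply only the first input, then repeat from the new state. The sketch below illustrates this by brute-force enumeration over a small finite input set; real NMPC solvers (including those the book covers) use gradient-based nonlinear optimization, and the plant and cost here are purely illustrative:

```python
import itertools

def nmpc_step(x0, dynamics, stage_cost, inputs, horizon):
    """One receding-horizon step: enumerate every input sequence of the
    given horizon drawn from a finite candidate set, accumulate stage
    costs along the predicted trajectory, and return only the FIRST
    input of the best sequence (the defining receding-horizon idea)."""
    best_u0, best_cost = None, float("inf")
    for seq in itertools.product(inputs, repeat=horizon):
        x, cost = x0, 0.0
        for u in seq:
            cost += stage_cost(x, u)
            x = dynamics(x, u)
        if cost < best_cost:
            best_cost, best_u0 = cost, seq[0]
    return best_u0

# Toy nonlinear plant and quadratic stage cost (assumptions, not from the book).
dyn = lambda x, u: x + u - 0.1 * x ** 3
cost = lambda x, u: x ** 2 + 0.1 * u ** 2
```

Iterating the step in closed loop drives the toy state toward the origin, mirroring the closed-loop stability discussion in the text.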

  20. Modeling solar cells: A method for improving their efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Morales-Acevedo, Arturo, E-mail: amorales@solar.cinvestav.mx [Centro de Investigacion y de Estudios Avanzados del IPN, Electrical Engineering Department, Avenida IPN No. 2508, 07360 Mexico, D.F. (Mexico); Hernandez-Como, Norberto; Casados-Cruz, Gaspar [Centro de Investigacion y de Estudios Avanzados del IPN, Electrical Engineering Department, Avenida IPN No. 2508, 07360 Mexico, D.F. (Mexico)

    2012-09-20

After a brief discussion on the theoretical basis for simulating solar cells and the available programs for doing this, we proceed to discuss two examples that show the importance of doing numerical simulation of solar cells. We shall concentrate on silicon Heterojunction Intrinsic Thin film aSi/cSi (HIT) and CdS/CuInGaSe2 (CIGS) solar cells. In the first case, we will show that numerical simulation indicates that there is an optimum transparent conducting oxide (TCO) to be used in contact with the p-type aSi:H emitter layer, although many experimental researchers might think that the results can be similar regardless of the TCO film used. In this case, it is shown that high work function TCO materials such as ZnO:Al are much better than smaller work function films such as ITO. HIT solar cells made with small work function TCO layers (<4.8 eV) will never be able to reach the high efficiencies already reported experimentally. It will also be discussed that simulations of CIGS solar cells by different groups predict efficiencies around 18-19% or even less, i.e. below the record efficiency reported experimentally (20.3%). In addition, the experimental band-gap that is optimum in this case is around 1.2 eV, while several theoretical results predict a higher optimum band-gap (1.4-1.5 eV). This means that there are other effects not included in most of the simulation models developed until today. One of them is the possible presence of an interfacial (inversion) layer between CdS and CIGS. It is shown that this inversion layer might explain the smaller observed optimum band-gap, but some efficiency is lost. It is discussed that another possible explanation for the higher experimental efficiency is the possible variation of Ga concentration in the CIGS film causing a gradual variation of the band-gap.
This band-gap grading might help improve the open-circuit voltage and, if it is appropriately done, it can also cause the enhancement of the photo-current density.

  1. Predictive Modeling in Actinide Chemistry and Catalysis

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Ping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-16

    These are slides from a presentation on predictive modeling in actinide chemistry and catalysis. The following topics are covered in these slides: Structures, bonding, and reactivity (bonding can be quantified by optical probes and theory, and electronic structures and reaction mechanisms of actinide complexes); Magnetic resonance properties (transition metal catalysts with multi-nuclear centers, and NMR/EPR parameters); Moving to more complex systems (surface chemistry of nanomaterials, and interactions of ligands with nanoparticles); Path forward and conclusions.

  2. Model Predictive Control of Sewer Networks

    DEFF Research Database (Denmark)

    Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik;

    2016-01-01

The developments in solutions for management of urban drainage are of vital importance, as the amount of sewer water from urban areas continues to increase due to the increase of the world’s population and the change in the climate conditions. How a sewer network is structured, monitored and controlled have thus become essential factors for efficient performance of waste water treatment plants. This paper examines methods for simplified modelling and control of a sewer network. A practical approach to the problem is used by analysing a simplified design model, which is based on the Barcelona

  3. Efficient Worst-Case Execution Time Analysis of Dynamic Branch Prediction

    DEFF Research Database (Denmark)

    Puffitsch, Wolfgang

    2016-01-01

Dynamic branch prediction is commonly found in modern processors, but notoriously difficult to model for worst-case execution time analysis. This is particularly true for global dynamic branch predictors, where predictions are influenced by the global branch history. Prior research in this area has concluded that modeling of global branch prediction is too costly for practical use. This paper presents an approach to model global branch prediction while keeping the analysis effort reasonably low. The approach separates the branch history analysis from the integer linear programming formulation of the worst-case execution time problem. Consequently, the proposed approach scales to longer branch history lengths than previous approaches.

  4. Probabilistic prediction models for aggregate quarry siting

    Science.gov (United States)

    Robinson, G.R.; Larkins, P.M.

    2007-01-01

Weights-of-evidence (WofE) and logistic regression techniques were used in a GIS framework to predict the spatial likelihood (prospectivity) of crushed-stone aggregate quarry development. The joint conditional probability models, based on geology, transportation network, and population density variables, were defined using quarry location and time of development data for the New England States, North Carolina, and South Carolina, USA. The Quarry Operation models describe the distribution of active aggregate quarries, independent of the date of opening. The New Quarry models describe the distribution of aggregate quarries when they open. Because of the small number of new quarries developed in the study areas during the last decade, independent New Quarry models have low parameter estimate reliability. The performance of parameter estimates derived for Quarry Operation models, defined by a larger number of active quarries in the study areas, were tested and evaluated to predict the spatial likelihood of new quarry development. Population density conditions at the time of new quarry development were used to modify the population density variable in the Quarry Operation models to apply to new quarry development sites. The Quarry Operation parameters derived for the New England study area, Carolina study area, and the combined New England and Carolina study areas were all similar in magnitude and relative strength. The Quarry Operation model parameters, using the modified population density variables, were found to be a good predictor of new quarry locations. Both the aggregate industry and the land management community can use the model approach to target areas for more detailed site evaluation for quarry location. The models can be revised easily to reflect actual or anticipated changes in transportation and population features. © International Association for Mathematical Geology 2007.
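The weights-of-evidence calculation for a single binary evidential layer (e.g. "within 5 km of a highway") reduces to log-ratios of conditional probabilities; a sketch, with unit-cell bookkeeping simplified relative to a full WofE implementation:

```python
import math

def wofe_weights(sites_in, sites_out, cells_in, cells_out):
    """Weights of evidence for one binary pattern.

    sites_in/sites_out: quarry (training site) cells inside/outside the pattern
    cells_in/cells_out: total unit cells inside/outside the pattern
    W+ = ln[P(pattern | site) / P(pattern | no site)]; W- is the analogous
    ratio for the pattern's absence; the contrast C = W+ - W- summarizes
    the pattern's predictive strength.
    """
    n_sites = sites_in + sites_out
    n_cells = cells_in + cells_out
    p_in_site = sites_in / n_sites
    p_in_none = (cells_in - sites_in) / (n_cells - n_sites)
    w_plus = math.log(p_in_site / p_in_none)
    w_minus = math.log((1.0 - p_in_site) / (1.0 - p_in_none))
    return w_plus, w_minus, w_plus - w_minus
```

Summing the weights of several (conditionally independent) layers onto a prior log-odds map is what produces the prospectivity surface described in the record.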

  5. Models for efficient integration of solar energy

    DEFF Research Database (Denmark)

    Bacher, Peder

Finally, a procedure for identification of a suitable model for the heat dynamics of a building is presented. The applied models are grey-box models based on stochastic differential equations, and the identification is carried out with likelihood ratio tests. The models can be used for providing detailed

  6. Predicting Footbridge Response using Stochastic Load Models

    DEFF Research Database (Denmark)

    Pedersen, Lars; Frier, Christian

    2013-01-01

Walking parameters such as step frequency, pedestrian mass, dynamic load factor, etc. are basically stochastic, although it is quite common to adopt deterministic models for these parameters. The present paper considers a stochastic approach to modelling the action of pedestrians, but when doing so...... as it pinpoints which decisions to be concerned about when the goal is to predict footbridge response. The studies involve estimating footbridge responses using Monte Carlo simulations, and focus is on estimating vertical structural response to single-person loading.

  7. Nonconvex Model Predictive Control for Commercial Refrigeration

    DEFF Research Database (Denmark)

    Hovgaard, Tobias Gybel; Larsen, Lars F.S.; Jørgensen, John Bagterp

    2013-01-01

    is to minimize the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear, and the constraints are convex. The cost...... the iterations, which is more than fast enough to run in real-time. We demonstrate our method on a realistic model, with a full year simulation and 15 minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost...

  8. Predictive In Vivo Models for Oncology.

    Science.gov (United States)

    Behrens, Diana; Rolff, Jana; Hoffmann, Jens

    2016-01-01

Experimental oncology research and preclinical drug development both substantially require specific, clinically relevant in vitro and in vivo tumor models. The increasing knowledge about the heterogeneity of cancer has prompted a substantial restructuring of the test systems for the different stages of development. To be able to cope with the complexity of the disease, larger panels of patient-derived tumor models have to be implemented and extensively characterized. Together with individual genetically engineered tumor models and supported by core functions for expression profiling and data analysis, an integrated discovery process has been generated for predictive and personalized drug development. Improved “humanized” mouse models should help to overcome current limitations posed by the xenogeneic barrier between humans and mice. Establishment of a functional human immune system and a corresponding human microenvironment in laboratory animals will strongly support further research. Drug discovery, systems biology, and translational research are moving closer together to address all the new hallmarks of cancer, increase the success rate of drug development, and increase the predictive value of preclinical models.

  9. Constructing predictive models of human running.

    Science.gov (United States)

    Maus, Horst-Moritz; Revzen, Shai; Guckenheimer, John; Ludwig, Christian; Reger, Johann; Seyfarth, Andre

    2015-02-06

    Running is an essential mode of human locomotion, during which ballistic aerial phases alternate with phases when a single foot contacts the ground. The spring-loaded inverted pendulum (SLIP) provides a starting point for modelling running, and generates ground reaction forces that resemble those of the centre of mass (CoM) of a human runner. Here, we show that while SLIP reproduces within-step kinematics of the CoM in three dimensions, it fails to reproduce stability and predict future motions. We construct SLIP control models using data-driven Floquet analysis, and show how these models may be used to obtain predictive models of human running with six additional states comprising the position and velocity of the swing-leg ankle. Our methods are general, and may be applied to any rhythmic physical system. We provide an approach for identifying an event-driven linear controller that approximates an observed stabilization strategy, and for producing a reduced-state model which closely recovers the observed dynamics. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  10. Statistical Seasonal Sea Surface based Prediction Model

    Science.gov (United States)

    Suarez, Roberto; Rodriguez-Fonseca, Belen; Diouf, Ibrahima

    2014-05-01

The interannual variability of the sea surface temperature (SST) plays a key role in the strongly seasonal rainfall regime of the West African region. The predictability of the seasonal cycle of rainfall is a topic widely discussed by the scientific community, with results that fail to be satisfactory due to the difficulty dynamical models have in reproducing the behavior of the Inter-Tropical Convergence Zone (ITCZ). To tackle this problem, a statistical model based on oceanic predictors has been developed at the Universidad Complutense de Madrid (UCM) with the aim of complementing and enhancing the predictability of the West African Monsoon (WAM) as an alternative to the coupled models. The model, called S4CAST (SST-based Statistical Seasonal Forecast), is based on discriminant analysis techniques, specifically Maximum Covariance Analysis (MCA) and Canonical Correlation Analysis (CCA). Beyond the application of the model to the prediction of rainfall in West Africa, its use extends to a range of different oceanic, atmospheric and health-related parameters influenced by the temperature of the sea surface as a defining factor of variability.
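The Maximum Covariance Analysis at the heart of such SST-based statistical forecasting reduces to an SVD of the cross-covariance matrix between two anomaly fields. A minimal sketch (function name and the toy fields are illustrative, not the S4CAST implementation):

```python
import numpy as np

def mca_modes(X, Y, k=1):
    """Maximum Covariance Analysis (sketch): SVD of the cross-covariance
    matrix between two anomaly fields, X (time x grid points of the
    predictor, e.g. SST) and Y (time x grid points of the predictand,
    e.g. rainfall). Returns the leading k spatial patterns of each field
    and the associated singular values."""
    Xa = X - X.mean(axis=0)
    Ya = Y - Y.mean(axis=0)
    C = Xa.T @ Ya / (X.shape[0] - 1)       # cross-covariance matrix
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    return U[:, :k], Vt[:k].T, s[:k]
```

When one shared mode of variability dominates (the ITCZ-like signal in this toy setup), the first singular value stands far above the rest, and its paired spatial patterns are the coupled predictor/predictand structures used for the forecast.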

  11. Efficient Calibration of Computationally Intensive Groundwater Models through Surrogate Modelling with Lower Levels of Fidelity

    Science.gov (United States)

    Razavi, S.; Anderson, D.; Martin, P.; MacMillan, G.; Tolson, B.; Gabriel, C.; Zhang, B.

    2012-12-01

    Many sophisticated groundwater models tend to be computationally intensive as they rigorously represent detailed scientific knowledge about the groundwater systems. Calibration (model inversion), which is a vital step of groundwater model development, can require hundreds or thousands of model evaluations (runs) for different sets of parameters and as such demand prohibitively large computational time and resources. One common strategy to circumvent this computational burden is surrogate modelling which is concerned with developing and utilizing fast-to-run surrogates of the original computationally intensive models (also called fine models). Surrogates can be either based on statistical and data-driven models such as kriging and neural networks or simplified physically-based models with lower fidelity to the original system (also called coarse models). Fidelity in this context refers to the degree of the realism of a simulation model. This research initially investigates different strategies for developing lower-fidelity surrogates of a fine groundwater model and their combinations. These strategies include coarsening the fine model, relaxing the numerical convergence criteria, and simplifying the model geological conceptualisation. Trade-offs between model efficiency and fidelity (accuracy) are of special interest. A methodological framework is developed for coordinating the original fine model with its lower-fidelity surrogates with the objective of efficiently calibrating the parameters of the original model. This framework is capable of mapping the original model parameters to the corresponding surrogate model parameters and also mapping the surrogate model response for the given parameters to the original model response. This framework is general in that it can be used with different optimization and/or uncertainty analysis techniques available for groundwater model calibration and parameter/predictive uncertainty assessment. A real-world computationally

  12. Scaling predictive modeling in drug development with cloud computing.

    Science.gov (United States)

    Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola

    2015-01-26

Growing data sets and increasing analysis times are hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compared with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investments makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.

  13. Modeling and Prediction of Hot Deformation Flow Curves

    Science.gov (United States)

    Mirzadeh, Hamed; Cabrera, Jose Maria; Najafizadeh, Abbas

    2012-01-01

    The modeling of hot flow stress and prediction of flow curves for unseen deformation conditions are important in metal-forming processes because any feasible mathematical simulation needs an accurate flow description. In the current work, in an attempt to summarize, generalize, and introduce efficient methods, the dynamic recrystallization (DRX) flow curves of a 17-4 PH martensitic precipitation hardening stainless steel, a medium carbon microalloyed steel, and a 304 H austenitic stainless steel were modeled and predicted using (1) a hyperbolic sine equation with strain-dependent constants, (2) a developed constitutive equation in a simple normalized stress-normalized strain form and its modified version, and (3) a feed-forward artificial neural network (ANN). These methods were critically discussed, and the ANN technique was found to be the best for modeling the available flow curves; however, in predicting flow curves for unseen deformation conditions, the developed constitutive equation performed slightly better than the ANN and significantly better than the hyperbolic sine equation.

  14. Predictive modeling by the cerebellum improves proprioception.

    Science.gov (United States)

    Bhanpuri, Nasir H; Okamura, Allison M; Bastian, Amy J

    2013-09-04

    Because sensation is delayed, real-time movement control requires not just sensing, but also predicting limb position, a function hypothesized for the cerebellum. Such cerebellar predictions could contribute to perception of limb position (i.e., proprioception), particularly when a person actively moves the limb. Here we show that human cerebellar patients have proprioceptive deficits compared with controls during active movement, but not when the arm is moved passively. Furthermore, when healthy subjects move in a force field with unpredictable dynamics, they have active proprioceptive deficits similar to cerebellar patients. Therefore, muscle activity alone is likely insufficient to enhance proprioception and predictability (i.e., an internal model of the body and environment) is important for active movement to benefit proprioception. We conclude that cerebellar patients have an active proprioceptive deficit consistent with disrupted movement prediction rather than an inability to generally enhance peripheral proprioceptive signals during action and suggest that active proprioceptive deficits should be considered a fundamental cerebellar impairment of clinical importance.

  15. Effective and efficient model clone detection

    DEFF Research Database (Denmark)

    Störrle, Harald

    2015-01-01

    Code clones are a major source of software defects. Thus, it is likely that model clones (i.e., duplicate fragments of models) have a significant negative impact on model quality, and thus, on any software created based on those models, irrespective of whether the software is generated fully...... automatically (“MDD-style”) or hand-crafted following the blueprint defined by the model (“MBSD-style”). Unfortunately, however, model clones are much less well studied than code clones. In this paper, we present a clone detection algorithm for UML domain models. Our approach covers a much greater variety...... of model types than existing approaches while providing high clone detection rates at high speed....

  16. A prediction model for Clostridium difficile recurrence

    Directory of Open Access Journals (Sweden)

    Francis D. LaBarbera

    2015-02-01

    Full Text Available Background: Clostridium difficile infection (CDI) is a growing problem in the community and hospital setting. Its incidence has been on the rise over the past two decades, and it is quickly becoming a major concern for the health care system. A high rate of recurrence is one of the major hurdles in the successful treatment of C. difficile infection. There have been few studies that have looked at patterns of recurrence. The studies currently available have shown a number of risk factors associated with C. difficile recurrence (CDR); however, there is little consensus on the impact of most of the identified risk factors. Methods: Our study was a retrospective chart review of 198 patients diagnosed with CDI via polymerase chain reaction (PCR) from February 2009 to June 2013. In our study, we decided to use a machine learning algorithm called the random forest (RF) to analyze all of the factors proposed to be associated with CDR. This model is capable of making predictions based on a large number of variables, and has outperformed numerous other models and statistical methods. Results: We came up with a model that was able to predict CDR with a sensitivity of 83.3%, specificity of 63.1%, and area under the curve of 82.6%. Our results are comparable to those of other studies that have used the RF model. Conclusions: We hope that in the future, machine learning algorithms, such as the RF, will see a wider application.
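    The headline metrics reported above (sensitivity, specificity, area under the curve) can be computed from predicted labels and scores; a minimal standard-library sketch on invented toy data, not the study's patient records:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def auc(y_true, scores):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy labels and classifier scores (e.g. random-forest probabilities).
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.4, 0.3, 0.6, 0.2, 0.7, 0.1]
y_pred = [1 if s >= 0.5 else 0 for s in scores]
sens, spec = sensitivity_specificity(y_true, y_pred)
```

    On this toy data the sensitivity and specificity are both 0.75 and the AUC is 0.9375; in practice the 0.5 threshold can be moved to trade sensitivity against specificity, which is exactly the trade-off the ROC curve summarizes.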

  17. Gamma-Ray Pulsars Models and Predictions

    CERN Document Server

    Harding, A K

    2001-01-01

    Pulsed emission from gamma-ray pulsars originates inside the magnetosphere, from radiation by charged particles accelerated near the magnetic poles or in the outer gaps. In polar cap models, the high energy spectrum is cut off by magnetic pair production above an energy that is dependent on the local magnetic field strength. While most young pulsars with surface fields in the range B = 10^{12} - 10^{13} G are expected to have high energy cutoffs around several GeV, the gamma-ray spectra of old pulsars having lower surface fields may extend to 50 GeV. Although the gamma-ray emission of older pulsars is weaker, detecting pulsed emission at high energies from nearby sources would be an important confirmation of polar cap models. Outer gap models predict more gradual high-energy turnovers at around 10 GeV, but also predict an inverse Compton component extending to TeV energies. Detection of pulsed TeV emission, which would not survive attenuation at the polar caps, is thus an important test of outer gap models. N...

  18. Artificial Neural Network Model for Predicting Compressive Strength

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

    Full Text Available Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. The test of the model by unused data within the range of input parameters shows that the maximum absolute error for the model is about 20%, and 88% of the output results have absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results showed that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
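    A back-propagation network of this kind can be sketched in a few dozen lines; the toy example below (an invented two-input target standing in for scaled mix-proportion inputs, not the paper's data) trains a single hidden layer by stochastic gradient descent.

```python
import math, random

random.seed(0)

# Tiny one-hidden-layer network mapping two inputs (think: scaled w/c
# ratio and slump) to one output (a strength-like quantity).
H = 4
w1 = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [math.tanh(sum(wij * xj for wij, xj in zip(wi, x)) + bi)
         for wi, bi in zip(w1, b1)]
    return h, sum(wo * ho for wo, ho in zip(w2, h)) + b2

# Synthetic target: strength falls with w/c and rises slightly with slump.
data = [((wc, sl), 1.0 - 0.8 * wc + 0.2 * sl)
        for wc in (0.0, 0.5, 1.0) for sl in (0.0, 0.5, 1.0)]

lr = 0.1
for _ in range(2000):
    for x, y in data:
        h, out = forward(x)
        err = out - y
        for j in range(H):
            grad_h = err * w2[j] * (1 - h[j] ** 2)  # back-propagated error
            w2[j] -= lr * err * h[j]
            for k in range(2):
                w1[j][k] -= lr * grad_h * x[k]
            b1[j] -= lr * grad_h
        b2 -= lr * err

mse = sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)
```

    After training, the mean squared error on the nine toy points is small; a real application would of course use held-out data, as the study's 20%/10% error analysis does.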

  19. Ground Motion Prediction Models for Caucasus Region

    Science.gov (United States)

    Jorjiashvili, Nato; Godoladze, Tea; Tvaradze, Nino; Tumanova, Nino

    2016-04-01

    Ground motion prediction models (GMPMs) relate ground motion intensity measures to variables describing earthquake source, path, and site effects. Estimation of expected ground motion is a fundamental part of earthquake hazard assessment. The parameter most commonly used in attenuation relations is peak ground acceleration or spectral acceleration, because it provides the information needed for seismic hazard assessment. Development of the Georgian Digital Seismic Network began in 2003. In this study, new GMP models are obtained based on new data from the Georgian seismic network and from neighboring countries. The models are estimated in the classical statistical way, by regression analysis. Site ground conditions are additionally considered, because the same earthquake recorded at the same distance may cause different damage depending on ground conditions. Empirical ground-motion prediction models (GMPMs) require adjustment to make them appropriate for site-specific scenarios. However, the process of making such adjustments remains a challenge. This work presents a holistic framework for the development of a peak ground acceleration (PGA) or spectral acceleration (SA) GMPE that is easily adjustable to different seismological conditions and does not suffer from the practical problems associated with adjustments in the response spectral domain.
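    Fitting a GMPE of the simplest functional form, ln(PGA) = a + b·M + c·ln(R), by least-squares regression can be sketched as follows; the "true" coefficients and synthetic records here are invented for illustration, not taken from the Georgian network.

```python
import math, random

random.seed(1)

# Synthetic records: ln(PGA) = a + b*M + c*ln(R) + noise, with assumed
# coefficients; a real GMPE would be fit to network recordings.
a_true, b_true, c_true = -3.5, 1.2, -1.6
records = []
for _ in range(200):
    M = random.uniform(4.0, 7.0)          # magnitude
    R = random.uniform(5.0, 150.0)        # distance, km
    ln_pga = a_true + b_true * M + c_true * math.log(R) + random.gauss(0, 0.3)
    records.append((1.0, M, math.log(R), ln_pga))

# Ordinary least squares via the 3x3 normal equations.
X = [r[:3] for r in records]
y = [r[3] for r in records]
A = [[sum(xi[i] * xi[j] for xi in X) for j in range(3)] for i in range(3)]
bvec = [sum(xi[i] * yi for xi, yi in zip(X, y)) for i in range(3)]

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    m = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for c in range(col, 4):
                m[r][c] -= f * m[col][c]
    x = [0.0] * 3
    for i in (2, 1, 0):
        x[i] = (m[i][3] - sum(m[i][j] * x[j] for j in range(i + 1, 3))) / m[i][i]
    return x

a_hat, b_hat, c_hat = solve3(A, bvec)
```

    With 200 records the recovered coefficients land close to the assumed values; site terms would enter the same way, as extra regression columns.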

  20. Modeling and Prediction of Krueger Device Noise

    Science.gov (United States)

    Guo, Yueping; Burley, Casey L.; Thomas, Russell H.

    2016-01-01

    This paper presents the development of a noise prediction model for aircraft Krueger flap devices, which are considered as alternatives to leading edge slotted slats. The prediction model decomposes the total Krueger noise into four components, generated by the unsteady flows, respectively, in the cove under the pressure side surface of the Krueger, in the gap between the Krueger trailing edge and the main wing, around the brackets supporting the Krueger device, and around the cavity on the lower side of the main wing. For each noise component, the modeling follows a physics-based approach that aims at capturing the dominant noise-generating features in the flow and developing correlations between the noise and the flow parameters that control the noise generation processes. The far field noise is modeled using each noise component's respective spectral function, far field directivity, Mach number dependence, component amplitude, and other parametric trends. Preliminary validations are carried out using small scale experimental data, and two applications are discussed: one for conventional aircraft and the other for advanced configurations. The former focuses on the parametric trends of Krueger noise with respect to design parameters, while the latter reveals its importance in relation to other airframe noise components.

  1. A generative model for predicting terrorist incidents

    Science.gov (United States)

    Verma, Dinesh C.; Verma, Archit; Felmlee, Diane; Pearson, Gavin; Whitaker, Roger

    2017-05-01

    A major concern in coalition peace-support operations is the incidence of terrorist activity. In this paper, we propose a generative model for the occurrence of terrorist incidents, and illustrate that an increase in diversity, as measured by the number of different social groups to which an individual belongs, is inversely correlated with the likelihood of a terrorist incident in the society. A generative model is one that can predict the likelihood of events in new contexts, as opposed to statistical models, which predict future incidents based on the history of incidents in an existing context. Generative models can be useful in planning for persistent Intelligence, Surveillance and Reconnaissance (ISR), since they allow an estimation of regions in the theater of operation where terrorist incidents may arise, and thus can be used to better allocate the assignment and deployment of ISR assets. In this paper, we present a taxonomy of terrorist incidents, identify factors related to the occurrence of terrorist incidents, and provide a mathematical analysis calculating the likelihood of occurrence of terrorist incidents in three common real-life scenarios arising in peace-keeping operations.

  2. Efficient estimation of semiparametric copula models for bivariate survival data

    KAUST Repository

    Cheng, Guang

    2014-01-01

    A semiparametric copula model for bivariate survival data is characterized by a parametric copula model of dependence and nonparametric models of two marginal survival functions. Efficient estimation for the semiparametric copula model has been recently studied for the complete data case. When the survival data are censored, semiparametric efficient estimation has only been considered for some specific copula models such as the Gaussian copulas. In this paper, we obtain the semiparametric efficiency bound and efficient estimation for general semiparametric copula models for possibly censored data. We construct an approximate maximum likelihood estimator by approximating the log baseline hazard functions with spline functions. We show that our estimates of the copula dependence parameter and the survival functions are asymptotically normal and efficient. Simple consistent covariance estimators are also provided. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2013 Elsevier Inc.

  3. Optimal feedback scheduling of model predictive controllers

    Institute of Scientific and Technical Information of China (English)

    Pingfang ZHOU; Jianying XIE; Xiaolong DENG

    2006-01-01

    Model predictive control (MPC) cannot be reliably applied to real-time control systems because its computation time is not well defined. Implemented as an anytime algorithm, an MPC task allows computation time to be traded for control performance, thus obtaining predictability in time. Optimal feedback scheduling (FS-CBS) of a set of MPC tasks is presented to maximize the global control performance subject to limited processor time. Each MPC task is assigned a constant bandwidth server (CBS), whose reserved processor time is adjusted dynamically. The constraints in the FS-CBS guarantee schedulability of the total task set and stability of each component. The FS-CBS is shown to be robust against variation in the execution time of MPC tasks at runtime. Simulation results illustrate its effectiveness.

  4. Objective calibration of numerical weather prediction models

    Science.gov (United States)

    Voudouri, A.; Khain, P.; Carmona, I.; Bellprat, O.; Grazzini, F.; Avgoustoglou, E.; Bettems, J. M.; Kaufmann, P.

    2017-07-01

    Numerical weather prediction (NWP) and climate models use parameterization schemes for physical processes, which often include free or poorly confined parameters. Model developers normally calibrate the values of these parameters subjectively to improve the agreement of forecasts with available observations, a procedure referred to as expert tuning. A practicable objective multivariate calibration method built on a quadratic meta-model (MM), which has been applied to a regional climate model (RCM), has been shown to be at least as good as expert tuning. Based on these results, an approach to applying the methodology to an NWP model is presented in this study. Challenges in transferring the methodology from RCM to NWP are not restricted to the use of higher resolution and different time scales. The sensitivity of NWP model quality with respect to the model parameter space has to be clarified, and the overall procedure must be optimized in terms of the amount of computing resources required for the calibration of an NWP model. Three free model parameters, affecting mainly the turbulence parameterization schemes, were originally selected with respect to their influence on variables associated with daily forecasts, such as daily minimum and maximum 2 m temperature as well as 24 h accumulated precipitation. Preliminary results indicate that the approach is both affordable in terms of computing resources and meaningful in terms of improved forecast quality. In addition, the proposed methodology has the advantage of being a replicable procedure that can be applied when an updated model version is launched and/or to customize the same model implementation over different climatological areas.

  5. Prediction models from CAD models of 3D objects

    Science.gov (United States)

    Camps, Octavia I.

    1992-11-01

    In this paper we present a probabilistic prediction based approach for CAD-based object recognition. Given a CAD model of an object, the PREMIO system combines techniques of analytic graphics and physical models of lights and sensors to predict how features of the object will appear in images. In nearly 4,000 experiments on analytically-generated and real images, we show that in a semi-controlled environment, predicting the detectability of features of the image can successfully guide a search procedure to make informed choices of model and image features in its search for correspondences that can be used to hypothesize the pose of the object. Furthermore, we provide a rigorous experimental protocol that can be used to determine the optimal number of correspondences to seek so that the probability of failing to find a pose and of finding an inaccurate pose are minimized.

  6. Model predictive control of MSMPR crystallizers

    Science.gov (United States)

    Moldoványi, Nóra; Lakatos, Béla G.; Szeifert, Ferenc

    2005-02-01

    A multi-input-multi-output (MIMO) control problem of isothermal continuous crystallizers is addressed in order to create an adequate model-based control system. The moment equation model of mixed suspension, mixed product removal (MSMPR) crystallizers, which forms a dynamical system, is used; its state is represented by a vector of six variables: the first four leading moments of the crystal size, the solute concentration and the solvent concentration. Hence, the time evolution of the system occurs in a bounded region of the six-dimensional phase space. The controlled variables are the mean grain size and the crystal size distribution; the manipulated variables are the input concentration of the solute and the flow rate. The controllability and observability, as well as the coupling between the inputs and the outputs, were analyzed by simulation using the linearized model. It is shown that the crystallizer is a nonlinear MIMO system with strong coupling between the state variables. Considering the possibilities of model reduction, a third-order model was found quite adequate for model estimation in model predictive control (MPC). The mean crystal size and the variance of the size distribution can be controlled nearly separately by the residence time and the inlet solute concentration, respectively. By seeding, the controllability of the crystallizer increases significantly, and the overshoots and oscillations become smaller. The results of the control study have shown that linear MPC is an adaptable and feasible controller for continuous crystallizers.

  7. An Anisotropic Hardening Model for Springback Prediction

    Science.gov (United States)

    Zeng, Danielle; Xia, Z. Cedric

    2005-08-01

    As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closures panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture realistic Bauschinger effect at reverse loading, such as when material passes through die radii or drawbead during sheet metal forming process. This model accounts for material anisotropic yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent Bauschinger effect. The effectiveness of the model is demonstrated by comparison of numerical and experimental springback results for a DP600 straight U-channel test.

  8. Fruit fly optimization algorithm based high efficiency and low NOx combustion modeling for a boiler

    Institute of Scientific and Technical Information of China (English)

    ZHANG Zhenxing; SUN Baomin; XIN Jing

    2014-01-01

    In order to control NOx emissions and enhance boiler efficiency in coal-fired boilers, the thermal operating data from an ultra-supercritical 1 000 MW unit boiler were analyzed. On the basis of the support vector regression machine (SVM), the fruit fly optimization algorithm (FOA) was applied to optimize the penalty parameter C, kernel parameter g and insensitive loss coefficient of the model. Then, the FOA-SVM model was established to predict the NOx emissions and boiler efficiency, and the performance of this model was compared with that of the GA-SVM model optimized by a genetic algorithm (GA). The results show that the FOA-SVM model has better prediction accuracy and generalization capability; the maximum average relative error on the testing set occurs in the NOx emissions model and is only 3.59%. The above models can predict the NOx emissions and boiler efficiency accurately, so they are very suitable for on-line modeling and prediction, which provides a good model foundation for further optimized operation of large-capacity boilers.
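    The FOA component can be sketched as a simple swarm search; this toy version (with an invented objective standing in for cross-validated SVM error as a function of one hyperparameter) keeps only the core mechanic of flies sampling around the best-smelling location found so far.

```python
import math, random

random.seed(42)

def validation_error(log_c):
    """Stand-in objective: pretend cross-validation error of an SVM as a
    function of log10(C); the minimum sits near log_c = 1.4."""
    return (log_c - 1.5) ** 2 + 0.1 * math.sin(8 * log_c) + 1.0

# Simplified fruit-fly optimization: in each iteration, a swarm of flies
# takes random steps around the current best location; the best smell
# (lowest error) becomes the new swarm center.
best_x = random.uniform(-3, 3)
best_err = validation_error(best_x)
for _ in range(100):            # iterations
    for _ in range(20):         # flies per iteration
        x = best_x + random.uniform(-0.5, 0.5)
        err = validation_error(x)
        if err < best_err:
            best_x, best_err = x, err
```

    A real FOA-SVM application would search over (C, g, epsilon) jointly and evaluate each candidate by training an SVM, which is exactly why a cheap, derivative-free search like this is attractive.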

  9. The role of pre-morbid intelligence and cognitive reserve in predicting cognitive efficiency in a sample of Italian elderly.

    Science.gov (United States)

    Caffò, Alessandro O; Lopez, Antonella; Spano, Giuseppina; Saracino, Giuseppe; Stasolla, Fabrizio; Ciriello, Giuseppe; Grattagliano, Ignazio; Lancioni, Giulio E; Bosco, Andrea

    2016-12-01

    Models of cognitive reserve in aging suggest that an individual's life experience (education, working activity, and leisure) can exert a neuroprotective effect against cognitive decline and may represent an important contribution to successful aging. The objective of the present study is to investigate the role of cognitive reserve, pre-morbid intelligence, age, and education level in predicting cognitive efficiency in a sample of healthy aged individuals and individuals with probable mild cognitive impairment. Two hundred and eight aging participants recruited from the provincial region of Bari (Apulia, Italy) took part in the study. A battery of standardized tests was administered to them to measure cognitive reserve, pre-morbid intelligence, and cognitive efficiency. Protocols for 10 participants were excluded since they did not meet inclusion criteria, and statistical analyses were conducted on data from the remaining 198 participants. A path analysis was used to test the following model: age, education level, and intelligence directly influence cognitive reserve and cognitive efficiency; cognitive reserve mediates the influence of age, education level, and intelligence on cognitive efficiency. Cognitive reserve fully mediates the relationship between pre-morbid intelligence and education level and cognitive efficiency, while age maintains a direct effect on cognitive efficiency. Cognitive reserve appears to exert a protective effect against cognitive decline in normal and pathological populations, thus masking, at least in the early phases of neurodegeneration, the decline of memory, orientation, attention, language, and reasoning skills. The assessment of cognitive reserve may represent a useful evaluation supplement in neuropsychological screening protocols for cognitive decline.

  10. Efficient modelling of droplet dynamics on complex surfaces

    Science.gov (United States)

    Karapetsas, George; Chamakos, Nikolaos T.; Papathanasiou, Athanasios G.

    2016-03-01

    This work investigates the dynamics of droplet interaction with smooth or structured solid surfaces using a novel sharp-interface scheme which allows the efficient modelling of multiple dynamic contact lines. The liquid-gas and liquid-solid interfaces are treated in a unified context and the dynamic contact angle emerges simply due to the combined action of the disjoining and capillary pressure, and viscous stresses, without the need of an explicit boundary condition or any requirement for the predefinition of the number and position of the contact lines. The latter, as it is shown, renders the model able to handle interfacial flows with topological changes, e.g. in the case of an impinging droplet on a structured surface. Then it is possible to predict, depending on the impact velocity, whether the droplet will fully or partially impregnate the structures of the solid, or will result in a ‘fakir’, i.e. suspended, state. In the case of a droplet sliding on an inclined substrate, we also demonstrate the built-in capability of our model to provide a prediction for either static or dynamic contact angle hysteresis. We focus our study on hydrophobic surfaces and examine the effect of the geometrical characteristics of the solid surface. It is shown that the presence of air inclusions trapped in the micro-structure of a hydrophobic substrate (Cassie-Baxter state) results in a decrease of contact angle hysteresis and an increase of the droplet migration velocity, in agreement with experimental observations for super-hydrophobic surfaces. Moreover, we perform 3D simulations which are in line with the 2D ones regarding the droplet mobility and also indicate that the contact angle hysteresis may be significantly affected by the directionality of the structures with respect to the droplet motion.

  11. Predictive modelling of ferroelectric tunnel junctions

    Science.gov (United States)

    Velev, Julian P.; Burton, John D.; Zhuravlev, Mikhail Ye; Tsymbal, Evgeny Y.

    2016-05-01

    Ferroelectric tunnel junctions combine the phenomena of quantum-mechanical tunnelling and switchable spontaneous polarisation of a nanometre-thick ferroelectric film into novel device functionality. Switching the ferroelectric barrier polarisation direction produces a sizable change in resistance of the junction—a phenomenon known as the tunnelling electroresistance effect. From a fundamental perspective, ferroelectric tunnel junctions and their version with ferromagnetic electrodes, i.e., multiferroic tunnel junctions, are testbeds for studying the underlying mechanisms of tunnelling electroresistance as well as the interplay between electric and magnetic degrees of freedom and their effect on transport. From a practical perspective, ferroelectric tunnel junctions hold promise for disruptive device applications. In a very short time, they have traversed the path from basic model predictions to prototypes for novel non-volatile ferroelectric random access memories with non-destructive readout. This remarkable progress is to a large extent driven by a productive cycle of predictive modelling and innovative experimental effort. In this review article, we outline the development of the ferroelectric tunnel junction concept and the role of theoretical modelling in guiding experimental work. We discuss a wide range of physical phenomena that control the functional properties of ferroelectric tunnel junctions and summarise the state-of-the-art achievements in the field.

  12. Simple predictions from multifield inflationary models.

    Science.gov (United States)

    Easther, Richard; Frazer, Jonathan; Peiris, Hiranya V; Price, Layne C

    2014-04-25

    We explore whether multifield inflationary models make unambiguous predictions for fundamental cosmological observables. Focusing on N-quadratic inflation, we numerically evaluate the full perturbation equations for models with 2, 3, and O(100) fields, using several distinct methods for specifying the initial values of the background fields. All scenarios are highly predictive, with the probability distribution functions of the cosmological observables becoming more sharply peaked as N increases. For N=100 fields, 95% of our Monte Carlo samples fall in the ranges ns∈(0.9455,0.9534), α∈(-9.741,-7.047)×10-4, r∈(0.1445,0.1449), and riso∈(0.02137,3.510)×10-3 for the spectral index, running, tensor-to-scalar ratio, and isocurvature-to-adiabatic ratio, respectively. The expected amplitude of isocurvature perturbations grows with N, raising the possibility that many-field models may be sensitive to postinflationary physics and suggesting new avenues for testing these scenarios.

  13. Information, complexity and efficiency: The automobile model

    Energy Technology Data Exchange (ETDEWEB)

    Allenby, B. [Lucent Technologies (United States)]|[Lawrence Livermore National Lab., CA (United States)

    1996-08-08

    The new, rapidly evolving field of industrial ecology - the objective, multidisciplinary study of industrial and economic systems and their linkages with fundamental natural systems - provides strong ground for believing that a more environmentally and economically efficient economy will be more information intensive and complex. Information and intellectual capital will be substituted for the more traditional inputs of materials and energy in producing a desirable, yet sustainable, quality of life. While at this point this remains a strong hypothesis, the evolution of the automobile industry can be used to illustrate how such substitution may, in fact, already be occurring in an environmentally and economically critical sector.

  14. Removal efficiency calculated beforehand: QSAR enabled predictions for nanofiltration and advanced oxidation

    NARCIS (Netherlands)

    Vries, D; Wols, B.A.; de Voogt, P.

    2013-01-01

    The efficiency of water treatment systems in removing emerging (chemical) substances is often unknown. Consequently, the prediction of the removal of contaminants in the treatment and supply chain of drinking water is of great interest. By collecting and processing existing chemical properties of

  17. Predictions of models for environmental radiological assessment

    Energy Technology Data Exchange (ETDEWEB)

    Peres, Sueli da Silva; Lauria, Dejanira da Costa, E-mail: suelip@ird.gov.br, E-mail: dejanira@irg.gov.br [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Servico de Avaliacao de Impacto Ambiental, Rio de Janeiro, RJ (Brazil); Mahler, Claudio Fernando [Coppe. Instituto Alberto Luiz Coimbra de Pos-Graduacao e Pesquisa de Engenharia, Universidade Federal do Rio de Janeiro (UFRJ) - Programa de Engenharia Civil, RJ (Brazil)

    2011-07-01

    In the field of environmental impact assessment, models are used for estimating the source term, environmental dispersion and transfer of radionuclides, exposure pathways, radiation dose and the risk to human beings. Although it is recognized that specific local data are important to improve the quality of dose assessment results, in practice obtaining them can be very difficult and expensive. Sources of uncertainty are numerous, among which we can cite: the subjectivity of modelers, exposure scenarios and pathways, the codes used, and general parameters. The various models available utilize different mathematical approaches with different complexities that can result in different predictions. Thus, for the same inputs, different models can produce very different outputs. This paper briefly presents the main advances in the field of environmental radiological assessment that aim to improve the reliability of the models used in the assessment of environmental radiological impact. The model intercomparison exercise supplied incompatible results for {sup 137}Cs and {sup 60}Co, reinforcing the need to develop reference methodologies for environmental radiological assessment that allow dose estimates to be compared on a common basis. The results of the intercomparison exercise are presented briefly. (author)

  18. Predicting Protein Secondary Structure with Markov Models

    DEFF Research Database (Denmark)

    Fischer, Paul; Larsen, Simon; Thomsen, Claus

    2004-01-01

    The primary structure of a protein is the sequence of its amino acids. The secondary structure describes structural properties of the molecule such as which parts of it form sheets, helices or coils. Spatial and other properties are described by the higher order structures. The classification task we are considering here is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained...
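The classification scheme described above can be sketched with a toy example: train one first-order Markov chain per structural class, then assign a segment to the class under which it is most likely. Everything below (the two-letter alphabet, the training strings, the Laplace smoothing) is illustrative and not taken from the paper.

```python
import math
from collections import defaultdict

def train_markov(seqs, alphabet):
    # First-order transition probabilities with Laplace (add-one) smoothing.
    counts = defaultdict(lambda: defaultdict(lambda: 1.0))
    for s in seqs:
        for a, b in zip(s, s[1:]):
            counts[a][b] += 1.0
    model = {}
    for a in alphabet:
        total = sum(counts[a][b] for b in alphabet)
        model[a] = {b: counts[a][b] / total for b in alphabet}
    return model

def log_likelihood(seq, model):
    return sum(math.log(model[a][b]) for a, b in zip(seq, seq[1:]))

def classify(seq, models):
    # Pick the class whose Markov chain assigns the highest likelihood.
    return max(models, key=lambda c: log_likelihood(seq, models[c]))

alphabet = "AG"  # toy two-letter alphabet standing in for the 20 amino acids
models = {
    "helix": train_markov(["AAAAAAGA", "AAAAAAA"], alphabet),
    "coil":  train_markov(["AGAGAGAG", "GAGAGA"], alphabet),
}
print(classify("AAAAA", models))  # helix
print(classify("AGAGA", models))  # coil
```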

  19. A Modified Model Predictive Control Scheme

    Institute of Scientific and Technical Information of China (English)

    Xiao-Bing Hu; Wen-Hua Chen

    2005-01-01

    In implementations of MPC (Model Predictive Control) schemes, two issues need to be addressed. One is how to enlarge the stability region as much as possible. The other is how to guarantee stability when a computational time limitation exists. In this paper, a modified MPC scheme for constrained linear systems is described. An offline LMI-based iteration process is introduced to expand the stability region. At the same time, a database of feasible control sequences is generated offline so that stability can still be guaranteed in the case of computational time limitations. Simulation results illustrate the effectiveness of this new approach.

  20. Hierarchical Model Predictive Control for Resource Distribution

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob

    2010-01-01

    This paper deals with hierarchical model predictive control (MPC) of distributed systems. A three-level hierarchical approach is proposed, consisting of a high-level MPC controller, a second level of so-called aggregators, controlled by an online MPC-like algorithm, and a lower level of autonomous...... facilitates plug-and-play addition of subsystems without redesign of any controllers. The method is supported by a number of simulations featuring a three-level smart-grid power control system for a small isolated power grid....

  1. Critical conceptualism in environmental modeling and prediction.

    Science.gov (United States)

    Christakos, G

    2003-10-15

    Many important problems in environmental science and engineering are of a conceptual nature. Research and development, however, often becomes so preoccupied with technical issues, which are themselves fascinating, that it neglects essential methodological elements of conceptual reasoning and theoretical inquiry. This work suggests that valuable insight into environmental modeling can be gained by means of critical conceptualism which focuses on the software of human reason and, in practical terms, leads to a powerful methodological framework of space-time modeling and prediction. A knowledge synthesis system develops the rational means for the epistemic integration of various physical knowledge bases relevant to the natural system of interest in order to obtain a realistic representation of the system, provide a rigorous assessment of the uncertainty sources, generate meaningful predictions of environmental processes in space-time, and produce science-based decisions. No restriction is imposed on the shape of the distribution model or the form of the predictor (non-Gaussian distributions, multiple-point statistics, and nonlinear models are automatically incorporated). The scientific reasoning structure underlying knowledge synthesis involves teleologic criteria and stochastic logic principles which have important advantages over the reasoning method of conventional space-time techniques. Insight is gained in terms of real world applications, including the following: the study of global ozone patterns in the atmosphere using data sets generated by instruments on board the Nimbus 7 satellite and secondary information in terms of total ozone-tropopause pressure models; the mapping of arsenic concentrations in the Bangladesh drinking water by assimilating hard and soft data from an extensive network of monitoring wells; and the dynamic imaging of probability distributions of pollutants across the Kalamazoo river.

  2. Evaluation of Spatial Agreement of Distinct Landslide Prediction Models

    Science.gov (United States)

    Sterlacchini, Simone; Bordogna, Gloria; Frigerio, Ivan

    2013-04-01

    The aim of the study was to assess the degree of spatial agreement of the patterns predicted by a set of coherent landslide prediction maps with almost identical success and prediction rate curves. If two or more models have a similar performance, the choice of the best one is not a trivial operation and cannot be based on success and prediction rate curves only. In fact, two or more prediction maps with similar accuracy and predictive power may not have the same degree of agreement in terms of spatially predicted patterns. The selected study area is the high Valtellina valley, in northern Italy, covering a surface of about 450 km2, for which mapping of historical landslides is available. To assess landslide susceptibility, we applied the Weights of Evidence (WofE) modeling technique implemented by the USGS in the ARC-SDM tool. WofE efficiently investigates the spatial relationships between past events and multiple predisposing factors, providing useful information to identify the most probable locations of future landslide occurrences. We carried out 13 distinct experiments by changing the number of morphometric and geo-environmental explanatory variables in each experiment with the same training set, thus generating distinct landslide prediction models that compute the probability of landslide occurrence in each pixel. Expert knowledge and previous results from indirect statistically based methods suggested slope, land use, and geology as the best "driving controlling factors". The Success Rate Curve (SRC) was used to estimate how well the results of each model fit the occurrence of the landslides used for training. The Prediction Rate Curve (PRC) was used to estimate how well each model predicts the occurrence of landslides in the validation set. We found that the performances were very similar for the different models. 
Also the dendrogram of the Cohen's kappa statistic and Principal Component Analysis (PCA) were

  3. Survival model construction guided by fit and predictive strength.

    Science.gov (United States)

    Chauvel, Cécile; O'Quigley, John

    2016-10-05

    Survival model construction can be guided by goodness-of-fit techniques as well as by measures of predictive strength. Here, we aim to bring these distinct techniques together within a single framework. The goal is to determine how best to characterize and code the effects of the variables, in particular time dependencies, whether taken singly or in combination with other related covariates. Simple graphical techniques can provide an immediate visual indication of goodness-of-fit but, in cases of departure from model assumptions, will point in the direction of a more involved and richer alternative model. These techniques are intuitive, and this intuition is backed up by formal theorems that underlie the process of building richer models from simpler ones. Measures of predictive strength are used in conjunction with these goodness-of-fit techniques and, again, formal theorems show that these measures can help identify models closest to the unknown non-proportional hazards mechanism that we may suppose generates the observations. Illustrations from studies in breast cancer show how these tools can help guide the practical problem of efficient model construction for survival data.

  4. Resonant circuit model for efficient metamaterial absorber.

    Science.gov (United States)

    Sellier, Alexandre; Teperik, Tatiana V; de Lustrac, André

    2013-11-04

    The resonant absorption in a planar metamaterial is studied theoretically. We present a simple physical model describing this phenomenon in terms of an equivalent resonant circuit. We discuss the role of the radiative and dissipative damping of the resonant mode supported by the metamaterial in the formation of the absorption spectra. We show that the results of rigorous solutions of Maxwell's equations can be fully retrieved with a simple model describing the system as an equivalent resonant circuit. This simple model allows us to explain the total absorption effect observed in the system on a common physical ground by referring it to the impedance matching condition at resonance.
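The total-absorption condition described here has a compact closed form in temporal coupled-mode theory, one standard way to formalize the equivalent-circuit picture (the formula below is that standard single-resonance form, an assumption on our part rather than the paper's notation): absorption reaches 1 at resonance exactly when radiative and dissipative damping are matched.

```python
def absorption(omega, omega0, gamma_r, gamma_d):
    # Single-resonance absorption in temporal coupled-mode theory.
    # Total absorption (A = 1) at omega = omega0 requires gamma_r == gamma_d,
    # i.e. the impedance-matching condition between radiative and
    # dissipative damping rates.
    return 4 * gamma_r * gamma_d / ((omega - omega0) ** 2 + (gamma_r + gamma_d) ** 2)

print(absorption(1.0, 1.0, 0.05, 0.05))  # matched damping: 1.0
print(absorption(1.0, 1.0, 0.05, 0.20))  # mismatched damping: 0.64
```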

  5. Automated Irrigation System using Weather Prediction for Efficient Usage of Water Resources

    Science.gov (United States)

    Susmitha, A.; Alakananda, T.; Apoorva, M. L.; Ramesh, T. K.

    2017-08-01

    In agriculture the major problem farmers face is water scarcity. To improve water usage, one drip-irrigation system that has been implemented is “Automated irrigation system with partition facility for effective irrigation of small scale farms” (AISPF). This method has some drawbacks, which can be improved upon; here we present a method called “Automated irrigation system using weather prediction for efficient usage of water resources” (AISWP) that addresses the shortcomings of the AISPF process. The AISWP method helps us use the available water resources more efficiently: it senses the moisture present in the soil and, as an added feature, predicts the weather by sensing two parameters, temperature and humidity, processing the measured values through an algorithm and releasing water accordingly, so that water is used efficiently.
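A minimal sketch of the kind of rule AISWP describes: skip watering when the soil is already moist or when the sensed temperature and humidity suggest rain is likely. All thresholds and the deficit-to-minutes conversion below are hypothetical, invented for illustration.

```python
def irrigation_minutes(soil_moisture, temperature_c, humidity_pct,
                       dry_threshold=30.0):
    # Hypothetical decision rule in the spirit of AISWP: water only when the
    # soil is dry AND the measured temperature/humidity do not suggest
    # imminent rain (high humidity combined with mild temperature).
    if soil_moisture >= dry_threshold:
        return 0                      # soil wet enough, skip watering
    rain_likely = humidity_pct > 85 and temperature_c < 25
    if rain_likely:
        return 0                      # let the predicted rain do the work
    deficit = dry_threshold - soil_moisture
    return round(deficit * 0.5)      # minutes of valve opening per % deficit

print(irrigation_minutes(20, 32, 40))  # dry soil, hot dry air -> 5 minutes
print(irrigation_minutes(20, 22, 90))  # dry soil, but rain likely -> 0
```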

  6. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.

  7. A Predictive Maintenance Model for Railway Tracks

    DEFF Research Database (Denmark)

    Li, Rui; Wen, Min; Salling, Kim Bang

    2015-01-01

    For modern railways, maintenance is critical for ensuring safety, train punctuality and overall capacity utilization. The cost of railway maintenance in Europe is high, on average between 30,000 – 100,000 Euro per km per year [1]. Aiming to reduce such maintenance expenditure, this paper...... presents a mathematical model based on Mixed Integer Programming (MIP) which is designed to optimize the predictive railway tamping activities for ballasted track for a time horizon of up to four years. The objective function is set up to minimize the actual costs for the tamping machine (measured by time...... recovery of the track quality after the tamping operation and (5) tamping machine operation factors. A Danish railway track between Odense and Fredericia, 57.2 km in length, is used in the proposed maintenance model for a time period of two to four years. The total cost can be reduced by up to 50...

  8. Dinucleotide controlled null models for comparative RNA gene prediction

    Directory of Open Access Journals (Sweden)

    Gesell Tanja

    2008-05-01

    Full Text Available Abstract Background Comparative prediction of RNA structures can be used to identify functional noncoding RNAs in genomic screens. It was shown recently by Babak et al. [BMC Bioinformatics. 8:33] that RNA gene prediction programs can be biased by the genomic dinucleotide content, in particular those programs using a thermodynamic folding model including stacking energies. As a consequence, there is need for dinucleotide-preserving control strategies to assess the significance of such predictions. While there have been randomization algorithms for single sequences for many years, the problem has remained challenging for multiple alignments and there is currently no algorithm available. Results We present a program called SISSIz that simulates multiple alignments of a given average dinucleotide content. Meeting additional requirements of an accurate null model, the randomized alignments are on average of the same sequence diversity and preserve local conservation and gap patterns. We make use of a phylogenetic substitution model that includes overlapping dependencies and site-specific rates. Using fast heuristics and a distance based approach, a tree is estimated under this model which is used to guide the simulations. The new algorithm is tested on vertebrate genomic alignments and the effect on RNA structure predictions is studied. In addition, we directly combined the new null model with the RNAalifold consensus folding algorithm giving a new variant of a thermodynamic structure based RNA gene finding program that is not biased by the dinucleotide content. Conclusion SISSIz implements an efficient algorithm to randomize multiple alignments preserving dinucleotide content. It can be used to get more accurate estimates of false positive rates of existing programs, to produce negative controls for the training of machine learning based programs, or as standalone RNA gene finding program. Other applications in comparative genomics that require
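The statistic that such a null model must preserve is easy to state: the frequencies of overlapping dinucleotides in the sequence. A minimal counter for that quantity (this is only the statistic SISSIz controls, not its randomization algorithm):

```python
from collections import Counter

def dinucleotide_freqs(seq):
    # Relative frequencies of overlapping dinucleotides -- the quantity that
    # a dinucleotide-preserving null model must hold fixed on average.
    pairs = Counter(seq[i:i + 2] for i in range(len(seq) - 1))
    total = sum(pairs.values())
    return {p: n / total for p, n in pairs.items()}

print(dinucleotide_freqs("ACGTACGT"))
```

Any dinucleotide-preserving shuffle of a sequence must return the same dictionary as the original; comparing these frequencies before and after randomization is a quick sanity check on a null-model implementation.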

  9. Cell population structure prior to bifurcation predicts efficiency of directed differentiation in human induced pluripotent cells

    Science.gov (United States)

    Bargaje, Rhishikesh; Trachana, Kalliopi; Shelton, Martin N.; McGinnis, Christopher S.; Zhou, Joseph X.; Chadick, Cora; Cook, Savannah; Cavanaugh, Christopher; Huang, Sui; Hood, Leroy

    2017-01-01

    Steering the differentiation of induced pluripotent stem cells (iPSCs) toward specific cell types is crucial for patient-specific disease modeling and drug testing. This effort requires the capacity to predict and control when and how multipotent progenitor cells commit to the desired cell fate. Cell fate commitment represents a critical state transition or “tipping point” at which complex systems undergo a sudden qualitative shift. To characterize such transitions during iPSC to cardiomyocyte differentiation, we analyzed the gene expression patterns of 96 developmental genes at single-cell resolution. We identified a bifurcation event early in the trajectory when a primitive streak-like cell population segregated into the mesodermal and endodermal lineages. Before this branching point, we could detect the signature of an imminent critical transition: increase in cell heterogeneity and coordination of gene expression. Correlation analysis of gene expression profiles at the tipping point indicates transcription factors that drive the state transition toward each alternative cell fate and their relationships with specific phenotypic readouts. The latter helps us to facilitate small molecule screening for differentiation efficiency. To this end, we set up an analysis of cell population structure at the tipping point after systematic variation of the protocol to bias the differentiation toward mesodermal or endodermal cell lineage. We were able to predict the proportion of cardiomyocytes many days before cells manifest the differentiated phenotype. The analysis of cell populations undergoing a critical state transition thus affords a tool to forecast cell fate outcomes and can be used to optimize differentiation protocols to obtain desired cell populations. PMID:28167799

  10. Explicit Nonlinear Model Predictive Control Theory and Applications

    CERN Document Server

    Grancharova, Alexandra

    2012-01-01

    Nonlinear Model Predictive Control (NMPC) has become the accepted methodology to solve complex control problems related to process industries. The main motivation behind explicit NMPC is that an explicit state feedback law avoids the need for executing a numerical optimization algorithm in real time. The benefits of an explicit solution, in addition to the efficient on-line computations, include also verifiability of the implementation and the possibility to design embedded control systems with low software and hardware complexity. This book considers the multi-parametric Nonlinear Programming (mp-NLP) approaches to explicit approximate NMPC of constrained nonlinear systems, developed by the authors, as well as their applications to various NMPC problem formulations and several case studies. The following types of nonlinear systems are considered, resulting in different NMPC problem formulations: nonlinear systems described by first-principles models and nonlinear systems described by black-box models; ...
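For intuition about what explicit MPC precomputes offline, here is the online receding-horizon loop it replaces, in its simplest linear, unconstrained form: stack the predictions over the horizon, solve a least-squares problem, and apply only the first input. The double-integrator plant, horizon, and weights are illustrative choices, not from the book.

```python
import numpy as np

def mpc_step(A, B, x0, x_ref, N=10, r=0.01):
    # One receding-horizon step: build x_k = A^k x0 + sum_j A^(k-1-j) B u_j,
    # solve the unconstrained least-squares tracking problem, and return only
    # the first input (the map x0 -> u0 is what explicit MPC tabulates offline).
    n, m = B.shape
    F = np.zeros((N * n, n))          # free response
    G = np.zeros((N * n, N * m))      # forced response
    Ak = np.eye(n)
    for k in range(N):
        Ak = A @ Ak                   # A^(k+1)
        F[k * n:(k + 1) * n, :] = Ak
        for j in range(k + 1):
            G[k * n:(k + 1) * n, j * m:(j + 1) * m] = \
                np.linalg.matrix_power(A, k - j) @ B
    ref = np.tile(x_ref, N)
    # Penalize tracking error plus a small input-effort term r * ||u||^2.
    H = np.vstack([G, np.sqrt(r) * np.eye(N * m)])
    y = np.concatenate([ref - F @ x0, np.zeros(N * m)])
    u = np.linalg.lstsq(H, y, rcond=None)[0]
    return u[:m]

A = np.array([[1.0, 0.1], [0.0, 1.0]])  # double integrator, dt = 0.1
B = np.array([[0.0], [0.1]])
x = np.array([0.0, 0.0])
for _ in range(100):
    u = mpc_step(A, B, x, x_ref=np.array([1.0, 0.0]))
    x = A @ x + B @ u
print(x)  # the closed loop should steer x toward the reference [1, 0]
```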

  11. A Coupled Probabilistic Wake Vortex and Aircraft Response Prediction Model

    Science.gov (United States)

    Gloudemans, Thijs; Van Lochem, Sander; Ras, Eelco; Malissa, Joel; Ahmad, Nashat N.; Lewis, Timothy A.

    2016-01-01

    Wake vortex spacing standards along with weather and runway occupancy time, restrict terminal area throughput and impose major constraints on the overall capacity and efficiency of the National Airspace System (NAS). For more than two decades, the National Aeronautics and Space Administration (NASA) has been conducting research on characterizing wake vortex behavior in order to develop fast-time wake transport and decay prediction models. It is expected that the models can be used in the systems-level design of advanced air traffic management (ATM) concepts that safely increase the capacity of the NAS. It is also envisioned that at a later stage of maturity, these models could potentially be used operationally, in ground-based spacing and scheduling systems as well as on the flight deck.

  12. A predictive fitness model for influenza

    Science.gov (United States)

    Łuksza, Marta; Lässig, Michael

    2014-03-01

    The seasonal human influenza A/H3N2 virus undergoes rapid evolution, which produces significant year-to-year sequence turnover in the population of circulating strains. Adaptive mutations respond to human immune challenge and occur primarily in antigenic epitopes, the antibody-binding domains of the viral surface protein haemagglutinin. Here we develop a fitness model for haemagglutinin that predicts the evolution of the viral population from one year to the next. Two factors are shown to determine the fitness of a strain: adaptive epitope changes and deleterious mutations outside the epitopes. We infer both fitness components for the strains circulating in a given year, using population-genetic data of all previous strains. From fitness and frequency of each strain, we predict the frequency of its descendent strains in the following year. This fitness model maps the adaptive history of influenza A and suggests a principled method for vaccine selection. Our results call for a more comprehensive epidemiology of influenza and other fast-evolving pathogens that integrates antigenic phenotypes with other viral functions coupled by genetic linkage.
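The core frequency update implied by such a fitness model fits in a few lines: reweight each strain's current frequency by the exponential of its inferred fitness and renormalize. The strain names and fitness values below are invented for illustration; the real model infers fitness from epitope and non-epitope mutations.

```python
import math

def predict_frequencies(freqs, fitness):
    # One generation of the fitness-model update: each strain's frequency is
    # reweighted by exp(fitness) and the population is renormalized so that
    # predicted frequencies sum to one.
    w = {s: freqs[s] * math.exp(fitness[s]) for s in freqs}
    total = sum(w.values())
    return {s: v / total for s, v in w.items()}

freqs = {"strainA": 0.6, "strainB": 0.4}
fitness = {"strainA": -0.5, "strainB": 0.8}   # hypothetical inferred values
print(predict_frequencies(freqs, fitness))    # strainB overtakes strainA
```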

  13. Predictive Model of Radiative Neutrino Masses

    CERN Document Server

    Babu, K S

    2013-01-01

    We present a simple and predictive model of radiative neutrino masses. It is a special case of the Zee model which introduces two Higgs doublets and a charged singlet. We impose a family-dependent Z_4 symmetry acting on the leptons, which reduces the number of parameters describing neutrino oscillations to four. A variety of predictions follow: The hierarchy of neutrino masses must be inverted; the lightest neutrino mass is extremely small and calculable; one of the neutrino mixing angles is determined in terms of the other two; the phase parameters take CP-conserving values with \\delta_{CP} = \\pi; and the effective mass in neutrinoless double beta decay lies in a narrow range, m_{\\beta \\beta} = (17.6 - 18.5) meV. The ratio of vacuum expectation values of the two Higgs doublets, tan\\beta, is determined to be either 1.9 or 0.19 from neutrino oscillation data. Flavor-conserving and flavor-changing couplings of the Higgs doublets are also determined from neutrino data. The non-standard neutral Higgs bosons, if t...

  14. A predictive model for dimensional errors in fused deposition modeling

    DEFF Research Database (Denmark)

    Stolfi, A.

    2015-01-01

    This work concerns the effect of deposition angle (a) and layer thickness (L) on the dimensional performance of FDM parts using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole a range from 0° to 177° at 3° steps and two...

  15. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference making as few assumptions about restrictive parametric models as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models by one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.

  16. EFFICIENCY OF DECISION TREES IN PREDICTING STUDENT’S ACADEMIC PERFORMANCE

    Directory of Open Access Journals (Sweden)

    S. Anupama Kumar

    2011-07-01

    Full Text Available Educational data mining is used to study the data available in the educational field and bring out the hidden knowledge from it. Classification methods like decision trees, rule mining, Bayesian networks etc. can be applied to educational data to predict student behavior, performance in examinations etc. This prediction helps tutors identify weak students and help them score better marks. The C4.5 decision tree algorithm is applied to students' internal assessment data to predict their performance in the final exam. The outcome of the decision tree predicted the number of students likely to fail or pass. The result was given to the tutor and steps were taken to improve the performance of the students predicted to fail. After the declaration of the final examination results, the marks obtained by the students were fed into the system and the results were analyzed. The comparative analysis of the results states that the prediction helped the weaker students improve and brought about a betterment in the result. To analyze the accuracy of the algorithm, it was compared with the ID3 algorithm and found to be more efficient in terms of accurately predicting the outcome of the student and the time taken to derive the tree.
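The splitting criterion behind ID3/C4.5, information gain, can be shown on a toy version of this task: choose the internal-assessment threshold that best separates predicted pass from fail. The marks and labels below are invented, not the study's data.

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_split(marks, labels):
    # Choose the internal-assessment threshold with the highest information
    # gain -- the criterion at the heart of ID3/C4.5 decision-tree induction.
    base = entropy(labels)
    best = (None, -1.0)
    for t in sorted(set(marks)):
        left = [l for m, l in zip(marks, labels) if m <= t]
        right = [l for m, l in zip(marks, labels) if m > t]
        if not left or not right:
            continue  # degenerate split, no information
        rem = (len(left) * entropy(left) + len(right) * entropy(right)) / len(labels)
        if base - rem > best[1]:
            best = (t, base - rem)
    return best

# Hypothetical internal-assessment marks (out of 50) and final-exam outcomes.
marks = [12, 18, 22, 25, 31, 35, 40, 44]
labels = ["fail", "fail", "fail", "pass", "pass", "pass", "pass", "pass"]
threshold, gain = best_split(marks, labels)
print(threshold, round(gain, 3))  # -> 22 0.954 (a perfect split of this toy data)
```

C4.5 extends this idea with gain ratio and continuous-attribute handling; the threshold search above is the shared core.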

  17. Model Predictive Control-Based Fast Charging for Vehicular Batteries

    Directory of Open Access Journals (Sweden)

    Zhibin Song

    2011-08-01

    Full Text Available Battery fast charging is one of the most significant and difficult techniques affecting the commercialization of electric vehicles (EVs. In this paper, we propose a fast charge framework based on model predictive control, with the aim of simultaneously reducing the charge duration, which represents the out-of-service time of vehicles, and the increase in temperature, which represents safety and energy efficiency during the charge process. The RC model is employed to predict the future State of Charge (SOC. A single mode lumped-parameter thermal model and a neural network trained by real experimental data are also applied to predict the future temperature in simulations and experiments respectively. A genetic algorithm is then applied to find the best charge sequence under a specified fitness function, which consists of two objectives: minimizing the charging duration and minimizing the increase in temperature. Both simulation and experiment demonstrate that the Pareto front of the proposed method dominates that of the most popular constant current constant voltage (CCCV charge method.
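The RC equivalent-circuit model used for SOC prediction can be sketched as coulomb counting plus one polarization RC pair; this is a common simplification, and all parameter values below are hypothetical rather than the paper's.

```python
def simulate_rc_cell(current_a, dt=1.0, capacity_ah=2.0,
                     r0=0.05, r1=0.02, c1=2000.0, ocv=3.7):
    # First-order RC equivalent-circuit battery model:
    # SOC by coulomb counting, polarization voltage across the RC pair,
    # terminal voltage = OCV - I*R0 - V_rc.  Positive current = discharge.
    soc, v_rc = 1.0, 0.0
    trace = []
    for i in current_a:
        soc -= i * dt / (capacity_ah * 3600.0)        # coulomb counting
        v_rc += dt * (i / c1 - v_rc / (r1 * c1))      # RC pair dynamics (Euler)
        trace.append((soc, ocv - i * r0 - v_rc))
    return trace

trace = simulate_rc_cell([2.0] * 60)   # one minute of 2 A discharge
print(trace[-1])                       # (SOC, terminal voltage) after 60 s
```

In an MPC fast-charging loop, a model of this kind supplies the SOC prediction while a thermal model supplies the temperature prediction; the optimizer then trades the two objectives off over the candidate charge sequence.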

  18. Continuous-Discrete Time Prediction-Error Identification Relevant for Linear Model Predictive Control

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    A prediction-error method tailored for model-based predictive control is presented. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model. The linear discrete-time stochastic state space model is realized from a continuous-discrete-time linear stochastic system specified using transfer functions with time delays. It is argued that the prediction-error criterion should be selected such that it is compatible with the objective function of the predictive controller in which the model...
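The innovations that a prediction-error criterion is built from can be computed with a Kalman predictor; below is a minimal scalar sketch under an assumed first-order model (not the continuous-discrete transfer-function formulation of the paper). Identification would tune the model parameters to minimize a function of these errors.

```python
import numpy as np

def prediction_errors(y, a, c, q, r):
    # One-step prediction errors (innovations) from a scalar Kalman predictor
    # for x_{k+1} = a x_k + w_k,  y_k = c x_k + v_k, with noise variances q, r.
    # A prediction-error method tunes (a, c, q, r) to minimize a criterion
    # built from these errors, e.g. their mean square.
    x, p = 0.0, 1.0                         # prior state estimate and variance
    errors = []
    for yk in y:
        e = yk - c * x                      # innovation
        s = c * p * c + r                   # innovation variance
        k = a * p * c / s                   # predictor gain
        x = a * x + k * e                   # combined time/measurement update
        p = a * p * a + q - k * s * k       # predictor Riccati recursion
        errors.append(e)
    return errors

rng = np.random.default_rng(0)
x, ys = 0.0, []
for _ in range(200):                        # simulate the assumed true system
    x = 0.9 * x + rng.normal(0, 0.1)
    ys.append(x + rng.normal(0, 0.1))
errs = prediction_errors(ys, a=0.9, c=1.0, q=0.01, r=0.01)
print(np.mean(np.square(errs)))             # criterion value at these parameters
```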

  19. Efficiently adapting graphical models for selectivity estimation

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2013-01-01

    Query optimizers rely on statistical models that succinctly describe the underlying data. Models are used to derive cardinality estimates for intermediate relations, which in turn guide the optimizer to choose the best query execution plan. The quality of the resulting plan is highly dependent...... of the selectivities of the constituent predicates. However, this independence assumption is more often than not wrong, and is considered to be the most common cause of sub-optimal query execution plans chosen by modern query optimizers. We take a step towards a principled and practical approach to performing...... cardinality estimation without making the independence assumption. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution over all the attributes in the database into small, usually two-dimensional distributions, without a significant loss...
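The independence assumption being criticized is easy to demonstrate: on correlated attributes, multiplying per-predicate selectivities can badly misestimate the conjunctive selectivity that a two-dimensional (pairwise) distribution captures exactly. The toy relation below is invented.

```python
rows = [  # toy relation with correlated attributes: (make, color)
    ("ford", "red"), ("ford", "red"), ("ford", "blue"),
    ("bmw", "blue"), ("bmw", "blue"), ("bmw", "black"),
]

def sel(pred):
    # Exact selectivity of a predicate over the toy relation.
    return sum(1 for r in rows if pred(r)) / len(rows)

make_ford = lambda r: r[0] == "ford"
color_red = lambda r: r[1] == "red"

# What an optimizer computes under the independence assumption...
independent = sel(make_ford) * sel(color_red)
# ...versus what a two-dimensional (make, color) distribution would give.
joint = sel(lambda r: make_ford(r) and color_red(r))
print(independent, joint)  # ~0.167 vs ~0.333: independence underestimates 2x
```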

  20. Efficient Modelling Methodology for Reconfigurable Underwater Robots

    DEFF Research Database (Denmark)

    Nielsen, Mikkel Cornelius; Blanke, Mogens; Schjølberg, Ingrid

    2016-01-01

    This paper considers the challenge of applying reconfigurable robots in an underwater environment. The main result presented is the development of a model for a system comprised of N, possibly heterogeneous, robots dynamically connected to each other and moving with 6 Degrees of Freedom (DOF......). This paper presents an application of the Udwadia-Kalaba Equation for modelling the Reconfigurable Underwater Robots. The constraints developed to enforce the rigid connection between robots in the system is derived through restrictions on relative distances and orientations. To avoid singularities...

  1. Efficient Modelling Methodology for Reconfigurable Underwater Robots

    DEFF Research Database (Denmark)

    Nielsen, Mikkel Cornelius; Blanke, Mogens; Schjølberg, Ingrid

    2016-01-01

    This paper considers the challenge of applying reconfigurable robots in an underwater environment. The main result presented is the development of a model for a system comprised of N, possibly heterogeneous, robots dynamically connected to each other and moving with 6 Degrees of Freedom (DOF......). This paper presents an application of the Udwadia-Kalaba Equation for modelling the Reconfigurable Underwater Robots. The constraints developed to enforce the rigid connection between robots in the system is derived through restrictions on relative distances and orientations. To avoid singularities...... in the orientation and, thereby, allow the robots to undertake any relative configuration the attitude is represented in Euler parameters....

  2. Model Predictive Control of a Continuous Vacuum Crystalliser in an Industrial Environment: A Feasibility Study

    OpenAIRE

    Moldoványi, N.; Abonyi, J.

    2009-01-01

    Crystallisers are essentially multivariable systems with high interaction amongst the process variables. Model Predictive Controllers (MPC) can handle such highly interacting multivariable systems efficiently due to their coordinated approach. In the absence of a real continuous crystalliser, a detailed momentum-model was applied using the process simulator in Simulink. This process has been controlled by a model predictive controller widely used in industry. A new framework has been worke...

  3. Efficient Modelling and Generation of Markov Automata

    NARCIS (Netherlands)

    Timmer, Mark; Katoen, Joost P.; van de Pol, Jan Cornelis; Stoelinga, Mariëlle Ida Antoinette

    2012-01-01

    This presentation introduces a process-algebraic framework with data for modelling and generating Markov automata. We show how an existing linearisation procedure for process-algebraic representations of probabilistic automata can be reused to transform systems in our new framework to a special

  4. Multitask Efficiencies in the Decision Tree Model

    CERN Document Server

    Drucker, Andrew

    2008-01-01

In Direct Sum problems [KRW], one tries to show that for a given computational model, the complexity of computing a collection $F = \{f_i\}$ of functions on independent inputs is approximately the sum of their individual complexities. In this paper, by contrast, we study the diversity of ways in which the joint computational complexity can behave when all the $f_i$ are evaluated on a \textit{common} input. Fixing some model of computational cost, let $C_F(X): \{0, 1\}^l \to \mathbf{R}$ give the cost of computing the subcollection $\{f_i(x): X_i = 1\}$, on common input $x$. What constraints do the functions $C_F(X)$ obey, when $F$ is chosen freely? $C_F(X)$ will, for reasonable models, obey nonnegativity, monotonicity, and subadditivity. We show that, in the deterministic, adaptive query model, these are `essentially' the only constraints: for any function $C(X)$ obeying these properties and any $\epsilon > 0$, there exists a family $F$ of boolean functions and a $T > 0$ such that for all $X \in \{0, 1\}^l$, ...

  5. Efficient CSL Model Checking Using Stratification

    DEFF Research Database (Denmark)

    Zhang, Lijun; Jansen, David N.; Nielson, Flemming;

    2012-01-01

    For continuous-time Markov chains, the model-checking problem with respect to continuous-time stochastic logic (CSL) has been introduced and shown to be decidable by Aziz, Sanwal, Singhal and Brayton in 1996 [ 1, 2]. Their proof can be turned into an approximation algorithm with worse than expone...

  6. Efficient Modelling Methodology for Reconfigurable Underwater Robots

    DEFF Research Database (Denmark)

    Nielsen, Mikkel Cornelius; Blanke, Mogens; Schjølberg, Ingrid

    2016-01-01

    This paper considers the challenge of applying reconfigurable robots in an underwater environment. The main result presented is the development of a model for a system comprised of N, possibly heterogeneous, robots dynamically connected to each other and moving with 6 Degrees of Freedom (DOF...

  7. Business models for material efficiency services. Conceptualization and application

    Energy Technology Data Exchange (ETDEWEB)

    Halme, Minna; Anttonen, Markku; Kuisma, Mika; Kontoniemi, Nea [Helsinki School of Economics, Department of Marketing and Management, P.O. Box 1210, 00101 Helsinki (Finland); Heino, Erja [University of Helsinki, Department of Biological and Environmental Sciences (Finland)

    2007-06-15

Despite the abundant research on material flows and the growing recognition of the need to dematerialize the economy, business enterprises are still not making the best possible use of the many opportunities for material efficiency improvements. This article proposes one possible solution: material efficiency services provided by outside suppliers. It also introduces a conceptual framework for the analysis of different business models for eco-efficient services and applies the framework to material efficiency services. Four business models are outlined and their feasibility is studied from an empirical vantage point. In contrast to much of the previous research, special emphasis is laid on the financial aspects. It appears that the most promising business models are 'material efficiency as additional service' and 'material flow management service'. Depending on the business model, prominent material efficiency service providers range from large companies that offer multiple products and/or services to smaller, specialized providers. Potential clients (users) typically lack the resources (expertise, management's time or initial funds) to conduct material efficiency improvements themselves. Customers are more likely to use material efficiency services that relate to support materials or side-streams rather than those at the core of production. Potential client organizations with a strategy of outsourcing support activities and with experience of outsourcing are more keen to use material efficiency services. (author)

  8. Efficient vector hysteresis modeling using rotationally coupled step functions

    Energy Technology Data Exchange (ETDEWEB)

    Adly, A.A., E-mail: adlyamr@gmail.com; Abd-El-Hafiz, S.K., E-mail: sabdelhafiz@gmail.com

    2012-05-01

    Vector hysteresis models are usually used as sub-modules of field computation software tools. When dealing with a massive field computation problem, computational efficiency and practicality of such models become crucial. In this paper, generalization of a recently proposed computationally efficient vector hysteresis model based upon interacting step functions is presented. More specifically, the model is generalized to cover vector hysteresis modeling of both isotropic and anisotropic magnetic media. Model configuration details as well as experimental testing and simulation results are given in the paper.

9. Modeling critical episodes of air pollution by PM10 in Santiago, Chile: Comparison of the predictive efficiency of parametric and non-parametric statistical models

    Directory of Open Access Journals (Sweden)

    Sergio A. Alvarado

    2010-12-01

Full Text Available Objective: To evaluate the predictive efficiency of parametric and non-parametric statistical models in predicting next-day critical episodes of air pollution by particulate matter (PM10) exceeding the daily air quality standard in Santiago, Chile. Accurate prediction of such episodes would allow the health authority to order restrictive measures that lessen the severity of an episode and thus protect the community's health. Methods: We used the PM10 concentrations registered by a station of the MACAM-2 Air Quality Monitoring Network (152 daily observations of 14 variables), together with meteorological information gathered from 2001 to 2004. Parametric Gamma models were fitted using STATA v11 software, and non-parametric MARS models using a demo version of MARS v2.0 distributed by Salford Systems. Results: Both modelling methods show a high correlation between observed and predicted values; the Gamma models outperform MARS for PM10 concentrations with values ...

  10. Two criteria for evaluating risk prediction models.

    Science.gov (United States)

    Pfeiffer, R M; Gail, M H

    2011-09-01

We propose and study two criteria to assess the usefulness of models that predict risk of disease incidence for screening and prevention, or the usefulness of prognostic models for management following disease diagnosis. The first criterion, the proportion of cases followed, PCF(q), is the proportion of individuals who will develop disease who are included in the proportion q of individuals in the population at highest risk. The second criterion, the proportion needed to follow-up, PNF(p), is the proportion of the general population at highest risk that one needs to follow in order that a proportion p of those destined to become cases will be followed. PCF(q) assesses the effectiveness of a program that follows 100q% of the population at highest risk. PNF(p) assesses the feasibility of covering 100p% of cases by indicating how much of the population at highest risk must be followed. We show the relationship of these two criteria to the Lorenz curve and its inverse, and present distribution theory for estimates of PCF and PNF. We develop new methods, based on influence functions, for inference for a single risk model, and also for comparing the PCFs and PNFs of two risk models, both of which were evaluated in the same validation data.
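The two criteria can be illustrated with a minimal empirical sketch (the function names and the simple rank-based estimators below are ours; the paper's influence-function-based inference is not reproduced):

```python
import numpy as np

def pcf(risk, is_case, q):
    """Proportion of cases followed: fraction of all cases found among
    the top 100q% of the population ranked by predicted risk."""
    order = np.argsort(-risk)                  # highest risk first
    n_follow = int(np.ceil(q * len(risk)))
    return is_case[order[:n_follow]].sum() / is_case.sum()

def pnf(risk, is_case, p):
    """Proportion needed to follow: smallest top-risk fraction of the
    population that contains at least 100p% of the cases."""
    order = np.argsort(-risk)
    cases_cum = np.cumsum(is_case[order]) / is_case.sum()
    n_needed = np.searchsorted(cases_cum, p) + 1
    return n_needed / len(risk)
```

For example, with five individuals of which three become cases, following the top 40% by risk captures two of the three cases, so PCF(0.4) = 2/3.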

  11. Investigating market efficiency through a forecasting model based on differential equations

    Science.gov (United States)

    de Resende, Charlene C.; Pereira, Adriano C. M.; Cardoso, Rodrigo T. N.; de Magalhães, A. R. Bosco

    2017-05-01

A new differential-equation-based model for stock price trend forecasting is proposed as a tool to investigate efficiency in an emerging market. Its predictive power was shown statistically to be higher than that of a completely random model, signaling the presence of arbitrage opportunities. Conditions under which accuracy can be enhanced are investigated, and application of the model as part of a trading strategy is discussed.

  12. Efficient finite element modeling of elastodynamic scattering with non-reflecting boundary conditions

    Science.gov (United States)

    Velichko, A.; Wilcox, P. D.

    2012-05-01

    An efficient technique for predicting the complete scattering behavior for an arbitrarily-shaped scatterer is presented. The spatial size of the modeling domain around the scatterer is as small as possible to minimize computational expense and a minimum number of models are executed. This model uses non-reflecting boundary conditions on the surface surrounding the scatterer which are non-local in space. Example results for 2D and 3D scattering in isotropic material and guided wave scattering are presented.

  13. Methods for Handling Missing Variables in Risk Prediction Models

    NARCIS (Netherlands)

    Held, Ulrike; Kessels, Alfons; Aymerich, Judith Garcia; Basagana, Xavier; ter Riet, Gerben; Moons, Karel G. M.; Puhan, Milo A.

    2016-01-01

Prediction models should be externally validated before being used in clinical practice. Many published prediction models have never been validated. Uncollected predictor variables in otherwise suitable validation cohorts are the main factor precluding external validation. We used individual patient

  14. Prediction of gas collection efficiency and particle collection artifact for atmospheric semivolatile organic compounds in multicapillary denuders.

    Science.gov (United States)

    Rowe, Mark D; Perlinger, Judith A

    2010-01-15

    A modeling approach is presented to predict the sorptive sampling collection efficiency of gaseous semivolatile organic compounds (SOCs) and the artifact caused by collection of particle-associated SOCs in multicapillary diffusion denuders containing polydimethylsiloxane (PDMS) stationary phase. Approaches are presented to estimate the equilibrium PDMS-gas partition coefficient (K(pdms)) from a solvation parameter model for any compound, and, for nonpolar compounds, from the octanol-air partition coefficient (K(oa)) if measured K(pdms) values are not available. These estimated K(pdms) values are compared with K(pdms) measured by gas chromatography. Breakthrough fraction was measured for SOCs collected from ambient air using high-flow (300 L min(-1)) and low-flow (13 L min(-1)) denuders under a range of sampling conditions (-10 to 25 degrees C; 11-100% relative humidity). Measured breakthrough fraction agreed with predictions based on frontal chromatography theory using K(pdms) and equations of Golay, Lövkvist and Jönsson within measurement precision. Analytes included hexachlorobenzene, 144 polychlorinated biphenyl congeners, and polybrominated diphenyl ethers 47 and 99. Atmospheric particle transmission efficiency was measured for the high-flow denuder (0.037-6.3 microm diameter), and low-flow denuder (0.015-3.1 microm diameter). Particle transmission predicted using equations of Gormley and Kennedy, Pich, and a modified filter model, agreed within measurement precision (high-flow denuder) or were slightly greater than (low-flow denuder) measured particle transmission. As an example application of the model, breakthrough volume and particle collection artifact for the two denuder designs were predicted as a function of K(oa) for nonpolar SOCs. The modeling approach is a necessary tool for the design and use of denuders for sorptive sampling with PDMS stationary phase.

  15. An infinitesimal model for quantitative trait genomic value prediction.

    Directory of Open Access Journals (Sweden)

    Zhiqiu Hu

Full Text Available We developed a marker-based infinitesimal model for quantitative trait analysis. In contrast to the classical infinitesimal model, we now have new information about the segregation of every individual locus of the entire genome. Under this new model, we propose that the genetic effect of an individual locus is a function of the genome location (a continuous quantity). The overall genetic value of an individual is the weighted integral of the genetic effect function along the genome. Numerical integration is performed to find the integral, which requires partitioning the entire genome into a finite number of bins. Each bin may contain many markers. The integral is approximated by the weighted sum of all the bin effects. We thus turn the problem of marker analysis into bin analysis, so that the model dimension has decreased from a virtual infinity to a finite number of bins. This new approach can efficiently handle a virtually unlimited number of markers without marker selection. The marker-based infinitesimal model requires high linkage disequilibrium of all markers within a bin. For populations with low or no linkage disequilibrium, we develop an adaptive infinitesimal model. Both the original and the adaptive models are tested using simulated data as well as beef cattle data. The simulated data analysis shows that there is always an optimal number of bins at which the predictability of the bin model is much greater than that of the original marker analysis. The beef cattle data analysis indicates that the bin model can increase the predictability from 10% (multiple marker analysis) to 33% (multiple bin analysis). The marker-based infinitesimal model paves a way towards the solution of genetic mapping and genomic selection using whole genome sequence data.
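The binning idea can be sketched numerically. This is a hypothetical toy, with simulated genotypes and ordinary least squares on bin means standing in for the paper's actual estimator:

```python
import numpy as np

def bin_genotypes(Z, n_bins):
    """Aggregate a marker matrix Z (individuals x markers) into bin scores
    by averaging the markers inside each contiguous genome bin, so the
    weighted integral is approximated by a finite sum of bin effects."""
    bins = np.array_split(np.arange(Z.shape[1]), n_bins)
    return np.column_stack([Z[:, idx].mean(axis=1) for idx in bins])

rng = np.random.default_rng(0)
Z = rng.integers(0, 3, size=(200, 5000)).astype(float)  # toy genotype codes 0/1/2
y = Z[:, :50].sum(axis=1) + rng.normal(size=200)        # toy phenotype from 50 loci
B = bin_genotypes(Z, 100)                               # 5000 markers -> 100 bins
X = np.column_stack([np.ones(len(y)), B])               # intercept + bin scores
beta, *_ = np.linalg.lstsq(X, y, rcond=None)            # fit bin effects, not markers
y_hat = X @ beta
```

The regression now has 100 bin predictors instead of 5000 marker predictors, which is the dimension reduction the abstract describes.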

  16. Infiltration under snow cover: Modeling approaches and predictive uncertainty

    Science.gov (United States)

    Meeks, Jessica; Moeck, Christian; Brunner, Philip; Hunkeler, Daniel

    2017-03-01

    method. Further, our study demonstrated that an uncertainty analysis of model predictions is easily accomplished due to the low computational demand of the models and efficient calibration software and is absolutely worth the additional investment. Lastly, development of a systematic instrumentation that evaluates the distributed, temporal evolution of snowpack drainage is vital for optimal understanding and management of cold-climate hydrologic systems.

  17. An Efficient Virtual Trachea Deformation Model

    Directory of Open Access Journals (Sweden)

    Cui Tong

    2016-01-01

Full Text Available In this paper, we present a virtual tactile model with a physically based skeleton to simulate force and deformation between a rigid tool and a soft organ. When the virtual trachea is handled, a skeleton model suitable for interactive environments is established, consisting of ligament layers, cartilage rings and muscular bars. In this skeleton, the contact force goes through the ligament layer and produces load effects at the joints, which connect the ligament layer and the cartilage rings. To capture the nonlinear shape deformation inside the local neighbourhood of a contact region, the RBF method is applied to modify the result of the linear global shape deformation by adding the nonlinear effect. Users are able to handle the virtual trachea, and results from examples with the mechanical properties of the human trachea are given to demonstrate the effectiveness of the approach.
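An RBF correction of this kind can be sketched generically. The following is a hypothetical Gaussian-kernel version (the abstract does not specify the kernel), which fits a residual displacement field at sample points in the contact region and evaluates it elsewhere:

```python
import numpy as np

def rbf_correction(centers, residuals, query, eps=1.0):
    """Gaussian radial-basis-function interpolation of displacement
    residuals: solve for weights so the correction exactly matches the
    residuals at the centers, then evaluate it at the query points."""
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-eps * d2)
    w = np.linalg.solve(kernel(centers, centers), residuals)
    return kernel(query, centers) @ w
```

By construction the correction reproduces the prescribed residuals at the centers, so the nonlinear local effect is added on top of the linear global deformation without disturbing it far from the contact region.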

  18. A Pedestrian Approach to Indoor Temperature Distribution Prediction of a Passive Solar Energy Efficient House

    Directory of Open Access Journals (Sweden)

    Golden Makaka

    2015-01-01

Full Text Available With the increase in energy consumed by buildings to keep the indoor environment within comfort levels, and the ever-increasing price of energy, there is a need to design buildings that require minimal energy to keep the indoor environment comfortable, and hence to predict the indoor temperature at the design stage. In this paper a statistical indoor temperature prediction model was developed. A passive solar house was constructed and its thermal behaviour simulated using the ECOTECT and DOE computer software. The thermal behaviour of the house was monitored for a year; the indoor temperature was observed to be within the comfort level for 85% of the total time monitored. The simulation results were compared with the measured results and with those from the prediction model. The statistical prediction model was found to agree (95%) with the measured results, and the simulation results were observed to agree (96%) with the statistical prediction model. Modelled indoor temperature was most sensitive to variations in outdoor temperature, with daily mean peaks more pronounced in summer (5%) than in winter (4%). The developed model can be used to predict the instantaneous indoor temperature for a specific house design.

  19. Efficient Electromagnetic Modelling of Complex Structures

    OpenAIRE

    Tobon Vasquez, Jorge Alberto

    2014-01-01

Part 1. Space vehicles re-entering Earth's atmosphere produce a shock wave which in turn results in a bow of plasma around the vehicle body. This plasma significantly affects all radio links between the vehicle and ground, since the electron plasma frequency reaches beyond several GHz. In this work, a model of the propagation in plasma is developed. The radiofrequency propagation from/to antennae installed aboard the vehicle to the ground stations (or Data Relay Satellites) can be predic...

  20. Efficient Smoothing for Boundary Value Models

    Science.gov (United States)

    1989-12-29

IEEE Transactions on Automatic Control, vol. 29, pp. 803-821, 1984. [2] A. Bagchi and H. Westdijk, "Smoothing ... and likelihood ratio for Gaussian boundary value processes," IEEE Transactions on Automatic Control, vol. 34, pp. 954-962, 1989. [3] R. Nikoukhah et ... pp. 77-96, 1988. [6] H. L. Weinert and U. B. Desai, "On complementary models and fixed-interval smoothing," IEEE Transactions on Automatic Control, ...

  1. Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding

    Science.gov (United States)

    Xiao, Rui; Gao, Junbin; Bossomaier, Terry

    2016-01-01

A Hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times the data compared to a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing “original pixel intensity”-based coding approaches using traditional image coders (e.g., JPEG2000) to “residual”-based approaches using a video coder for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling, due to the different spectral and shape characteristics of HS images compared to traditional videos. In this paper a novel coding framework using Reflectance Prediction Modelling (RPM) in the latest video coding standard High Efficiency Video Coding (HEVC) for HS images is proposed. An HS image presents a wealth of data where every pixel is considered a vector across the spectral bands. By quantitative comparison and analysis of pixel vector distributions along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors for different bands. To exploit the distribution of the known pixel vectors, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as an additional reference band, together with the immediate previous band, when we apply the HEVC. Every spectral band of an HS image is treated as an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. The experimental results are fully justified by three types of HS dataset with different wavelength ranges. The proposed method outperforms the existing mainstream HS encoders in terms of rate-distortion performance of HS image compression. PMID:27695102

  2. Modelling the effect of supplementary lighting on production and light utilisation efficiency of greenhouse crops.

    NARCIS (Netherlands)

    Koning, de J.C.M.

    1997-01-01

    The effect of supplementary lighting (SL) on dry matter production of greenhouse crops is predictable with ALSIM, a new crop growth model based on SUCROS87. The light utilization efficiency (LUE), defined as daily dry matter production divided by the daily photosynthetic photon flux is a parameter f

  3. Supplementary Material for: DASPfind: new efficient method to predict drug–target interactions

    KAUST Repository

    Ba Alawi, Wail

    2016-01-01

    Abstract Background Identification of novel drug–target interactions (DTIs) is important for drug discovery. Experimental determination of such DTIs is costly and time consuming, hence it necessitates the development of efficient computational methods for the accurate prediction of potential DTIs. To-date, many computational methods have been proposed for this purpose, but they suffer the drawback of a high rate of false positive predictions. Results Here, we developed a novel computational DTI prediction method, DASPfind. DASPfind uses simple paths of particular lengths inferred from a graph that describes DTIs, similarities between drugs, and similarities between the protein targets of drugs. We show that on average, over the four gold standard DTI datasets, DASPfind significantly outperforms other existing methods when the single top-ranked predictions are considered, resulting in 46.17 % of these predictions being correct, and it achieves 49.22 % correct single top ranked predictions when the set of all DTIs for a single drug is tested. Furthermore, we demonstrate that our method is best suited for predicting DTIs in cases of drugs with no known targets or with few known targets. We also show the practical use of DASPfind by generating novel predictions for the Ion Channel dataset and validating them manually. Conclusions DASPfind is a computational method for finding reliable new interactions between drugs and proteins. We show over six different DTI datasets that DASPfind outperforms other state-of-the-art methods when the single top-ranked predictions are considered, or when a drug with no known targets or with few known targets is considered. We illustrate the usefulness and practicality of DASPfind by predicting novel DTIs for the Ion Channel dataset. The validated predictions suggest that DASPfind can be used as an efficient method to identify correct DTIs, thus reducing the cost of necessary experimental verifications in the process of drug discovery

  4. Estimating the magnitude of prediction uncertainties for the APLE model

    Science.gov (United States)

    Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, we conduct an uncertainty analysis for the Annual P ...

  5. Microcellular propagation prediction model based on an improved ray tracing algorithm.

    Science.gov (United States)

    Liu, Z-Y; Guo, L-X; Fan, T-Q

    2013-11-01

    Two-dimensional (2D)/two-and-one-half-dimensional ray tracing (RT) algorithms for the use of the uniform theory of diffraction and geometrical optics are widely used for channel prediction in urban microcellular environments because of their high efficiency and reliable prediction accuracy. In this study, an improved RT algorithm based on the "orientation face set" concept and on the improved 2D polar sweep algorithm is proposed. The goal is to accelerate point-to-point prediction, thereby making RT prediction attractive and convenient. In addition, the use of threshold control of each ray path and the handling of visible grid points for reflection and diffraction sources are adopted, resulting in an improved efficiency of coverage prediction over large areas. Measured results and computed predictions are also compared for urban scenarios. The results indicate that the proposed prediction model works well and is a useful tool for microcellular communication applications.

  6. Prediction of Catastrophes: an experimental model

    CERN Document Server

    Peters, Randall D; Pomeau, Yves

    2012-01-01

    Catastrophes of all kinds can be roughly defined as short duration-large amplitude events following and followed by long periods of "ripening". Major earthquakes surely belong to the class of 'catastrophic' events. Because of the space-time scales involved, an experimental approach is often difficult, not to say impossible, however desirable it could be. Described in this article is a "laboratory" setup that yields data of a type that is amenable to theoretical methods of prediction. Observations are made of a critical slowing down in the noisy signal of a solder wire creeping under constant stress. This effect is shown to be a fair signal of the forthcoming catastrophe in both of two dynamical models. The first is an "abstract" model in which a time dependent quantity drifts slowly but makes quick jumps from time to time. The second is a realistic physical model for the collective motion of dislocations (the Ananthakrishna set of equations for creep). Hope thus exists that similar changes in the response to ...

  7. Distributed model predictive control made easy

    CERN Document Server

    Negenborn, Rudy

    2014-01-01

    The rapid evolution of computer science, communication, and information technology has enabled the application of control techniques to systems beyond the possibilities of control theory just a decade ago. Critical infrastructures such as electricity, water, traffic and intermodal transport networks are now in the scope of control engineers. The sheer size of such large-scale systems requires the adoption of advanced distributed control approaches. Distributed model predictive control (MPC) is one of the promising control methodologies for control of such systems.   This book provides a state-of-the-art overview of distributed MPC approaches, while at the same time making clear directions of research that deserve more attention. The core and rationale of 35 approaches are carefully explained. Moreover, detailed step-by-step algorithmic descriptions of each approach are provided. These features make the book a comprehensive guide both for those seeking an introduction to distributed MPC as well as for those ...

  8. Leptogenesis in minimal predictive seesaw models

    Science.gov (United States)

    Björkeroth, Fredrik; de Anda, Francisco J.; de Medeiros Varzielas, Ivo; King, Stephen F.

    2015-10-01

We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses, with Yukawa couplings to (ν_e, ν_μ, ν_τ) proportional to (0, 1, 1) and (1, n, n-2), respectively, where n is a positive integer. The neutrino Yukawa matrix is therefore characterised by two proportionality constants, with their relative phase providing a leptogenesis-PMNS link, enabling the lightest right-handed neutrino mass to be determined from neutrino data and the observed BAU. We discuss an SU(5) SUSY GUT example, where A_4 vacuum alignment provides the required Yukawa structures with n = 3, while a Z_9 symmetry fixes the relative phase to be a ninth root of unity.
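The Yukawa structure described above can be written out explicitly. In our notation (a and b are the two proportionality constants and φ their relative phase; the overall flavour basis is assumed, not taken from the source):

```latex
Y_\nu =
\begin{pmatrix}
0 & b\,e^{i\phi} \\
a & n\,b\,e^{i\phi} \\
a & (n-2)\,b\,e^{i\phi}
\end{pmatrix},
\qquad n \in \mathbb{Z}^{+},
```

where the first column couples the "atmospheric" right-handed neutrino to (0, 1, 1) and the second couples the "solar" one to (1, n, n-2).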

  9. Tenure Profiles and Efficient Separation in a Stochastic Productivity Model

    NARCIS (Netherlands)

    Buhai, I.S.; Teulings, C.N.

    2014-01-01

    We develop a theoretical model based on efficient bargaining, where both log outside productivity and log productivity in the current job follow a random walk. This setting allows the application of real option theory. We derive the efficient worker-firm separation rule. We show that wage data from

  10. Building predictive models of soil particle-size distribution

    Directory of Open Access Journals (Sweden)

    Alessandro Samuel-Rosa

    2013-04-01

Full Text Available Is it possible to build predictive models (PMs) of soil particle-size distribution (psd) in a region with complex geology and a young and unstable land surface? The main objective of this study was to answer this question. A set of 339 soil samples from a small slope catchment in Southern Brazil was used to build PMs of psd in the surface soil layer. Multiple linear regression models were constructed using terrain attributes (elevation, slope, catchment area, convergence index, and topographic wetness index). The PMs explained more than half of the data variance. This performance is similar to (or even better than) that of the conventional soil mapping approach. For some size fractions, the PM performance can reach 70%. The largest uncertainties were observed in geologically more complex areas. Therefore, significant improvements in the predictions can only be achieved if accurate geological data are made available. Meanwhile, PMs built on terrain attributes are efficient in predicting the particle-size distribution of soils in regions of complex geology.
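The regression setup described can be sketched as follows. The data here are synthetic stand-ins for the 339 samples; the attribute names follow the abstract, and the coefficients are invented for illustration:

```python
import numpy as np

# Hypothetical terrain attributes (columns: elevation, slope, catchment
# area, convergence index, topographic wetness index) and a synthetic
# surface clay fraction; none of these values come from the study.
rng = np.random.default_rng(42)
X = rng.normal(size=(339, 5))
clay = 0.3 + 0.05 * X[:, 0] - 0.02 * X[:, 1] + rng.normal(scale=0.05, size=339)

A = np.column_stack([np.ones(len(X)), X])        # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, clay, rcond=None)  # ordinary least squares fit
pred = A @ coef
r2 = 1 - ((clay - pred) ** 2).sum() / ((clay - clay.mean()) ** 2).sum()
```

The fraction of variance explained (`r2`) is the quantity the abstract refers to when it reports that the PMs "explained more than half of the data variance".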

  11. Contrasting Water-Use Efficiency (WUE) Responses of a Potato Mapping Population and Capability of Modified Ball-Berry Model to Predict Stomatal Conductance and WUE Measured at Different Environmental Conditions

    DEFF Research Database (Denmark)

    Kaminski, K. P.; Sørensen, Kirsten Kørup; Kristensen, Kristian

    2015-01-01

… .001). The leaf chlorophyll content was lower in the high-WUE group, indicating that the higher net photosynthesis rate was not due to higher leaf-N status. A less negative value of carbon isotope discrimination (δ13C) in the high-WUE group was only found in 2011. A modified Ball-Berry model was fitted to measured stomatal conductance (gs) under the systematically varied environmental conditions to identify parameter differences between the two groups which could explain their contrasting WUE. Compared to the low-WUE group, the high-WUE group showed consistently lower values of the parameter m, which is inversely … 0.5 to 3.5 kPa. The mapping population was normally distributed with respect to WUE, suggesting a multigenic nature of this trait. The WUE groups identified can be further employed for quantitative trait loci (QTL) analysis by use of gene expression studies or genome resequencing. The differences …

  12. Efficient Algorithms for Parsing the DOP Model

    CERN Document Server

    Goodman, J

    1996-01-01

    Excellent results have been reported for Data-Oriented Parsing (DOP) of natural language texts (Bod, 1993). Unfortunately, existing algorithms are both computationally intensive and difficult to implement. Previous algorithms are expensive due to two factors: the exponential number of rules that must be generated and the use of a Monte Carlo parsing algorithm. In this paper we solve the first problem by a novel reduction of the DOP model to a small, equivalent probabilistic context-free grammar. We solve the second problem by a novel deterministic parsing strategy that maximizes the expected number of correct constituents, rather than the probability of a correct parse tree. Using the optimizations, experiments yield a 97% crossing brackets rate and 88% zero crossing brackets rate. This differs significantly from the results reported by Bod, and is comparable to results from a duplication of Pereira and Schabes's (1992) experiment on the same data. We show that Bod's results are at least partially due to an e...

  13. Standardizing the performance evaluation of short-term wind prediction models

    DEFF Research Database (Denmark)

    Madsen, Henrik; Pinson, Pierre; Kariniotakis, G.

    2005-01-01

    Short-term wind power prediction is a primary requirement for efficient large-scale integration of wind generation in power systems and electricity markets. The choice of an appropriate prediction model among the numerous available models is not trivial, and has to be based on an objective...... evaluation of model performance. This paper proposes a standardized protocol for the evaluation of short-term wind-power prediction systems. A number of reference prediction models are also described, and their use for performance comparison is analysed. The use of the protocol is demonstrated using results...
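
    Evaluation protocols of this kind typically report forecast errors normalized by the installed capacity of the wind farm. A minimal sketch, with metric names and the normalization convention assumed rather than taken verbatim from the paper:

```python
import numpy as np

def evaluation_metrics(pred, meas, capacity):
    """Capacity-normalized error metrics in the spirit of standardized
    wind-forecast evaluation protocols (illustrative sketch)."""
    e = np.asarray(meas) - np.asarray(pred)   # forecast error per time step
    return {
        "NBIAS": np.mean(e) / capacity,               # normalized bias
        "NMAE":  np.mean(np.abs(e)) / capacity,       # normalized MAE
        "NRMSE": np.sqrt(np.mean(e ** 2)) / capacity, # normalized RMSE
    }

# Hypothetical 3-step example: predictions and measurements in MW,
# for a farm with 20 MW installed capacity.
metrics = evaluation_metrics(pred=[12.0, 8.5, 15.0],
                             meas=[10.0, 9.0, 14.0],
                             capacity=20.0)
print({k: round(v, 4) for k, v in metrics.items()})
```

    Normalizing by capacity makes scores comparable across wind farms of different sizes, which is what a standardized comparison of prediction systems requires.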

  14. Probabilistic application of a fugacity model to predict triclosan fate during wastewater treatment.

    Science.gov (United States)

    Bock, Michael; Lyndall, Jennifer; Barber, Timothy; Fuchsman, Phyllis; Perruchon, Elyse; Capdevielle, Marie

    2010-07-01

    The fate and partitioning of the antimicrobial compound, triclosan, in wastewater treatment plants (WWTPs) is evaluated using a probabilistic fugacity model to predict the range of triclosan concentrations in effluent and secondary biosolids. The WWTP model predicts 84% to 92% triclosan removal, which is within the range of measured removal efficiencies (typically 70% to 98%). Triclosan is predominantly removed by sorption and subsequent settling of organic particulates during primary treatment and by aerobic biodegradation during secondary treatment. Median modeled removal efficiency due to sorption is 40% for all treatment phases and 31% in the primary treatment phase. Median modeled removal efficiency due to biodegradation is 48% for all treatment phases and 44% in the secondary treatment phase. Important factors contributing to variation in predicted triclosan concentrations in effluent and biosolids include influent concentrations, solids concentrations in settling tanks, and factors related to solids retention time. Measured triclosan concentrations in biosolids and non-United States (US) effluent are consistent with model predictions. However, median concentrations in US effluent are over-predicted with this model, suggesting that differences in some aspect of treatment practices not incorporated in the model (e.g., disinfection methods) may affect triclosan removal from effluent. Model applications include predicting changes in environmental loadings associated with new triclosan applications and supporting risk analyses for biosolids-amended land and effluent receiving waters.
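
    The probabilistic treatment of the two removal pathways quantified above can be illustrated with a toy Monte Carlo that treats sorption and biodegradation as sequential removal steps. The sampling ranges are assumptions for illustration only, and the calculation is far simpler than a fugacity model:

```python
import random

random.seed(1)

# Illustrative Monte Carlo (not the paper's fugacity model): sample removal
# fractions for sorption and biodegradation, combine them as sequential
# removal steps, and summarize the distribution of overall removal.
def overall_removal(f_sorption, f_biodeg):
    # Mass surviving sorption/settling is then subject to biodegradation.
    return 1.0 - (1.0 - f_sorption) * (1.0 - f_biodeg)

samples = []
for _ in range(10_000):
    f_s = random.uniform(0.30, 0.50)   # assumed sorption removal range
    f_b = random.uniform(0.40, 0.55)   # assumed biodegradation removal range
    samples.append(overall_removal(f_s, f_b))

samples.sort()
median = samples[len(samples) // 2]
p5 = samples[int(0.05 * len(samples))]
p95 = samples[int(0.95 * len(samples))]
print(f"median removal = {median:.0%}, 90% interval = [{p5:.0%}, {p95:.0%}]")
```

    Reporting a range of predicted removal efficiencies rather than a point estimate is what makes the application "probabilistic" in the sense of the abstract.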

  15. Erratum: Probabilistic application of a fugacity model to predict triclosan fate during wastewater treatment.

    Science.gov (United States)

    Bock, Michael; Lyndall, Jennifer; Barber, Timothy; Fuchsman, Phyllis; Perruchon, Elyse; Capdevielle, Marie

    2010-10-01

    The fate and partitioning of the antimicrobial compound, triclosan, in wastewater treatment plants (WWTPs) is evaluated using a probabilistic fugacity model to predict the range of triclosan concentrations in effluent and secondary biosolids. The WWTP model predicts 84% to 92% triclosan removal, which is within the range of measured removal efficiencies (typically 70% to 98%). Triclosan is predominantly removed by sorption and subsequent settling of organic particulates during primary treatment and by aerobic biodegradation during secondary treatment. Median modeled removal efficiency due to sorption is 40% for all treatment phases and 31% in the primary treatment phase. Median modeled removal efficiency due to biodegradation is 48% for all treatment phases and 44% in the secondary treatment phase. Important factors contributing to variation in predicted triclosan concentrations in effluent and biosolids include influent concentrations, solids concentrations in settling tanks, and factors related to solids retention time. Measured triclosan concentrations in biosolids and non-United States (US) effluent are consistent with model predictions. However, median concentrations in US effluent are over-predicted with this model, suggesting that differences in some aspect of treatment practices not incorporated in the model (e.g., disinfection methods) may affect triclosan removal from effluent. Model applications include predicting changes in environmental loadings associated with new triclosan applications and supporting risk analyses for biosolids-amended land and effluent receiving waters.

  16. Explained Variation and Predictive Accuracy with an Extension to the Competing Risks Model

    DEFF Research Database (Denmark)

    Rosthøj, Susanne; Keiding, Niels

    2003-01-01

    Competing risks; efficiency; explained variation; misspecification; predictive accuracy; survival analysis

  17. Efficiency of Iranian forest industry based on DEA models

    Institute of Scientific and Technical Information of China (English)

    Soleiman Mohammadi Limaei

    2013-01-01

    Data Envelopment Analysis (DEA) is a mathematical technique to assess relative efficiencies of decision making units (DMUs). The efficiency of 14 Iranian forest companies and forest management units was investigated in 2010. Efficiency of the companies was estimated by using a traditional DEA model and a two-stage DEA model. Traditional DEA models consider all DMU activities as a black box and ignore the intermediate products, while two-stage models address intermediate processes. LINGO software was used for analysis. Overall production was divided into two processes for analysis by the two-stage model: timber harvest and marketing. Wilcoxon's signed-rank test was used to identify the differences of average efficiency in the harvesting and marketing sub-processes. Weak performance in the harvesting sub-process was the cause of low efficiency in 2010. Companies such as Neka Chob and Kelardasht proved efficient at timber harvest, and the Neka Chob forest company scored highest in overall efficiency. Finally, the reference units were identified according to the results of the two-stage DEA analysis.
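
    General DEA requires solving a linear program per DMU, but in the special case of a single input and a single output the CCR-type efficiency score reduces to each DMU's output/input ratio scaled by the best observed ratio. A minimal sketch with hypothetical company data (not the study's 14 units):

```python
# Minimal single-input/single-output DEA illustration. With one input and
# one output, CCR efficiency equals each DMU's output/input ratio divided
# by the best ratio in the sample; multi-input/output DEA instead solves
# a linear program for every DMU.
harvest_cost = [120.0, 90.0, 150.0, 100.0]   # hypothetical input per company
timber_sold  = [60.0, 54.0, 60.0, 55.0]      # hypothetical output per company

ratios = [o / i for i, o in zip(harvest_cost, timber_sold)]
best = max(ratios)
efficiency = [r / best for r in ratios]
for name, e in zip("ABCD", efficiency):
    print(f"DMU {name}: efficiency = {e:.2f}")
```

    Units scoring 1.0 form the efficient frontier and serve as the "reference units" that inefficient DMUs are benchmarked against.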

  18. Ozonolysis of Model Olefins-Efficiency of Antiozonants

    NARCIS (Netherlands)

    Huntink, N.M.; Datta, Rabin; Talma, Auke; Noordermeer, Jacobus W.M.

    2006-01-01

    In this study, the efficiency of several potential long lasting antiozonants was studied by ozonolysis of model olefins. 2-Methyl-2-pentene was selected as a model for natural rubber (NR) and 5-phenyl-2-hexene as a model for styrene butadiene rubber (SBR). A comparison was made between the

  19. Efficient modelling, generation and analysis of Markov automata

    NARCIS (Netherlands)

    Timmer, Mark

    2013-01-01

    Quantitative model checking is concerned with the verification of both quantitative and qualitative properties over models incorporating quantitative information. Increases in expressivity of these models allow more types of systems to be analysed, but also raise the difficulty of their efficient analysis.

  20. Models for estimation of land remote sensing satellites operational efficiency

    Science.gov (United States)

    Kurenkov, Vladimir I.; Kucherov, Alexander S.

    2017-01-01

    The paper deals with the problem of estimation of land remote sensing satellites operational efficiency. Appropriate mathematical models have been developed. Some results obtained with the help of the software worked out in Delphi programming support environment are presented.

  1. Efficient Work Team Scheduling: Using Psychological Models of Knowledge Retention to Improve Code Writing Efficiency

    Directory of Open Access Journals (Sweden)

    Michael J. Pelosi

    2014-12-01

    Full Text Available Development teams and programmers must retain critical information about their work during work intervals and gaps in order to improve future performance when work resumes. Despite time lapses, project managers want to maximize coding efficiency and effectiveness. By developing a mathematically justified, practically useful, and computationally tractable quantitative and cognitive model of learning and memory retention, this study establishes calculations designed to maximize scheduling payoff and optimize developer efficiency and effectiveness.
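
    A common quantitative model of memory retention is an exponential forgetting curve; a hedged sketch of how such a curve could feed a scheduling estimate follows. The functional form, the memory-strength parameter, and the re-acquisition cost are assumptions for illustration, not the study's calibrated model:

```python
import math

# Hedged sketch: exponential forgetting curve R(t) = exp(-t / s), where s
# is a memory-strength parameter in days. The retained fraction after a
# work gap determines an assumed ramp-up overhead when work resumes.
def retention(gap_days, strength=10.0):
    return math.exp(-gap_days / strength)

def rampup_overhead(gap_days, reacquire_cost=2.0):
    # Assumed: days needed to re-learn the forgotten fraction of context.
    return reacquire_cost * (1.0 - retention(gap_days))

for gap in (1, 7, 30):
    print(f"gap={gap:2d}d retention={retention(gap):.2f} "
          f"rampup={rampup_overhead(gap):.2f}d")
```

    A scheduler can then trade off gap length against this ramp-up overhead when assigning work intervals to developers.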

  2. Comparing model predictions for ecosystem-based management

    DEFF Research Database (Denmark)

    Jacobsen, Nis Sand; Essington, Timothy E.; Andersen, Ken Haste

    2016-01-01

    Ecosystem modeling is becoming an integral part of fisheries management, but there is a need to identify differences between predictions derived from models employed for scientific and management purposes. Here, we compared two models: a biomass-based food-web model (Ecopath with Ecosim (EwE)) and a size-structured fish community model. The models were compared with respect to predicted ecological consequences of fishing to identify commonalities and differences in model predictions for the California Current fish community. We compared the models regarding direct and indirect responses to fishing...... on one or more species. The size-based model predicted a higher fishing mortality needed to reach maximum sustainable yield than EwE for most species. The size-based model also predicted stronger top-down effects of predator removals than EwE. In contrast, EwE predicted stronger bottom-up effects...

  3. Evaluating Energy Efficiency Policies with Energy-Economy Models

    Energy Technology Data Exchange (ETDEWEB)

    Mundaca, Luis; Neij, Lena; Worrell, Ernst; McNeil, Michael A.

    2010-08-01

    The growing complexities of energy systems, environmental problems and technology markets are driving and testing most energy-economy models to their limits. To further advance bottom-up models from a multidisciplinary energy efficiency policy evaluation perspective, we review and critically analyse bottom-up energy-economy models and corresponding evaluation studies on energy efficiency policies to induce technological change. We use the household sector as a case study. Our analysis focuses on decision frameworks for technology choice, type of evaluation being carried out, treatment of market and behavioural failures, evaluated policy instruments, and key determinants used to mimic policy instruments. Although the review confirms criticism related to energy-economy models (e.g. unrealistic representation of decision-making by consumers when choosing technologies), they provide valuable guidance for policy evaluation related to energy efficiency. Different areas to further advance models remain open, particularly related to modelling issues, techno-economic and environmental aspects, behavioural determinants, and policy considerations.

  4. Energy efficient engine: Turbine transition duct model technology report

    Science.gov (United States)

    Leach, K.; Thurlin, R.

    1982-01-01

    The Low-Pressure Turbine Transition Duct Model Technology Program was directed toward substantiating the aerodynamic definition of a turbine transition duct for the Energy Efficient Engine. This effort was successful in demonstrating an aerodynamically viable compact duct geometry and the performance benefits associated with a low camber low-pressure turbine inlet guide vane. The transition duct design for the flight propulsion system was tested and the pressure loss goal of 0.7 percent was verified. Also, strut fairing pressure distributions, as well as wall pressure coefficients, were in close agreement with analytical predictions. Duct modifications for the integrated core/low spool were also evaluated. The total pressure loss was 1.59 percent. Although the increase in exit area in this design produced higher wall loadings, reflecting a more aggressive aerodynamic design, pressure profiles showed no evidence of flow separation. Overall, the results acquired have provided pertinent design and diagnostic information for the design of a turbine transition duct for both the flight propulsion system and the integrated core/low spool.

  5. Trait Mindfulness Predicts Efficient Top-Down Attention to and Discrimination of Facial Expressions.

    Science.gov (United States)

    Quaglia, Jordan T; Goodman, Robert J; Brown, Kirk Warren

    2016-06-01

    In social situations, skillful regulation of emotion and behavior depends on efficiently discerning others' emotions. Identifying factors that promote timely and accurate discernment of facial expressions can therefore advance understanding of social emotion regulation and behavior. The present research examined whether trait mindfulness predicts neural and behavioral markers of early top-down attention to, and efficient discrimination of, socioemotional stimuli. Attention-based event-related potentials (ERPs) and behavioral responses were recorded while participants (N = 62; White; 67% female; Mage = 19.09 years, SD = 2.14 years) completed an emotional go/no-go task involving happy, neutral, and fearful facial expressions. Mindfulness predicted larger (more negative) N100 and N200 ERP amplitudes to both go and no-go stimuli. Mindfulness also predicted faster response time that was not attributable to a speed-accuracy trade-off. Significant relations held after accounting for attentional control or social anxiety. This study adds neurophysiological support for foundational accounts that mindfulness entails moment-to-moment attention with lower tendencies toward habitual patterns of responding. Mindfulness may enhance the quality of social behavior in socioemotional contexts by promoting efficient top-down attention to and discrimination of others' emotions, alongside greater monitoring and inhibition of automatic response tendencies.

  6. Remaining Useful Lifetime (RUL) - Probabilistic Predictive Model

    Directory of Open Access Journals (Sweden)

    Ephraim Suhir

    2011-01-01

    Full Text Available Reliability evaluations and assurances cannot be delayed until the device (system) is fabricated and put into operation. Reliability of an electronic product should be conceived at the early stages of its design; implemented during manufacturing; evaluated (considering customer requirements and the existing specifications) by electrical, optical and mechanical measurements and testing; checked (screened) during manufacturing (fabrication); and, if necessary and appropriate, maintained in the field during the product's operation. A simple and physically meaningful probabilistic predictive model is suggested for the evaluation of the remaining useful lifetime (RUL) of an electronic device (system) after an appreciable deviation from its normal operation conditions has been detected, and the increase in the failure rate and the change in the configuration of the wear-out portion of the bathtub curve have been assessed. The general concepts are illustrated by numerical examples. The model can be employed, along with other PHM forecasting and interfering tools and means, to evaluate and to maintain the high level of the reliability (probability of non-failure) of a device (system) at the operation stage of its lifetime.
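
    The core idea of a probabilistic RUL estimate can be illustrated under the simplest possible assumption, an exponential reliability law with a re-assessed (elevated) failure rate. This is a sketch of the concept, not the paper's full bathtub-curve model; the numerical values are hypothetical:

```python
import math

# Illustrative sketch (assumed exponential law): once degradation is
# detected and the failure rate is re-assessed as lambda_new, the remaining
# useful lifetime until reliability falls below a required probability of
# non-failure P_req is t = -ln(P_req) / lambda_new.
def remaining_useful_life(lambda_new, p_required):
    return -math.log(p_required) / lambda_new

lam = 1e-4        # assumed elevated failure rate, failures per hour
p_req = 0.99      # required probability of non-failure
rul = remaining_useful_life(lam, p_req)
print(f"RUL = {rul:.0f} hours")
```

    Doubling the re-assessed failure rate halves the estimated RUL, which is why detecting the increase in failure rate is the critical input to the model.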

  7. A Predictive Model of Geosynchronous Magnetopause Crossings

    CERN Document Server

    Dmitriev, A; Chao, J -K

    2013-01-01

    We have developed a model predicting whether or not the magnetopause crosses geosynchronous orbit at a given location for given solar wind pressure Psw, Bz component of the interplanetary magnetic field (IMF) and geomagnetic conditions characterized by the 1-min SYM-H index. The model is based on more than 300 geosynchronous magnetopause crossings (GMCs) and about 6000 minutes when geosynchronous satellites of the GOES and LANL series are located in the magnetosheath (so-called MSh intervals) in 1994 to 2001. Minimizing the Psw required for GMCs and MSh intervals at various locations, Bz and SYM-H allows describing both the effect of magnetopause dawn-dusk asymmetry and the saturation of the Bz influence for very large southward IMF. The asymmetry is strong for large negative Bz and almost disappears when Bz is positive. We found that the larger the amplitude of negative SYM-H, the lower the solar wind pressure required for GMCs. We attribute this effect to a depletion of the dayside magnetic field by a storm-time intensification of t...

  8. Predictive modeling for EBPC in EBDW

    Science.gov (United States)

    Zimmermann, Rainer; Schulz, Martin; Hoppe, Wolfgang; Stock, Hans-Jürgen; Demmerle, Wolfgang; Zepka, Alex; Isoyan, Artak; Bomholt, Lars; Manakli, Serdar; Pain, Laurent

    2009-10-01

    We demonstrate a flow for e-beam proximity correction (EBPC) to e-beam direct write (EBDW) wafer manufacturing processes, demonstrating a solution that covers all steps from the generation of a test pattern for (experimental or virtual) measurement data creation, over e-beam model fitting, proximity effect correction (PEC), and verification of the results. We base our approach on a predictive, physical e-beam simulation tool, with the possibility to complement this with experimental data, and the goal of preparing the EBPC methods for the advent of high-volume EBDW tools. As an example, we apply and compare dose correction and geometric correction for low and high electron energies on 1D and 2D test patterns. In particular, we show some results of model-based geometric correction as it is typical for the optical case, but enhanced for the particularities of e-beam technology. The results are used to discuss PEC strategies, with respect to short and long range effects.

  9. Reduction efficiency prediction of CENIBRA's recovery boiler by direct minimization of gibbs free energy

    Directory of Open Access Journals (Sweden)

    W. L. Silva

    2008-09-01

    Full Text Available The reduction efficiency is an important variable during the black liquor burning process in the Kraft recovery boiler. The value of this variable is obtained by slow experimental routines, and the resulting delay disturbs customary control in the pulp and paper industry. This paper describes an optimization approach for the determination of the reduction efficiency in the furnace bottom of the recovery boiler based on the minimization of the Gibbs free energy. The industrial data used in this study were directly obtained from CENIBRA's data acquisition system. The resulting approach is able to predict the steady state behavior of the chemical composition in the furnace of the recovery boiler, especially the reduction efficiency, when different operational conditions are used. This result confirms the potential of this approach in the analysis of the daily operation of the recovery boiler.

  10. Efficient Cluster Algorithm for CP(N-1) Models

    CERN Document Server

    Beard, B B; Riederer, S; Wiese, U J

    2006-01-01

    Despite several attempts, no efficient cluster algorithm has been constructed for CP(N-1) models in the standard Wilson formulation of lattice field theory. In fact, there is a no-go theorem that prevents the construction of an efficient Wolff-type embedding algorithm. In this paper, we construct an efficient cluster algorithm for ferromagnetic SU(N)-symmetric quantum spin systems. Such systems provide a regularization for CP(N-1) models in the framework of D-theory. We present detailed studies of the autocorrelations and find a dynamical critical exponent that is consistent with z = 0.

  11. Efficient cluster algorithm for CP(N-1) models

    Science.gov (United States)

    Beard, B. B.; Pepe, M.; Riederer, S.; Wiese, U.-J.

    2006-11-01

    Despite several attempts, no efficient cluster algorithm has been constructed for CP(N-1) models in the standard Wilson formulation of lattice field theory. In fact, there is a no-go theorem that prevents the construction of an efficient Wolff-type embedding algorithm. In this paper, we construct an efficient cluster algorithm for ferromagnetic SU(N)-symmetric quantum spin systems. Such systems provide a regularization for CP(N-1) models in the framework of D-theory. We present detailed studies of the autocorrelations and find a dynamical critical exponent that is consistent with z=0.

  12. Modeling of Methods to Control Heat-Consumption Efficiency

    Science.gov (United States)

    Tsynaeva, E. A.; Tsynaeva, A. A.

    2016-11-01

    In this work, consideration has been given to thermophysical processes in automated heat consumption control systems (AHCCSs) of buildings, flow diagrams of these systems, and mathematical models describing the thermophysical processes during the systems' operation; an analysis of adequacy of the mathematical models has been presented. A comparison has been made of the operating efficiency of the systems and the methods to control the efficiency. It has been determined that the operating efficiency of an AHCCS depends on its diagram and the temperature chart of central quality control (CQC) and also on the temperature of a low-grade heat source for the system with a heat pump.

  13. DEA Game Cross-Efficiency Model to Urban Public Infrastructure Investment Comprehensive Efficiency of China

    Directory of Open Access Journals (Sweden)

    Yu Sun

    2016-01-01

    Full Text Available In managerial applications, data envelopment analysis (DEA) is used by numerous studies to evaluate performances and solve the allocation problem. As the problem of infrastructure investment becomes more and more important in Chinese cities, it is of vital necessity to evaluate the investment efficiency and assign the fund. In practice, there are competitions among cities due to the scarcity of investment funds. However, the traditional DEA model is a pure self-evaluation model without considering the impacts of the other decision-making units (DMUs). Even though the cross-efficiency model can figure out the best multiplier bundle for the unit and the other DMUs, the solution is not unique. Therefore, this paper introduces game theory into the DEA cross-efficiency model to evaluate infrastructure investment efficiency when cities compete with each other. In this paper, we analyze the case involving 30 provincial capital cities of China. The result shows that the approach can accomplish a unique and efficient solution for each city (DMU) after the investment fund is allocated as an input variable.

  14. Prediction of cavitation damage on spillway using K-nearest neighbor modeling.

    Science.gov (United States)

    Fadaei Kermani, E; Barani, G A; Ghaeini-Hessaroeyeh, M

    2015-01-01

    Cavitation is a common and destructive process on spillways that threatens the stability of the structure and causes damage. In this study, based on the nearest neighbor model, a method has been presented to predict cavitation damage on spillways. The model was tested using data from the Shahid Abbaspour dam spillway in Iran. The level of spillway cavitation damage was predicted for eight different flow rates, using the nearest neighbor model. Moreover, based on the cavitation index, five damage levels from no damage to major damage have been determined. Results showed that the present model predicted damage locations and levels close to observed damage during past floods. Finally, the efficiency and precision of the model were quantified by statistical coefficients. Appropriate values of the correlation coefficient, root mean square error, mean absolute error and coefficient of residual mass show the present model is suitable and efficient.
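
    The nearest-neighbor prediction step can be sketched in a few lines: find the k historical observations closest to the new cavitation index and take a majority vote over their damage levels. The index values and level labels below are hypothetical, not the Shahid Abbaspour data:

```python
from collections import Counter

# Hypothetical history of (cavitation index, observed damage level) pairs;
# lower index values correspond to more severe cavitation conditions.
history = [
    (1.8, "none"), (1.5, "minor"), (1.2, "minor"),
    (0.9, "moderate"), (0.6, "major"), (0.4, "major"),
]

def knn_predict(index, k=3):
    # k nearest neighbors by absolute distance in cavitation index,
    # then majority vote over their damage levels.
    nearest = sorted(history, key=lambda p: abs(p[0] - index))[:k]
    return Counter(level for _, level in nearest).most_common(1)[0][0]

print(knn_predict(0.7))  # query near the severe end of the index scale
```

    Prediction quality can then be scored against observed damage with the statistical coefficients named in the abstract (correlation coefficient, RMSE, MAE).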

  15. REALIGNED MODEL PREDICTIVE CONTROL OF A PROPYLENE DISTILLATION COLUMN

    Directory of Open Access Journals (Sweden)

    A. I. Hinojosa

    Full Text Available In the process industry, advanced controllers usually aim at an economic objective, which usually requires closed-loop stability and constraint satisfaction. In this paper, the application of MPC in the optimization structure of an industrial Propylene/Propane (PP) splitter is tested with a controller based on a state-space model, which is suitable for heavily disturbed environments. The simulation platform is based on the integration of the commercial dynamic simulator Dynsim® and the rigorous steady-state optimizer ROMeo® with the real-time facilities of Matlab. The predictive controller is the Infinite Horizon Model Predictive Control (IHMPC), based on a state-space model that does not require the use of a state observer because the non-minimum state is built with the past inputs and outputs. The controller considers the existence of zone control of the outputs and optimizing targets for the inputs. We verify that the controller is efficient for controlling the propylene distillation system in a disturbed scenario when compared with a conventional controller based on a state observer. The simulation results show good performance in terms of stability of the controller and rejection of large disturbances in the composition of the feed of the propylene distillation column.

  16. Model for predicting mountain wave field uncertainties

    Science.gov (United States)

    Damiens, Florentin; Lott, François; Millet, Christophe; Plougonven, Riwal

    2017-04-01

    Studying the propagation of acoustic waves through the troposphere requires knowledge of wind speed and temperature gradients from the ground up to about 10-20 km. Typical planetary boundary layer flows are known to present vertical low-level shears that can interact with mountain waves, thereby triggering small-scale disturbances. Resolving these fluctuations for long-range propagation problems is, however, not feasible because of computer memory/time restrictions and thus, they need to be parameterized. When the disturbances are small enough, these fluctuations can be described by linear equations. Previous works by co-authors have shown that the critical layer dynamics that occur near the ground produce large horizontal flows and buoyancy disturbances that result in intense downslope winds and gravity wave breaking. While these phenomena manifest almost systematically for high Richardson numbers and when the boundary layer depth is relatively small compared to the mountain height, the process by which static stability affects downslope winds remains unclear. In the present work, new linear mountain gravity wave solutions are tested against numerical predictions obtained with the Weather Research and Forecasting (WRF) model. For Richardson numbers typically larger than unity, the mesoscale model is used to quantify the effect of neglected nonlinear terms on downslope winds and mountain wave patterns. At these regimes, the large downslope winds transport warm air, a so-called "Foehn" effect that can impact sound propagation properties. The sensitivity of small-scale disturbances to Richardson number is quantified using two-dimensional spectral analysis. It is shown through a pilot study of subgrid scale fluctuations of boundary layer flows over realistic mountains that the cross-spectrum of the mountain wave field is made up of the same components found in WRF simulations. The impact of each individual component on acoustic wave propagation is discussed in terms of

  17. Bayesian prediction of spatial count data using generalized linear mixed models

    DEFF Research Database (Denmark)

    Christensen, Ole Fredslund; Waagepetersen, Rasmus Plenge

    2002-01-01

    Spatial weed count data are modeled and predicted using a generalized linear mixed model combined with a Bayesian approach and Markov chain Monte Carlo. Informative priors for a data set with sparse sampling are elicited using a previously collected data set with extensive sampling. Furthermore, we...... demonstrate that so-called Langevin-Hastings updates are useful for efficient simulation of the posterior distributions, and we discuss computational issues concerning prediction....
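
    The posterior-simulation idea can be illustrated with a much simpler stand-in: a random-walk Metropolis sampler (the paper uses Langevin-Hastings updates) for a single log-rate with a normal prior and a Poisson likelihood. The data and prior below are hypothetical:

```python
import math
import random

random.seed(42)

# Simplified stand-in for the paper's MCMC: random-walk Metropolis for the
# posterior of a log-rate theta with prior N(0, 1) and likelihood
# counts[i] ~ Poisson(exp(theta)). Hypothetical weed counts at one site.
counts = [3, 4, 2, 5, 3]

def log_post(theta):
    lam = math.exp(theta)
    loglik = sum(c * theta - lam for c in counts)  # Poisson, up to a constant
    return loglik - 0.5 * theta ** 2               # plus N(0,1) log-prior

theta, samples = 0.0, []
for it in range(20_000):
    prop = theta + random.gauss(0.0, 0.3)          # random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(theta):
        theta = prop                               # Metropolis accept
    if it >= 5_000:                                # discard burn-in
        samples.append(theta)

mean_rate = sum(math.exp(t) for t in samples) / len(samples)
print(f"posterior mean rate = {mean_rate:.1f}")
```

    Langevin-Hastings proposals differ by drifting toward the gradient of the log-posterior, which is what makes them more efficient than this plain random walk for high-dimensional spatial models.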

  18. A Predictive Distribution Model for Cooperative Braking System of an Electric Vehicle

    OpenAIRE

    Hongqiang Guo; Hongwen He; Xuelian Xiao

    2014-01-01

    A predictive distribution model for a series cooperative braking system of an electric vehicle is proposed, which can solve the real-time problem of the optimum braking force distribution. To get the predictive distribution model, firstly three disciplines of the maximum regenerative energy recovery capability, the maximum generating efficiency and the optimum braking stability are considered, then an off-line process optimization stream is designed, particularly the optimal Latin hypercube d...

  19. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    Full Text Available We propose a weather prediction model in this article based on neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the “fuzzy rule-based neural network”, which simulates sequential relations among fuzzy sets using artificial neural network; and the second part is the “neural fuzzy inference system”, which is based on the first part, but could learn new fuzzy rules from the previous ones according to the algorithm we proposed. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. It is well known that the need for accurate weather prediction is apparent when considering the benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the “accurate” prediction results meaningless and the numerical prediction model is often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we make the predicted outcomes of precipitation more accurate and the prediction methods simpler than by using the complex numerical forecasting model that would occupy large computation resources, be time-consuming and has a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than by using traditional artificial neural networks that have low predictive accuracy.

  20. RFI modeling and prediction approach for SATOP applications: RFI prediction models

    Science.gov (United States)

    Nguyen, Tien M.; Tran, Hien T.; Wang, Zhonghai; Coons, Amanda; Nguyen, Charles C.; Lane, Steven A.; Pham, Khanh D.; Chen, Genshe; Wang, Gang

    2016-05-01

    This paper describes a technical approach for the development of RFI prediction models using carrier synchronization loop when calculating Bit or Carrier SNR degradation due to interferences for (i) detecting narrow-band and wideband RFI signals, and (ii) estimating and predicting the behavior of the RFI signals. The paper presents analytical and simulation models and provides both analytical and simulation results on the performance of USB (Unified S-Band) waveforms in the presence of narrow-band and wideband RFI signals. The models presented in this paper will allow the future USB command systems to detect the RFI presence, estimate the RFI characteristics and predict the RFI behavior in real-time for accurate assessment of the impacts of RFI on the command Bit Error Rate (BER) performance. The command BER degradation model presented in this paper also allows the ground system operator to estimate the optimum transmitted SNR to maintain a required command BER level in the presence of both friendly and un-friendly RFI sources.
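
    The SNR-degradation idea can be illustrated with the generic textbook relation for BPSK (not the paper's USB waveform model): the bit error rate is Q(sqrt(2 Eb/N0)), and wideband RFI folded into the noise floor raises the effective noise density. The operating points below are illustrative:

```python
import math

# Generic textbook illustration (not the paper's USB model): BPSK BER and
# the Eb/N0 degradation when wideband RFI adds to the noise density.
def q_func(x):
    # Gaussian tail probability Q(x) via the complementary error function.
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ber_bpsk(ebn0_db):
    ebn0 = 10 ** (ebn0_db / 10)
    return q_func(math.sqrt(2 * ebn0))

def degraded_ebn0_db(ebn0_db, interference_to_noise_db):
    # Effective Eb/(N0 + I0): interference power adds to the noise floor.
    inr = 10 ** (interference_to_noise_db / 10)
    return ebn0_db - 10 * math.log10(1 + inr)

clean = ber_bpsk(9.6)                             # clean-channel BER
with_rfi = ber_bpsk(degraded_ebn0_db(9.6, 0.0))   # RFI equal to noise floor
print(f"BER clean = {clean:.1e}, with RFI = {with_rfi:.1e}")
```

    Inverting this kind of relation is what lets an operator estimate the extra transmitted SNR needed to hold a required command BER in the presence of RFI.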

  1. Efficient Finite Element Modeling of Elastodynamic Scattering from Near Surface and Surface-Breaking Defects

    Science.gov (United States)

    Velichko, A.; Wilcox, P. D.

    2011-06-01

    A robust and efficient technique for predicting the complete scattering behavior for an arbitrarily-shaped defect which is located near a free surface in an otherwise homogeneous anisotropic half-space is presented that can be implemented in a commercial FE package. The spatial size of the modeling domain around the defect is as small as possible to minimize computational expense and a minimum number of models are executed. Example results for 2D wave scattering in isotropic material are presented.

  2. Natural selection at work: an accelerated evolutionary computing approach to predictive model selection

    Directory of Open Access Journals (Sweden)

    Olcay Akman

    2010-07-01

    Full Text Available We implement genetic algorithm based predictive model building as an alternative to traditional stepwise regression. We then employ the Information Complexity Measure (ICOMP) as a measure of model fitness instead of the commonly used R-square measure. Furthermore, we propose some modifications to the genetic algorithm to increase overall efficiency.
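    The GA-plus-information-criterion idea above can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: ICOMP is not reproduced here, so AIC stands in as the fitness measure, and the operator choices (truncation selection, one-point crossover, bitwise mutation) and the names `ga_select` and `fit_ols_aic` are the sketch's own assumptions.

```python
import random

import numpy as np

def fit_ols_aic(X, y, mask):
    """Fit OLS on the selected columns; return AIC (a stand-in for ICOMP)."""
    cols = [i for i, keep in enumerate(mask) if keep]
    if not cols:
        return float("inf")
    Xs = np.column_stack([np.ones(len(y))] + [X[:, i] for i in cols])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = float(np.sum((y - Xs @ beta) ** 2))
    n, k = len(y), Xs.shape[1]
    return n * np.log(rss / n) + 2 * k

def ga_select(X, y, pop_size=30, generations=40, p_mut=0.1, seed=0):
    """Evolve boolean inclusion masks; lower fitness (AIC) is better."""
    rng = random.Random(seed)
    d = X.shape[1]
    pop = [[rng.random() < 0.5 for _ in range(d)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda m: fit_ols_aic(X, y, m))
        elite = pop[: pop_size // 2]                  # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, d)                 # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([not g if rng.random() < p_mut else g
                             for g in child])         # bitwise mutation
        pop = elite + children
    return min(pop, key=lambda m: fit_ols_aic(X, y, m))
```

Unlike stepwise regression, which adds or drops one predictor at a time, the population explores many subsets in parallel, which is the efficiency argument the abstract makes.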

  3. Prediction models : the right tool for the right problem

    NARCIS (Netherlands)

    Kappen, Teus H.; Peelen, Linda M.

    2016-01-01

    PURPOSE OF REVIEW: Perioperative prediction models can help to improve personalized patient care by providing individual risk predictions to both patients and providers. However, the scientific literature on prediction model development and validation can be quite technical and challenging to understand.

  4. Efficiency is a legitimate consideration in equitable distribution of cadaveric transplants: development of an efficiency-equality model of equity.

    Science.gov (United States)

    Higgins, R M; Johnson, R; Jones, M N A; Rudge, C

    2005-03-01

    It is proposed that equity is a trade-off, or compromise, between equality and efficiency. The kidney transplant allocation algorithm currently used in the United Kingdom (NAT) was tested in the efficiency-equity model. In an exercise of 2000 past UK donors and a dynamic waiting list of 5000 potential recipients, 4000 transplants were allocated either by NAT, by equal allocation (EQ) (a lottery), or by efficiency (EF). Diabetic recipients received 7.4% of transplants in NAT, 8.6% in EQ, and 0% in EF; paediatric recipients received 6.8% in NAT, 0.6% in EQ, and 0.7% in the EF model. For HLA matching, there were 77.9% favourable or 000 matches in NAT, 3.0% in EQ, and 53.1% in EF. Predicted survival showed better outcomes in EF versus NAT (P < .0001) and in NAT versus EQ (P = .05). The NAT allocation system favours paediatric recipients and does not deny diabetics the chance of a transplant, broadly in line with published public and professional opinions. The NAT scheme achieves better HLA matching than the EF model, and this suggests that the rationale for allocation based primarily on HLA matching could be reexamined.

  5. Foundation Settlement Prediction Based on a Novel NGM Model

    Directory of Open Access Journals (Sweden)

    Peng-Yu Chen

    2014-01-01

    Full Text Available Prediction of foundation or subgrade settlement is very important during engineering construction. Given that many settlement-time sequences follow a nonhomogeneous index trend, a novel grey forecasting model called the NGM(1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM(1,1,k,c) model has the property of white exponential law coincidence and can predict a pure nonhomogeneous index sequence precisely. Two case studies were used to verify the predictive effect of the NGM(1,1,k,c) model for settlement prediction. The results show that this model achieves excellent prediction accuracy; thus, the model is well suited for simulation and prediction of approximately nonhomogeneous index sequences and has excellent application value in settlement prediction.
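    For orientation, the classic GM(1,1) grey model, of which the NGM(1,1,k,c) model is a generalization for nonhomogeneous index trends, can be sketched as follows. This is an assumption-laden sketch: it implements only the basic GM(1,1) (accumulated generation, least-squares fit of the whitenization equation, inverse accumulation), not the paper's NGM(1,1,k,c) variant, and the function name is hypothetical.

```python
import numpy as np

def gm11_fit_predict(x0, steps):
    """Classic GM(1,1) grey model: fit to series x0 and forecast `steps` ahead."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                        # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])             # mean sequence of consecutive x1 values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # x0[k] = -a*z1[k] + b
    n = len(x0)
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # whitenization solution
    x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])  # inverse AGO
    return x0_hat[n:]
```

On a homogeneous index (pure exponential) sequence the fit is nearly exact, which is precisely the property the NGM(1,1,k,c) model extends to nonhomogeneous index sequences.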

  6. Predictability of the Indian Ocean Dipole in the coupled models

    Science.gov (United States)

    Liu, Huafeng; Tang, Youmin; Chen, Dake; Lian, Tao

    2017-03-01

    In this study, the Indian Ocean Dipole (IOD) predictability, measured by the Indian Dipole Mode Index (DMI), is comprehensively examined at the seasonal time scale, including its actual prediction skill and potential predictability, using the ENSEMBLES multi-model ensembles and the recently developed information-based theoretical framework of predictability. It was found that all model predictions have useful skill, normally defined as an anomaly correlation coefficient larger than 0.5, only at around 2-3 month leads. This is mainly because there are more false alarms in predictions as the lead time increases. The DMI predictability has significant seasonal variation, and predictions whose target seasons are boreal summer (JJA) and autumn (SON) are more reliable than those for other seasons. All of the models fail to predict the IOD onset before May and suffer from the winter (DJF) predictability barrier. The potential predictability study indicates that, with model development and initialization improvement, the prediction of IOD onset is likely to improve, but the winter barrier cannot be overcome. The IOD predictability also has decadal variation, with high skill during the 1960s and the early 1990s, and low skill during the early 1970s and early 1980s, which is very consistent with the potential predictability. The main factors controlling the IOD predictability, including its seasonal and decadal variations, are also analyzed in this study.

  7. Efficient Modeling for Short Channel MOS Circuit Simulation.

    Science.gov (United States)

    1982-08-01

    …of Computer Science and Engineering. Key words and phrases: MOS transistor modeling, numerical optimization, model parameter estimation. … current-voltage characteristics of MOS transistors. Although capacitances and their model parameters have been omitted for simplicity, there is no… constructing a circuit model of the MOS field-effect transistor. The model is nothing more than a set of equations which predicts the device's current-voltage characteristics.

  8. Time – Delay Simulated Artificial Neural Network Models for Predicting Shelf Life of Processed Cheese

    Directory of Open Access Journals (Sweden)

    Sumit Goyal

    2012-05-01

    Full Text Available This paper highlights the significance of Time-Delay ANN models for predicting the shelf life of processed cheese stored at 7-8 °C. The Bayesian regularization algorithm was selected as the training function. The number of neurons in single and multiple hidden layers was varied from 1 to 20. The network was trained with up to 100 epochs. Mean square error, root mean square error, coefficient of determination and Nash-Sutcliffe coefficient were used to assess the prediction capability of the developed models. Time-Delay ANN models with multiple hidden layers are quite efficient in predicting the shelf life of processed cheese stored at 7-8 °C.

  9. Cognitive trait anxiety, situational stress, and mental effort predict shifting efficiency: Implications for attentional control theory.

    Science.gov (United States)

    Edwards, Elizabeth J; Edwards, Mark S; Lyvers, Michael

    2015-06-01

    Attentional control theory (ACT) predicts that trait anxiety and situational stress interact to impair performance on tasks that involve attentional shifting. The theory suggests that anxious individuals recruit additional effort to prevent shortfalls in performance effectiveness (accuracy), with deficits becoming evident in processing efficiency (the relationship between accuracy and time taken to perform the task). These assumptions, however, have not been systematically tested. The relationship between cognitive trait anxiety, situational stress, and mental effort in a shifting task (Wisconsin Card Sorting Task) was investigated in 90 participants. Cognitive trait anxiety was operationalized using questionnaire scores, situational stress was manipulated through ego threat instructions, and mental effort was measured using a visual analogue scale. Dependent variables were performance effectiveness (an inverse proportion of perseverative errors) and processing efficiency (an inverse proportion of perseverative errors divided by response time on perseverative error trials). The predictors were not associated with performance effectiveness; however, we observed a significant 3-way interaction on processing efficiency. At higher mental effort (+1 SD), higher cognitive trait anxiety was associated with poorer efficiency independently of situational stress, whereas at lower effort (-1 SD), this relationship was highly significant and most pronounced for those in the high-stress condition. These results are important because they provide the first systematic test of the relationship between trait anxiety, situational stress, and mental effort on shifting performance. The data are also consistent with the notion that effort moderates the relationship between anxiety and shifting efficiency, but not effectiveness.

  10. Elastodynamic modeling and joint reaction prediction for 3-PRS PKM

    Institute of Scientific and Technical Information of China (English)

    张俊; 赵艳芹

    2015-01-01

    To gain a thorough understanding of the load state of parallel kinematic machines (PKMs), a methodology of elastodynamic modeling and joint reaction prediction is proposed. For this purpose, a Sprint Z3 model is used as a case study to illustrate the process of joint reaction analysis. The substructure synthesis method is applied to derive an analytical elastodynamic model for the 3-PRS PKM device, in which the compliances of limbs and joints are considered. Each limb assembly is modeled as a spatial beam with a non-uniform cross-section supported by lumped virtual springs at the centers of the revolute and spherical joints. By introducing deformation compatibility conditions between the limbs and the platform, the governing equations of motion of the system are obtained. After degenerating the governing equations into quasi-static equations, the effects of gravity on system deflections and joint reactions are investigated with the purpose of providing useful information for kinematic calibration, component strength calculations and structural optimization of the 3-PRS PKM module. The simulation results indicate that the gravity-induced elastic deformation of the moving platform in the direction of gravity is quite large and cannot be ignored. Meanwhile, the distributions of joint reactions are axisymmetric and position-dependent. It is worth noting that the proposed elastodynamic modeling method combines the accuracy of the finite element method with the concision of the analytical method, so that it can be used to predict the stiffness characteristics and joint reactions of a PKM throughout its entire workspace in a quick and accurate manner. Moreover, the present model can easily be applied to evaluate the overall rigidity performance as well as the statics of other PKMs with high efficiency after minor modifications.

  11. Predictive Modeling of Defibrillation utilizing Hexahedral and Tetrahedral Finite Element Models: Recent Advances

    Science.gov (United States)

    Triedman, John K.; Jolley, Matthew; Stinstra, Jeroen; Brooks, Dana H.; MacLeod, Rob

    2008-01-01

    ICD implants may be complicated by body size and anatomy. One approach to this problem has been the adoption of creative, extracardiac implant strategies using standard ICD components. Because data on the safety or efficacy of such ad hoc implant strategies are lacking, we have developed image-based finite element models (FEMs) to compare electric fields and expected defibrillation thresholds (DFTs) using standard and novel electrode locations. In this paper, we review recently published studies by our group using such models, and progress in meshing strategies to improve efficiency and visualization. Our preliminary observations predict that there may be large changes in DFTs with clinically relevant variations of electrode placement. Extracardiac ICDs of various lead configurations are predicted to be effective in both children and adults. This approach may aid both ICD development and patient-specific optimization of electrode placement, but the simplified nature of current models dictates further development and validation prior to clinical or industrial utilization. PMID:18817926

  12. Predicting the oral uptake efficiency of chemicals in mammals: Combining the hydrophilic and lipophilic range

    Energy Technology Data Exchange (ETDEWEB)

    O' Connor, Isabel A., E-mail: i.oconnor@science.ru.nl [Radboud University Nijmegen, Institute for Water and Wetland Research, Department of Environmental Science, P.O. Box 9010, NL-6500 GL, Nijmegen (Netherlands); Huijbregts, Mark A.J., E-mail: m.huijbregts@science.ru.nl [Radboud University Nijmegen, Institute for Water and Wetland Research, Department of Environmental Science, P.O. Box 9010, NL-6500 GL, Nijmegen (Netherlands); Ragas, Ad M.J., E-mail: a.ragas@science.ru.nl [Radboud University Nijmegen, Institute for Water and Wetland Research, Department of Environmental Science, P.O. Box 9010, NL-6500 GL, Nijmegen (Netherlands); Open University, School of Science, P.O. Box 2960,6401 DL Heerlen (Netherlands); Hendriks, A. Jan, E-mail: a.j.hendriks@science.ru.nl [Radboud University Nijmegen, Institute for Water and Wetland Research, Department of Environmental Science, P.O. Box 9010, NL-6500 GL, Nijmegen (Netherlands)

    2013-01-01

    Environmental risk assessment requires models for estimating the bioaccumulation of untested compounds. So far, bioaccumulation models have focused on lipophilic compounds, and only a few have included hydrophilic compounds. Our aim was to extend an existing bioaccumulation model to estimate the oral uptake efficiency of pollutants in mammals for compounds over a wide K_ow range with an emphasis on hydrophilic compounds, i.e. compounds in the lower K_ow range. Usually, most models use octanol as a single surrogate for the membrane and thus neglect the bilayer structure of the membrane. However, compounds with polar groups can have different affinities for the different membrane regions. Therefore, an existing bioaccumulation model was extended by dividing the diffusion resistance through the membrane into an outer and inner membrane resistance, where the solvents octanol and heptane were used as surrogates for these membrane regions, respectively. The model was calibrated with uptake efficiencies of environmental pollutants measured in different mammals during feeding studies combined with human oral uptake efficiencies of pharmaceuticals. The new model estimated the uptake efficiency of neutral (RMSE = 14.6) and dissociating (RMSE = 19.5) compounds with log K_ow ranging from −10 to +8. The inclusion of the K_hw improved uptake estimation for 33% of the hydrophilic compounds (log K_ow < 0) (r² = 0.51, RMSE = 22.8) compared with the model based on K_ow only (r² = 0.05, RMSE = 34.9), while hydrophobic compounds (log K_ow > 0) were estimated equally by both model versions with RMSE = 15.2 (K_ow and K_hw) and RMSE = 15.7 (K_ow only). The model can be used to estimate the oral uptake efficiency for both hydrophilic and hydrophobic compounds. -- Highlights: ► A mechanistic model was developed to estimate oral uptake efficiency. ► Model covers wide log K_ow range (−10 to +8) and several mammalian

  13. Estimating Energy Conversion Efficiency of Thermoelectric Materials: Constant Property Versus Average Property Models

    Science.gov (United States)

    Armstrong, Hannah; Boese, Matthew; Carmichael, Cody; Dimich, Hannah; Seay, Dylan; Sheppard, Nathan; Beekman, Matt

    2017-01-01

    Maximum thermoelectric energy conversion efficiencies are calculated using the conventional "constant property" model and the recently proposed "cumulative/average property" model (Kim et al. in Proc Natl Acad Sci USA 112:8205, 2015) for 18 high-performance thermoelectric materials. We find that the constant property model generally predicts higher energy conversion efficiency for nearly all materials and temperature differences studied. Although significant deviations are observed in some cases, on average the constant property model predicts an efficiency that is a factor of 1.16 larger than that predicted by the average property model, with even lower deviations for temperature differences typical of energy harvesting applications. Based on our analysis, we conclude that the conventional dimensionless figure of merit ZT obtained from the constant property model, while not applicable for some materials with strongly temperature-dependent thermoelectric properties, remains a simple yet useful metric for initial evaluation and/or comparison of thermoelectric materials, provided the ZT at the average temperature of projected operation, not the peak ZT, is used.
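    The constant property model referenced above has a standard closed form: the maximum conversion efficiency is the Carnot factor times a ZT-dependent reduction, η = (ΔT/T_h) · (√(1+ZT̄) − 1)/(√(1+ZT̄) + T_c/T_h), with ZT̄ evaluated at the mean temperature (T_h + T_c)/2. A minimal sketch (function names are the sketch's own):

```python
import math

def carnot_efficiency(t_hot, t_cold):
    """Carnot limit for reservoirs at t_hot and t_cold (in kelvin)."""
    return (t_hot - t_cold) / t_hot

def max_te_efficiency(zt_avg, t_hot, t_cold):
    """Constant-property model: maximum thermoelectric conversion efficiency
    for figure of merit ZT evaluated at the mean temperature."""
    s = math.sqrt(1.0 + zt_avg)
    return carnot_efficiency(t_hot, t_cold) * (s - 1.0) / (s + t_cold / t_hot)
```

For example, ZT̄ = 1 between 500 K and 300 K gives roughly 8% efficiency, about a fifth of the 40% Carnot limit, consistent with the use of ZT as a first-pass screening metric.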

  14. Leptogenesis in minimal predictive seesaw models

    Energy Technology Data Exchange (ETDEWEB)

    Björkeroth, Fredrik [School of Physics and Astronomy, University of Southampton,Southampton, SO17 1BJ (United Kingdom); Anda, Francisco J. de [Departamento de Física, CUCEI, Universidad de Guadalajara,Guadalajara (Mexico); Varzielas, Ivo de Medeiros; King, Stephen F. [School of Physics and Astronomy, University of Southampton,Southampton, SO17 1BJ (United Kingdom)

    2015-10-15

    We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the “atmospheric” and “solar” neutrino masses with Yukawa couplings to (ν_e, ν_μ, ν_τ) proportional to (0,1,1) and (1,n,n−2), respectively, where n is a positive integer. The neutrino Yukawa matrix is therefore characterised by two proportionality constants with their relative phase providing a leptogenesis-PMNS link, enabling the lightest right-handed neutrino mass to be determined from neutrino data and the observed BAU. We discuss an SU(5) SUSY GUT example, where A_4 vacuum alignment provides the required Yukawa structures with n=3, while a ℤ_9 symmetry fixes the relative phase to be a ninth root of unity.

  15. QSPR Models for Octane Number Prediction

    Directory of Open Access Journals (Sweden)

    Jabir H. Al-Fahemi

    2014-01-01

    Full Text Available Quantitative structure-property relationship (QSPR) analysis is performed as a means to predict the octane number of hydrocarbons by correlating the property with parameters calculated from molecular structure; such parameters are molecular mass M, hydration energy EH, boiling point BP, octanol/water distribution coefficient logP, molar refractivity MR, critical pressure CP, critical volume CV, and critical temperature CT. Principal component analysis (PCA) and the multiple linear regression technique (MLR) were performed to examine the relationship between multiple variables of the above parameters and the octane number of hydrocarbons. The results of PCA explain the interrelationships between octane number and the different variables. Correlation coefficients were calculated using MS Excel to examine the relationship between multiple variables of the above parameters and the octane number of hydrocarbons. The data set was split into a training set of 40 hydrocarbons and a validation set of 25 hydrocarbons. The linear relationship between the selected descriptors and the octane number has a coefficient of determination (R² = 0.932), statistical significance (F = 53.21), and standard error (s = 7.7). The obtained QSPR model was applied to the validation set of octane numbers for hydrocarbons, giving R²_CV = 0.942 and s = 6.328.
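    The MLR-with-external-validation workflow above reduces to ordinary least squares plus R² bookkeeping on two disjoint sets. A minimal sketch, with hypothetical function names and synthetic data standing in for the molecular descriptors:

```python
import numpy as np

def design(X):
    """Prepend an intercept column to the descriptor matrix."""
    return np.column_stack([np.ones(len(X)), np.asarray(X)])

def mlr_train_validate(X_train, y_train, X_valid, y_valid):
    """OLS fit on the training set; R^2 on both training and validation sets."""
    beta, *_ = np.linalg.lstsq(design(X_train), y_train, rcond=None)

    def r_squared(X, y):
        resid = y - design(X) @ beta
        return 1.0 - (resid @ resid) / np.sum((y - y.mean()) ** 2)

    return beta, r_squared(X_train, y_train), r_squared(X_valid, y_valid)
```

The validation R² is computed with coefficients frozen from the training fit, which is what makes it an external check rather than a refit.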

  16. Energy technologies and energy efficiency in economic modelling

    DEFF Research Database (Denmark)

    Klinge Jacobsen, Henrik

    1998-01-01

    This paper discusses different approaches to incorporating energy technologies and technological development in energy-economic models. Technological development is a very important issue in long-term energy demand projections and in environmental analyses. Different assumptions on technological development are one of the main causes for the very diverging results which have been obtained using bottom-up and top-down models for analysing the costs of greenhouse gas mitigation. One of the objectives of studies comparing model results has been to create comparable model assumptions regarding… of renewable energy and especially wind power will increase the rate of efficiency improvement. A technologically based model in this case indirectly makes the energy efficiency endogenous in the aggregate energy-economy model.

  17. An Efficient Multitask Scheduling Model for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Hongsheng Yin

    2014-01-01

    Full Text Available The sensor nodes of a multitask wireless network are constrained in performance-driven computation. Theoretical studies of the data processing model of wireless sensor nodes suggest satisfying the requirements of high quality of service (QoS) of multiple application networks, thus improving the efficiency of the network. In this paper, we present a priority-based data processing model for multitask sensor nodes in the architecture of a multitask wireless sensor network. The proposed model is derived from the M/M/1 queuing model of queuing theory, with which the average delay of data packets passing through sensor nodes is estimated. The model is validated with real data from the Huoerxinhe Coal Mine. By applying the proposed priority-based data processing model in the multitask wireless sensor network, the average delay of data packets in a sensor node is reduced by nearly 50%. The simulation results show that the proposed model can improve the throughput of the network efficiently.
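    The M/M/1 delay estimate underlying the model above has standard closed forms: utilization ρ = λ/μ, mean number in system L = ρ/(1 − ρ), and mean sojourn time W = 1/(μ − λ), consistent with Little's law L = λW. A minimal sketch of the base (non-priority) case; the paper's priority extension is not reproduced:

```python
def mm1_metrics(lam, mu):
    """Steady-state M/M/1 metrics for arrival rate lam and service rate mu:
    utilization, mean number of packets in the node, and mean delay per packet."""
    if lam >= mu:
        raise ValueError("queue is unstable: arrival rate must be below service rate")
    rho = lam / mu             # utilization
    L = rho / (1.0 - rho)      # mean number in system
    W = 1.0 / (mu - lam)       # mean sojourn time (queueing + service)
    return rho, L, W
```

For example, a node receiving 5 packets/s and serving 10 packets/s is 50% utilized and holds one packet on average, with a 0.2 s mean delay.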

  18. Comparison of different efficiency criteria for hydrological model assessment

    Directory of Open Access Journals (Sweden)

    P. Krause

    2005-01-01

    Full Text Available The evaluation of hydrologic model behaviour and performance is commonly made and reported through comparisons of simulated and observed variables. Frequently, comparisons are made between simulated and measured streamflow at the catchment outlet. In distributed hydrological modelling approaches, additional comparisons of simulated and observed measurements for multi-response validation may be integrated into the evaluation procedure to assess overall modelling performance. In both approaches, single and multi-response, efficiency criteria are commonly used by hydrologists to provide an objective assessment of the "closeness" of the simulated behaviour to the observed measurements. While there are a few efficiency criteria such as the Nash-Sutcliffe efficiency, coefficient of determination, and index of agreement that are frequently used in hydrologic modeling studies and reported in the literature, there are a large number of other efficiency criteria to choose from. The selection and use of specific efficiency criteria and the interpretation of the results can be a challenge for even the most experienced hydrologist since each criterion may place different emphasis on different types of simulated and observed behaviours. In this paper, the utility of several efficiency criteria is investigated in three examples using a simple observed streamflow hydrograph.
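    The three criteria named above (Nash-Sutcliffe efficiency, coefficient of determination, index of agreement) have compact closed forms; the sketch below uses their standard definitions, not the paper's code, and illustrates how each weighs model behaviour differently.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1 is perfect."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def coefficient_of_determination(obs, sim):
    """Squared Pearson correlation; insensitive to additive or scaling bias."""
    return np.corrcoef(obs, sim)[0, 1] ** 2

def index_of_agreement(obs, sim):
    """Willmott's d: 1 - sum((obs - sim)^2) / sum((|sim - m| + |obs - m|)^2)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    m = obs.mean()
    denom = np.sum((np.abs(sim - m) + np.abs(obs - m)) ** 2)
    return 1.0 - np.sum((obs - sim) ** 2) / denom
```

A simulation with a constant bias keeps a perfect coefficient of determination while NSE and d both drop, which is exactly the kind of differing emphasis the abstract warns about.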

  19. Efficient Modelling and Generation of Markov Automata (extended version)

    NARCIS (Netherlands)

    Timmer, Mark; Katoen, Joost-Pieter; Pol, van de Jaco; Stoelinga, Mariëlle

    2012-01-01

    This paper introduces a framework for the efficient modelling and generation of Markov automata. It consists of (1) the data-rich process-algebraic language MAPA, allowing concise modelling of systems with nondeterminism, probability and Markovian timing; (2) a restricted form of the language, the M

  20. Evaluating energy efficiency policies with energy-economy models

    NARCIS (Netherlands)

    Mundaca, L.; Neij, L.; Worrell, E.; McNeil, M.

    2010-01-01

    The growing complexities of energy systems, environmental problems, and technology markets are driving and testing most energy-economy models to their limits. To further advance bottom-up models from a multidisciplinary energy efficiency policy evaluation perspective, we review and critically analyze them.

  1. Toward an Efficient Prediction of Solar Flares: Which Parameters, and How?

    Directory of Open Access Journals (Sweden)

    Manolis K. Georgoulis

    2013-11-01

    Full Text Available Solar flare prediction has become a forefront topic in contemporary solar physics, with numerous published methods relying on numerous predictive parameters that can even be divided into parameter classes. Attempting further insight, we focus on two popular classes of flare-predictive parameters, namely multiscale (i.e., fractal and multifractal) and proxy (i.e., morphological) parameters, and we complement our analysis with a study of the predictive capability of fundamental physical parameters (i.e., magnetic free energy and relative magnetic helicity). Rather than applying the studied parameters to a comprehensive statistical sample of flaring and non-flaring active regions, which was the subject of our previous studies, the novelty of this work is their application to an exceptionally long and high-cadence time series of the intensely eruptive National Oceanic and Atmospheric Administration (NOAA) active region (AR) 11158, observed by the Helioseismic and Magnetic Imager on board the Solar Dynamics Observatory. Aiming for a detailed study of the temporal evolution of each parameter, we seek distinctive patterns that could be associated with the four largest flares in the AR in the course of its five-day observing interval. We find that only proxy parameters tend to show preflare impulses that are practical enough to warrant subsequent investigation with sufficient statistics. Combining these findings with previous results, we conclude that: (i) carefully constructed, physically intuitive proxy parameters may be our best asset toward efficient future flare forecasting; and (ii) the time series of promising parameters may be as important as their instantaneous values. Value-based prediction is the only approach followed so far. Our results call for novel signal and/or image processing techniques to efficiently utilize combined amplitude and temporal-profile information to optimize the inferred solar-flare probabilities.

  2. Prediction Model of Battery State of Charge and Control Parameter Optimization for Electric Vehicle

    Directory of Open Access Journals (Sweden)

    Bambang Wahono

    2015-07-01

    Full Text Available This paper presents the construction of a battery state of charge (SOC) prediction model and a method for optimizing the model so that the control parameters comply with the SOC as the battery output objective. The Research Centre for Electrical Power and Mechatronics, Indonesian Institute of Sciences, has tested its electric vehicle research prototype on the road, monitoring its voltage, current, temperature, time, vehicle velocity, motor speed, and SOC during operation. Using these experimental data, the prediction model of battery SOC was built. A stepwise method accounting for multicollinearity was able to efficiently develop the battery prediction model that relates the multiple control parameters to characteristic values such as SOC. It was demonstrated that particle swarm optimization (PSO) successfully and efficiently calculated the optimal control parameters to optimize an evaluation item such as SOC based on the model.
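    The PSO step can be sketched generically. This is a textbook global-best PSO with hypothetical inertia and acceleration constants, not the battery-model code from the paper; in the paper's setting, `f` would be an objective built from the fitted SOC regression model.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO over box bounds given as (low, high) pairs."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    x = rng.uniform(lo, hi, size=(n_particles, len(bounds)))
    v = np.zeros_like(x)
    pbest = x.copy()                              # per-particle best positions
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()      # swarm-wide best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())
```

Because PSO needs only objective evaluations, it pairs naturally with a regression-based surrogate model such as the stepwise SOC model described above.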

  3. Gstat: a program for geostatistical modelling, prediction and simulation

    Science.gov (United States)

    Pebesma, Edzer J.; Wesseling, Cees G.

    1998-01-01

    Gstat is a computer program for variogram modelling, and geostatistical prediction and simulation. It provides a generic implementation of the multivariable linear model with trends modelled as a linear function of coordinate polynomials or of user-defined base functions, and independent or dependent, geostatistically modelled, residuals. Simulation in gstat comprises conditional or unconditional (multi-) Gaussian sequential simulation of point values or block averages, or (multi-) indicator sequential simulation. Besides many of the popular options found in other geostatistical software packages, gstat offers the unique combination of (i) an interactive user interface for modelling variograms and generalized covariances (residual variograms), that uses the device-independent plotting program gnuplot for graphical display, (ii) support for several ASCII and binary data and map file formats for input and output, (iii) a concise, intuitive and flexible command language, (iv) user customization of program defaults, (v) no built-in limits, and (vi) free, portable ANSI-C source code. This paper describes the class of problems gstat can solve, and addresses aspects of efficiency and implementation, managing geostatistical projects, and relevant technical details.
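    As an illustration of the variogram-modelling step that gstat supports, the classical (Matheron) empirical semivariogram can be computed directly from point data. This sketch is not gstat code: the function name is hypothetical, and the O(n²) pair loop is only suitable for small data sets.

```python
import numpy as np

def empirical_semivariogram(coords, values, bin_edges):
    """Classical Matheron estimator per distance bin:
    gamma(h) = mean of 0.5*(z_i - z_j)^2 over pairs whose separation falls in the bin."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    n = len(values)
    dists, semivars = [], []
    for i in range(n):
        for j in range(i + 1, n):
            dists.append(np.linalg.norm(coords[i] - coords[j]))
            semivars.append(0.5 * (values[i] - values[j]) ** 2)
    dists, semivars = np.array(dists), np.array(semivars)
    gamma = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (dists >= lo) & (dists < hi)
        gamma.append(semivars[in_bin].mean() if in_bin.any() else np.nan)
    return np.array(gamma)
```

A parametric model (spherical, exponential, etc.) would then be fitted to these binned values, which is the interactive step gstat's gnuplot interface assists with.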

  4. Unsteady Fast Random Particle Mesh method for efficient prediction of tonal and broadband noises of a centrifugal fan unit

    Directory of Open Access Journals (Sweden)

    Seung Heo

    2015-09-01

    Full Text Available In this study, an efficient numerical method is proposed for predicting the tonal and broadband noise of a centrifugal fan unit. The proposed method is based on Hybrid Computational Aero-Acoustic (H-CAA) techniques combined with the Unsteady Fast Random Particle Mesh (U-FRPM) method. The U-FRPM method is developed by extending the FRPM method proposed by Ewert et al. and is utilized to synthesize the turbulence flow field from unsteady RANS solutions. The H-CAA technique combined with the U-FRPM method is applied to predict broadband as well as tonal noise of a centrifugal fan unit in a household refrigerator. First, the unsteady flow field driven by the rotating fan is computed by solving the RANS equations with Computational Fluid Dynamics (CFD) techniques. Main source regions around the rotating fan are identified by examining the computed flow fields. Then, turbulence flow fields in the main source regions are synthesized by applying the U-FRPM method. The acoustic analogy is applied to model acoustic sources in the main source regions. Finally, the centrifugal fan noise is predicted by feeding the modeled acoustic sources into an acoustic solver based on the Boundary Element Method (BEM). The sound spectral levels predicted using the current numerical method show good agreement with the measured spectra at the Blade Pass Frequencies (BPFs) as well as in the high frequency range. Moreover, the present method enables quantitative assessment of the relative contributions of the identified source regions to the sound field by comparing the predicted sound pressure spectra due to the modeled sources.

  5. Validation of an Efficient Outdoor Sound Propagation Model Using BEM

    DEFF Research Database (Denmark)

    Quirós-Alpera, S.; Henriquez, Vicente Cutanda; Jacobsen, Finn

    2001-01-01

An approximate, simple and practical model for prediction of outdoor sound propagation exists based on ray theory, diffraction theory and Fresnel-zone considerations [1]. This model, which can predict sound propagation over non-flat terrain, has been validated for combinations of flat ground, hills and barriers, but it still needs to be validated for configurations that involve combinations of valleys and barriers. In order to do this, a boundary element model has been implemented in MATLAB to serve as a reliable reference.

  6. Functional Testing Protocols for Commercial Building Efficiency Baseline Modeling Software

    Energy Technology Data Exchange (ETDEWEB)

    Jump, David; Price, Phillip N.; Granderson, Jessica; Sohn, Michael

    2013-09-06

This document describes procedures for testing and validating the accuracy of proprietary baseline energy modeling software in predicting energy use over a period of interest, such as a month or a year. The procedures are designed according to the methodology used for public-domain baselining software in another LBNL report that was (like the present report) prepared for Pacific Gas and Electric Company: “Commercial Building Energy Baseline Modeling Software: Performance Metrics and Method Testing with Open Source Models and Implications for Proprietary Software Testing Protocols” (referred to here as the “Model Analysis Report”). The test procedure focuses on the quality of the software's predictions rather than on the specific algorithms used to predict energy use. In this way the software vendor is not required to divulge or share proprietary information about how their software works, while stakeholders can still assess its performance.
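Whatever the underlying algorithm, a baseline model's prediction quality is typically scored with normalized error metrics. A minimal sketch of two such metrics, CV(RMSE) and NMBE, with invented consumption figures (an illustration, not the report's prescribed procedure):

```python
# Two goodness-of-fit metrics commonly used when testing baseline energy
# models: CV(RMSE) and NMBE, both normalized by the mean measured load.
# The consumption figures are invented.
import math

def cvrmse(actual, predicted):
    """Coefficient of variation of the root-mean-square error."""
    n = len(actual)
    mean = sum(actual) / n
    rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)
    return rmse / mean

def nmbe(actual, predicted):
    """Normalized mean bias error (positive = model under-predicts)."""
    n = len(actual)
    mean = sum(actual) / n
    return sum(a - p for a, p in zip(actual, predicted)) / (n * mean)

actual    = [100.0, 120.0, 90.0, 110.0]   # measured monthly kWh (invented)
predicted = [ 98.0, 123.0, 92.0, 108.0]   # baseline-model predictions (invented)
print(round(cvrmse(actual, predicted), 4), round(nmbe(actual, predicted), 4))
```

Because both metrics are ratios to the mean load, they can be compared across buildings without revealing anything about the model internals.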

  7. Demand Management Based on Model Predictive Control Techniques

    Directory of Open Access Journals (Sweden)

    Yasser A. Davizón

    2014-01-01

Full Text Available Demand management (DM) is the process that helps companies sell the right product to the right customer, at the right time, and for the right price. The challenge for any company is therefore to determine how much to sell, at what price, and to which market segment, while maximizing its profits. DM also helps managers efficiently allocate undifferentiated units of capacity to the available demand with the goal of maximizing revenue. This paper introduces a control-system approach to demand management with dynamic pricing (DP) using the model predictive control (MPC) technique. In addition, we present a dynamical-system analogy based on active suspension, and a stability analysis is provided via Lyapunov's direct method.
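The receding-horizon idea behind MPC can be sketched independently of the paper's dynamic-pricing model: at every step, optimize a control sequence over a short forecast horizon, apply only the first control, and re-plan. The scalar dynamics, cost weights, and demand forecast below are invented for illustration:

```python
# Minimal receding-horizon (MPC) loop for a scalar demand-tracking system,
# x[t+1] = x[t] + u[t] - d[t].  At each step we search a small control grid
# for the sequence minimizing a quadratic cost over the forecast horizon,
# apply only the first control, and re-plan.  Dynamics, weights and forecast
# are invented; the paper's pricing model is not reproduced here.
import itertools

CONTROLS = (0.0, 0.5, 1.0, 1.5, 2.0)

def mpc_step(x, demand_forecast, target):
    best_cost, best_u0 = float("inf"), 0.0
    for seq in itertools.product(CONTROLS, repeat=len(demand_forecast)):
        xi, cost = x, 0.0
        for u, d in zip(seq, demand_forecast):
            xi = xi + u - d                          # state update
            cost += (xi - target) ** 2 + 0.1 * u ** 2
        if cost < best_cost:
            best_cost, best_u0 = cost, seq[0]
    return best_u0

forecast = [1.0, 1.2, 0.8, 1.0]
x, applied = 0.0, []
for t in range(3):
    u = mpc_step(x, forecast[t:t + 2], target=1.0)   # 2-step lookahead
    x = x + u - forecast[t]
    applied.append(u)
print(applied, round(x, 2))
```

The grid search stands in for the quadratic program a real MPC would solve; the re-planning loop is the part that carries over.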

  8. Electric vehicle charge planning using Economic Model Predictive Control

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus; Poulsen, Niels K.; Madsen, Henrik

    2012-01-01

Economic Model Predictive Control (MPC) is very well suited for controlling smart energy systems, since electricity price and demand forecasts are easily integrated in the controller. Electric vehicles (EVs) are expected to play a large role in the future Smart Grid. They are expected to provide grid services, both for peak reduction and for ancillary services, by absorbing short-term variations in the electricity production. Electricity should be consumed as soon as it is produced to avoid the need for energy storage, as storage is expensive, limited and introduces efficiency losses. In this paper the Economic MPC minimizes the cost of electricity consumption for a single EV. Simulations show savings of 50–60% of the electricity costs compared... The Economic MPC for EVs described in this paper may contribute to facilitating the transition to a fossil free energy system.
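As an illustration of the economics involved (not the paper's controller): in the simplest version of the EV charging problem, with only a total-energy requirement and an hourly power cap, the cost-optimal plan is to fill the cheapest hours first, a special case of the linear program an Economic MPC would solve. Prices and limits below are invented:

```python
# Toy economic charging plan: the EV must draw `need` kWh over the horizon,
# at most `pmax` kWh per hour, while prices vary.  With only these
# constraints, filling the cheapest hours first is cost-optimal.
def charge_plan(prices, need, pmax):
    plan = [0.0] * len(prices)
    for h in sorted(range(len(prices)), key=lambda i: prices[i]):
        take = min(pmax, need)       # charge as much as allowed this hour
        plan[h] = take
        need -= take
        if need <= 0:
            break
    return plan

prices = [0.30, 0.10, 0.05, 0.20]    # invented EUR/kWh per hour
plan = charge_plan(prices, need=5.0, pmax=3.0)
cost = sum(p * q for p, q in zip(prices, plan))
print(plan, round(cost, 2))
```

A real Economic MPC would re-solve this with updated price and demand forecasts at every step, and add battery and grid-service constraints.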

  9. Predictive Model of Graphene Based Polymer Nanocomposites: Electrical Performance

    Science.gov (United States)

    Manta, Asimina; Gresil, Matthieu; Soutis, Constantinos

    2017-04-01

In this computational work, a new simulation tool for the electrical response of graphene/polymer nanocomposites is developed based on the finite element method (FEM). The approach is built in a multi-scale, multi-physics format, consisting of a unit cell and a representative volume element (RVE). The FE methodology proves to be a reliable and flexible tool for simulating the electrical response without the complexity of raw programming code, while it is able to model any geometry and thus the response of any component. This is supported by its ability, at a preliminary stage, to predict accurately the percolation threshold of experimental material structures and by its sensitivity to the effect of different manufacturing methods. In particular, the percolation thresholds of two material structures with the same constituents (PVDF/graphene) but prepared by different methods were predicted, highlighting the effect of material preparation on the filler distribution, percolation probability and percolation threshold. The assumption of a random filler distribution proved efficient for modelling material structures obtained by solution methods, while a through-the-thickness normal particle distribution was more appropriate for nanocomposites produced by film hot-pressing. Moreover, a parametric analysis examines the effect of each parameter on the variables of the percolation law. The resulting graphs could be used as a preliminary design tool for more effective material-system manufacturing.
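The percolation behavior the FE model predicts can be illustrated with a plain Monte Carlo site-percolation experiment. This is not the paper's FEM approach; grid size, occupation probabilities and trial counts are arbitrary:

```python
# Monte Carlo estimate of percolation probability vs. filler fraction on a
# 2-D grid: a configuration "percolates" if occupied sites connect the top
# row to the bottom row.  Illustrative stand-in for the FEM/RVE analysis.
import random

def percolates(n, p, rng):
    """Top-to-bottom site percolation on an n x n grid, occupation prob p."""
    occupied = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    frontier = [(0, c) for c in range(n) if occupied[0][c]]
    seen = set(frontier)
    while frontier:
        r, c = frontier.pop()
        if r == n - 1:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < n and 0 <= cc < n and occupied[rr][cc] and (rr, cc) not in seen:
                seen.add((rr, cc))
                frontier.append((rr, cc))
    return False

rng = random.Random(0)
prob = {p: sum(percolates(20, p, rng) for _ in range(200)) / 200
        for p in (0.3, 0.6, 0.9)}
print(prob)
```

The sharp rise of the spanning probability with occupation fraction is the percolation-threshold effect; changing the particle distribution (as the manufacturing methods in the paper do) shifts where that rise occurs.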

  10. PEB: thermal oriented architectural modeling for building energy efficiency regulations

    OpenAIRE

    Leclercq, Pierre; Juchmes, Roland; Delfosse, Vincent; Safin, Stéphane; Dawans, Arnaud; Dawans, Adrien

    2011-01-01

    As part of the overhauling of the building energy efficiency regulations (following European directive 2002/91/CE), the Wallonia and Brussels-Capital Region commissioned the LUCID to develop an optional 3D graphic encoding module to be integrated with the core energy efficiency computation engine developed by Altran Europe. Our contribution consisted mostly in analyzing the target users’ needs and representations (ergonomics, UI, interactions) and implementing a bespoke 3D CAD modeler dedicat...

  11. Robust and efficient designs for the Michaelis-Menten model

    OpenAIRE

    Dette, Holger; Biedermann, Stefanie

    2002-01-01

    For the Michaelis-Menten model, we determine designs that maximize the minimum of the D-efficiencies over a certain interval for the nonlinear parameter. The best two point designs can be found explicitly, and a characterization is given when these designs are optimal within the class of all designs. In most cases of practical interest, the determined designs are highly efficient and robust with respect to misspecification of the nonlinear parameter. The results are illustrated and applied in...
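To make D-efficiency concrete: for the Michaelis-Menten response v(x) = Vx/(K+x), a two-point design's Fisher information is built from the gradient with respect to (V, K), and the D-efficiency of one design relative to another is the determinant ratio raised to 1/2 (two parameters). The designs and parameter values below are illustrative, not the optimal designs derived in the paper:

```python
# Relative D-efficiency of two two-point designs for the Michaelis-Menten
# model v(x) = V*x/(K+x).  Designs and (V, K) values are invented examples.
def grad(x, V, K):
    # gradient of v(x) with respect to (V, K)
    return (x / (K + x), -V * x / (K + x) ** 2)

def info_det(design, V, K):
    # determinant of the 2x2 information matrix M = sum_i w_i f(x_i) f(x_i)^T
    m11 = m12 = m22 = 0.0
    for x, w in design:
        f1, f2 = grad(x, V, K)
        m11 += w * f1 * f1
        m12 += w * f1 * f2
        m22 += w * f2 * f2
    return m11 * m22 - m12 ** 2

V, K = 1.0, 2.0
reference = [(2.0, 0.5), (10.0, 0.5)]   # candidate "good" design (illustrative)
naive     = [(2.5, 0.5), (5.0, 0.5)]    # design to be evaluated
# for p = 2 parameters, relative D-efficiency is (det ratio)^(1/p)
d_eff = (info_det(naive, V, K) / info_det(reference, V, K)) ** 0.5
print(round(d_eff, 3))
```

Maximin designs of the kind the paper studies maximize the worst such efficiency over an interval of plausible K values rather than at a single K.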

  12. Predictability in models of the atmospheric circulation.

    NARCIS (Netherlands)

    Houtekamer, P.L.

    1992-01-01

    It will be clear from the above discussions that skill forecasts are still in their infancy. Operational skill predictions do not exist. One is still struggling to prove that skill predictions, at any range, have any quality at all. It is not clear what the statistics of the analysis error are. The

  13. MULTIFACTOR ECONOMETRIC MODELS FOR ENERGY EFFICIENCY IN THE EU

    Directory of Open Access Journals (Sweden)

    Gheorghe ZAMAN

    2007-06-01

Full Text Available The present paper approaches the energy efficiency topic from the viewpoint of its trends and influence factors, in the context of the requirements, criteria and principles of sustainable development. Energy efficiency is measured as the ratio of GDP to energy use, and its multiple factors of influence are considered. With a view to drawing conclusions of both a theoretical-methodological and a practical-applicative character, we study the variation in energy efficiency in the European Union, as well as in the new candidate countries and other countries, by means of multifactor econometric modeling.
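The efficiency measure and a one-factor trend fit can be sketched as follows; the figures are invented, and the closed-form OLS slope stands in for the paper's multifactor econometric models:

```python
# Energy efficiency as GDP per unit of energy use, with a one-factor
# log-linear trend fitted by closed-form OLS.  All figures are invented.
import math

gdp    = [1200.0, 1500.0, 1900.0, 2400.0]   # hypothetical GDP series
energy = [ 300.0,  330.0,  360.0,  400.0]   # hypothetical energy-use series
eff = [g / e for g, e in zip(gdp, energy)]  # efficiency = GDP / energy use

years = list(range(len(eff)))
logs = [math.log(v) for v in eff]
n = len(logs)
xbar, ybar = sum(years) / n, sum(logs) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(years, logs))
         / sum((x - xbar) ** 2 for x in years))   # average log growth rate
print([round(v, 2) for v in eff], round(slope, 3))
```

A multifactor version would regress log efficiency on several explanatory series at once instead of a single time trend.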

  14. Simplified Predictive Models for CO2 Sequestration Performance Assessment

    Science.gov (United States)

    Mishra, Srikanta; RaviGanesh, Priya; Schuetter, Jared; Mooney, Douglas; He, Jincong; Durlofsky, Louis

    2014-05-01

    We present results from an ongoing research project that seeks to develop and validate a portfolio of simplified modeling approaches that will enable rapid feasibility and risk assessment for CO2 sequestration in deep saline formation. The overall research goal is to provide tools for predicting: (a) injection well and formation pressure buildup, and (b) lateral and vertical CO2 plume migration. Simplified modeling approaches that are being developed in this research fall under three categories: (1) Simplified physics-based modeling (SPM), where only the most relevant physical processes are modeled, (2) Statistical-learning based modeling (SLM), where the simulator is replaced with a "response surface", and (3) Reduced-order method based modeling (RMM), where mathematical approximations reduce the computational burden. The system of interest is a single vertical well injecting supercritical CO2 into a 2-D layered reservoir-caprock system with variable layer permeabilities. In the first category (SPM), we use a set of well-designed full-physics compositional simulations to understand key processes and parameters affecting pressure propagation and buoyant plume migration. Based on these simulations, we have developed correlations for dimensionless injectivity as a function of the slope of fractional-flow curve, variance of layer permeability values, and the nature of vertical permeability arrangement. The same variables, along with a modified gravity number, can be used to develop a correlation for the total storage efficiency within the CO2 plume footprint. In the second category (SLM), we develop statistical "proxy models" using the simulation domain described previously with two different approaches: (a) classical Box-Behnken experimental design with a quadratic response surface fit, and (b) maximin Latin Hypercube sampling (LHS) based design with a Kriging metamodel fit using a quadratic trend and Gaussian correlation structure. For roughly the same number of
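The statistical-learning (SLM) idea of replacing a simulator with a response surface can be shown in one dimension: sample a toy "simulator" at a few design points and fit a quadratic surface by least squares. The simulator and design points below are stand-ins, not the CO2 models in the paper:

```python
# A "proxy model" in miniature: fit a quadratic response surface to a few
# runs of a stand-in simulator, solving the 3x3 normal equations directly.
def simulator(x):            # stand-in for an expensive full-physics run
    return 1.0 + 0.5 * x - 0.2 * x * x

xs = [0.0, 0.5, 1.0, 1.5, 2.0]       # arbitrary design points
ys = [simulator(x) for x in xs]

def fit_quadratic(xs, ys):
    """Least-squares fit of y ~ a + b*x + c*x^2 via Gaussian elimination."""
    X = [[1.0, x, x * x] for x in xs]
    A = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    rhs = [sum(r[i] * y for r, y in zip(X, ys)) for i in range(3)]
    for i in range(3):                       # elimination with partial pivoting
        piv = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        rhs[i], rhs[piv] = rhs[piv], rhs[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= f * A[i][c]
            rhs[r] -= f * rhs[i]
    coef = [0.0] * 3
    for i in (2, 1, 0):                      # back substitution
        coef[i] = (rhs[i] - sum(A[i][j] * coef[j] for j in range(i + 1, 3))) / A[i][i]
    return coef

a, b_, c = fit_quadratic(xs, ys)
print(round(a, 3), round(b_, 3), round(c, 3))
```

The Box-Behnken and LHS/Kriging designs mentioned in the abstract differ in how the design points are chosen and in the surface family fitted, but the replace-the-simulator workflow is the same.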

  15. Transport efficiency and workload distribution in a mathematical model of the thick ascending limb.

    Science.gov (United States)

    Nieves-González, Aniel; Clausen, Chris; Layton, Anita T; Layton, Harold E; Moore, Leon C

    2013-03-15

    The thick ascending limb (TAL) is a major NaCl reabsorbing site in the nephron. Efficient reabsorption along that segment is thought to be a consequence of the establishment of a strong transepithelial potential that drives paracellular Na(+) uptake. We used a multicell mathematical model of the TAL to estimate the efficiency of Na(+) transport along the TAL and to examine factors that determine transport efficiency, given the condition that TAL outflow must be adequately dilute. The TAL model consists of a series of epithelial cell models that represent all major solutes and transport pathways. Model equations describe luminal flows, based on mass conservation and electroneutrality constraints. Empirical descriptions of cell volume regulation (CVR) and pH control were implemented, together with the tubuloglomerular feedback (TGF) system. Transport efficiency was calculated as the ratio of total net Na(+) transport (i.e., paracellular and transcellular transport) to transcellular Na(+) transport. Model predictions suggest that 1) the transepithelial Na(+) concentration gradient is a major determinant of transport efficiency; 2) CVR in individual cells influences the distribution of net Na(+) transport along the TAL; 3) CVR responses in conjunction with TGF maintain luminal Na(+) concentration well above static head levels in the cortical TAL, thereby preventing large decreases in transport efficiency; and 4) under the condition that the distribution of Na(+) transport along the TAL is quasi-uniform, the tubular fluid axial Cl(-) concentration gradient near the macula densa is sufficiently steep to yield a TGF gain consistent with experimental data.
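The paper's transport-efficiency measure is a simple ratio; a minimal sketch with invented flux values:

```python
# Transport efficiency as defined in the model: total net Na+ transport
# (paracellular + transcellular) divided by transcellular transport alone.
# The flux values are invented placeholders, not model output.
transcellular = 2.0   # arbitrary flux units
paracellular = 1.0
efficiency = (transcellular + paracellular) / transcellular
print(efficiency)
```

Values above 1 mean the paracellular pathway lets the epithelium reabsorb more Na+ than its pumps transport directly, which is why the transepithelial gradient matters so much in the model.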

  16. An efficient sampling algorithm for uncertain abnormal data detection in biomedical image processing and disease prediction.

    Science.gov (United States)

    Liu, Fei; Zhang, Xi; Jia, Yan

    2015-01-01

    In this paper, we propose a computer information processing algorithm that can be used for biomedical image processing and disease prediction. A biomedical image is considered a data object in a multi-dimensional space. Each dimension is a feature that can be used for disease diagnosis. We introduce a new concept of the top (k1,k2) outlier. It can be used to detect abnormal data objects in the multi-dimensional space. This technique focuses on uncertain space, where each data object has several possible instances with distinct probabilities. We design an efficient sampling algorithm for the top (k1,k2) outlier in uncertain space. Some improvement techniques are used for acceleration. Experiments show our methods' high accuracy and high efficiency.
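The uncertain-data setting can be sketched with Monte Carlo sampling: each object owns several possible instances with probabilities, and its outlier status is averaged over sampled "worlds". This simplifies the paper's top-(k1,k2) definition to the probability of ranking among the top-k outliers; the data and scoring rule below are invented:

```python
# Sampling-based outlier detection over uncertain objects: repeatedly sample
# one instance per object, score objects by squared distance from the sample
# mean, and record how often each object ranks in the top k.
import random

objects = {
    "a": [((0.0, 0.0), 0.9), ((0.2, 0.1), 0.1)],
    "b": [((0.1, 0.1), 1.0)],
    "c": [((5.0, 5.0), 0.8), ((0.1, 0.0), 0.2)],   # likely abnormal
}

def sample_instance(insts, rng):
    r, acc = rng.random(), 0.0
    for point, p in insts:
        acc += p
        if r <= acc:
            return point
    return insts[-1][0]

def top_outlier_prob(objects, k, trials, seed=0):
    rng = random.Random(seed)
    hits = {name: 0 for name in objects}
    for _ in range(trials):
        pts = {name: sample_instance(insts, rng) for name, insts in objects.items()}
        mx = sum(p[0] for p in pts.values()) / len(pts)
        my = sum(p[1] for p in pts.values()) / len(pts)
        ranked = sorted(pts, key=lambda n: -((pts[n][0] - mx) ** 2 + (pts[n][1] - my) ** 2))
        for name in ranked[:k]:
            hits[name] += 1
    return {name: h / trials for name, h in hits.items()}

probs = top_outlier_prob(objects, k=1, trials=500)
print(probs)
```

The paper's contribution is making this kind of computation efficient; the brute-force sampler above only shows what quantity is being estimated.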

  17. Controlled formation of polymer nanocapsules with high diffusion-barrier properties and prediction of encapsulation efficiency.

    Science.gov (United States)

    Hofmeister, Ines; Landfester, Katharina; Taden, Andreas

    2015-01-02

Polymer nanocapsules with high diffusion-barrier performance were designed following simple thermodynamic considerations. Hindered diffusion of the enclosed material leads to high encapsulation efficiencies (EEs), as demonstrated by the encapsulation of highly volatile compounds of different chemical natures. Weak interactions between core and shell materials are key to achieving phase separation and a high diffusion barrier in the resulting polymeric shell. These interactions can be characterized and quantified using the Hansen solubility parameters. A systematic study of our copolymer system revealed a linear relationship between the Hansen parameter for hydrogen bonding (δh) and encapsulation efficiency, which enables the prediction of encapsulated amounts for any material. Furthermore, EEs of poorly encapsulated materials can be increased by mixing them with a mediator compound to give lower overall δh values.

  18. Flow simulation and efficiency hill chart prediction for a Propeller turbine

    Energy Technology Data Exchange (ETDEWEB)

    Vu, T C; Gauthier, M [Andritz Hydro Ltd. 6100 Transcanadienne, Pointe Claire, H9R 1B9 (Canada); Koller, M [Andritz Hydro AG Hardstrasse 319, 8021 Zuerich (Switzerland); Deschenes, C, E-mail: thi.vu@andritz.co, E-mail: maxime.gauthier@andritz.co [Laval University, Laboratory of Hydraulic Machinery (LAMH) 1065 Avenue de la Medecine, Quebec, G1V 0A6 (Canada)

    2010-08-15

In the present paper, we focus on the flow computation of a low-head Propeller turbine over a wide range of design and off-design operating conditions. First, we present results on the efficiency hill chart prediction of the Propeller turbine and discuss the consequences of using non-homologous blade geometries for the CFD simulation. The flow characteristics of the entire turbine are also investigated and compared with experimental data at different measurement planes. Two operating conditions are selected, the first at the best efficiency point and the second at part-load condition. For the same selected operating points, the numerical results for the entire turbine simulation are compared with our standard stage calculation approach, which includes only the guide vane, runner and draft tube geometries.

  19. Prediction Uncertainty Analyses for the Combined Physically-Based and Data-Driven Models

    Science.gov (United States)

    Demissie, Y. K.; Valocchi, A. J.; Minsker, B. S.; Bailey, B. A.

    2007-12-01

The unavoidable simplification associated with physically-based mathematical models can result in biased parameter estimates and correlated model calibration errors, which in turn affect the accuracy of model predictions and the corresponding uncertainty analyses. In this work, a physically-based groundwater model (MODFLOW) together with error-correcting artificial neural networks (ANN) are used in a complementary fashion to obtain an improved prediction (i.e. prediction with reduced bias and error correlation). The associated prediction uncertainty of the coupled MODFLOW-ANN model is then assessed using three alternative methods. The first method estimates the combined model confidence and prediction intervals using first-order least-squares regression approximation theory. The second method uses Monte Carlo and bootstrap techniques for MODFLOW and ANN, respectively, to construct the combined model confidence and prediction intervals. The third method relies on a Bayesian approach that uses analytical or Monte Carlo methods to derive the intervals. The performance of these approaches is compared with Generalized Likelihood Uncertainty Estimation (GLUE) and Calibration-Constrained Monte Carlo (CCMC) intervals of the MODFLOW predictions alone. The results are demonstrated for a hypothetical case study developed based on a phytoremediation site at the Argonne National Laboratory. This case study comprises structural, parameter, and measurement uncertainties. The preliminary results indicate that the proposed three approaches yield comparable confidence and prediction intervals, thus making the computationally efficient first-order least-squares regression approach attractive for estimating the coupled model uncertainty. These results will be compared with GLUE and CCMC results.
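Of the interval-estimation methods compared, the bootstrap is the easiest to sketch. Here a percentile bootstrap confidence interval is placed on a simple sample-mean "prediction" using synthetic data; this illustrates only the resampling idea, not the MODFLOW-ANN setup:

```python
# Percentile bootstrap confidence interval: resample the data with
# replacement many times, recompute the statistic, and take empirical
# quantiles of the replicates.  Data are synthetic.
import random

rng = random.Random(42)
observations = [10 + rng.gauss(0, 2) for _ in range(50)]

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=1):
    rng = random.Random(seed)
    reps = sorted(stat([rng.choice(data) for _ in data]) for _ in range(n_boot))
    return reps[int(n_boot * alpha / 2)], reps[int(n_boot * (1 - alpha / 2))]

mean = sum(observations) / len(observations)
lo, hi = bootstrap_ci(observations, lambda d: sum(d) / len(d))
print(round(lo, 2), round(mean, 2), round(hi, 2))
```

In the paper this resampling is applied to the ANN error-correction component, while MODFLOW uncertainty is propagated by Monte Carlo.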

  20. Interrelationships between trait anxiety, situational stress and mental effort predict phonological processing efficiency, but not effectiveness.

    Science.gov (United States)

    Edwards, Elizabeth J; Edwards, Mark S; Lyvers, Michael

    2016-08-01

Attentional control theory (ACT) describes the mechanisms associated with the relationship between anxiety and cognitive performance. We investigated the relationship between cognitive trait anxiety, situational stress and mental effort on phonological performance using a simple (forward-) and complex (backward-) word span task. Ninety undergraduate students participated in the study. Predictor variables were cognitive trait anxiety, indexed using questionnaire scores; situational stress, manipulated using ego threat instructions; and perceived level of mental effort, measured using a visual analogue scale. Criterion variables (a) performance effectiveness (accuracy) and (b) processing efficiency (accuracy divided by response time) were analyzed in separate multiple moderated-regression analyses. The results revealed (a) no relationship between the predictors and performance effectiveness, and (b) a significant 3-way interaction on processing efficiency for both the simple and complex tasks, such that at higher effort, trait anxiety and situational stress did not predict processing efficiency, whereas at lower effort, higher trait anxiety was associated with lower efficiency at high situational stress, but not at low situational stress. Our results were in full support of the assumptions of ACT and implications for future research are discussed.
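The study's two criterion variables can be computed directly from trial data; a sketch with invented trials:

```python
# Effectiveness vs. efficiency as operationalized in the study:
# accuracy alone vs. accuracy divided by (mean) response time.
# The trial data are invented.
trials = [  # (correct?, response_time_seconds)
    (True, 1.2), (True, 1.5), (False, 1.1), (True, 0.9),
]
accuracy = sum(c for c, _ in trials) / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
effectiveness = accuracy
efficiency = accuracy / mean_rt
print(round(effectiveness, 3), round(efficiency, 3))
```

The distinction matters because two participants can be equally accurate (same effectiveness) while one needs far more time, and hence effort, to get there (lower efficiency), which is exactly where the anxiety effects appeared.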

  1. Geometric correlations in real multiplex networks: multidimensional communities, trans-layer link prediction, and efficient navigation

    CERN Document Server

    Kleineberg, Kaj-Kolja; Serrano, M Angeles; Papadopoulos, Fragkiskos

    2016-01-01

    Real networks often form interacting parts of larger and more complex systems. Examples can be found in different domains, ranging from the Internet to structural and functional brain networks. Here, we show that these multiplex systems are not random combinations of single network layers. Instead, they are organized in specific ways dictated by hidden geometric correlations between the individual layers. We find that these correlations are strong in different real multiplexes, and form a key framework for answering many important questions. Specifically, we show that these geometric correlations facilitate: (i) the definition and detection of multidimensional communities, which are sets of nodes that are simultaneously similar in multiple layers; (ii) accurate trans-layer link prediction, where connections in one layer can be predicted by observing the hidden geometric space of another layer; and (iii) efficient targeted navigation in the multilayer system using only local knowledge, which outperforms naviga...

  2. Required Collaborative Work in Online Courses: A Predictive Modeling Approach

    Science.gov (United States)

    Smith, Marlene A.; Kellogg, Deborah L.

    2015-01-01

    This article describes a predictive model that assesses whether a student will have greater perceived learning in group assignments or in individual work. The model produces correct classifications 87.5% of the time. The research is notable in that it is the first in the education literature to adopt a predictive modeling methodology using data…

  3. A prediction model for assessing residential radon concentration in Switzerland

    NARCIS (Netherlands)

    Hauri, D.D.; Huss, A.; Zimmermann, F.; Kuehni, C.E.; Roosli, M.

    2012-01-01

    Indoor radon is regularly measured in Switzerland. However, a nationwide model to predict residential radon levels has not been developed. The aim of this study was to develop a prediction model to assess indoor radon concentrations in Switzerland. The model was based on 44,631 measurements from the

  4. Digestive efficiency mediated by serum calcium predicts bone mineral density in the common marmoset (Callithrix jacchus).

    Science.gov (United States)

    Jarcho, Michael R; Power, Michael L; Layne-Colon, Donna G; Tardif, Suzette D

    2013-02-01

Two health problems have plagued captive common marmoset (Callithrix jacchus) colonies for nearly as long as those colonies have existed: marmoset wasting syndrome and metabolic bone disease. While marmoset wasting syndrome is explicitly linked to nutrient malabsorption, we propose metabolic bone disease is also linked to nutrient malabsorption, although indirectly. If animals experience negative nutrient balance chronically, critical nutrients may be taken from mineral stores such as the skeleton, thus leaving those stores depleted. We indirectly tested this prediction through an initial investigation of digestive efficiency, as measured by apparent energy digestibility, and serum parameters known to play a part in metabolic bone mineral density of captive common marmoset monkeys. In our initial study on 12 clinically healthy animals, we found a wide range of digestive efficiencies, and subjects with lower digestive efficiency had lower serum vitamin D despite having higher food intakes. A second experiment on 23 subjects including several with suspected bone disease was undertaken to measure digestive and serum parameters, with the addition of a measure of bone mineral density by dual-energy X-ray absorptiometry (DEXA). Bone mineral density was positively associated with apparent digestibility of energy, vitamin D, and serum calcium. Further, digestive efficiency was found to predict bone mineral density when mediated by serum calcium. These data indicate that a poor ability to digest and absorb nutrients leads to calcium and vitamin D insufficiency. Vitamin D absorption may be particularly critical for indoor-housed animals, as opposed to animals in a more natural setting, because vitamin D that would otherwise be synthesized via exposure to sunlight must be absorbed from their diet. If malabsorption persists, metabolic bone disease is a possible consequence in common marmosets. These findings support our hypothesis that both wasting syndrome and metabolic bone

  5. Toward efficient riparian restoration: integrating economic, physical, and biological models.

    Science.gov (United States)

    Watanabe, Michio; Adams, Richard M; Wu, Junjie; Bolte, John P; Cox, Matt M; Johnson, Sherri L; Liss, William J; Boggess, William G; Ebersole, Joseph L

    2005-04-01

This paper integrates economic, biological, and physical models to explore the efficient combination and spatial allocation of conservation efforts to protect water quality and increase salmonid populations in the Grande Ronde basin, Oregon. We focus on the effects of shade on water temperatures and the subsequent impacts on endangered juvenile salmonid populations. The integrated modeling system consists of a physical model that links riparian conditions and hydrological characteristics to water temperature; a biological model that links water temperature and riparian conditions to salmonid abundance; and an economic model that incorporates both physical and biological models to estimate minimum-cost allocations of conservation efforts. Our findings indicate that conservation alternatives such as passive and active riparian restoration, the width of riparian restoration zones, and the types of vegetation used in restoration activities should be selected based on the spatial distribution of riparian characteristics in the basin. The relative effectiveness of passive and active restoration plays an important role in determining the efficient allocations of conservation efforts. The time frame considered in the restoration efforts and the magnitude of desired temperature reductions also affect the efficient combinations of restoration activities. If the objective of conservation efforts is to maximize fish populations, then fishery benefits should be directly targeted. Targeting other criteria, such as water temperature, would result in different allocations of conservation efforts and is therefore not generally efficient.

  6. Distributional Analysis for Model Predictive Deferrable Load Control

    OpenAIRE

    Chen, Niangjun; Gan, Lingwen; Low, Steven H.; Wierman, Adam

    2014-01-01

    Deferrable load control is essential for handling the uncertainties associated with the increasing penetration of renewable generation. Model predictive control has emerged as an effective approach for deferrable load control, and has received considerable attention. In particular, previous work has analyzed the average-case performance of model predictive deferrable load control. However, to this point, distributional analysis of model predictive deferrable load control has been elusive. In ...

  7. Accurate Holdup Calculations with Predictive Modeling & Data Integration

    Energy Technology Data Exchange (ETDEWEB)

    Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States). Dept. of Nuclear Engineering; Cacuci, Dan [Univ. of South Carolina, Columbia, SC (United States). Dept. of Mechanical Engineering

    2017-04-03

    In facilities that process special nuclear material (SNM) it is important to account accurately for the fissile material that enters and leaves the plant. Although there are many stages and processes through which materials must be traced and measured, the focus of this project is material that is “held-up” in equipment, pipes, and ducts during normal operation and that can accumulate over time into significant quantities. Accurately estimating the holdup is essential for proper SNM accounting (vis-à-vis nuclear non-proliferation), criticality and radiation safety, waste management, and efficient plant operation. Usually it is not possible to directly measure the holdup quantity and location, so these must be inferred from measured radiation fields, primarily gamma and less frequently neutrons. Current methods to quantify holdup, i.e. Generalized Geometry Holdup (GGH), primarily rely on simple source configurations and crude radiation transport models aided by ad hoc correction factors. This project seeks an alternate method of performing measurement-based holdup calculations using a predictive model that employs state-of-the-art radiation transport codes capable of accurately simulating such situations. Inverse and data assimilation methods use the forward transport model to search for a source configuration that best matches the measured data and simultaneously provide an estimate of the level of confidence in the correctness of such configuration. In this work the holdup problem is re-interpreted as an inverse problem that is under-determined, hence may permit multiple solutions. A probabilistic approach is applied to solving the resulting inverse problem. This approach rates possible solutions according to their plausibility given the measurements and initial information. This is accomplished through the use of Bayes’ Theorem that resolves the issue of multiple solutions by giving an estimate of the probability of observing each possible solution. To use

  8. Tailored high-resolution numerical weather forecasts for energy efficient predictive building control

    Science.gov (United States)

    Stauch, V. J.; Gwerder, M.; Gyalistras, D.; Oldewurtel, F.; Schubiger, F.; Steiner, P.

    2010-09-01

    The high proportion of the total primary energy consumption by buildings has increased the public interest in the optimisation of buildings' operation and is also driving the development of novel control approaches for the indoor climate. In this context, the use of weather forecasts presents an interesting and - thanks to advances in information and predictive control technologies and the continuous improvement of numerical weather prediction (NWP) models - an increasingly attractive option for improved building control. Within the research project OptiControl (www.opticontrol.ethz.ch) predictive control strategies for a wide range of buildings, heating, ventilation and air conditioning (HVAC) systems, and representative locations in Europe are being investigated with the aid of newly developed modelling and simulation tools. Grid point predictions for radiation, temperature and humidity of the high-resolution limited area NWP model COSMO-7 (see www.cosmo-model.org) and local measurements are used as disturbances and inputs into the building system. The control task considered consists in minimizing energy consumption whilst maintaining occupant comfort. In this presentation, we use the simulation-based OptiControl methodology to investigate the impact of COSMO-7 forecasts on the performance of predictive building control and the resulting energy savings. For this, we have selected building cases that were shown to benefit from a prediction horizon of up to 3 days and therefore, are particularly suitable for the use of numerical weather forecasts. We show that the controller performance is sensitive to the quality of the weather predictions, most importantly of the incident radiation on differently oriented façades. However, radiation is characterised by a high temporal and spatial variability in part caused by small scale and fast changing cloud formation and dissolution processes being only partially represented in the COSMO-7 grid point predictions. On the

  9. ASYMPTOTIC EFFICIENT ESTIMATION IN SEMIPARAMETRIC NONLINEAR REGRESSION MODELS

    Institute of Scientific and Technical Information of China (English)

    ZhuZhongyi; WeiBocheng

    1999-01-01

In this paper, the estimation method based on the “generalized profile likelihood” for conditionally parametric models in the paper by Severini and Wong (1992) is extended to fixed-design semiparametric nonlinear regression models. For these models, the resulting estimator of the parametric component is shown to be asymptotically efficient, and the strong convergence rate of the nonparametric component is investigated. Many results (for example, Chen (1988), Gao & Zhao (1993), Rice (1986), et al.) are extended to fixed-design semiparametric nonlinear regression models.

  10. Prediction for Major Adverse Outcomes in Cardiac Surgery: Comparison of Three Prediction Models

    Directory of Open Access Journals (Sweden)

    Cheng-Hung Hsieh

    2007-09-01

    Conclusion: The Parsonnet score performed as well as the logistic regression models in predicting major adverse outcomes. The Parsonnet score appears to be a very suitable model for clinicians to use in risk stratification of cardiac surgery.

  11. On hydrological model complexity, its geometrical interpretations and prediction uncertainty

    NARCIS (Netherlands)

    Arkesteijn, E.C.M.M.; Pande, S.

    2013-01-01

    Knowledge of hydrological model complexity can aid selection of an optimal prediction model out of a set of available models. Optimal model selection is formalized as selection of the least complex model out of a subset of models that have lower empirical risk. This may be considered equivalent to

  12. Probabilistic Modeling and Visualization for Bankruptcy Prediction

    DEFF Research Database (Denmark)

    Antunes, Francisco; Ribeiro, Bernardete; Pereira, Francisco Camara

    2017-01-01

    In accounting and finance domains, bankruptcy prediction is of great utility for all economic stakeholders. Accurately assessing the likelihood of business failure, especially under scenarios of financial crisis, is known to be complicated. Although there have been many successful...... studies on bankruptcy detection, probabilistic approaches have seldom been pursued. In this paper we assume a probabilistic point of view by applying Gaussian Processes (GP) in the context of bankruptcy prediction, comparing them against Support Vector Machines (SVM) and Logistic Regression (LR......). Using real-world bankruptcy data, an in-depth analysis is conducted showing that, in addition to a probabilistic interpretation, the GP can effectively improve the bankruptcy prediction performance with high accuracy when compared to the other approaches. We additionally generate a complete graphical...

  13. Predictive modeling of dental pain using neural network.

    Science.gov (United States)

    Kim, Eun Yeob; Lim, Kun Ok; Rhee, Hyun Sill

    2009-01-01

    The mouth, as the gateway for ingesting food, is one of the most basic and important parts of the body. In this study, dental pain was predicted using a neural network model. The resulting predictive model of dental pain factors achieved a fitness of 80.0%. For people identified by the neural network model as likely to experience dental pain, preventive measures, including proper eating habits, education on oral hygiene, and stress release, should precede any dental treatment.

  14. Development of a computationally efficient urban modeling approach

    DEFF Research Database (Denmark)

    Wolfs, Vincent; Murla, Damian; Ntegeka, Victor;

    2016-01-01

    This paper presents a parsimonious and data-driven modelling approach to simulate urban floods. Flood levels simulated by detailed 1D-2D hydrodynamic models can be emulated using the presented conceptual modelling approach with a very short calculation time. In addition, the model detail can...... be adjusted, allowing the modeller to focus on flood-prone locations. This results in efficiently parameterized models that can be tailored to applications. The simulated flood levels are transformed into flood extent maps using a high resolution (0.5-meter) digital terrain model in GIS. To illustrate...... the developed methodology, a case study for the city of Ghent in Belgium is elaborated. The configured conceptual model mimics the flood levels of a detailed 1D-2D hydrodynamic InfoWorks ICM model accurately, while the calculation time is on the order of 10^6 times shorter than that of the original highly...

  15. Efficient pairwise RNA structure prediction using probabilistic alignment constraints in Dynalign

    Directory of Open Access Journals (Sweden)

    Sharma Gaurav

    2007-04-01

    Background: Joint alignment and secondary structure prediction of two RNA sequences can significantly improve the accuracy of the structural predictions. Methods addressing this problem, however, are forced to employ constraints that reduce computation by restricting the alignments and/or structures (i.e. folds) that are permissible. In this paper, a new methodology is presented for establishing alignment constraints based on nucleotide alignment and insertion posterior probabilities. Using a hidden Markov model, posterior probabilities of alignment and insertion are computed for all possible pairings of nucleotide positions from the two sequences. These alignment and insertion posterior probabilities are additively combined to obtain probabilities of co-incidence for nucleotide position pairs. A suitable alignment constraint is obtained by thresholding the co-incidence probabilities. The constraint is integrated with Dynalign, a free energy minimization algorithm for joint alignment and secondary structure prediction. The resulting method is benchmarked against the previous version of Dynalign and against other programs for pairwise RNA structure prediction. Results: The proposed technique eliminates manual parameter selection in Dynalign and provides significant computational time savings in comparison to prior constraints in Dynalign, while simultaneously providing a small improvement in structural prediction accuracy. Savings are also realized in memory. In experiments over a 5S RNA dataset with average sequence length of approximately 120 nucleotides, the method reduces computation by a factor of 2. The method performs favorably in comparison to other programs for pairwise RNA structure prediction: yielding better accuracy, on average, and requiring significantly fewer computational resources. Conclusion: Probabilistic analysis can be utilized in order to automate the determination of alignment constraints for
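
The co-incidence thresholding step described above can be sketched as follows. This is a toy illustration with made-up posterior matrices; the simple additive combination shown stands in for the paper's exact formula, which derives alignment and insertion posteriors from an HMM over both sequences.

```python
def allowed_pairs(p_align, p_ins, threshold=0.01):
    """Keep nucleotide position pairs (i, j) whose co-incidence
    probability (alignment posterior plus an insertion-based posterior)
    clears the threshold. Toy combination for illustration only."""
    n, m = len(p_align), len(p_align[0])
    return {(i, j)
            for i in range(n) for j in range(m)
            if p_align[i][j] + p_ins[i][j] >= threshold}

# Toy posteriors for two very short sequences
p_align = [[0.9, 0.0],
           [0.0, 0.8]]
p_ins = [[0.0, 0.005],
         [0.005, 0.0]]
constraint = allowed_pairs(p_align, p_ins)
```

The resulting set of permitted pairs is what restricts the Dynalign search space, trading a negligible loss of flexibility for the reported computational savings.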

  16. A ranking efficiency unit by restrictions using DEA models

    Science.gov (United States)

    Arsad, Roslah; Abdullah, Mohammad Nasir; Alias, Suriana

    2014-12-01

    In this paper, a comparison of the efficiency of shares of listed companies in Bursa Malaysia was made through the application of the estimation method of Data Envelopment Analysis (DEA). In this study, DEA is used to measure the efficiency of shares of listed companies in Bursa Malaysia in terms of financial performance, in the belief that only a good financial performer will give a good return to investors in the long run. The main objectives were to compute the relative efficiency scores of the shares in Bursa Malaysia and to rank the shares based on a Balance Index with regard to relative efficiency. The analysis employed Alirezaee and Afsharian's model, in which the original Charnes, Cooper and Rhodes (CCR) model with the assumption of constant returns to scale (CRS) still holds. This method of ranking the relative efficiency of decision making units (DMUs) was value-added by using the Balance Index. From the results, the companies recommended for investors based on ranking were NATWIDE, YTL and MUDA. These were the top three efficient companies with good performance in 2011, whereas in 2012 the top three companies were NATWIDE, MUDA and BERNAS.
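
For intuition, the CCR efficiency score under constant returns to scale reduces, in the single-input/single-output special case, to each DMU's output/input ratio normalized by the best ratio in the sample. The sketch below uses hypothetical figures, not the Bursa Malaysia data, and omits the Balance Index refinement; the general multi-input/multi-output case solves one linear program per DMU instead.

```python
def ccr_efficiency(inputs, outputs):
    """CCR (CRS) efficiency for one input and one output per DMU:
    output/input ratio scaled so the frontier DMU scores 1.0."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical company figures (not the data from the study)
inputs = [10.0, 8.0, 12.0]    # e.g. capital employed
outputs = [5.0, 6.0, 4.0]     # e.g. earnings
scores = ccr_efficiency(inputs, outputs)
ranking = sorted(range(len(scores)), key=lambda k: -scores[k])
```

Here the second company attains the best ratio (score 1.0) and heads the ranking.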

  17. Economic advantages of applying model predictive control to distributed energy resources: The case of micro-CHP systems

    NARCIS (Netherlands)

    Houwing, M.; Negenborn, R.R.; De Schutter, B.

    2008-01-01

    The increasing presence of distributed energy resources, information and intelligence in the electricity infrastructure increases the possibilities for larger economic efficiency of power systems. This work shows the possible cost advantages of applying a model predictive control (MPC) strategy to

  18. A Combined Cooperative Braking Model with a Predictive Control Strategy in an Electric Vehicle

    Directory of Open Access Journals (Sweden)

    Hongqiang Guo

    2013-12-01

    Cooperative braking with regenerative braking and mechanical braking plays an important role in electric vehicles for energy-saving control. Based on the parallel and the series cooperative braking models, a combined model with a predictive control strategy is presented to achieve better cooperative braking performance. The balance problem between maximum regenerative energy recovery efficiency and optimum braking stability is solved through an off-line process optimization stream with the collaborative optimization (CO) algorithm. To carry out the process optimization stream, an optimal Latin hypercube design (Opt LHD) is used to discretize the continuous design space. To address the poor real-time performance of the optimization, a high-precision predictive model based on the off-line optimization data of the combined model is built, and a predictive control strategy is proposed and verified through simulation. The simulation results demonstrate that the predictive control strategy and the combined model are reasonable and effective.
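
A plain Latin hypercube design, the stratified-sampling core of the Opt LHD step, can be sketched as below; the "optimal" variant used in the paper additionally optimizes a space-filling criterion, which is omitted here.

```python
import random

def latin_hypercube(n, d, seed=0):
    """Basic Latin hypercube design in [0, 1)^d: each dimension is cut
    into n equal strata and every stratum is sampled exactly once."""
    rng = random.Random(seed)
    columns = []
    for _ in range(d):
        strata = list(range(n))
        rng.shuffle(strata)                       # random stratum order
        columns.append([(s + rng.random()) / n for s in strata])
    return list(zip(*columns))                    # one tuple per point
```

Each design point then parameterizes one off-line simulation run, giving good coverage of the design space with few samples.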

  19. Quantifying the predictive consequences of model error with linear subspace analysis

    Science.gov (United States)

    White, Jeremy T.; Doherty, John E.; Hughes, Joseph D.

    2014-01-01

    All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loève parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process.

  20. EFFICIENCY AND COST MODELLING OF THERMAL POWER PLANTS

    Directory of Open Access Journals (Sweden)

    Péter Bihari

    2010-01-01

    The proper characterization of energy suppliers is one of the most important components in the modelling of the supply/demand relations of the electricity market. Power generation capacity, i.e. power plants, constitutes the supply side of the relation in the electricity market. The supply of power stations develops as the power stations attempt to achieve the greatest profit possible within the given prices and other limitations. The cost of operation and the cost of load increment are thus the most important characteristics of their behaviour on the market. In most electricity market models, however, it is not taken into account that the efficiency of a power station also depends on the level of the load, on the type and age of the power plant, and on environmental considerations. Trade in electricity on the free market cannot rely on models where these essential parameters are omitted. Such an incomplete model could lead to a situation where a particular power station would be run either only at its full capacity or else be entirely deactivated, depending on the prices prevailing on the free market. In reality, the marginal cost of power generation can be described by a function derived from the efficiency function; this marginal cost function gives the supply curve of the power station. The load level dependent efficiency function can be used not only for market modelling, but also for determining the pollutant and CO2 emissions of the power station, as well as shedding light on the conditions for successfully entering the market. Based on measurement data, our paper presents mathematical models that can be used for the determination of the load dependent efficiency functions of coal, oil, or gas fuelled power stations (steam turbine, gas turbine, combined cycle) and IC engine based combined heat and power stations. These efficiency functions could also contribute to modelling market conditions and determining the

  1. Development of an efficient coupled model for soil–atmosphere modelling (FHAVeT: model evaluation and comparison

    Directory of Open Access Journals (Sweden)

    A.-J. Tinet

    2014-07-01

    In agricultural management, good timing of operations such as irrigation or sowing is essential to enhance both economic and environmental performance. To improve such timing, predictive software is of particular interest. An optimal decision-making software would require process modules that provide robust, efficient and accurate predictions while being based on a minimal number of easily available parameters. This paper develops a coupled soil–atmosphere model based on the Ross fast solution for Richards' equation, heat transfer and a detailed surface energy balance. The developed model, FHAVeT (Fast Hydro Atmosphere Vegetation Temperature), has been evaluated in bare soil conditions against TEC, a coupled model based on the De Vries description. The two models were compared for different climatic and soil conditions. Moreover, the model allows the use of various pedotransfer functions. The FHAVeT model showed better performance with regard to mass balance. In order to allow a more precise comparison, six time windows were selected. The study demonstrated that the FHAVeT behaviour is quite similar to the TEC behaviour, except under some dry conditions. An evaluation of day detection with regard to moisture thresholds is also performed.

  2. Prediction of peptide bonding affinity: kernel methods for nonlinear modeling

    CERN Document Server

    Bergeron, Charles; Sundling, C Matthew; Krein, Michael; Katt, Bill; Sukumar, Nagamani; Breneman, Curt M; Bennett, Kristin P

    2011-01-01

    This paper presents regression models obtained from a process of blind prediction of peptide binding affinity from provided descriptors for several distinct datasets as part of the 2006 Comparative Evaluation of Prediction Algorithms (COEPRA) contest. This paper finds that kernel partial least squares, a nonlinear partial least squares (PLS) algorithm, outperforms PLS, and that the incorporation of transferable atom equivalent features improves predictive capability.

  3. Predicting High or Low Transfer Efficiency of Photovoltaic Systems Using a Novel Hybrid Methodology Combining Rough Set Theory, Data Envelopment Analysis and Genetic Programming

    Directory of Open Access Journals (Sweden)

    Lee-Ing Tong

    2012-02-01

    Solar energy has become an important energy source in recent years as it generates less pollution than other energies. A photovoltaic (PV) system, which typically has many components, converts solar energy into electrical energy. With the development of advanced engineering technologies, the transfer efficiency of PV systems has increased considerably. The combination of components in a PV system influences its transfer efficiency. Therefore, when predicting the transfer efficiency of a PV system, one must consider the relationships among system components. This work accurately predicts whether the transfer efficiency of a PV system is high or low using a novel hybrid model that combines rough set theory (RST), data envelopment analysis (DEA), and genetic programming (GP). Finally, a real data set is utilized to demonstrate the accuracy of the proposed method.

  4. An optimality framework to predict decomposer carbon-use efficiency trends along stoichiometric gradients

    Science.gov (United States)

    Manzoni, S.; Capek, P.; Mooshammer, M.; Lindahl, B.; Richter, A.; Santruckova, H.

    2016-12-01

    Litter and soil organic matter decomposers feed on substrates with much wider C:N and C:P ratios than their own cellular composition, raising the question of how they can adapt their metabolism to such a chronic stoichiometric imbalance. Here we propose an optimality framework to address this question, based on the hypothesis that carbon-use efficiency (CUE) can be optimally adjusted to maximize the decomposer growth rate. When nutrients are abundant, increasing CUE improves decomposer growth rate, at the expense of higher nutrient demand. However, when nutrients are scarce, the increased nutrient demand driven by high CUE can trigger nutrient limitation and inhibit growth. An intermediate, 'optimal' CUE ensures balanced growth at the verge of nutrient limitation. We derive a simple analytical equation that links this optimal CUE to organic substrate and decomposer biomass C:N and C:P ratios, and to the rate of inorganic nutrient supply (e.g., fertilization). This equation allows formulating two specific hypotheses: i) decomposer CUE should decrease with widening organic substrate C:N and C:P ratios, with a scaling exponent between 0 (abundant inorganic nutrients) and -1 (scarce inorganic nutrients), and ii) CUE should increase with increasing inorganic nutrient supply, for a given organic substrate stoichiometry. These hypotheses are tested using a new database encompassing nearly 2000 estimates of CUE from about 160 studies, spanning aquatic and terrestrial decomposers of litter and more stabilized organic matter. The theoretical predictions are largely confirmed by our data analysis, except for the lack of fertilization effects on terrestrial decomposer CUE. While stoichiometric drivers constrain the general trends in CUE, the relatively large variability in CUE estimates suggests that other factors could be at play as well. For example, temperature is often cited as a potential driver of CUE, but we only found limited evidence of temperature effects
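
A minimal balanced-growth sketch shows how such an optimal CUE equation can arise (the notation and simplifications here are ours, not necessarily the authors' exact derivation):

```latex
% U_C: carbon uptake rate;  e: carbon-use efficiency (CUE);
% r_B, r_S: biomass and substrate C:N ratios;  I_N: inorganic N supply.
\underbrace{\frac{e\,U_C}{r_B}}_{\text{N demand of growth}}
  \;=\;
\underbrace{\frac{U_C}{r_S} + I_N}_{\text{N supply}}
\quad\Longrightarrow\quad
e_{\mathrm{opt}} \;=\; r_B\!\left(\frac{1}{r_S} + \frac{I_N}{U_C}\right).
```

With $I_N = 0$ this gives $e_{\mathrm{opt}} \propto r_S^{-1}$ (scaling exponent $-1$ in the substrate C:N ratio), while with abundant inorganic N the optimum is instead capped at the physiological maximum CUE, independent of $r_S$ (exponent $0$), and $e_{\mathrm{opt}}$ increases with $I_N$ at fixed substrate stoichiometry, consistent with hypotheses i) and ii).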

  5. An Application on Merton Model in the Non-efficient Market

    Science.gov (United States)

    Feng, Yanan; Xiao, Qingxian

    The Merton model is one of the best-known credit risk models. It presumes that the only source of uncertainty in equity prices is the firm's net asset value, but this holds only when the market is efficient, a condition often ignored in modern research. Moreover, the original Merton model is based on the assumptions that in the event of default absolute priority holds, renegotiation is not permitted, and liquidation of the firm is costless; in addition, in the Merton model and most of its modified versions the default boundary is assumed to be constant, which does not correspond to reality. These assumptions can reduce the predictive power of the model. In this paper, we relax some of the assumptions underlying the original model; the resulting model is effectively a modification of Merton's. Using stock data, we analyze this model in a non-efficient market. The results show that the modified model can evaluate credit risk well in a non-efficient market.
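
For reference, the textbook Merton setup that the paper modifies treats default as the event that the asset value falls below the (constant) debt face value at the horizon. A minimal sketch with illustrative parameters, not estimates from the paper's stock data:

```python
from math import erf, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def merton_default_prob(V, D, mu, sigma, T):
    """Probability that asset value V_T falls below the constant debt
    face value D at horizon T, with the asset value following geometric
    Brownian motion with drift mu and volatility sigma."""
    d2 = (log(V / D) + (mu - 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    return norm_cdf(-d2)

# Illustrative parameters: higher leverage raises default probability
pd_low_leverage = merton_default_prob(100.0, 80.0, 0.05, 0.2, 1.0)
pd_high_leverage = merton_default_prob(100.0, 95.0, 0.05, 0.2, 1.0)
```

The paper's modifications (non-efficient market, non-constant default boundary) change the inputs and boundary of this computation but keep the same option-theoretic core.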

  6. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with the problem of low prediction accuracy, which causes costly maintenance. Although many researchers have developed performance prediction models, the accuracy of prediction has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Three models, multivariate nonlinear regression (MNLR), artificial neural network (ANN), and Markov chain (MC), are then tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and is not explicitly related to quantitative physical parameters. This paper suggests that a promising direction for developing performance prediction models is to combine the advantages of the different models to obtain better accuracy.
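
The Markov chain (MC) approach can be sketched as propagating a condition-state distribution through a transition matrix, one step per year; the matrix below is illustrative, not calibrated to the survey data used in the paper.

```python
def propagate(state, transition, years):
    """Push a condition-state probability distribution through a
    row-stochastic Markov transition matrix, one step per year."""
    n = len(state)
    for _ in range(years):
        state = [sum(state[i] * transition[i][j] for i in range(n))
                 for j in range(n)]
    return state

# Illustrative 3-state chain (good, fair, poor); 'poor' is absorbing.
T = [[0.8, 0.2, 0.0],
     [0.0, 0.7, 0.3],
     [0.0, 0.0, 1.0]]
dist = propagate([1.0, 0.0, 0.0], T, 2)  # all-good pavement, 2 years on
```

This is why MC models work with limited data: only the transition probabilities, typically estimated from successive visual inspections, need to be known.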

  7. Gaussian Process Regression for Predictive But Interpretable Machine Learning Models: An Example of Predicting Mental Workload across Tasks.

    Science.gov (United States)

    Caywood, Matthew S; Roberts, Daniel M; Colombe, Jeffrey B; Greenwald, Hal S; Weiland, Monica Z

    2016-01-01

    There is increasing interest in real-time brain-computer interfaces (BCIs) for the passive monitoring of human cognitive state, including cognitive workload. Too often, however, effective BCIs based on machine learning techniques may function as "black boxes" that are difficult to analyze or interpret. In an effort toward more interpretable BCIs, we studied a family of N-back working memory tasks using a machine learning model, Gaussian Process Regression (GPR), which was both powerful and amenable to analysis. Participants performed the N-back task with three stimulus variants, auditory-verbal, visual-spatial, and visual-numeric, each at three working memory loads. GPR models were trained and tested on EEG data from all three task variants combined, in an effort to identify a model that could be predictive of mental workload demand regardless of stimulus modality. To provide a comparison for GPR performance, a model was additionally trained using multiple linear regression (MLR). The GPR model was effective when trained on individual participant EEG data, resulting in an average standardized mean squared error (sMSE) between true and predicted N-back levels of 0.44. In comparison, the MLR model using the same data resulted in an average sMSE of 0.55. We additionally demonstrate how GPR can be used to identify which EEG features are relevant for prediction of cognitive workload in an individual participant. A fraction of EEG features accounted for the majority of the model's predictive power; using only the top 25% of features performed nearly as well as using 100% of features. Subsets of features identified by linear models (ANOVA) were not as efficient as subsets identified by GPR. This raises the possibility of BCIs that require fewer model features while capturing all of the information needed to achieve high predictive accuracy.
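
A generic GPR sketch with an RBF kernel illustrates the posterior-mean computation at the heart of such a model; the kernel choice, hyperparameters, and 1-D toy data here are assumptions, not the study's EEG pipeline.

```python
import numpy as np

def gpr_predict(X, y, Xs, length=1.0, noise=1e-6):
    """Posterior mean of GP regression with an RBF kernel at test
    inputs Xs (1-D inputs for simplicity)."""
    def k(a, b):
        d = a[:, None] - b[None, :]
        return np.exp(-0.5 * (d / length) ** 2)
    K = k(X, X) + noise * np.eye(len(X))   # kernel matrix + noise term
    alpha = np.linalg.solve(K, y)          # alpha = K^{-1} y
    return k(Xs, X) @ alpha

X = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(X)
mean = gpr_predict(X, y, X)  # near-interpolation at training inputs
```

Feature relevance of the kind reported in the study can then be probed by examining how the fit degrades as individual input features are removed or their length-scales grow.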

  8. Internalizing and externalizing traits predict changes in sleep efficiency in emerging adulthood: An actigraphy study

    Directory of Open Access Journals (Sweden)

    Ashley eYaugher

    2015-10-01

    Research on psychopathology and experimental studies of sleep restriction support a relationship between sleep disruption and both internalizing and externalizing disorders. The objective of the current study was to extend this research by examining sleep, impulsivity, antisocial personality traits, and internalizing traits in a university sample. Three hundred and eighty-six individuals (161 males) between the ages of 18 and 27 years (M = 18.59, SD = 0.98) wore actigraphs for 7 days and completed established measures of disorder-linked personality traits and sleep quality (i.e., the Personality Assessment Inventory, Triarchic Psychopathy Measure, Barratt Impulsiveness Scale-11, and the Pittsburgh Sleep Quality Index). As expected, sleep measures and questionnaire scores fell within the normal range of values, and sex differences in sleep and personality were consistent with previous research. Similar to findings in predominantly male forensic psychiatric settings, higher levels of impulsivity predicted poorer subjective sleep quality in both women and men. Consistent with well-established associations between depression and sleep, higher levels of depression in both sexes predicted poorer subjective sleep quality. Bidirectional analyses showed that better sleep efficiency decreases depression. Finally, moderation analyses showed that gender plays a primary role in sleep efficiency; marginal effects were found. The observed relations between sleep and personality traits in a typical university sample add to converging evidence of the relationship between sleep and psychopathology and may inform our understanding of the development of psychopathology in young adulthood.

  9. Prediction of effluent concentration in a wastewater treatment plant using machine learning models.

    Science.gov (United States)

    Guo, Hong; Jeong, Kwanho; Lim, Jiyeon; Jo, Jeongwon; Kim, Young Mo; Park, Jong-pyo; Kim, Joon Ha; Cho, Kyung Hwa

    2015-06-01

    With the growing amount of food waste, integrated food waste and wastewater treatment has been regarded as an efficient treatment method. However, the food waste load on a conventional waste treatment process may lead to high concentrations of total nitrogen (T-N) that affect the effluent water quality. The objective of this study is to establish two machine learning models, artificial neural networks (ANNs) and support vector machines (SVMs), to predict the 1-day-interval T-N concentration of effluent from a wastewater treatment plant in Ulsan, Korea. Daily water quality data and meteorological data were used, and the performance of both models was evaluated in terms of the coefficient of determination (R²), Nash-Sutcliffe efficiency (NSE), and relative efficiency criterion (d_rel). Additionally, Latin hypercube one-factor-at-a-time (LH-OAT) sampling and a pattern search algorithm were applied for sensitivity analysis and model parameter optimization, respectively. Results showed that both models could be effectively applied to the 1-day-interval prediction of the T-N concentration of effluent. The SVM model showed higher prediction accuracy in the training stage and a similar result in the validation stage. However, the sensitivity analysis demonstrated that the ANN model was superior for 1-day-interval T-N concentration prediction in terms of the cause-and-effect relationship between the T-N concentration and the model input values of the integrated food waste and wastewater treatment. This study suggests an efficient and robust nonlinear time-series modeling method for early prediction of the water quality of an integrated food waste and wastewater treatment process.
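
The Nash-Sutcliffe efficiency used to evaluate both models is straightforward to compute; a minimal sketch:

```python
def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations
    about their mean. 1 is a perfect fit; 0 means no better than always
    predicting the observed mean; negative is worse than the mean."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    sst = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / sst
```

Unlike R², which rewards correlation regardless of bias, NSE penalizes any systematic offset between simulated and observed series, which is why hydrological and water quality studies commonly report both.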

  10. Energetics and efficiency of a molecular motor model

    DEFF Research Database (Denmark)

    C. Fogedby, Hans; Svane, Axel

    2013-01-01

    The energetics and efficiency of a linear molecular motor model proposed by Mogilner et al. (Phys. Lett. 237, 297 (1998)) are analyzed analytically. The model, which is based on protein friction with a track, is described by coupled Langevin equations for the motion in combination...... with coupled master equations for the ATP hydrolysis. Here the energetics and efficiency of the motor are addressed using a many body scheme with focus on the efficiency at maximum power (EMP). It is found that the EMP is reduced from about 10 percent in a heuristic description of the motor to about 1 per mille...... when incorporating the full motor dynamics, owing to the strong dissipation associated with the motor action....

  11. Efficient topological compilation for a weakly integral anyonic model

    Science.gov (United States)

    Bocharov, Alex; Cui, Xingshan; Kliuchnikov, Vadym; Wang, Zhenghan

    2016-01-01

    A class of anyonic models for universal quantum computation based on weakly-integral anyons has been recently proposed. While a universal set of gates cannot be obtained in this context by anyon braiding alone, designing a certain type of sector charge measurement provides universality. In this paper we develop a compilation algorithm to approximate arbitrary n-qutrit unitaries with asymptotically efficient circuits over the metaplectic anyon model. One flavor of our algorithm produces efficient circuits with upper complexity bound asymptotically in O(3^{2n} log(1/ε)) and entanglement cost that is exponential in n. Another flavor of the algorithm produces efficient circuits with upper complexity bound in O(n 3^{2n} log(1/ε)) and no additional entanglement cost.

  12. An Efficient Cluster Algorithm for CP(N-1) Models

    CERN Document Server

    Beard, B B; Riederer, S; Wiese, U J

    2005-01-01

    We construct an efficient cluster algorithm for ferromagnetic SU(N)-symmetric quantum spin systems. Such systems provide a new regularization for CP(N-1) models in the framework of D-theory, which is an alternative non-perturbative approach to quantum field theory formulated in terms of discrete quantum variables instead of classical fields. Despite several attempts, no efficient cluster algorithm has been constructed for CP(N-1) models in the standard formulation of lattice field theory. In fact, there is even a no-go theorem that prevents the construction of an efficient Wolff-type embedding algorithm. We present various simulations for different correlation lengths, couplings and lattice sizes. We have simulated correlation lengths up to 250 lattice spacings on lattices as large as 640x640 and we detect no evidence for critical slowing down.

  13. Prediction using patient comparison vs. modeling: a case study for mortality prediction.

    Science.gov (United States)

    Hoogendoorn, Mark; El Hassouni, Ali; Mok, Kwongyen; Ghassemi, Marzyeh; Szolovits, Peter

    2016-08-01

    Information in Electronic Medical Records (EMRs) can be used to generate accurate predictions for the occurrence of a variety of health states, which can contribute to more pro-active interventions. The very nature of EMRs does make the application of off-the-shelf machine learning techniques difficult. In this paper, we study two approaches to making predictions that have hardly been compared in the past: (1) extracting high-level (temporal) features from EMRs and building a predictive model, and (2) defining a patient similarity metric and predicting based on the outcome observed for similar patients. We analyze and compare both approaches on the MIMIC-II ICU dataset to predict patient mortality and find that the patient similarity approach does not scale well and results in a less accurate model (AUC of 0.68) compared to the modeling approach (0.84). We also show that mortality can be predicted within a median of 72 hours.

  14. Efficient Adoption and Assessment of Multiple Process Improvement Reference Models

    Directory of Open Access Journals (Sweden)

    Simona Jeners

    2013-06-01

    A variety of reference models such as CMMI, COBIT or ITIL support IT organizations in improving their processes. These process improvement reference models (IRMs) cover different domains such as IT development, IT services or IT governance, but also share some similarities. As there are organizations that address multiple domains and need to coordinate their processes in their improvement efforts, we present MoSaIC, an approach to support organizations in efficiently adopting and conforming to multiple IRMs. Our solution realizes a semantic integration of IRMs based on common meta-models. The resulting IRM integration model enables organizations to efficiently implement and assess multiple IRMs and to benefit from synergy effects.

  15. Efficient Use of Preisach Hysteresis Model in Computer Aided Design

    Directory of Open Access Journals (Sweden)

    IONITA, V.

    2013-05-01

    The paper presents a detailed practical analysis of the use of the classical Preisach hysteresis model, covering all the steps from measuring the data necessary for model identification to the implementation in software code for Computer Aided Design (CAD) in electrical engineering. An efficient numerical method is proposed and the accuracy of the hysteresis modeling is tested on magnetic recording materials. The procedure includes the correction of the experimental data used for the hysteresis model identification, taking into account the demagnetizing effect for a sample measured in an open-circuit device (a vibrating sample magnetometer).
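
    The classical Preisach model described here superposes elementary relay operators (hysterons) with switching thresholds beta <= alpha. A minimal discrete sketch with uniform weights; a real identification, as in the record, would fit the weight distribution from measured reversal curves:

```python
# Minimal discrete Preisach hysteresis model: a triangular grid of relay
# operators (hysterons) with thresholds beta <= alpha and uniform weights.
# Illustrative sketch only; real use fits weights from measured data.

def make_hysterons(n=10, lo=-1.0, hi=1.0):
    step = (hi - lo) / n
    grid = []
    for i in range(n):
        for j in range(i, n):
            beta = lo + i * step            # down-switching threshold
            alpha = lo + (j + 1) * step     # up-switching threshold
            grid.append([alpha, beta, -1])  # relay state starts at -1
    return grid

def apply_input(grid, u):
    """Update every relay for input u; return the averaged output."""
    for h in grid:
        alpha, beta, _ = h
        if u >= alpha:
            h[2] = +1
        elif u <= beta:
            h[2] = -1
        # beta < u < alpha: the relay keeps its previous state (memory)
    return sum(h[2] for h in grid) / len(grid)

grid = make_hysterons()
up = apply_input(grid, 1.0)   # drive to positive saturation
mid = apply_input(grid, 0.0)  # return to zero input: remanence stays positive
```

    The nonzero output at zero input (`mid`) is the remanence that distinguishes a hysteretic model from a single-valued magnetization curve.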

  16. Efficient robust nonparametric estimation in a semimartingale regression model

    CERN Document Server

    Konev, Victor

    2010-01-01

    The paper considers the problem of robustly estimating a periodic function in a continuous-time regression model with dependent disturbances given by a general square-integrable semimartingale with unknown distribution. An example of such a noise is the non-Gaussian Ornstein-Uhlenbeck process with a Lévy process subordinator, which is used to model financial Black-Scholes type markets with jumps. An adaptive model selection procedure based on weighted least squares estimates is proposed. Under general moment conditions on the noise distribution, sharp non-asymptotic oracle inequalities for the robust risks are derived and the robust efficiency of the model selection procedure is shown.

  17. Fuzzy predictive filtering in nonlinear economic model predictive control for demand response

    DEFF Research Database (Denmark)

    Santos, Rui Mirra; Zong, Yi; Sousa, Joao M. C.;

    2016-01-01

    The performance of a model predictive controller (MPC) is highly correlated with the model's accuracy. This paper introduces an economic model predictive control (EMPC) scheme based on a nonlinear model, which uses a branch-and-bound tree search for solving the inherent non-convex optimization...... problem. Moreover, to reduce the computation time and improve the controller's performance, a fuzzy predictive filter is introduced. With the purpose of testing the developed EMPC, a simulation controlling the temperature levels of an intelligent office building (PowerFlexHouse), with and without fuzzy...

  18. Comparison of three types of models for the prediction of final academic achievement

    Directory of Open Access Journals (Sweden)

    Silvana Gasar

    2002-12-01

    For efficient prevention of inappropriate secondary school choices, and thereby of academic failure, school counselors need a tool for predicting an individual pupil's final academic achievement. Using data mining techniques on a database of pupils, together with expert modeling, we developed several models for the prediction of final academic achievement in an individual high school educational program. For data mining, we used statistical analyses, clustering and two machine learning methods: classification decision trees and hierarchical decision models. Using the expert system shell DEX, an expert system based on a hierarchical multi-attribute decision model was developed manually. All the models were validated and evaluated from the viewpoint of their applicability. The predictive accuracy of the DEX models and the decision trees was equal and very satisfying, as it reached the predictive accuracy of an experienced counselor. Considering the efficiency of and difficulties in developing the models, and the relatively rapid changes in our education system, we propose that decision trees be used in the further development of predictive models.

  19. Predictive modeling and reducing cyclic variability in autoignition engines

    Energy Technology Data Exchange (ETDEWEB)

    Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob

    2016-08-30

    Methods and systems are provided for controlling a vehicle engine to reduce cycle-to-cycle combustion variation. A predictive model is applied to predict cycle-to-cycle combustion behavior of an engine based on observed engine performance variables. Conditions are identified, based on the predicted cycle-to-cycle combustion behavior, that indicate high cycle-to-cycle combustion variation. Corrective measures are then applied to prevent the predicted high cycle-to-cycle combustion variation.

  20. Hybrid CFD/CAA Modeling for Liftoff Acoustic Predictions

    Science.gov (United States)

    Strutzenberg, Louise L.; Liever, Peter A.

    2011-01-01

    This paper presents development efforts at the NASA Marshall Space Flight Center to establish a hybrid Computational Fluid Dynamics and Computational Aero-Acoustics (CFD/CAA) simulation system for launch vehicle liftoff acoustics environment analysis. Acoustic prediction engineering tools based on empirical jet acoustic strength and directivity models or scaled historical measurements are of limited value in efforts to proactively design and optimize launch vehicles and launch facility configurations for liftoff acoustics. CFD-based modeling approaches are now able to capture the important details of the vehicle-specific plume flow environment, identify the noise generation sources, and allow assessment of the influence of launch pad geometric details and sound mitigation measures such as water injection. However, CFD methodologies are numerically too dissipative to accurately capture the propagation of the acoustic waves in the large CFD models. The hybrid CFD/CAA approach combines the high-fidelity CFD analysis capable of identifying the acoustic sources with a fast and efficient Boundary Element Method (BEM) that accurately propagates the acoustic field from the source locations. The BEM approach was chosen for its ability to properly account for reflections and scattering of acoustic waves from launch pad structures. The paper presents an overview of the technology components of the CFD/CAA framework and discusses plans for demonstration and validation against test data.

  1. Modeling and design of energy efficient variable stiffness actuators

    NARCIS (Netherlands)

    Visser, L.C.; Carloni, Raffaella; Ünal, Ramazan; Stramigioli, Stefano

    In this paper, we provide a port-based mathematical framework for analyzing and modeling variable stiffness actuators. The framework provides important insights into the energy requirements and, therefore, it is an important tool for the design of energy efficient variable stiffness actuators. Based

  2. Energy efficiency in nonprofit agencies: Creating effective program models

    Energy Technology Data Exchange (ETDEWEB)

    Brown, M.A.; Prindle, B.; Scherr, M.I.; White, D.L.

    1990-08-01

    Nonprofit agencies are a critical component of the health and human services system in the US. Programs that offer energy efficiency services to nonprofits have clearly demonstrated that, with minimal investment, such agencies can reduce their energy consumption by ten to thirty percent. This energy conservation potential motivated the Department of Energy and Oak Ridge National Laboratory to conceive a project to help states develop energy efficiency programs for nonprofits. The purpose of the project was two-fold: (1) to analyze existing programs to determine which design and delivery mechanisms are particularly effective, and (2) to create model programs for states to follow in tailoring their own plans for helping nonprofits with energy efficiency. Twelve existing programs were reviewed, and three model programs were devised and put into operation. The model programs provide various forms of financial assistance to nonprofits and serve as a source of information on energy efficiency as well. After examining the results from the model programs (which are still ongoing) and from the existing programs, several "replicability factors" were developed for use in the implementation of programs by other states. These factors, some concrete and practical, others more generalized, serve as guidelines for states devising programs based on their own particular needs and resources.

  3. Business Models, transparency and efficient stock price formation

    DEFF Research Database (Denmark)

    Nielsen, Christian; Vali, Edward; Hvidberg, Rene

    of this, our hypothesis is that if it is possible to improve, simplify and define the way a company communicates its business model to the market, then it must be possible for the company to create a more efficient price formation of its share. To begin with, we decided to investigate whether transparency...

  4. A new efficient Cluster Algorithm for the Ising Model

    CERN Document Server

    Nyfeler, Matthias; Pepe, Michele; Wiese, Uwe-Jens

    2005-01-01

    Using D-theory we construct a new efficient cluster algorithm for the Ising model. The construction is very different from the standard Swendsen-Wang algorithm and related to worm algorithms. With the new algorithm we have measured the correlation function with high precision over a surprisingly large number of orders of magnitude.

  5. A testing-coverage software reliability model considering fault removal efficiency and error generation.

    Science.gov (United States)

    Li, Qiuying; Pham, Hoang

    2017-01-01

    In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency combined with testing-coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many NHPP-based software reliability growth models (SRGMs) have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) during the testing phase, the fault detection rate changes over time; 2) as a result of imperfect debugging, fault removal is accompanied by a fault re-introduction rate. However, few SRGMs in the literature differentiate between fault detection and fault removal, i.e. they seldom consider imperfect fault removal efficiency. In practical software development, fault removal efficiency cannot always be perfect: detected failures might not be removed completely, the original faults might remain, and new faults might be introduced meanwhile, which is referred to as the imperfect debugging phenomenon. In this study, a model incorporating the fault introduction rate, fault removal efficiency and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and fault removal efficiency to model fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs on three sets of real failure data using five criteria. The results show that the model gives better fitting and predictive performance.
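
    The distinction the record draws between fault detection and fault removal can be illustrated with a toy NHPP mean-value function. The exponential (Goel-Okumoto-style) detection curve and the constant removal efficiency `p` below are stand-in assumptions for illustration, not the exact model proposed in the record:

```python
import math

# Toy NHPP sketch separating fault *detection* from fault *removal*.
# detected(t): expected cumulative faults detected by time t under an
# exponential coverage curve. removed(t): with removal efficiency p < 1,
# only a fraction of detected faults are actually removed.
# Illustrative stand-in, not the record's exact model.

def detected(t, a=100.0, b=0.1):
    """Expected cumulative faults detected by time t (a = total faults)."""
    return a * (1.0 - math.exp(-b * t))

def removed(t, p=0.9, a=100.0, b=0.1):
    """Expected faults removed when removal efficiency is p."""
    return p * detected(t, a, b)

remaining = detected(30.0) - removed(30.0)  # faults detected but not yet removed
```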

  6. Intelligent predictive model of ventilating capacity of imperial smelt furnace

    Institute of Scientific and Technical Information of China (English)

    唐朝晖; 胡燕瑜; 桂卫华; 吴敏

    2003-01-01

    In order to estimate the ventilating capacity of an imperial smelt furnace (ISF) and increase the output of lead, an intelligent modeling method based on grey theory and artificial neural networks (ANN) is proposed, in which the weight values in the integrated model can be adjusted automatically. An intelligent predictive model of the ventilating capacity of the ISF is established and analyzed with this method. The simulation results and industrial applications demonstrate that the predictive model is close to the real plant: the relative prediction error is 0.72%, which is 50% lower than that of a single model, leading to a notable increase in the output of lead.
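
    The grey component of an integrated model like this is typically a GM(1,1) forecaster: accumulate the series, fit the whitened equation x0[k] + a*z1[k] = b by least squares, and predict from the resulting exponential. A minimal pure-Python sketch on a made-up series; the record's model additionally couples this with an ANN and auto-adjusted weights:

```python
import math

# Minimal grey GM(1,1) forecaster, the usual building block of
# grey-theory prediction models. The input series is made up.

def gm11(x0, steps=1):
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]               # accumulated series
    z1 = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]  # background values
    y = x0[1:]
    # Solve the 2x2 normal equations for (a, b) in x0[k] = -a*z1[k] + b.
    szz = sum(z * z for z in z1); sz = sum(z1)
    szy = sum(z * v for z, v in zip(z1, y)); sy = sum(y)
    m = n - 1
    det = szz * m - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det
    def x1_hat(k):  # fitted accumulated series at 0-based index k
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    return [x0[0]] + [x1_hat(k) - x1_hat(k - 1) for k in range(1, n + steps)]

series = [100.0, 110.0, 121.0, 133.1]   # made-up 10%-growth series
pred = gm11(series, steps=1)            # fitted values plus one-step forecast
```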

  7. A Prediction Model of the Capillary Pressure J-Function

    Science.gov (United States)

    Xu, W. S.; Luo, P. Y.; Sun, L.; Lin, N.

    2016-01-01

    The capillary pressure J-function is a dimensionless measure of the capillary pressure of a fluid in a porous medium. The function was derived based on a capillary bundle model. However, the dependence of the J-function on the saturation Sw is not well understood. A prediction model based on a capillary pressure model is presented, in which the J-function is a power function instead of an exponential or polynomial function. Relative permeability is calculated with the J-function prediction model, making the calculation easier and the results more representative. PMID:27603701
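
    The power-function form argued for here, J(Sw) = c * Sw^m, can be fitted by ordinary least squares in log-log space. A sketch on synthetic, noise-free data (generated from c = 0.5, m = -1.2, just to show the fit recovers the parameters); these are not measured values:

```python
import math

# Fit a power law J(Sw) = c * Sw**m to capillary-pressure J-function
# data via linear least squares in log-log coordinates.
# Data are synthetic (c=0.5, m=-1.2), not measurements.

def fit_power(sw, j):
    xs = [math.log(s) for s in sw]
    ys = [math.log(v) for v in j]
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # exponent
    c = math.exp((sy - m * sx) / n)                # prefactor
    return c, m

sw = [0.2, 0.3, 0.4, 0.5, 0.6, 0.8]
j = [0.5 * s ** -1.2 for s in sw]
c, m = fit_power(sw, j)
```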

  8. Adaptation of Predictive Models to PDA Hand-Held Devices

    Directory of Open Access Journals (Sweden)

    Lin, Edward J

    2008-01-01

    Prediction models using multiple logistic regression are appearing with increasing frequency in the medical literature. Problems associated with these models include the complexity of the computations when applied in their pure form and the lack of availability at the bedside. Personal digital assistant (PDA) hand-held devices equipped with spreadsheet software offer the clinician a readily available and easily applied means of using predictive models at the bedside. The purposes of this article are to briefly review regression as a means of creating predictive models and to describe a method of choosing and adapting logistic regression models to emergency department (ED) clinical practice.
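
    The spreadsheet-style evaluation described here amounts to plugging predictor values into the fitted linear score and applying the logistic function. A sketch with entirely hypothetical coefficients (not from any published clinical model):

```python
import math

# Evaluating a fitted logistic regression model "spreadsheet style":
# linear score from intercept and coefficients, then the logistic link.
# The intercept and coefficients below are hypothetical.

def predict_probability(intercept, coefs, values):
    score = intercept + sum(b * x for b, x in zip(coefs, values))
    return 1.0 / (1.0 + math.exp(-score))

# Hypothetical model: coefficients for age (in decades) and a binary flag.
p = predict_probability(-4.0, [0.5, 1.2], [6.0, 1.0])  # age 60, flag on
```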

  9. A model to predict the power output from wind farms

    Energy Technology Data Exchange (ETDEWEB)

    Landberg, L. (Risø National Lab., Roskilde, Denmark)

    1997-12-31

    This paper describes a model that can predict the power output from wind farms. To give an example of its inputs, the model is applied to a wind farm in Texas. The predictions are generated from forecasts from the NGM model of NCEP. These predictions are made valid at individual sites (wind farms) by applying a matrix calculated by the sub-models of WAsP (Wind Atlas Analysis and Application Program). The actual wind farm production is calculated using the Risø PARK model. Because of the preliminary nature of the results, they will not be given here; however, similar results from Europe will be presented.
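
    The final step of a prediction chain like this maps forecast hub-height wind speeds to power through a turbine power curve. A sketch using piecewise-linear interpolation; the curve points and forecast values are hypothetical, not those of a real turbine or farm:

```python
# Map forecast wind speeds to power via a turbine power curve
# (piecewise-linear interpolation). Curve and forecast are hypothetical.

CURVE = [(3, 0.0), (5, 50.0), (8, 300.0), (12, 600.0), (25, 600.0)]  # (m/s, kW)

def power(speed):
    if speed < CURVE[0][0] or speed > CURVE[-1][0]:
        return 0.0  # below cut-in or above cut-out: turbine produces nothing
    for (v0, p0), (v1, p1) in zip(CURVE, CURVE[1:]):
        if v0 <= speed <= v1:
            return p0 + (p1 - p0) * (speed - v0) / (v1 - v0)

forecast = [4.0, 6.5, 10.0, 26.0]        # m/s, made-up forecast
farm_kw = [power(v) for v in forecast]
```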

  11. Modelling microbial interactions and food structure in predictive microbiology

    NARCIS (Netherlands)

    Malakar, P.K.

    2002-01-01

    Keywords: modelling, dynamic models, microbial interactions, diffusion, microgradients, colony growth, predictive microbiology.    Growth response of microorganisms in foods is a complex process. Innovations in food production and preservation techniques have resulted in adoption of new technologies

  12. Efficient reversible watermarking based on adaptive prediction-error expansion and pixel selection.

    Science.gov (United States)

    Li, Xiaolong; Yang, Bin; Zeng, Tieyong

    2011-12-01

    Prediction-error expansion (PEE) is an important technique of reversible watermarking which can embed large payloads into digital images with low distortion. In this paper, the PEE technique is further investigated and an efficient reversible watermarking scheme is proposed, by incorporating in PEE two new strategies, namely, adaptive embedding and pixel selection. Unlike conventional PEE, which embeds data uniformly, we propose to adaptively embed 1 or 2 bits into expandable pixels according to the local complexity. This avoids expanding pixels with large prediction errors and thus reduces the embedding impact by decreasing the maximum modification to pixel values. Meanwhile, adaptive PEE allows a very large payload in a single embedding pass, and it improves the capacity limit of conventional PEE. We also propose to select pixels of smooth areas for data embedding and leave rough pixels unchanged. In this way, compared with conventional PEE, a more sharply distributed prediction-error histogram is obtained and a better visual quality of the watermarked image is observed. With these improvements, our method outperforms conventional PEE. Its superiority over other state-of-the-art methods is also demonstrated experimentally.
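
    The reversible core of PEE is small: expand the prediction error to free one least-significant bit for the payload, then invert exactly on extraction. A minimal sketch for a single pixel and bit; the predictor itself, overflow handling, and the record's adaptive 1-or-2-bit embedding and pixel selection are omitted:

```python
# Core reversible step of prediction-error expansion (PEE).
# embed: double the prediction error and add the payload bit.
# extract: recover the bit from the LSB and restore the exact pixel.
# Predictor choice, overflow handling and adaptivity are omitted.

def embed(pixel, pred, bit):
    err = pixel - pred
    return pred + 2 * err + bit   # expanded error now carries the bit

def extract(marked, pred):
    diff = marked - pred
    bit = diff & 1                # payload bit is the LSB
    pixel = pred + (diff >> 1)    # original pixel restored exactly
    return pixel, bit

marked = embed(130, 128, 1)
restored, bit = extract(marked, 128)
```

    Because the error is doubled, pixels with large prediction errors get large modifications, which is exactly why the record restricts expansion to smooth, low-complexity pixels.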

  13. Study on Isolation Efficiency of Floating Slab Track Using a Numerical Prediction Model of Metro Traffic Induced Vibrations

    Institute of Scientific and Technical Information of China (English)

    刘卫丰; 刘维宁; 袁扬

    2012-01-01

    A numerical model was presented to predict the vibrations in the tunnel and soil strata induced by the passage of Metro trains. In this model, based on the analytical solution of the dynamic responses excited by moving loads, the vibrations induced by the passage of Metro trains can be calculated once the transfer function in the frequency-wavenumber domain and the moving axle loads in the frequency domain are obtained. The transfer function was calculated using a three-dimensional coupled periodic finite element-boundary element model in the frequency-wavenumber domain, and the moving axle loads were modelled as the wheel-rail contact forces due to rail unevenness in the frequency domain. The dynamic responses in the section north of the East Gate of Peking University Station of Beijing Metro Line 4 were calculated using the model, and the isolation efficiency of the floating slab tracks in the section was then studied by means of an in-situ vibration experiment and the numerical calculation. The results show the following: the model has good applicability and can be used to predict the vibrations induced by Metro traffic; laying floating slab tracks is an effective vibration isolation measure; floating slab tracks can efficiently reduce vibrations in their working frequency range, but they cannot mitigate vibrations at low frequencies.

  14. Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?

    Science.gov (United States)

    Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander

    2016-01-01

    Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
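
    The two criteria can be illustrated numerically: MSEP fixed is the mean squared error of one fixed model against observations, while MSEP uncertain(X) is approximated as the squared bias of the ensemble-mean prediction plus the variance across ensemble members. All numbers below are made up:

```python
# Toy illustration of the two prediction-error criteria.
# msep_fixed: mean squared error of a single fixed model.
# msep_uncertain: squared bias of the ensemble mean + variance across
# ensemble members (model/parameter uncertainty). Numbers are made up.

def mean(xs):
    return sum(xs) / len(xs)

obs = [10.0, 12.0, 11.0]
fixed_pred = [9.0, 13.0, 11.5]          # one model, fixed structure/inputs
ensemble = [[9.0, 13.0, 11.5],          # predictions from several
            [10.5, 11.0, 10.0],         # model variants
            [9.5, 12.5, 12.0]]

msep_fixed = mean([(p - o) ** 2 for p, o in zip(fixed_pred, obs)])

ens_mean = [mean(col) for col in zip(*ensemble)]
sq_bias = mean([(m - o) ** 2 for m, o in zip(ens_mean, obs)])
model_var = mean([mean([(row[i] - ens_mean[i]) ** 2 for row in ensemble])
                  for i in range(len(obs))])
msep_uncertain = sq_bias + model_var
```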

  15. Gaussian Process Regression for Predictive But Interpretable Machine Learning Models: An Example of Predicting Mental Workload across Tasks

    Science.gov (United States)

    Caywood, Matthew S.; Roberts, Daniel M.; Colombe, Jeffrey B.; Greenwald, Hal S.; Weiland, Monica Z.

    2017-01-01

    There is increasing interest in real-time brain-computer interfaces (BCIs) for the passive monitoring of human cognitive state, including cognitive workload. Too often, however, effective BCIs based on machine learning techniques may function as “black boxes” that are difficult to analyze or interpret. In an effort toward more interpretable BCIs, we studied a family of N-back working memory tasks using a machine learning model, Gaussian Process Regression (GPR), which was both powerful and amenable to analysis. Participants performed the N-back task with three stimulus variants, auditory-verbal, visual-spatial, and visual-numeric, each at three working memory loads. GPR models were trained and tested on EEG data from all three task variants combined, in an effort to identify a model that could be predictive of mental workload demand regardless of stimulus modality. To provide a comparison for GPR performance, a model was additionally trained using multiple linear regression (MLR). The GPR model was effective when trained on individual participant EEG data, resulting in an average standardized mean squared error (sMSE) between true and predicted N-back levels of 0.44. In comparison, the MLR model using the same data resulted in an average sMSE of 0.55. We additionally demonstrate how GPR can be used to identify which EEG features are relevant for prediction of cognitive workload in an individual participant. A fraction of EEG features accounted for the majority of the model’s predictive power; using only the top 25% of features performed nearly as well as using 100% of features. Subsets of features identified by linear models (ANOVA) were not as efficient as subsets identified by GPR. This raises the possibility of BCIs that require fewer model features while capturing all of the information needed to achieve high predictive accuracy. PMID:28123359

  16. Predicting Career Advancement with Structural Equation Modelling

    Science.gov (United States)

    Heimler, Ronald; Rosenberg, Stuart; Morote, Elsa-Sofia

    2012-01-01

    Purpose: The purpose of this paper is to use the authors' prior findings concerning basic employability skills in order to determine which skills best predict career advancement potential. Design/methodology/approach: Utilizing survey responses of human resource managers, the employability skills showing the largest relationships to career…

  18. Prediction Model of Sewing Technical Condition by Grey Neural Network

    Institute of Scientific and Technical Information of China (English)

    DONG Ying; FANG Fang; ZHANG Wei-yuan

    2007-01-01

    The grey system theory and artificial neural network technology were applied to predict the sewing technical condition. Representative parameters, such as needle and stitch, were selected. The prediction model was established based on the mechanical properties of different fabrics measured by a KES instrument. Grey relational degree analysis was applied to choose the input parameters of the neural network. The results showed that the prediction model has good precision: the average relative error was 4.08% for needle and 4.25% for stitch.
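
    The grey relational degree used here to rank candidate inputs can be sketched with Deng's grey relational analysis: per-point relational coefficients from the absolute differences to a reference series, averaged into a degree. The series below are made up and assumed already normalized to comparable scales:

```python
# Deng's grey relational analysis: rank candidate input series by their
# relational degree to a reference series. Series are made up and
# assumed normalized; rho is the conventional resolution coefficient.

def grey_relational_degrees(reference, candidates, rho=0.5):
    """Return each candidate's grey relational degree to the reference."""
    all_diffs = {k: [abs(r - c) for r, c in zip(reference, v)]
                 for k, v in candidates.items()}
    flat = [d for ds in all_diffs.values() for d in ds]
    d_min, d_max = min(flat), max(flat)     # global extrema over candidates
    return {k: sum((d_min + rho * d_max) / (d + rho * d_max)
                   for d in ds) / len(ds)
            for k, ds in all_diffs.items()}

ref = [0.5, 0.7, 0.9, 0.6]                   # target series (made up)
cands = {"needle": [0.52, 0.68, 0.88, 0.61],  # tracks the reference closely
         "stitch": [0.9, 0.3, 0.5, 0.95]}     # does not
degrees = grey_relational_degrees(ref, cands)
```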

  19. Active diagnosis of hybrid systems - A model predictive approach

    OpenAIRE

    2009-01-01

    A method for active diagnosis of hybrid systems is proposed. The main idea is to predict the future output of both the normal and the faulty models of the system; then at each time step an optimization problem is solved with the objective of maximizing the difference between the predicted normal and faulty outputs, constrained by tolerable performance requirements. As in standard model predictive control, the first element of the optimal input is applied to the system and the whole procedure is repeated.

  20. Evaluation of Fast-Time Wake Vortex Prediction Models

    Science.gov (United States)

    Proctor, Fred H.; Hamilton, David W.

    2009-01-01

    Current fast-time wake models are reviewed and three basic types are defined. Predictions from several of the fast-time models are compared. Previous statistical evaluations of the APA-Sarpkaya and D2P fast-time models are discussed. Root Mean Square errors between fast-time model predictions and Lidar wake measurements are examined for a 24 hr period at Denver International Airport. Shortcomings in current methodology for evaluating wake errors are also discussed.