Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.
Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F
2013-04-01
In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference making as few assumptions about restrictive parametric models as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models by one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.
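The contrast between covariate-dependent weights and covariate-dependent locations can be illustrated with a toy two-component mixture (this is a hypothetical sketch, not the authors' Bayesian nonparametric model; all component means, scales, and the logistic link are illustrative choices):

```python
import numpy as np

# Illustrative sketch only: a two-component mixture predictive density f(y | x)
# in which a continuous covariate x enters either the weights or the locations.
# All component means, scales, and the logistic link are hypothetical choices.

def normal_pdf(y, mu, sigma):
    return np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def predictive_weights_depend(y, x, mus=(-1.0, 2.0), sigma=0.5):
    # Covariate-dependent weights: w1(x) varies smoothly with x via a logistic
    # link, while the component locations stay fixed.
    w1 = 1.0 / (1.0 + np.exp(-x))
    return w1 * normal_pdf(y, mus[0], sigma) + (1.0 - w1) * normal_pdf(y, mus[1], sigma)

def predictive_locations_depend(y, x, weights=(0.5, 0.5), sigma=0.5):
    # Covariate-dependent locations: the component means shift linearly with x,
    # while the weights stay fixed.
    mus = (-1.0 + 0.5 * x, 2.0 + 0.5 * x)
    return weights[0] * normal_pdf(y, mus[0], sigma) + weights[1] * normal_pdf(y, mus[1], sigma)
```

Both constructions yield a valid density for each x, yet they can imply very different predictive densities, which is the distinction the article examines.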
Modeling the prediction of business intelligence system effectiveness.
Weng, Sung-Shun; Yang, Ming-Hsien; Koo, Tian-Lih; Hsiao, Pei-I
2016-01-01
Although business intelligence (BI) technologies are continually evolving, the capability to apply BI technologies has become an indispensable resource for enterprises operating in today's complex, uncertain and dynamic business environment. This study performed pioneering work by constructing models and rules for the prediction of business intelligence system effectiveness (BISE) in relation to the implementation of BI solutions. For enterprises, effectively managing the critical attributes that determine BISE, and developing prediction models with a set of rules for self-evaluation of the effectiveness of BI solutions, is necessary to improve BI implementation and ensure its success. The main findings identified the critical prediction indicators of BISE that are important for forecasting BI performance, highlighted five classification and prediction rules of BISE derived from decision tree structures, and presented a refined regression prediction model with four critical prediction indicators constructed by logistic regression analysis. These results can enable enterprises to improve BISE while effectively managing BI solution implementation, and offer theoretical contributions for academics.
Predictive modeling of nanomaterial exposure effects in biological systems
Directory of Open Access Journals (Sweden)
Liu X
2013-09-01
Xiong Liu,1 Kaizhi Tang,1 Stacey Harper,2 Bryan Harper,2 Jeffery A Steevens,3 Roger Xu1 1Intelligent Automation, Inc., Rockville, MD, USA; 2Department of Environmental and Molecular Toxicology, School of Chemical, Biological, and Environmental Engineering, Oregon State University, Corvallis, OR, USA; 3ERDC Environmental Laboratory, Vicksburg, MS, USA Background: Predictive modeling of the biological effects of nanomaterials is critical for industry and policymakers to assess the potential hazards resulting from the application of engineered nanomaterials. Methods: We generated an experimental dataset on the toxic effects experienced by embryonic zebrafish due to exposure to nanomaterials. Several nanomaterials were studied, such as metal nanoparticles, dendrimers, metal oxides, and polymeric materials. The embryonic zebrafish metric (EZ Metric) was used as a screening-level measurement representative of adverse effects. Using the dataset, we developed a data mining approach to model the toxic endpoints and the overall biological impact of nanomaterials. Data mining techniques, such as numerical prediction, can assist analysts in developing risk assessment models for nanomaterials. Results: We found several important attributes that contribute to the 24 hours post-fertilization (hpf) mortality, such as dosage concentration, shell composition, and surface charge. These findings concur with previous studies on nanomaterial toxicity using embryonic zebrafish. We conducted case studies on modeling the overall effect/impact of nanomaterials and the specific toxic endpoints such as mortality, delayed development, and morphological malformations. The results show that we can achieve high prediction accuracy for certain biological effects, such as 24 hpf mortality, 120 hpf mortality, and 120 hpf heart malformation. The results also show that the weighting scheme for individual biological effects has a significant influence on modeling the overall impact of
Nonlinear turbulence models for predicting strong curvature effects
Institute of Scientific and Technical Information of China (English)
XU Jing-lei; MA Hui-yang; HUANG Yu-ning
2008-01-01
Prediction of the characteristics of turbulent flows with strong streamline curvature, such as flows in turbomachines, curved channel flows, and flows around airfoils and buildings, is of great importance in engineering applications and poses a very practical challenge for turbulence modeling. In this paper, we analyze qualitatively the curvature effects on the structure of turbulence and conduct numerical simulations of a turbulent U-duct flow with a number of turbulence models in order to assess their overall performance. The models evaluated in this work are some typical linear eddy viscosity turbulence models, nonlinear eddy viscosity turbulence models (NLEVM) (quadratic and cubic), a quadratic explicit algebraic stress model (EASM) and a Reynolds stress model (RSM) developed based on the second-moment closure. Our numerical results show that a cubic NLEVM that performs considerably well in other benchmark turbulent flows, such as the Craft, Launder and Suga model and the Huang and Ma model, is able to capture the major features of the highly curved turbulent U-duct flow, including the damping of turbulence near the convex wall, the enhancement of turbulence near the concave wall, and the subsequent turbulent flow separation. The predictions of the cubic models are quite close to those of the RSM and in relatively good agreement with the experimental data, which suggests that these models may be employed to simulate turbulent curved flows in engineering applications.
Prediction horizon effects on stochastic modelling hints for neural networks
Energy Technology Data Exchange (ETDEWEB)
Drossu, R.; Obradovic, Z. [Washington State Univ., Pullman, WA (United States)]
1995-12-31
The objective of this paper is to investigate the relationship between stochastic models and neural network (NN) approaches to time series modelling. Experiments on a complex real-life prediction problem (entertainment video traffic) indicate that prior knowledge can be obtained through stochastic analysis, both with respect to an appropriate NN architecture and to an appropriate sampling rate, in the case of a prediction horizon larger than one. An improvement of the obtained NN predictor is also proposed through bias-removal post-processing, resulting in much better performance than the best stochastic model.
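The bias-removal post-processing mentioned above can be sketched in a minimal form (an illustration of the general idea, not the paper's exact procedure):

```python
def bias_removed(predictions, residual_history):
    # Post-processing sketch: estimate the predictor's systematic bias as the
    # mean residual (observed - predicted) on past data, then add that bias
    # back onto new predictions to center them.
    bias = sum(residual_history) / len(residual_history)
    return [p + bias for p in predictions]
```

If a predictor has consistently under-predicted by 1 unit, each new prediction is shifted up by that amount.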
Predicting Cumulative Watershed Effects using Spatially Explicit Models
MacDonald, L. H.; Litschert, S.
2004-12-01
Cumulative watershed effects (CWEs) result from the combined effects of land disturbances distributed over both space and time. They are of concern because changes in flow and sediment yields can adversely affect aquatic habitat, channel morphology, water yields, and water quality. The assessment procedures currently used by agencies such as the U.S. Forest Service generally rely on a lumped approach to quantify disturbance, despite the widespread recognition that site conditions and location do matter! The overall goal of our work is to develop spatially-explicit models to quantify changes in flow and sediment yields. Key objectives include: use of readily available GIS data; ease of use for resource managers with minimal GIS experience; modularity so that models can be added or updated; and allowing users to select the models and values for key parameters. The DeltaQ model calculates changes in peak, median, and low flows due to forest management activities and fires. Inputs include GIS data with disturbance polygons, an initial change in flow rate, and the time to recovery. Data from paired watershed studies are provided to help guide the user. The initial version of FORest Erosion Simulation Tools (FOREST) calculates sediment production from forest harvest, fires, and unpaved roads. Additional modules are being developed to deliver this sediment to the stream channel and route it to downstream locations. In accordance with our objectives, the user can predict sediment production rates using different empirical equations, assign an initial sediment production rate and a specified linear recovery period, or develop a look-up table based on local knowledge, published values, or data from other models such as WEPP. The required GIS layers vary according to the model(s) selected, but generally include past disturbances (e.g., fires and timber harvest), roads, and elevation. Outputs include GIS layers and text files that can be subjected to additional
The effects of model and data complexity on predictions from species distributions models
DEFF Research Database (Denmark)
García-Callejas, David; Bastos, Miguel
2016-01-01
study contradicts the widely held view that the complexity of species distributions models has significant effects on their predictive ability, while supporting previous observations that the properties of species distributions data and their relationship with the environment are strong...
Romanach, Stephanie; Watling, James I.; Fletcher, Robert J.; Speroterra, Carolina; Bucklin, David N.; Brandt, Laura A.; Pearlstine, Leonard G.; Escribano, Yesenia; Mazzotti, Frank J.
2014-01-01
Climate change poses new challenges for natural resource managers. Predictive modeling of species–environment relationships using climate envelope models can enhance our understanding of climate change effects on biodiversity, assist in assessment of invasion risk by exotic organisms, and inform life-history understanding of individual species. While increasing interest has focused on the role of uncertainty in future conditions on model predictions, models also may be sensitive to the initial conditions on which they are trained. Although climate envelope models are usually trained using data on contemporary climate, we lack systematic comparisons of model performance and predictions across alternative climate data sets available for model training. Here, we seek to fill that gap by comparing variability in predictions between two contemporary climate data sets to variability in spatial predictions among three alternative projections of future climate. Overall, correlations between monthly temperature and precipitation variables were very high for both contemporary and future data. Model performance varied across algorithms, but not between two alternative contemporary climate data sets. Spatial predictions varied more among alternative general-circulation models describing future climate conditions than between contemporary climate data sets. However, we did find that climate envelope models with low Cohen's kappa scores made more discrepant spatial predictions between climate data sets for the contemporary period than did models with high Cohen's kappa scores. We suggest conservation planners evaluate multiple performance metrics and be aware of the importance of differences in initial conditions for spatial predictions from climate envelope models.
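Cohen's kappa, the agreement statistic the study uses to relate model quality to prediction discrepancy, can be computed for binary presence/absence predictions as follows (a standard formula, shown here as a self-contained sketch):

```python
def cohens_kappa(y_true, y_pred):
    # Observed agreement (po) corrected for the agreement expected by
    # chance (pe), for binary presence/absence labels.
    n = len(y_true)
    po = sum(t == p for t, p in zip(y_true, y_pred)) / n
    p1_true = sum(y_true) / n
    p1_pred = sum(y_pred) / n
    pe = p1_true * p1_pred + (1.0 - p1_true) * (1.0 - p1_pred)
    return (po - pe) / (1.0 - pe)
```

Kappa is 1 for perfect agreement and 0 when agreement is no better than chance, which is why low-kappa models making discrepant spatial predictions is a meaningful signal.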
Institute of Scientific and Technical Information of China (English)
Zhongning Huang; Haisong Xu; M. Ronnier Luo
2011-01-01
A camera-based model is established to predict the total difference for samples of metallic panels with effect coatings under directional illumination, and the testing results indicate that the model can precisely predict the total difference between samples with metallic coatings with satisfactory consistency with the visual data. Due to the limited amount of testing samples, the model performance should be further developed by increasing the training and testing samples.
Models and methods for wind effect prediction; Modeller og metoder til prediktion af vindeffekt
Energy Technology Data Exchange (ETDEWEB)
Joensen, A.
1997-12-31
This report considers methods and models for predicting the power produced by wind turbines. Several methods are suggested and investigated on actual observations of wind speed and the corresponding power. To improve the predictions, meteorological forecasts are incorporated into the formulation of the models. The methods applied cover non-parametric identification, least squares estimation and local regression. It was found that the meteorological forecasts significantly improved the predictions, and that a combination of non-parametric and parametric modelling proved successful. (au) 38 refs.
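A least-squares fit of a wind-speed-to-power relation, one of the estimation techniques named above, can be sketched as a simple polynomial regression (an illustration only; the report's actual parametric and non-parametric models are more elaborate):

```python
import numpy as np

def fit_power_curve(wind_speed, power, degree=3):
    # Least-squares polynomial fit of the wind-speed -> power relation;
    # a cubic is a common first approximation since power scales roughly
    # with the cube of wind speed below rated speed.
    return np.polyfit(wind_speed, power, degree)

def predict_power(coeffs, wind_speed):
    return np.polyval(coeffs, wind_speed)
```

In practice such a curve would be estimated from observed speed/power pairs and then driven by forecast wind speeds rather than observations.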
Sensitivity analysis of the relative biological effectiveness predicted by the local effect model.
Friedrich, T; Grün, R; Scholz, U; Elsässer, T; Durante, M; Scholz, M
2013-10-07
The relative biological effectiveness (RBE) is a central quantity in particle radiobiology and depends on many physical and biological factors. The local effect model (LEM) allows one to predict the RBE for radiobiological experiments and particle therapy. In this work the sensitivity of the RBE to its determining factors is elucidated by monitoring the RBE dependence on the input parameters of the LEM. The relevance and meaning of all parameters are discussed within the formalism of the LEM. While most of the parameters are fixed by experimental constraints, one parameter, the threshold dose Dt, may remain free and is then regarded as a fit parameter to the high-LET dose response curve. The influence of each parameter on the RBE is understood in terms of theoretical considerations. The sensitivity analysis has been systematically carried out for fictitious in vitro cell lines or tissues with α/β = 2 Gy and 10 Gy, either irradiated under track segment conditions with a monoenergetic beam or within a spread-out Bragg peak. For both irradiation conditions, a change of each of the parameters typically causes an approximately equal or smaller relative change of the predicted RBE values. These results may be used for the assessment of treatment plans and for general uncertainty estimations of the RBE.
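The RBE definition underlying this kind of analysis can be made concrete with the linear-quadratic (LQ) dose-response model (an illustrative sketch of the concept only; the LEM itself is far more detailed, and all parameter values here are hypothetical):

```python
import numpy as np

# Sketch of the RBE concept via the linear-quadratic (LQ) survival model.
# This is NOT the LEM; it only illustrates how an RBE value arises from
# comparing ion and reference-photon dose-response curves.

def lq_survival(dose, alpha, beta):
    # LQ cell survival: S = exp(-(alpha*D + beta*D^2)).
    return np.exp(-(alpha * dose + beta * dose ** 2))

def dose_for_survival(s, alpha, beta):
    # Invert the LQ model: solve beta*D^2 + alpha*D + ln(S) = 0, positive root.
    return (-alpha + np.sqrt(alpha ** 2 - 4.0 * beta * np.log(s))) / (2.0 * beta)

def rbe(dose_ion, alpha_ion, beta_ion, alpha_x, beta_x):
    # RBE = reference (photon) dose / ion dose producing the same survival.
    s = lq_survival(dose_ion, alpha_ion, beta_ion)
    return dose_for_survival(s, alpha_x, beta_x) / dose_ion
```

With a hypothetical ion beam that is more effective per unit dose (larger alpha) than the photon reference, the RBE comes out above 1, as expected.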
Chowdhury, Nadim; Azim, Zubair Al; Alam, Md Hasibul; Niaz, Iftikhar Ahmad; Khosru, Quazi D M
2014-01-01
We propose a physically based analytical compact model to calculate eigenenergies and wave functions that incorporates the penetration effect. The model is applicable to a quantum well structure that frequently appears in modern nano-scale devices, and it is equally applicable to both silicon and III-V devices. Unlike other models already available in the literature, our model can accurately predict all the eigenenergies without the inclusion of any fitting parameters. The validity of our model has been checked against numerical simulations, and the results show significantly better agreement compared to the available methods.
Evaluating effects of normobaric oxygen therapy in acute stroke with MRI-based predictive models
Directory of Open Access Journals (Sweden)
Wu Ona
2012-03-01
Background: Voxel-based algorithms using acute multiparametric-MRI data have been shown to accurately predict tissue outcome after stroke. We explored the potential of MRI-based predictive algorithms to objectively assess the effects of normobaric oxygen therapy (NBO), an investigational stroke treatment, using data from a pilot study of NBO in acute stroke. Methods: The pilot study of NBO enrolled 11 patients randomized to NBO administered for 8 hours, and 8 control patients who received room air. Serial MRIs were obtained at admission, during gas therapy, post-therapy, and pre-discharge. Diffusion/perfusion MRI data acquired at admission (pre-therapy) were used in generalized linear models to predict the risk of lesion growth at subsequent time points for both treatment scenarios: NBO or control. Results: Lesion volumes 'during NBO therapy' predicted by control models were significantly larger (P = 0.007) than those predicted by NBO models, suggesting that ischemic lesion growth is attenuated during NBO treatment. No significant difference was found between the predicted lesion volumes at later time points. NBO-treated patients, despite showing larger lesion volumes on control models than NBO models, tended to have reduced lesion growth. Conclusions: This study shows that NBO has therapeutic potential in acute ischemic stroke, and demonstrates the feasibility of using MRI-based algorithms to evaluate novel treatments in early-phase clinical trials.
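The voxel-wise generalized linear model at the heart of such algorithms reduces, for a binary infarct/no-infarct outcome, to a logistic link over the acute imaging parameters. A minimal sketch (the feature set and weights here are hypothetical, not the study's fitted coefficients):

```python
import numpy as np

def glm_lesion_risk(features, weights, intercept):
    # Voxel-wise GLM sketch with a logistic link: maps a vector of acute MRI
    # parameters (e.g., diffusion/perfusion values) to the probability that
    # the voxel infarcts. Weights and intercept are hypothetical placeholders
    # standing in for coefficients fitted on training data.
    z = intercept + np.dot(features, weights)
    return 1.0 / (1.0 + np.exp(-z))
```

Evaluating such a model per voxel over the brain yields a risk map, and summing thresholded risks gives a predicted lesion volume for each treatment scenario.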
Do animal models of anxiety predict anxiolytic-like effects of antidepressants?
Borsini, Franco; Podhorna, Jana; Marazziti, Donatella
2002-09-01
Chronically administered antidepressant drugs, particularly selective serotonin (5-HT) reuptake inhibitors (SSRIs), are clinically effective in the treatment of all anxiety disorders, while the clinical effectiveness of "traditional" anxiolytics, such as benzodiazepines (BDZs), is limited to generalised anxiety disorder or acute panic attacks. This implies that animal models of anxiety should be sensitive to SSRIs and other antidepressants in order to have predictive validity. We reviewed the literature on the effects of antidepressants in the so-called animal models of anxiety and found that only the isolation-induced calls in guinea-pig pups may reveal anxiolytic-like action of all antidepressant classes after acute administration. Some other models, such as marble-burying or conditioned-freezing behaviours, and isolation- or shock-induced ultrasonic vocalisation models, may detect anxiolytic-like activity of acutely administered antidepressants, although the sensitivity of these models is usually limited to SSRIs and other drugs affecting 5-HT uptake. The predictive validity of models of "anxiety", such as the plus-maze and light-dark transition tests or stress-induced hyperthermia, appears to be limited to BDZ-related drugs. Far less work has been done on chronic administration of antidepressants in animal anxiety models. Unless and until such studies have been undertaken, the true predictive value of the anxiety models will remain unknown.
Kavetski, D.; Clark, M. P.; Fenicia, F.
2011-12-01
Hydrologists often face sources of uncertainty that dwarf those normally encountered in many engineering and scientific disciplines. Especially when representing large scale integrated systems, internal heterogeneities such as stream networks, preferential flowpaths, vegetation, etc, are necessarily represented with a considerable degree of lumping. The inputs to these models are themselves often the products of sparse observational networks. Given the simplifications inherent in environmental models, especially lumped conceptual models, does it really matter how they are implemented? At the same time, given the complexities usually found in the response surfaces of hydrological models, increasingly sophisticated analysis methodologies are being proposed for sensitivity analysis, parameter calibration and uncertainty assessment. Quite remarkably, rather than being caused by the model structure/equations themselves, in many cases model analysis complexities are consequences of seemingly trivial aspects of the model implementation - often, literally, whether the start-of-step or end-of-step fluxes are used! The extent of problems can be staggering, including (i) degraded performance of parameter optimization and uncertainty analysis algorithms, (ii) erroneous and/or misleading conclusions of sensitivity analysis, parameter inference and model interpretations and, finally, (iii) poor reliability of a calibrated model in predictive applications. While the often nontrivial behavior of numerical approximations has long been recognized in applied mathematics and in physically-oriented fields of environmental sciences, it remains a problematic issue in many environmental modeling applications. Perhaps detailed attention to numerics is only warranted for complicated engineering models? Would not numerical errors be an insignificant component of total uncertainty when typical data and model approximations are present? Is this really a serious issue beyond some rare isolated
Modelling the cutting edge radius size effect for force prediction in micro milling
DEFF Research Database (Denmark)
Bissacco, Giuliano; Hansen, Hans Nørgaard; Jan, Slunsky
2008-01-01
This paper presents a theoretical model for cutting force prediction in micro milling, taking into account the cutting edge radius size effect, the tool run out and the deviation of the chip flow angle from the inclination angle. A parameterization according to the uncut chip thickness to cutting...
Animal models for predicting the efficacy and side effects of antipsychotic drugs
Directory of Open Access Journals (Sweden)
Pedro H. Gobira
2013-01-01
The use of antipsychotic drugs represents an important approach for the treatment of schizophrenia. However, their efficacy is limited to certain symptoms of this disorder, and they induce serious side effects. As a result, there is a strong demand for the development of new drugs, which depends on reliable animal models for pharmacological characterization. The present review discusses the face, construct, and predictive validity of classical animal models for studying the efficacy and side effects of compounds for the treatment of schizophrenia. These models are based on the properties of antipsychotics to impair the conditioned avoidance response and reverse certain behavioral changes induced by psychotomimetic drugs, such as stereotypies, hyperlocomotion, and deficits in prepulse inhibition of the startle response. Other tests, which are not specific to schizophrenia, may predict drug effects on negative and cognitive symptoms, such as deficits in social interaction and memory impairment. Regarding motor side effects, the catalepsy test predicts the liability of a drug to induce a Parkinson-like syndrome, whereas vacuous chewing movements predict the liability to induce dyskinesia after chronic treatment. Despite certain limitations, these models may contribute to the development of safer and more efficacious antipsychotic drugs.
Directory of Open Access Journals (Sweden)
Fitri Yakub
2016-01-01
We present a comparative study of model predictive control approaches for two-wheel steering, four-wheel steering, and a combination of two-wheel steering with direct yaw moment control manoeuvres for path-following control in autonomous car vehicle dynamics systems. Single-track mode, based on a linearized vehicle and tire model, is used. Based on a given trajectory, we drove the vehicle at low and high forward speeds and on low and high road friction surfaces for a double-lane change scenario in order to follow the desired trajectory as closely as possible while rejecting the effects of wind gusts. We compared the controller based on both simple and complex bicycle models, without and with the roll vehicle dynamics, for different types of model predictive control manoeuvres. The simulation results showed that model predictive control gave better performance in terms of robustness for both forward speed and road surface variation in autonomous path-following control. It also demonstrated that model predictive control is useful for maintaining vehicle stability along the desired path and has the ability to eliminate the crosswind effect.
Moeck, Christian; Von Freyberg, Jana; Schirmer, Mario
2016-04-01
An important question in recharge impact studies is how model choice, structure and calibration period affect recharge predictions. It is still unclear whether a certain model type or structure is less affected by running the model on time periods with hydrological conditions different from those of the calibration period. This aspect, however, is crucial to ensure reliable predictions of groundwater recharge. In this study, we quantify and compare the effect of groundwater recharge model choice, model parametrization and calibration period in a systematic way. This analysis was possible thanks to a unique data set from a large-scale lysimeter in a pre-alpine catchment where daily long-term recharge rates are available. More specifically, the following issues are addressed: We systematically evaluate how the choice of hydrological models influences predictions of recharge. We assess how different parameterizations of models due to parameter non-identifiability affect predictions of recharge by applying a Monte Carlo approach. We systematically assess how the choice of calibration periods influences predictions of recharge within a differential split sample test focusing on the model performance under extreme climatic and hydrological conditions. Results indicate that all applied models (simple lumped to complex physically based models) were able to simulate the observed recharge rates for five different calibration periods. However, there was a marked impact of the calibration period when the complete 20-year validation period was simulated. Both seasonal and annual differences between simulated and observed daily recharge rates occurred when the hydrological conditions were different from the calibration period. These differences were, however, less distinct for the physically based models, whereas the simpler models over- or underestimated the observed recharge depending on the considered season. It is, however, possible to reduce the differences for the simple models by
Using predictive modeling to evaluate the financial effect of disease management.
Whitlock, Terry; Johnston, Kenton
2006-09-01
The objective of this study was to use predictive modeling to evaluate a disease management (DM) program's effect on a chronically ill population. Specifically, diagnostic cost grouping (DCG) predictive modeling was utilized to measure the financial effect of DM in populations of individuals with congestive heart failure and coronary artery disease. The literature of current practices regarding DM's financial effect measurement was reviewed and critiqued--especially with reference to the population-based pre-post method. The time period for the present study is three years, and the variables of interest are financial metrics. Claims data and DM program-specific data covering the 24-month period of 2001 to 2002 and the 24-month period of 2002 to 2003 were analyzed. The mean differences between DCG predicted and actual total claims costs in 2002 and in 2003 were computed. Inflation factors, based on actual health plan population experience for the populations in question, were developed and applied to accurately evaluate financial effect. The preliminary findings suggest that a study design utilizing DCG predictive modeling in evaluating DM program financial impact provides more accurate results compared with the population-based pre-post method currently favored by DM companies.
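The core financial metric described above, the mean difference between predicted and actual claims costs after applying an inflation factor, can be sketched directly (a simplified illustration of the calculation, not the study's full DCG methodology):

```python
def dm_financial_effect(predicted_costs, actual_costs, inflation_factor):
    # Mean difference between inflation-adjusted predicted claims costs and
    # actual costs for the managed population; a positive value suggests
    # savings relative to what the risk model predicted.
    adjusted = [p * inflation_factor for p in predicted_costs]
    diffs = [a - c for a, c in zip(adjusted, actual_costs)]
    return sum(diffs) / len(diffs)
```

Unlike the pre-post method, this compares each period's actual costs to a risk-adjusted prediction for that same period, which is the study's central point.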
Institute of Scientific and Technical Information of China (English)
Tao Zhi; Cheng Zeyuan; Zhu Jianqin; Li Haiwang
2016-01-01
A variety of turbulence models were used to perform numerical simulations of heat transfer for hydrocarbon fuel flowing upward and downward through uniformly heated vertical pipes at supercritical pressure. Inlet temperatures varied from 373 K to 663 K, with heat flux ranging from 300 kW/m2 to 550 kW/m2. Comparative analyses between predicted and experimental results were used to evaluate the ability of turbulence models to respond to variable thermophysical properties of hydrocarbon fuel at supercritical pressure. It was found that the prediction performance of turbulence models is mainly determined by the damping function, which enables them to respond differently to local flow conditions. Although prediction accuracy for experimental results varied from condition to condition, the shear stress transport (SST) and Launder and Sharma models performed better than all other models used in the study. For runs with very small buoyancy influence, the thermally induced acceleration due to variations in density led to the impairment of heat transfer occurring in the vicinity of pseudo-critical points, and heat transfer was enhanced at higher temperatures through the combined action of four thermophysical properties: density, viscosity, thermal conductivity and specific heat. For runs with very large buoyancy influence, the thermally induced acceleration effect was overpredicted by the LS and AB models.
Directory of Open Access Journals (Sweden)
Tao Zhi
2016-10-01
A variety of turbulence models were used to perform numerical simulations of heat transfer for hydrocarbon fuel flowing upward and downward through uniformly heated vertical pipes at supercritical pressure. Inlet temperatures varied from 373 K to 663 K, with heat flux ranging from 300 kW/m2 to 550 kW/m2. Comparative analyses between predicted and experimental results were used to evaluate the ability of turbulence models to respond to variable thermophysical properties of hydrocarbon fuel at supercritical pressure. It was found that the prediction performance of turbulence models is mainly determined by the damping function, which enables them to respond differently to local flow conditions. Although prediction accuracy for experimental results varied from condition to condition, the shear stress transport (SST) and Launder and Sharma models performed better than all other models used in the study. For runs with very small buoyancy influence, the thermally induced acceleration due to variations in density led to the impairment of heat transfer occurring in the vicinity of pseudo-critical points, and heat transfer was enhanced at higher temperatures through the combined action of four thermophysical properties: density, viscosity, thermal conductivity and specific heat. For runs with very large buoyancy influence, the thermally induced acceleration effect was overpredicted by the LS and AB models.
Effect of patient location on the performance of clinical models to predict pulmonary embolism.
Ollenberger, Glenn P; Worsley, Daniel F
2006-01-01
Current clinical likelihood models for predicting pulmonary embolism (PE) are used to categorize outpatients into low, intermediate and high clinical pre-test likelihood of PE. Since these clinical prediction rules were developed using outpatients, it is not known whether they can be applied universally to both inpatients and outpatients with suspected PE. Thus, the purpose of this study was to determine the effect of patient location on the performance of clinical models to predict PE. Two clinical models (Wells and Wicki) were applied to data from the multi-centered PIOPED study. The Wells score was applied to 1359 patients and the Wicki score was applied to 998 patients. 361 patients (27%) from the PIOPED study did not have arterial gas measurement and were excluded from the Wicki score patient group. Patients were stratified by their location at the time of entry into the PIOPED study as follows: outpatient/emergency, surgical ward, medicine/coronary care unit or intensive care unit. The diagnostic performance of the two clinical models was applied to the various patient locations and evaluated using the area under a fitted receiver operating characteristic curve (AUC). The prevalence of PE in the three clinical probability categories was similar for the two scoring methods. Both clinical models yielded the lowest diagnostic performance in patients referred from surgical wards. The AUC for both clinical prediction rules decreased significantly when applied to inpatients in comparison to outpatients. Current clinical prediction rules for determining the pre-test likelihood of PE yielded different diagnostic performances depending upon patient location. The performance of the clinical prediction rules decreased significantly when applied to inpatients. In particular, the rules performed least well when applied to patients referred from surgical wards suggesting these rules should not be used in this patient group. As expected the clinical
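The AUC used to compare the prediction rules has a simple rank-based (Mann-Whitney) interpretation, sketched here (a standard computation, shown self-contained for illustration):

```python
def auc(scores_pos, scores_neg):
    # Probability that a randomly chosen PE-positive patient receives a higher
    # clinical score than a randomly chosen PE-negative patient, with ties
    # counting half: the Mann-Whitney form of the ROC AUC.
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 0.5 means the score cannot separate PE-positive from PE-negative patients, which is why a significant AUC drop in inpatients matters clinically.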
Kumar, Prashant; Topin, Frédéric
2017-08-01
It is often desirable to predict the effective thermal conductivity (ETC) of a homogeneous material like open-cell foams based on its composition, particularly when variations in composition are expected. A combination of five fundamental simplified thermal conductivity bounds and models (series, parallel, Hashin-Shtrikman, effective medium theory, and reciprocity models) is proposed to predict the ETC of open-cell foams. Usually, these models use a parameter as the weighted mean to account for the proportion of each bound, arranged in arithmetic and geometric schemes. Based on ETC data obtained on numerous virtual Kelvin-like foam samples, the dependence of this parameter has been deduced as a function of morphology and phase thermal conductivity ratio. Various effective thermal conductivity correlations are derived based on material properties and foam structure. This is valid for open-cell foams filled with any arbitrary working fluid over a range of solid-to-fluid conductivity ratios (λs/λf = 10-30,000) and over a wide range of porosity (0.60 < ɛo < 0.95). Arranging the series and parallel models together, using the simplest models in both arithmetic and geometric schemes, is found to give excellent predictions among all the generic combinations.
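A minimal sketch of the bound-combination idea, assuming hypothetical property values (an aluminium-like solid in air) and an arbitrary weighting parameter alpha; the paper's fitted, morphology-dependent parameter is not reproduced here:

```python
def series_bound(eps, lam_s, lam_f):
    # Series (lower) bound: solid and fluid layers stacked normal to the heat flux.
    return 1.0 / ((1.0 - eps) / lam_s + eps / lam_f)

def parallel_bound(eps, lam_s, lam_f):
    # Parallel (upper) bound: layers aligned with the heat flux.
    return (1.0 - eps) * lam_s + eps * lam_f

def etc_arithmetic(eps, lam_s, lam_f, alpha):
    # Arithmetic scheme: weighted mean of the two bounds.
    lo, hi = series_bound(eps, lam_s, lam_f), parallel_bound(eps, lam_s, lam_f)
    return alpha * hi + (1.0 - alpha) * lo

def etc_geometric(eps, lam_s, lam_f, alpha):
    # Geometric scheme: weighted geometric mean of the two bounds.
    lo, hi = series_bound(eps, lam_s, lam_f), parallel_bound(eps, lam_s, lam_f)
    return hi ** alpha * lo ** (1.0 - alpha)

# Hypothetical aluminium foam filled with air: porosity 0.9, lambda_s = 200,
# lambda_f = 0.026 W/(m K); alpha = 0.3 is chosen arbitrarily for illustration.
etc = etc_arithmetic(eps=0.9, lam_s=200.0, lam_f=0.026, alpha=0.3)
```

Any admissible ETC must fall between the series and parallel bounds, which both combination schemes guarantee by construction for 0 < alpha < 1.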
Kumar, Prashant; Topin, Frédéric
2017-02-01
It is often desirable to predict the effective thermal conductivity (ETC) of a homogeneous material like open-cell foams based on its composition, particularly when variations in composition are expected. A combination of five fundamental simplified thermal conductivity bounds and models (series, parallel, Hashin-Shtrikman, effective medium theory, and reciprocity models) is proposed to predict the ETC of open-cell foams. Usually, these models use a parameter as the weighted mean to account for the proportion of each bound, arranged in arithmetic and geometric schemes. Based on ETC data obtained on numerous virtual Kelvin-like foam samples, the dependence of this parameter has been deduced as a function of morphology and phase thermal conductivity ratio. Various effective thermal conductivity correlations are derived based on material properties and foam structure. This is valid for open-cell foams filled with any arbitrary working fluid over a range of solid-to-fluid conductivity ratios (λs/λf = 10-30,000) and over a wide range of porosity (0.60 < ɛo < 0.95). Arranging the series and parallel models together, using the simplest models in both arithmetic and geometric schemes, is found to give excellent predictions among all the generic combinations.
DEFF Research Database (Denmark)
Hadrup, Niels; Taxvig, Camilla; Pedersen, Mikael;
2013-01-01
, antagonism was observed for effects of Mixture 2 on this hormone. The mixtures contained chemicals exerting only limited maximal effects. This hampered prediction by the CA and IA models, whereas the GCA model could be used to predict a full dose response curve. Regarding effects on progesterone...
Directory of Open Access Journals (Sweden)
M. Sudha
2015-12-01
Uncertain atmosphere is a prevalent factor affecting existing prediction approaches. Rough set and fuzzy set theories, as proposed by Pawlak and Zadeh, have become effective tools for handling vagueness and fuzziness in real-world scenarios. This research work describes the impact of a Hybrid Intelligent System (HIS) for strategic decision support in meteorology. In this research, a novel exhaustive-search-based Rough set reduct Selection using Genetic Algorithm (RSGA) is introduced to identify the significant input feature subset. The proposed model could identify the most effective weather parameters more efficiently than other existing input techniques. In the model evaluation phase, two adaptive techniques were constructed and investigated. The proposed Artificial Neural Network based on Back Propagation learning (ANN-BP) and Adaptive Neuro Fuzzy Inference System (ANFIS) were compared with the existing Fuzzy Unordered Rule Induction Algorithm (FURIA), Structural Learning Algorithm on Vague Environment (SLAVE) and Particle Swarm Optimization (PSO). The proposed rainfall prediction models outperformed the others when trained with the input generated using RSGA. A meticulous comparison of the performance indicates the ANN-BP model as a suitable HIS for effective rainfall prediction. The ANN-BP achieved 97.46% accuracy with a nominal misclassification rate of 0.0254%.
Bayerstadler, Andreas; Benstetter, Franz; Heumann, Christian; Winter, Fabian
2014-09-01
Predictive Modeling (PM) techniques are gaining importance in the worldwide health insurance business. Modern PM methods are used for customer relationship management, risk evaluation or medical management. This article illustrates a PM approach that enables the economic potential of (cost-) effective disease management programs (DMPs) to be fully exploited by optimized candidate selection as an example of successful data-driven business management. The approach is based on a Generalized Linear Model (GLM) that is easy to apply for health insurance companies. By means of a small portfolio from an emerging country, we show that our GLM approach is stable compared to more sophisticated regression techniques in spite of the difficult data environment. Additionally, we demonstrate in this setting that our model can compete with the expensive solutions offered by professional PM vendors and outperforms non-predictive standard approaches for DMP selection commonly used in the market.
Ma, Zhuanglin; Zhang, Honglu; Chien, Steven I-Jy; Wang, Jin; Dong, Chunjiao
2017-01-01
To investigate the relationship between crash frequency and potential influence factors, the accident data for events occurring on a 50 km long expressway in China, including 567 crash records (2006-2008), were collected and analyzed. Both the fixed-length and the homogeneous longitudinal grade methods were applied to divide the study expressway section into segments. A negative binomial (NB) model and a random effect negative binomial (RENB) model were developed to predict crash frequency. The parameters of both models were determined using the maximum likelihood (ML) method, and the mixed stepwise procedure was applied to examine the significance of explanatory variables. Three explanatory variables, including longitudinal grade, road width, and ratio of longitudinal grade and curve radius (RGR), were found to significantly affect crash frequency. The marginal effects of significant explanatory variables on crash frequency were analyzed. The model performance was determined by the relative prediction error and the cumulative standardized residual. The results show that the RENB model outperforms the NB model. It was also found that the model performance with the fixed-length segment method is superior to that with the homogeneous longitudinal grade segment method.
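The NB prediction step can be sketched as a log-linear mean with an over-dispersion term; all coefficients below are hypothetical placeholders, not the fitted values from the study:

```python
import math

# Hypothetical coefficients for illustration only (the study's fitted values are not shown here).
BETA = {"intercept": -1.2, "grade": 0.35, "road_width": -0.08, "rgr": 0.5}
ALPHA = 0.6  # NB over-dispersion parameter (hypothetical)

def predicted_crashes(grade_pct, road_width_m, rgr, exposure=1.0):
    """Mean crash frequency for a segment: mu = exposure * exp(X @ beta)."""
    eta = (BETA["intercept"] + BETA["grade"] * grade_pct
           + BETA["road_width"] * road_width_m + BETA["rgr"] * rgr)
    return exposure * math.exp(eta)

def nb_variance(mu):
    # NB2 variance exceeds the Poisson variance (mu) by alpha * mu^2,
    # which is what lets the model absorb over-dispersed crash counts.
    return mu + ALPHA * mu * mu
```

A steeper longitudinal grade raises the predicted mean under these placeholder signs, and the NB variance always exceeds the Poisson variance for alpha > 0.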
Lee, Dongchul; Hershey, Brad; Bradley, Kerry; Yearwood, Thomas
2011-07-01
To understand the theoretical effects of pulse width (PW) programming in spinal cord stimulation (SCS), we implemented a mathematical model of electrical fields and neural activation in SCS to gain insight into the effects of PW programming. The computational model was composed of a finite element model for structure and electrical properties, coupled with a nonlinear double-cable axon model to predict nerve excitation for different myelinated fiber sizes. Mathematical modeling suggested that mediolateral lead position may affect chronaxie and rheobase values, as well as predict greater activation of medial dorsal column fibers with increased PW. These modeling results were validated by a companion clinical study. Thus, variable PW programming in SCS appears to have theoretical value, demonstrated by the ability to increase and even 'steer' spatial selectivity of dorsal column fiber recruitment. It is concluded that the computational SCS model is a valuable tool to understand basic mechanisms of nerve fiber excitation modulated by stimulation parameters such as PW and electric fields.
Meta-analysis of choice set generation effects on route choice model estimates and predictions
DEFF Research Database (Denmark)
Prato, Carlo Giacomo
2012-01-01
Large scale applications of behaviorally realistic transport models pose several challenges to transport modelers on both the demand and the supply sides. On the supply side, path-based solutions to the user assignment equilibrium problem help modelers in enhancing route choice behavior modeling, but require them to generate choice sets by selecting a path generation technique and its parameters according to personal judgments. This paper proposes a methodology and an experimental setting to provide general indications about objective judgments for an effective route choice set generation. Initially, path generation techniques are implemented within a synthetic network to generate possible subjective choice sets considered by travelers. Next, ‘true model estimates’ and ‘postulated predicted routes’ are assumed from the simulation of a route choice model. Then, objective choice sets
Levy, R.; Mcginness, H.
1976-01-01
Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
Predicting effects of structural stress in a genome-reduced model bacterial metabolism
Güell, Oriol; Sagués, Francesc; Serrano, M. Ángeles
2012-08-01
Mycoplasma pneumoniae is a human pathogen recently proposed as a genome-reduced model for bacterial systems biology. Here, we study the response of its metabolic network to different forms of structural stress, including removal of individual and pairs of reactions and knockout of genes and clusters of co-expressed genes. Our results reveal a network architecture as robust as that of other model bacteria regarding multiple failures, although less robust against individual reaction inactivation. Interestingly, metabolite motifs associated with reactions can predict the propagation of inactivation cascades and damage amplification effects arising in double knockouts. We also detect a significant correlation between gene essentiality and damages produced by single gene knockouts, and find that genes controlling high-damage reactions tend to be expressed independently of each other, a functional switch mechanism that, simultaneously, acts as a genetic firewall to protect metabolism. Prediction of failure propagation is crucial for metabolic engineering or disease treatment.
The effect of scaling physiological cross-sectional area on musculoskeletal model predictions.
Bolsterlee, Bart; Vardy, Alistair N; van der Helm, Frans C T; Veeger, H E J DirkJan
2015-07-16
Personalisation of model parameters is likely to improve biomechanical model predictions and could allow models to be used for subject- or patient-specific applications. This study evaluates the effect of personalising physiological cross-sectional areas (PCSA) in a large-scale musculoskeletal model of the upper extremity. Muscle volumes obtained from MRI were used to scale PCSAs of five subjects, for whom the maximum forces they could exert in six different directions on a handle held by the hand were also recorded. The effect of PCSA scaling was evaluated by calculating the lowest maximum muscle stress (σmax, a constant for human skeletal muscle) required by the model to reproduce these forces. When the original cadaver-based PCSA values were used, strongly different between-subject σmax values were found (σmax = 106.1±39.9 N cm^(-2)). A relatively simple, uniform scaling routine reduced this variation substantially (σmax = 69.4±9.4 N cm^(-2)) and led to similar results to when a more detailed, muscle-specific scaling routine was used (σmax = 71.2±10.8 N cm^(-2)). Using subject-specific PCSA values to simulate a shoulder abduction task changed muscle force predictions for the subscapularis and the pectoralis major on average by 33% and 21%, respectively, but joint contact force changed by less than 1.5% as a result of scaling. We conclude that individualisation of the model's strength can most easily be done by scaling PCSA with a single factor that can be derived from muscle volume data or, alternatively, from maximum force measurements. However, since PCSA scaling only marginally changed muscle and joint contact force predictions for submaximal tasks, the need for PCSA scaling remains debatable.
Effective index model predicts modal frequencies of vertical-cavity lasers
Energy Technology Data Exchange (ETDEWEB)
Serkland, Darwin K.; Hadley, G. Ronald; Choquette, Kent D.; Geib, Kent M.; Allerman, Andrew A.
2000-04-18
Previously, an effective index optical model was introduced for the analysis of lateral waveguiding effects in vertical-cavity surface-emitting lasers. The authors show that the resultant transverse equation is almost identical to the one typically obtained in the analysis of dielectric waveguide problems, such as a step-index optical fiber. The solution to the transverse equation yields the lateral dependence of the optical field and, as is recognized in this paper, the discrete frequencies of the microcavity modes. As an example, they apply this technique to the analysis of vertical-cavity lasers that contain thin-oxide apertures. The model intuitively explains the experimental data and makes quantitative predictions in good agreement with a highly accurate numerical model.
Directory of Open Access Journals (Sweden)
Barbara D. Klein
1999-01-01
Although databases used in many organizations have been found to contain errors, little is known about the effect of these errors on predictions made by linear regression models. The paper uses a real-world example, the prediction of the net asset values of mutual funds, to investigate the effect of data quality on linear regression models. The results of two experiments are reported. The first experiment shows that the error rate and magnitude of error in data used in model prediction negatively affect the predictive accuracy of linear regression models. The second experiment shows that the error rate and the magnitude of error in data used to build the model positively affect the predictive accuracy of linear regression models. All findings are statistically significant. The findings have managerial implications for users and builders of linear regression models.
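The first experiment's design, errors injected into the data used at prediction time, can be sketched as follows, with an invented linear process standing in for the mutual fund data:

```python
import random

random.seed(0)  # deterministic corruption for reproducibility

def fit_ols(xs, ys):
    """Closed-form simple linear regression; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - slope * mx, slope

def mae(model, xs, ys):
    """Mean absolute prediction error of the fitted line on (xs, ys)."""
    a, b = model
    return sum(abs(y - (a + b * x)) for x, y in zip(xs, ys)) / len(xs)

# Error-free data from a known linear process y = 2 + 3x (a stand-in for clean NAV data).
xs = [i / 10.0 for i in range(100)]
ys = [2.0 + 3.0 * x for x in xs]
model = fit_ols(xs, ys)

# Corrupt 20% of the inputs used *at prediction time*, as in the first experiment.
xs_err = [x + random.gauss(0, 1) if random.random() < 0.2 else x for x in xs]
mae_clean, mae_dirty = mae(model, xs, ys), mae(model, xs_err, ys)
```

On clean inputs the fitted line is exact, so any corruption of the prediction-time data can only degrade accuracy, matching the first experiment's direction of effect.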
Potter, Gail E; Smieszek, Timo; Sailer, Kerstin
2015-09-01
Face-to-face social contacts are potentially important transmission routes for acute respiratory infections, and understanding the contact network can improve our ability to predict, contain, and control epidemics. Although workplaces are important settings for infectious disease transmission, few studies have collected workplace contact data and estimated workplace contact networks. We use contact diaries, architectural distance measures, and institutional structures to estimate social contact networks within a Swiss research institute. Some contact reports were inconsistent, indicating reporting errors. We adjust for this with a latent variable model, jointly estimating the true (unobserved) network of contacts and duration-specific reporting probabilities. We find that contact probability decreases with distance, and that research group membership, role, and shared projects are strongly predictive of contact patterns. Estimated reporting probabilities were low only for 0-5 min contacts. Adjusting for reporting error changed the estimate of the duration distribution, but did not change the estimates of covariate effects and had little effect on epidemic predictions. Our epidemic simulation study indicates that inclusion of network structure based on architectural and organizational structure data can improve the accuracy of epidemic forecasting models.
Accurate prediction of Phytophthora infestans outbreaks is crucial for effective late blight management. The SIMBLIGHT1, SIMPHYT1, and modified SIMPHYT1 models were assessed for predicting late blight outbreaks relative to the NOBLIGHT model using climatic data from field experiments at Presque Isle...
Moeeni, Hamid; Bonakdari, Hossein; Fatemi, Seyed Ehsan
2017-04-01
Because time series stationarization has a key role in stochastic modeling results, three methods are analyzed in this study. The methods, seasonal differencing, seasonal standardization and spectral analysis, eliminate the periodic effect that impedes time series stationarity. First, six time series including 4 streamflow series and 2 water temperature series are stationarized. The stochastic term for these series obtained with ARIMA is subsequently modeled. For the analysis, 9228 models are introduced. It is observed that seasonal standardization and spectral analysis eliminate the periodic term completely, while seasonal differencing maintains seasonal correlation structures. The obtained results indicate that all three methods present acceptable performance overall. However, model accuracy in monthly streamflow prediction is higher with seasonal differencing than with the other two methods. Another advantage of seasonal differencing over the other methods is that the monthly streamflow is never estimated as negative. Standardization is the best method for predicting monthly water temperature although it is quite similar to seasonal differencing, while spectral analysis performed the weakest in all cases. It is concluded that for each monthly seasonal series, seasonal differencing is the best stationarization method in terms of periodic effect elimination. Moreover, the monthly water temperature is predicted with more accuracy than monthly streamflow. The criterion, the average stochastic term divided by the amplitude of the periodic term, for monthly streamflow and monthly water temperature was 0.19 and 0.30, 0.21 and 0.13, and 0.07 and 0.04, respectively, for the three stationarization methods. As a result, the periodic term is more dominant than the stochastic term in the monthly water temperature series compared to the streamflow series.
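The two best-performing stationarization methods can be sketched on a synthetic monthly series (the cycle and noise terms below are invented for illustration):

```python
import math
import statistics

def seasonal_difference(series, period=12):
    """Remove the periodic term by subtracting the value one period earlier."""
    return [series[t] - series[t - period] for t in range(period, len(series))]

def seasonal_standardize(series, period=12):
    """Standardize each calendar month by that month's own mean and standard deviation."""
    out = []
    for t, x in enumerate(series):
        month = series[t % period::period]  # all observations for this calendar month
        out.append((x - statistics.mean(month)) / statistics.stdev(month))
    return out

# Hypothetical 10-year monthly series: level + exact 12-month cycle + non-seasonal variation.
series = [10.0 + 5.0 * math.sin(2 * math.pi * t / 12) + 0.1 * (t % 7) for t in range(120)]
diff = seasonal_difference(series)
std = seasonal_standardize(series)
```

Differencing cancels the 12-month cycle exactly (only the non-seasonal 0.1-amplitude term survives), while standardization forces each calendar month to mean zero, two different routes to removing the periodic effect.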
Predicting effects of cold shock: modeling the decline of a thermal plume
Energy Technology Data Exchange (ETDEWEB)
Becker, C.D.; Trent, D.S.; Schneider, M.J.
1977-10-01
Predicting direct impact of cold shock on aquatic organisms after termination of power plant thermal discharges requires thermal tests that provide quantitative data on the resistance of acclimated species to lower temperatures. Selected examples from the literature on cold shock resistance of freshwater and marine fishes are illustrated to show predictive use. Abrupt cold shock data may be applied to field situations involving either abrupt or gradual temperature declines but yield conservative estimates under the latter conditions. Gradual cold shock data may be applied where heated plumes gradually dissipate because poikilotherms partially compensate for lowering temperature regimes. A simplified analytical model is presented for estimating thermal declines in terminated plumes originating from offshore, submerged discharges where shear current and boundary effects are minimal. When applied to site-specific conditions, the method provides time-temperature distributions for correlation with cold resistance data and, therefore, aids in assessing cold shock impact on aquatic biota.
Gidon, Dogan; Graves, David B.; Mesbah, Ali
2017-08-01
Atmospheric pressure plasma jets (APPJs) have been identified as a promising tool for plasma medicine. This paper aims to demonstrate the importance of using model-based feedback control strategies for safe, reproducible, and therapeutically effective application of APPJs for dose delivery to a target substrate. Key challenges in model-based control of APPJs arise from: (i) the multivariable, nonlinear nature of system dynamics, (ii) the need for constraining the system operation within an operating region that ensures safe plasma treatment, and (iii) the cumulative, nondecreasing nature of dose metrics. To systematically address these challenges, we propose a model predictive control (MPC) strategy for real-time feedback control of a radio-frequency APPJ in argon. To this end, a lumped-parameter, physics-based model is developed for describing the jet dynamics. Cumulative dose metrics are defined for quantifying the thermal and nonthermal energy effects of the plasma on the substrate. The closed-loop performance of the MPC strategy is compared to that of a basic proportional-integral control system. Simulation results indicate that the MPC strategy provides a versatile framework for dose delivery in the presence of disturbances, while the safety and practical constraints of the APPJ operation can be systematically handled. Model-based feedback control strategies can lead to unprecedented opportunities for effective dose delivery in plasma medicine.
Directory of Open Access Journals (Sweden)
Lorenzo Rakesh Sewanan
2016-10-01
Point mutations to the human gene TPM1 have been implicated in the development of both hypertrophic and dilated cardiomyopathies. Such observations have led to studies investigating the link between single residue changes and the biophysical behavior of the tropomyosin molecule. However, the degree to which these molecular perturbations explain the performance of intact sarcomeres containing mutant tropomyosin remains uncertain. Here, we present a modeling approach that integrates various aspects of tropomyosin’s molecular properties into a cohesive paradigm representing their impact on muscle function. In particular, we considered the effects of tropomyosin mutations on (1) persistence length, (2) equilibrium between thin filament blocked and closed regulatory states, and (3) the crossbridge duty cycle. After demonstrating the ability of the new model to capture Ca-dependent myofilament responses during both dynamic and steady-state activation, we used it to capture the effects of the hypertrophic cardiomyopathy (HCM)-related E180G and D175N mutations on skinned myofiber mechanics. Our analysis indicates that the fiber-level effects of the two mutations can be accurately described by a combination of changes to the three tropomyosin properties represented in the model. Subsequently, we used the model to predict mutation effects on muscle twitch. Both mutations led to increased twitch contractility as a consequence of diminished cooperative inhibition between thin filament regulatory units. Overall, simulations suggest that a common twitch phenotype for HCM-linked tropomyosin mutations includes both increased contractility and elevated diastolic tension.
Modelling strategies to predict the multi-scale effects of rural land management change
Bulygina, N.; Ballard, C. E.; Jackson, B. M.; McIntyre, N.; Marshall, M.; Reynolds, B.; Wheater, H. S.
2011-12-01
Changes to the rural landscape due to agricultural land management are ubiquitous, yet predicting the multi-scale effects of land management change on hydrological response remains an important scientific challenge. Much empirical research has been of little generic value due to inadequate design and funding of monitoring programmes, while the modelling issues challenge the capability of data-based, conceptual and physics-based modelling approaches. In this paper we report on a major UK research programme, motivated by a national need to quantify effects of agricultural intensification on flood risk. Working with a consortium of farmers in upland Wales, a multi-scale experimental programme (from experimental plots to 2nd order catchments) was developed to address issues of upland agricultural intensification. This provided data support for a multi-scale modelling programme, in which highly detailed physics-based models were conditioned on the experimental data and used to explore effects of potential field-scale interventions. A meta-modelling strategy was developed to represent detailed modelling in a computationally-efficient manner for catchment-scale simulation; this allowed catchment-scale quantification of potential management options. For more general application to data-sparse areas, alternative approaches were needed. Physics-based models were developed for a range of upland management problems, including the restoration of drained peatlands, afforestation, and changing grazing practices. Their performance was explored using literature and surrogate data; although subject to high levels of uncertainty, important insights were obtained, of practical relevance to management decisions. In parallel, regionalised conceptual modelling was used to explore the potential of indices of catchment response, conditioned on readily-available catchment characteristics, to represent ungauged catchments subject to land management change. Although based in part on
Melanoma risk prediction models
Directory of Open Access Journals (Sweden)
Nikolić Jelena
2014-01-01
Background/Aim. The lack of effective therapy for advanced stages of melanoma emphasizes the importance of preventive measures and screenings of the population at risk. Identifying individuals at high risk should allow targeted screenings and follow-up involving those who would benefit most. The aim of this study was to identify the most significant factors for melanoma prediction in our population and to create prognostic models for identification and differentiation of individuals at risk. Methods. This case-control study included 697 participants (341 patients and 356 controls) that underwent extensive interview and skin examination in order to check risk factors for melanoma. Pairwise univariate statistical comparison was used for the coarse selection of the most significant risk factors. These factors were fed into logistic regression (LR) and alternating decision tree (ADT) prognostic models that were assessed for their usefulness in identification of patients at risk of developing melanoma. Validation of the LR model was done by the Hosmer and Lemeshow test, whereas the ADT was validated by 10-fold cross-validation. The achieved sensitivity, specificity, accuracy and AUC for both models were calculated. The melanoma risk score (MRS) based on the outcome of the LR model was presented. Results. The LR model showed that the following risk factors were associated with melanoma: sunbeds (OR = 4.018; 95% CI 1.724-9.366 for those that sometimes used sunbeds), solar damage of the skin (OR = 8.274; 95% CI 2.661-25.730 for those with severe solar damage), hair color (OR = 3.222; 95% CI 1.984-5.231 for light brown/blond hair), the number of common naevi (over 100 naevi had OR = 3.57; 95% CI 1.427-8.931), the number of dysplastic naevi (from 1 to 10 dysplastic naevi, OR was 2.672; 95% CI 1.572-4.540; for more than 10 naevi, OR was 6.487; 95% CI 1.993-21.119), Fitzpatrick's phototype and the presence of congenital naevi. Red hair, phototype I and large congenital naevi were
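A risk-score sketch in the spirit of the MRS: summing the log odds ratios of the risk factors present and applying the logistic transform. The intercept and the reduction of each factor to a binary flag are assumptions made here for illustration, not the published scoring rule:

```python
import math

# Log odds ratios taken from the reported ORs; treating each factor as a
# simple present/absent flag is a simplification for this sketch.
WEIGHTS = {
    "sunbeds": math.log(4.018),
    "severe_solar_damage": math.log(8.274),
    "light_hair": math.log(3.222),
    "over_100_naevi": math.log(3.570),
    "over_10_dysplastic_naevi": math.log(6.487),
}
INTERCEPT = -3.0  # hypothetical baseline log-odds

def melanoma_risk(factors):
    """Logistic-regression-style risk: sigmoid of intercept plus the log-ORs of present factors."""
    logit = INTERCEPT + sum(WEIGHTS[f] for f in factors)
    return 1.0 / (1.0 + math.exp(-logit))
```

Each additional risk factor multiplies the odds by its OR, so the predicted probability rises monotonically as factors accumulate.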
Cestari, Andrea
2013-01-01
Predictive modeling is emerging as an important knowledge-based technology in healthcare. The interest in the use of predictive modeling reflects advances on different fronts such as the availability of health information from increasingly complex databases and electronic health records, a better understanding of causal or statistical predictors of health, disease processes and multifactorial models of ill-health and developments in nonlinear computer models using artificial intelligence or neural networks. These new computer-based forms of modeling are increasingly able to establish technical credibility in clinical contexts. The current state of knowledge is still quite young regarding the likely future direction of this so-called 'machine intelligence' and, therefore, how current relatively sophisticated predictive models will evolve in response to improvements in technology, which is advancing along a wide front. Predictive models in urology are gaining progressive popularity not only for academic and scientific purposes but also in clinical practice with the introduction of several nomograms dealing with the main fields of onco-urology.
Tharwat, Alaa; Moemen, Yasmine S; Hassanien, Aboul Ella
2016-12-09
Measuring toxicity is one of the main steps in drug development. Hence, there is a high demand for computational models to predict the toxicity effects of potential drugs. In this study, we used a dataset which consists of four toxicity effects: mutagenic, tumorigenic, irritant and reproductive effects. The proposed model consists of three phases. In the first phase, rough set-based methods are used to select the most discriminative features for reducing the classification time and improving the classification performance. Due to the imbalanced class distribution, in the second phase, different sampling methods such as Random Under-Sampling, Random Over-Sampling and the Synthetic Minority Oversampling Technique are used to solve the problem of imbalanced datasets. An ITerative Sampling (ITS) method is proposed to avoid the limitations of those methods. The ITS method has two steps. The first step (sampling step) iteratively modifies the prior distribution of the minority and majority classes. In the second step, a data cleaning method is used to remove the overlapping that is produced from the first step. In the third phase, a Bagging classifier is used to classify an unknown drug as toxic or non-toxic. The experimental results proved that the proposed model performed well in classifying the unknown samples according to all toxic effects in the imbalanced datasets.
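The random under- and over-sampling steps from the second phase can be sketched as follows (the drug records are placeholders, not data from the study):

```python
import random

def random_undersample(majority, minority, rng):
    """Drop majority-class examples at random until the classes are balanced."""
    return rng.sample(majority, len(minority)), minority

def random_oversample(majority, minority, rng):
    """Duplicate minority-class examples at random until the classes are balanced."""
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    return majority, minority + extra

rng = random.Random(42)
toxic = [("drug_a", 1), ("drug_b", 1)]                 # minority class (toxic)
nontoxic = [("drug_%d" % i, 0) for i in range(10)]     # majority class (non-toxic)

maj_u, min_u = random_undersample(nontoxic, toxic, rng)
maj_o, min_o = random_oversample(nontoxic, toxic, rng)
```

Under-sampling discards potentially useful majority examples while over-sampling risks overfitting to duplicated minority points, which is the trade-off the proposed ITS method is designed to avoid.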
The Effect of Nondeterministic Parameters on Shock-Associated Noise Prediction Modeling
Dahl, Milo D.; Khavaran, Abbas
2010-01-01
Engineering applications for aircraft noise prediction contain models for physical phenomena that enable solutions to be computed quickly. These models contain parameters that have an uncertainty not accounted for in the solution. To include uncertainty in the solution, nondeterministic computational methods are applied. Using prediction models for supersonic jet broadband shock-associated noise, fixed model parameters are replaced by probability distributions to illustrate one of these methods. The results show the impact of using nondeterministic parameters both on estimating the model output uncertainty and on the model spectral level prediction. In addition, a global sensitivity analysis is used to determine the influence of the model parameters on the output, and to identify the parameters with the least influence on model output.
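Replacing a fixed model parameter with a probability distribution and propagating samples through the model can be sketched as follows; the Gaussian spectral lobe is a stand-in for illustration, not the actual shock-associated noise model:

```python
import math
import random
import statistics

rng = random.Random(1)

def shock_noise_level(params, strouhal):
    """Toy spectral model: a Gaussian-shaped lobe whose peak position and width
    are the uncertain parameters (stand-ins for the real model's parameters)."""
    peak_st, width = params
    return 100.0 * math.exp(-((strouhal - peak_st) / width) ** 2)

# Monte Carlo propagation: fixed parameters become probability distributions,
# and the spread of the sampled outputs estimates the model output uncertainty.
samples = [shock_noise_level((rng.gauss(1.0, 0.1), abs(rng.gauss(0.5, 0.05))),
                             strouhal=1.2)
           for _ in range(2000)]
mean_level = statistics.mean(samples)
spread = statistics.stdev(samples)
```

The standard deviation of the sampled outputs quantifies how parameter uncertainty maps into spectral-level uncertainty, the same idea the abstract describes for the full noise model.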
Directory of Open Access Journals (Sweden)
Huili eYuan
2016-04-01
The biomass composition represented in constraint-based metabolic models is a key component for predicting cellular metabolism using flux balance analysis (FBA). Despite major advances in analytical technologies, it is often challenging to obtain a detailed composition of all major biomass components experimentally. Studies examining the influence of the biomass composition on the predictions of metabolic models have so far mostly been done on models of microorganisms. Little is known about the impact of varying biomass composition on flux prediction in FBA models of plants, whose metabolism is very versatile and complex because of the presence of multiple subcellular compartments. Also, the published metabolic models of plants differ in size and complexity. In this study, we examined the sensitivity of the predicted fluxes of plant metabolic models to biomass composition and model structure. These questions were addressed by evaluating the sensitivity of predictions of growth rates and central carbon metabolic fluxes to varying biomass compositions in three different genome-/large-scale metabolic models of Arabidopsis thaliana. Our results showed that fluxes through the central carbon metabolism were robust to changes in biomass composition. Nevertheless, comparisons between the predictions from three models using identical modelling constraints and objective function showed that model predictions were sensitive to the structure of the models, highlighting large discrepancies between the published models.
Energy Technology Data Exchange (ETDEWEB)
Drover, Damion Ryan
2011-12-01
One of the largest exports in the Southeast U.S. is forest products. Interest in biofuels using forest biomass has increased recently, leading to more research into better forest management practices (BMPs). The USDA Forest Service, along with the Oak Ridge National Laboratory, University of Georgia and Oregon State University, is researching the impacts of intensive forest management for biofuels on water quality and quantity at the Savannah River Site in South Carolina. Surface runoff from saturated areas, transporting excess nutrients and contaminants, is a potential water quality issue under investigation. Detailed maps of variable source areas and soil characteristics would therefore be helpful prior to treatment. The availability of remotely sensed and computed digital elevation models (DEMs) and spatial analysis tools makes it easy to calculate terrain attributes. These terrain attributes can be used in models to predict saturated areas or other attributes in the landscape. With laser altimetry, an area can be flown to produce very high resolution data, and the resulting data can be resampled into any resolution of DEM desired. Additionally, there exist many maps that are in various resolutions of DEM, such as those acquired from the U.S. Geological Survey. Problems arise when using maps derived from different resolution DEMs. For example, saturated areas can be under- or overestimated depending on the resolution used. The purpose of this study was to examine the effects of DEM resolution on the calculation of topographic wetness indices used to predict variable source areas of saturation, and to find the best resolutions to produce prediction maps of soil attributes like nitrogen, carbon, bulk density and soil texture for low-relief, humid-temperate forested hillslopes. Topographic wetness indices were calculated based on the derived terrain attributes, slope and specific catchment area, from five different DEM resolutions. The DEMs were resampled from LiDAR, which is a…
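The topographic wetness index used in the study above is conventionally defined as TWI = ln(a / tan β), with a the specific catchment area and β the local slope. A minimal sketch (the example cell values are invented):

```python
import math

def topographic_wetness_index(specific_catchment_area, slope_deg):
    """TWI = ln(a / tan(beta)): higher values flag likely saturated areas.

    specific_catchment_area: upslope contributing area per unit contour
    length (m^2/m); slope_deg: local slope in degrees. A small floor keeps
    nearly flat cells (slope ~ 0) from producing infinite indices.
    """
    beta = math.radians(slope_deg)
    return math.log(specific_catchment_area / max(math.tan(beta), 1e-6))

# Coarsening a DEM tends to enlarge catchment areas and flatten slopes,
# inflating TWI -- one reason saturated-area maps differ across resolutions.
twi_steep = topographic_wetness_index(50.0, 5.0)   # steep hillslope cell
twi_flat = topographic_wetness_index(500.0, 1.0)   # flat, convergent cell
print(round(twi_steep, 2), round(twi_flat, 2))
```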
MODEL PREDICTIVE CONTROL FUNDAMENTALS
African Journals Online (AJOL)
2012-07-02
Jul 2, 2012 … In this paper, we present an introduction to the theory and application of MPC with Matlab codes written to … model predictive control, linear systems, discrete-time systems, … and then compute very rapidly for this open-loop con…
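The receding-horizon idea behind MPC can be illustrated with a toy scalar plant (the plant parameters and weights below are invented; this does not reproduce the paper's Matlab codes). At each step an open-loop optimal control problem is solved and only the first input is applied:

```python
# Scalar discrete-time plant x[k+1] = a*x[k] + b*u[k]. For a one-step
# horizon with cost q*x[k+1]^2 + r*u^2, the minimizer is closed-form:
#   u* = -q*a*b*x / (q*b^2 + r)
a, b = 1.2, 1.0          # open-loop unstable plant (|a| > 1)
q, r = 1.0, 0.1          # state and input weights

def mpc_control(x):
    return -q * a * b * x / (q * b * b + r)

x = 5.0
trajectory = [x]
for _ in range(20):
    u = mpc_control(x)    # solve the open-loop problem at the current state...
    x = a * x + b * u     # ...apply only the first input, then repeat (receding horizon)
    trajectory.append(x)

print(f"final state {trajectory[-1]:.6f}")
```

Real MPC optimizes over an N-step horizon subject to constraints; the one-step closed-form above is only the smallest instance of that recipe.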
Directory of Open Access Journals (Sweden)
Francesco Cozzoli
Human infrastructures can modify ecosystems, thereby affecting the occurrence and spatial distribution of organisms, as well as ecosystem functionality. Sustainable development requires the ability to predict responses of species to anthropogenic pressures. We investigated the large-scale, long-term effect of important human alterations of benthic habitats with an integrated approach combining engineering and ecological modelling. We focused our analysis on the Oosterschelde basin (The Netherlands), which was partially embanked by a storm surge barrier (Oosterscheldekering, 1986). We made use of (1) a prognostic (numerical) environmental (hydrodynamic) model and (2) a novel application of quantile regression to Species Distribution Modeling (SDM) to simulate both the realized and potential (habitat suitability) abundance of four macrozoobenthic species: Scoloplos armiger, Peringia ulvae, Cerastoderma edule and Lanice conchilega. The analysis shows that part of the fluctuations in macrozoobenthic biomass stocks during the last decades is related to the effect of the coastal defense infrastructures on the basin morphology and hydrodynamics. The methodological framework we propose is particularly suitable for the analysis of large abundance datasets combined with high-resolution environmental data. Our analysis provides useful information on future changes in ecosystem functionality induced by human activities.
Syfert, Mindy M; Smith, Matthew J; Coomes, David A
2013-01-01
Species distribution models (SDMs) trained on presence-only data are frequently used in ecological research and conservation planning. However, users of SDM software are faced with a variety of options, and it is not always obvious how selecting one option over another will affect model performance. Working with MaxEnt software and with tree fern presence data from New Zealand, we assessed whether (a) choosing to correct for geographical sampling bias and (b) using complex environmental response curves have strong effects on goodness of fit. SDMs were trained on tree fern data, obtained from an online biodiversity data portal, with two sources that differed in size and geographical sampling bias: a small, widely-distributed set of herbarium specimens and a large, spatially clustered set of ecological survey records. We attempted to correct for geographical sampling bias by incorporating sampling bias grids in the SDMs, created from all georeferenced vascular plants in the datasets, and explored model complexity issues by fitting a wide variety of environmental response curves (known as "feature types" in MaxEnt). In each case, goodness of fit was assessed by comparing predicted range maps with tree fern presences and absences using an independent national dataset to validate the SDMs. We found that correcting for geographical sampling bias led to major improvements in goodness of fit, but did not entirely resolve the problem: predictions made with clustered ecological data were inferior to those made with the herbarium dataset, even after sampling bias correction. We also found that the choice of feature type had negligible effects on predictive performance, indicating that simple feature types may be sufficient once sampling bias is accounted for. Our study emphasizes the importance of reducing geographical sampling bias, where possible, in datasets used to train SDMs, and the effectiveness, and indeed necessity, of sampling bias correction within MaxEnt.
Directory of Open Access Journals (Sweden)
Juan Guillermo Diaz Ochoa
2013-01-01
In this study, we focus on a novel multi-scale modeling approach for spatiotemporal prediction of the distribution of substances and resulting hepatotoxicity by combining cellular models, a 2D liver model, and a whole-body model. As a case study, we focused on predicting human hepatotoxicity upon treatment with acetaminophen based on in vitro toxicity data and potential inter-individual variability in gene expression and enzyme activities. By aggregating mechanistic, genome-based in silico cells to a novel 2D liver model and eventually to a whole-body model, we predicted pharmacokinetic properties, metabolism, and the onset of hepatotoxicity in an in silico patient. Depending on the concentration of acetaminophen in the liver and the accumulation of toxic metabolites, cell integrity in the liver as a function of space and time as well as changes in the elimination rate of substances were estimated. We show that the variations in elimination rates also influence the distribution of acetaminophen and its metabolites in the whole body. Our results are in agreement with experimental results. What is more, the integrated model also predicted variations in drug toxicity depending on alterations of metabolic enzyme activities. Variations in enzyme activity, in turn, reflect genetic characteristics or diseases of individuals. In conclusion, this framework presents an important basis for efficiently integrating inter-individual variability data into models, paving the way for personalized or stratified predictions of drug toxicity and efficacy.
Xiong, Dapeng; Liu, Rongjie; Xiao, Fen; Gao, Xieping
2014-12-01
Core promoters play significant and extensive roles in the initiation and regulation of DNA transcription, yet their identification remains one of the most challenging problems. Because of the diverse nature of core promoters, the results obtained through existing computational approaches are not satisfactory. None of them considers the potential influence on predictive performance of the interference between neighboring TSSs in TSS clusters. In this paper, we take this factor fully into account and propose an approach to locate potential TSS clusters according to the correlation of regional profiles of DNA and TSS clusters. On this basis, we further present a novel computational approach (ProMT) for promoter prediction using a Markov chain model and predicted TSS clusters based on structural properties of DNA. Extensive experiments demonstrated that ProMT can significantly improve predictive performance. Therefore, considering interference between neighboring TSSs is essential for a wider range of promoter prediction.
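The Markov-chain component of such a predictor can be sketched generically: train one first-order chain on promoter-like sequences and one on background, then score a candidate by its log-odds. The toy training sequences below are invented and the model is far simpler than ProMT, which also uses DNA structural properties.

```python
import math
from collections import defaultdict

def train_markov(seqs, alphabet="ACGT", pseudo=1.0):
    """First-order Markov chain P(next base | current base) with
    pseudocounts, a generic stand-in for the Markov-chain component of a
    promoter predictor."""
    counts = defaultdict(lambda: defaultdict(lambda: pseudo))
    for s in seqs:
        for prev, nxt in zip(s, s[1:]):
            counts[prev][nxt] += 1
    probs = {}
    for prev in alphabet:
        total = sum(counts[prev][n] for n in alphabet)
        probs[prev] = {n: counts[prev][n] / total for n in alphabet}
    return probs

def log_odds(seq, promoter_model, background_model):
    """Sum of per-transition log-likelihood ratios; > 0 is promoter-like."""
    return sum(math.log(promoter_model[p][n] / background_model[p][n])
               for p, n in zip(seq, seq[1:]))

# Toy training sets: CG-rich "promoters" vs AT-rich "background".
pm = train_markov(["CGCGCGGC", "GCGGCCGC", "CGGCGCGG"])
bg = train_markov(["ATATTAAT", "TTATAATA", "AATATTAT"])

print(log_odds("CGCGGC", pm, bg))   # positive: promoter-like
print(log_odds("ATTATA", pm, bg))   # negative: background-like
```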
Nominal model predictive control
Grüne, Lars
2013-01-01
5 p., to appear in Encyclopedia of Systems and Control, Tariq Samad, John Baillieul (eds.); International audience; Model Predictive Control is a controller design method which synthesizes a sampled-data feedback controller from the iterative solution of open-loop optimal control problems. We describe the basic functionality of MPC controllers, their properties regarding feasibility, stability and performance, and the assumptions needed in order to rigorously ensure these properties in a nominal setting.
Candidate Prediction Models and Methods
DEFF Research Database (Denmark)
Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik
2005-01-01
This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines the possibilities w.r.t. different numerical weather predictions actually available to the project.
Directory of Open Access Journals (Sweden)
GOYAL Kumar Gyanendra
2012-10-01
This paper presents the suitability of artificial neural network (ANN) models for predicting the shelf life of processed cheese stored at 7-8 °C. Soluble nitrogen, pH, standard plate count, yeast and mould count, and spore count were the input variables, and sensory score was the output variable. Mean square error, root mean square error, the coefficient of determination and the Nash-Sutcliffe coefficient were used to test the effectiveness of the developed ANN models. Excellent agreement was found between the experimental results and these mathematical parameters, confirming that ANN models are very effective in predicting the shelf life of processed cheese.
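The four evaluation metrics named in the abstract are easy to compute directly. The observed/predicted sensory scores below are hypothetical placeholders, not the study's data; note that when the Nash-Sutcliffe coefficient is computed against the mean of the observations, it coincides with this form of the coefficient of determination.

```python
import math

def shelf_life_metrics(observed, predicted):
    """MSE, RMSE, coefficient of determination (R^2) and Nash-Sutcliffe
    efficiency (NSE) for a set of model predictions."""
    n = len(observed)
    mean_obs = sum(observed) / n
    sse = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    sst = sum((o - mean_obs) ** 2 for o in observed)
    mse = sse / n
    return {"MSE": mse,
            "RMSE": math.sqrt(mse),
            "R2": 1.0 - sse / sst,     # vs. mean-of-observations baseline
            "NSE": 1.0 - sse / sst}    # same formula in this formulation

# Hypothetical sensory scores (observed) vs. ANN predictions.
obs = [7.0, 6.5, 6.0, 5.2, 4.8]
pred = [6.9, 6.4, 6.1, 5.4, 4.7]
m = shelf_life_metrics(obs, pred)
print({k: round(v, 3) for k, v in m.items()})
```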
Angelieri, Cintia Camila Silva; Adams-Hosking, Christine; Ferraz, Katia Maria Paschoaletto Micchi de Barros; de Souza, Marcelo Pereira; McAlpine, Clive Alexander
2016-01-01
A mosaic of intact native and human-modified vegetation can provide important habitat for top predators such as the puma (Puma concolor), avoiding negative effects on other species and ecological processes due to cascading trophic interactions. This study investigates the effects of restoration scenarios on the puma's habitat suitability in the most developed Brazilian region (São Paulo State). Species Distribution Models incorporating restoration scenarios were developed using the species' occurrence information to (1) map habitat suitability of pumas in São Paulo State, Southeast Brazil; (2) test the relative contribution of environmental variables ecologically relevant to the species' habitat suitability and (3) project the predicted habitat suitability onto future native vegetation restoration scenarios. The Maximum Entropy algorithm was used (test AUC of 0.84 ± 0.0228) based on seven non-correlated environmental variables and non-autocorrelated presence-only records (n = 342). The percentage of native vegetation (positive influence), elevation (positive influence) and density of roads (negative influence) were the most important environmental variables in the model. Model projections onto restoration scenarios reflected the strong positive relationship between pumas and native vegetation. These projections identified new high-suitability areas for pumas (probability of presence >0.5) in highly deforested regions. High-suitability areas increased from 5.3% to 8.5% of the total State extension when the landscapes were restored to at least the minimum native vegetation cover (20%) established by the Brazilian Forest Code for private lands. This study highlights the importance of a landscape planning approach to improve the conservation outlook for pumas and other species, including not only the establishment and management of protected areas, but also habitat restoration on private lands. Importantly, the results may inform environmental…
Predictive Surface Complexation Modeling
Energy Technology Data Exchange (ETDEWEB)
Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences
2016-11-29
Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO₂ and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.
Refining Sunrise/set Prediction Models by Accounting for the Effects of Refraction
Wilson, Teresa; Bartlett, Jennifer L.
2016-01-01
Current atmospheric models used to predict the times of sunrise and sunset have an error of one to four minutes at mid-latitudes (0°-55° N/S). At higher latitudes, slight changes in refraction may cause significant discrepancies, including determining even whether the Sun appears to rise or set. While different components of refraction are known, how they affect predictions of sunrise/set has not yet been quantified. A better understanding of the contributions from temperature profile, pressure, humidity, and aerosols could significantly improve the standard prediction. Because sunrise/set times and meteorological data from multiple locations will be necessary for a thorough investigation of the problem, we will collect these data using smartphones as part of a citizen science project. This analysis will lead to more complete models that will provide more accurate times for navigators and outdoorsmen alike.
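One widely used empirical baseline for the refraction term is Bennett's (1982) formula; a minimal sketch, noting that real refraction at the horizon departs from this standard-conditions estimate for exactly the reasons (temperature profile, pressure, humidity, aerosols) the project sets out to quantify:

```python
import math

def bennett_refraction_arcmin(apparent_alt_deg):
    """Bennett's (1982) empirical atmospheric refraction at standard
    conditions (~1010 hPa, 10 C), in arcminutes, for an apparent
    altitude in degrees: R = cot(h + 7.31 / (h + 4.4))."""
    x = math.radians(apparent_alt_deg + 7.31 / (apparent_alt_deg + 4.4))
    return 1.0 / math.tan(x)

r0 = bennett_refraction_arcmin(0.0)   # at the apparent horizon
print(f"refraction at horizon: {r0:.1f} arcmin (~{r0 / 60:.2f} deg)")
# ~34 arcmin: the Sun is geometrically below the horizon when it appears
# to touch it, which is why refraction shifts rise/set times by minutes.
```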
White, Jeremy T.; Langevin, Christian D.; Hughes, Joseph D.
2010-01-01
Calibration of highly-parameterized numerical models typically requires explicit Tikhonov-type regularization to stabilize the inversion process. This regularization can take the form of a preferred parameter values scheme or preferred relations between parameters, such as the preferred equality scheme. The resulting parameter distributions calibrate the model to a user-defined acceptable level of model-to-measurement misfit, and also minimize regularization penalties on the total objective function. To evaluate the potential impact of these two regularization schemes on model predictive ability, a dataset generated from a synthetic model was used to calibrate a highly-parameterized variable-density SEAWAT model. The key prediction is the length of time a synthetic pumping well will produce potable water. A bi-objective Pareto analysis was used to explicitly characterize the relation between two competing objective function components: measurement error and regularization error. Results of the Pareto analysis indicate that both types of regularization schemes affect the predictive ability of the calibrated model.
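The bi-objective Pareto characterization amounts to filtering candidate calibrations down to their non-dominated set. A minimal sketch with invented error pairs (not the SEAWAT results from the study):

```python
def pareto_front(points):
    """Non-dominated subset for bi-objective minimization. Each point is
    (measurement_error, regularization_error); q dominates p if it is no
    worse in both objectives and strictly better in at least one."""
    def dominates(q, p):
        return (q[0] <= p[0] and q[1] <= p[1]
                and (q[0] < p[0] or q[1] < p[1]))
    return [p for p in points
            if not any(dominates(q, p) for q in points)]

# Hypothetical (measurement_error, regularization_error) per calibration run.
runs = [(1.0, 9.0), (2.0, 4.0), (3.0, 5.0), (4.0, 2.0), (6.0, 1.5)]
front = pareto_front(runs)
print(front)   # (3.0, 5.0) is dominated by (2.0, 4.0) and drops out
```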
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
…obtained from the iterative two-stage method also improved predictive performance of the individual models and model averaging in both synthetic and experimental studies.
Bondi, Robert W; Igne, Benoît; Drennen, James K; Anderson, Carl A
2012-12-01
Near-infrared spectroscopy (NIRS) is a valuable tool in the pharmaceutical industry, presenting opportunities for online analyses to achieve real-time assessment of intermediates and finished dosage forms. The purpose of this work was to investigate the effect of experimental designs on prediction performance of quantitative models based on NIRS using a five-component formulation as a model system. The following experimental designs were evaluated: five-level, full factorial (5-L FF); three-level, full factorial (3-L FF); central composite; I-optimal; and D-optimal. The factors for all designs were acetaminophen content and the ratio of microcrystalline cellulose to lactose monohydrate. Other constituents included croscarmellose sodium and magnesium stearate (content remained constant). Partial least squares-based models were generated using data from individual experimental designs that related acetaminophen content to spectral data. The effect of each experimental design was evaluated by determining the statistical significance of the difference in bias and standard error of the prediction for that model's prediction performance. The calibration model derived from the I-optimal design had similar prediction performance as did the model derived from the 5-L FF design, despite containing 16 fewer design points. It also outperformed all other models estimated from designs with similar or fewer numbers of samples. This suggested that experimental-design selection for calibration-model development is critical, and optimum performance can be achieved with efficient experimental designs (i.e., optimal designs).
Adams, E. W.; Johnston, J. P.
1983-01-01
A mixing-length model is developed for the prediction of turbulent boundary layers with convex streamwise curvature. For large layer thickness ratio, delta/R greater than 0.05, the model scales mixing length on the wall radius of curvature, R. For small delta/R, ordinary flat wall modeling is used for the mixing-length profile with curvature corrections, following the recommendations of Eide and Johnston (1976). Effects of streamwise change of curvature are considered; a strong lag from equilibrium is required when R increases downstream. Fifteen separate data sets were compared, including both hydrodynamic and heat transfer results. Six of these computations are presented and compared to experiment.
Panayidou, Klea; Gsteiger, Sandro; Egger, Matthias; Kilcher, Gablu; Carreras, Máximo; Efthimiou, Orestis; Debray, Thomas P A; Trelle, Sven; Hummel, Noemi
2016-09-01
The performance of a drug in a clinical trial setting often does not reflect its effect in daily clinical practice. In this third of three reviews, we examine the applications that have been used in the literature to predict real-world effectiveness from randomized controlled trial efficacy data. We searched MEDLINE, EMBASE from inception to March 2014, the Cochrane Methodology Register, and websites of key journals and organisations and reference lists. We extracted data on the type of model and predictions, data sources, validation and sensitivity analyses, disease area and software. We identified 12 articles in which four approaches were used: multi-state models, discrete event simulation models, physiology-based models and survival and generalized linear models. Studies predicted outcomes over longer time periods in different patient populations, including patients with lower levels of adherence or persistence to treatment or examined doses not tested in trials. Eight studies included individual patient data. Seven examined cardiovascular and metabolic diseases and three neurological conditions. Most studies included sensitivity analyses, but external validation was performed in only three studies. We conclude that mathematical modelling to predict real-world effectiveness of drug interventions is not widely used at present and not well validated. © 2016 The Authors Research Synthesis Methods Published by John Wiley & Sons Ltd.
An Effective Time Series Analysis for Stock Trend Prediction Using ARIMA Model for Nifty Midcap-50
Directory of Open Access Journals (Sweden)
B. Uma Devi
2013-01-01
Data mining tools have played a vital role in exploring data drawn from different warehouses. Using data mining tools and analytical techniques, we carry out a substantial amount of research to explore a new approach to investment decisions. A market with a huge volume of investors requires sufficient knowledge to predict and control investments, yet the stock market sometimes fails to attract new investors, who are either unaware of it or unwilling to expose themselves to its risk. An approach with adequate expertise is designed to help investors discover hidden patterns in historical data that have feasible predictive ability for their investment decisions. In this paper, the top four NSE Nifty Midcap-50 companies by market capitalization were selected for analysis. Historical data play a significant role in giving investors an overview of market behavior during the past decade. Stock data for the past five years were collected and the ARIMA model was trained with different parameters. Test criteria, namely the Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC), were applied to assess the accuracy of the model. The performance of the trained model is analyzed, and it is also tested to find the trend and market behavior for future forecasts.
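The AIC/BIC comparison step can be sketched for least-squares time-series fits using the Gaussian-likelihood form AIC = n·ln(RSS/n) + 2k and BIC = n·ln(RSS/n) + k·ln(n). The forecast-error series and parameter counts below are invented, not the Nifty Midcap-50 results:

```python
import math

def aic_bic(residuals, k):
    """AIC and BIC for a least-squares model (e.g. a fitted ARIMA with k
    estimated parameters), via the residual sum of squares. Lower values
    indicate a better parsimony/fit trade-off."""
    n = len(residuals)
    rss = sum(r * r for r in residuals)
    aic = n * math.log(rss / n) + 2 * k
    bic = n * math.log(rss / n) + k * math.log(n)
    return aic, bic

# Hypothetical one-step-ahead forecast errors from two candidate models:
# the second fits slightly better but uses more parameters.
errors_small = [0.4, -0.3, 0.5, -0.2, 0.1, -0.4, 0.3, -0.1]    # k = 2
errors_big = [0.35, -0.3, 0.45, -0.2, 0.1, -0.35, 0.3, -0.1]   # k = 5
aic_s, bic_s = aic_bic(errors_small, 2)
aic_b, bic_b = aic_bic(errors_big, 5)
print(aic_s, aic_b)   # the small model wins despite its larger RSS
```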
Shi, Ya-Zhou; Wu, Yuan-Yan; Tan, Zhi-Jie
2014-01-01
To bridge the gap between the sequences and 3-dimensional (3D) structures of RNAs, some computational models have been proposed for predicting RNA 3D structures. However, the existing models seldom consider conditions departing from room/body temperature and high salt (1 M NaCl), and thus can generally hardly predict thermodynamics and salt effects. In this study, we propose a coarse-grained model with implicit salt for RNAs to predict 3D structures, stability and salt effects. Combined with a Monte Carlo simulated annealing algorithm and a coarse-grained force field, the model folds 46 tested RNAs (≤45 nt) including pseudoknots into their native-like structures from their sequences, with an overall mean RMSD of 3.5 Å and an overall minimum RMSD of 1.9 Å from the experimental structures. For 30 RNA hairpins, the present model also gives reliable predictions for the stability and salt effect, with a mean deviation of ~1.0 °C in melting temperatures, as compared with…
Panthee, Nirmal; Okada, Jun-ichi; Washio, Takumi; Mochizuki, Youhei; Suzuki, Ryohei; Koyama, Hidekazu; Ono, Minoru; Hisada, Toshiaki; Sugiura, Seiryo
2016-07-01
Despite extensive studies on clinical indices for the selection of patient candidates for cardiac resynchronization therapy (CRT), approximately 30% of selected patients do not respond to this therapy. Herein, we examined whether CRT simulations based on individualized realistic three-dimensional heart models can predict the therapeutic effect of CRT in a canine model of heart failure with left bundle branch block. In four canine models of failing heart with dyssynchrony, individualized three-dimensional heart models reproducing the electromechanical activity of each animal were created based on computed tomography images. CRT simulations were performed for 25 patterns of three ventricular pacing lead positions. Lead positions producing the best and the worst therapeutic effects were selected in each model. The validity of predictions was tested in acute experiments in which hearts were paced from the sites identified by simulations. We found significant correlations between the experimentally observed improvement in ejection fraction (EF) and the predicted improvements in ejection fraction (P<0.01) or the maximum value of the derivative of left ventricular pressure (P<0.01). The optimal lead positions produced better outcomes compared with the worst positioning in all dogs studied, although there were significant variations in responses. Variations in ventricular wall thickness among the dogs may have contributed to these responses. Thus CRT simulations using the individualized three-dimensional heart models can predict acute hemodynamic improvement, and help determine the optimal positions of the pacing lead.
Zhu, Ling; Shi, Xinling; Liu, Yajie
2009-02-01
Traditional clinical trial designs always depend on expert opinion and lack statistical evaluation. In this article, we present a method and illustrate how population parameter uncertainty may be incorporated in the overall simulation model. Using the techniques of clinical trial simulation (CTS) and setting up predictions on the basis of pharmacokinetic-pharmacodynamic (PK-PD) models, we advance the modeling methods for simulation, for treatment effects, and for the clinical trial power under the given PK-PD conditions. We then discuss the model of uncertainty, suggest an ANOVA-based method, add η² statistics for sensitivity analysis, and canvass the effect of uncertainty about population parameters on clinical trial power. The results from simulations and the indices derived from this type of sensitivity analysis may be used for grading the influence of uncertainty about different population parameters on prediction quality. The experimental results are satisfactory and the approach presented has practical value in clinical trials.
Smith, Kathleen S.; Ranville, James F.; Adams, M.; Choate, LaDonna M.; Church, Stan E.; Fey, David L.; Wanty, Richard B.; Crock, James G.
2006-01-01
The chemical speciation of metals influences their biological effects. The Biotic Ligand Model (BLM) is a computational approach to predict chemical speciation and acute toxicological effects of metals on aquatic biota. Recently, the U.S. Environmental Protection Agency incorporated the BLM into their regulatory water-quality criteria for copper. Results from three different laboratory copper toxicity tests were compared with BLM predictions for simulated test-waters. This was done to evaluate the ability of the BLM to accurately predict the effects of hardness and concentrations of dissolved organic carbon (DOC) and iron on aquatic toxicity. In addition, we evaluated whether the BLM and the three toxicity tests provide consistent results. Comparison of BLM predictions with two types of Ceriodaphnia dubia toxicity tests shows that there is fairly good agreement between predicted LC50 values computed by the BLM and LC50 values determined from the two toxicity tests. Specifically, the effect of increasing calcium concentration (and hardness) on copper toxicity appears to be minimal. Also, there is fairly good agreement between the BLM and the two toxicity tests for test solutions containing elevated DOC, for which the LC50 is 3-to-5 times greater (less toxic) than the LC50 for the lower-DOC test water. This illustrates the protective effects of DOC on copper toxicity and demonstrates the ability of the BLM to predict these protective effects. In contrast, for test solutions with added iron there is a decrease in LC50 values (increase in toxicity) in results from the two C. dubia toxicity tests, and the agreement between BLM LC50 predictions and results from these toxicity tests is poor. The inability of the BLM to account for competitive iron binding to DOC or DOC fractionation may be a significant shortcoming of the BLM for predicting site-specific water-quality criteria in streams affected by iron-rich acidic drainage in mined and mineralized areas.
Darby, S C; Pike, M. C.
1988-01-01
Epidemiological studies of active smokers have shown that the duration of smoking has a much greater effect on lung cancer risk than the amount smoked. This observation suggests that passive smoking might be much more harmful than would be predicted from measures of the level of exposure alone, as it is often of very long duration, frequently beginning in early childhood. In this paper we have investigated this using a multistage model with five stages. The model is shown to provide an excellent…
Improved modeling of GaN HEMTs for predicting thermal and trapping-induced-kink effects
Jarndal, Anwar; Ghannouchi, Fadhel M.
2016-09-01
In this paper, an improved modeling approach has been developed and validated for GaN high electron mobility transistors (HEMTs). The proposed analytical model accurately simulates the drain current and its inherent trapping and thermal effects. A genetic-algorithm-based procedure is developed to automatically find the fitting parameters of the model. The developed modeling technique is implemented on a packaged GaN-on-Si HEMT and validated by DC and small-/large-signal RF measurements. The model is also employed for designing and realizing a switch-mode inverse class-F power amplifier. The amplifier simulations showed very good agreement with RF large-signal measurements.
Moreno, Jonathan D; Zhu, Z Iris; Yang, Pei-Chi; Bankston, John R; Jeng, Mao-Tsuen; Kang, Chaoyi; Wang, Lianguo; Bayer, Jason D; Christini, David J; Trayanova, Natalia A; Ripplinger, Crystal M; Kass, Robert S; Clancy, Colleen E
2011-08-31
A long-sought, and thus far elusive, goal has been to develop drugs to manage diseases of excitability. One such disease that affects millions each year is cardiac arrhythmia, which occurs when electrical impulses in the heart become disordered, sometimes causing sudden death. Pharmacological management of cardiac arrhythmia has failed because it is not possible to predict how drugs that target cardiac ion channels, and have intrinsically complex dynamic interactions with ion channels, will alter the emergent electrical behavior generated in the heart. Here, we applied a computational model, which was informed and validated by experimental data, that defined key measurable parameters necessary to simulate the interaction kinetics of the anti-arrhythmic drugs flecainide and lidocaine with cardiac sodium channels. We then used the model to predict the effects of these drugs on normal human ventricular cellular and tissue electrical activity in the setting of a common arrhythmia trigger, spontaneous ventricular ectopy. The model forecasts the clinically relevant concentrations at which flecainide and lidocaine exacerbate, rather than ameliorate, arrhythmia. Experiments in rabbit hearts and simulations in human ventricles based on magnetic resonance images validated the model predictions. This computational framework initiates the first steps toward development of a virtual drug-screening system that models drug-channel interactions and predicts the effects of drugs on emergent electrical activity in the heart.
Shi, J Q; Wang, B; Will, E J; West, R M
2012-11-20
We propose a new semiparametric model for functional regression analysis, combining a parametric mixed-effects model with a nonparametric Gaussian process regression model, namely a mixed-effects Gaussian process functional regression model. The parametric component can provide explanatory information between the response and the covariates, whereas the nonparametric component can add nonlinearity. We can model the mean and covariance structures simultaneously, combining the information borrowed from other subjects with the information collected from each individual subject. We apply the model to dose-response curves that describe changes in the responses of subjects for differing levels of the dose of a drug or agent and have a wide application in many areas. We illustrate the method for the management of renal anaemia. An individual dose-response curve is improved when more information is included by this mechanism from the subject/patient over time, enabling a patient-specific treatment regime.
Dumouchel, T; McCall, M; Lemay, F; Bennett, L; Lewis, B; Bean, M
2016-12-01
The Predictive Code for Aircrew Radiation Exposure (PCAIRE) is a semi-empirical code that estimates both ambient dose equivalent, based on years of on-board measurements, and effective dose to aircrew. Currently, PCAIRE estimates effective dose by converting the ambient dose equivalent to effective dose (E/H) using a model that is based on radiation transport calculations and on the radiation weighting factors recommended in International Commission on Radiological Protection (ICRP) 60. In this study, a new semi-empirical E/H model is proposed to replace the existing transport calculation models. The new model is based on flight data measured using a tissue-equivalent proportional counter (TEPC). The measured flight TEPC data are separated into a low- and a high-lineal-energy spectrum using an amplitude-weighted (137)Cs TEPC spectrum. The high-lineal-energy spectrum is determined by subtracting the low-lineal-energy spectrum from the measured flight TEPC spectrum. With knowledge of E/H for the low- and high-lineal-energy spectra, the total E/H is estimated for a given flight altitude and geographic location. The semi-empirical E/H model also uses new radiation weighting factors to align the model with the most recent ICRP 103 recommendations. The ICRP 103-based semi-empirical effective dose model predicts that there is a ∼30 % reduction in dose in comparison with the ICRP 60-based model. Furthermore, the ambient dose equivalent is now a more conservative dose estimate for jet aircraft altitudes in the range of 7-13 km (FL230-430). This new semi-empirical E/H model is validated against E/H predicted from a Monte Carlo N-Particle transport code simulation of cosmic ray propagation through the Earth's atmosphere. Its implementation allows PCAIRE to provide an accurate semi-empirical estimate of the effective dose.
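A minimal numeric sketch of the described spectrum separation, with invented bin values and invented E/H conversion factors (the real model uses measured in-flight TEPC spectra and ICRP 103-based weighting):

```python
import numpy as np

# Hypothetical lineal-energy spectra (dose per bin; all numbers illustrative).
flight = np.array([40., 35., 25., 12., 6., 3., 1.])   # measured flight TEPC
cs137  = np.array([42., 36., 24., 10., 2., 0., 0.])   # low-LET 137Cs template

# Amplitude-weight the 137Cs template to the flight spectrum at the lowest
# bins, then split the measurement into low- and high-lineal-energy parts.
scale = flight[:3].sum() / cs137[:3].sum()
low = np.minimum(scale * cs137, flight)   # low-LET part cannot exceed measurement
high = flight - low

# Combine with assumed E/H conversion factors for each component
# (these two numbers are placeholders, not ICRP-tabulated values).
EH_LOW, EH_HIGH = 0.8, 1.6
H_low, H_high = low.sum(), high.sum()
eh_total = (EH_LOW * H_low + EH_HIGH * H_high) / (H_low + H_high)
```

The total E/H is a dose-weighted average of the two components, which is why it tracks the balance of low- and high-LET radiation at a given altitude and location.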
Directory of Open Access Journals (Sweden)
Ina eBornkessel-Schlesewsky
2015-11-01
Hierarchical predictive coding has been identified as a possible unifying principle of brain function, and recent work in cognitive neuroscience has examined how it may be affected by age-related changes. Using language comprehension as a test case, the present study aimed to dissociate age-related changes in prediction generation versus internal model adaptation following a prediction error. Event-related brain potentials (ERPs) were measured in a group of older adults (60-81 years; n = 40) as they read sentences of the form The opposite of black is white/yellow/nice. Replicating previous work in young adults, results showed a target-related P300 for the expected antonym (white), an effect assumed to reflect a prediction match, and a graded N400 effect for the two incongruous conditions (i.e. a larger N400 amplitude for the incongruous continuation not related to the expected antonym, nice, versus the incongruous associated condition, yellow). These effects were followed by a late positivity, again with a larger amplitude in the incongruous non-associated versus incongruous associated condition. Analyses using linear mixed-effects models showed that the target-related P300 effect and the N400 effect for the incongruous non-associated condition were both modulated by age, suggesting that age-related changes affect both prediction generation and model adaptation. However, effects of age were outweighed by the interindividual variability of ERP responses, as reflected in the high proportion of variance captured by the inclusion of by-condition random slopes for participants and items. We thus argue that, at both a neurophysiological and a functional level, the notion of general differences between language processing in young and older adults may only be of limited use, and that future research should seek to better understand the causes of interindividual variability in the ERP responses of older adults and its relation to cognitive
The effect of non-radial motions on the CDM model predictions
Popolo, A D
1998-01-01
In this paper we show how non-radial motions, originating from the tidal interaction of the irregular mass distribution within and around protoclusters, can solve some of the problems of the CDM model. The first is the discrepancy between the two-point correlation function of clusters predicted by CDM and the observed one. We compare the two-point correlation function that we obtain when taking account of non-radial motions with that obtained by Sutherland & Efstathiou (1991) from the analysis of Geller & Huchra's (1988) deep redshift survey, and with the data points for the APM clusters obtained by Efstathiou et al. (1992). The second is the over-production of X-ray cluster abundance predicted by the CDM model. In this case we compare the X-ray temperature distribution function, calculated using Press-Schechter theory and Evrard's (1990) prescriptions for the mass-temperature relation, again taking account of non-radial motions, with the Henry & Arnaud (1991) and Edge et al. (1990) X-ray temperature ...
Applying risk and resilience models to predicting the effects of media violence on development.
Prot, Sara; Gentile, Douglas A
2014-01-01
Although the effects of media violence on children and adolescents have been studied for over 50 years, they remain controversial. Much of this controversy is driven by a misunderstanding of causality that seeks the cause of atrocities such as school shootings. Luckily, several recent developments in risk and resilience theories offer a way out of this controversy. Four risk and resilience models are described, including the cascade model, dose-response gradients, pathway models, and turning-point models. Each is described and applied to the existing media effects literature. Recommendations for future research are discussed with regard to each model. In addition, we examine current developments in theorizing that stressors have sensitizing versus steeling effects and recent interest in biological and gene by environment interactions. We also discuss several of the cultural aspects that have supported the polarization and misunderstanding of the literature, and argue that applying risk and resilience models to the theories and data offers a more balanced way to understand the subtle effects of media violence on aggression within a multicausal perspective.
Pirdavani, Ali; Brijs, Tom; Bellemans, Tom; Kochan, Bruno; Wets, Geert
2013-01-01
Travel demand management (TDM) consists of a variety of policy measures that affect the transportation system's effectiveness by changing travel behavior. Although the primary objective of implementing such TDM strategies is not to improve traffic safety, their impact on traffic safety should not be neglected. The main purpose of this study is to evaluate the traffic safety impact of a fuel-cost increase scenario (i.e. increasing the fuel price by 20%) in Flanders, Belgium. Since TDM strategies are usually conducted at an aggregate level, crash prediction models (CPMs) should also be developed at a geographically aggregated level. Therefore zonal crash prediction models (ZCPMs) are considered to model the association between observed crashes in each zone and a set of predictor variables. To this end, an activity-based transportation model framework is applied to produce exposure metrics to be used in the prediction models. This allows a more detailed and reliable assessment, since TDM strategies are inherently modeled in the activity-based framework, unlike traditional models in which the impact of TDM strategies is assumed. The crash data used in this study consist of fatal and injury crashes observed between 2004 and 2007. The network and socio-demographic variables are collected from other sources. Different ZCPMs are developed to predict the number of injury crashes (NOCs) (disaggregated by severity level and crash type) for both the null and the fuel-cost increase scenario. The results show a considerable traffic safety benefit of the fuel-cost increase scenario, apart from its impact on reducing the total vehicle kilometers traveled (VKT). A 20% increase in fuel price is predicted to reduce the annual VKT by 5.02 billion (11.57% of the total annual VKT in Flanders), which causes the total NOCs to decline by 2.83%.
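A common functional form for zonal crash models is log-linear in exposure. The sketch below uses made-up coefficients to show why a given percentage reduction in VKT translates into a smaller percentage reduction in predicted crashes when the exposure elasticity is below one; the paper's ZCPMs include many more predictors:

```python
import math

# Illustrative zonal crash model (coefficients invented for the sketch):
# E[NOC] = exp(b0 + b1 * ln(VKT)), so b1 acts as an exposure elasticity.
B0, B1 = -2.0, 0.65

def expected_crashes(vkt):
    return math.exp(B0 + B1 * math.log(vkt))

base = expected_crashes(100.0)                       # baseline zonal VKT
scenario = expected_crashes(100.0 * (1 - 0.1157))    # fuel-price scenario VKT
reduction = 1 - scenario / base                      # relative NOC reduction
```

With an elasticity below one, an 11.57% VKT cut yields a proportionally smaller crash reduction, consistent in direction with the 2.83% NOC decline the study reports (the exact figure depends on the fitted coefficients).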
Wang, Junjian; Kang, Qinjun; Rahman, Sheik S
2016-01-01
Gas flow in shale is associated with both organic matter (OM) and inorganic matter (IOM), which contain nanopores ranging in size from a few to hundreds of nanometers. In addition to the noncontinuum effect, which leads to an apparent gas permeability higher than the intrinsic permeability, the surface diffusion of adsorbed gas in organic pores can also influence the apparent permeability through its own transport mechanism. In this study, a generalized lattice Boltzmann model (GLBM) is employed for gas flow through a reconstructed shale matrix consisting of OM and IOM. The Expectation-Maximization (EM) algorithm is used to assign the pore size distribution to each component, and the dusty gas model (DGM) and generalized Maxwell-Stefan model (GMS) are adopted to calculate the apparent permeability accounting for multiple transport mechanisms, including viscous flow, Knudsen diffusion and surface diffusion. Effects of pore radius and pressure on the permeability of both IOM and OM, as well as effects of Langmuir ...
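For the noncontinuum part only, a slip correction of the Beskok-Karniadakis type illustrates how apparent permeability departs from the intrinsic value as the Knudsen number grows. The rarefaction coefficient value below is an assumption, and surface diffusion of adsorbed gas would enter as a separate additive term, not shown here:

```python
# Apparent gas permeability with a Beskok-Karniadakis-style slip correction:
# k_app = k_intrinsic * (1 + alpha * Kn) * (1 + 4 Kn / (1 + Kn)),
# where Kn = mean free path / characteristic pore radius.
def apparent_permeability(k_intrinsic, kn, alpha=1.2):
    # alpha (rarefaction coefficient) is treated as constant for simplicity.
    return k_intrinsic * (1.0 + alpha * kn) * (1.0 + 4.0 * kn / (1.0 + kn))
```

At Kn = 0 the correction vanishes and k_app equals the intrinsic permeability; at higher Kn (smaller pores or lower pressure) the ratio grows, which is the qualitative behavior the abstract attributes to nanopores.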
Fogaça, Manoela V.; Gomes, Felipe V.; Silva, Nicole Rodrigues; Pedrazzi, João Francisco; Del Bel, Elaine A.; Hallak, Jaime C.; Crippa, José A.; Zuardi, Antonio W.; Guimarães, Francisco S.
2016-01-01
Cannabidiol (CBD) is a major Cannabis sativa constituent, which does not cause the typical marijuana psychoactivity. However, it has been shown to be active in numerous pharmacological assays, including mouse tests for anxiety, obsessive-compulsive disorder, depression and schizophrenia. In human trials, the doses of CBD needed to achieve effects in anxiety and schizophrenia are high. We now report the synthesis of three fluorinated CBD derivatives, one of which, 4'-F-CBD (HUF-101) (1), is considerably more potent than CBD in behavioral assays in mice predictive of anxiolytic, antidepressant, antipsychotic and anti-compulsive activity. Similar to CBD, the anti-compulsive effects of HUF-101 depend on cannabinoid receptors. PMID:27416026
Numerical weather prediction model tuning via ensemble prediction system
Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.
2011-12-01
This paper discusses a novel approach to tuning the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in parameterization schemes of sub-grid-scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infrastructure and appears very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model's tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding the relative merits of the parameter values back into the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values and leads to improved forecast skill. Second, results with an ensemble prediction system based on an atmospheric general circulation model show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, a global top-end NWP model tuning exercise with preliminary results is presented.
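The two-step EPPES loop, drawing parameter values from a proposal and then feeding their likelihood-based merits back into that proposal, can be sketched on a toy forecast model. Everything below (the forecast model, ensemble size, Gaussian likelihood) is an illustrative stand-in for the real NWP setting:

```python
import numpy as np

rng = np.random.default_rng(1)

TRUE_PARAM = 2.0   # the "unknown" model parameter the scheme should recover

def forecast(param):
    # Toy stand-in for an NWP forecast: linear response plus forecast noise.
    return param * np.arange(1.0, 6.0) + 0.1 * rng.standard_normal(5)

obs = TRUE_PARAM * np.arange(1.0, 6.0)   # verifying observations

# Proposal distribution over the tunable parameter, deliberately mis-specified.
mean, var = 0.5, 1.0
for _ in range(20):
    # (i) one parameter draw per ensemble member
    draws = mean + np.sqrt(var) * rng.standard_normal(30)
    # (ii) likelihood of each member's forecast against the observations
    sq_err = np.array([np.sum((forecast(p) - obs) ** 2) for p in draws])
    w = np.exp(-0.5 * sq_err)
    w /= w.sum()
    # Feed the relative merits back into the proposal (re-estimate moments).
    mean = np.sum(w * draws)
    var = max(np.sum(w * (draws - mean) ** 2), 1e-4)
```

The proposal mean migrates toward the parameter value whose forecasts verify best, with no extra forecast runs beyond the ensemble itself, which is the cost argument made in the abstract.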
Marzolini, Catia; Rajoli, Rajith; Battegay, Manuel; Elzi, Luigia; Back, David; Siccardi, Marco
2017-04-01
Antiretroviral drugs are among the therapeutic agents with the highest potential for drug-drug interactions (DDIs). In the absence of clinical data, DDIs are mainly predicted based on preclinical data and knowledge of the disposition of individual drugs. Predictions can be challenging, especially when antiretroviral drugs induce and inhibit multiple cytochrome P450 (CYP) isoenzymes simultaneously. This study predicted the magnitude of the DDI between efavirenz, an inducer of CYP3A4 and inhibitor of CYP2C8, and dual CYP3A4/CYP2C8 substrates (repaglinide, montelukast, pioglitazone, paclitaxel) using a physiologically based pharmacokinetic (PBPK) modeling approach integrating concurrent effects on CYPs. In vitro data describing the physicochemical properties, absorption, distribution, metabolism, and elimination of efavirenz and CYP3A4/CYP2C8 substrates as well as the CYP-inducing and -inhibitory potential of efavirenz were obtained from published literature. The data were integrated in a PBPK model developed using mathematical descriptions of molecular, physiological, and anatomical processes defining pharmacokinetics. Plasma drug-concentration profiles were simulated at steady state in virtual individuals for each drug given alone or in combination with efavirenz. The simulated pharmacokinetic parameters of drugs given alone were compared against existing clinical data. The effect of efavirenz on CYP was compared with published DDI data. The predictions indicate that the overall effect of efavirenz on dual CYP3A4/CYP2C8 substrates is induction of metabolism. The magnitude of induction tends to be less pronounced for dual CYP3A4/CYP2C8 substrates with predominant CYP2C8 metabolism. PBPK modeling constitutes a useful mechanistic approach for the quantitative prediction of DDI involving simultaneous inducing or inhibitory effects on multiple CYPs as often encountered with antiretroviral drugs.
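The competing induction/inhibition logic can be illustrated with a static net-effect calculation, a much simpler stand-in for the paper's PBPK model; the fractional-metabolism values and fold-changes below are invented:

```python
# Static net-effect sketch for a dual CYP3A4/CYP2C8 substrate co-dosed with a
# perpetrator that induces CYP3A4 and inhibits CYP2C8 (all numbers illustrative).
def auc_ratio(fm_3a4, fm_2c8, induction_fold, inhibition_fold):
    # Fractional clearances scale up with induction and down with inhibition;
    # the remainder of clearance is assumed unaffected.
    cl_change = (fm_3a4 * induction_fold
                 + fm_2c8 / inhibition_fold
                 + (1.0 - fm_3a4 - fm_2c8))
    return 1.0 / cl_change

# Predominantly CYP3A4-cleared substrate: strong net induction (AUC falls).
r1 = auc_ratio(0.7, 0.2, induction_fold=3.0, inhibition_fold=2.0)
# Predominantly CYP2C8-cleared substrate: induction is less pronounced.
r2 = auc_ratio(0.2, 0.7, induction_fold=3.0, inhibition_fold=2.0)
```

Both ratios fall below one (net induction), but the effect is weaker for the CYP2C8-dominant substrate, matching the direction of the study's conclusion even in this crude static form.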
Directory of Open Access Journals (Sweden)
R. Larsson
2015-10-01
We present a comparison of a reference and a fast radiative transfer model using numerical weather prediction profiles for the Zeeman-affected high-altitude Special Sensor Microwave Imager/Sounder channels 19-22. We find that the models agree well for channels 21 and 22 compared to the channels' system noise temperatures (1.9 and 1.3 K, respectively) and the expected profile errors at the affected altitudes (estimated to be around 5 K). For channel 22 there is a 0.5 K average difference between the models, with a standard deviation of 0.24 K for the full set of atmospheric profiles; between the fast model and the sensor measurement, the average difference is 1.2 K, with a 1.4 K standard deviation. For channel 21 there is a 0.9 K average difference between the models, with a standard deviation of 0.56 K; between the fast model and the sensor measurement, the average difference is 1.3 K, with a 2.4 K standard deviation. We consider the relatively small model differences a validation of the fast Zeeman effect scheme for these channels. Both channels 19 and 20 have smaller average differences between the models (below 0.2 K) and smaller standard deviations (below 0.4 K) when both models use a two-dimensional magnetic field profile. However, when the reference model is switched to a full three-dimensional magnetic field profile, the standard deviation relative to the fast model increases to almost 2 K, because viewing-geometry dependencies cause up to ±7 K differences near the equator. The average differences between the two models remain small despite changing magnetic field configurations. We are unable to compare channels 19 and 20 to sensor measurements because of the limited altitude range of the numerical weather prediction profiles. We recommend that numerical weather prediction software using the fast model take the available fast Zeeman scheme into account for data assimilation of the affected sensor channels to better
An effective finite element model for the prediction of hydrogen induced cracking in steel pipelines
Traidia, Abderrazak
2012-11-01
This paper presents a comprehensive finite element model for the numerical simulation of hydrogen-induced cracking (HIC) in steel pipelines exposed to sulphurous compounds, such as hydrogen sulphide (H2S). The model is able to mimic the pressure build-up mechanism related to the recombination of atomic hydrogen into hydrogen gas within the crack cavity. In addition, the strong couplings between non-Fickian hydrogen diffusion, pressure build-up and crack extension are accounted for. In order to enhance the predictive capabilities of the proposed model, problem boundary conditions are based on actual in-field operating parameters, such as pH and partial pressure of H2S. The computational results reported herein show that, during the extension phase, the propagating crack behaves like a trap attracting more hydrogen, and that the hydrostatic stress field at the crack tip speeds up HIC-related crack initiation and growth. In addition, HIC is reduced when the pH increases and the partial pressure of H2S decreases. Furthermore, the relation between the crack growth rate and (i) the initial crack radius and position, (ii) the pipe wall thickness and (iii) the fracture toughness is also evaluated. Numerical results agree well with experimental data retrieved from the literature.
The effect of loudness on the reverberance of music: reverberance prediction using loudness models.
Lee, Doheon; Cabrera, Densil; Martens, William L
2012-02-01
This study examines the auditory attribute that describes the perceived amount of reverberation, known as "reverberance." Listening experiments were performed using two signals commonly heard in auditoria: excerpts of orchestral music and western classical singing. Listeners adjusted the decay rate of room impulse responses prior to convolution with these signals, so as to match the reverberance of each stimulus to that of a reference stimulus. The analysis examines the hypothesis that reverberance is related to the loudness decay rate of the underlying room impulse response. This hypothesis is tested using computational models of time varying or dynamic loudness, from which parameters analogous to conventional reverberation parameters (early decay time and reverberation time) are derived. The results show that listening level significantly affects reverberance, and that the loudness-based parameters outperform related conventional parameters. Results support the proposed relationship between reverberance and the computationally predicted loudness decay function of sound in rooms.
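The decay-rate parameters involved can be sketched for the conventional case: early decay time (EDT) fits the first 10 dB of a decay curve and extrapolates to 60 dB. Applying the same fit to a computed loudness decay function (not shown) yields the loudness-based analogue the study evaluates:

```python
import numpy as np

# EDT sketch: linear fit over the 0 to -10 dB portion of a decay curve,
# extrapolated to the time needed for a 60 dB decay.
def edt(times, decay_db):
    mask = (decay_db <= 0.0) & (decay_db >= -10.0)
    slope, _ = np.polyfit(times[mask], decay_db[mask], 1)   # dB per second
    return -60.0 / slope

# Idealized exponential decay: a constant 30 dB/s slope gives EDT = 2 s.
t = np.linspace(0.0, 2.0, 201)
ideal = -30.0 * t
```

Substituting a dynamic-loudness decay function for the dB curve changes the fitted slope with listening level, which is how level dependence enters the loudness-based predictors.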
Xiao, Sa; Callaway, Ragan M; Newcombe, George; Aschehoug, Erik T
2012-12-01
Understanding the role of competition in the organization of communities is limited in part by the difficulty of extrapolating the outcomes of small-scale experiments to how such outcomes might affect the distribution and abundance of species. We modeled the community-level outcomes of competition, using experimentally derived competitive effects and responses between an exotic invasive plant, Centaurea stoebe, and species from both its native and nonnative ranges and using changes in these effects and responses elicited by experimentally establishing symbioses between C. stoebe and fungal endophytes. Using relative interaction intensities (RIIs) and holding other life-history factors constant, individual-based and spatially explicit models predicted competitive exclusion of all but one North American species but none of the European species, regardless of the endophyte status of C. stoebe. Concomitantly, C. stoebe was eliminated from the models with European natives but was codominant in models with North American natives. Endophyte symbiosis predicted increased dominance of C. stoebe in North American communities but not in European communities. However, when experimental variation was included, some of the model outcomes changed slightly. Our results are consistent with the idea that the effects of competitive intensity and mutualisms measured at small scales have the potential to play important roles in determining the larger-scale outcomes of invasion and that the stabilizing indirect effects of competition may promote species coexistence.
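The experimentally derived interaction measure used as model input is the relative interaction intensity; a minimal sketch with invented biomasses:

```python
# Relative interaction intensity (RII), the metric cited in the abstract:
# RII = (B_w - B_o) / (B_w + B_o), where B_w is target biomass grown with a
# neighbour and B_o biomass grown alone. RII < 0 indicates competition,
# RII > 0 facilitation, and values are bounded in [-1, 1].
def rii(biomass_with, biomass_alone):
    return (biomass_with - biomass_alone) / (biomass_with + biomass_alone)

# Illustrative (made-up) biomasses: a native strongly suppressed by the invader.
suppression = rii(2.0, 8.0)
```

Because RII is bounded and symmetric, effect and response values from different species pairs can be fed into the individual-based model on a common scale.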
Paiement, Jean-François; Grandvalet, Yves; Bengio, Samy
2008-01-01
Modeling long-term dependencies in time series has proved very difficult to achieve with traditional machine learning methods. This problem occurs when considering music data. In this paper, we introduce generative models for melodies. We decompose melodic modeling into two subtasks. We first propose a rhythm model based on the distributions of distances between subsequences. Then, we define a generative model for melodies given chords and rhythms based on modeling sequences of Narmour featur...
Case studies in archaeological predictive modelling
Verhagen, Jacobus Wilhelmus Hermanus Philippus
2007-01-01
In this thesis, a collection of papers is put together dealing with various quantitative aspects of predictive modelling and archaeological prospection. Among the issues covered are the effects of survey bias on the archaeological data used for predictive modelling, and the complexities of testing p
Reallocation in modal aerosol models: impacts on predicting aerosol radiative effects
Directory of Open Access Journals (Sweden)
T. Korhola
2013-08-01
In atmospheric modelling applications, the aerosol particle size distribution is commonly represented by a modal approach, in which particles in different size ranges are described with log-normal modes within predetermined size ranges. Such a method includes numerical reallocation of particles from one mode to another, for example during particle growth, leading to potentially artificial changes in the aerosol size distribution. In this study we analysed how this reallocation affects climatologically relevant parameters: cloud droplet number concentration, the aerosol-cloud interaction (ACI) coefficient and the light extinction coefficient. We compared these parameters between a modal model with and without reallocation routines and a high-resolution sectional model that was considered the reference. We analysed the relative differences of the parameters in experiments designed to cover a wide range of dynamic aerosol processes occurring in the atmosphere. According to our results, limiting the allowed size ranges of the modes, and the subsequent numerical remapping of the distribution by reallocation, leads on average to underestimation of cloud droplet number concentration (up to 100%) and overestimation of light extinction (up to 20%). The analysis of the aerosol first indirect effect is more complicated, as the ACI parameter can be either over- or underestimated by the reallocating model, depending on the conditions. However, for example in the case of atmospheric new-particle-formation events followed by rapid particle growth, reallocation can cause on average around 10% overestimation of the ACI parameter. It is thus shown that reallocation affects the ability of a model to estimate aerosol climate effects accurately, and this should be taken into account when using and developing aerosol models.
Kassemi, Mohammad; Thompson, David
2016-09-01
An analytic Population Balance Equation model is used to assess the efficacy of citrate, pyrophosphate, and augmented fluid intake as dietary countermeasures aimed at reducing the risk of renal stone formation for astronauts. The model uses the measured biochemical profile of the astronauts as input and predicts the steady-state size distribution of the nucleating, growing, and agglomerating renal calculi subject to biochemical changes brought about by administration of these dietary countermeasures. Numerical predictions indicate that an increase in citrate levels beyond its average normal ground-based urinary values is beneficial but only to a limited extent. Unfortunately, results also indicate that any decline in the citrate levels during space travel below its normal urinary values on Earth can easily move the astronaut into the stone-forming risk category. Pyrophosphate is found to be an effective inhibitor since numerical predictions indicate that even at quite small urinary concentrations, it has the potential of shifting the maximum crystal aggregate size to a much smaller and plausibly safer range. Finally, our numerical results predict a decline in urinary volume below 1.5 liters/day can act as a dangerous promoter of renal stone development in microgravity while urinary volume levels of 2.5-3 liters/day can serve as effective space countermeasures.
Chambers, Ute; Jones, Vincent P
2015-12-01
Orchard design and management practices can alter microclimate and, thus, potentially affect insect development. If sufficiently large, these deviations in microclimate can compromise the accuracy of phenology model predictions used in integrated pest management (IPM) programs. Sunburn causes considerable damage in the Pacific Northwest, United States, apple-producing region. Common prevention strategies include the use of fruit surface protectants, evaporative cooling (EC), or both. This study focused on the effect of EC on ambient temperatures and model predictions for four insects (codling moth, Cydia pomonella L.; Lacanobia fruitworm, Lacanobia subjuncta Grote and Robinson; oblique-banded leafroller, Choristoneura rosaceana Harris; and Pandemis leafroller, Pandemis pyrusana Kearfott). Over-tree EC was applied in July and August when daily maximum temperatures were predicted to be ≥30°C between 1200-1700 hours (15/15 min on/off interval) in 2011 and between 1200-1800 hours (15/10 min on/off interval, or continuous on) in 2012. Control plots were sprayed once with kaolin clay in early July. During interval and continuous cooling, over-tree cooling reduced average afternoon temperatures compared with the kaolin treatment by 2.1-3.2°C. Compared with kaolin-treated controls, codling moth and Lacanobia fruitworm egg hatch in EC plots was predicted to occur up to 2 d and 1 d late, respectively. The presence of fourth-instar oblique-banded leafroller and Pandemis leafroller was predicted to occur up to 2 d and 1 d earlier in EC plots, respectively. These differences in model predictions were negligible, suggesting that no adjustments in pest management timing are needed when using EC in high-density apple orchards.
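The sensitivity of phenology predictions to a small temperature offset can be sketched with a plain degree-day accumulator; the base temperature, event threshold, and daily means below are invented, not the validated codling moth model:

```python
# Degree-day sketch: development accumulates max(0, T_mean - T_base) per day,
# and a life-stage event is predicted when an assumed threshold is reached.
def days_to_event(daily_means, base=10.0, threshold=200.0):
    total = 0.0
    for day, t in enumerate(daily_means, start=1):
        total += max(0.0, t - base)
        if total >= threshold:
            return day
    return None   # threshold not reached within the record

warm = [25.0] * 30                     # e.g. kaolin-treated control block
cooled = [t - 1.0 for t in warm]       # ~1 C lower daily mean under cooling
delay = days_to_event(cooled) - days_to_event(warm)
```

A roughly 1 C reduction in daily mean shifts the predicted event by about a day in this sketch, the same order as the 1-2 d differences the study found and judged negligible for management timing.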
Belzung, Catherine
2014-04-01
Over recent decades, encouraging preclinical evidence using rodent models pointed to innovative pharmacological targets to treat major depressive disorder. However, subsequent clinical trials have failed to show convincing results. Two explanations for these rather disappointing results can be put forward, either animal models of psychiatric disorders have failed to predict the clinical effectiveness of treatments or clinical trials have failed to detect the effects of these new drugs. A careful analysis of the literature reveals that both statements are true. Indeed, in some cases, clinical efficacy has been predicted on the basis of inappropriate animal models, although the contrary is also true, as some clinical trials have not targeted the appropriate dose or clinical population. On the one hand, refinement of animal models requires using species that have better homological validity, designing models that rely on experimental manipulations inducing pathological features, and trying to model subtypes of depression. On the other hand, clinical research should consider carefully the results from preclinical studies, in order to study these compounds at the correct dose, in the appropriate psychiatric nosological entity or symptomatology, in relevant subpopulations of patients characterized by specific biomarkers. To achieve these goals, translational research has to strengthen the dialogue between basic and clinical science.
Zephyr - the prediction models
DEFF Research Database (Denmark)
Nielsen, Torben Skov; Madsen, Henrik; Nielsen, Henrik Aalborg
2001-01-01
This paper briefly describes new models and methods for predicting the wind power output from wind farms. The system is being developed in a project which has the research organization Risø and the department of Informatics and Mathematical Modelling (IMM) as the modelling team and all the Dani...
Xu, Yifang; Collins, Leslie M
2004-04-01
The incorporation of low levels of noise into an electrical stimulus has been shown to improve auditory thresholds in some human subjects (Zeng et al., 2000). In this paper, thresholds for noise-modulated pulse-train stimuli are predicted utilizing a stochastic neural-behavioral model of ensemble fiber responses to bi-phasic stimuli. The neural refractory effect is described using a Markov model for a noise-free pulse-train stimulus and a closed-form solution for the steady-state neural response is provided. For noise-modulated pulse-train stimuli, a recursive method using the conditional probability is utilized to track the neural responses to each successive pulse. A neural spike count rule has been presented for both threshold and intensity discrimination under the assumption that auditory perception occurs via integration over a relatively long time period (Bruce et al., 1999). An alternative approach originates from the hypothesis of the multilook model (Viemeister and Wakefield, 1991), which argues that auditory perception is based on several shorter time integrations and may suggest an NofM model for prediction of pulse-train threshold. This motivates analyzing the neural response to each individual pulse within a pulse train, which is considered to be the brief look. A logarithmic rule is hypothesized for pulse-train threshold. Predictions from the multilook model are shown to match trends in psychophysical data for noise-free stimuli that are not always matched by the long-time integration rule. Theoretical predictions indicate that threshold decreases as noise variance increases. Theoretical models of the neural response to pulse-train stimuli not only reduce calculational overhead but also facilitate utilization of signal detection theory and are easily extended to multichannel psychophysical tasks.
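The recursive, pulse-by-pulse tracking of ensemble firing probability with refractoriness can be sketched as a two-state recursion; the sigmoidal firing function and recovery probability below are illustrative assumptions, not the paper's fitted stochastic model:

```python
import numpy as np

# Two-state recursion: after each pulse a fiber is either "responsive" or
# "refractory". Firing probability depends sigmoidally on pulse amplitude;
# refractory fibers recover with a fixed probability before the next pulse.
def pulse_train_response(amplitudes, slope=5.0, theta=1.0, recovery=0.6):
    p_responsive = 1.0
    fires = []
    for a in amplitudes:
        p_fire = p_responsive / (1.0 + np.exp(-slope * (a - theta)))
        fires.append(p_fire)
        # Fibers that fired become refractory; a fraction of the refractory
        # pool recovers before the next pulse.
        remaining = p_responsive - p_fire
        p_responsive = remaining + (1.0 - remaining) * recovery
    return np.array(fires)

resp = pulse_train_response(np.full(10, 1.2))   # constant-amplitude train
```

The per-pulse firing probability declines from the first pulse toward a steady state, the adaptation pattern a closed-form Markov treatment captures for noise-free trains; amplitude jitter on the input models the noise-modulated case.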
Directory of Open Access Journals (Sweden)
Zhu-Hong You
2017-03-01
In recent years, an increasing number of studies have shown that microRNAs (miRNAs) play critical roles in many fundamental and important biological processes. As pathogenetic factors, however, the molecular mechanisms by which miRNAs underlie human complex diseases are still not completely understood. Predicting potential miRNA-disease associations makes important contributions to understanding the pathogenesis of diseases, developing new drugs, and formulating individualized diagnosis and treatment for diverse human complex diseases. Instead of depending only on expensive and time-consuming biological experiments, computational prediction models are effective for predicting potential miRNA-disease associations, prioritizing candidate miRNAs for the investigated diseases, and selecting those miRNAs with higher association probabilities for further experimental validation. In this study, the Path-Based MiRNA-Disease Association (PBMDA) prediction model was proposed by integrating known human miRNA-disease associations, miRNA functional similarity, disease semantic similarity, and Gaussian interaction profile kernel similarity for miRNAs and diseases. This model constructs a heterogeneous graph consisting of three interlinked sub-graphs and adopts a depth-first search algorithm to infer potential miRNA-disease associations. As a result, PBMDA achieved reliable performance in the frameworks of both local and global LOOCV (AUCs of 0.8341 and 0.9169, respectively) and 5-fold cross-validation (average AUC of 0.9172). In case studies of three important human diseases, 88% (Esophageal Neoplasms), 88% (Kidney Neoplasms) and 90% (Colon Neoplasms) of the top-50 predicted miRNAs have been manually confirmed by previous experimental reports in the literature. The comparison between PBMDA and previous models in these case studies also demonstrates that PBMDA could serve as a powerful
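The core of a path-based association score like the one described above is a depth-first enumeration of paths through the heterogeneous graph, with longer paths contributing less. The sketch below is a hypothetical illustration of that idea only: the node names, edge weights, decay factor, and scoring rule are all toy assumptions, not PBMDA's actual parameters.

```python
def dfs_paths(graph, start, goal, max_len=3, path=None):
    """Enumerate all simple paths from start to goal with at most max_len edges."""
    path = [start] if path is None else path
    if start == goal:
        yield path
        return
    if len(path) > max_len:
        return
    for nxt in graph.get(start, {}):
        if nxt not in path:
            yield from dfs_paths(graph, nxt, goal, max_len, path + [nxt])

def association_score(graph, mirna, disease, max_len=3, decay=0.5):
    """Sum path scores: product of edge weights, damped per extra edge."""
    total = 0.0
    for p in dfs_paths(graph, mirna, disease, max_len):
        w = 1.0
        for a, b in zip(p, p[1:]):
            w *= graph[a][b]
        total += w * decay ** (len(p) - 1)   # longer paths contribute less
    return total

# Toy heterogeneous graph: miRNA-miRNA, miRNA-disease and
# disease-disease edges with similarity weights in [0, 1].
g = {
    "miR-21":  {"miR-155": 0.8, "colon": 0.9},
    "miR-155": {"miR-21": 0.8, "kidney": 0.7},
    "colon":   {"kidney": 0.6},
    "kidney":  {},
}
score = association_score(g, "miR-21", "kidney")
```

Here the miR-21/kidney score aggregates two length-2 paths (via miR-155 and via the colon node), showing how indirect similarity evidence accumulates.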
Clausse, B.; Lhémery, A.; Walaszek, H.
2017-01-01
An Electro-Magnetic Acoustic Transducer (EMAT) is a non-contact source used in Ultrasonic Testing (UT) which generates three types of dynamic excitations into a ferromagnetic part: Lorentz force, magnetisation force, and magnetostrictive effect. This latter excitation is a strain resulting from a magnetoelastic interaction between the external magnetic field and the mechanical part. Here, a tensor model is developed to transform this effect into an equivalent body force. It assumes weak magnetoelastic coupling and a dynamic magnetic field much smaller than the static one. This approach rigorously formulates the longitudinal Joule’s magnetostriction, and makes it possible to deal with arbitrary material geometries and EMAT configurations. Transduction processes induced by an EMAT in ferromagnetic media are then modelled as equivalent body forces. But many models developed for efficiently predicting ultrasonic field radiation in solids assume source terms given as surface distributions of stress. To use these models, a mathematical method able to accurately transform these body forces into equivalent surface stresses has been developed. By combining these formalisms, the magnetostrictive strain is transformed into equivalent surface stresses, and the ultrasonic field radiated by magnetostrictive effects induced by an EMAT can be both accurately and efficiently predicted. Numerical examples are given for illustration.
Confidence scores for prediction models
DEFF Research Database (Denmark)
Gerds, Thomas Alexander; van de Wiel, MA
2011-01-01
...modelling strategy is applied to different training sets. For each modelling strategy we estimate a confidence score based on the same repeated bootstraps. A new decomposition of the expected Brier score is obtained, as well as estimates of population-average confidence scores. The latter can be used to distinguish rival prediction models with similar prediction performances. Furthermore, on the subject level, a confidence score may provide useful supplementary information for new patients who want to base a medical decision on predicted risk. The ideas are illustrated and discussed using data from cancer...
Directory of Open Access Journals (Sweden)
Ilker Ercanli
2015-06-01
Diameter at breast height (DBH) is the simplest, most common and most important tree dimension in forest inventory and is closely correlated with wood volume, height and biomass. In this study, a number of linear and nonlinear models predicting diameter at breast height from stump diameter were developed and evaluated for Oriental beech (Fagus orientalis Lipsky) stands located in the forest region of Ayancık, in the northeast of Turkey. A set of 1,501 pairs of diameter at breast height-stump diameter measurements, originating from 70 sample plots of even-aged Oriental beech stands, was used. About 80% of the total data (1,160 trees in 55 sample plots) was used to fit the linear and nonlinear model parameters; the remaining 341 trees in 15 sample plots were randomly reserved for model validation and calibration response. The power model was found to produce the most satisfactory fits, with an adjusted coefficient of determination (R2adj) of 0.990, root mean square error (RMSE) of 1.25, Akaike's information criterion (AIC) of 3820.5, Schwarz's Bayesian information criterion (BIC) of 3837.2, and absolute bias of 1.25. The nonlinear mixed-effects modeling approach for the power model, with R2adj of 0.993, AIC of 3598, BIC of 3610.1, absolute bias of 0.73 and RMSE of 1.04, provided a much better fit and more precise predictions of DBH from stump diameter than the conventional nonlinear fixed-effects model structure. The calibration response that included DBH and stump diameter measurements of the four largest trees in a calibrated sample plot produced the greatest percentage reduction in bias (-5.31%) and RMSE (-6.30%).
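A power model of the kind selected above, DBH = a * stump^b, can be fitted by ordinary least squares on log-transformed data. The sketch below uses synthetic stump/DBH pairs as an illustrative assumption; the fitted coefficients are not those of the Ayancık beech data.

```python
import math

def fit_power(x, y):
    """Fit y = a * x**b by ordinary least squares on log-log data."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = math.exp(my - b * mx)
    return a, b

# Synthetic stump diameter (cm) / DBH (cm) pairs -- illustrative only
stump = [10, 15, 20, 25, 30, 40]
dbh = [8.2, 12.9, 17.8, 22.4, 27.5, 37.1]
a, b = fit_power(stump, dbh)
pred = a * 35 ** b        # predicted DBH for a 35 cm stump
```

In practice a nonlinear fit (and, as in the study, random plot-level effects) would refine these estimates; the log-linear fit is the usual starting value.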
Goring, S. J.; Cogbill, C. V.; Dawson, A.; Hooten, M.; McLachlan, J. S.; Mladenoff, D. J.; Paciorek, C. J.; Ruid, M.; Tipton, J.; Williams, J. W.; Record, S.; Matthes, J. H.; Dietze, M.
2014-12-01
Much of our understanding of the climatic controls on tree species distributions is based on contemporary observational datasets. For example, Forest Inventory and Analysis (FIA) and other spatial datasets are used to build correlative models of climate suitability for plant taxa for use in environmental niche models. More complex dynamic models rely on species interactions, physiological processes, and competition, among other processes, that are also parameterized against contemporary data. However, as much as a quarter of the forested region in the upper Midwestern United States may be considered novel relative to pre-settlement baselines (Goring et al. submitted). Hence, modern surveys or even long-term datasets may represent only a portion of the ecological or climate space taxa might occupy. Using gridded datasets of pre-settlement vegetation for the northeastern United States from Town Proprietor Surveys and the Public Land Survey, we examine the effects of European land-use conversion - logging, agricultural conversion and re-establishment - on climate-vegetation relationships. We show that in regions where land-use change is climatically biased, such as conversion to agriculture along the prairie-forest boundary, impacts on the realized climatic niches of various tree taxa can be significant. Improving predicted distributions of taxa is critical for planning and mitigating the effects of widespread shifts in forest composition resulting from climate change. Using pre-settlement data can improve our understanding of the potential niches occupied by major forest taxa, improving the predictive abilities of environmental niche and mechanistic models.
Modelling, controlling, predicting blackouts
Wang, Chengwei; Baptista, Murilo S
2016-01-01
The electric power system is one of the cornerstones of modern society. One of its most serious malfunctions is the blackout, a catastrophic event that may disrupt a substantial portion of the system, wreaking havoc on human life and causing great economic losses. Thus, understanding the mechanisms leading to blackouts and creating a reliable and resilient power grid has been a major issue, attracting the attention of scientists, engineers and stakeholders. In this paper, we study the blackout problem in power grids by considering a practical phase-oscillator model. This model allows one to simultaneously consider different types of power sources (e.g., traditional AC power plants and renewable power sources connected by DC/AC inverters) and different types of loads (e.g., consumers connected to distribution networks and consumers directly connected to power plants). We propose two new control strategies based on our model, one for traditional power grids, and another one for smart grids. The control strategie...
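Phase-oscillator abstractions of power grids are commonly built on Kuramoto-type dynamics, with generators and loads as oscillators of opposite natural frequency. The sketch below is a generic network Kuramoto integrator, not the paper's specific model: the four-node all-to-all topology, coupling strength, and frequencies are toy assumptions.

```python
import math

def kuramoto_step(theta, omega, K, adj, dt=0.01):
    """One Euler step of a network Kuramoto model: each node's phase is
    pulled toward its neighbours' phases with coupling strength K."""
    new = []
    for i in range(len(theta)):
        coupling = sum(math.sin(theta[j] - theta[i]) for j in adj[i])
        new.append(theta[i] + (omega[i] + K * coupling) * dt)
    return new

def order_parameter(theta):
    """Degree of synchronisation: 1 means a fully phase-locked grid."""
    n = len(theta)
    re = sum(math.cos(t) for t in theta) / n
    im = sum(math.sin(t) for t in theta) / n
    return math.hypot(re, im)

# Toy all-to-all grid: 2 generators (omega > 0), 2 loads (omega < 0)
omega = [1.0, 1.0, -1.0, -1.0]
adj = {i: [j for j in range(4) if j != i] for i in range(4)}
theta = [0.0, 1.0, 2.0, 3.0]
for _ in range(5000):
    theta = kuramoto_step(theta, omega, 2.0, adj)
r = order_parameter(theta)
```

In this framing, a blackout corresponds to loss of phase locking (the order parameter collapsing) when coupling weakens or the frequency mismatch grows; here the coupling is strong enough that the toy grid synchronises.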
Melanoma Risk Prediction Models
Developing statistical models that estimate the probability of developing melanoma over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Model-based fault diagnosis framework for effective predictive maintenance / B.B. Akindele
Akindele, Babatunde Babajide
2010-01-01
Predictive maintenance is a proactive maintenance strategy aimed at preventing the unexpected failure of equipment through condition monitoring of the health and performance of the equipment. Incessant equipment outages resulting in low availability of production facilities are a major issue in the Nigerian manufacturing environment. Improving equipment availability in Nigerian industry by instituting full-featured predictive maintenance has been suggested by many authors. T...
Prediction models in complex terrain
DEFF Research Database (Denmark)
Marti, I.; Nielsen, Torben Skov; Madsen, Henrik
2001-01-01
The objective of the work is to investigate the performance of HIRLAM in complex terrain when used as input to energy production forecasting models, and to develop a statistical model to adapt HIRLAM predictions to the wind farm. The features of the terrain, especially the topography, influence... The predictions are calculated using on-line measurements of power production as well as HIRLAM predictions as input, thus taking advantage of the auto-correlation present in the power production at shorter prediction horizons. Statistical models are used to describe the relationship between observed energy production and HIRLAM predictions. The statistical models belong to the class of conditional parametric models. The models are estimated using local polynomial regression, but the estimation method is here extended to be adaptive in order to allow for slow changes in the system, e.g. caused by the annual variations...
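The local polynomial regression underlying the conditional parametric models above can be sketched in one dimension: fit a weighted linear model around each query point. The wind-speed/power numbers below are toy assumptions, and the operational models are adaptive and multivariate, so this is only a minimal illustration.

```python
import math

def local_linear(xs, ys, x0, bandwidth):
    """Locally weighted linear fit evaluated at x0, Gaussian kernel."""
    w = [math.exp(-0.5 * ((x - x0) / bandwidth) ** 2) for x in xs]
    sw = sum(w)
    mx = sum(wi * x for wi, x in zip(w, xs)) / sw
    my = sum(wi * y for wi, y in zip(w, ys)) / sw
    num = sum(wi * (x - mx) * (y - my) for wi, x, y in zip(w, xs, ys))
    den = sum(wi * (x - mx) ** 2 for wi, x in zip(w, xs))
    return my + (num / den) * (x0 - mx)

# Toy power curve: forecast wind speed (m/s) vs observed power (kW)
speeds = [3, 5, 7, 9, 11, 13]
power = [10, 80, 250, 480, 700, 800]
est = local_linear(speeds, power, 8.0, bandwidth=2.0)
```

Making the weights decay over time as well as over the input space yields the adaptive estimation mentioned in the abstract, which lets the fitted relationship drift slowly with, e.g., seasonal changes.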
Directory of Open Access Journals (Sweden)
Mingjun Wang
Single amino acid variants (SAVs) are the most abundant form of known genetic variation associated with human disease. Successful prediction of the functional impact of SAVs from sequences can thus lead to an improved understanding of the underlying mechanisms by which a SAV may be associated with certain diseases. In this work, we constructed a high-quality structural dataset that contained 679 high-quality protein structures with 2,048 SAVs by collecting human genetic variant data from multiple resources and dividing them into two categories, i.e., disease-associated and neutral variants. We built a two-stage random forest (RF) model, termed FunSAV, to predict the functional effect of SAVs by combining sequence, structure and residue-contact network features with other additional features that were not explored in previous studies. Importantly, a two-step feature selection procedure was proposed to select the most important and informative features that contribute to the prediction of disease association of SAVs. In cross-validation experiments on the benchmark dataset, FunSAV achieved a good prediction performance with an area under the curve (AUC) of 0.882, which is competitive with, and in some cases better than, other existing tools including SIFT, SNAP, PolyPhen2, PANTHER, nsSNPAnalyzer and PhD-SNP. The source code of FunSAV and the datasets can be downloaded at http://sunflower.kuicr.kyoto-u.ac.jp/sjn/FunSAV.
Energy Technology Data Exchange (ETDEWEB)
Romero-Garcia, V [Instituto de Ciencia de Materiales de Madrid, Consejo Superior de Investigaciones Cientificas (Spain); Sanchez-Perez, J V [Centro de Tecnologias Fisicas: Acustica, Materiales y Astrofisica, Universidad Politecnica de Valencia (Spain); Garcia-Raffi, L M, E-mail: virogar1@gmail.com [Instituto Universitario de Matematica Pura y Aplicada, Universidad Politecnica de Valencia (Spain)
2011-07-06
The use of sonic crystals (SCs) as environmental noise barriers has certain advantages from both the acoustical and the constructive points of view with regard to conventional ones. However, the interaction between the SCs and the ground has not been studied yet. In this work we are reporting a semi-analytical model, based on the multiple scattering theory and on the method of images, to study this interaction considering the ground as a finite impedance surface. The results obtained here show that this model could be used to design more effective noise barriers based on SCs because the excess attenuation of the ground could be modelled in order to improve the attenuation properties of the array of scatterers. The results are compared with experimental data and numerical predictions thus finding good agreement between them.
Xu, Yiming; Smith, Scot E; Grunwald, Sabine; Abd-Elrahman, Amr; Wani, Suhas P
2017-09-15
Major end users of Digital Soil Mapping (DSM), such as policy makers and agricultural extension workers, are faced with choosing the appropriate remote sensing data. The objective of this research is to analyze the effects of the spatial resolution of different remote sensing images on soil prediction models in two smallholder farms in Southern India, Kothapally (Telangana State) and Masuti (Karnataka State), and to provide empirical guidelines for choosing appropriate remote sensing images in DSM. Bayesian kriging (BK) was utilized to characterize the spatial pattern of exchangeable potassium (Kex) in the topsoil (0-15 cm) at different spatial resolutions by incorporating spectral indices from Landsat 8 (30 m), RapidEye (5 m), and WorldView-2/GeoEye-1/Pleiades-1A images (2 m). Some spectral indices, such as band reflectances, band ratios, the Crust Index and the Atmospherically Resistant Vegetation Index from multiple images, showed relatively strong correlations with soil Kex in the two study areas. The research also suggested that fine spatial resolution WorldView-2/GeoEye-1/Pleiades-1A-based and RapidEye-based soil prediction models would not necessarily have higher prediction performance than coarse spatial resolution Landsat 8-based soil prediction models. The end users of DSM in smallholder farm settings need to select the appropriate spectral indices and consider factors such as the spatial resolution, band width, spectral resolution, temporal frequency, cost, and processing time of different remote sensing images. Overall, remote sensing-based Digital Soil Mapping has the potential to be extended to smallholder farm settings all over the world and to help smallholder farmers implement sustainable and field-specific soil nutrient management schemes.
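Screening spectral indices by their correlation with the measured soil property, as described above, comes down to a Pearson correlation per index. The index/Kex pairs below are toy assumptions purely to illustrate the screening step, not data from the two study farms.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy pairs: a band-ratio spectral index vs topsoil Kex (mg/kg)
index = [0.21, 0.35, 0.28, 0.44, 0.31, 0.50]
kex = [95, 140, 120, 180, 130, 200]
r = pearson(index, kex)
```

Indices with a strong correlation like this toy one would be retained as covariates for the kriging model; weakly correlated indices would be dropped regardless of image resolution.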
Institute of Scientific and Technical Information of China (English)
WANG Zhan-zhi; XIONG Ying
2013-01-01
A growing interest has been devoted to contra-rotating propellers (CRPs) due to their high propulsive efficiency, torque balance, low fuel consumption, low cavitation, low noise performance and low hull vibration. Compared with the single-screw system, open water performance prediction is more difficult because the forward and aft propellers interact with each other and generate a more complicated flow field around the CRP system. The current work focuses on the open water performance prediction of contra-rotating propellers by RANS and the sliding mesh method, considering the effects of computational time step size and turbulence model. The validation study has been performed on two sets of contra-rotating propellers developed by the David W. Taylor Naval Ship R&D Center. Comparison with the experimental data shows that RANS with the sliding mesh method and the SST k-ω turbulence model gives good precision in the open water performance prediction of contra-rotating propellers, and that a small time step size can improve the accuracy for CRPs with the same blade number on the forward and aft propellers, while a relatively large time step size is a better choice for CRPs with different blade numbers.
Institute of Scientific and Technical Information of China (English)
WANG ZhiHeng; XI Guang
2008-01-01
In this paper, three perfect gas models with constant or variable specific heat and one real gas model based on gas property tables are respectively implemented in the three-dimensional CFD (computational fluid dynamics) analysis of a centrifugal refrigeration compressor stage. The results show that the gas model applied in the CFD code has a significant influence on the predicted stage performance and the flow structures in the stage. Although the thermodynamic operating condition of the evolving fluid in the centrifugal refrigeration compressor deviates significantly from the perfect gas, the perfect gas model with a modified value of the gas constant and variable specific heat offers a good prediction of stage performance. To predict some basic fluid flow parameters and flow structures accurately, real gas effects should be considered, and reasonably accurate thermodynamic properties based on an analytical gas equation of state or numerical interpolation of gas tables should be applied in the CFD code.
Kratochwil, Nicole A; Meille, Christophe; Fowler, Stephen; Klammers, Florian; Ekiciler, Aynur; Molitor, Birgit; Simon, Sandrine; Walter, Isabelle; McGinnis, Claudia; Walther, Johanna; Leonard, Brian; Triyatni, Miriam; Javanbakht, Hassan; Funk, Christoph; Schuler, Franz; Lavé, Thierry; Parrott, Neil J
2017-03-01
Early prediction of human clearance is often challenging, in particular for the growing number of low-clearance compounds. Long-term in vitro models have been developed which enable sophisticated hepatic drug disposition studies and improved clearance predictions. Here, the cell line HepG2, iPSC-derived hepatocytes (iCell®), the hepatic stem cell line HepaRG™, and human hepatocyte co-cultures (HμREL™ and HepatoPac®) were compared to primary hepatocyte suspension cultures with respect to their key metabolic activities. Similar metabolic activities were found for the long-term models HepaRG™, HμREL™, and HepatoPac® and the short-term suspension cultures when averaged across all 11 enzyme markers, although differences were seen in the activities of CYP2D6 and non-CYP enzymes. For iCell® and HepG2, the metabolic activity was more than tenfold lower. The micropatterned HepatoPac® model was further evaluated with respect to clearance prediction. To assess the in vitro parameters, pharmacokinetic modeling was applied. The determination of intrinsic clearance by nonlinear mixed-effects modeling in a long-term model significantly increased the confidence in the parameter estimation and extended the sensitive range towards 3% of liver blood flow, i.e., >10-fold lower as compared to suspension cultures. For in vitro to in vivo extrapolation, the well-stirred model was used. The micropatterned model gave rise to clearance prediction in man within a twofold error for the majority of low-clearance compounds. Further research is needed to understand whether transporter activity and drug metabolism by non-CYP enzymes, such as UGTs, SULTs, AO, and FMO, is comparable to the in vivo situation in these long-term culture models.
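The in vitro to in vivo extrapolation step mentioned above uses the well-stirred liver model, which has a standard closed form. A minimal sketch follows; the liver blood flow value and the example CLint/fu numbers are illustrative assumptions, not the study's data.

```python
def hepatic_clearance(clint, fu, qh=20.7):
    """Well-stirred liver model: hepatic clearance (mL/min/kg) from
    intrinsic clearance CLint (mL/min/kg), fraction unbound fu, and
    liver blood flow Qh (20.7 mL/min/kg is a commonly used human value)."""
    return qh * fu * clint / (qh + fu * clint)

# A hypothetical low-clearance compound: CLint = 2 mL/min/kg, fu = 0.1
cl = hepatic_clearance(2.0, 0.1)
pct_lbf = 100 * cl / 20.7    # clearance as a percentage of liver blood flow
```

For low-clearance compounds like this example (under 1% of liver blood flow), predicted clearance is nearly proportional to fu * CLint, which is why extending the sensitive range of the in vitro CLint estimate matters so much.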
Dong, Yong-Yi; Li, Gang; An, Dong-Sheng; Luo, Wei-Hong
2012-04-01
Dry matter allocation and translocation underlie the formation of the appearance quality of ornamental plants and are strongly affected by water supply. Taking the cut lily cultivar 'Sorbonne' as test material, a culture experiment with different planting dates and water supply levels was conducted in a multi-span greenhouse in Nanjing from March 2009 to January 2010 to quantitatively analyze the seasonal changes of dry matter allocation and translocation in 'Sorbonne' plants and the effects of substrate water potential on the dry matter allocation indices of different organs (flower, stem, leaf, bulb, and root), with the aims of defining the critical substrate water potential for normal growth of the cultivar and establishing a simulation model for predicting dry matter allocation in cut lily plants under the effects of substrate water potential. The model established in this study gave a good prediction of the dry mass of plant organs, with the coefficient of determination and the relative root mean square error between simulated and measured values of the cultivar's flower dry mass, stem dry mass, leaf dry mass, bulb dry mass, and root dry mass being 0.96 and 19.2%, 0.95 and 12.4%, 0.86 and 19.4%, 0.95 and 12.2%, and 0.85 and 31.7%, respectively. The critical water potential for the water management of cut lily was found to be -15 kPa.
Directory of Open Access Journals (Sweden)
Achillopoulou Dimitra
2014-12-01
The study investigates, both experimentally and analytically, the effect of casting deficiencies on the axial yield load of reinforced concrete columns. It includes 6 specimens of square section (150x150x500 mm) of 24.37 MPa nominal concrete strength, with 4 longitudinal steel bars of 8 mm diameter (500 MPa nominal strength) and confinement ratio ωc=0.15. During the casting procedure, the provisions defined by international standards were deliberately not applied strictly, in order to create construction deficiencies. These deficiencies are quantified geometrically, without the use of expensive non-destructive methods requiring expertise, and their effect on the axial load capacity of the concrete columns is calibrated through a novel, simplified prediction model extracted from the experimental and analytical investigation. It is concluded that: (a) even with suitable repair, a load reduction of up to 22% results from the presence of the initial construction damage; (b) the lowest dispersion is noted for the proposed section damage index; (c) extended damage alters the failure mode to brittle, accompanied by buckling of the longitudinal bars; (d) the proposed model gives more than satisfactory predictions of the load capacity of repaired columns.
Hajibozorgi, M; Arjmand, N
2016-04-11
Range of motion (ROM) of the thoracic spine has implications in patient discrimination for diagnostic purposes and in biomechanical models for predictions of spinal loads. Few previous studies have reported quite different thoracic ROMs. Total (T1-T12), lower (T5-T12) and upper (T1-T5) thoracic, lumbar (T12-S1), pelvis, and entire trunk (T1) ROMs were measured using an inertial tracking device as asymptomatic subjects flexed forward from their neutral upright position to full forward flexion. Correlations between body height and the ROMs were conducted. An effect of measurement errors of the trunk flexion (T1) on the model-predicted spinal loads was investigated. Mean of peak voluntary total flexion of trunk (T1) was 118.4 ± 13.9°, of which 20.5 ± 6.5° was generated by flexion of the T1 to T12 (thoracic ROM), and the remaining by flexion of the T12 to S1 (lumbar ROM) (50.2 ± 7.0°) and pelvis (47.8 ± 6.9°). Lower thoracic ROM was significantly larger than upper thoracic ROM (14.8 ± 5.4° versus 5.8 ± 3.1°). There were non-significant weak correlations between body height and the ROMs. Contribution of the pelvis to generate the total trunk flexion increased from ~20% to 40% and that of the lumbar decreased from ~60% to 42% as subjects flexed forward from upright to maximal flexion while that of the thoracic spine remained almost constant (~16% to 20%) during the entire movement. Small uncertainties (±5°) in the measurement of trunk flexion angle resulted in considerable errors (~27%) in the model-predicted spinal loads only in activities involving small trunk flexion.
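The segment-contribution decomposition reported above is simple arithmetic over the measured segment angles. As a worked check, the sketch below recomputes the percentage shares from the mean peak-flexion values quoted in the abstract (the function itself is a generic helper, not the study's software).

```python
def contributions(thoracic, lumbar, pelvis):
    """Percentage share of each segment in total trunk flexion."""
    total = thoracic + lumbar + pelvis
    return tuple(round(100 * a / total, 1) for a in (thoracic, lumbar, pelvis))

# Mean peak-flexion segment angles reported above (degrees)
shares = contributions(20.5, 50.2, 47.8)   # (thoracic %, lumbar %, pelvic %)
```

The result reproduces the abstract's end-of-movement figures: roughly 17% thoracic, 42% lumbar and 40% pelvic contribution at full flexion.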
Spatial scale effects on model parameter estimation and predictive uncertainty in ungauged basins
CSIR Research Space (South Africa)
Hughes, DA
2013-06-01
The most appropriate scale to use for hydrological modelling depends on the structure of the chosen model, the purpose of the results and the resolution of the available data used to quantify parameter values and provide the climatic forcing data...
Effect of Nordic diets on ECOSYS model predictions of ingestion doses
DEFF Research Database (Denmark)
Hansen, Hanne S.; Nielsen, Sven Poul; Andersson, Kasper Grann
2010-01-01
The ECOSYS model is used to estimate ingestion doses in the ARGOS and RODOS decision support systems for nuclear emergency management. It is recommended that nation-specific values be used for several parameters in the model. However, this is generally overlooked when the systems are used in prac...
Risk terrain modeling predicts child maltreatment.
Daley, Dyann; Bachmann, Michael; Bachmann, Brittany A; Pedigo, Christian; Bui, Minh-Thuy; Coffman, Jamye
2016-12-01
As indicated by research on the long-term effects of adverse childhood experiences (ACEs), maltreatment has far-reaching consequences for affected children. Effective prevention measures have been elusive, partly due to difficulty in identifying vulnerable children before they are harmed. This study employs Risk Terrain Modeling (RTM), an analysis of the cumulative effect of environmental factors thought to be conducive for child maltreatment, to create a highly accurate prediction model for future substantiated child maltreatment cases in the City of Fort Worth, Texas. The model is superior to commonly used hotspot predictions and more beneficial in aiding prevention efforts in a number of ways: 1) it identifies the highest risk areas for future instances of child maltreatment with improved precision and accuracy; 2) it aids the prioritization of risk-mitigating efforts by informing about the relative importance of the most significant contributing risk factors; 3) since predictions are modeled as a function of easily obtainable data, practitioners do not have to undergo the difficult process of obtaining official child maltreatment data to apply it; 4) the inclusion of a multitude of environmental risk factors creates a more robust model with higher predictive validity; and, 5) the model does not rely on a retrospective examination of past instances of child maltreatment, but adapts predictions to changing environmental conditions. The present study introduces and examines the predictive power of this new tool to aid prevention efforts seeking to improve the safety, health, and wellbeing of vulnerable children.
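At its core, risk terrain modeling combines weighted environmental risk-factor layers over a common grid into a composite risk score per cell. The sketch below illustrates only that core idea; the factor names, weights, and cells are toy assumptions, not the Fort Worth model's inputs.

```python
def risk_surface(layers, weights):
    """Combine binary risk-factor layers (dicts mapping grid cell -> 0/1)
    into a composite risk score per cell, the basic RTM operation."""
    cells = set().union(*layers.values())
    return {c: sum(w * layers[f].get(c, 0) for f, w in weights.items())
            for c in cells}

# Toy layers: presence/absence of each risk factor per grid cell
layers = {
    "liquor_outlets": {(0, 0): 1, (1, 0): 1},
    "vacant_housing": {(0, 0): 1, (2, 1): 1},
    "poverty":        {(0, 0): 1, (1, 0): 1, (2, 1): 1},
}
weights = {"liquor_outlets": 1.5, "vacant_housing": 2.0, "poverty": 1.0}
risk = risk_surface(layers, weights)
top = max(risk, key=risk.get)   # highest-risk cell
```

In a real application the weights come from fitting each factor's association with past outcomes, and cells above a risk threshold are flagged for preventive attention.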
Energy Technology Data Exchange (ETDEWEB)
Monte, Luigi [ENEA CR Casaccia, Via P. Anguillarese 301 00100 Rome (Italy)], E-mail: luigi.monte@enea.it
2009-06-15
The present work describes a model for predicting the population dynamics of the main components (resources and consumers) of terrestrial ecosystems exposed to ionising radiation. The ecosystem is modelled by the Lotka-Volterra equations with consumer competition. Linear dose-response relationships without threshold are assumed to relate the values of the model parameters to the dose rates. The model accounts for the migration of consumers from areas characterised by different levels of radionuclide contamination. The criteria to select the model parameter values are motivated by accounting for the results of the empirical studies of past decades. Examples of predictions for long-term chronic exposure are reported and discussed.
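The resource-consumer dynamics described above can be sketched as Lotka-Volterra equations whose consumer parameters degrade linearly with dose rate. The functional form, coefficients, and dose-response slopes below are assumed for illustration, not Monte's actual parameterisation; migration between areas is omitted.

```python
def simulate(dose_rate, t_end=200.0, dt=0.01):
    """Euler-integrate toy resource-consumer dynamics in which consumer
    mortality rises and conversion efficiency falls linearly with dose
    rate (a linear, no-threshold assumption)."""
    r, c = 1.0, 0.5                 # initial resource, consumer densities
    growth, graze, conv, death0 = 1.0, 0.8, 0.5, 0.2
    death = death0 * (1 + 0.05 * dose_rate)
    conv_eff = conv * max(0.0, 1 - 0.02 * dose_rate)
    t = 0.0
    while t < t_end:
        dr = r * (growth * (1 - r) - graze * c)
        dc = c * (conv_eff * graze * r - death)
        r += dr * dt
        c += dc * dt
        t += dt
    return r, c

r0, c0 = simulate(0.0)    # unexposed ecosystem
r5, c5 = simulate(5.0)    # chronic exposure
```

Under chronic exposure the toy consumer population settles at a lower equilibrium while the resource is partially released from grazing, the qualitative pattern such models are used to explore.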
Monte, Luigi
2009-06-01
The present work describes a model for predicting the population dynamics of the main components (resources and consumers) of terrestrial ecosystems exposed to ionising radiation. The ecosystem is modelled by the Lotka-Volterra equations with consumer competition. Linear dose-response relationships without threshold are assumed to relate the values of the model parameters to the dose rates. The model accounts for the migration of consumers from areas characterised by different levels of radionuclide contamination. The criteria to select the model parameter values are motivated by accounting for the results of the empirical studies of past decades. Examples of predictions for long-term chronic exposure are reported and discussed.
Predictive models of forest dynamics.
Purves, Drew; Pacala, Stephen
2008-06-13
Dynamic global vegetation models (DGVMs) have shown that forest dynamics could dramatically alter the response of the global climate system to increased atmospheric carbon dioxide over the next century. But there is little agreement between different DGVMs, making forest dynamics one of the greatest sources of uncertainty in predicting future climate. DGVM predictions could be strengthened by integrating the ecological realities of biodiversity and height-structured competition for light, facilitated by recent advances in the mathematics of forest modeling, ecological understanding of diverse forest communities, and the availability of forest inventory data.
Energy Technology Data Exchange (ETDEWEB)
Doughty, Christine; Tsang, Chin-Fu; Doughty, Christine; Uchida, Masahiro
2007-11-07
A complex fracture model for fluid flow and tracer transport was previously developed that incorporates many of the important physical effects of a realistic fracture, including advection through a heterogeneous fracture plane, partitioning of flow into multiple subfractures in the third dimension, and diffusion and sorption into fracture-filling gouge, small altered rock matrix blocks within the fracture zone, and the unaltered semi-infinite rock matrix on both sides of the fracture zone (Tsang and Doughty, 2003). It is common, however, to represent the complex fracture by much simpler models consisting of a single fracture, with a uniform or heterogeneous transmissivity distribution over its plane, bounded on both sides by a homogeneous semi-infinite matrix. Simple-model properties are often inferred from the analysis of short-term (one to a few days) site characterization (SC) tracer-test data. The question addressed in this paper is: how reliable is the temporal upscaling of these simplified models? Are they adequate for long-term calculations that cover thousands of years? In this study, a particle-tracking approach is used to calculate tracer-test breakthrough curves (BTCs) in a complex fracture model, incorporating all the features described above, for both a short-term SC tracer test and a 10,000-year calculation. The results are considered the 'real world'. Next, two simple fracture models, one uniform and the other heterogeneous, are introduced. Properties for these simple models are taken either from laboratory data or found by calibration to the short-term SC tracer-test BTCs obtained with the complex fracture model. The simple models are then used to simulate tracer transport at the long-term time scale. Results show that for the short-term SC tracer test, the BTCs calculated using simple models with laboratory-measured parameters differ significantly from the BTCs obtained with the complex fracture model. By adjusting model properties
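Particle-tracking breakthrough curves of the kind computed above can be sketched, in drastically simplified form, as a 1-D advective-dispersive random walk. The velocity, dispersion coefficient, and domain length below are toy assumptions, and matrix diffusion and sorption (the processes that make the complex model differ from simple ones) are deliberately omitted.

```python
import random

def breakthrough(n_particles=5000, velocity=1.0, disp=0.2,
                 length=10.0, dt=0.1, seed=1):
    """1-D advective-dispersive random-walk particle tracking:
    returns arrival times of particles at the outlet (a toy
    breakthrough-curve calculation)."""
    rng = random.Random(seed)
    times = []
    step_sd = (2.0 * disp * dt) ** 0.5   # random-walk step std deviation
    for _ in range(n_particles):
        x, t = 0.0, 0.0
        while x < length:
            x += velocity * dt + rng.gauss(0.0, step_sd)
            t += dt
        times.append(t)
    return times

times = breakthrough()
median_t = sorted(times)[len(times) // 2]
```

The median arrival time sits near the advective travel time length/velocity; adding matrix diffusion would delay and spread the tail, which is exactly the behaviour the simple calibrated models struggle to upscale over thousands of years.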
Energy Technology Data Exchange (ETDEWEB)
Flentje, M.; Hensley, F.; Gademann, G.; Wannenmacher, M. (Univ. of Heidelberg (Germany)); Menke, M. (German Cancer Research Center, Heidelberg (Germany))
1993-09-01
A patient series was analyzed retrospectively as an example of whole-organ kidney irradiation with an inhomogeneous dose distribution, to test the validity of biophysical models predicting normal tissue tolerance to radiotherapy. From 1969 to 1984, 142 patients with seminoma were irradiated to the paraaortic region using predominantly rotational techniques, which led to variable but partly substantial exposure of the kidneys. Median follow-up was 8.2 (2.1-21) years and actuarial 10-year survival (Kaplan-Meier estimate) was 82.8%. For all patients, 3-dimensional dose distributions were reconstructed and normal tissue complication probabilities for the kidneys were generated from the individual dose-volume histograms. To this end, different published biophysical algorithms were implemented in a 3-dimensional treatment planning system. In seven patients, clinically manifest renal impairment was observed (interval 10-84 months). An excellent agreement between predicted and observed effects was seen for two volume-oriented models, whereas complications were overestimated by an algorithm based on critical-element assumptions. Should these observations be confirmed and extended to different types of organs, corresponding algorithms could easily be integrated into 3-dimensional treatment planning programs and be used for comparing and judging different plans on a more biologically oriented basis.
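A Lyman-Kutcher-Burman (LKB) calculation is one common volume-oriented way to turn a dose-volume histogram into a normal tissue complication probability; a minimal sketch, with illustrative parameter values loosely in the range published for kidney and a wholly hypothetical DVH:

```python
import numpy as np
from math import erf, sqrt

def lkb_ntcp(doses_gy, volumes, td50=28.0, m=0.10, n=0.70):
    """LKB NTCP from a differential DVH.
    doses_gy: bin doses [Gy]; volumes: fractional organ volumes.
    td50, m, n: illustrative whole-kidney tolerance parameters."""
    volumes = np.asarray(volumes, dtype=float)
    volumes = volumes / volumes.sum()
    # Kutcher-Burman DVH reduction to a single effective uniform dose
    d_eff = np.sum(volumes * np.asarray(doses_gy, dtype=float) ** (1.0 / n)) ** n
    t = (d_eff - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))   # standard normal CDF

# Hypothetical two-bin DVH: 30% of the kidney at 30 Gy, 70% at 10 Gy
p = lkb_ntcp([30.0, 10.0], [0.3, 0.7])
print(f"NTCP = {p:.5f}")
```

With a large volume parameter `n`, the model is strongly volume-oriented: sparing most of the organ drives the effective dose, and hence the predicted complication probability, sharply down.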
Karabulut, Esra Mahsereci; Ibrikci, Turgay
2014-05-01
This study develops a logistic model tree-based automation system for accurate recognition of types of vertebral column pathologies. Six biomechanical measures are used for this purpose: pelvic incidence, pelvic tilt, lumbar lordosis angle, sacral slope, pelvic radius and grade of spondylolisthesis. A two-phase classification model is employed: the first step preprocesses the data with the Synthetic Minority Over-sampling Technique (SMOTE), and the second feeds the preprocessed data to a Logistic Model Tree (LMT) classifier. We achieved an accuracy of 89.73% and an Area Under the Curve (AUC) of 0.964 in computer-based automatic detection of the pathology, validated via a 10-fold cross-validation experiment on clinical records of 310 patients. The study also presents a comparative analysis of the vertebral column data using several machine learning algorithms.
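The oversampling half of this two-phase pipeline can be sketched with a naive, numpy-only SMOTE that interpolates between minority samples and their nearest minority neighbours; the data, class sizes, and feature count below are synthetic stand-ins for the clinical records:

```python
import numpy as np

rng = np.random.default_rng(42)

def simple_smote(X_min, n_new, k=5):
    """Naive SMOTE: synthesize n_new minority samples by interpolating
    between a random minority sample and one of its k nearest minority
    neighbours (Euclidean distance)."""
    X_min = np.asarray(X_min, dtype=float)
    out = np.empty((n_new, X_min.shape[1]))
    for i in range(n_new):
        j = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[j], axis=1)
        nbrs = np.argsort(d)[1:k + 1]          # skip the point itself
        nb = X_min[rng.choice(nbrs)]
        out[i] = X_min[j] + rng.random() * (nb - X_min[j])
    return out

# Imbalanced toy data: 50 "normal" vs 8 "pathology" records, 6 features
# (mirroring the six biomechanical measures)
X_maj = rng.normal(0.0, 1.0, (50, 6))
X_min = rng.normal(2.0, 1.0, (8, 6))
X_syn = simple_smote(X_min, n_new=42)          # balance the classes
X = np.vstack([X_maj, X_min, X_syn])
y = np.array([0] * 50 + [1] * (8 + 42))
print(X.shape, np.bincount(y))                  # (100, 6) [50 50]
```

The balanced `(X, y)` would then be fed to the second-phase classifier (an LMT in the study; any tree or logistic learner in this sketch).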
Gordon M. Heisler; Richard H. Grant; David J. Nowak; Wei Gao; Daniel E. Crane; Jeffery T. Walton
2003-01-01
Evaluating the impact of ultraviolet-B radiation (UVB) on urban populations would be enhanced by improved predictions of the UVB radiation at the level of human activity. This paper reports the status of plans for incorporating a UVB prediction module into an existing Urban Forest Effects (UFORE) model. UFORE currently has modules to quantify urban forest structure,...
Schmutz, Joel A.
2009-01-01
Yellow-billed loons (Gavia adamsii) breed in low densities in northern tundra habitats in Alaska, Canada, and Russia. They migrate to coastal marine habitats at mid to high latitudes where they spend their winters. Harvest may occur throughout the annual cycle, but of particular concern are recent reports of harvest from the Bering Strait region, which lies between Alaska and Russia and is an area used by yellow-billed loons during migration. Annual harvest for this region was reported to be 317, 45, and 1,077 during 2004, 2005, and 2007, respectively. I developed a population model to assess the effect of this reported harvest on population size and trend of yellow-billed loons. Because of the uncertainty regarding actual harvest and definition of the breeding population(s) affected by this harvest, I considered 25 different scenarios. Predicted trends across these 25 scenarios ranged from stability to rapid decline (24 percent per year) with halving of the population in 3 years. Through an assessment of literature and unpublished satellite tracking data, I suggest that the most likely of these 25 scenarios is one where the migrant population subjected to harvest in the Bering Strait includes individuals from breeding populations in Alaska (Arctic coastal plain and the Kotzebue region) and eastern Russia, and for which the magnitude of harvest varies among years and emulates the annual variation of reported harvest during 2004-07 (317, 45, and 1,077 yellow-billed loons). This scenario, which assumes no movement of Canadian breeders through the Bering Strait, predicts a 4.6 percent rate of annual population decline, which would halve the populations in 15 years. Although these model outputs reflect the best available information, confidence in these predictions and applicable scenarios would be greatly enhanced by more information on harvest, rates of survival and reproduction, and migratory pathways.
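The flavour of such a scenario-based projection can be sketched with a simple scalar model; every number below is hypothetical and far coarser than the study's 25 scenarios:

```python
import math

# Halving time implied by the reported 4.6 %/yr decline scenario
lam = 1.0 - 0.046                        # annual growth multiplier
halving_time = math.log(0.5) / math.log(lam)
print(f"at 4.6%/yr decline the population halves in ~{halving_time:.1f} years")

# Year-by-year projection repeating the 2004-07 reported harvest pattern
n = 20000.0                              # hypothetical starting population
s_plus_r = 0.98                          # hypothetical survival + recruitment
for harvest in [317, 45, 1077] * 5:      # 15 years of variable harvest
    n = max(0.0, n * s_plus_r - harvest)
print(f"projected population after 15 years: {n:.0f}")
```

Swapping in alternative harvest magnitudes or splitting `n` across breeding populations reproduces the kind of scenario sweep the study describes.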
2006-01-01
METHOD 2.1 Building the model Using existing task analyses of navy sonar systems (Matthews, Greenley and Webb, 1991) and with the assistance of...Critical Operator Tasks. DRDC Toronto Report # CR-2003-131 Matthews, M.L., Greenley , M. and Webb, R.D.G (1991). Presentation of Information from Towed
Spatial Economics Model Predicting Transport Volume
Directory of Open Access Journals (Sweden)
Lu Bo
2016-10-01
It is extremely important to predict logistics requirements in a scientific and rational way. In recent years, however, improvements to prediction methods have not been significant: traditional statistical prediction methods suffer from low precision and poor interpretability, and can neither guarantee the generalization ability of the prediction model theoretically nor explain the models effectively. Therefore, combining theories from spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, the study identifies the leading industries that generate large volumes of cargo and predicts the static logistics generation of Zhuanghe and its hinterland. By integrating the various factors that affect regional logistics requirements, this study establishes a logistics requirements potential model based on spatial economic principles, expanding logistics requirements prediction from purely statistical principles to the new area of spatial and regional economics.
Bucklin, David N.; Watling, James I.; Speroterra, Carolina; Brandt, Laura A.; Mazzotti, Frank J.; Romañach, Stephanie S.
2013-01-01
High-resolution (downscaled) projections of future climate conditions are critical inputs to a wide variety of ecological and socioeconomic models and are created using numerous different approaches. Here, we conduct a sensitivity analysis of spatial predictions from climate envelope models for threatened and endangered vertebrates in the southeastern United States to determine whether two different downscaling approaches (with and without the use of a regional climate model) affect climate envelope model predictions when all other sources of variation are held constant. We found that prediction maps differed spatially between downscaling approaches and that the variation attributable to downscaling technique was comparable to variation between maps generated using different general circulation models (GCMs). Precipitation variables tended to show greater discrepancies between downscaling techniques than temperature variables, and for one GCM, there was evidence that poorly resolved precipitation variables contributed relatively more to model uncertainty than better-resolved variables. Our work suggests that ecological modelers requiring high-resolution climate projections should carefully consider the type of downscaling applied to the climate projections prior to their use in predictive ecological modeling. The uncertainty associated with alternative downscaling methods may rival that of other, more widely appreciated sources of variation, such as the general circulation model or emissions scenario with which future climate projections are created.
Modeling and Prediction Using Stochastic Differential Equations
DEFF Research Database (Denmark)
Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp
2016-01-01
Pharmacokinetic/pharmacodynamic (PK/PD) modeling for a single subject is most often performed using nonlinear models based on deterministic ordinary differential equations (ODEs), and the variation between subjects in a population is described using a population (mixed effects) setup. The ODE setup implies that the variation for a single subject is described by a single parameter (or vector), namely the variance (covariance) of the residuals. Furthermore, the prediction of the states is given as the solution to the ODEs and hence assumed deterministic: the model is taken to predict the future perfectly. A more realistic approach allows for randomness in the model, due to, e.g., the model being too simple or errors in inputs. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs...
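The contrast between a deterministic ODE prediction and an SDE that admits system noise can be sketched with an Euler-Maruyama simulation of a one-compartment elimination model; all parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# One-compartment elimination: dC = -k*C dt (ODE) vs.
# dC = -k*C dt + sigma*C dW (SDE). Parameters are illustrative only.
k, sigma, c0 = 0.3, 0.1, 100.0
t_end, dt = 10.0, 0.01
steps = int(t_end / dt)

def simulate(sde, n_paths=200):
    """Euler (ODE) or Euler-Maruyama (SDE) integration to t_end."""
    c = np.full(n_paths, c0)
    for _ in range(steps):
        drift = -k * c * dt
        noise = (sigma * c * np.sqrt(dt) * rng.standard_normal(n_paths)
                 if sde else 0.0)
        c = c + drift + noise
    return c

ode_final = simulate(sde=False, n_paths=1)[0]   # one deterministic trajectory
sde_final = simulate(sde=True)                  # a distribution of outcomes
print(f"ODE C(10) = {ode_final:.2f}")
print(f"SDE C(10): mean {sde_final.mean():.2f}, sd {sde_final.std():.2f}")
```

The ODE returns a single number, while the SDE returns a spread of plausible futures: exactly the distinction the abstract draws between perfect deterministic prediction and models that acknowledge system noise.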
Energy Technology Data Exchange (ETDEWEB)
Nam, Jin Hyun [School of Mechanical Engineering, Daegu University, Gyungsan (Korea, Republic of)
2017-04-15
The three-phase boundaries (TPBs) in the electrodes of solid oxide fuel cells (SOFCs) have different activity because of the distributed nature of the electrochemical reactions. The electrochemically active thickness (EAT) is a good measure to evaluate the extension of the active reaction zone into the electrode and the effective utilization of TPBs. In this study, an electrochemical reaction/charge conduction problem is formulated based on the Butler–Volmer reaction kinetics and then numerically solved to determine the EATs for the active electrode layers of SOFCs with various microstructural, dimensional, and property parameters. Thus, the EAT data and correlations presented in this study are expected to provide useful information for designing efficient electrodes of SOFCs.
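For small overpotentials the Butler-Volmer kinetics linearize, and the coupled reaction/charge-conduction balance then yields an exponential reaction-penetration length from which an EAT estimate follows; a sketch with illustrative parameter values (not the paper's):

```python
import numpy as np

# Linearized Butler-Volmer estimate of the electrochemically active
# thickness (EAT) of an SOFC electrode. All values are illustrative.
F, R = 96485.0, 8.314       # Faraday constant [C/mol], gas constant [J/mol/K]
T = 1073.0                  # operating temperature [K]
sigma_eff = 1.0             # effective ionic conductivity [S/m]
i0v = 1.0e7                 # volumetric exchange current density [A/m^3]
n_e = 2                     # electrons transferred

# For small overpotential, i_v ~ i0v*(n_e*F/(R*T))*eta, so the charge
# balance sigma_eff*eta'' = i_v decays exponentially with length scale:
lam = np.sqrt(sigma_eff * R * T / (i0v * n_e * F))
print(f"reaction penetration depth ~ {lam * 1e6:.1f} um")

# Taking the EAT as the depth containing ~90% of the transfer current
# (ln(10) ~ 2.3 decay lengths for an exponential profile):
print(f"EAT (90% criterion) ~ {2.3 * lam * 1e6:.1f} um")
```

In the nonlinear (full Butler-Volmer) regime treated in the paper, the problem must be solved numerically, but this linear limit shows how conductivity and exchange current density set the active-zone extension.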
Stroh, Mark; Talaty, Jennifer; Sandhu, Punam; McCrea, Jacqueline; Patnaik, Amita; Tolcher, Anthony; Palcza, John; Orford, Keith; Breidinger, Sheila; Narasimhan, Narayana; Panebianco, Deborah; Lush, Richard; Papadopoulos, Kyriakos P; Wagner, John A; Trucksis, Michele; Agrawal, Nancy
2014-11-01
Ridaforolimus, a unique non-prodrug analog of rapamycin, is a potent inhibitor of mTOR under development for cancer treatment. In vitro data suggest ridaforolimus is a reversible and time-dependent inhibitor of CYP3A. A model-based evaluation suggested an increase in midazolam area under the curve (AUC(0-∞)) of between 1.13- and 1.25-fold in the presence of therapeutic concentrations of ridaforolimus. The pharmacokinetic interaction between multiple oral doses of ridaforolimus and a single oral dose of midazolam was evaluated in an open-label, fixed-sequence study, in which cancer patients received a single oral dose of 2 mg midazolam followed by 5 consecutive daily single oral doses of 40 mg ridaforolimus with a single dose of 2 mg midazolam with the fifth ridaforolimus dose. Changes in midazolam exposure were minimal [geometric mean ratios and 90% confidence intervals: 1.23 (1.07, 1.40) for AUC(0-∞) and 0.92 (0.82, 1.03) for maximum concentrations (C(max)), respectively]. Consistent with model predictions, ridaforolimus had no clinically important effect on midazolam pharmacokinetics and is not anticipated to be a perpetrator of drug-drug interactions (DDIs) when coadministered with CYP3A substrates. Model-based approaches can provide reasonable estimates of DDI liability, potentially obviating the need to conduct dedicated DDI studies especially in challenging populations like cancer patients.
Mohammad Safeeq; Guillaume S. Mauger; Gordon E. Grant; Ivan Arismendi; Alan F. Hamlet; Se-Yeun Lee
2014-01-01
Assessing uncertainties in hydrologic models can improve accuracy in predicting future streamflow. Here, simulated streamflows using the Variable Infiltration Capacity (VIC) model at coarse (1/16°) and fine (1/120°) spatial resolutions were evaluated against observed streamflows from 217 watersheds. In...
Chipps, S.R.; Einfalt, L.M.; Wahl, David H.
2000-01-01
We measured growth of age-0 tiger muskellunge as a function of ration size (25, 50, 75, and 100% C(max)) and water temperature (7.5-25°C) and compared experimental results with those predicted from a bioenergetic model. Discrepancies between actual and predicted values varied appreciably with water temperature and growth rate. On average, model output overestimated winter consumption rates at 10 and 7.5°C by 113 to 328%, respectively, whereas model predictions in summer and autumn (20-25°C) were in better agreement with actual values (4 to 58%). We postulate that variation in model performance was related to seasonal changes in esocid metabolic rate, which were not accounted for in the bioenergetic model. Moreover, accuracy of model output varied with feeding and growth rate of tiger muskellunge. The model performed poorly for fish fed low rations compared with estimates based on fish fed ad libitum rations and was attributed, in part, to the influence of growth rate on the accuracy of bioenergetic predictions. Based on modeling simulations, we found that errors associated with bioenergetic parameters had more influence on model output when growth rate was low, which is consistent with our observations. In addition, reduced conversion efficiency at high ration levels may contribute to variable model performance, thereby implying that waste losses should be modeled as a function of ration size for esocids. Our findings support earlier field tests of the esocid bioenergetic model and indicate that food consumption is generally overestimated by the model, particularly in winter months and for fish exhibiting low feeding and growth rates.
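The consumption term at the heart of such bioenergetic models can be sketched in Wisconsin-model style; the allometric and temperature coefficients below are hypothetical, not esocid-specific values:

```python
# Sketch of a Wisconsin-style bioenergetics consumption term:
# C = Cmax(W) * f(T) * p, with p the proportion of maximum ration.
# All coefficients are hypothetical illustrations.

def cmax(weight_g, a=0.25, b=-0.27):
    """Maximum specific consumption [g/g/day], allometric in body mass."""
    return a * weight_g ** b

def f_temp(t_c, q10=2.0, t_opt=22.0):
    """Simple exponential temperature dependence below the optimum."""
    return q10 ** ((t_c - t_opt) / 10.0)

def consumption(weight_g, p, t_c):
    """Daily consumption [g/day] for a fish of the given mass."""
    return cmax(weight_g) * f_temp(t_c) * p * weight_g

for t in (7.5, 10.0, 25.0):
    print(f"{t:>5.1f} C -> predicted consumption "
          f"{consumption(50.0, 0.5, t):.3f} g/day")
```

The abstract's point is that a single `f_temp` fitted to warm-season data over-predicts consumption at 7.5-10 °C, because winter metabolic depression is not represented in the temperature function.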
Modelling Chemical Reasoning to Predict Reactions
Segler, Marwin H S
2016-01-01
The ability to reason beyond established knowledge allows Organic Chemists to solve synthetic problems and to invent novel transformations. Here, we propose a model which mimics chemical reasoning and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outperforms a rule-based expert system in the reaction prediction task for 180,000 randomly selected binary reactions. We show that our data-driven model generalises even beyond known reaction types, and is thus capable of effectively (re-) discovering novel transformations (even including transition-metal catalysed reactions). Our model enables computers to infer hypotheses about reactivity and reactions by only considering the intrinsic local structure of the graph, and because each single reaction prediction is typically ac...
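Formalising reaction prediction as finding missing links in a graph can be illustrated with a toy common-neighbour link-prediction score, a simple stand-in for the paper's actual method; the molecules and edges below are invented:

```python
# Toy reaction knowledge graph: nodes are molecules, edges are known
# "connected by a published reaction" relations (all hypothetical).
graph = {
    "benzene": {"nitrobenzene", "toluene"},
    "toluene": {"benzene", "benzoic_acid", "benzaldehyde"},
    "nitrobenzene": {"benzene", "aniline"},
    "aniline": {"nitrobenzene", "acetanilide"},
    "benzaldehyde": {"toluene", "benzoic_acid"},
    "benzoic_acid": {"toluene", "benzaldehyde"},
    "acetanilide": {"aniline"},
}

def common_neighbour_score(g, a, b):
    """Score a candidate missing link by the size of the shared
    neighbourhood -- the local graph structure only."""
    return len(g[a] & g[b])

# Rank all currently-unlinked pairs as candidate "missing" reactions
candidates = [(a, b) for a in graph for b in graph
              if a < b and b not in graph[a]]
ranked = sorted(candidates,
                key=lambda ab: common_neighbour_score(graph, *ab),
                reverse=True)
best = ranked[0]
print("top predicted link:", best,
      "score:", common_neighbour_score(graph, *best))
```

Real systems use far richer scoring than shared neighbours, but the sketch captures the core idea: hypotheses about unseen reactions are inferred purely from the intrinsic local structure of the graph.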
Directory of Open Access Journals (Sweden)
Anna-Sofie Stensgaard
2016-03-01
Currently, two broad types of approach for predicting the impact of climate change on vector-borne diseases can be distinguished: (i) empirical-statistical (correlative) approaches that use statistical models of relationships between vector and/or pathogen presence and environmental factors; and (ii) process-based (mechanistic) approaches that seek to simulate detailed biological or epidemiological processes that explicitly describe system behavior. Both have advantages and disadvantages, but it is generally acknowledged that both approaches have value in assessing the response of species in general to climate change. Here, we combine a previously developed dynamic, agent-based model of the temperature-sensitive stages of the Schistosoma mansoni and intermediate host snail lifecycles with a statistical model of snail habitat suitability for eastern Africa. Baseline model output compared to empirical prevalence data suggests that the combined model performs better than a temperature-driven model alone, and highlights the importance of including snail habitat suitability when modeling schistosomiasis risk. There was general agreement among models in predicting changes in risk, with 24-36% of the eastern Africa region predicted to experience an increase in risk of up to 20% as a result of increasing temperatures over the next 50 years. Vice versa, the models predicted a general decrease in risk in 30-37% of the study area. The snail habitat suitability models also suggest that anthropogenically altered habitats play a vital role in the current distribution of the intermediate snail host, and hence we stress the importance of accounting for land-use changes in models of future changes in schistosomiasis risk.
Pompe, P.P.M.; Bilderbeek, J.
2005-01-01
Using large amounts of data from small and medium-sized industrial firms, this study examines two aspects of bankruptcy prediction: the influence of the year prior to failure selected for model building and the effects in a period of economic decline. The results show that especially models
Panayidou, Klea; Gsteiger, Sandro; Egger, Matthias; Kilcher, Gablu; Carreras, Máximo; Efthimiou, Orestis; Debray, Thomas P A; Trelle, Sven; Hummel, Noemi
2016-01-01
The performance of a drug in a clinical trial setting often does not reflect its effect in daily clinical practice. In this third of three reviews, we examine the applications that have been used in the literature to predict real-world effectiveness from randomized controlled trial efficacy data. We
Directory of Open Access Journals (Sweden)
Liliya eEuro
2015-02-01
The accuracy of mitochondrial protein synthesis is dependent on the coordinated action of nuclear-encoded mitochondrial aminoacyl-tRNA synthetases (mtARSs) and the mitochondrial DNA-encoded tRNAs. The recent advances in whole-exome sequencing have revealed the importance of the mtARS proteins for mitochondrial pathophysiology, since nearly every nuclear gene for mtARS (out of 19) is now recognized as a disease gene for mitochondrial disease. Typically, defects in each mtARS have been identified in one tissue-specific disease, most commonly affecting the brain, or in one syndrome. However, mutations in the AARS2 gene for mitochondrial alanyl-tRNA synthetase (mtAlaRS) have been reported both in patients with infantile-onset cardiomyopathy and in patients with childhood- to adulthood-onset leukoencephalopathy. We present here an investigation of the effects of the described mutations on the structure of the synthetase, in an effort to understand the tissue-specific outcomes of the different mutations. The mtAlaRS differs from the other mtARSs because, in addition to the aminoacylation domain, it has a conserved editing domain for deacylating tRNAs that have been mischarged with incorrect amino acids. We show that the cardiomyopathy phenotype results from a single allele, causing an amino acid change p.R592W in the editing domain of AARS2, whereas the leukodystrophy mutations are located in other domains of the synthetase. Nevertheless, our structural analysis predicts that all mutations reduce the aminoacylation activity of the synthetase, because all mtAlaRS domains contribute to tRNA binding for aminoacylation. According to our model, the cardiomyopathy mutations severely compromise aminoacylation, whereas partial activity is retained by the mutation combinations found in the leukodystrophy patients. These predictions provide a hypothesis for the molecular basis of the distinct tissue-specific phenotypic outcomes.
Attard, M; Jean, G; Forestier, L; Cherqui, S; van't Hoff, W; Broyer, M; Antignac, C; Town, M
1999-12-01
Infantile nephropathic cystinosis is a rare, autosomal recessive disease caused by a defect in the transport of cystine across the lysosomal membrane and characterized by early onset of renal proximal tubular dysfunction. Late-onset cystinosis, a rarer form of the disorder, is characterized by onset of symptoms between 12 and 15 years of age. We previously characterized the cystinosis gene, CTNS, and identified pathogenic mutations in patients with infantile nephropathic cystinosis, including a common, approximately 65 kb deletion which encompasses exons 1-10. Structure predictions suggested that the gene product, cystinosin, is a novel integral lysosomal membrane protein. We now examine the predicted effect of mutations on this model of cystinosin. In this study, we screened patients with infantile nephropathic cystinosis, those with late-onset cystinosis and patients whose phenotype does not fit the classical definitions. We found 23 different mutations in CTNS; 14 are novel mutations. Out of 25 patients with infantile nephropathic cystinosis, 12 have two severely truncating mutations, which is consistent with a loss of functional protein, and 13 have missense or in-frame deletions, which would result in disruption of transmembrane domains and loss of protein function. Mutations found in two late-onset patients affect functionally unimportant regions of cystinosin, which accounts for their milder phenotype. For three patients, the age of onset of cystinosis was <7 years but the course of the disease was milder than the infantile nephropathic form. This suggests that the missense mutations found in these individuals allow production of functional protein and may also indicate regions of cystinosin which are not functionally important.
Directory of Open Access Journals (Sweden)
T. Sigi eHale
2015-05-01
Background: We previously hypothesized that poor task-directed sensory information processing should be indexed by increased weighting of right-hemisphere (RH) biased attention and visuo-perceptual brain functions during task operations, and have demonstrated this phenotype in ADHD across multiple studies, using multiple methodologies. However, in our recent Distributed Effects Model of ADHD, we surmised that this phenotype is not ADHD-specific, but rather more broadly reflective of any circumstance that disrupts the induction and maintenance of an emergent task-directed neural architecture. Under this view, increased weighting of RH-biased attention and visuo-perceptual brain functions is expected to generally index neurocognitive sets that are not optimized for task-directed thought and action, and, when durably expressed, liability for ADHD. Method: The current study tested this view by examining whether previously identified rightward parietal EEG asymmetry in ADHD was associated with common ADHD characteristics and comorbidities (i.e., ADHD risk factors). Results: Barring one exception (non-right-handedness), we found that it was. Rightward parietal asymmetry was associated with carrying the DRD4-7R risk allele, being male, having mood disorder, and having anxiety disorder. However, differences in the specific expression of rightward parietal asymmetry were observed, which are discussed in relation to possible unique mechanisms underlying ADHD liability in different ADHD RFs. Conclusion: Rightward parietal asymmetry appears to be a durable feature of ADHD liability, as predicted by the Distributed Effects Perspective Model of ADHD. Moreover, variability in the expression of this phenotype may shed light on different sources of ADHD liability.
Kyongho Son; Christina Tague; Carolyn Hunsaker
2016-01-01
The effect of fine-scale topographic variability on model estimates of ecohydrologic responses to climate variability in California's Sierra Nevada watersheds has not been adequately quantified and may be important for supporting reliable climate-impact assessments. This study tested the effect of digital elevation model (DEM) resolution on model accuracy and estimates...
Galici, Ruggero; Boggs, Jamin D; Miller, Kirsten L; Bonaventure, Pascal; Atack, John R
2008-03-01
5-HT7 receptors have been linked to a number of psychiatric disorders including anxiety and depression. The localization of 5-HT7 receptors in the thalamus, a key sensory processing center, and the high affinity of many atypical antipsychotic compounds for these receptors have led to the speculation of the utility of 5-HT7 antagonists in schizophrenia. The goal of these studies was to examine the effects of pharmacologic blockade and genetic ablation of 5-HT7 receptors in animal models predictive of antipsychotic-like activity. We evaluated the effects of SB-269970, a selective 5-HT7 receptor antagonist, on amphetamine and ketamine-induced hyperactivity and prepulse inhibition (PPI) deficits. In addition, sensorimotor gating function and locomotor activity were evaluated in 5-HT7 knockout mice. Locomotor activity was measured for up to 180 min using an automated infrared photobeam system, and PPI was evaluated in startle chambers. SB-269970 (3, 10 and 30 mg/kg, intraperitoneally) significantly blocked amphetamine [3 mg/kg, subcutaneously (s.c.)] and ketamine (30 mg/kg, s.c.)-induced hyperactivity and reversed amphetamine (10 mg/kg, s.c.)-induced but not ketamine (30 mg/kg, s.c.)-induced PPI deficits, without changing spontaneous locomotor activity and startle amplitude. The largest dose of SB-269970 did not block the effects of amphetamine in 5-HT7 knockout mice. Collectively, these results indicate that blockade of 5-HT7 receptors partially modulates glutamatergic and dopaminergic function and could be clinically useful for the treatment of positive symptoms of schizophrenia.
Directory of Open Access Journals (Sweden)
Kyung Ah Koo
2015-04-01
Alpine, subalpine and boreal tree species, of low genetic diversity and adapted to low optimal temperatures, are vulnerable to the warming effects of global climate change. The accurate prediction of these species’ distributions in response to climate change is critical for effective planning and management. The goal of this research is to predict climate change effects on the distribution of red spruce (Picea rubens Sarg.) in the Great Smoky Mountains National Park (GSMNP), eastern USA. Climate change is, however, conflated with other environmental factors, making its assessment a complex systems problem in which indirect effects are significant in causality. Predictions were made by linking a tree growth simulation model, the red spruce growth model (ARIM.SIM), to a GIS spatial model, the red spruce habitat model (ARIM.HAB). ARIM.SIM quantifies direct and indirect interactions between red spruce and its growth factors, revealing the latter to be dominant. ARIM.HAB spatially distributes the ARIM.SIM simulations under the assumption that greater growth reflects higher probabilities of presence. ARIM.HAB predicts the future habitat suitability of red spruce based on growth predictions of ARIM.SIM under climate change and three air pollution scenarios: 10% increase, no change and 10% decrease. Results show that suitable habitats shrink most when air pollution increases. Higher temperatures cause losses of most low-elevation habitats. Increased precipitation and air pollution produce acid rain, which causes loss of both low- and high-elevation habitats. The general prediction is that climate change will cause contraction of red spruce habitats at both lower and higher elevations in GSMNP, and the effects will be exacerbated by increased air pollution. These predictions provide valuable information for understanding potential impacts of global climate change on the spatiotemporal distribution of red spruce habitats in GSMNP.
Comparing model predictions for ecosystem-based management
DEFF Research Database (Denmark)
Jacobsen, Nis Sand; Essington, Timothy E.; Andersen, Ken Haste
2016-01-01
Ecosystem modeling is becoming an integral part of fisheries management, but there is a need to identify differences between predictions derived from models employed for scientific and management purposes. Here, we compared two models: a biomass-based food-web model (Ecopath with Ecosim (EwE)) and a size-structured fish community model. The models were compared with respect to predicted ecological consequences of fishing to identify commonalities and differences in model predictions for the California Current fish community. We compared the models regarding direct and indirect responses to fishing on one or more species. The size-based model predicted a higher fishing mortality needed to reach maximum sustainable yield than EwE for most species. The size-based model also predicted stronger top-down effects of predator removals than EwE. In contrast, EwE predicted stronger bottom-up effects...
Directory of Open Access Journals (Sweden)
Pathomwat Wongrattanakamon
2016-12-01
The data were obtained by exploring the modulatory activities of bioflavonoids on P-glycoprotein function by ligand-based approaches. Multivariate linear QSAR models for predicting the induced/inhibitory activities of the flavonoids were created. Molecular descriptors were initially used as independent variables, and the dependent variable was expressed as pFAR. The variables were then used in MLR analysis by stepwise regression calculation to build the linear QSAR data. The entire dataset, consisting of 23 bioflavonoids, was used as a training set. For the obtained MLR QSAR model, R=0.963, R2=0.927, Radj2=0.900, SEE=0.197, F=33.849 and q2=0.927 were achieved. The true predictability of the QSAR model was justified by evaluation with the external dataset (Table 4). The pFARs of representative flavonoids were predicted by MLR QSAR modelling. The data showed that internal and external validations may generate the same conclusion.
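The underlying MLR fit and its R2/adjusted-R2 statistics can be sketched as follows; the descriptors and pFAR values are synthetic stand-ins, not the study's data, and stepwise variable selection is omitted:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic QSAR-style dataset: 23 compounds, 4 molecular descriptors,
# activities (pFAR) generated from hypothetical coefficients plus noise.
n_compounds, n_descriptors = 23, 4
X = rng.normal(size=(n_compounds, n_descriptors))
true_coef = np.array([1.2, -0.8, 0.5, 0.3])
y = X @ true_coef + 2.0 + rng.normal(scale=0.2, size=n_compounds)

# Ordinary least squares: prepend an intercept column and solve
A = np.column_stack([np.ones(n_compounds), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef

# Goodness-of-fit statistics of the multivariate linear model
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
r2_adj = 1.0 - (1.0 - r2) * (n_compounds - 1) / (n_compounds - n_descriptors - 1)
print(f"R2 = {r2:.3f}, adjusted R2 = {r2_adj:.3f}")
```

External validation, as in the abstract, would apply `coef` to held-out compounds and compare predicted against observed pFAR.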
Modelling language evolution: Examples and predictions.
Gong, Tao; Shuai, Lan; Zhang, Menghan
2014-06-01
We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolutions of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines.
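As a concrete instance of an equation-based language-competition dynamic, the Abrams-Strogatz form (used here purely as an illustration; the model surveyed above may differ in detail) can be integrated directly:

```python
# Abrams-Strogatz language competition: x is the fraction of speakers of
# language A, s its relative status, a an empirical exponent (~1.3).
#   dx/dt = (1 - x) * s * x**a - x * (1 - s) * (1 - x)**a
# Parameter values below are illustrative.

def simulate(x0=0.4, s=0.6, a=1.31, dt=0.01, steps=5000):
    """Forward-Euler integration of the competition equation."""
    x = x0
    for _ in range(steps):
        dx = (1 - x) * s * x**a - x * (1 - s) * (1 - x)**a
        x = min(1.0, max(0.0, x + dt * dx))
    return x

x_final = simulate()
print(f"speaker fraction of language A after t = 50: {x_final:.3f}")
```

With status s > 0.5, the dynamics drive the higher-status language toward fixation, illustrating the kind of prediction (here, about language death) that equation-based models of competition deliver.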
PREDICT : model for prediction of survival in localized prostate cancer
Kerkmeijer, Linda G W; Monninkhof, Evelyn M.; van Oort, Inge M.; van der Poel, Henk G.; de Meerleer, Gert; van Vulpen, Marco
2016-01-01
Purpose: Current models for prediction of prostate cancer-specific survival do not incorporate all present-day interventions. In the present study, a pre-treatment prediction model for patients with localized prostate cancer was developed. Methods: From 1989 to 2008, 3383 patients were treated with I
Predictive Modeling of Cardiac Ischemia
Anderson, Gary T.
1996-01-01
The goal of the Contextual Alarms Management System (CALMS) project is to develop sophisticated models to predict the onset of clinical cardiac ischemia before it occurs. The system will continuously monitor cardiac patients and set off an alarm when they appear about to suffer an ischemic episode. The models take as inputs information from patient history and combine it with continuously updated information extracted from blood pressure, oxygen saturation and ECG lines. Expert system, statistical, neural network and rough set methodologies are then used to forecast the onset of clinical ischemia before it transpires, thus allowing early intervention aimed at preventing morbid complications from occurring. The models will differ from previous attempts by including combinations of continuous and discrete inputs. A commercial medical instrumentation and software company has invested funds in the project with a goal of commercialization of the technology. The end product will be a system that analyzes physiologic parameters and produces an alarm when myocardial ischemia is present. If proven feasible, a CALMS-based system will be added to existing heart monitoring hardware.
ENSO Prediction using Vector Autoregressive Models
Chapman, D. R.; Cane, M. A.; Henderson, N.; Lee, D.; Chen, C.
2013-12-01
A recent comparison (Barnston et al., 2012, BAMS) shows the ENSO forecasting skill of dynamical models now exceeds that of statistical models, but the best statistical models are comparable to all but the very best dynamical models. In this comparison the leading statistical model is the one based on the Empirical Model Reduction (EMR) method. Here we report on experiments with multilevel Vector Autoregressive models using only sea surface temperatures (SSTs) as predictors. VAR(L) models generalize Linear Inverse Models (LIM), which are a VAR(1) method, as well as multilevel univariate autoregressive models. Optimal forecast skill is achieved using 12 to 14 months of prior state information (i.e., 12-14 levels), which allows SSTs alone to capture the effects of other variables such as heat content as well as seasonality. The use of multiple levels allows the model, advancing one month at a time, to perform at least as well for a 6 month forecast as a model constructed to explicitly forecast 6 months ahead. We infer that the multilevel model has fully captured the linear dynamics (cf. Penland and Magorian, 1993, J. Climate). Finally, while VAR(L) is equivalent to L-level EMR, we show in a 150 year cross-validated assessment that we can increase forecast skill by improving on the EMR initialization procedure. The greatest benefit of this change is in allowing the prediction to make effective use of information over many more months.
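The VAR(L) scheme described above can be sketched as a least-squares fit over stacked lags, iterated one step at a time for forecasting. This is a minimal illustration, not the authors' code; the function names and the toy two-variable series are hypothetical (in the abstract's terms, `L` would be 12-14 and the columns of `X` would be gridded SST anomalies).

```python
import numpy as np

def fit_var(X, L):
    """Least-squares fit of a VAR(L) model x_t = sum_l A_l x_{t-l} + e_t.

    X: (T, d) array of observations. Returns the (d, d*L) coefficient
    matrix mapping the stacked lag vector to the one-step prediction."""
    T, d = X.shape
    # Regressor matrix of stacked lags [x_{t-1}, ..., x_{t-L}] for t = L..T-1
    Z = np.hstack([X[L - l - 1:T - l - 1] for l in range(L)])
    Y = X[L:]
    B, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    return B.T

def forecast(A, history, steps):
    """Advance the fitted model one step at a time, as in the abstract."""
    d = history.shape[0] and history.shape[1]
    L = A.shape[1] // history.shape[1]
    buf = list(history[-L:])                 # oldest ... newest
    for _ in range(steps):
        z = np.concatenate(buf[::-1])        # newest lag first, matching Z
        buf.append(A @ z)
        buf.pop(0)
    return buf[-1]
```

On a simulated two-variable series the coefficient matrix is recovered to within sampling error, which is the sense in which the model "captures the linear dynamics".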
Are we ready to predict late effects?
DEFF Research Database (Denmark)
Salz, Talya; Baxi, Shrujal S; Raghunathan, Nirupa;
2015-01-01
BACKGROUND: After completing treatment for cancer, survivors may experience late effects: consequences of treatment that persist or arise after a latent period. PURPOSE: To identify and describe all models that predict the risk of late effects and could be used in clinical practice. DATA SOURCES:...
Predicting and Modelling of Survival Data when Cox's Regression Model does not hold
DEFF Research Database (Denmark)
Scheike, Thomas H.; Zhang, Mei-Jie
2002-01-01
Keywords: Aalen model; additive risk model; counting processes; competing risk; Cox regression; flexible modeling; goodness of fit; prediction of survival; survival analysis; time-varying effects
Abazari, Alireza; Thompson, Richard B; Elliott, Janet A W; McGann, Locksley E
2012-03-21
Knowledge of the spatial and temporal distribution of cryoprotective agent (CPA) is necessary for the cryopreservation of articular cartilage. Cartilage dehydration and shrinkage, as well as the change in extracellular osmolality, may have a significant impact on chondrocyte survival during and after CPA loading, freezing, and thawing, and during CPA unloading. In the literature, Fick's law of diffusion is commonly used to predict the spatial distribution and overall concentration of the CPA in the cartilage matrix, and the shrinkage and stress-strain in the cartilage matrix during CPA loading are neglected. In this study, we used a previously described biomechanical model to predict the spatial and temporal distributions of CPA during loading. We measured the intrinsic inhomogeneities in initial water and fixed charge densities in the cartilage using magnetic resonance imaging and introduced them into the model as initial conditions. We then compared the prediction results with the results obtained using uniform initial conditions. The simulation results in this study demonstrate the presence of a significant mechanical strain in the matrix of the cartilage, within all layers, during CPA loading. The osmotic response of the chondrocytes to the cartilage dehydration during CPA loading was also simulated. The results reveal that a transient shrinking occurs to different levels, and the chondrocytes experience a significant decrease in volume, particularly in the middle and deep zones of articular cartilage, during CPA loading.
Hedrick, Mark S; Moon, Il Joon; Woo, Jihwan; Won, Jong Ho
2016-01-01
Previous studies have shown that concurrent vowel identification improves with increasing temporal onset asynchrony of the vowels, even if the vowels have the same fundamental frequency. The current study investigated the possible underlying neural processing involved in concurrent vowel perception. The individual vowel stimuli from a previously published study were used as inputs for a phenomenological auditory-nerve (AN) model. Spectrotemporal representations of simulated neural excitation patterns were constructed (i.e., neurograms) and then matched quantitatively with the neurograms of the single vowels using the Neurogram Similarity Index Measure (NSIM). A novel computational decision model was used to predict concurrent vowel identification. To facilitate optimum matches between the model predictions and the behavioral human data, internal noise was added at either neurogram generation or neurogram matching using the NSIM procedure. The best fit to the behavioral data was achieved with a signal-to-noise ratio (SNR) of 8 dB for internal noise added at the neurogram but with a much smaller amount of internal noise (SNR of 60 dB) for internal noise added at the level of the NSIM computations. The results suggest that accurate modeling of concurrent vowel data from listeners with normal hearing may partly depend on internal noise and where internal noise is hypothesized to occur during the concurrent vowel identification process.
Genetic models of homosexuality: generating testable predictions.
Gavrilets, Sergey; Rice, William R
2006-12-22
Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality including: (i) chromosomal location, (ii) dominance among segregating alleles and (iii) effect sizes that distinguish between the two major models for their polymorphism: the overdominance and sexual antagonism models. We conclude that the measurement of the genetic characteristics of quantitative trait loci (QTLs) found in genomic screens for genes influencing homosexuality can be highly informative in resolving the form of natural selection maintaining their polymorphism.
Kaplanis, S.; Kaplani, E.
2012-01-01
This paper outlines and formulates a compact and effective simulation model, which predicts the performance of single- and double-glazed flat-plate collectors. The model uses an elaborated iterative simulation algorithm and provides the collector top losses, the glass cover temperatures, the collector absorber temperature, the collector fluid outlet temperature, the system efficiency, and the thermal gain for any operational and environmental conditions. It is a numerical approach based on simu...
An Anisotropic Hardening Model for Springback Prediction
Zeng, Danielle; Xia, Z. Cedric
2005-08-01
As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closures panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture realistic Bauschinger effect at reverse loading, such as when material passes through die radii or drawbead during sheet metal forming process. This model accounts for material anisotropic yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent Bauschinger effect. The effectiveness of the model is demonstrated by comparison of numerical and experimental springback results for a DP600 straight U-channel test.
Return Predictability, Model Uncertainty, and Robust Investment
DEFF Research Database (Denmark)
Lukas, Manuel
Stock return predictability is subject to great uncertainty. In this paper we use the model confidence set approach to quantify uncertainty about expected utility from investment, accounting for potential return predictability. For monthly US data and six representative return prediction models, we...
A predictive model for dimensional errors in fused deposition modeling
DEFF Research Database (Denmark)
Stolfi, A.
2015-01-01
This work concerns the effect of deposition angle (a) and layer thickness (L) on the dimensional performance of FDM parts using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole a range from 0° to 177° at 3° steps and two...
Pieters, Sigrid; Saeys, Wouter; Van den Kerkhof, Tom; Goodarzi, Mohammad; Hellings, Mario; De Beer, Thomas; Heyden, Yvan Vander
2013-01-25
Owing to spectral variations from other sources than the component of interest, large investments in the NIR model development may be required to obtain satisfactory and robust prediction performance. To make the NIR model development for routine active pharmaceutical ingredient (API) prediction in tablets more cost-effective, alternative modelling strategies were proposed. They used a massive amount of prior spectral information on intra- and inter-batch variation and the pure component spectra to define a clutter, i.e., the detrimental spectral information. This was subsequently used for artificial data augmentation and/or orthogonal projections. The model performance improved statistically significantly, with a 34-40% reduction in RMSEP while needing fewer model latent variables, by applying the following procedure before PLS regression: (1) augmentation of the calibration spectra with the spectral shapes from the clutter, and (2) net analyte pre-processing (NAP). The improved prediction performance was not compromised when reducing the variability in the calibration set, making exhaustive calibration unnecessary. Strong water content variations in the tablets caused frequency shifts of the API absorption signals that could not be included in the clutter. Updating the model for this kind of variation demonstrated that the completeness of the clutter is critical for the performance of these models and that the model will only be more robust for spectral variation that is not co-linear with the one from the property of interest.
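The net analyte pre-processing step described above amounts to projecting the calibration spectra onto the orthogonal complement of the clutter subspace before PLS regression. A minimal sketch follows, assuming the clutter is summarised by its leading principal directions; `nap_projection` and its arguments are hypothetical names, not the authors' implementation.

```python
import numpy as np

def nap_projection(X, clutter, rank):
    """Net analyte pre-processing: remove the leading `rank` principal
    directions of the clutter (detrimental spectral variation) from X.

    X:       (n_samples, n_wavelengths) spectra to be corrected.
    clutter: (n_clutter, n_wavelengths) prior spectra of unwanted variation."""
    # Principal directions of the mean-centred clutter via SVD
    _, _, Vt = np.linalg.svd(clutter - clutter.mean(axis=0), full_matrices=False)
    V = Vt[:rank].T                        # (n_wavelengths, rank)
    P = np.eye(X.shape[1]) - V @ V.T       # orthogonal projector
    return X @ P
```

After the projection, any spectral variation lying in the clutter subspace is removed, so the subsequent regression needs fewer latent variables, which is consistent with the reduction in model complexity reported above.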
Predictive Model Assessment for Count Data
2007-09-05
...critique count regression models for patent data, and assess the predictive performance of Bayesian age-period-cohort models for larynx cancer counts in Germany. We consider a recent suggestion by Baker and... [Figure 5: boxplots for various scores for the patent data count regressions. Table 1: four predictive models for larynx cancer counts in Germany, 1998-2002.]
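Predictive performance for count models of this kind is typically assessed with proper scoring rules. The record does not specify which scores were used, so the sketch below shows two representative examples for a Poisson predictive distribution, the logarithmic score and the discrete ranked probability score; lower values are better in both cases.

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def log_score(y, lam):
    """Negative log predictive probability of the observed count y."""
    return -math.log(poisson_pmf(y, lam))

def ranked_probability_score(y, lam, cutoff=100):
    """Discrete RPS: sum over counts k of (CDF_pred(k) - 1{y <= k})^2."""
    cdf, rps = 0.0, 0.0
    for k in range(cutoff + 1):
        cdf += poisson_pmf(k, lam)
        rps += (cdf - (1.0 if y <= k else 0.0)) ** 2
    return rps
```

Averaging such scores over held-out observations gives a single number per model, which is what boxplot comparisons of the kind referenced above summarise.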
Foundation Settlement Prediction Based on a Novel NGM Model
Directory of Open Access Journals (Sweden)
Peng-Yu Chen
2014-01-01
Prediction of foundation or subgrade settlement is very important during engineering construction. Because many settlement-time sequences follow a nonhomogeneous index trend, a novel grey forecasting model called the NGM(1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM(1,1,k,c) model has the property of white exponential law coincidence and can predict a pure nonhomogeneous index sequence precisely. We used two case studies to verify the predictive effect of the NGM(1,1,k,c) model for settlement prediction. The results show that this model can achieve excellent prediction accuracy; thus, the model is quite suitable for simulation and prediction of approximately nonhomogeneous index sequences and has excellent application value in settlement prediction.
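The NGM(1,1,k,c) model extends the classic GM(1,1) grey model. As background, a minimal GM(1,1) sketch is shown below; it is not the NGM variant itself (the optimized whitenization equation is not reproduced here), and the function and variable names are illustrative.

```python
import numpy as np

def gm11_fit_predict(x0, steps):
    """Classic GM(1,1) grey model, the base of the NGM(1,1,k,c) variant.

    x0: positive 1-D series. Returns the fitted series extended by `steps`."""
    x0 = np.asarray(x0, float)
    n = len(x0)
    x1 = np.cumsum(x0)                        # accumulated generating sequence
    z1 = 0.5 * (x1[1:] + x1[:-1])             # background values
    # Least squares for the grey equation x0[k] = -a*z1[k] + b
    B = np.column_stack([-z1, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    # Time response of the whitenization equation, then difference back
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.diff(np.concatenate([[0.0], x1_hat]))
```

On a homogeneous exponential (index) sequence the fit is nearly exact, which is precisely the limitation the nonhomogeneous NGM variant described above is designed to overcome.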
Institute of Scientific and Technical Information of China (English)
焦重庆; 李月月
2015-01-01
According to the reciprocity principle, we propose an efficient model to compute the shielding effectiveness of a rectangular cavity with apertures covered by a conductive sheet against an external incident electromagnetic wave. This problem is converted into another problem of solving the electromagnetic field leakage from the cavity when the cavity is excited by an electric dipole placed within it. By the combination of the unperturbed cavity field and the transfer impedance of the sheet, the tangential electric field distribution on the outer surface of the sheet is obtained. Then, the field distribution is regarded as an equivalent surface magnetic current source responsible for the leakage field. The validity of this model is verified by comparison with the circuital model and full-wave simulations. This time-saving model can deal with arbitrary aperture shapes, various wave propagation and polarization directions, and the near-field effect.
Directory of Open Access Journals (Sweden)
Katya L Masconi
Imputation techniques used to handle missing data are based on the principle of replacement. It is widely advocated that multiple imputation is superior to other imputation methods; however, studies have suggested that simple methods for filling missing data can be just as accurate as complex methods. The objective of this study was to implement a number of simple and more complex imputation methods, and assess the effect of these techniques on the performance of undiagnosed diabetes risk prediction models during external validation. Data from the Cape Town Bellville-South cohort served as the basis for this study. Imputation methods and models were identified via recent systematic reviews. Models' discrimination was assessed and compared using the C-statistic and non-parametric methods, before and after recalibration through simple intercept adjustment. The study sample consisted of 1256 individuals, of whom 173 were excluded due to previously diagnosed diabetes. Of the final 1083 individuals, 329 (30.4%) had missing data. Family history had the highest proportion of missing data (25%). Imputation of the outcome, undiagnosed diabetes, was highest in stochastic regression imputation (163 individuals). Overall, deletion resulted in the lowest model performances, while simple imputation yielded the highest C-statistic for the Cambridge Diabetes Risk model, Kuwaiti Risk model, Omani Diabetes Risk model and Rotterdam Predictive model. Multiple imputation only yielded the highest C-statistic for the Rotterdam Predictive model, and was matched by simpler imputation methods. Deletion was confirmed as a poor technique for handling missing data. However, despite the emphasized disadvantages of simpler imputation methods, this study showed that implementing these methods results in similar predictive utility for undiagnosed diabetes when compared to multiple imputation.
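Two ingredients of the comparison above, simple (unconditional mean) imputation and the C-statistic used to assess discrimination, can be sketched as follows. This is an illustrative implementation, not the study's code; the function names are hypothetical.

```python
import numpy as np

def mean_impute(X):
    """Simple imputation: replace each missing value (NaN) with its column mean."""
    X = np.array(X, float)
    col_means = np.nanmean(X, axis=0)
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = np.take(col_means, cols)
    return X

def c_statistic(y, score):
    """C-statistic (AUC): probability that a randomly chosen case receives a
    higher risk score than a randomly chosen non-case (ties count 1/2)."""
    y, score = np.asarray(y), np.asarray(score)
    pos, neg = score[y == 1], score[y == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))
```

In a validation exercise like the one described, each handling strategy (deletion, simple imputation, multiple imputation) produces a completed dataset, the risk model is applied, and the resulting C-statistics are compared.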
Distributional Analysis for Model Predictive Deferrable Load Control
Chen, Niangjun; Gan, Lingwen; Low, Steven H.; Wierman, Adam
2014-01-01
Deferrable load control is essential for handling the uncertainties associated with the increasing penetration of renewable generation. Model predictive control has emerged as an effective approach for deferrable load control, and has received considerable attention. In particular, previous work has analyzed the average-case performance of model predictive deferrable load control. However, to this point, distributional analysis of model predictive deferrable load control has been elusive. In ...
Prediction of Noise Reduction Effect with Acoustic Modeling
Institute of Scientific and Technical Information of China (English)
郝娇; 翟国庆
2013-01-01
The noise of cooling towers serving a star-rated hotel had a considerable impact on the acoustic environment of the 5A-level scenic area in which they are located. The cooling towers were treated as four area sound sources: two side air inlets and two top air outlets. With the relative source strengths (A-weighted sound power levels) of the four area sources determined, acoustic modelling was carried out with the Cadna/A software, and the absolute source strengths were calibrated by comparison with measured values. The noise contributions of the four area sources at the site boundary and at sensitive buildings outside it under normal operating conditions were then predicted. On this basis, noise reduction measures were designed against the noise reduction targets and the soundscape requirements of the scenic area, and the noise reduction effect was predicted with Cadna/A. After the project was implemented, the measured noise reduction effect agreed well with the predicted values, meeting the required environmental limits.
Nonlinear chaotic model for predicting storm surges
Directory of Open Access Journals (Sweden)
M. Siek
2010-09-01
This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by the adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables. We implemented the univariate and multivariate chaotic models with direct and multi-step prediction techniques and optimized these models using an exhaustive search method. The built models were tested for predicting storm surge dynamics for different stormy conditions in the North Sea, and are compared to neural network models. The results show that the chaotic models can generally provide reliable and accurate short-term storm surge predictions.
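The adaptive local modelling idea, predicting from the successors of dynamical neighbors in a reconstructed phase space, can be sketched as follows. The embedding dimension, delay, and neighbor count are illustrative defaults, not the values optimized for the North Sea models.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Reconstruct phase-space vectors [x_t, x_{t+tau}, ..., x_{t+(dim-1)tau}]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def local_model_predict(x, dim=3, tau=1, k=5):
    """Adaptive local model: average the one-step successors of the k
    dynamical neighbors of the current state in reconstructed phase space."""
    x = np.asarray(x, float)
    E = delay_embed(x, dim, tau)
    query, history = E[-1], E[:-1]
    d = np.linalg.norm(history - query, axis=1)   # distances to past states
    nn = np.argsort(d)[:k]                        # k dynamical neighbors
    succ = x[nn + (dim - 1) * tau + 1]            # their one-step successors
    return succ.mean()
```

On a clean periodic signal the neighbors recur every cycle and the one-step prediction is accurate; for surge data one would tune `dim`, `tau`, and `k`, e.g. by the exhaustive search mentioned above.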
Nonlinear chaotic model for predicting storm surges
Siek, M.; Solomatine, D.P.
This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by the adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables.
Mattelaer, Olivier; Ruiz, Richard
2016-01-01
Hadronic decays of boosted resonances, e.g., top quark jets, at hadronic super colliders are frequent predictions in TeV-scale extensions of the Standard Model of Particle Physics. In such scenarios, accurate modeling of QCD radiation is necessary for trustworthy predictions. We present the automation of fully differential, next-to-leading-order (NLO) in QCD corrections with parton shower (PS) matching for an effective Left-Right Symmetric Model (LRSM) that features $W_R^\\pm, Z_R$ gauge bosons and heavy Majorana neutrinos $N$. Publicly available universal model files require remarkably fewer user inputs for predicting benchmark collider processes than leading order LRSM constructions. We present predictions for inclusive $W_R^\\pm, Z_R$ production at the $\\sqrt{s} = 13$ TeV Large Hadron Collider (LHC) and a hypothetical future 100 TeV Very Large Hadron Collider (VLHC), as well as inclusive $N$ production for a hypothetical Large Hadron Electron Collider (LHeC). As a case study, we investigate at NLO+PS accurac...
EFFICIENT PREDICTIVE MODELLING FOR ARCHAEOLOGICAL RESEARCH
Balla, A.; Pavlogeorgatos, G.; Tsiafakis, D.; Pavlidis, G.
2014-01-01
The study presents a general methodology for designing, developing and implementing predictive modelling for identifying areas of archaeological interest. The methodology is based on documented archaeological data and geographical factors, geospatial analysis and predictive modelling, and has been applied to the identification of possible Macedonian tombs’ locations in Northern Greece. The model was tested extensively and the results were validated using a commonly used predictive gain,...
Zhang, Dan; Chen, Anqiang; Zhao, Jixia; Lu, Chuanhao; Liu, Gangcai
2017-10-01
Rock decay is mainly the result of the combined effects of moisture content and temperature, but little is known about the quantitative relationship between these variables and the rate of rock decay. In this study we develop quantitative calculation models of rock decay rate under laboratory conditions and validate the efficiency of these models by comparing the predicted rock decay mass and that measured for rock exposed in the field. Rainfall and temperature data in the field were standardised to a dimensionless moisture content and temperature variables, respectively, and then the predicted rock decay mass was calculated by the models. The measured rock decay mass was determined by manual sieving. Based on our previously determined relationship between a single factor (moisture content or temperature) and the rate of rock decay in the laboratory, power function models are developed. Results show that the rock decay mass calculated by the model was comparable with field data, with averaged relative errors of 1.53%, 9.00% and 11.82% for the Tuodian group (J3t), Matoushan group (K2m) and Lufeng group (J1l), respectively, which are mainly due to inaccurate transformation of field rainfall into the rock moisture content and artificial disturbance when the samples were sieved in the field. Our results show that the developed models based on laboratory-derived rates can accurately predict the decay rates of mudstones exposed in the field.
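A power-function rate model of the kind developed above can be fitted by least squares in log-log space. This is a minimal sketch with hypothetical names; the study's actual models couple dimensionless moisture content and temperature, which is not reproduced here.

```python
import math

def fit_power_law(xs, ys):
    """Fit y = a * x**b by ordinary least squares on (log x, log y)."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(xs)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) \
        / sum((u - mx) ** 2 for u in lx)
    a = math.exp(my - b * mx)
    return a, b
```

Validation then proceeds as in the abstract: the fitted model predicts decay mass from standardised field moisture and temperature, and the relative error against sieved field measurements is reported.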
How to Establish Clinical Prediction Models
Directory of Open Access Journals (Sweden)
Yong-ho Lee
2016-03-01
Full Text Available A clinical prediction model can be applied to several challenging clinical scenarios: screening high-risk individuals for asymptomatic disease, predicting future events such as disease or death, and assisting medical decision-making and health education. Despite the impact of clinical prediction models on practice, prediction modeling is a complex process requiring careful statistical analyses and sound clinical judgement. Although there is no definite consensus on the best methodology for model development and validation, a few recommendations and checklists have been proposed. In this review, we summarize five steps for developing and validating a clinical prediction model: preparation for establishing clinical prediction models; dataset selection; handling variables; model generation; and model evaluation and validation. We also review several studies that detail methods for developing clinical prediction models with comparable examples from real practice. After model development and vigorous validation in relevant settings, possibly with evaluation of utility/usability and fine-tuning, good models can be ready for the use in practice. We anticipate that this framework will revitalize the use of predictive or prognostic research in endocrinology, leading to active applications in real clinical practice.
Paig-Tran, E W Misty; Bizzarro, Joseph J; Strother, James A; Summers, Adam P
2011-05-15
We created physical models based on the morphology of ram suspension-feeding fishes to better understand the roles morphology and swimming speed play in particle retention, size selectivity and filtration efficiency during feeding events. We varied the buccal length, flow speed and architecture of the gills slits, including the number, size, orientation and pore size/permeability, in our models. Models were placed in a recirculating flow tank with slightly negatively buoyant plankton-like particles (~20-2000 μm) collected at the simulated esophagus and gill rakers to locate the highest density of particle accumulation. Particles were captured through sieve filtration, direct interception and inertial impaction. Changing the number of gill slits resulted in a change in the filtration mechanism of particles from a bimodal filter, with very small (≤ 50 μm) and very large (>1000 μm) particles collected, to a filter that captured medium-sized particles (101-1000 μm). The number of particles collected on the gill rakers increased with flow speed and skewed the size distribution towards smaller particles (51-500 μm). Small pore sizes (105 and 200 μm mesh size) had the highest filtration efficiencies, presumably because sieve filtration played a significant role. We used our model to make predictions about the filtering capacity and efficiency of neonatal whale sharks. These results suggest that the filtration mechanics of suspension feeding are closely linked to an animal's swimming speed and the structural design of the buccal cavity and gill slits.
Comparison of Prediction-Error-Modelling Criteria
DEFF Research Database (Denmark)
Jørgensen, John Bagterp; Jørgensen, Sten Bay
2007-01-01
Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a realization of a continuous-discrete multivariate stochastic transfer function model. The proposed prediction-error methods are demonstrated for a SISO system parameterized by the transfer functions with time delays of a continuous-discrete-time linear stochastic system. The simulations for this case suggest... computational resources. The identification method is suitable for predictive control.
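A single-step prediction-error method of this kind computes one-step innovations with a Kalman filter and scores candidate parameters by the Gaussian maximum-likelihood criterion. The scalar sketch below is illustrative only, a simplification of the multivariate continuous-discrete setting described above.

```python
import math

def prediction_error_criterion(y, a, c, q, r):
    """One-step prediction errors of the scalar state-space model
        x_{t+1} = a x_t + w_t (var q),   y_t = c x_t + v_t (var r),
    via the Kalman filter, scored by the Gaussian ML criterion
        sum_t [ log s_t + e_t^2 / s_t ]
    over innovations e_t with predicted variance s_t."""
    xhat, p = 0.0, 1.0              # prior state mean and variance
    J = 0.0
    for yt in y:
        s = c * p * c + r           # innovation variance
        e = yt - c * xhat           # one-step prediction error
        J += math.log(s) + e * e / s
        k = p * c / s               # Kalman gain (measurement update)
        xhat += k * e
        p -= k * c * p
        xhat, p = a * xhat, a * p * a + q   # time update
    return J
```

Minimizing `J` over the model parameters is the maximum-likelihood variant; replacing the summand with `e*e` alone gives the least-squares variant compared in the record.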
Directory of Open Access Journals (Sweden)
Arefa Jafarzadeh Kohneloo
2015-09-01
Background: Recent studies have shown that genes affecting the survival time of cancer patients play an important role as risk or protective factors. The present study was designed to identify genes affecting survival time in patients with diffuse large B-cell lymphoma and to predict survival time using the selected genes. Materials & Methods: This cohort study was conducted on 40 patients with diffuse large B-cell lymphoma, for whom the expression of 2042 genes was measured. To predict survival time, a semi-parametric additive survival model was combined with two gene selection methods, the elastic net and the lasso. The two methods were evaluated by plotting the area under the ROC curve over time and calculating the integral of this curve. Results: Based on our findings, the elastic net method identified 10 genes, and the Lasso-Cox method identified 7 genes. GENE3325X increased survival time (P=0.006), whereas GENE3980X and GENE377X reduced it (P=0.004). These three genes were selected as important by both methods. Conclusion: This study showed that the elastic net method outperformed the common lasso method in terms of predictive power. Moreover, applying the additive model instead of Cox regression to microarray data is a usable way to predict patients' survival time.
Optimal feedback scheduling of model predictive controllers
Institute of Scientific and Technical Information of China (English)
Pingfang ZHOU; Jianying XIE; Xiaolong DENG
2006-01-01
Model predictive control (MPC) could not be reliably applied to real-time control systems because its computation time is not well defined. Implemented as an anytime algorithm, an MPC task allows computation time to be traded for control performance, thus obtaining predictability in time. Optimal feedback scheduling (FS-CBS) of a set of MPC tasks is presented to maximize the global control performance subject to limited processor time. Each MPC task is assigned a constant bandwidth server (CBS), whose reserved processor time is adjusted dynamically. The constraints in the FS-CBS guarantee schedulability of the total task set and stability of each component. The FS-CBS is shown to be robust against the variation of execution time of MPC tasks at runtime. Simulation results illustrate its effectiveness.
Directory of Open Access Journals (Sweden)
K. Lee
2002-01-01
This paper reports the application to vegetation canopies of a coherent model for the propagation of electromagnetic radiation through a stratified medium. The resulting multi-layer vegetation model is plausibly realistic in that it recognises the dielectric permittivity of the vegetation matter, the mixing of the dielectric permittivities for vegetation and air within the canopy and, in simplified terms, the overall vertical distribution of dielectric permittivity and temperature through the canopy. Any sharp changes in the dielectric profile of the canopy resulted in interference effects manifested as oscillations in the microwave brightness temperature as a function of canopy height or look angle. However, when Gaussian broadening of the top and bottom of the canopy (reflecting the natural variability between plants) was included within the model, these oscillations were eliminated. The model parameters required to specify the dielectric profile within the canopy, particularly the parameters that quantify the dielectric mixing between vegetation and air in the canopy, are not usually available in typical field experiments. Thus, the feasibility of specifying these parameters using an advanced single-criterion, multiple-parameter optimisation technique was investigated by automatically minimizing the difference between the modelled and measured brightness temperatures. The results imply that the mixing parameters can be so determined, but only if other parameters that specify vegetation dry matter and water content are measured independently. The new model was then applied to investigate the sensitivity of microwave emission to specific vegetation parameters. Keywords: passive microwave, soil moisture, vegetation, SMOS, retrieval
Cefalu, Matthew; Dominici, Francesca
2014-07-01
In environmental epidemiology, we are often faced with 2 challenges. First, an exposure prediction model is needed to estimate the exposure to an agent of interest, ideally at the individual level. Second, when estimating the health effect associated with the exposure, confounding adjustment is needed in the health-effects regression model. The current literature addresses these 2 challenges separately. That is, methods that account for measurement error in the predicted exposure often fail to acknowledge the possibility of confounding, whereas methods designed to control confounding often fail to acknowledge that the exposure has been predicted. In this article, we consider exposure prediction and confounding adjustment in a health-effects regression model simultaneously. Using theoretical arguments and simulation studies, we show that the bias of a health-effect estimate is influenced by the exposure prediction model, the type of confounding adjustment used in the health-effects regression model, and the relationship between these 2. Moreover, we argue that even with a health-effects regression model that properly adjusts for confounding, the use of a predicted exposure can bias the health-effect estimate unless all confounders included in the health-effects regression model are also included in the exposure prediction model. While these results of this article were motivated by studies of environmental contaminants, they apply more broadly to any context where an exposure needs to be predicted.
Predictive modelling of ferroelectric tunnel junctions
Velev, Julian P.; Burton, John D.; Zhuravlev, Mikhail Ye; Tsymbal, Evgeny Y.
2016-05-01
Ferroelectric tunnel junctions combine the phenomena of quantum-mechanical tunnelling and switchable spontaneous polarisation of a nanometre-thick ferroelectric film into novel device functionality. Switching the ferroelectric barrier polarisation direction produces a sizable change in resistance of the junction—a phenomenon known as the tunnelling electroresistance effect. From a fundamental perspective, ferroelectric tunnel junctions and their version with ferromagnetic electrodes, i.e., multiferroic tunnel junctions, are testbeds for studying the underlying mechanisms of tunnelling electroresistance as well as the interplay between electric and magnetic degrees of freedom and their effect on transport. From a practical perspective, ferroelectric tunnel junctions hold promise for disruptive device applications. In a very short time, they have traversed the path from basic model predictions to prototypes for novel non-volatile ferroelectric random access memories with non-destructive readout. This remarkable progress is to a large extent driven by a productive cycle of predictive modelling and innovative experimental effort. In this review article, we outline the development of the ferroelectric tunnel junction concept and the role of theoretical modelling in guiding experimental work. We discuss a wide range of physical phenomena that control the functional properties of ferroelectric tunnel junctions and summarise the state-of-the-art achievements in the field.
A Modified Model Predictive Control Scheme
Institute of Scientific and Technical Information of China (English)
Xiao-Bing Hu; Wen-Hua Chen
2005-01-01
In implementations of MPC (Model Predictive Control) schemes, two issues need to be addressed. One is how to enlarge the stability region as much as possible. The other is how to guarantee stability when a computational time limitation exists. In this paper, a modified MPC scheme for constrained linear systems is described. An offline LMI-based iteration process is introduced to expand the stability region. At the same time, a database of feasible control sequences is generated offline so that stability can still be guaranteed in the case of computational time limitations. Simulation results illustrate the effectiveness of this new approach.
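The basic receding-horizon loop that the scheme modifies can be sketched as follows. This is a minimal, hedged illustration only: the paper's offline LMI-based iteration and its database of feasible control sequences are not reproduced, and the double-integrator plant and weights below are assumptions for demonstration.

```python
import numpy as np

# Illustrative unconstrained MPC loop: at each step, solve a finite-horizon
# LQ problem in batch form and apply only the first input (receding horizon).
A = np.array([[1.0, 1.0], [0.0, 1.0]])   # double-integrator plant (assumed)
B = np.array([[0.5], [1.0]])
Q = np.eye(2)                             # state weight
R = np.array([[0.1]])                     # input weight
N = 10                                    # prediction horizon

def mpc_step(x0):
    """Solve the batch finite-horizon LQ problem; return the first input."""
    n, m = A.shape[0], B.shape[1]
    # Prediction matrices: stacked states X = F x0 + G U over the horizon.
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    G = np.zeros((N * n, N * m))
    for i in range(N):
        for j in range(i + 1):
            G[3*0 + i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    Qbar = np.kron(np.eye(N), Q)
    Rbar = np.kron(np.eye(N), R)
    # Unconstrained minimiser of the quadratic cost in U.
    H = G.T @ Qbar @ G + Rbar
    U = -np.linalg.solve(H, G.T @ Qbar @ F @ x0)
    return U[:m]

x = np.array([5.0, 0.0])
for _ in range(20):                       # closed loop: first input only
    u = mpc_step(x)
    x = A @ x + B @ u
print(np.linalg.norm(x))
```

The paper's contribution sits around this loop: enlarging the region where the online problem is feasible, and falling back on precomputed feasible sequences when the solve cannot finish in time.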
Directory of Open Access Journals (Sweden)
Kimberly J Van Meter
Nutrient legacies in anthropogenic landscapes, accumulated over decades of fertilizer application, lead to time lags between implementation of conservation measures and improvements in water quality. Quantification of such time lags has remained difficult, however, due to an incomplete understanding of controls on nutrient depletion trajectories after changes in land-use or management practices. In this study, we have developed a parsimonious watershed model for quantifying catchment-scale time lags based on both soil nutrient accumulations (biogeochemical legacy) and groundwater travel time distributions (hydrologic legacy). The model accurately predicted the time lags observed in an Iowa watershed that had undergone a 41% conversion of area from row crop to native prairie. We explored the time scales of change for stream nutrient concentrations as a function of both natural and anthropogenic controls, from topography to spatial patterns of land-use change. Our results demonstrate that the existence of biogeochemical nutrient legacies increases time lags beyond those due to hydrologic legacy alone. In addition, we show that the maximum concentration reduction benefits vary according to the spatial pattern of intervention, with preferential conversion of land parcels having the shortest catchment-scale travel times providing proportionally greater concentration reductions as well as faster response times. In contrast, a random pattern of conversion results in a 1:1 relationship between percent land conversion and percent concentration reduction, irrespective of denitrification rates within the landscape. Our modeling framework allows for the quantification of tradeoffs between costs associated with implementation of conservation measures and the time needed to see the desired concentration reductions, making it of great value to decision makers regarding optimal implementation of watershed conservation measures.
Van Meter, Kimberly J; Basu, Nandita B
2015-01-01
Nutrient legacies in anthropogenic landscapes, accumulated over decades of fertilizer application, lead to time lags between implementation of conservation measures and improvements in water quality. Quantification of such time lags has remained difficult, however, due to an incomplete understanding of controls on nutrient depletion trajectories after changes in land-use or management practices. In this study, we have developed a parsimonious watershed model for quantifying catchment-scale time lags based on both soil nutrient accumulations (biogeochemical legacy) and groundwater travel time distributions (hydrologic legacy). The model accurately predicted the time lags observed in an Iowa watershed that had undergone a 41% conversion of area from row crop to native prairie. We explored the time scales of change for stream nutrient concentrations as a function of both natural and anthropogenic controls, from topography to spatial patterns of land-use change. Our results demonstrate that the existence of biogeochemical nutrient legacies increases time lags beyond those due to hydrologic legacy alone. In addition, we show that the maximum concentration reduction benefits vary according to the spatial pattern of intervention, with preferential conversion of land parcels having the shortest catchment-scale travel times providing proportionally greater concentration reductions as well as faster response times. In contrast, a random pattern of conversion results in a 1:1 relationship between percent land conversion and percent concentration reduction, irrespective of denitrification rates within the landscape. Our modeling framework allows for the quantification of tradeoffs between costs associated with implementation of conservation measures and the time needed to see the desired concentration reductions, making it of great value to decision makers regarding optimal implementation of watershed conservation measures.
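The central mechanism, biogeochemical legacy lengthening the lag beyond the hydrologic lag alone, can be sketched with a toy calculation. All parameters here are illustrative assumptions, not the paper's calibrated watershed model: the aquifer is a linear reservoir (an exponential travel-time distribution stands in for the hydrologic legacy), while a slowly depleting soil store stands in for the biogeochemical legacy.

```python
import math

def time_to_halve(k_soil, mean_tt=10.0, dt=0.01, t_max=200.0):
    """Years until stream concentration falls to half its pre-change value.

    After the land-use change at t=0, the soil legacy releases leachate as
    exp(-k_soil * t); the aquifer is a linear reservoir with mean travel
    time `mean_tt` years.  Forward-Euler integration of dc/dt = (L - c)/T.
    """
    c, t = 1.0, 0.0                       # steady state before the change
    while c > 0.5 and t < t_max:
        leachate = math.exp(-k_soil * t)
        c += dt * (leachate - c) / mean_tt
        t += dt
    return t

lag_hydro = time_to_halve(k_soil=1e9)     # leachate stops at once
lag_both = time_to_halve(k_soil=0.05)     # slow biogeochemical release
print(lag_hydro, lag_both)
```

With these assumed rates, the hydrologic legacy alone gives a half-recovery time near the reservoir half-life, while the added soil legacy stretches it several-fold, which is the qualitative result the paper quantifies for a real catchment.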
Béchet, Quentin; Shilton, Andy; Guieysse, Benoit
2013-12-01
The ability to model algal productivity under transient conditions of light intensity and temperature is critical for assessing the profitability and sustainability of full-scale algae cultivation outdoors. However, a review of over 40 modeling approaches reveals that most of the models hitherto described in the literature have not been validated under conditions relevant to outdoor cultivation. With respect to light intensity, we therefore categorized and assessed these models based on their theoretical ability to account for the light gradients and short light cycles experienced in well-mixed dense outdoor cultures. Type I models were defined as models predicting the rate of photosynthesis of the entire culture as a function of the incident or average light intensity reaching the culture. Type II models were defined as models computing productivity as the sum of local productivities within the cultivation broth (based on the light intensity locally experienced by individual cells) without consideration of short light cycles. Type III models were then defined as models considering the impacts of both light gradients and short light cycles. Whereas Type I models are easy to implement, they are theoretically not applicable to outdoor systems outside the range of experimental conditions used for their development. By contrast, Type III models offer significant refinement, but the complexity of the inputs needed currently restricts their practical application. We therefore propose that Type II models currently offer the best compromise between accuracy and practicability for full-scale engineering application. With respect to temperature, we defined "coupled" and "uncoupled" models as the approaches which do and do not account, respectively, for the potential interdependence of light and temperature on the rate of photosynthesis. Due to the high number of coefficients of coupled models and the associated risk of overfitting, the recommended approach is uncoupled modeling.
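The Type I versus Type II distinction can be made concrete with a short sketch. The Beer-Lambert attenuation, the saturating light response, and every parameter below are illustrative assumptions, not values from the review.

```python
import math

I0 = 2000.0        # incident light (assumed), umol photons/m2/s
k = 50.0           # attenuation coefficient of the dense culture, 1/m
depth = 0.3        # culture depth, m
Pmax, K = 1.0, 150.0  # saturating light-response parameters (assumed)

def photosynthesis(I):
    """Saturating light response (rectangular hyperbola)."""
    return Pmax * I / (I + K)

def light_at(z):
    """Beer-Lambert attenuation through the culture."""
    return I0 * math.exp(-k * z)

n = 1000  # depth discretisation
# Type I: one rate computed from the depth-averaged light intensity.
I_avg = sum(light_at(i * depth / n) for i in range(n)) / n
p_type1 = photosynthesis(I_avg)

# Type II: average of local rates, cell by cell over the light gradient.
p_type2 = sum(photosynthesis(light_at(i * depth / n)) for i in range(n)) / n
print(p_type1, p_type2)
```

Because the light response is concave, the rate of the average light (Type I) exceeds the average of the local rates (Type II) by Jensen's inequality, which is one reason Type I models calibrated in thin cultures can misestimate productivity in dense outdoor systems.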
Ground Motion Prediction Models for Caucasus Region
Jorjiashvili, Nato; Godoladze, Tea; Tvaradze, Nino; Tumanova, Nino
2016-04-01
Ground motion prediction models (GMPMs) relate ground motion intensity measures to variables describing earthquake source, path, and site effects. Estimation of expected ground motion is fundamental to earthquake hazard assessment. The most commonly used parameters for attenuation relations are peak ground acceleration and spectral acceleration, because these parameters provide useful information for seismic hazard assessment. Development of the Georgian Digital Seismic Network began in 2003. In this study, new GMP models are obtained based on new data from the Georgian seismic network and also from neighboring countries. The models are estimated in the classical statistical way, by regression analysis. Site ground conditions are additionally considered, because the same earthquake recorded at the same distance may cause different damage depending on ground conditions. Empirical ground-motion prediction models (GMPMs) require adjustment to make them appropriate for site-specific scenarios; however, the process of making such adjustments remains a challenge. This work presents a holistic framework for the development of a peak ground acceleration (PGA) or spectral acceleration (SA) GMPE that is easily adjustable to different seismological conditions and does not suffer from the practical problems associated with adjustments in the response spectral domain.
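The regression step can be sketched as ordinary least squares on a common textbook attenuation form, ln PGA = c1 + c2·M + c3·ln R. The functional form, the coefficients, and the synthetic data below are assumptions for illustration, not the study's Georgian dataset.

```python
import numpy as np

# Generate synthetic (magnitude, distance, ln PGA) records with aleatory
# scatter, then recover the coefficients by least squares.
rng = np.random.default_rng(0)
n = 500
M = rng.uniform(4.0, 7.5, n)              # magnitudes
R = rng.uniform(5.0, 200.0, n)            # hypocentral distances, km

true_c = np.array([-2.0, 1.1, -1.3])      # assumed "true" coefficients
ln_pga = true_c[0] + true_c[1] * M + true_c[2] * np.log(R)
ln_pga += rng.normal(0.0, 0.3, n)         # aleatory scatter (assumed sigma)

# Design matrix [1, M, ln R] and ordinary least squares fit.
X = np.column_stack([np.ones(n), M, np.log(R)])
c_hat, *_ = np.linalg.lstsq(X, ln_pga, rcond=None)
print(np.round(c_hat, 2))
```

Real GMPE development adds site terms, magnitude saturation, and mixed-effects treatment of between-event and within-event residuals; this sketch shows only the core regression.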
DEFF Research Database (Denmark)
Berning, Torsten
2014-01-01
The micro-porous layer (MPL) in a proton exchange membrane fuel cell is frequently believed to constitute a barrier for the liquid water owing to its low hydraulic permeability compared to the porous substrate. When micro-channels are carved into the MPL on the side facing the catalyst layer...... conditions. This modeling study investigates the effect of such micro-channels on the predicted membrane hydration level for a predetermined set of operating conditions with a three-dimensional computational fluid dynamics model that utilizes the multi-fluid approach....
Childhood asthma prediction models: a systematic review.
Smit, Henriette A; Pinart, Mariona; Antó, Josep M; Keil, Thomas; Bousquet, Jean; Carlsen, Kai H; Moons, Karel G M; Hooft, Lotty; Carlsen, Karin C Lødrup
2015-12-01
Early identification of children at risk of developing asthma at school age is crucial, but the usefulness of childhood asthma prediction models in clinical practice is still unclear. We systematically reviewed all existing prediction models to identify preschool children with asthma-like symptoms at risk of developing asthma at school age. Studies were included if they developed a new prediction model or updated an existing model in children aged 4 years or younger with asthma-like symptoms, with assessment of asthma done between 6 and 12 years of age. 12 prediction models were identified in four types of cohorts of preschool children: those with health-care visits, those with parent-reported symptoms, those at high risk of asthma, or children in the general population. Four basic models included non-invasive, easy-to-obtain predictors only, notably family history, allergic disease comorbidities or precursors of asthma, and severity of early symptoms. Eight extended models included additional clinical tests, mostly specific IgE determination. Some models could better predict asthma development and other models could better rule out asthma development, but the predictive performance of no single model stood out in both aspects simultaneously. This finding suggests that there is a large proportion of preschool children with wheeze for which prediction of asthma development is difficult.
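The "better at predicting development" versus "better at ruling out" contrast is the usual sensitivity/specificity trade-off, which a tiny sketch makes explicit. The two confusion matrices are invented numbers for illustration, not results from any of the 12 reviewed models.

```python
def metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and positive predictive value."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
    }

# A "rule-out" style model: high sensitivity, few missed future asthmatics.
rule_out = metrics(tp=90, fp=120, fn=10, tn=180)
# A "rule-in" style model: high specificity, few false alarms.
rule_in = metrics(tp=60, fp=15, fn=40, tn=285)

print(rule_out["sensitivity"], rule_in["specificity"])
```

No single reviewed model scored well on both axes at once, which is exactly what these two hypothetical matrices illustrate: the first misses few cases but labels many healthy children, the second the reverse.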
Podar, Dorina; Ramsey, Michael H
2005-07-15
An eight-fold underestimate of the potential Cd exposure to humans via ingestion of lettuce grown in moderately alkaline soil has been measured experimentally. Current models of Cd uptake by leafy vegetables, which are used in risk assessment (e.g., CLEA in the UK), predict higher concentration factors in acid than in alkaline soils. Experimental evidence shows that Cd uptake, although it decreases with increasing pH from acid to neutral soils, increases again in alkaline soils, confirming recent findings from other workers. The concentration of Zn in the soil also significantly affects the uptake of Cd, although this is not included in the current prediction models either. The effect of Zn on the uptake of Cd by plants is greater in slightly alkaline soils (pH 7.7) than in slightly acidic or neutral soils. High concentrations of Zn in soil (1000 mg/kg), which are often associated with elevated Cd levels, further increase the Cd concentration factor to values 12 times higher than that predicted by the CLEA model. This is due in part to the effect of the high soil Zn on reducing the above-ground biomass of the plants.
Tao, Yang; Li, Yong; Zhou, Ruiyun; Chu, Dinh-Toi; Su, Lijuan; Han, Yongbin; Zhou, Jianzhong
2016-10-01
In the study, osmotically dehydrated cherry tomatoes were partially dried to water activity between 0.746 and 0.868, vacuum-packed and stored at 4-30 °C for 60 days. Adaptive neuro-fuzzy inference system (ANFIS) was utilized to predict the physicochemical and microbiological parameters of these partially dried cherry tomatoes during storage. Satisfactory accuracies were obtained when ANFIS was used to predict the lycopene and total phenolic contents, color and microbial contamination. The coefficients of determination for all the ANFIS models were higher than 0.86 and showed better performance for prediction compared with models developed by response surface methodology. Through ANFIS modeling, the effects of storage conditions on the properties of partially dried cherry tomatoes were visualized. Generally, contents of lycopene and total phenolics decreased with the increase in water activity, temperature and storage time, while aerobic plate count and number of yeasts and molds increased at high water activities and temperatures. Overall, ANFIS approach can be used as an effective tool to study the quality decrease and microbial pollution of partially dried cherry tomatoes during storage, as well as identify the suitable preservation conditions.
Directory of Open Access Journals (Sweden)
Woochul Nam
Kinesins are molecular motors which walk along microtubules by moving their heads to different binding sites. The motion of kinesin is realized by a conformational change in the structure of the kinesin molecule and by a diffusion of one of its two heads. In this study, a novel model is developed to account for the 2D diffusion of kinesin heads to several neighboring binding sites (near the surface of microtubules). To determine the direction of the next step of a kinesin molecule, this model considers the extension in the neck linkers of kinesin and the dynamic behavior of the coiled-coil structure of the kinesin neck. Also, the mechanical interference between kinesins and obstacles anchored on the microtubules is characterized. The model predicts that both the kinesin velocity and run length (i.e., the walking distance before detaching from the microtubule) are reduced by static obstacles. The run length is decreased more significantly by static obstacles than the velocity. Moreover, our model is able to predict the motion of kinesin when other (several) motors also move along the same microtubule. Furthermore, it suggests that the effect of mechanical interaction/interference between motors is much weaker than the effect of static obstacles. Our newly developed model can be used to address unanswered questions regarding degraded transport caused by the presence of excessive tau proteins on microtubules.
Norwood, Warren P; Borgmann, Uwe; Dixon, D George
2013-07-01
Chronic toxicity tests of mixtures of 9 metals and 1 metalloid (As, Cd, Co, Cr, Cu, Mn, Ni, Pb, Tl, and Zn) at equitoxic concentrations over an increasing concentration range were conducted with the epibenthic, freshwater amphipod Hyalella azteca. The authors conducted 28-d, water-only tests. The bioaccumulation trends changed for 8 of the elements in exposures to mixtures of the metals compared with individual metal exposures. The bioaccumulation of Co and Tl were affected the most. These changes may be due to interactions between all the metals as well as interactions with waterborne ligands. A metal effects addition model (MEAM) is proposed as a more accurate method to assess the impact of mixtures of metals and to predict chronic mortality. The MEAM uses background-corrected body concentration to predict toxicity. This is important because the chemical characteristics of different waters can greatly alter the bioavailability and bioaccumulation of metals, and interactions among metals for binding at the site of action within the organism can affect body concentration. The MEAM accurately predicted toxicity in exposures to mixtures of metals, and predicted results were within a factor of 1.1 of the observed data, using 24-h depurated body concentrations. The traditional concentration addition model overestimated toxicity by a factor of 2.7.
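The contrast between the traditional concentration addition model and an effects-based alternative can be sketched numerically. Everything below is an illustration with invented concentrations, LC50s, and a generic log-logistic dose-response; the paper's MEAM additionally works from background-corrected body concentrations rather than water concentrations.

```python
def mortality(conc, lc50, slope=2.0):
    """Single-metal dose-response (log-logistic; slope is an assumption)."""
    return 1.0 / (1.0 + (lc50 / conc) ** slope)

# Hypothetical water concentrations and LC50s for three of the ten elements.
metals = {"Cd": (0.5, 2.0), "Cu": (3.0, 10.0), "Zn": (20.0, 80.0)}

# Concentration addition: sum toxic units, then apply one dose-response.
tu = sum(c / lc50 for c, lc50 in metals.values())
p_ca = 1.0 / (1.0 + (1.0 / tu) ** 2.0)

# Effects (response) addition: metals act independently on survival.
survival = 1.0
for c, lc50 in metals.values():
    survival *= 1.0 - mortality(c, lc50)
p_ea = 1.0 - survival

print(p_ca, p_ea)
```

With these made-up inputs, concentration addition predicts roughly double the mortality of the effects-based calculation, the same direction as the paper's finding that the traditional model overestimated mixture toxicity by a factor of 2.7.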
Sarkar, S; Sarkar, Sukhendusekhar
2005-01-01
Shell model studies have been done for very neutron-rich nuclei in the range Z=50-55 and N=82-87. Good agreement of the theoretical level spectra with the experimental ones is shown for N=82, 83 I and Te nuclei. Results for three very neutron-rich nuclei, 137Sn and 136-137Sb, are then presented. The present calculations favour a 2- ground state for 136Sb instead of the 1- identified through beta decay. Interesting observations about the E2 effective charges for this region are discussed.
Model predictive control classical, robust and stochastic
Kouvaritakis, Basil
2016-01-01
For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered and the state of the art in computationally tractable methods based on uncertainty tubes presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...
Directory of Open Access Journals (Sweden)
Erol Muzır
2010-09-01
This paper is prepared to test the common opinion that multifactor asset pricing models produce superior predictions compared to single-factor models, and to evaluate the performance of the Arbitrage Pricing Theory (APT) and the Capital Asset Pricing Model (CAPM). For this purpose, the monthly return data from January 1996 to December 2004 of the stocks of 45 firms listed on the Istanbul Stock Exchange were used. Our factor analysis results show that 68.3% of the return variation can be explained by five factors. Although the APT model has generated a low coefficient of determination, 28.3%, it proves to be more competent in explaining stock return changes when compared to the CAPM, which has an inferior explanatory power of 5.4%. Furthermore, we have observed that the APT is also more robust in capturing the effects of any economic crisis on return variations.
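The comparison boils down to fitting a one-factor and a five-factor regression to the same return series and comparing coefficients of determination. The sketch below uses synthetic data with assumed factor loadings, not the Istanbul Stock Exchange sample.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 108                                   # monthly observations (assumed)
factors = rng.normal(0.0, 1.0, (n, 5))    # five orthogonal risk factors
loadings = np.array([0.4, 0.3, 0.25, 0.2, 0.15])   # assumed sensitivities
returns = factors @ loadings + rng.normal(0.0, 0.5, n)  # idiosyncratic noise

def r_squared(X, y):
    """Coefficient of determination of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

r2_capm = r_squared(factors[:, :1], returns)   # single (market-like) factor
r2_apt = r_squared(factors, returns)           # all five factors
print(r2_capm, r2_apt)
```

Since the single-factor model is nested in the five-factor one, its in-sample R² can never exceed the multifactor fit; the interesting empirical question, which the paper addresses, is how large the gap is.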
Directory of Open Access Journals (Sweden)
Wołowicz Marcin
2015-09-01
The paper presents a dynamic model of a hot water storage tank. A literature review has been made. An analysis of the effects of nodalization on the prediction error of the generalized finite element method (GFEM) is provided. The model takes into account eleven various parameters, such as flue gas volumetric flow rate to the spiral, inlet water temperature, and outlet water flow rate. The boiler is also described by sizing parameters, nozzle parameters and heat loss, including ambient temperature. The model has been validated against existing data. Adequate laboratory experiments were performed. A comparison between 1-, 5-, 10- and 50-zone boiler models is presented. The comparison between experiment and simulations for different zone numbers of the boiler model is shown in the plots. The reason for the differences between experiment and simulation is explained.
Waterman, R C; Caton, J S; Löest, C A; Petersen, M K; Roberts, A J
2014-07-01
Interannual variation of forage quantity and quality driven by precipitation events influence beef livestock production systems within the Southern and Northern Plains and Pacific West, which combined represent 60% (approximately 17.5 million) of the total beef cows in the United States. The beef cattle requirements published by the NRC are an important tool and excellent resource for both professionals and producers to use when implementing feeding practices and nutritional programs within the various production systems. The objectives of this paper include evaluation of the 1996 Beef NRC model in terms of effectiveness in predicting extensive range beef cow performance within arid and semiarid environments using available data sets, identifying model inefficiencies that could be refined to improve the precision of predicting protein supply and demand for range beef cows, and last, providing recommendations for future areas of research. An important addition to the current Beef NRC model would be to allow users to provide region-specific forage characteristics and the ability to describe supplement composition, amount, and delivery frequency. Beef NRC models would then need to be modified to account for the N recycling that occurs throughout a supplementation interval and the impact that this would have on microbial efficiency and microbial protein supply. The Beef NRC should also consider the role of ruminal and postruminal supply and demand of specific limiting AA. Additional considerations should include the partitioning effects of nitrogenous compounds under different physiological production stages (e.g., lactation, pregnancy, and periods of BW loss). The intent of information provided is to aid revision of the Beef NRC by providing supporting material for changes and identifying gaps in existing scientific literature where future research is needed to enhance the predictive precision and application of the Beef NRC models.
Directory of Open Access Journals (Sweden)
Vincent Frappier
2014-04-01
Normal mode analysis (NMA) methods are widely used to study dynamic aspects of protein structures. Two critical components of NMA methods are the level of coarse-graining used to represent protein structures and the choice of potential energy functional form. There is a trade-off between speed and accuracy in different choices. At one extreme one finds accurate but slow molecular-dynamics-based methods with all-atom representations and detailed atomic potentials. At the other extreme are fast elastic network model (ENM) methods with Cα-only representations and simplified potentials based on geometry alone, and thus oblivious to protein sequence. Here we present ENCoM, an Elastic Network Contact Model that employs a potential energy function including a pairwise atom-type non-bonded interaction term, and thus makes it possible to consider the effect of the specific nature of amino acids on dynamics within the context of NMA. ENCoM is as fast as existing ENM methods and outperforms such methods in the generation of conformational ensembles. Here we introduce a new application for NMA methods with the use of ENCoM in the prediction of the effect of mutations on protein stability. While existing methods are based on machine learning or enthalpic considerations, the use of ENCoM, based on vibrational normal modes, rests on entropic considerations. This represents a novel area of application for NMA methods and a novel approach for the prediction of the effect of mutations. We compare ENCoM to a large number of methods in terms of accuracy and self-consistency. We show that the accuracy of ENCoM is comparable to that of the best existing methods. We show that existing methods are biased towards the prediction of destabilizing mutations and that ENCoM is less biased at predicting stabilizing mutations.
Energy based prediction models for building acoustics
DEFF Research Database (Denmark)
Brunskog, Jonas
2012-01-01
In order to reach robust and simplified yet accurate prediction models, energy based principle are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA) as well as more elaborated...... principles as, e.g., wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy based prediction models are discussed and critically reviewed. Special attention is placed...
Two Predictions of a Compound Cue Model of Priming
Walenski, Matthew
2003-01-01
This paper examines two predictions of the compound cue model of priming (Ratcliff and McKoon, 1988). While this model has been used to provide an account of a wide range of priming effects, it may not actually predict priming in these or other circumstances. In order to predict priming effects, the compound cue model relies on an assumption that all items have the same number of associates. This assumption may be true in only a restricted number of cases. This paper demonstrates that when th...
Morin, Léo; Leblond, Jean-Baptiste; Tvergaard, Viggo
2016-09-01
An extension of Gurson's famous model (Gurson, 1977) of porous plastic solids, incorporating void shape effects, has recently been proposed by Madou and Leblond (Madou and Leblond, 2012a, 2012b, 2013; Madou et al., 2013). In this extension the voids are no longer modelled as spherical but ellipsoidal with three different axes, and changes of the magnitude and orientation of these axes are accounted for. The aim of this paper is to show that the new model is able to predict softening due essentially to such changes, in the absence of significant void growth. This is done in two steps. First, a numerical implementation of the model is proposed and incorporated into the SYSTUS® and ABAQUS® finite element programmes (through some freely available UMAT (Leblond, 2015) in the second case). Second, the implementation in SYSTUS® is used to simulate previous "numerical experiments" of Tvergaard and coworkers (Tvergaard, 2008, 2009, 2012, 2015a; Dahl et al., 2012; Nielsen et al., 2012) involving the shear loading of elementary porous cells, where softening due to changes of the void shape and orientation was very apparent. It is found that with a simple, heuristic modelling of the phenomenon of mesoscopic strain localization, the model is indeed able to reproduce the results of these numerical experiments, in contrast to Gurson's model disregarding void shape effects.
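For orientation, the classical spherical-void yield function that the Madou-Leblond model generalises can be evaluated directly. The sketch below uses the Gurson-Tvergaard form with the standard q-parameters; the stress values are arbitrary illustrations, and none of the ellipsoidal-void extensions discussed in the paper are implemented.

```python
import math

def gtn_yield(sig_eq, sig_m, sig0, f, q1=1.5, q2=1.0):
    """Gurson-Tvergaard yield function: < 0 elastic, 0 on the yield surface."""
    return ((sig_eq / sig0) ** 2
            + 2.0 * q1 * f * math.cosh(1.5 * q2 * sig_m / sig0)
            - 1.0 - (q1 * f) ** 2)

def yield_stress(sig_m, sig0, f):
    """Equivalent stress on the yield surface at a given mean stress (bisection)."""
    lo, hi = 0.0, sig0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if gtn_yield(mid, sig_m, sig0, f) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

dense = yield_stress(sig_m=100.0, sig0=300.0, f=0.0)    # no porosity
porous = yield_stress(sig_m=100.0, sig0=300.0, f=0.05)  # 5% void fraction
print(dense, porous)
```

Even a modest porosity lowers the macroscopic yield stress; the paper's point is that softening can also arise from changes in void shape and orientation alone, which this spherical-void function cannot represent.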
Massive Predictive Modeling using Oracle R Enterprise
CERN. Geneva
2014-01-01
R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...
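The "model per entity" pattern the talk describes is independent of Oracle's stack. The sketch below illustrates it in Python with synthetic data and hypothetical zip-code entities; Oracle R Enterprise's contribution is running the same grouped fitting in-database, in parallel, via embedded R execution, which is not reproduced here.

```python
import numpy as np

# Synthetic rows of (entity key, feature, target); each entity has its own
# intercept so that per-entity models genuinely differ.
rng = np.random.default_rng(2)
rows = [("zip_" + str(z), x, 2.0 * z + 0.5 * x + rng.normal())
        for z in range(3) for x in rng.uniform(0, 10, 50)]

def fit_per_entity(rows):
    """Fit one least-squares line y ~ a + b*x per entity key."""
    groups = {}
    for key, x, y in rows:
        groups.setdefault(key, []).append((x, y))
    models = {}
    for key, pts in groups.items():
        X = np.column_stack([np.ones(len(pts)), [p[0] for p in pts]])
        y = np.array([p[1] for p in pts])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        models[key] = coef               # (intercept, slope) per entity
    return models

models = fit_per_entity(rows)
print(sorted(models))
```

At enterprise scale the same loop is distributed over thousands of entities, which is the "massive" part of massive predictive modeling.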
Liver Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing liver cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
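The risk-prediction models described here and in the entries that follow typically take the form of a regression score mapped to an absolute probability over the follow-up window. The sketch below is a hedged illustration of that shape only: the risk factors and every coefficient are invented, and do not come from any published cancer risk model.

```python
import math

def predicted_risk(age, smoker, family_history):
    """Illustrative probability of disease over a defined period.

    A logistic model: linear score in the risk factors, squashed to (0, 1).
    Coefficients are hypothetical placeholders.
    """
    score = -7.0 + 0.06 * age + 0.8 * smoker + 1.1 * family_history
    return 1.0 / (1.0 + math.exp(-score))

low = predicted_risk(age=45, smoker=0, family_history=0)
high = predicted_risk(age=70, smoker=1, family_history=1)
print(low, high)
```

Clinically, such a model is used exactly as the entry states: individuals whose predicted probability exceeds a chosen threshold are offered earlier or more frequent screening.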
Colorectal Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Cervical Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing cervical cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Prostate Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing prostate cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Pancreatic Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing pancreatic cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Bladder Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing bladder cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Esophageal Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing esophageal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Lung Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing lung cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Breast Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing breast cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Ovarian Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing ovarian cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Testicular Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing testicular cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Soniat, Thomas M.; Conzelmann, Craig P.; Byrd, Jason D.; Roszell, Dustin P.; Bridevaux, Joshua L.; Suir, Kevin J.; Colley, Susan B.
2013-01-01
In an attempt to decelerate the rate of coastal erosion and wetland loss, and protect human communities, the state of Louisiana developed its Comprehensive Master Plan for a Sustainable Coast. The master plan proposes a combination of restoration efforts including shoreline protection, marsh creation, sediment diversions, and ridge, barrier island, and hydrological restoration. Coastal restoration projects, particularly the large-scale diversions of fresh water from the Mississippi River needed to supply sediment to an eroding coast, potentially impact oyster populations and oyster habitat. An oyster habitat suitability index model is presented that evaluates the effects of a proposed sediment and freshwater diversion into Lower Breton Sound. Voluminous freshwater, needed to suspend and broadly distribute river sediment, will push optimal salinities for oysters seaward and beyond many of the existing reefs. Implementation and operation of the Lower Breton Sound diversion structure as proposed would render about 6,173 ha of hard bottom immediately east of the Mississippi River unsuitable for the sustained cultivation of oysters. If historical harvests are to be maintained in this region, a massive and unprecedented effort to relocate private leases and restore oyster bottoms would be required. Habitat suitability index model results indicate that the appropriate locations for such efforts are to the east and north of the Mississippi River Gulf Outlet.
Predictive Model of Radiative Neutrino Masses
Babu, K S
2013-01-01
We present a simple and predictive model of radiative neutrino masses. It is a special case of the Zee model which introduces two Higgs doublets and a charged singlet. We impose a family-dependent Z_4 symmetry acting on the leptons, which reduces the number of parameters describing neutrino oscillations to four. A variety of predictions follow: The hierarchy of neutrino masses must be inverted; the lightest neutrino mass is extremely small and calculable; one of the neutrino mixing angles is determined in terms of the other two; the phase parameters take CP-conserving values with \delta_{CP} = \pi; and the effective mass in neutrinoless double beta decay lies in a narrow range, m_{\beta\beta} = (17.6 - 18.5) meV. The ratio of vacuum expectation values of the two Higgs doublets, tan\beta, is determined to be either 1.9 or 0.19 from neutrino oscillation data. Flavor-conserving and flavor-changing couplings of the Higgs doublets are also determined from neutrino data. The non-standard neutral Higgs bosons, if t...
Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?
Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander
2016-01-01
Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
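The split of MSEP_uncertain(X) into a squared-bias term and a model-variance term can be illustrated with a short sketch (Python; the linear model and parameter samples below are hypothetical stand-ins, not from the paper):

```python
def msep_uncertain(model, param_samples, xs, ys):
    """MSEP averaged over a sample of parameter vectors, computed as a
    squared-bias term plus a model-variance term at each prediction point."""
    m = len(param_samples)
    total = 0.0
    for x, y in zip(xs, ys):
        preds = [model(x, th) for th in param_samples]
        mean_pred = sum(preds) / m
        var_pred = sum((p - mean_pred) ** 2 for p in preds) / m
        total += (mean_pred - y) ** 2 + var_pred  # bias^2 + model variance
    return total / len(ys)

def msep_direct(model, param_samples, xs, ys):
    """Same quantity computed directly: average squared error over all
    parameter vectors and all prediction points."""
    errs = [(model(x, th) - y) ** 2
            for th in param_samples for x, y in zip(xs, ys)]
    return sum(errs) / len(errs)
```

The two functions agree exactly (law of total variance); the decomposed form makes the bias and variance contributions visible, which is what the paper estimates separately from hindcasts and a simulation experiment.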
Rutten, M J M; Bovenhuis, H; van Arendonk, J A M
2010-10-01
Fourier transform infrared spectroscopy is a suitable method to determine bovine milk fat composition. However, the determination of fat composition by gas chromatography, required for calibration of the infrared prediction model, is expensive and labor intensive. It has recently been shown that the number of calibration samples is strongly related to the model's validation r(2) (i.e., accuracy of prediction). However, the effect of the number of calibration samples used, and therefore validation r(2), on the estimated genetic parameters of data predicted using the model needs to be established. To this end, 235 calibration data subsets of different sizes were sampled: n=100, n=250, n=500, and n=1,000 calibration samples. Subsequently, these data subsets were used to calibrate fat composition prediction models for 2 specific fatty acids: C16:0 and C18u (where u=unsaturated). Next, genetic parameters were estimated on predicted fat composition data for these fatty acids. Strong relationships between the number of calibration samples and validation r(2), as well as strong genetic correlations were found. However, the use of n=100 calibration samples resulted in a broad range of validation r(2) values and genetic correlations. Subsequent increases of the number of calibration samples resulted in narrowing patterns for validation r(2) as well as genetic correlations. The use of n=1,000 calibration samples resulted in estimated genetic correlations varying within a range of 0.10 around the average, which seems acceptable. Genetic analyses for the human health-related fatty acids C14:0, C16:0, and C18u, and the ratio of saturated fatty acids to unsaturated fatty acids showed that replacing observations on fat composition determined by gas chromatography by predictions based on infrared spectra reduced the potential genetic gain to 98, 86, 96, and 99% for the 4 fatty acid traits, respectively, in dairy breeding schemes where progeny testing is practiced. We conclude that
Research on Drag Torque Prediction Model for the Wet Clutches
Institute of Scientific and Technical Information of China (English)
无
2008-01-01
Considering the surface tension effect and centrifugal effect, a mathematical model based on the Reynolds equation for predicting the drag torque of disengaged wet clutches is presented. The model indicates that the equivalent radius is a function of clutch speed and flow rate. The drag torque achieves its peak at a critical speed; above this speed, drag torque drops due to the shrinking of the oil film. The model also shows how viscosity and flow rate affect drag torque. Experimental results indicate that the model is reasonable and performs well for predicting the drag torque peak.
Posterior Predictive Model Checking in Bayesian Networks
Crawford, Aaron
2014-01-01
This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex…
Directory of Open Access Journals (Sweden)
Francisco Marco-Rius
Fish growth is commonly used as a proxy for fitness but this is only valid if individual growth variation can be interpreted in relation to conspecifics' performance. Unfortunately, assessing individual variation in growth rates is problematic under natural conditions because subjects typically need to be marked, repeated measurements of body size are difficult to obtain in the field, and recaptures may be limited to a few time events which will generally vary among individuals. The analysis of consecutive growth rings (circuli) found on scales and other hard structures offers an alternative to mark and recapture for examining individual growth variation in fish and other aquatic vertebrates where growth rings can be visualized, but accounting for autocorrelations and seasonal growth stanzas has proved challenging. Here we show how mixed-effects modelling of scale growth increments (inter-circuli spacing) can be used to reconstruct the growth trajectories of sea trout (Salmo trutta) and correctly classify 89% of individuals into early or late seaward migrants (smolts). Early migrants grew faster than late migrants during their first year of life in freshwater in two natural populations, suggesting that migration into the sea was triggered by ontogenetic (intrinsic) drivers, rather than by competition with conspecifics. Our study highlights the profound effects that early growth can have on age at migration of a paradigmatic fish migrant and illustrates how the analysis of inter-circuli spacing can be used to reconstruct the detailed growth of individuals when these cannot be marked or are only caught once.
A Course in... Model Predictive Control.
Arkun, Yaman; And Others
1988-01-01
Describes a graduate engineering course which specializes in model predictive control. Lists course outline and scope. Discusses some specific topics and teaching methods. Suggests final projects for the students. (MVL)
Equivalency and unbiasedness of grey prediction models
Institute of Scientific and Technical Information of China (English)
Bo Zeng; Chuan Li; Guo Chen; Xianjun Long
2015-01-01
In order to deeply research the structure discrepancy and modeling mechanism among different grey prediction models, the equivalence and unbiasedness of grey prediction models are analyzed and verified. The results show that all the grey prediction models that are strictly derived from x^(0)(k) + a z^(1)(k) = b have the identical model structure and simulation precision. Moreover, the unbiased simulation for the homogeneous exponential sequence can be accomplished. However, the models derived from dx^(1)/dt + a x^(1) = b are only close to those derived from x^(0)(k) + a z^(1)(k) = b provided that |a| satisfies |a| < 0.1; neither could the unbiased simulation for the homogeneous exponential sequence be achieved. The above conclusions are proved and verified through some theorems and examples.
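A minimal sketch of the discrete grey form x^(0)(k) + a z^(1)(k) = b (Python; the input series and tolerances are hypothetical): fit a and b by least squares on the accumulated series, then simulate recursively. For a homogeneous exponential input the simulation reproduces the data, consistent with the unbiasedness result stated above.

```python
from itertools import accumulate

def grey_fit(x0):
    """Least-squares estimates of (a, b) in x0(k) + a*z1(k) = b, where x1
    is the 1-AGO (accumulated) series and z1 its adjacent mean."""
    x1 = list(accumulate(x0))
    z1 = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, len(x0))]
    y = x0[1:]
    m = len(y)
    # normal equations for design rows [-z, 1], solved with a 2x2 inverse
    szz = sum(z * z for z in z1)
    sz = sum(z1)
    szy = sum(z * v for z, v in zip(z1, y))
    sy = sum(y)
    det = szz * m - sz * sz
    a = (m * (-szy) + sz * sy) / det
    b = (sz * (-szy) + szz * sy) / det
    return a, b

def grey_simulate(x0, a, b):
    """Recursive simulation: x0(k) = (b - a*x1(k-1)) / (1 + a/2),
    which follows from the discrete equation with z1 as adjacent mean."""
    out = [x0[0]]
    x1h = x0[0]
    for _ in range(1, len(x0)):
        xk = (b - a * x1h) / (1 + a / 2)
        out.append(xk)
        x1h += xk
    return out
```

For x0(k) = c*q^k the fitted parameter satisfies a = 2(1-q)/(1+q), and the simulated series matches the input to rounding error; a model fitted from the whitening equation dx^(1)/dt + a x^(1) = b would only approximate it.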
Predictability of extreme values in geophysical models
Directory of Open Access Journals (Sweden)
A. E. Sterk
2012-09-01
Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical models. We study whether finite-time Lyapunov exponents are larger or smaller for initial conditions leading to extremes. General statements on whether extreme values are better or less predictable are not possible: the predictability of extreme values depends on the observable, the attractor of the system, and the prediction lead time.
Hybrid modeling and prediction of dynamical systems
Lloyd, Alun L.; Flores, Kevin B.
2017-01-01
Scientific analysis often relies on the ability to make accurate predictions of a system’s dynamics. Mechanistic models, parameterized by a number of unknown parameters, are often used for this purpose. Accurate estimation of the model state and parameters prior to prediction is necessary, but may be complicated by issues such as noisy data and uncertainty in parameters and initial conditions. At the other end of the spectrum exist nonparametric methods, which rely solely on data to build their predictions. While these nonparametric methods do not require a model of the system, their performance is strongly influenced by the amount and noisiness of the data. In this article, we consider a hybrid approach to modeling and prediction which merges recent advancements in nonparametric analysis with standard parametric methods. The general idea is to replace a subset of a mechanistic model’s equations with their corresponding nonparametric representations, resulting in a hybrid modeling and prediction scheme. Overall, we find that this hybrid approach allows for more robust parameter estimation and improved short-term prediction in situations where there is a large uncertainty in model parameters. We demonstrate these advantages in the classical Lorenz-63 chaotic system and in networks of Hindmarsh-Rose neurons before application to experimentally collected structured population data. PMID:28692642
Sparve, Erik; Quartino, Angelica L; Lüttgen, Maria; Tunblad, Karin; Gårdlund, Anna Teiling; Fälting, Johanna; Alexander, Robert; Kågström, Jens; Sjödin, Linnea; Bulgak, Alexander; Al-Saffar, Ahmad; Bridgland-Taylor, Matthew; Pollard, Chris; Swedberg, Michael D B; Vik, Torbjörn; Paulsson, Björn
2014-08-01
Corrected QT interval (QTc) prolongation in humans is usually predictable based on results from preclinical findings. This study confirms the signal from preclinical cardiac repolarization models (human ether-a-go-go-related gene, guinea pig monophasic action potential, and dog telemetry) on the clinical effects on the QTc interval. A thorough QT/QTc study is generally required for bioavailable pharmaceutical compounds to determine whether or not a drug shows a QTc effect above a threshold of regulatory interest. However, as demonstrated in this AZD3839 [(S)-1-(2-(difluoromethyl)pyridin-4-yl)-4-fluoro-1-(3-(pyrimidin-5-yl)phenyl)-1H-isoindol-3-amine hemifumarate] single-ascending-dose (SAD) study, high-resolution digital electrocardiogram data, in combination with adequate efficacy biomarker and pharmacokinetic data and nonlinear mixed effects modeling, can provide the basis to safely explore the margins to allow for robust modeling of clinical effect versus the electrophysiological risk marker. We also conclude that a carefully conducted SAD study may provide reliable data for effective early strategic decision making ahead of the thorough QT/QTc study.
Property predictions using microstructural modeling
Energy Technology Data Exchange (ETDEWEB)
Wang, K.G. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States)]. E-mail: wangk2@rpi.edu; Guo, Z. [Sente Software Ltd., Surrey Technology Centre, 40 Occam Road, Guildford GU2 7YG (United Kingdom); Sha, W. [Metals Research Group, School of Civil Engineering, Architecture and Planning, The Queen's University of Belfast, Belfast BT7 1NN (United Kingdom); Glicksman, M.E. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States); Rajan, K. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States)
2005-07-15
Precipitation hardening in an Fe-12Ni-6Mn maraging steel during overaging is quantified. First, applying our recent kinetic model of coarsening [Phys. Rev. E, 69 (2004) 061507], and incorporating the Ashby-Orowan relationship, we link quantifiable aspects of the microstructures of these steels to their mechanical properties, including especially the hardness. Specifically, hardness measurements allow calculation of the precipitate size as a function of time and temperature through the Ashby-Orowan relationship. Second, calculated precipitate sizes and thermodynamic data determined with Thermo-Calc® are used with our recent kinetic coarsening model to extract diffusion coefficients during overaging from hardness measurements. Finally, employing more accurate diffusion parameters, we determined the hardness of these alloys independently from theory, and found agreement with experimental hardness data. Diffusion coefficients determined during overaging of these steels are notably higher than those found during the aging - an observation suggesting that precipitate growth during aging and precipitate coarsening during overaging are not controlled by the same diffusion mechanism.
Two criteria for evaluating risk prediction models.
Pfeiffer, R M; Gail, M H
2011-09-01
We propose and study two criteria to assess the usefulness of models that predict risk of disease incidence for screening and prevention, or the usefulness of prognostic models for management following disease diagnosis. The first criterion, the proportion of cases followed, PCF(q), is the proportion of individuals who will develop disease who are included in the proportion q of individuals in the population at highest risk. The second criterion is the proportion needed to follow-up, PNF(p), namely the proportion of the general population at highest risk that one needs to follow in order that a proportion p of those destined to become cases will be followed. PCF(q) assesses the effectiveness of a program that follows 100q% of the population at highest risk. PNF(p) assesses the feasibility of covering 100p% of cases by indicating how much of the population at highest risk must be followed. We show the relationship of these two criteria to the Lorenz curve and its inverse, and present distribution theory for estimates of PCF and PNF. We develop new methods, based on influence functions, for inference for a single risk model, and also for comparing the PCFs and PNFs of two risk models, both of which were evaluated in the same validation data.
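The two criteria are simple to compute once the population is ranked by predicted risk. A minimal sketch (Python; the toy risk scores below are hypothetical, not from the paper):

```python
def pcf(risks, cases, q):
    """PCF(q): proportion of all cases captured when following the
    top-q fraction of the population ranked by predicted risk."""
    order = sorted(range(len(risks)), key=lambda i: -risks[i])
    n_follow = int(round(q * len(risks)))
    total_cases = sum(cases)
    return sum(cases[i] for i in order[:n_follow]) / total_cases

def pnf(risks, cases, p):
    """PNF(p): smallest fraction of the population (ranked by risk) one
    must follow so that a proportion p of the cases is captured."""
    order = sorted(range(len(risks)), key=lambda i: -risks[i])
    total_cases = sum(cases)
    captured = 0
    for k, i in enumerate(order, start=1):
        captured += cases[i]
        if captured >= p * total_cases:
            return k / len(risks)
    return 1.0
```

As the paper notes, these are the Lorenz curve of case capture versus population fraction and its inverse, evaluated at q and p respectively.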
DEFF Research Database (Denmark)
Olsen, Christina Kurre; Brennum, Lise Tøttrup; Kreilgaard, Mads
2008-01-01
In the rat, selective suppression of conditioned avoidance response has been widely reported as a test with high predictive validity for antipsychotic efficacy. Recent studies have shown that the relationship between dopamine D2 receptor occupancy and the suppression of conditioned avoidance resp...
Application of Nonlinear Predictive Control Based on RBF Network Predictive Model in MCFC Plant
Institute of Scientific and Technical Information of China (English)
CHEN Yue-hua; CAO Guang-yi; ZHU Xin-jian
2007-01-01
This paper describes a nonlinear model predictive controller for regulating a molten carbonate fuel cell (MCFC). A detailed mechanism model of the output voltage of an MCFC is presented first. However, this model is too complicated to be used in a control system. Consequently, an offline radial basis function (RBF) network was introduced to build a nonlinear predictive model, and the optimal control sequences were then obtained by applying the golden mean method. The models and controller were realized in the MATLAB environment. Simulation results indicate that the proposed algorithm exhibits a satisfying control effect even when the current densities vary largely.
Precision Plate Plan View Pattern Predictive Model
Institute of Scientific and Technical Information of China (English)
ZHAO Yang; YANG Quan; HE An-rui; WANG Xiao-chen; ZHANG Yun
2011-01-01
According to the rolling features of a plate mill, a 3D elastic-plastic FEM (finite element model) based on the full restart method of ANSYS/LS-DYNA was established to study the inhomogeneous plastic deformation of multipass plate rolling. By analyzing the simulation results, the difference between the head and tail end predictive models was found and modified. According to the numerical simulation results of 120 different kinds of conditions, a precision plate plan view pattern predictive model was established. Based on these models, the sizing MAS (Mizushima automatic plan view pattern control system) method was designed and used on a 2 800 mm plate mill. Comparing the rolled plates with and without the PVPP (plan view pattern predictive) model, the reduced width deviation indicates that the plate plan view pattern predictive model is precise.
MULTI MODEL DATA MINING APPROACH FOR HEART FAILURE PREDICTION
Directory of Open Access Journals (Sweden)
Priyanka H U
2016-09-01
Developing predictive modelling solutions for risk estimation is extremely challenging in health-care informatics. Risk estimation involves integration of heterogeneous clinical sources having different representations from different health-care providers, making the task increasingly complex. Such sources are typically voluminous, diverse, and significantly change over time. Therefore, distributed and parallel computing tools, collectively termed big data tools, are needed to synthesize the data and assist the physician in making the right clinical decisions. In this work we propose multi-model predictive architecture, a novel approach for combining the predictive ability of multiple models for better prediction accuracy. We demonstrate the effectiveness and efficiency of the proposed work on data from the Framingham Heart Study. Results show that the proposed multi-model predictive architecture is able to provide better accuracy than the best-model approach. By modelling the error of predictive models we are able to choose a subset of models which yields accurate results. More information was modelled into the system by multi-level mining, which has resulted in enhanced predictive accuracy.
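One simple way to combine the predictive ability of several models, in the spirit of the architecture described above, is to weight each model by its validation error. This is a hedged sketch (the inverse-error weighting and the toy models are illustrative assumptions, not the paper's exact method):

```python
def inverse_error_weights(models, xs, ys, eps=1e-12):
    """Weight each model by the inverse of its mean squared error on
    validation data; poorly performing models contribute little."""
    mses = [sum((f(x) - y) ** 2 for x, y in zip(xs, ys)) / len(ys)
            for f in models]
    raw = [1.0 / (m + eps) for m in mses]
    s = sum(raw)
    return [w / s for w in raw]

def ensemble_predict(models, weights, x):
    """Weighted average of the individual model predictions."""
    return sum(w * f(x) for w, f in zip(weights, models))
```

Dropping models whose weight falls below a threshold would correspond to the subset selection the abstract describes; the weighting above merely down-weights them smoothly.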
Model-based uncertainty in species range prediction
DEFF Research Database (Denmark)
Pearson, R. G.; Thuiller, Wilfried; Bastos Araujo, Miguel;
2006-01-01
Aim Many attempts to predict the potential range of species rely on environmental niche (or 'bioclimate envelope') modelling, yet the effects of using different niche-based methodologies require further investigation. Here we investigate the impact that the choice of model can have on predictions...... day (using the area under the receiver operating characteristic curve (AUC) and kappa statistics) and by assessing consistency in predictions of range size changes under future climate (using cluster analysis). Results Our analyses show significant differences between predictions from different models......, with predicted changes in range size by 2030 differing in both magnitude and direction (e.g. from 92% loss to 322% gain). We explain differences with reference to two characteristics of the modelling techniques: data input requirements (presence/absence vs. presence-only approaches) and assumptions made by each...
Flavor effects on leptogenesis predictions
Blanchet, S; Bari, Pasquale Di; Blanchet, Steve
2006-01-01
Flavor effects in leptogenesis reduce the region of the see-saw parameter space where the final predictions do not depend on the initial conditions, the strong wash-out regime. In this case we show that the lowest bounds holding on the lightest right-handed (RH) neutrino mass and on the reheating temperature for hierarchical heavy neutrinos, do not get relaxed compared to the usual ones in the one-flavor approximation, M_1 (T_reh) \gtrsim 3 (1.5) x 10^9 GeV. Flavor effects can however relax down to these minimal values the lower bounds holding for fixed large values of the decay parameter K_1. We discuss a relevant definite example showing that, when the known information on the neutrino mixing matrix is employed, the lower bounds for K_1 \gg 10, are relaxed by a factor 2-3 for light hierarchical neutrinos, without any dependence on \theta_13 and on possible phases. On the other hand, going beyond the limit of light hierarchical neutrinos and taking into account Majorana phases, the lower bounds can be relaxe...
NBC Hazard Prediction Model Capability Analysis
1999-09-01
Puff (SCIPUFF) Model Verification and Evaluation Study, Air Resources Laboratory, NOAA, May 1998. Based on the NOAA review, the VLSTRACK developers... TO SUBSTANTIAL DIFFERENCES IN PREDICTIONS. HPAC uses a transport and dispersion (T&D) model called SCIPUFF and an associated mean wind field model... SCIPUFF is a model for atmospheric dispersion that uses the Gaussian puff method - an arbitrary time-dependent concentration field is represented
Darwish, Mona; Bond, Mary; Ezzet, Farkad
2012-09-01
Armodafinil, the longer lasting R-isomer of racemic modafinil, improves wakefulness in patients with excessive sleepiness associated with shift work disorder (SWD). Pharmacokinetic studies suggest that armodafinil achieves higher plasma concentrations than modafinil late in a dose interval following equal oral doses. Pooled Multiple Sleep Latency Test (MSLT) data from 2 randomized, double-blind, placebo-controlled trials in 463 patients with SWD, 1 with armodafinil 150 mg/d and 1 with modafinil 200 mg/d (both administered around 2200 h before night shifts), were used to build a pharmacokinetic/pharmacodynamic model. Predicted plasma drug concentrations were obtained by developing and applying a population pharmacokinetic model using nonlinear mixed-effects modeling. Armodafinil 200 mg produced a plasma concentration above the EC(50) (4.6 µg/mL) for 9 hours, whereas modafinil 200 mg did not exceed the EC(50). Consequently, armodafinil produced greater increases in predicted placebo-subtracted MSLT times of 0.5-1 minute (up to 10 hours after dosing) compared with modafinil. On a milligram-to-milligram basis, armodafinil 200 mg consistently increased wakefulness more than modafinil 200 mg, including times late in the 8-hour shift.
Adding propensity scores to pure prediction models fails to improve predictive performance
Directory of Open Access Journals (Sweden)
Amy S. Nowacki
2013-08-01
Background. Propensity score usage seems to be growing in popularity, leading researchers to question the possible role of propensity scores in prediction modeling, despite the lack of a theoretical rationale. It is suspected that such requests are due to the lack of differentiation regarding the goals of predictive modeling versus causal inference modeling. Therefore, the purpose of this study is to formally examine the effect of propensity scores on predictive performance. Our hypothesis is that a multivariable regression model that adjusts for all covariates will perform as well as or better than those models utilizing propensity scores with respect to model discrimination and calibration. Methods. The most commonly encountered statistical scenarios for medical prediction (logistic and proportional hazards regression) were used to investigate this research question. Random cross-validation was performed 500 times to correct for optimism. The multivariable regression models adjusting for all covariates were compared with models that included adjustment for or weighting with the propensity scores. The methods were compared based on three predictive performance measures: (1) concordance indices; (2) Brier scores; and (3) calibration curves. Results. Multivariable models adjusting for all covariates had the highest average concordance index, the lowest average Brier score, and the best calibration. Propensity score adjustment and inverse probability weighting models without adjustment for all covariates performed worse than full models and failed to improve predictive performance with full covariate adjustment. Conclusion. Propensity score techniques did not improve prediction performance measures beyond multivariable adjustment. Propensity scores are not recommended if the analytical goal is pure prediction modeling.
A Predictive Model of Geosynchronous Magnetopause Crossings
Dmitriev, A; Chao, J -K
2013-01-01
We have developed a model predicting whether or not the magnetopause crosses geosynchronous orbit at a given location for given solar wind pressure Psw, Bz component of the interplanetary magnetic field (IMF), and geomagnetic conditions characterized by the 1-min SYM-H index. The model is based on more than 300 geosynchronous magnetopause crossings (GMCs) and about 6000 minutes when geosynchronous satellites of the GOES and LANL series were located in the magnetosheath (so-called MSh intervals) in 1994 to 2001. Minimizing the Psw required for GMCs and MSh intervals at various locations, Bz and SYM-H allows describing both an effect of magnetopause dawn-dusk asymmetry and saturation of the Bz influence for very large southward IMF. The asymmetry is strong for large negative Bz and almost disappears when Bz is positive. We found that the larger the amplitude of negative SYM-H, the lower the solar wind pressure required for GMCs. We attribute this effect to a depletion of the dayside magnetic field by a storm-time intensification of t...
Predictive modeling for EBPC in EBDW
Zimmermann, Rainer; Schulz, Martin; Hoppe, Wolfgang; Stock, Hans-Jürgen; Demmerle, Wolfgang; Zepka, Alex; Isoyan, Artak; Bomholt, Lars; Manakli, Serdar; Pain, Laurent
2009-10-01
We demonstrate a flow for e-beam proximity correction (EBPC) to e-beam direct write (EBDW) wafer manufacturing processes, demonstrating a solution that covers all steps from the generation of a test pattern for (experimental or virtual) measurement data creation, over e-beam model fitting, proximity effect correction (PEC), and verification of the results. We base our approach on a predictive, physical e-beam simulation tool, with the possibility to complement this with experimental data, and the goal of preparing the EBPC methods for the advent of high-volume EBDW tools. As an example, we apply and compare dose correction and geometric correction for low and high electron energies on 1D and 2D test patterns. In particular, we show some results of model-based geometric correction as it is typical for the optical case, but enhanced for the particularities of e-beam technology. The results are used to discuss PEC strategies, with respect to short and long range effects.
Prediction of Catastrophes: an experimental model
Peters, Randall D; Pomeau, Yves
2012-01-01
Catastrophes of all kinds can be roughly defined as short duration-large amplitude events following and followed by long periods of "ripening". Major earthquakes surely belong to the class of 'catastrophic' events. Because of the space-time scales involved, an experimental approach is often difficult, not to say impossible, however desirable it could be. Described in this article is a "laboratory" setup that yields data of a type that is amenable to theoretical methods of prediction. Observations are made of a critical slowing down in the noisy signal of a solder wire creeping under constant stress. This effect is shown to be a fair signal of the forthcoming catastrophe in both of two dynamical models. The first is an "abstract" model in which a time dependent quantity drifts slowly but makes quick jumps from time to time. The second is a realistic physical model for the collective motion of dislocations (the Ananthakrishna set of equations for creep). Hope thus exists that similar changes in the response to ...
Corporate prediction models, ratios or regression analysis?
Bijnen, E.J.; Wijn, M.F.C.M.
1994-01-01
The models developed in the literature with respect to the prediction of a company's failure are based on ratios. It has been shown before that these models should be rejected on theoretical grounds. Our study of industrial companies in the Netherlands shows that the ratios which are used in
Boonpawa, Rungnapa; Spenkelink, Bert; Punt, Ans; Rietjens, Ivonne
2017-01-01
Scope: To develop a physiologically based kinetic (PBK) model that describes the absorption, distribution, metabolism, and excretion of hesperidin in humans, enabling the translation of in vitro concentration-response curves to in vivo dose-response curves. Methods and results: The PBK model for
Evaluation of CASP8 model quality predictions
Cozzetto, Domenico
2009-01-01
The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.
Noncausal spatial prediction filtering based on an ARMA model
Institute of Scientific and Technical Information of China (English)
Liu Zhipeng; Chen Xiaohong; Li Jingye
2009-01-01
Conventional f-x prediction filtering methods are based on an autoregressive (AR) model. The error section is first computed as source noise but is removed as additive noise to obtain the signal, which results in an assumption inconsistency before and after filtering. In this paper, an autoregressive moving-average (ARMA) model is employed to avoid this model inconsistency. Based on the ARMA model, a noncausal prediction filter is computed and a self-deconvolved projection filter is used to estimate the additive noise in order to suppress random noise. The 1-D ARMA model is also extended to the 2-D spatial domain, which forms the basis for noncausal spatial prediction filtering for random noise attenuation on 3-D seismic data. Synthetic and field data processing indicates that this method suppresses random noise more effectively while preserving the signal, and performs much better than conventional prediction filtering methods.
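The prediction-filtering idea above can be illustrated with a minimal sketch: fit AR coefficients by least squares so that each sample is predicted from its predecessors, and treat the prediction residual as the noise estimate. This is an assumption-laden toy in NumPy (the function name `ar_prediction_filter` and all parameters are hypothetical), not the ARMA/projection-filter scheme of the paper.

```python
import numpy as np

def ar_prediction_filter(x, order=4):
    """Least-squares AR prediction filter: predict each sample from the
    preceding `order` samples; the residual serves as a noise estimate."""
    n = len(x)
    # Design matrix of lagged samples: column k holds x[t-k-1] for t >= order
    A = np.column_stack([x[order - k - 1:n - k - 1] for k in range(order)])
    b = x[order:]
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    pred = A @ coeffs
    signal = np.concatenate([x[:order], pred])  # keep warm-up samples as-is
    return signal, x - signal

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
noisy = np.sin(2 * np.pi * 25 * t) + 0.3 * rng.standard_normal(t.size)
filtered, residual = ar_prediction_filter(noisy, order=8)
print("residual std:", float(np.std(residual)))
```

In real f-x filtering the same fit is done per temporal frequency across spatial traces; the 1-D version above only shows the algebra of the prediction step.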
Kassemi, M.; Thompson, D.; Goodenow, D.; Gokoglu, S.; Myers, J.
2016-01-01
Renal stone disease is not only a concern on Earth but can conceivably pose a serious risk to astronauts' health and safety in space. In this work, two different deterministic models based on a Population Balance Equation (PBE) analysis of renal stone formation are developed to assess the risks of critical renal stone incidence for astronauts during space travel. In the first model, the nephron is treated as a continuous mixed-suspension, mixed-product-removal crystallizer, and the PBE for the nucleating, growing and agglomerating renal calculi is coupled to speciation calculations performed by JESS. Predictions of stone size distributions in the kidney using this model indicate that the astronaut in microgravity is at noticeably greater but still subcritical risk, and recommend administration of citrate and augmented hydration as effective means of minimizing and containing this risk. In the second model, the PBE analysis is coupled to a Computational Fluid Dynamics (CFD) model for the flow of urine and the transport of calcium and oxalate in the nephron to predict the impact of gravity on the stone size distributions. Results presented for realistic 3D tubule and collecting duct geometries clearly indicate that agglomeration is the primary mode of size enhancement in both 1 g and microgravity. 3D numerical simulations further indicate that there will be an increased number of smaller stones developed in microgravity that will likely pass through the nephron in the absence of wall adhesion. However, upon re-entry into a 1 g (Earth) or 3/8 g (Mars) partial gravitational field, the renal calculi can lag behind the urinary flow in tubules that are adversely oriented with respect to the gravitational field and grow and agglomerate to large sizes that are sedimented near the wall, with increased propensity for wall adhesion, plaque formation, and risk to the astronauts.
Genetic models of homosexuality: generating testable predictions
Gavrilets, Sergey; Rice, William R.
2006-01-01
Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality inclu...
Wind farm production prediction - The Zephyr model
Energy Technology Data Exchange (ETDEWEB)
Landberg, L. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark); Giebel, G. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark); Madsen, H. [IMM (DTU), Kgs. Lyngby (Denmark); Nielsen, T.S. [IMM (DTU), Kgs. Lyngby (Denmark); Joergensen, J.U. [Danish Meteorologisk Inst., Copenhagen (Denmark); Lauersen, L. [Danish Meteorologisk Inst., Copenhagen (Denmark); Toefting, J. [Elsam, Fredericia (DK); Christensen, H.S. [Eltra, Fredericia (Denmark); Bjerge, C. [SEAS, Haslev (Denmark)
2002-06-01
This report describes a project - funded by the Danish Ministry of Energy and the Environment - which developed a next-generation prediction system called Zephyr. The Zephyr system is a merger of two state-of-the-art prediction systems: Prediktor of Risoe National Laboratory and WPPT of IMM at the Danish Technical University. The numerical weather predictions were generated by DMI's HIRLAM model. Due to technical difficulties in programming the system, only the computational core and a very simple version of the originally very complex system were developed. The project partners were: Risoe, DMU, DMI, Elsam, Eltra, Elkraft System, SEAS and E2. (au)
Digital Repository Service at National Institute of Oceanography (India)
Patil, S.G.; Mandal, S.; Hegde, A.V.
Understanding the physics of complex system plays an important role in selection of data for training intelligent computing models. Based on the physics of the wave transmission of Horizontally Interlaced Multilayer Moored Floating Pipe Breakwater...
2015-07-01
and is shown in Fig. 2. Only one-half of the domain was modeled, taking advantage of the symmetry of the geometry. The computational domains were... Harten-Lax-van Leer-Contact Riemann solver and a multidimensional Total-Variation-Diminishing continuous flux limiter. The compressible, perfect... and radial boundaries were modeled using a characteristics-based inflow/outflow boundary condition, which is based on solving a Riemann problem at the
Göttlich, Claudia; Müller, Lena C; Kunz, Meik; Schmitt, Franziska; Walles, Heike; Walles, Thorsten; Dandekar, Thomas; Dandekar, Gudrun; Nietzer, Sarah L
2016-01-01
In the present study, we combined an in vitro 3D lung tumor model with an in silico model to optimize predictions of drug response based on a specific mutational background. The model is generated on a decellularized porcine scaffold that reproduces tissue-specific characteristics regarding extracellular matrix composition and architecture, including the basement membrane. We standardized a protocol that allows artificial tumor tissue generation within 14 days, including three days of drug treatment. Our article provides detailed descriptions of several 3D read-out screening techniques, such as determination of the proliferation index by Ki67 staining, measurement of apoptosis from supernatants by M30 ELISA, and assessment of epithelial-to-mesenchymal transition (EMT), which are helpful tools for evaluating the effectiveness of therapeutic compounds. Compared to 2D culture, we could show a reduction of proliferation in our 3D tumor model that reflects the clinical situation. Despite this lower proliferation, the model predicted EGFR-targeted drug responses correctly according to biomarker status, as shown by comparison of the lung carcinoma cell lines HCC827 (EGFR-mutated, KRAS wild-type) and A549 (EGFR wild-type, KRAS-mutated) treated with the tyrosine kinase inhibitor (TKI) gefitinib. To investigate drug responses of more advanced tumor cells, we induced EMT by long-term treatment with TGF-beta-1, as assessed by vimentin/pan-cytokeratin immunofluorescence staining. A flow bioreactor was employed to adjust the culture to physiological conditions, which improved tissue generation. Furthermore, we show the integration of drug responses upon gefitinib treatment or TGF-beta-1 stimulation - apoptosis, proliferation index and EMT - into a Boolean in silico model. Additionally, we explain how drug responses of tumor cells with a specific mutational background and counterstrategies against resistance can be predicted. We are confident that our 3D in vitro approach especially with its
Energy Technology Data Exchange (ETDEWEB)
Karuthapandi, Sripriyan; Thyla, P. R. [PSG College of Technology, Coimbatore (India); Ramu, Murugan [Amrita University, Ettimadai (India)
2017-05-15
This paper describes the relationships between the macrostructural characteristics of weld beads and the welding parameters in Gas metal arc welding (GMAW) using a flat wire electrode. Bead-on-plate welds were produced with a flat wire electrode and different combinations of input parameters (i.e., welding current, welding speed, and flat wire electrode orientation). The macrostructural characteristics of the weld beads, namely, deposition, bead width, total bead width, reinforcement height, penetration depth, and depth of HAZ were investigated. A mapping technique was employed to measure these characteristics in various segments of the weldment zones. Results show that the use of a flat wire electrode improves the depth-to-width (D/W) ratio by 16.5 % on average compared with the D/W ratio when a regular electrode is used in GMAW. Furthermore, a fuzzy logic model was established to predict the effects of the use of a flat electrode on the weldment shape profile with varying input parameters. The predictions of the model were compared with the experimental results.
Model for predicting mountain wave field uncertainties
Damiens, Florentin; Lott, François; Millet, Christophe; Plougonven, Riwal
2017-04-01
Studying the propagation of acoustic waves through the troposphere requires knowledge of wind speed and temperature gradients from the ground up to about 10-20 km. Typical planetary boundary layer flows are known to present vertical low-level shears that can interact with mountain waves, thereby triggering small-scale disturbances. Resolving these fluctuations for long-range propagation problems is, however, not feasible because of computer memory/time restrictions, and thus they need to be parameterized. When the disturbances are small enough, these fluctuations can be described by linear equations. Previous work by the co-authors has shown that the critical-layer dynamics that occurs near the ground produces large horizontal flow and buoyancy disturbances that result in intense downslope winds and gravity wave breaking. While these phenomena manifest almost systematically for high Richardson numbers and when the boundary layer depth is relatively small compared to the mountain height, the process by which static stability affects downslope winds remains unclear. In the present work, new linear mountain gravity wave solutions are tested against numerical predictions obtained with the Weather Research and Forecasting (WRF) model. For Richardson numbers typically larger than unity, the mesoscale model is used to quantify the effect of neglected nonlinear terms on downslope winds and mountain wave patterns. In these regimes, the large downslope winds transport warm air, a so-called "Foehn" effect that can impact sound propagation properties. The sensitivity of small-scale disturbances to the Richardson number is quantified using two-dimensional spectral analysis. It is shown through a pilot study of subgrid-scale fluctuations of boundary layer flows over realistic mountains that the cross-spectrum of the mountain wave field is made up of the same components found in WRF simulations. The impact of each individual component on acoustic wave propagation is discussed in terms of
Predictive model for segmented poly(urea)
Directory of Open Access Journals (Sweden)
Frankl P.
2012-08-01
Full Text Available Segmented poly(urea) has been shown to be of significant benefit in protecting vehicles from blast and impact, and there have been several experimental studies to determine the mechanisms by which this protective function might occur. One suggested route is mechanical activation of the glass transition. In order to enable the design of protective structures using this material, a constitutive model and equation of state are needed for numerical simulation hydrocodes. Determination of such a predictive model may also help elucidate the beneficial mechanisms that occur in polyurea during high-rate loading. The tool deployed to do this has been Group Interaction Modelling (GIM) – a mean-field technique that has been shown to predict the mechanical and physical properties of polymers from their structure alone. The structure of polyurea has been used to characterise the parameters in the GIM scheme without recourse to experimental data, and the resulting equation of state and constitutive model predict the response over a wide range of temperatures and strain rates. The shock Hugoniot has been predicted and validated against existing data. The mechanical response in tensile tests has also been predicted and validated.
Skaal, Linda; Pengpid, Supa
2012-12-01
There have been studies conducted on the effectiveness of the transtheoretical model (TTM) in improving the level of physical activity at worksites worldwide, but no such studies have been conducted in South Africa. The aim of this study was to determine the predictive validity and effects of using the Transtheoretical Model to increase the physical activity of healthcare workers in a public hospital in South Africa. A quasi-experimental design in the form of a single-group, pretest-posttest model was used to examine the possible relationship between an exposure to interventions, attitude, knowledge, and an increased level of physical activity. Two hundred hospital staff members (medical and nonmedical staff) were randomly selected for participation in the study. The following variables were measured: TTM stages of physical activity, knowledge and attitudes, fitness level, body mass index, and level of exposure to the intervention. The interventions designed were based on the concept of progressing stages of physical activity in TTM stage sequences: (1) pamphlets about physical activity and health, (2) posters, fun runs, and sports day, and (3) a second set of posters, a daily radio program, and aerobic classes. Post-intervention, participants had significantly increased their stages of physical activity, attitudes, and knowledge compared with their pre-tests. Mean scores of TTM (3.70) and knowledge (3.65) were significantly (p < 0.05) greater at post-test. Overall accuracies of TTM at pre-test correctly predicted TTM at post-test by an average of 66.9%. The use of TTM to identify the stage of physical activity of healthcare workers has enabled the researcher to design intervention programs specific to the stage of exercise behavior of hospital staff. The predictors (TTM1), exposure levels, knowledge, attitudes, and processes of change have significant contributions to the outcome (TTM2).
PREDICTIVE CAPACITY OF ARCH FAMILY MODELS
Directory of Open Access Journals (Sweden)
Raphael Silveira Amaro
2016-03-01
Full Text Available In recent decades, a remarkable number of models, variants of the Autoregressive Conditional Heteroscedastic (ARCH) family, have been developed and empirically tested, making the process of choosing a particular model extremely complex. This research aims to compare the predictive capacity, using the Model Confidence Set procedure, of five conditional heteroskedasticity models, considering eight different statistical probability distributions. The financial series used are the log-return series of the Bovespa index and the Dow Jones Industrial Index in the period between 27 October 2008 and 30 December 2014. The empirical evidence showed that, in general, the competing models have great homogeneity in making predictions, whether for the stock market of a developed country or for that of a developing country. An equivalent result can be inferred for the statistical probability distributions used.
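The defining feature of the ARCH family compared here is that today's variance depends on yesterday's squared return. A minimal sketch of an ARCH(1) simulation in NumPy (the function name `simulate_arch1` and the parameter values are illustrative assumptions, not taken from the paper) shows the volatility clustering these models are built to capture:

```python
import numpy as np

def simulate_arch1(n, omega=0.2, alpha=0.5, seed=1):
    """Simulate an ARCH(1) process: sigma_t^2 = omega + alpha * r_{t-1}^2,
    with r_t = sigma_t * z_t and z_t standard normal."""
    rng = np.random.default_rng(seed)
    r = np.zeros(n)
    sigma2 = np.full(n, omega / (1.0 - alpha))  # start at unconditional variance
    for t in range(1, n):
        sigma2[t] = omega + alpha * r[t - 1] ** 2
        r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
    return r, sigma2

r, sigma2 = simulate_arch1(5000)
# Volatility clustering: squared returns are autocorrelated even though
# the returns themselves are serially uncorrelated.
ac = lambda x: np.corrcoef(x[:-1], x[1:])[0, 1]
print("lag-1 autocorr of r^2:", float(ac(r ** 2)))
```

GARCH, EGARCH and the other variants compared in such studies add lagged variances or asymmetry terms to this same recursion.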
Predictive QSAR modeling of phosphodiesterase 4 inhibitors.
Kovalishyn, Vasyl; Tanchuk, Vsevolod; Charochkina, Larisa; Semenuta, Ivan; Prokopenko, Volodymyr
2012-02-01
A series of diverse organic compounds, phosphodiesterase type 4 (PDE-4) inhibitors, have been modeled using a QSAR-based approach. 48 QSAR models were compared by following the same procedure with different combinations of descriptors and machine learning methods. QSAR methodologies used random forests and associative neural networks. The predictive ability of the models was tested through leave-one-out cross-validation, giving a Q² = 0.66-0.78 for regression models and total accuracies Ac=0.85-0.91 for classification models. Predictions for the external evaluation sets obtained accuracies in the range of 0.82-0.88 (for active/inactive classifications) and Q² = 0.62-0.76 for regressions. The method showed itself to be a potential tool for estimation of IC₅₀ of new drug-like candidates at early stages of drug development. Copyright © 2011 Elsevier Inc. All rights reserved.
Saxena, P.R; Heiligers, J.P C; Maassen Vandenbrink, A; Bax, W.A; Barf, T.A; Wikström, H.V
1996-01-01
Several acutely acting antimigraine drugs, including sumatriptan and other second generation 5-HT1D receptor agonists, have the ability to constrict porcine carotid arteriovenous anastomoses as well as the human isolated coronary artery. These two experimental models seem to serve as indicators, res
DEFF Research Database (Denmark)
May, Margaret; Sterne, Jonathan A C; Shipley, Martin;
2007-01-01
Many HIV-infected patients on highly active antiretroviral therapy (HAART) experience metabolic complications including dyslipidaemia and insulin resistance, which may increase their coronary heart disease (CHD) risk. We developed a prognostic model for CHD tailored to the changes in risk factors...
Models for short term malaria prediction in Sri Lanka
Directory of Open Access Journals (Sweden)
Galappaththy Gawrie NL
2008-05-01
Full Text Available Abstract Background Malaria in Sri Lanka is unstable and fluctuates in intensity both spatially and temporally. Although the case counts are dwindling at present, given the past history of resurgence of outbreaks despite effective control measures, the control programmes have to stay prepared. The availability of long time series of monitored/diagnosed malaria cases allows for the study of forecasting models, with an aim to developing a forecasting system which could assist in the efficient allocation of resources for malaria control. Methods Exponentially weighted moving average models, autoregressive integrated moving average (ARIMA) models with seasonal components, and seasonal multiplicative autoregressive integrated moving average (SARIMA) models were compared on monthly time series of district malaria cases for their ability to predict the number of malaria cases one to four months ahead. The addition of covariates such as the number of malaria cases in neighbouring districts or rainfall was assessed for its ability to improve prediction of selected seasonal ARIMA models. Results The best model for forecasting and the forecasting error varied strongly among the districts. The addition of rainfall as a covariate improved prediction of selected seasonal ARIMA models modestly in some districts but worsened prediction in other districts. Improvement by adding rainfall was more frequent at larger forecasting horizons. Conclusion Heterogeneity of patterns of malaria in Sri Lanka requires regionally specific prediction models. Prediction error was large, at a minimum of 22% (for one of the districts) for one-month-ahead predictions. The modest improvement made in short-term prediction by adding rainfall as a covariate to these prediction models may not be sufficient to merit investing in a forecasting system for which rainfall data are routinely processed.
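The simplest of the compared forecasters, the exponentially weighted moving average, can be sketched in a few lines of plain Python. This is an illustrative toy with made-up case counts (the name `ewma_forecast` and the smoothing value are assumptions), not the district models fitted in the study:

```python
def ewma_forecast(series, alpha=0.4):
    """One-step-ahead EWMA forecast: level_t = alpha*y_t + (1-alpha)*level_{t-1}.
    forecasts[t] is the prediction for series[t+1]."""
    level = series[0]
    forecasts = [level]
    for y in series[1:]:
        level = alpha * y + (1.0 - alpha) * level
        forecasts.append(level)
    return forecasts

cases = [120, 135, 160, 150, 170, 210, 190]  # hypothetical monthly case counts
f = ewma_forecast(cases)
print("next-month forecast:", f[-1])
```

Seasonal ARIMA models generalize this by adding autoregressive, differencing and seasonal terms, which is why their relative performance varies by district.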
The application of modeling and prediction with MRA wavelet network
Institute of Scientific and Technical Information of China (English)
LU Shu-ping; YANG Xue-jing; ZHAO Xi-ren
2004-01-01
As there are many non-linear systems in real engineering, further research on the modeling and prediction of non-linear systems is very important. Based on the multi-resolution analysis (MRA) of wavelet theory, this paper combines wavelet theory with neural networks and establishes an MRA wavelet network with the scaling function and wavelet function as its neurons. Analysis in the frequency domain indicated that the MRA wavelet network was better than other wavelet networks in its ability to approximate signals. An essential study was carried out on modeling and prediction with the MRA wavelet network for non-linear systems. Using lengthwise sway data obtained from a ship-model experiment, an offline prediction model was established and applied to the short-time prediction of ship motion. The simulation results indicated that the forecasting model effectively improved the prediction precision, lengthened the forecasting time, and gave better prediction results than the AR linear model. The research indicates that it is feasible to use the MRA wavelet network in the short-time prediction of ship motion.
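The multi-resolution analysis underlying such networks splits a signal into coarse (scaling-function) and detail (wavelet) coefficients at each level. A minimal sketch of one Haar MRA step in NumPy (the function names are illustrative; the paper's network uses these functions as neurons rather than as a fixed transform):

```python
import numpy as np

def haar_step(x):
    """One level of the Haar MRA: split a signal of even length into
    approximation (scaling) and detail (wavelet) coefficients."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_inverse(approx, detail):
    """Invert haar_step exactly (perfect reconstruction)."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_step(x)
print(np.allclose(haar_inverse(a, d), x))  # → True
```

Applying `haar_step` recursively to the approximation coefficients yields the full multi-resolution pyramid that the wavelet network's neurons mirror.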
Some Remarks on CFD Drag Prediction of an Aircraft Model
Peng, S. H.; Eliasson, P.
Some issues observed in CFD drag predictions for the DLR-F6 aircraft model with various configurations are addressed, with emphasis on the effect of turbulence modeling and grid resolution. With several different turbulence models, the predicted flow features around the aircraft are highlighted. It is shown that the prediction of the separation bubble in the wing-body junction is closely related to the inherent modeling mechanism of turbulence production. For the configuration with an additional fairing, which effectively removes the separation bubble, it is illustrated that the drag prediction may be altered even for an attached turbulent boundary layer when different turbulence models are used. Grid sensitivity studies are performed with two groups of successively refined grids. It is observed that, in contrast to the lift, the drag prediction is rather sensitive to grid refinement, as well as to the artificial diffusion added for solving the turbulence transport equation. It is demonstrated that an effective grid refinement should drive the predicted drag components monotonically and linearly toward a converged finite value.
On the Predictiveness of Single-Field Inflationary Models
Burgess, C.P.; Trott, Michael
2014-01-01
We re-examine the predictiveness of single-field inflationary models and discuss how an unknown UV completion can complicate determining inflationary model parameters from observations, even from precision measurements. Besides the usual naturalness issues associated with having a shallow inflationary potential, we describe another issue for inflation, namely, unknown UV physics modifies the running of Standard Model (SM) parameters and thereby introduces uncertainty into the potential inflationary predictions. We illustrate this point using the minimal Higgs Inflationary scenario, which is arguably the most predictive single-field model on the market, because its predictions for $A_s$, $r$ and $n_s$ are made using only one new free parameter beyond those measured in particle physics experiments, and run up to the inflationary regime. We find that this issue can already have observable effects. At the same time, this UV-parameter dependence in the Renormalization Group allows Higgs Inflation to occur (in prin...
A Composite Model Predictive Control Strategy for Furnaces
Institute of Scientific and Technical Information of China (English)
Hao Zang; Hongguang Li; Jingwen Huang; Jia Wang
2014-01-01
Tube furnaces are essential and primary energy-intensive facilities in petrochemical plants. Operational optimization of furnaces could not only help improve product quality but also reduce energy consumption and exhaust emissions. Inspired by this idea, this paper presents a composite model predictive control (CMPC) strategy which, taking advantage of distributed model predictive control architectures, combines tracking nonlinear model predictive control and economic nonlinear model predictive control metrics to keep the process running smoothly and to optimize operating conditions. The controllers, connected by two kinds of communication networks, are easy to organize and maintain, and robust to process disturbances. A fast solution algorithm combining interior-point solvers and Newton's method is employed in the CMPC realization, with reasonable CPU computing time suitable for online applications. Simulation of an industrial case demonstrates that the proposed approach can ensure stable operation of furnaces, improve heat efficiency, and reduce emissions effectively.
Pujol, Laure; Kan-King-Yu, Denis; Le Marc, Yvan; Johnston, Moira D; Rama-Heuzard, Florence; Guillou, Sandrine; McClure, Peter; Membré, Jeanne-Marie
2012-02-01
Preservative factors act as hurdles against microorganisms by inhibiting their growth; these are essential control measures for particular food-borne pathogens. Different combinations of hurdles can be quantified and compared to each other in terms of their inhibitory effect ("iso-hurdle"). We present here a methodology for establishing microbial iso-hurdle rules in three steps: (i) developing a predictive model based on existing but disparate data sets, (ii) building an experimental design focused on the iso-hurdles using the model output, and (iii) validating the model and the iso-hurdle rules with new data. The methodology is illustrated with Listeria monocytogenes. Existing data from industry, a public database, and the literature were collected and analyzed, after which a total of 650 growth rates were retained. A gamma-type model was developed for the factors temperature, pH, aw, and acetic, lactic, and sorbic acids. Three iso-hurdle rules were assessed (40 log-count curves generated): salt replacement by addition of organic acids, sorbic acid replacement by addition of acetic and lactic acid, and sorbic acid replacement by addition of lactic/acetic acid and salt. For the three rules, the growth rates were equivalent over the whole experimental domain (γ from 0.1 to 0.5). The lag times were also equivalent in the case of mild inhibitory conditions (γ ≥ 0.2), while they were longer in the presence of salt than acids under stress conditions (γ < 0.2), which is relevant to microbial safety and stability.
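The gamma-type model referred to above multiplies an optimal growth rate by dimensionless factors, one per hurdle, each between 0 and 1. A minimal sketch under stated assumptions (linear gamma terms and illustrative, not fitted, cardinal values for a Listeria-like organism; real gamma models often use squared cardinal-parameter forms):

```python
def gamma_term(value, vmin, vopt):
    """Dimensionless gamma factor in [0, 1]: 0 at the growth limit vmin,
    1 at the optimum vopt (simple linear form for illustration)."""
    if value <= vmin:
        return 0.0
    return min(1.0, (value - vmin) / (vopt - vmin))

def growth_rate(mu_opt, T, pH, aw, p):
    """Gamma-concept secondary model: mu = mu_opt * g(T) * g(pH) * g(aw)."""
    return (mu_opt
            * gamma_term(T, p["T_min"], p["T_opt"])
            * gamma_term(pH, p["pH_min"], p["pH_opt"])
            * gamma_term(aw, p["aw_min"], 1.0))

# Illustrative cardinal values (assumptions, not the paper's fitted parameters)
p = {"T_min": -1.7, "T_opt": 37.0, "pH_min": 4.4, "pH_opt": 7.0, "aw_min": 0.92}
mu = growth_rate(mu_opt=1.2, T=10.0, pH=6.0, aw=0.98, p=p)
print("predicted growth rate:", mu)
```

Two hurdle combinations are "iso-hurdle" in this framework when their gamma products, and hence their predicted growth rates, coincide.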
DEFF Research Database (Denmark)
Pietroni, Carlotta; Andersen, Jeppe D.; Johansen, Peter
2014-01-01
In two recent studies of Spanish individuals [1,2], gender was suggested as a factor that contributes to human eye colour variation. However, gender did not improve the predictive accuracy on blue, intermediate and brown eye colours when gender was included in the IrisPlex model [3]. In this stud...... and their corresponding predictive values using the IrisPlex prediction model [4]. The results suggested that maximum three (rs12913832, rs1800407, rs16891982) of the six IrisPlex SNPs are useful in practical forensic genetic casework....
Modelling the predictive performance of credit scoring
Directory of Open Access Journals (Sweden)
Shi-Wei Shen
2013-02-01
Full Text Available Orientation: The article discussed the importance of rigour in credit risk assessment. Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A goodness-of-fit test demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI) as well as micro- and macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality of the study is that three models were developed to predict corporate firms’ defaults based on different microeconomic and macroeconomic factors such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit measures and receiver operating characteristics in examining the robustness of the predictive power of these factors.
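The logistic-regression scoring approach described above can be sketched with plain gradient descent on synthetic data. Everything here is a hypothetical toy in NumPy (the feature construction, names, and learning-rate/epoch choices are assumptions), meant only to show how default probabilities come out of a fitted linear score:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Gradient-descent logistic regression; y is 1 for default, 0 otherwise."""
    Xb = np.column_stack([np.ones(len(X)), X])  # add intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))       # predicted default probability
        w -= lr * Xb.T @ (p - y) / len(y)       # average log-loss gradient
    return w

def predict_proba(X, w):
    Xb = np.column_stack([np.ones(len(X)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

# Toy data: two features standing in for a micro and a macro indicator;
# larger values make default more likely.
rng = np.random.default_rng(42)
X = rng.standard_normal((400, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(400) > 0).astype(float)
w = fit_logistic(X, y)
acc = float(np.mean((predict_proba(X, w) > 0.5) == y))
print("in-sample accuracy:", acc)
```

A study like the one above would add many more covariates (TCRI, asset growth, stock index, GDP) and judge the fit with goodness-of-fit tests and ROC curves rather than raw accuracy.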
Calibrated predictions for multivariate competing risks models.
Gorfine, Malka; Hsu, Li; Zucker, David M; Parmigiani, Giovanni
2014-04-01
Prediction models for time-to-event data play a prominent role in assessing the individual risk of a disease, such as cancer. Accurate disease prediction models provide an efficient tool for identifying individuals at high risk, and provide the groundwork for estimating the population burden and cost of disease and for developing patient care guidelines. We focus on risk prediction of a disease in which family history is an important risk factor that reflects inherited genetic susceptibility, shared environment, and common behavior patterns. In this work family history is accommodated using frailty models, with the main novel feature being allowing for competing risks, such as other diseases or mortality. We show through a simulation study that naively treating competing risks as independent right censoring events results in non-calibrated predictions, with the expected number of events overestimated. Discrimination performance is not affected by ignoring competing risks. Our proposed prediction methodologies correctly account for competing events, are very well calibrated, and easy to implement.
Directory of Open Access Journals (Sweden)
Saeid Komasi
2016-09-01
Full Text Available Introduction: Studies on behavioral patterns and personality traits play a critical role in the prediction of healthy or unhealthy behaviors and the identification of individuals at high risk for cardiovascular diseases (CVDs) in order to implement preventive strategies. This study aimed to compare personality types in individuals with and without CVD based on the enneagram of personality. Materials and Methods: This case-control study was conducted on 96 gender-matched participants (48 CVD patients and 48 healthy subjects). Data were collected using the Riso-Hudson Enneagram Type Indicator (RHETI). Data analysis was performed in SPSS V.20 using MANOVA, Chi-square, and t-test. Results: After adjustment for age and gender, there was a significant difference between the two groups in terms of personality types one and five. In CVD patients, the score of personality type one was significantly higher (F(1,94)=9.476, P=0.003), while the score of personality type five was significantly lower (F(1,94)=6.231, P=0.014), compared to healthy subjects. However, this significant difference was only observed in the score of personality type one in female patients (F(1,66)=4.382, P=0.04). Conclusion: Identifying healthy personality type one individuals before CVD development, providing necessary training on the potential risk factors of CVDs, and implementing preventive strategies (e.g., anger management skills) could lead to positive outcomes for society and the healthcare system. It is recommended that further investigation be conducted in this regard.
Verwei, M.; Burgsteden, J.A. van; Krul, C.A.M.; Sandt, J.J.M. van de; Freidig, A.P.
2006-01-01
The new EU legislation for chemicals (Registration, Evaluation and Authorization of Chemicals, REACH) and cosmetics (Seventh Amendment) stimulates the acceptance of in vitro and in silico approaches to test chemicals for their potential to cause reproductive effects. In the current study seven compo…
Global Solar Dynamo Models: Simulations and Predictions
Indian Academy of Sciences (India)
Mausumi Dikpati; Peter A. Gilman
2008-03-01
Flux-transport type solar dynamos have achieved considerable success in correctly simulating many solar cycle features, and are now being used for prediction of solar cycle timing and amplitude. We first define flux-transport dynamos and demonstrate how they work. The essential added ingredient in this class of models is meridional circulation, which governs the dynamo period and also plays a crucial role in determining the Sun’s memory about its past magnetic fields. We show that flux-transport dynamo models can explain many key features of solar cycles. Then we show that a predictive tool can be built from this class of dynamo that can be used to predict mean solar cycle features by assimilating magnetic field data from previous cycles.
DEFF Research Database (Denmark)
Pietroni, Carlotta; Andersen, Jeppe D.; Johansen, Peter
2014-01-01
In two recent studies of Spanish individuals [1,2], gender was suggested as a factor that contributes to human eye colour variation. However, gender did not improve the predictive accuracy for blue, intermediate and brown eye colours when gender was included in the IrisPlex model [3]. In this study, we investigate the role of gender as a factor that contributes to eye colour variation and suggest that the gender effect on eye colour is population specific. A total of 230 Italian individuals were typed for the six IrisPlex SNPs (rs12913832, rs1800407, rs12896399, rs1393350, rs16891982 and rs…) … eye colour independently of ancestry. Furthermore, we found gender to be significantly associated with quantitative eye colour measurements in the Italian population sample. We found that the association was statistically significant only among Italian individuals typed as heterozygote GA for HERC2 rs…
Model Predictive Control of Sewer Networks
Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik; Poulsen, Niels K.; Falk, Anne K. V.
2017-01-01
Developments in solutions for the management of urban drainage are of vital importance, as the amount of sewer water from urban areas continues to increase due to the growth of the world’s population and changing climate conditions. How a sewer network is structured, monitored and controlled has thus become an essential factor for the efficient performance of wastewater treatment plants. This paper examines methods for simplified modelling and control of a sewer network. A practical approach to the problem is taken by analysing a simplified design model, which is based on the Barcelona benchmark model. Due to the inherent constraints, the applied approach is based on Model Predictive Control.
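As a sketch of the receding-horizon principle (not of the Barcelona benchmark itself, which is far more detailed), the following toy controls a single storage volume; the model, setpoint, and input bound are all hypothetical:

```python
import numpy as np

# Scalar storage-tank model: x[k+1] = a*x[k] + b*u[k] + w[k]
a, b = 1.0, 1.0
x_ref, horizon = 2.0, 5          # setpoint and prediction horizon (illustrative)
u_max = 0.5                      # pump capacity (input constraint)

def mpc_step(x0, inflow):
    """One receding-horizon step: plan u[0..H-1] minimising tracking error,
    apply only the first move (crudely clipped to the input bound)."""
    H = horizon
    # With a = b = 1, predictions are x[k] = x0 + sum_{j<k}(u[j] + inflow),
    # so stacking x[1..H] = x_ref gives a lower-triangular linear system.
    A = np.tril(np.ones((H, H)))                 # cumulative-sum matrix
    rhs = x_ref - x0 - inflow * np.arange(1, H + 1)
    u = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return float(np.clip(u[0], -u_max, u_max))

# Closed loop: constant unmeasured-but-estimated inflow, start below setpoint
x, inflow = 0.0, 0.1
for _ in range(30):
    u = mpc_step(x, inflow)
    x = a * x + b * u + inflow
print(f"final level: {x:.3f}")   # settles at the 2.0 setpoint
```

At each step the controller plans over the full horizon but applies only the first move, so feedback continually corrects for the inflow; the clip is a crude stand-in for the constrained quadratic program a real MPC would solve.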
Johnson, Traci L
2016-01-01
Until now, systematic errors in strong gravitational lens modeling have been acknowledged but never fully quantified. Here, we launch an investigation into the systematics induced by constraint selection. We model the simulated cluster Ares 362 times using random selections of image systems with and without spectroscopic redshifts and quantify the systematics using several diagnostics: image predictability, accuracy of model-predicted redshifts, enclosed mass, and magnification. We find that for models with $>15$ image systems, the image plane rms does not decrease significantly when more systems are added; however, the rms values quoted in the literature may be misleading as to the ability of a model to predict new multiple images. The mass is well constrained near the Einstein radius in all cases, and the systematic error drops for models with $>10$ image systems. Magnification errors are smallest along the straight portions of the critical curve, and the value of the magnification is systematically lower near curved por…
Juneja, Vijay K; Altuntaş, Evrim Güneş; Ayhan, Kamuran; Hwang, Cheng-An; Sheen, Shiowshuh; Friedman, Mendel
2013-06-03
We investigated the combined effect of three internal temperatures (57.5, 60, and 62.5°C) and different concentrations (0 to 3.0 wt/wt%) of sodium chloride (NaCl) and apple polyphenols (APP), individually and in combination, on the heat resistance of a five-strain cocktail of Listeria monocytogenes in ground beef. A complete factorial design (3×4×4) was used to assess the effects and interactions of heating temperature, NaCl, and APP. All 48 combinations were tested twice, to yield 96 survival curves. Mathematical models were then used to quantitate the combined effect of these parameters on the heat resistance of the pathogen. The theoretical analysis shows that, compared with heat alone, the addition of NaCl enhanced, and that of APP reduced, the heat resistance of L. monocytogenes measured as D-values. By contrast, the protective effect of NaCl against thermal inactivation of the pathogen was reduced when both additives were present in combination, as evidenced by reductions of up to ~68% in D-values at 57.5°C, 65% at 60°C, and 25% at 62.5°C. The observed high antimicrobial activity of the combination of APP and low salt levels (e.g., 2.5% APP and 0.5% salt) suggests that commercial and home processors of meat could reduce the salt concentration by adding APP to the ground meat. The combined effect also allows a reduction of the heat-treatment temperature as well as the salt content of the meat. Meat processors can use the predictive model to design processing times and temperatures that protect against the adverse effects of contaminated meat products. Additional benefits include reduced energy use in cooking, and the added antioxidative apple polyphenols may provide beneficial health effects to consumers.
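D-values like those discussed above come from log-linear survivor curves; a minimal sketch of how a D-value is estimated, using hypothetical count data:

```python
import numpy as np

def d_value(times, log10_counts):
    """Fit the log-linear survivor model log10 N(t) = log10 N0 - t/D and
    return D, the time for a 1-log10 reduction at a fixed temperature."""
    slope, _ = np.polyfit(times, log10_counts, 1)
    return -1.0 / slope

# Hypothetical survivor curve: one log10 reduction every 2 minutes
t = np.array([0.0, 2.0, 4.0, 6.0])        # heating time (min)
logN = np.array([7.0, 6.0, 5.0, 4.0])     # log10 CFU/g
print(f"D = {d_value(t, logN):.2f} min")
```

D is the heating time, at a fixed temperature, that reduces the viable count tenfold; additives such as NaCl or APP shift D up or down, which is what the factorial models above quantify.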
Directory of Open Access Journals (Sweden)
Peter Hoonakker
2014-01-01
High employee turnover has always been a major issue for Information Technology (IT). In particular, turnover of women is very high. In this study, we used the Job Demands/Resources (JD-R) model to examine the relationships between job demands and job resources, stress/burnout and job satisfaction/commitment, and turnover intention, and tested the model for gender differences. Data were collected in five IT companies. A sample of 624 respondents (return rate: 56%; 54% males; mean age: 39.7 years) was available for statistical analyses. Results of our study show that the relationship between job demands and turnover intention is mediated by emotional exhaustion (burnout), and the relationship between job resources and turnover intention is mediated by job satisfaction. We found noticeable gender differences in these relationships, which can explain differences in turnover intention between male and female employees. The results of our study have consequences for organizational retention strategies to keep men and women in the IT workforce.
DKIST Polarization Modeling and Performance Predictions
Harrington, David
2016-05-01
Calibrating the Mueller matrices of large aperture telescopes and associated coude instrumentation requires astronomical sources and several modeling assumptions to predict the behavior of the system polarization with field of view, altitude, azimuth and wavelength. The Daniel K. Inouye Solar Telescope (DKIST) polarimetric instrumentation requires very high accuracy calibration of a complex coude path with an off-axis f/2 primary mirror, time-dependent optical configurations and a substantial field of view. Polarization predictions across a diversity of optical configurations, tracking scenarios, slit geometries and vendor coating formulations are critical to both construction and continued operations efforts. Recent daytime-sky polarization calibrations of the 4m AEOS telescope and HiVIS spectropolarimeter on Haleakala have provided system Mueller matrices over full telescope articulation for a 15-reflection coude system. AEOS and HiVIS are a DKIST analog with a many-fold coude optical feed and similar mirror coatings creating 100% polarization cross-talk with altitude, azimuth and wavelength. Polarization modeling predictions using Zemax have successfully matched the altitude-azimuth-wavelength dependence on HiVIS to within the few-percent amplitude limitations of several instrument artifacts. Polarization predictions for coude beam paths depend greatly on modeling the angle-of-incidence dependences in powered optics and the mirror coating formulations. A 6-month HiVIS daytime-sky calibration plan has been analyzed for accuracy under a wide range of sky conditions and data analysis algorithms. Predictions of polarimetric performance for the DKIST first-light instrumentation suite have been created under a range of configurations. These new modeling tools and polarization predictions have substantial impact for the design, fabrication and calibration process in the presence of manufacturing issues, science use-case requirements and ultimate system calibration.
Modelling Chemical Reasoning to Predict Reactions
Segler, Marwin H. S.; Waller, Mark P.
2016-01-01
The ability to reason beyond established knowledge allows organic chemists to solve synthetic problems and to invent novel transformations. Here, we propose a model which mimics chemical reasoning and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outpe…
Predictive Modeling of the CDRA 4BMS
Coker, Robert; Knox, James
2016-01-01
Fully predictive models of the Four Bed Molecular Sieve of the Carbon Dioxide Removal Assembly on the International Space Station are being developed. This virtual laboratory will be used to help reduce mass, power, and volume requirements for future missions. In this paper we describe current and planned modeling developments in the area of carbon dioxide removal to support future crewed Mars missions as well as the resolution of anomalies observed in the ISS CDRA.
Raman Model Predicting Hardness of Covalent Crystals
Zhou, Xiang-Feng; Qian, Quang-Rui; Sun, Jian; Tian, Yongjun; Wang, Hui-Tian
2009-01-01
Based on the fact that both hardness and vibrational Raman spectrum depend on the intrinsic property of chemical bonds, we propose a new theoretical model for predicting hardness of a covalent crystal. The quantitative relationship between hardness and vibrational Raman frequencies deduced from the typical zincblende covalent crystals is validated to be also applicable for the complex multicomponent crystals. This model enables us to nondestructively and indirectly characterize the hardness o...
Predictive Modelling of Mycotoxins in Cereals
Fels, van der H.J.; Liu, C.
2015-01-01
This article presents the summaries of the presentations given at the 30th meeting of the Werkgroep Fusarium. The topics are: Predictive Modelling of Mycotoxins in Cereals; Microbial degradation of DON; Exposure to green leaf volatiles primes wheat against FHB but boosts…
Unreachable Setpoints in Model Predictive Control
DEFF Research Database (Denmark)
Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp
2008-01-01
steady state is established for terminal constraint model predictive control (MPC). The region of attraction is the steerable set. Existing analysis methods for closed-loop properties of MPC are not applicable to this new formulation, and a new analysis method is developed. It is shown how to extend...
Prediction modelling for population conviction data
Tollenaar, N.
2017-01-01
In this thesis, the possibilities of using prediction models for judicial penal case data are investigated. The development and refinement of a risk assessment scale based on these data is discussed. When false positives are weighted as severely as false negatives, 70% of cases can be classified correctly.
A Predictive Model for MSSW Student Success
Napier, Angela Michele
2011-01-01
This study tested a hypothetical model for predicting both graduate GPA and graduation of University of Louisville Kent School of Social Work Master of Science in Social Work (MSSW) students entering the program during the 2001-2005 school years. The preexisting characteristics of demographics, academic preparedness and culture shock along with…
Predictability of extreme values in geophysical models
Sterk, A.E.; Holland, M.P.; Rabassa, P.; Broer, H.W.; Vitolo, R.
2012-01-01
Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical models.
A revised prediction model for natural conception
Bensdorp, A.J.; Steeg, J.W. van der; Steures, P.; Habbema, J.D.; Hompes, P.G.; Bossuyt, P.M.; Veen, F. van der; Mol, B.W.; Eijkemans, M.J.; Kremer, J.A.M.; et al.,
2017-01-01
One of the aims in reproductive medicine is to differentiate between couples that have favourable chances of conceiving naturally and those that do not. Since the development of the prediction model of Hunault, characteristics of the subfertile population have changed. The objective of this analysis
Distributed Model Predictive Control via Dual Decomposition
DEFF Research Database (Denmark)
Biegel, Benjamin; Stoustrup, Jakob; Andersen, Palle
2014-01-01
This chapter presents dual decomposition as a means to coordinate a number of subsystems coupled by state and input constraints. Each subsystem is equipped with a local model predictive controller while a centralized entity manages the subsystems via prices associated with the coupling constraints...
Leptogenesis in minimal predictive seesaw models
Björkeroth, Fredrik; Varzielas, Ivo de Medeiros; King, Stephen F
2015-01-01
We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses with Yukawa couplings to …
Samapundo, S; Devlieghere, F; De Meulenaer, B; Geeraerd, A H; Van Impe, J F; Debevere, J M
2005-11-15
The major objective of this study was to develop validated models describing the effect of a(w) and temperature on the radial growth on corn of the two major fumonisin-producing Fusaria, namely Fusarium verticillioides and F. proliferatum. The growth of these two isolates on corn was therefore studied at water activities between 0.810 and 0.985 and temperatures between 15 and 30°C. The minimum a(w) for growth was 0.869 for F. verticillioides and 0.854 for F. proliferatum. No growth took place at a(w) values of 0.831 and 0.838 for F. verticillioides and F. proliferatum, respectively. The colony growth rates g (mm d(-1)) were determined by fitting a flexible growth model describing the change in colony diameter (mm) with respect to time (days). Secondary models relating the colony growth rate to a(w), or to a(w) and temperature, were developed. A third-order polynomial equation and the linear Arrhenius-Davey model were used to describe the combined effect of temperature and a(w) on g. The combined modelling approaches, predicting g (mm d(-1)) at any a(w) and/or temperature, were validated on independently collected data. All models proved to be good predictors of the growth rates of both isolates on maize within the experimental conditions. The third-order polynomial equation had bias factors of 1.042 and 1.054 and accuracy factors of 1.128 and 1.380 for F. verticillioides and F. proliferatum, respectively. The linear Arrhenius-Davey model had bias factors of 0.978 and 1.002 and accuracy factors of 1.098 and 1.122 for F. verticillioides and F. proliferatum, respectively. The results confirm the general finding that a(w) has a greater influence on fungal growth than temperature. The developed models can be applied to the prevention of Fusarium growth on maize and to the development of models that incorporate other factors important to mould growth on maize.
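The bias and accuracy factors quoted above are standard validation indices in predictive microbiology (Ross's definitions: geometric means of the prediction/observation ratio and of its absolute log-ratio). A small sketch with hypothetical growth-rate data:

```python
import numpy as np

def bias_accuracy_factors(predicted, observed):
    """Bias factor Bf = 10**mean(log10(pred/obs)); accuracy factor
    Af = 10**mean(|log10(pred/obs)|). Bf near 1 means unbiased; Af is
    always >= Bf and >= 1, measuring average spread."""
    r = np.log10(np.asarray(predicted) / np.asarray(observed))
    return 10 ** r.mean(), 10 ** np.abs(r).mean()

# Hypothetical growth rates g (mm/day): model predictions vs. observations
pred = [1.10, 2.05, 3.20, 4.00]
obs  = [1.00, 2.00, 3.00, 4.20]
bf, af = bias_accuracy_factors(pred, obs)
print(f"bias factor {bf:.3f}, accuracy factor {af:.3f}")
```

A bias factor above 1 indicates, on average, fail-safe overprediction of growth; the accuracy factor quantifies how far predictions typically deviate from observations in either direction.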
Using Pareto points for model identification in predictive toxicology.
Palczewska, Anna; Neagu, Daniel; Ridley, Mick
2013-03-22
Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design or food protection to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature associated with the best practice of model generation and data integration but management and automated identification of relevant models from available collections of models is still an open problem. Currently, the decision on which model should be used for a new chemical compound is left to users. This paper intends to initiate the discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show a great potential for automated model identification methods in predictive toxicology.
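The Pareto-optimality idea can be sketched as keeping only the non-dominated models when each model in the collection is scored on two objectives, e.g. predictive accuracy (maximise) and distance of the query compound from the model's training domain (minimise). The model names and scores below are hypothetical, not from the paper:

```python
def pareto_front(models):
    """Return models not dominated on (higher accuracy, lower distance).
    `models` is a list of (name, accuracy, distance_to_query) tuples."""
    front = []
    for m in models:
        dominated = any(
            o[1] >= m[1] and o[2] <= m[2] and (o[1] > m[1] or o[2] < m[2])
            for o in models
        )
        if not dominated:
            front.append(m)
    return front

# Hypothetical model collection scored against one new compound
collection = [
    ("rf_igc50", 0.81, 0.40),
    ("svm_igc50", 0.78, 0.10),   # less accurate but closer to the query
    ("knn_igc50", 0.75, 0.50),   # dominated by rf_igc50 on both objectives
]
print([m[0] for m in pareto_front(collection)])
```

A dominated model (here knn_igc50) is beaten on both objectives by some other model and can be discarded automatically; the surviving front is the candidate set from which a final model is selected.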
Probabilistic Modeling and Visualization for Bankruptcy Prediction
DEFF Research Database (Denmark)
Antunes, Francisco; Ribeiro, Bernardete; Pereira, Francisco Camara
2017-01-01
In accounting and finance domains, bankruptcy prediction is of great utility for all of the economic stakeholders. The challenge of accurately assessing business failure prediction, especially under scenarios of financial crisis, is known to be complicated. Although there have been many successful studies on bankruptcy detection, probabilistic approaches have seldom been carried out. In this paper we assume a probabilistic point of view by applying Gaussian Processes (GP) in the context of bankruptcy prediction, comparing it against Support Vector Machines (SVM) and Logistic Regression (LR). Using real-world bankruptcy data, an in-depth analysis is conducted showing that, in addition to a probabilistic interpretation, the GP can effectively improve bankruptcy prediction performance with high accuracy when compared to the other approaches. We additionally generate a complete graphical…
Specialized Language Models using Dialogue Predictions
Popovici, C; Popovici, Cosmin; Baggia, Paolo
1996-01-01
This paper analyses language modeling in spoken dialogue systems for accessing a database. The use of several language models obtained by exploiting dialogue predictions gives better results than the use of a single model for the whole dialogue interaction. For this reason several models have been created, each one for a specific system question, such as the request or the confirmation of a parameter. The use of dialogue-dependent language models increases performance both at the recognition and at the understanding level, especially on answers to system requests. Moreover, other methods to increase performance, like automatic clustering of vocabulary words or the use of better acoustic models during recognition, do not affect the improvements given by dialogue-dependent language models. The system used in our experiments is Dialogos, the Italian spoken dialogue system used for accessing railway timetable information over the telephone. The experiments were carried out on a large corpus of dialogues coll…
Caries risk assessment models in caries prediction
Directory of Open Access Journals (Sweden)
Amila Zukanović
2013-11-01
Objective. The aim of this research was to assess the efficiency of different multifactor models in caries prediction. Material and methods. Data from the questionnaire and objective examination of 109 examinees were entered into the Cariogram, Previser and Caries-Risk Assessment Tool (CAT) multifactor risk assessment models. Caries risk was assessed with all three models for each patient, classifying them as low-, medium- or high-risk patients. The development of new caries lesions over a period of three years [Decay Missing Filled Tooth (DMFT) increment = difference between the Decay Missing Filled Tooth Surface (DMFTS) index at baseline and follow-up] allowed examination of the predictive capacity of the different multifactor models. Results. The data gathered showed that the different multifactor risk assessment models give significantly different results (Friedman test: Chi-square = 100.073, p=0.000). The Cariogram was the model that identified the majority of examinees as medium-risk patients (70%). The other two models were more radical in risk assessment, giving more unfavorable risk profiles for patients. In only 12% of the patients did the three multifactor models assess the risk in the same way. Previser and CAT gave the same results in 63% of cases; the Wilcoxon test showed that there is no statistically significant difference in caries risk assessment between these two models (Z = -1.805, p=0.071). Conclusions. Evaluation of the three multifactor caries risk assessment models (Cariogram, PreViser and CAT) showed that only the Cariogram can successfully predict new caries development in 12-year-old Bosnian children.
Disease prediction models and operational readiness.
Directory of Open Access Journals (Sweden)
Courtney D Corley
The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. We define a disease event to be a biological event with focus on the One Health paradigm. These events are characterized by evidence of infection and/or disease condition. We reviewed models that attempted to predict a disease event, not merely its transmission dynamics, and we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). We searched commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. After removal of duplications and extraneous material, a core collection of 6,524 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. From this collection we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models, classified as one or more of the following: event prediction (4), spatial (26), ecological niche (28), diagnostic or clinical (6), spread or response (9), and reviews (3). The model parameters (e.g., etiology, climatic, spatial, cultural) and data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) were recorded and reviewed. A component of this review is the identification of verification and validation (V&V) methods applied to each model, if any V&V method was reported. All models were classified as either having undergone some verification or validation method, or none. We close by outlining an initial set of operational readiness level guidelines for disease prediction models based upon established Technology…
Model Predictive Control based on Finite Impulse Response Models
DEFF Research Database (Denmark)
Prasath, Guru; Jørgensen, John Bagterp
2008-01-01
We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations...
Electrostatic ion thrusters - towards predictive modeling
Energy Technology Data Exchange (ETDEWEB)
Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)
2014-02-15
The development of electrostatic ion thrusters so far has mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This resulted in considerable effort regarding prototype design, construction and testing and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and space craft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules a new quality in the description of electrostatic thrusters can be reached. These open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH.
Gas explosion prediction using CFD models
Energy Technology Data Exchange (ETDEWEB)
Niemann-Delius, C.; Okafor, E. [RWTH Aachen Univ. (Germany); Buhrow, C. [TU Bergakademie Freiberg Univ. (Germany)
2006-07-15
A number of CFD models are currently available to model gaseous explosions in complex geometries. Some of these tools allow the representation of complex environments within hydrocarbon production plants. In certain explosion scenarios, a correction is usually made for the presence of buildings and other complexities by using crude approximations to obtain realistic estimates of explosion behaviour as can be found when predicting the strength of blast waves resulting from initial explosions. With the advance of computational technology, and greater availability of computing power, computational fluid dynamics (CFD) tools are becoming increasingly available for solving such a wide range of explosion problems. A CFD-based explosion code - FLACS can, for instance, be confidently used to understand the impact of blast overpressures in a plant environment consisting of obstacles such as buildings, structures, and pipes. With its porosity concept representing geometry details smaller than the grid, FLACS can represent geometry well, even when using coarse grid resolutions. The performance of FLACS has been evaluated using a wide range of field data. In the present paper, the concept of computational fluid dynamics (CFD) and its application to gas explosion prediction is presented. Furthermore, the predictive capabilities of CFD-based gaseous explosion simulators are demonstrated using FLACS. Details about the FLACS-code, some extensions made to FLACS, model validation exercises, application, and some results from blast load prediction within an industrial facility are presented.
Characterizing Attention with Predictive Network Models.
Rosenberg, M D; Finn, E S; Scheinost, D; Constable, R T; Chun, M M
2017-04-01
Recent work shows that models based on functional connectivity in large-scale brain networks can predict individuals' attentional abilities. While being some of the first generalizable neuromarkers of cognitive function, these models also inform our basic understanding of attention, providing empirical evidence that: (i) attention is a network property of brain computation; (ii) the functional architecture that underlies attention can be measured while people are not engaged in any explicit task; and (iii) this architecture supports a general attentional ability that is common to several laboratory-based tasks and is impaired in attention deficit hyperactivity disorder (ADHD). Looking ahead, connectivity-based predictive models of attention and other cognitive abilities and behaviors may potentially improve the assessment, diagnosis, and treatment of clinical dysfunction.
A Study On Distributed Model Predictive Consensus
Keviczky, Tamas
2008-01-01
We investigate convergence properties of a proposed distributed model predictive control (DMPC) scheme, where agents negotiate to compute an optimal consensus point using an incremental subgradient method based on primal decomposition as described in Johansson et al. [2006, 2007]. The objective of the distributed control strategy is to agree upon and achieve an optimal common output value for a group of agents in the presence of constraints on the agent dynamics using local predictive controllers. Stability analysis using a receding horizon implementation of the distributed optimal consensus scheme is performed. Conditions are given under which convergence can be obtained even if the negotiations do not reach full consensus.
Modeling Seizure Self-Prediction: An E-Diary Study
Haut, Sheryl R.; Hall, Charles B.; Borkowski, Thomas; Tennen, Howard; Lipton, Richard B.
2013-01-01
Purpose A subset of patients with epilepsy successfully self-predicted seizures in a paper diary study. We conducted an e-diary study to ensure that prediction precedes seizures, and to characterize the prodromal features and time windows that underlie self-prediction. Methods Subjects 18 or older with LRE and ≥3 seizures/month maintained an e-diary, reporting AM/PM data daily, including mood, premonitory symptoms, and all seizures. Self-prediction was rated by asking, “How likely are you to experience a seizure [time frame]?” Five choices ranged from almost certain (>95% chance) to very unlikely. Relative odds of seizure (OR) within time frames were examined using Poisson models with log normal random effects to adjust for multiple observations. Key Findings Nineteen subjects reported 244 eligible seizures. The OR for prediction choices within 6hrs was as high as 9.31 (1.92,45.23) for “almost certain”. Prediction was most robust within 6hrs of diary entry, and remained significant up to 12hrs. For the 9 best predictors, average sensitivity was 50%. Older age contributed to successful self-prediction, and self-prediction appeared to be driven by mood and premonitory symptoms. In multivariate modeling of seizure occurrence, self-prediction (2.84; 1.68,4.81), favorable change in mood (0.82; 0.67,0.99) and number of premonitory symptoms (1.11; 1.00,1.24) were significant. Significance Some persons with epilepsy can self-predict seizures. In these individuals, the odds of a seizure following a positive prediction are high. Predictions were robust, not attributable to recall bias, and were related to self-awareness of mood and premonitory features. The 6-hour prediction window is suitable for the development of pre-emptive therapy. PMID:24111898
A Hybrid Neural Network Prediction Model of Air Ticket Sales
Directory of Open Access Journals (Sweden)
Han-Chen Huang
2013-11-01
Full Text Available Air ticket sales revenue is an important source of revenue for travel agencies, and if future air ticket sales revenue can be accurately forecast, travel agencies will be able to procure in advance and secure a sufficient number of cost-effective tickets. Therefore, this study applied Artificial Neural Networks (ANN) and Genetic Algorithms (GA) to establish a prediction model of travel agency air ticket sales revenue. By verifying the empirical data, this study showed that the established prediction model is accurate, with a MAPE (mean absolute percentage error) of only 9.11%. The established model can provide business operators with reliable and efficient prediction data as a reference for operational decisions.
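The accuracy metric the abstract reports, MAPE, is straightforward to compute; the revenue figures below are illustrative, not from the study:

```python
# Mean absolute percentage error (MAPE), the metric for which the study
# reports 9.11% for its ANN+GA ticket-sales model.
def mape(actual, predicted):
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical monthly ticket-revenue figures (not the paper's data).
actual = [120.0, 135.0, 150.0, 160.0]
predicted = [110.0, 140.0, 145.0, 170.0]
error = mape(actual, predicted)
```

Note MAPE is undefined when an actual value is zero, which is rarely an issue for revenue series.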
NONLINEAR MODEL PREDICTIVE CONTROL OF CHEMICAL PROCESSES
Directory of Open Access Journals (Sweden)
R. G. SILVA
1999-03-01
Full Text Available A new algorithm for model predictive control is presented. The algorithm utilizes a simultaneous solution and optimization strategy to solve the model's differential equations. The equations are discretized by equidistant collocation and, together with the algebraic model equations, included as constraints in a nonlinear programming (NLP) problem. This algorithm is compared with the algorithm that uses orthogonal collocation on finite elements. The equidistant collocation algorithm results in simpler equations, providing a decrease in computation time for the control moves. Simulation results are presented and show satisfactory performance of this algorithm.
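The simultaneous strategy the abstract describes can be sketched in miniature: discretize the model dynamics and optimize the control sequence over a finite horizon. The toy model, horizon, tuning, and the crude finite-difference gradient descent standing in for an NLP solver below are all assumptions for illustration, not the paper's equidistant-collocation algorithm:

```python
# Minimal direct-transcription MPC sketch: regulate dx/dt = -x + u toward
# x_ref over a short horizon. Euler discretization stands in for collocation.
dt, N, x_ref = 0.1, 10, 1.0

def rollout(x0, u):
    # Equidistant discretization of the (hypothetical) model dynamics.
    xs = [x0]
    for uk in u:
        xs.append(xs[-1] + dt * (-xs[-1] + uk))
    return xs

def cost(x0, u):
    # Quadratic tracking cost plus a small control penalty.
    xs = rollout(x0, u)
    return sum((x - x_ref) ** 2 for x in xs[1:]) + 0.01 * sum(uk ** 2 for uk in u)

def solve_nlp(x0, iters=300, lr=0.5, eps=1e-5):
    # Crude finite-difference gradient descent in place of a real NLP solver.
    u = [0.0] * N
    for _ in range(iters):
        base = cost(x0, u)
        grad = []
        for i in range(N):
            up = list(u)
            up[i] += eps
            grad.append((cost(x0, up) - base) / eps)
        u = [ui - lr * gi for ui, gi in zip(u, grad)]
    return u

u_opt = solve_nlp(0.0)
x_final = rollout(0.0, u_opt)[-1]
```

In a receding-horizon loop only the first element of `u_opt` would be applied before re-solving from the new state.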
Denkins, P.; Badhwar, G.; Obot, V.; Wilson, B.; Jejelewo, O.
2001-01-01
NASA is very interested in improving its ability to monitor and forecast the radiation levels that pose a health risk to space-walking astronauts as they construct the International Space Station, and to astronauts who will participate in long-term and deep-space missions. Human exploratory missions to the Moon and Mars within the next quarter century will expose crews to transient radiation from solar particle events, which include high-energy galactic cosmic rays and high-energy protons. Because the radiation levels in space are high and solar activity is presently unpredictable, adequate shielding is needed to minimize the deleterious health effects of exposure to radiation. Today, numerous models have been developed and used to predict radiation exposure. One such model is the Space Environment Information Systems (SPENVIS) modeling program, developed by the Belgian Institute for Space Aeronomy. SPENVIS, which has been assessed to be an excellent tool for characterizing the radiation environment for microelectronics and investigating orbital debris, is being evaluated for its usefulness in determining the dose and dose-equivalent for human exposure. Thus far, the calculations for dose-depth relations under varying shielding conditions have been in agreement with calculations done using HZETRN and PDOSE, which are well-known and widely used models for characterizing the environments for human exploratory missions. There is disagreement when assessing the impact of secondary radiation particles, since SPENVIS makes only a crude estimation of the secondary radiation particles when calculating LET versus flux. SPENVIS was used to model dose-depth relations for the blood-forming organs. Radiation sickness and cancer are life-threatening consequences resulting from radiation exposure. In space, exposure to radiation generally includes all of the critical organs. Biological and toxicological impacts have been included for discussion along with alternative risk mitigation
Modeling and prediction of surgical procedure times
P.S. Stepaniak (Pieter); C. Heij (Christiaan); G. de Vries (Guus)
2009-01-01
Accurate prediction of medical operation times is of crucial importance for cost-efficient operating room planning in hospitals. This paper investigates the possible dependence of procedure times on surgeon factors like age, experience, gender, and team composition. The effect of these f
Performance model to predict overall defect density
Directory of Open Access Journals (Sweden)
J Venkatesh
2012-08-01
Full Text Available Management by metrics is the expectation from IT service providers seeking to stay differentiated. Given a project and its associated parameters and dynamics, the behaviour and outcome need to be predicted. There is a lot of focus on the end state and on minimizing defect leakage as much as possible. In most cases, the actions taken are re-active and come too late in the life cycle. Root cause analysis and corrective actions can be implemented only to the benefit of the next project. The focus has to shift left, towards the execution phase, rather than waiting for lessons to be learnt after implementation. How do we pro-actively predict defect metrics and have a preventive action plan in place? This paper illustrates a process performance model to predict overall defect density based on data from projects in an organization.
Neuro-fuzzy modeling in bankruptcy prediction
Directory of Open Access Journals (Sweden)
Vlachos D.
2003-01-01
Full Text Available For the past 30 years the problem of bankruptcy prediction has been thoroughly studied. From the paper of Altman in 1968 to the recent papers of the '90s, the progress in prediction accuracy was not satisfactory. This paper investigates an alternative modeling of the system (the firm), combining neural networks and fuzzy controllers, i.e. using neuro-fuzzy models. Classical modeling is based on mathematical models that describe the behavior of the firm under consideration. The main idea of fuzzy control, on the other hand, is to build a model of a human control expert who is capable of controlling the process without thinking in terms of a mathematical model. This control expert specifies his control action in the form of linguistic rules. These control rules are translated into the framework of fuzzy set theory, providing a calculus which can simulate the behavior of the control expert and enhance its performance. The accuracy of the model is studied using datasets from previous research papers.
Storck, Pascal; Bowling, Laura; Wetherbee, Paul; Lettenmaier, Dennis
1998-05-01
Spatially distributed rainfall-runoff models, made feasible by the widespread availability of land surface characteristics data (especially digital topography), and the evolution of high power desktop workstations, are particularly useful for assessment of the hydrological effects of land surface change. Three examples are provided of the use of the Distributed Hydrology-Soil-Vegetation Model (DHSVM) to assess the hydrological effects of logging in the Pacific Northwest. DHSVM provides a dynamic representation of the spatial distribution of soil moisture, snow cover, evapotranspiration and runoff production, at the scale of digital topographic data (typically 30-100 m). Among the hydrological concerns that have been raised related to forest harvest in the Pacific Northwest are increases in flood peaks owing to enhanced rain-on-snow and spring radiation melt response, and the effects of forest roads. The first example is for two rain-on-snow floods in the North Fork Snoqualmie River during November 1990 and December 1989. Predicted maximum vegetation sensitivities (the difference between predicted peaks for all mature vegetation compared with all clear-cut) showed a 31% increase in the peak runoff for the 1989 event and a 10% increase for the larger 1990 event. The main reason for the difference in response can be traced to less antecedent low elevation snow during the 1990 event. The second example is spring snowmelt runoff for the Little Naches River, Washington, which drains the east slopes of the Washington Cascades. Analysis of spring snowmelt peak runoff during May 1993 and April 1994 showed that, for current vegetation relative to all mature vegetation, increases in peak spring stream flow of only about 3% should have occurred over the entire basin. However, much larger increases (up to 30%) would occur for a maximum possible harvest scenario, and in a small headwaters catchment, whose higher elevation leads to greater snow coverage (and, hence, sensitivity
Pressure prediction model for compression garment design.
Leung, W Y; Yuen, D W; Ng, Sun Pui; Shi, S Q
2010-01-01
Based on the application of Laplace's law to compression garments, an equation for predicting garment pressure, incorporating the body circumference, the cross-sectional area of fabric, applied strain (as a function of reduction factor), and its corresponding Young's modulus, is developed. Design procedures are presented to predict garment pressure using the aforementioned parameters for clinical applications. Compression garments have been widely used in treating burn scars. Fabricating a compression garment with a required pressure is important in the healing process. A systematic and scientific design method can enable the occupational therapist and compression garments' manufacturer to custom-make a compression garment with a specific pressure. The objectives of this study are 1) to develop a pressure prediction model incorporating different design factors to estimate the pressure exerted by the compression garments before fabrication; and 2) to propose more design procedures in clinical applications. Three kinds of fabrics cut at different bias angles were tested under uniaxial tension, as were samples made in a double-layered structure. Sets of nonlinear force-extension data were obtained for calculating the predicted pressure. Using the value at 0° bias angle as reference, the Young's modulus can vary by as much as 29% for fabric type P11117, 43% for fabric type PN2170, and even 360% for fabric type AP85120 at a reduction factor of 20%. When comparing the predicted pressure calculated from the single-layered and double-layered fabrics, the double-layered construction provides a larger range of target pressure at a particular strain. The anisotropic and nonlinear behaviors of the fabrics have thus been determined. Compression garments can be methodically designed by the proposed analytical pressure prediction model.
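The Laplace's-law idea behind such models can be sketched with the generic cylinder form P = T / r, where the fabric tension per unit length is taken as T = E · strain · A / w. This is a hedged stand-in, not the paper's fitted equation, and all parameter names and numbers below are illustrative:

```python
import math

# Generic Laplace's-law garment-pressure sketch (not the paper's equation):
# pressure = tension per unit length / limb radius, with tension estimated
# from Young's modulus, applied strain, fabric cross-section, and panel width.
def garment_pressure(youngs_modulus, strain, fabric_area, width, circumference):
    """Return pressure in Pa for a cylindrical limb of given circumference (m)."""
    radius = circumference / (2.0 * math.pi)
    tension_per_length = youngs_modulus * strain * fabric_area / width
    return tension_per_length / radius

# Illustrative values: E = 0.5 MPa, 20% strain (reduction factor),
# fabric cross-section 5e-6 m^2, 0.1 m panel width, 0.3 m limb circumference.
p = garment_pressure(0.5e6, 0.20, 5e-6, 0.1, 0.3)  # Pa
p_mmHg = p / 133.322
```

The form makes the design trade-offs visible: pressure rises with strain and modulus and falls with limb circumference, matching the abstract's emphasis on reduction factor and bias-angle-dependent modulus.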
Statistical assessment of predictive modeling uncertainty
Barzaghi, Riccardo; Marotta, Anna Maria
2017-04-01
When the results of geophysical models are compared with data, the uncertainties of the model are typically disregarded. We propose a method for defining the uncertainty of a geophysical model based on a numerical procedure that estimates the empirical auto- and cross-covariances of model-estimated quantities. These empirical values are then fitted by proper covariance functions and used to compute the covariance matrix associated with the model predictions. The method is tested using a geophysical finite element model in the Mediterranean region. Using a novel χ2 analysis in which both data and model uncertainties are taken into account, the model's estimated tectonic strain pattern due to the Africa-Eurasia convergence in the area extending from the Calabrian Arc to the Alpine domain is compared with that estimated from GPS velocities, with the model uncertainty represented by its covariance structure and the observational uncertainty by the covariance of the GPS estimates. The results indicate that including the estimated model covariance in the testing procedure leads to lower observed χ2 values with better statistical significance, and may enable sharper identification of the best-fitting geophysical models.
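The core of the χ2 idea is that the residual r between model and data is tested against the combined covariance, χ² = rᵀ (C_model + C_data)⁻¹ r, so that adding the model covariance necessarily lowers the statistic. A toy two-dimensional sketch with made-up numbers (not the paper's data):

```python
# Chi-square of a residual vector against the sum of model and data
# covariances; 2x2 case inverted explicitly to stay dependency-free.
def chi2(residual, cov_model, cov_data):
    a = [[cov_model[0][0] + cov_data[0][0], cov_model[0][1] + cov_data[0][1]],
         [cov_model[1][0] + cov_data[1][0], cov_model[1][1] + cov_data[1][1]]]
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    inv = [[a[1][1] / det, -a[0][1] / det],
           [-a[1][0] / det, a[0][0] / det]]
    r = residual
    return sum(r[i] * inv[i][j] * r[j] for i in range(2) for j in range(2))

r = [0.5, -0.3]                      # illustrative model-minus-GPS residual
c_data = [[0.04, 0.0], [0.0, 0.04]]  # illustrative GPS covariance
c_model = [[0.05, 0.0], [0.0, 0.05]] # illustrative model covariance
with_model = chi2(r, c_model, c_data)
without_model = chi2(r, [[0.0, 0.0], [0.0, 0.0]], c_data)
```

As in the abstract, `with_model` is smaller than `without_model`: the same misfit is less significant once model uncertainty is acknowledged.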
Seasonal Predictability in a Model Atmosphere.
Lin, Hai
2001-07-01
The predictability of atmospheric mean-seasonal conditions in the absence of externally varying forcing is examined. A perfect-model approach is adopted, in which a global T21 three-level quasigeostrophic atmospheric model is integrated over 21 000 days to obtain a reference atmospheric orbit. The model is driven by a time-independent forcing, so that the only source of time variability is the internal dynamics. The forcing is set to perpetual winter conditions in the Northern Hemisphere (NH) and perpetual summer in the Southern Hemisphere. A significant temporal variability in the NH 90-day mean states is observed. The component of that variability associated with the higher-frequency motions, or climate noise, is estimated using a method developed by Madden. In the polar region, and to a lesser extent in the midlatitudes, the temporal variance of the winter means is significantly greater than the climate noise, suggesting some potential predictability in those regions. Forecast experiments are performed to see whether the presence of variance in the 90-day mean states that is in excess of the climate noise leads to some skill in the prediction of these states. Ensemble forecast experiments with nine members starting from slightly different initial conditions are performed for 200 different 90-day means along the reference atmospheric orbit. The serial correlation between the ensemble means and the reference orbit shows that there is skill in the 90-day mean predictions. The skill is concentrated in those regions of the NH that have the largest variance in excess of the climate noise. An EOF analysis shows that nearly all the predictive skill in the seasonal means is associated with one mode of variability with a strong axisymmetric component.
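The "variance in excess of climate noise" test can be illustrated on synthetic data: compare the variance of nonoverlapping 90-day means against the value expected from daily noise alone (σ²/90 for white noise). This toy uses independent Gaussian days, not Madden's full method or the model's output:

```python
import random
import statistics

# Ratio of observed 90-day-mean variance to the white-noise expectation.
# A ratio near 1 means the seasonal means are pure climate noise; a ratio
# well above 1 would suggest potential predictability.
random.seed(0)
n_seasons = 200
daily = [random.gauss(0.0, 1.0) for _ in range(90 * n_seasons)]
means = [statistics.mean(daily[i * 90:(i + 1) * 90]) for i in range(n_seasons)]
var_means = statistics.variance(means)
expected_noise = statistics.variance(daily) / 90.0
ratio = var_means / expected_noise
```

Real daily weather is autocorrelated, which inflates `expected_noise`; Madden's method accounts for that when estimating the noise level.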
Evaluation of burst pressure prediction models for line pipes
Energy Technology Data Exchange (ETDEWEB)
Zhu, Xian-Kui, E-mail: zhux@battelle.org [Battelle Memorial Institute, 505 King Avenue, Columbus, OH 43201 (United States); Leis, Brian N. [Battelle Memorial Institute, 505 King Avenue, Columbus, OH 43201 (United States)
2012-01-15
Accurate prediction of burst pressure plays a central role in engineering design and integrity assessment of oil and gas pipelines. Theoretical and empirical solutions for such prediction are evaluated in this paper relative to a burst pressure database comprising more than 100 tests covering a variety of pipeline steel grades and pipe sizes. Solutions considered include three based on plasticity theory for the end-capped, thin-walled, defect-free line pipe subjected to internal pressure in terms of the Tresca, von Mises, and ZL (or Zhu-Leis) criteria, one based on a cylindrical instability stress (CIS) concept, and a large group of analytical and empirical models previously evaluated by Law and Bowie (International Journal of Pressure Vessels and Piping, 84, 2007: 487-492). It is found that these models can be categorized into either a Tresca family or a von Mises family of solutions, except for the Margetson and Zhu-Leis models. The viability of predictions is measured via statistical analyses in terms of a mean error and its standard deviation. Consistent with an independent parallel evaluation using another large database, the Zhu-Leis solution is found to be best for predicting burst pressure, including consideration of strain-hardening effects, while the Tresca strength solutions, including the Barlow, maximum shear stress, Turner, and ASME boiler code formulas, provide reasonably good predictions for the class of line-pipe steels with intermediate strain-hardening response. Highlights: this paper evaluates different burst pressure prediction models for line pipes; the existing models are categorized into two major groups, Tresca and von Mises solutions; the prediction quality of each model is assessed statistically using a large full-scale burst test database; the Zhu-Leis solution is identified as the best predictive model.
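The Tresca/von Mises grouping reflects two textbook thin-wall forms: a Barlow/Tresca-type bound P = 2·s·t/D and a von Mises-type bound that is 2/√3 (about 1.155) times higher. These are generic bounds, not the paper's fitted Zhu-Leis model, and the pipe numbers below are illustrative:

```python
import math

# Textbook thin-wall burst-pressure bounds for a defect-free pipe of
# strength s, wall thickness t, and diameter D.
def burst_tresca(strength, t, D):
    return 2.0 * strength * t / D

def burst_von_mises(strength, t, D):
    # von Mises-family solutions exceed Tresca-family ones by 2/sqrt(3).
    return (2.0 / math.sqrt(3.0)) * burst_tresca(strength, t, D)

# Illustrative X65-like pipe: ultimate strength 531 MPa, t = 10 mm, D = 610 mm.
p_tresca = burst_tresca(531e6, 0.010, 0.610)   # Pa
p_mises = burst_von_mises(531e6, 0.010, 0.610)
```

Test data for real steels typically fall between these two bounds, which is why an intermediate criterion such as Zhu-Leis can fit best on average.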
Ivankina, T. I.; Zel, I. Yu.; Lokajicek, T.; Kern, H.; Lobanov, K. V.; Zharikov, A. V.
2017-08-01
In this paper we present experimental and theoretical studies on a highly anisotropic layered rock sample characterized by alternating layers of biotite and muscovite (retrogressed from sillimanite) and plagioclase and quartz, respectively. We applied two different experimental methods to determine seismic anisotropy at pressures up to 400 MPa: (1) measurement of P- and S-wave phase velocities on a cube in three foliation-related orthogonal directions and (2) measurement of P-wave group velocities on a sphere in 132 directions. The combination of the spatial distribution of P-wave velocities on the sphere (converted to phase velocities) with S-wave velocities of three orthogonal structural directions on the cube made it possible to calculate the bulk elastic moduli of the anisotropic rock sample. On the basis of the crystallographic preferred orientations (CPOs) of major minerals obtained by time-of-flight neutron diffraction, effective media modeling was performed using different inclusion methods and averaging procedures. The implementation of a nonlinear approximation of the P-wave velocity-pressure relation was applied to estimate the mineral matrix properties and the orientation distribution of microcracks. Comparison of theoretical calculations of elastic properties of the mineral matrix with those derived from the nonlinear approximation showed discrepancies in elastic moduli and P-wave velocities of about 10%. The observed discrepancies between the effective media modeling and ultrasonic velocity data are a consequence of the inhomogeneous structure of the sample and the inability to perform a long-wave approximation. Furthermore, small differences between elastic moduli predicted by the different theoretical models, including specific fabric characteristics such as crystallographic texture, grain shape and layering, were observed. It is shown that the bulk elastic anisotropy of the sample is basically controlled by the CPO of biotite and muscovite and their volume
Computational prediction of solubilizers' effect on partitioning.
Hoest, Jan; Christensen, Inge T; Jørgensen, Flemming S; Hovgaard, Lars; Frokjaer, Sven
2007-02-01
A computational model for the prediction of solubilizers' effect on drug partitioning has been developed. Membrane/water partitioning was evaluated by means of immobilized artificial membrane (IAM) chromatography. Four solubilizers were used to alter the partitioning in the IAM column. Two types of molecular descriptors were calculated: 2D descriptors using the MOE software and 3D descriptors using the Volsurf software. Structure-property relationships between each of the two types of descriptors and partitioning were established using partial least squares projection to latent structures (PLS) statistics. Statistically significant relationships between the molecular descriptors and the IAM data were identified. Based on the 2D descriptors, structure-property relationships with R(2)Y=0.99 and Q(2)=0.82-0.83 were obtained for some of the solubilizers. The most important descriptor was related to logP. For the Volsurf 3D descriptors, models with R(2)Y=0.53-0.64 and Q(2)=0.40-0.54 were obtained using five descriptors. The present study showed that it is possible to predict partitioning of substances in an artificial phospholipid membrane, with or without the use of solubilizers.
Signature prediction for model-based automatic target recognition
Keydel, Eric R.; Lee, Shung W.
1996-06-01
The moving and stationary target recognition (MSTAR) model-based automatic target recognition (ATR) system utilizes a paradigm which matches features extracted from an unknown SAR target signature against predictions of those features generated from models of the sensing process and candidate target geometries. The candidate target geometry yielding the best match between predicted and extracted features defines the identity of the unknown target. MSTAR will extend the current model-based ATR state of the art in a number of significant directions. These include: use of Bayesian techniques for evidence accrual, reasoning over target subparts, coarse-to-fine hypothesis search strategies, and explicit reasoning over target articulation, configuration, occlusion, and lay-over. These advances also imply significant technical challenges, particularly for the MSTAR feature prediction module (MPM). In addition to accurate electromagnetics, the MPM must provide traceback between input target geometry and output features, on-line target geometry manipulation, target subpart feature prediction, explicit models for local scene effects, and generation of sensitivity and uncertainty measures for the predicted features. This paper describes the MPM design which is being developed to satisfy these requirements. The overall module structure is presented, along with the specific design elements focused on MSTAR requirements. Particular attention is paid to design elements that enable on-line prediction of features within the time constraints mandated by model-driven ATR. Finally, the current status, development schedule, and further extensions of the module design are described.
A kinetic model for predicting biodegradation.
Dimitrov, S; Pavlov, T; Nedelcheva, D; Reuschenbach, P; Silvani, M; Bias, R; Comber, M; Low, L; Lee, C; Parkerton, T; Mekenyan, O
2007-01-01
Biodegradation plays a key role in the environmental risk assessment of organic chemicals. The need to assess biodegradability of a chemical for regulatory purposes supports the development of a model for predicting the extent of biodegradation at different time frames, in particular the extent of ultimate biodegradation within a '10 day window' criterion as well as estimating biodegradation half-lives. Conceptually this implies expressing the rate of catabolic transformations as a function of time. An attempt to correlate the kinetics of biodegradation with molecular structure of chemicals is presented. A simplified biodegradation kinetic model was formulated by combining the probabilistic approach of the original formulation of the CATABOL model with the assumption of first order kinetics of catabolic transformations. Nonlinear regression analysis was used to fit the model parameters to OECD 301F biodegradation kinetic data for a set of 208 chemicals. The new model allows the prediction of biodegradation multi-pathways, primary and ultimate half-lives and simulation of related kinetic biodegradation parameters such as biological oxygen demand (BOD), carbon dioxide production, and the nature and amount of metabolites as a function of time. The model may also be used for evaluating the OECD ready biodegradability potential of a chemical within the '10-day window' criterion.
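The first-order kinetic assumption the abstract describes maps directly to extent(t) = 1 − exp(−k·t) and half-life = ln(2)/k. The rate constant below is a hypothetical illustration, not a CATABOL output:

```python
import math

# First-order biodegradation kinetics: extent of degradation over time and
# the derived half-life, as used for the "10-day window" style of assessment.
def extent(k, t_days):
    return 1.0 - math.exp(-k * t_days)

def half_life(k):
    return math.log(2.0) / k

k = 0.23  # per day (hypothetical readily degradable chemical)
deg_10_day = extent(k, 10.0)       # fraction degraded within a 10-day window
passes_60pct = deg_10_day >= 0.60  # 60% is a common ready-biodegradability pass level
```

The paper's model goes further by predicting k (and the metabolite pathway) from molecular structure; the kinetics above only show how a given k translates into half-lives and window criteria.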
Disease Prediction Models and Operational Readiness
Energy Technology Data Exchange (ETDEWEB)
Corley, Courtney D.; Pullum, Laura L.; Hartley, David M.; Benedum, Corey M.; Noonan, Christine F.; Rabinowitz, Peter M.; Lancaster, Mary J.
2014-03-19
INTRODUCTION: The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. One of the primary goals of this research was to characterize the viability of biosurveillance models to provide operationally relevant information for decision makers and to identify areas for future research. Two critical characteristics differentiate this work from other infectious disease modeling reviews. First, we reviewed models that attempted to predict the disease event, not merely its transmission dynamics. Second, we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). METHODS: We searched dozens of commercial and government databases and harvested Google search results for eligible models, utilizing terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. The publication dates of the returned search results are bounded by the coverage dates of each database and the date on which the search was performed; however, all searching was completed by December 31, 2010. This returned 13,767 webpages and 12,152 citations. After de-duplication and removal of extraneous material, a core collection of 6,503 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. Next, PNNL's IN-SPIRE visual analytics software was used to cross-correlate these publications with the definition of a biosurveillance model, resulting in the selection of 54 documents that matched the criteria. Ten of these documents, however, dealt purely with disease spread models, inactivation of bacteria, or the modeling of human immune system responses to pathogens rather than predicting disease events. As a result, we systematically reviewed 44 papers and the
Nonlinear model predictive control theory and algorithms
Grüne, Lars
2017-01-01
This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...
Shi, Deheng; Zou, Fenghui; Zhu, Zunlue; Sun, Jinfeng
2016-01-01
In this study, we tried to develop a model to predict the effect of surface oxidization on the normal spectral emissivity of aluminum 5052 at a temperature range of 800 to 910 K and wavelength of 1.5 μm. In experiments, specimens were heated in air for 6 h at certain temperatures. Two platinum-rhodium thermocouples were symmetrically welded onto the front surface of the specimens near the measuring area for accurate monitoring of the temperature at the specimen surface. The temperatures measured by the two thermocouples had an uncertainty of 1 K. The normal spectral emissivity values were measured over the 6-h heating period at temperatures from 800 K to 910 K in increments of 10 K. Strong oscillations in the normal spectral emissivity were observed at each temperature. These oscillations were determined to form by the interference between the radiation stemming from the oxide layer and radiation from the substrate. The present measurements were compared with previous experimental results, and the variation in the normal spectral emissivity at given temperatures was evaluated. The uncertainty of the normal spectral emissivity caused only by the surface oxidization was found to be approximately 12.1% to 21.8%, and the corresponding uncertainty in the temperature caused only by the surface oxidization was approximately 9.1 K to 15.2 K. The model can reproduce the normal spectral emissivity well, including the strong oscillations that occur during the initial heating period.
Predictive Modeling in Actinide Chemistry and Catalysis
Energy Technology Data Exchange (ETDEWEB)
Yang, Ping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-05-16
These are slides from a presentation on predictive modeling in actinide chemistry and catalysis. The following topics are covered in these slides: Structures, bonding, and reactivity (bonding can be quantified by optical probes and theory, and electronic structures and reaction mechanisms of actinide complexes); Magnetic resonance properties (transition metal catalysts with multi-nuclear centers, and NMR/EPR parameters); Moving to more complex systems (surface chemistry of nanomaterials, and interactions of ligands with nanoparticles); Path forward and conclusions.
Probabilistic prediction models for aggregate quarry siting
Robinson, G.R.; Larkins, P.M.
2007-01-01
Weights-of-evidence (WofE) and logistic regression techniques were used in a GIS framework to predict the spatial likelihood (prospectivity) of crushed-stone aggregate quarry development. The joint conditional probability models, based on geology, transportation network, and population density variables, were defined using quarry location and time of development data for the New England States, North Carolina, and South Carolina, USA. The Quarry Operation models describe the distribution of active aggregate quarries, independent of the date of opening. The New Quarry models describe the distribution of aggregate quarries when they open. Because of the small number of new quarries developed in the study areas during the last decade, independent New Quarry models have low parameter estimate reliability. The performance of parameter estimates derived for Quarry Operation models, defined by a larger number of active quarries in the study areas, were tested and evaluated to predict the spatial likelihood of new quarry development. Population density conditions at the time of new quarry development were used to modify the population density variable in the Quarry Operation models to apply to new quarry development sites. The Quarry Operation parameters derived for the New England study area, Carolina study area, and the combined New England and Carolina study areas were all similar in magnitude and relative strength. The Quarry Operation model parameters, using the modified population density variables, were found to be a good predictor of new quarry locations. Both the aggregate industry and the land management community can use the model approach to target areas for more detailed site evaluation for quarry location. The models can be revised easily to reflect actual or anticipated changes in transportation and population features. © International Association for Mathematical Geology 2007.
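The weights-of-evidence half of the approach can be sketched for a single binary evidence layer (say, "within 5 km of a highway"): positive and negative weights are log-ratios of how often the evidence occurs in quarry versus non-quarry cells, and their difference is the contrast. The cell counts below are hypothetical:

```python
import math

# Weights of evidence for one binary layer, from counts of map cells:
#   W+ = ln[ P(evidence | quarry) / P(evidence | no quarry) ]
#   W- = ln[ P(no evidence | quarry) / P(no evidence | no quarry) ]
# Contrast C = W+ - W- summarizes the layer's predictive strength.
def weights_of_evidence(ev_with, ev_without, noev_with, noev_without):
    total_with = ev_with + noev_with          # cells containing a quarry
    total_without = ev_without + noev_without  # cells without a quarry
    w_plus = math.log((ev_with / total_with) / (ev_without / total_without))
    w_minus = math.log((noev_with / total_with) / (noev_without / total_without))
    return w_plus, w_minus, w_plus - w_minus

# Hypothetical counts: 40 of 50 quarry cells lie near a road,
# versus 960 of 9950 non-quarry cells.
w_plus, w_minus, contrast = weights_of_evidence(40, 960, 10, 8990)
```

In a full WofE model, the weights from several (conditionally independent) layers are summed onto prior log-odds to map prospectivity; logistic regression relaxes the independence assumption, which is why the paper uses both.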
Predicting Footbridge Response using Stochastic Load Models
DEFF Research Database (Denmark)
Pedersen, Lars; Frier, Christian
2013-01-01
Walking parameters such as step frequency, pedestrian mass, dynamic load factor, etc. are basically stochastic, although it is quite common to adapt deterministic models for these parameters. The present paper considers a stochastic approach to modeling the action of pedestrians, but when doing s...... as it pinpoints which decisions to be concerned about when the goal is to predict footbridge response. The studies involve estimating footbridge responses using Monte-Carlo simulations and focus is on estimating vertical structural response to single person loading....
Nonconvex Model Predictive Control for Commercial Refrigeration
DEFF Research Database (Denmark)
Hovgaard, Tobias Gybel; Larsen, Lars F.S.; Jørgensen, John Bagterp
2013-01-01
is to minimize the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear, and the constraints are convex. The cost...... the iterations, which is more than fast enough to run in real-time. We demonstrate our method on a realistic model, with a full year simulation and 15 minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost...
Prediction error, ketamine and psychosis: An updated model.
Corlett, Philip R; Honey, Garry D; Fletcher, Paul C
2016-11-01
In 2007, we proposed an explanation of delusion formation as aberrant prediction error-driven associative learning. Further, we argued that the NMDA receptor antagonist ketamine provided a good model for this process. Subsequently, we validated the model in patients with psychosis, relating aberrant prediction error signals to delusion severity. During the ensuing period, we have developed these ideas, drawing on the simple principle that brains build a model of the world and refine it by minimising prediction errors, as well as using it to guide perceptual inferences. While previously we focused on the prediction error signal per se, an updated view takes into account its precision, as well as the precision of prior expectations. With this expanded perspective, we see several possible routes to psychotic symptoms - which may explain the heterogeneity of psychotic illness, as well as the fact that other drugs, with different pharmacological actions, can produce psychotomimetic effects. In this article, we review the basic principles of this model and highlight specific ways in which prediction errors can be perturbed, in particular considering the reliability and uncertainty of predictions. The expanded model explains hallucinations as perturbations of the uncertainty mediated balance between expectation and prediction error. Here, expectations dominate and create perceptions by suppressing or ignoring actual inputs. Negative symptoms may arise due to poor reliability of predictions in service of action. By mapping from biology to belief and perception, the account proffers new explanations of psychosis. However, challenges remain. We attempt to address some of these concerns and suggest future directions, incorporating other symptoms into the model, building towards better understanding of psychosis. © The Author(s) 2016.
Predictive In Vivo Models for Oncology.
Behrens, Diana; Rolff, Jana; Hoffmann, Jens
2016-01-01
Experimental oncology research and preclinical drug development both substantially require specific, clinically relevant in vitro and in vivo tumor models. The increasing knowledge about the heterogeneity of cancer has required a substantial restructuring of the test systems for the different stages of development. To cope with the complexity of the disease, larger panels of patient-derived tumor models have to be implemented and extensively characterized. Together with individual genetically engineered tumor models and supported by core functions for expression profiling and data analysis, an integrated discovery process has been generated for predictive and personalized drug development. Improved “humanized” mouse models should help to overcome current limitations imposed by the xenogeneic barrier between humans and mice. Establishment of a functional human immune system and a corresponding human microenvironment in laboratory animals will strongly support further research. Drug discovery, systems biology, and translational research are moving closer together to address all the new hallmarks of cancer, increase the success rate of drug development, and increase the predictive value of preclinical models.
Constructing predictive models of human running.
Maus, Horst-Moritz; Revzen, Shai; Guckenheimer, John; Ludwig, Christian; Reger, Johann; Seyfarth, Andre
2015-02-06
Running is an essential mode of human locomotion, during which ballistic aerial phases alternate with phases when a single foot contacts the ground. The spring-loaded inverted pendulum (SLIP) provides a starting point for modelling running, and generates ground reaction forces that resemble those of the centre of mass (CoM) of a human runner. Here, we show that while SLIP reproduces within-step kinematics of the CoM in three dimensions, it fails to reproduce stability and predict future motions. We construct SLIP control models using data-driven Floquet analysis, and show how these models may be used to obtain predictive models of human running with six additional states comprising the position and velocity of the swing-leg ankle. Our methods are general, and may be applied to any rhythmic physical system. We provide an approach for identifying an event-driven linear controller that approximates an observed stabilization strategy, and for producing a reduced-state model which closely recovers the observed dynamics. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
Statistical Seasonal Sea Surface based Prediction Model
Suarez, Roberto; Rodriguez-Fonseca, Belen; Diouf, Ibrahima
2014-05-01
The interannual variability of the sea surface temperature (SST) plays a key role in the strongly seasonal rainfall regime of the West African region. The predictability of the seasonal cycle of rainfall is a field widely discussed by the scientific community, with results that fail to be satisfactory due to the difficulty dynamical models have in reproducing the behavior of the Inter Tropical Convergence Zone (ITCZ). To tackle this problem, a statistical model based on oceanic predictors has been developed at the Universidad Complutense de Madrid (UCM) with the aim of complementing and enhancing the predictability of the West African Monsoon (WAM) as an alternative to the coupled models. The model, called S4CAST (SST-based Statistical Seasonal Forecast), is based on discriminant analysis techniques, specifically Maximum Covariance Analysis (MCA) and Canonical Correlation Analysis (CCA). Beyond the application of the model to the prediction of rainfall in West Africa, its use extends to a range of different oceanic, atmospheric and health-related parameters influenced by the temperature of the sea surface as a defining factor of variability.
Spirka, Thomas; Kenton, Kimberly; Brubaker, Linda; Damaser, Margot S
2013-01-01
Stress urinary incontinence is a condition that affects mainly women and is characterized by the involuntary loss of urine in conjunction with an increase in abdominal pressure but in the absence of a bladder contraction. In spite of the large number of women affected by this condition, little is known regarding the mechanics associated with the maintenance of continence in women. Urodynamic measurements of the pressure acting on the bladder and the pressures developed within the bladder and the urethra offer a potential starting point for constructing computational models of the bladder and urethra during stress events. The measured pressures can be utilized in these models to provide information to specify loads and validate the models. The main goals of this study were to investigate the feasibility of incorporating human urodynamic pressure data into a computational model of the bladder and the urethra during a cough and to determine if the resulting model could be validated through comparison of predicted and measured vesical pressure. The results of this study indicated that simplified models can predict vesical pressures that differ by less than 5 cmH(2)O from measured pressures. In addition, varying material properties had a minimal impact on the vesical pressure and displacements predicted by the model. The latter finding limits the use of vesical pressure as a validation criterion, since different parameters can yield similar results in the same model. However, it also ensures that the outcome of our models is not highly sensitive to tissue material properties, which are not well characterized.
Robust Model Predictive Control of a Wind Turbine
DEFF Research Database (Denmark)
Mirzaei, Mahmood; Poulsen, Niels Kjølstad; Niemann, Hans Henrik
2012-01-01
In this work the problem of robust model predictive control (robust MPC) of a wind turbine in the full load region is considered. A minimax robust MPC approach is used to tackle the problem. Nonlinear dynamics of the wind turbine are derived by combining blade element momentum (BEM) theory...... and first principle modeling of the turbine flexible structure. Thereafter the nonlinear model is linearized using Taylor series expansion around system operating points. Operating points are determined by effective wind speed and an extended Kalman filter (EKF) is employed to estimate this. In addition...... of the uncertain system is employed and a norm-bounded uncertainty model is used to formulate a minimax model predictive control. The resulting optimization problem is simplified by semidefinite relaxation and the controller obtained is applied on a full complexity, high fidelity wind turbine model. Finally...
Predictive modeling by the cerebellum improves proprioception.
Bhanpuri, Nasir H; Okamura, Allison M; Bastian, Amy J
2013-09-04
Because sensation is delayed, real-time movement control requires not just sensing, but also predicting limb position, a function hypothesized for the cerebellum. Such cerebellar predictions could contribute to perception of limb position (i.e., proprioception), particularly when a person actively moves the limb. Here we show that human cerebellar patients have proprioceptive deficits compared with controls during active movement, but not when the arm is moved passively. Furthermore, when healthy subjects move in a force field with unpredictable dynamics, they have active proprioceptive deficits similar to cerebellar patients. Therefore, muscle activity alone is likely insufficient to enhance proprioception and predictability (i.e., an internal model of the body and environment) is important for active movement to benefit proprioception. We conclude that cerebellar patients have an active proprioceptive deficit consistent with disrupted movement prediction rather than an inability to generally enhance peripheral proprioceptive signals during action and suggest that active proprioceptive deficits should be considered a fundamental cerebellar impairment of clinical importance.
Rutten, M.J.M.; Bovenhuis, H.; Arendonk, van J.A.M.
2010-01-01
Fourier transform infrared spectroscopy is a suitable method to determine bovine milk fat composition. However, the determination of fat composition by gas chromatography, required for calibration of the infrared prediction model, is expensive and labor intensive. It has recently been shown that the
A prediction model for Clostridium difficile recurrence
Directory of Open Access Journals (Sweden)
Francis D. LaBarbera
2015-02-01
Background: Clostridium difficile infection (CDI) is a growing problem in the community and hospital setting. Its incidence has been on the rise over the past two decades, and it is quickly becoming a major concern for the health care system. A high rate of recurrence is one of the major hurdles in the successful treatment of C. difficile infection. There have been few studies that have looked at patterns of recurrence. The studies currently available have shown a number of risk factors associated with C. difficile recurrence (CDR); however, there is little consensus on the impact of most of the identified risk factors. Methods: Our study was a retrospective chart review of 198 patients diagnosed with CDI via polymerase chain reaction (PCR) from February 2009 to June 2013. We used a machine learning algorithm called the Random Forest (RF) to analyze all of the factors proposed to be associated with CDR. This model is capable of making predictions based on a large number of variables, and has outperformed numerous other models and statistical methods. Results: We developed a model that was able to predict CDR with a sensitivity of 83.3%, specificity of 63.1%, and area under the curve of 82.6%, results comparable to those of other studies that have used the RF model. Conclusions: We hope that in the future, machine learning algorithms, such as the RF, will see wider application.
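As a hedged illustration of the performance figures quoted above, the sketch below reproduces the sensitivity/specificity arithmetic from a binary confusion matrix; the counts are hypothetical, chosen only to match the reported percentages, and are not taken from the study's data.

```python
# Sensitivity and specificity of a binary recurrence classifier.
# The confusion-matrix counts below are hypothetical, chosen only to
# illustrate the arithmetic behind the reported 83.3% / 63.1% figures.
def sensitivity(tp, fn):
    # true-positive rate: recurrences correctly flagged
    return tp / (tp + fn)

def specificity(tn, fp):
    # true-negative rate: non-recurrences correctly cleared
    return tn / (tn + fp)

tp, fn = 25, 5    # recurrences correctly / incorrectly classified
tn, fp = 106, 62  # non-recurrences correctly / incorrectly classified

print(round(sensitivity(tp, fn), 3))  # 0.833
print(round(specificity(tn, fp), 3))  # 0.631
```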
Gamma-Ray Pulsars Models and Predictions
Harding, A K
2001-01-01
Pulsed emission from gamma-ray pulsars originates inside the magnetosphere, from radiation by charged particles accelerated near the magnetic poles or in the outer gaps. In polar cap models, the high energy spectrum is cut off by magnetic pair production above an energy that is dependent on the local magnetic field strength. While most young pulsars with surface fields in the range B = 10^{12} - 10^{13} G are expected to have high energy cutoffs around several GeV, the gamma-ray spectra of old pulsars having lower surface fields may extend to 50 GeV. Although the gamma-ray emission of older pulsars is weaker, detecting pulsed emission at high energies from nearby sources would be an important confirmation of polar cap models. Outer gap models predict more gradual high-energy turnovers at around 10 GeV, but also predict an inverse Compton component extending to TeV energies. Detection of pulsed TeV emission, which would not survive attenuation at the polar caps, is thus an important test of outer gap models. N...
Artificial Neural Network Model for Predicting Compressive
Directory of Open Access Journals (Sweden)
Salim T. Yousif
2013-05-01
Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing the model with unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20% and that 88% of the output results have absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results showed that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
Modeling and Prediction of Krueger Device Noise
Guo, Yueping; Burley, Casey L.; Thomas, Russell H.
2016-01-01
This paper presents the development of a noise prediction model for aircraft Krueger flap devices that are considered as alternatives to leading edge slotted slats. The prediction model decomposes the total Krueger noise into four components, generated by the unsteady flows, respectively, in the cove under the pressure side surface of the Krueger, in the gap between the Krueger trailing edge and the main wing, around the brackets supporting the Krueger device, and around the cavity on the lower side of the main wing. For each noise component, the modeling follows a physics-based approach that aims at capturing the dominant noise-generating features in the flow and developing correlations between the noise and the flow parameters that control the noise generation processes. The far field noise is modeled using each of the four noise component's respective spectral functions, far field directivities, Mach number dependencies, component amplitudes, and other parametric trends. Preliminary validations are carried out by using small scale experimental data, and two applications are discussed; one for conventional aircraft and the other for advanced configurations. The former focuses on the parametric trends of Krueger noise on design parameters, while the latter reveals its importance in relation to other airframe noise components.
A generative model for predicting terrorist incidents
Verma, Dinesh C.; Verma, Archit; Felmlee, Diane; Pearson, Gavin; Whitaker, Roger
2017-05-01
A major concern in coalition peace-support operations is the incidence of terrorist activity. In this paper, we propose a generative model for the occurrence of terrorist incidents, and illustrate that an increase in diversity, as measured by the number of different social groups to which an individual belongs, is inversely correlated with the likelihood of a terrorist incident in the society. A generative model is one that can predict the likelihood of events in new contexts, as opposed to statistical models, which are used to predict future incidents based on the history of incidents in an existing context. Generative models can be useful in planning for persistent Intelligence, Surveillance and Reconnaissance (ISR) since they allow an estimation of the regions in the theater of operation where terrorist incidents may arise, and thus can be used to better allocate the assignment and deployment of ISR assets. In this paper, we present a taxonomy of terrorist incidents, identify factors related to the occurrence of terrorist incidents, and provide a mathematical analysis calculating the likelihood of occurrence of terrorist incidents in three common real-life scenarios arising in peace-keeping operations.
Predictability of Shanghai Stock Market by Agent-based Mix-game Model
Gou, C
2005-01-01
This paper reports the effort of using an agent-based mix-game model to predict financial time series. It introduces the prediction methodology based on the mix-game model and gives an example of its application to forecasting the Shanghai Index. The results show that this prediction methodology is effective and that the agent-based mix-game model is potentially a good model for predicting financial-market time series.
Objective calibration of numerical weather prediction models
Voudouri, A.; Khain, P.; Carmona, I.; Bellprat, O.; Grazzini, F.; Avgoustoglou, E.; Bettems, J. M.; Kaufmann, P.
2017-07-01
Numerical weather prediction (NWP) and climate models use parameterization schemes for physical processes, which often include free or poorly confined parameters. Model developers normally calibrate the values of these parameters subjectively to improve the agreement of forecasts with available observations, a procedure referred to as expert tuning. A practicable objective multivariate calibration method built on a quadratic meta-model (MM), which has been applied to a regional climate model (RCM), has been shown to be at least as good as expert tuning. Based on these results, an approach for applying the methodology to an NWP model is presented in this study. Challenges in transferring the methodology from RCM to NWP are not restricted to the use of higher resolution and different time scales. The sensitivity of the NWP model quality with respect to the model parameter space has to be clarified, and the overall procedure must be optimized in terms of the computing resources required for the calibration of an NWP model. Three free model parameters, affecting mainly the turbulence parameterization schemes, were originally selected with respect to their influence on variables associated with daily forecasts, such as daily minimum and maximum 2 m temperature as well as 24 h accumulated precipitation. Preliminary results indicate that the calibration is both affordable in terms of computer resources and meaningful in terms of improved forecast quality. In addition, the proposed methodology has the advantage of being a replicable procedure that can be applied when an updated model version is launched and/or to customize the same model implementation over different climatological areas.
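The quadratic meta-model (MM) idea can be sketched in one dimension. This is a minimal illustration, not the authors' multivariate method: fit an exact quadratic through three (parameter, forecast-error) samples and take its minimiser as the calibrated value. The score function below is hypothetical.

```python
# One-dimensional sketch of quadratic meta-model calibration: an exact
# quadratic through three samples (Newton divided differences), whose
# minimiser is returned as the calibrated parameter value.
def quadratic_minimiser(p0, s0, p1, s1, p2, s2):
    d1 = (s1 - s0) / (p1 - p0)          # first divided difference
    d2 = (s2 - s1) / (p2 - p1)
    c2 = (d2 - d1) / (p2 - p0)          # leading (quadratic) coefficient
    if c2 <= 0:
        raise ValueError("samples do not bracket a minimum")
    # zero of the derivative of s0 + d1*(p-p0) + c2*(p-p0)*(p-p1)
    return (p0 + p1) / 2 - d1 / (2 * c2)

# hypothetical forecast-error score, minimised at parameter value 1.5
score = lambda p: (p - 1.5) ** 2 + 2.0
print(quadratic_minimiser(0.0, score(0.0), 1.0, score(1.0), 2.0, score(2.0)))
# 1.5
```

In the real procedure the surrogate is fitted over several parameters jointly from NWP runs, but the principle (cheap surrogate, analytic minimiser) is the same.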
Prediction models from CAD models of 3D objects
Camps, Octavia I.
1992-11-01
In this paper we present a probabilistic prediction based approach for CAD-based object recognition. Given a CAD model of an object, the PREMIO system combines techniques of analytic graphics and physical models of lights and sensors to predict how features of the object will appear in images. In nearly 4,000 experiments on analytically-generated and real images, we show that in a semi-controlled environment, predicting the detectability of features of the image can successfully guide a search procedure to make informed choices of model and image features in its search for correspondences that can be used to hypothesize the pose of the object. Furthermore, we provide a rigorous experimental protocol that can be used to determine the optimal number of correspondences to seek so that the probability of failing to find a pose and of finding an inaccurate pose are minimized.
Model predictive control of MSMPR crystallizers
Moldoványi, Nóra; Lakatos, Béla G.; Szeifert, Ferenc
2005-02-01
A multi-input-multi-output (MIMO) control problem of isothermal continuous crystallizers is addressed in order to create an adequate model-based control system. The moment equation model of mixed suspension, mixed product removal (MSMPR) crystallizers that forms a dynamical system is used, the state of which is represented by the vector of six variables: the first four leading moments of the crystal size, solute concentration and solvent concentration. Hence, the time evolution of the system occurs in a bounded region of the six-dimensional phase space. The controlled variables are the mean size of the grain; the crystal size-distribution and the manipulated variables are the input concentration of the solute and the flow rate. The controllability and observability as well as the coupling between the inputs and the outputs was analyzed by simulation using the linearized model. It is shown that the crystallizer is a nonlinear MIMO system with strong coupling between the state variables. Considering the possibilities of the model reduction, a third-order model was found quite adequate for the model estimation in model predictive control (MPC). The mean crystal size and the variance of the size distribution can be nearly separately controlled by the residence time and the inlet solute concentration, respectively. By seeding, the controllability of the crystallizer increases significantly, and the overshoots and the oscillations become smaller. The results of the controlling study have shown that the linear MPC is an adaptable and feasible controller of continuous crystallizers.
Predictions of titanium alloy properties using thermodynamic modeling tools
Zhang, F.; Xie, F.-Y.; Chen, S.-L.; Chang, Y. A.; Furrer, D.; Venkatesh, V.
2005-12-01
Thermodynamic modeling tools have become essential in understanding the effect of alloy chemistry on the final microstructure of a material. Implementation of such tools to improve titanium processing via parameter optimization has resulted in significant cost savings through the elimination of shop/laboratory trials and tests. In this study, a thermodynamic modeling tool developed at CompuTherm, LLC, is being used to predict β transus, phase proportions, phase chemistries, partitioning coefficients, and phase boundaries of multicomponent titanium alloys. This modeling tool includes Pandat, software for multicomponent phase equilibrium calculations, and PanTitanium, a thermodynamic database for titanium alloys. Model predictions are compared with experimental results for one α-β alloy (Ti-64) and two near-β alloys (Ti-17 and Ti-10-2-3). The alloying elements, especially the interstitial elements O, N, H, and C, have been shown to have a significant effect on the β transus temperature, and are discussed in more detail herein.
An evaporation duct prediction model coupled with the MM5
Institute of Scientific and Technical Information of China (English)
JIAO Lin; ZHANG Yonggang
2015-01-01
Evaporation duct is an abnormal refractive phenomenon in the marine atmospheric boundary layer. It has been generally accepted that the evaporation duct prominently affects the performance of electronic equipment over the sea because of its wide distribution and frequent occurrence. It has become a research focus of navies all over the world. At present, the diagnostic models of the evaporation duct are all based on the Monin-Obukhov similarity theory, differing only in the flux and character scale calculations in the surface layer. These models are applicable to stationary and uniform open sea areas without considering alongshore effects. This paper introduces the nonlinear factor av and the gust wind term wg into the Babin model, and thus extends the evaporation duct diagnostic model to offshore areas under extremely low wind speeds. In addition, an evaporation duct prediction model is designed and coupled with the fifth-generation mesoscale model (MM5). Tower observational data and radar data at the Pingtan island of Fujian Province on May 25–26, 2002 were used to validate the forecast results. The outputs of the prediction model agree with the observations from 0 to 48 h. The relative error of the predicted evaporation duct height is 19.3% and the prediction results are consistent with the radar detection.
A Novel Trigger Model for Sales Prediction with Data Mining Techniques
Directory of Open Access Journals (Sweden)
Wenjie Huang
2015-05-01
Previous research on sales prediction has always used a single prediction model. However, no single model can perform the best for all kinds of merchandise. Accurate prediction results for just one commodity are meaningless to sellers. A general prediction for all commodities is needed. This paper illustrates a novel trigger system that can match certain kinds of commodities with a prediction model to give better prediction results for different kinds of commodities. We find some related factors for classification. Several classical prediction models are included as basic models for classification. We compared the results of the trigger model with other single models. The results show that the accuracy of the trigger model is better than that of a single model. This has implications for business in that sellers can utilize the proposed system to effectively predict the sales of several commodities.
Energy Technology Data Exchange (ETDEWEB)
Kaga, K.; Yamada, K.; Koto, S.; Ogushi, T. [Mitsubishi Electric Corp. Tokyo (Japan)
2000-07-25
In this paper, a thermal network method using an effective specific heat model of refrigerant with phase change is proposed for predicting the capacity of a plate-fin-and-tube heat exchanger. The effective specific heat model is suited to obtaining accurate heat exchange capacities for a condenser with a small number of elements. By comparing calculated results with experiment, it is shown that the error in the calculated condenser capacity is less than 1% when the degree of sub-cooling at the refrigerant outlet ranges from 15 K to 22 K. (author)
Simple predictions from multifield inflationary models.
Easther, Richard; Frazer, Jonathan; Peiris, Hiranya V; Price, Layne C
2014-04-25
We explore whether multifield inflationary models make unambiguous predictions for fundamental cosmological observables. Focusing on N-quadratic inflation, we numerically evaluate the full perturbation equations for models with 2, 3, and O(100) fields, using several distinct methods for specifying the initial values of the background fields. All scenarios are highly predictive, with the probability distribution functions of the cosmological observables becoming more sharply peaked as N increases. For N=100 fields, 95% of our Monte Carlo samples fall in the ranges ns∈(0.9455,0.9534), α∈(-9.741,-7.047)×10-4, r∈(0.1445,0.1449), and riso∈(0.02137,3.510)×10-3 for the spectral index, running, tensor-to-scalar ratio, and isocurvature-to-adiabatic ratio, respectively. The expected amplitude of isocurvature perturbations grows with N, raising the possibility that many-field models may be sensitive to postinflationary physics and suggesting new avenues for testing these scenarios.
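The quoted 95% ranges are central intervals over Monte Carlo samples. A minimal sketch, assuming only that such an interval is read off the 2.5th and 97.5th percentiles of the sampled distribution (the draws below are synthetic, not the paper's data):

```python
import random

# Hypothetical Monte Carlo draws of a spectral index n_s; a central 95%
# range corresponds to the 2.5th and 97.5th percentiles of the samples.
random.seed(0)
samples = sorted(random.gauss(0.9495, 0.002) for _ in range(10_000))

def percentile(sorted_xs, q):
    # nearest-rank percentile (one simple convention among several)
    idx = min(len(sorted_xs) - 1, max(0, int(q / 100 * len(sorted_xs))))
    return sorted_xs[idx]

lo, hi = percentile(samples, 2.5), percentile(samples, 97.5)
print(lo < 0.9495 < hi)  # True: the interval brackets the sample mean
```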
Ben Yaghlene, H; Leguerinel, I; Hamdi, M; Mafart, P
2009-07-31
In this study, predictive microbiology and food engineering were combined in order to develop a new analytical model predicting bacterial growth under dynamic temperature conditions. The proposed model associates a simplified primary bacterial growth model without lag, the secondary Ratkowsky "square root" model and a simplified two-parameter heat transfer model for an infinite slab. The model takes into consideration the product thickness, its thermal properties, the ambient air temperature, the convective heat transfer coefficient and the growth parameters of the microorganism of concern. For the validation of the overall model, five different combinations of ambient air temperature (ranging from 8 degrees C to 12 degrees C), product thickness (ranging from 1 cm to 6 cm) and convective heat transfer coefficient (ranging from 8 W/(m(2) K) to 60 W/(m(2) K)) were tested during a cooling procedure. Moreover, three different ambient air temperature scenarios assuming alternated cooling and heating stages, drawn from real refrigerated food processes, were tested. General agreement between predicted and observed bacterial growth was obtained, and less than 5% of the experimental data fell outside the 95% confidence bands estimated by the bootstrap percentile method at all tested conditions. Accordingly, the overall model was successfully validated for isothermal and dynamic refrigeration cycles, allowing for dynamic temperature changes at the centre and at the surface of the product. The major impact of the convective heat transfer coefficient and the product thickness on bacterial growth during product cooling was demonstrated. For instance, the time needed for the same level of bacterial growth to be reached at the product's half thickness was estimated to be 5 and 16.5 h at low and high convection levels, respectively. Moreover, simulation results demonstrated that the predicted bacterial growth at the ambient air temperature cannot be assumed to be
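A minimal sketch of the secondary Ratkowsky "square root" model driven by a dynamic temperature profile, not the authors' validated overall model: the parameter values and the cooling profile below are assumed for illustration only.

```python
# Lag-free primary growth dlnN/dt = mu(T) combined with the Ratkowsky
# secondary model sqrt(mu) = b * (T - Tmin), integrated by Euler steps
# over a time-varying temperature profile. b, Tmin and the profile are
# hypothetical, not fitted values from the study.
b, Tmin = 0.023, 2.0  # Ratkowsky parameters (assumed)

def mu(T):
    # specific growth rate (per hour); zero below the minimum temperature
    return (b * (T - Tmin)) ** 2 if T > Tmin else 0.0

def grow(lnN0, temps, dt=1.0):
    lnN = lnN0
    for T in temps:  # one temperature reading per time step (hours)
        lnN += mu(T) * dt
    return lnN

cooling = [12.0 - 0.5 * t for t in range(9)]  # 12 °C cooling to 8 °C
print(grow(0.0, cooling) > 0.0)  # True: growth accumulates while T > Tmin
```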
Predictions of models for environmental radiological assessment
Energy Technology Data Exchange (ETDEWEB)
Peres, Sueli da Silva; Lauria, Dejanira da Costa, E-mail: suelip@ird.gov.br, E-mail: dejanira@irg.gov.br [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Servico de Avaliacao de Impacto Ambiental, Rio de Janeiro, RJ (Brazil); Mahler, Claudio Fernando [Coppe. Instituto Alberto Luiz Coimbra de Pos-Graduacao e Pesquisa de Engenharia, Universidade Federal do Rio de Janeiro (UFRJ) - Programa de Engenharia Civil, RJ (Brazil)
2011-07-01
In the field of environmental impact assessment, models are used for estimating source term, environmental dispersion and transfer of radionuclides, exposure pathways, radiation dose and the risk to human beings. Although it is recognized that specific local data are important for improving the quality of dose assessment results, obtaining such data can in fact be very difficult and expensive. Sources of uncertainty are numerous, among which we can cite: the subjectivity of modelers, exposure scenarios and pathways, the codes used and general parameters. The various models available use different mathematical approaches with different complexities that can result in different predictions. Thus, for the same inputs, different models can produce very different outputs. This paper briefly presents the main advances in the field of environmental radiological assessment that aim to improve the reliability of the models used in the assessment of environmental radiological impact. A model intercomparison exercise supplied incompatible results for ¹³⁷Cs and ⁶⁰Co, highlighting the need to develop reference methodologies for environmental radiological assessment that allow dose estimates to be compared on a common basis. The results of the intercomparison exercise are presented briefly. (author)
Energy Technology Data Exchange (ETDEWEB)
Gillmann, Clarissa, E-mail: clarissa.gillmann@med.uni-heidelberg.de [Department of Radiation Oncology and Radiation Therapy, Heidelberg University Hospital, Heidelberg (Germany); Jäkel, Oliver [Department of Radiation Oncology and Radiation Therapy, Heidelberg University Hospital, Heidelberg (Germany); Heidelberg Ion Beam Therapy Center (HIT), Heidelberg (Germany); Department of Medical Physics in Radiation Oncology, German Cancer Research Center (DKFZ), Heidelberg (Germany); Schlampp, Ingmar [Department of Radiation Oncology and Radiation Therapy, Heidelberg University Hospital, Heidelberg (Germany); Karger, Christian P. [Department of Medical Physics in Radiation Oncology, German Cancer Research Center (DKFZ), Heidelberg (Germany)
2014-04-01
Purpose: To compare the relative biological effectiveness (RBE)-weighted tolerance doses for temporal lobe reactions after carbon ion radiation therapy using 2 different versions of the local effect model (LEM I vs LEM IV) for the same patient collective under identical conditions. Methods and Materials: In a previous study, 59 patients were investigated, of whom 10 experienced temporal lobe reactions (TLR) after carbon ion radiation therapy for low-grade skull-base chordoma and chondrosarcoma at Helmholtzzentrum für Schwerionenforschung (GSI) in Darmstadt, Germany in 2002 and 2003. TLR were detected as visible contrast enhancements on T1-weighted MRI images within a median follow-up time of 2.5 years. Although the derived RBE-weighted temporal lobe doses were based on the clinically applied LEM I, we have now recalculated the RBE-weighted dose distributions using LEM IV and derived dose-response curves with Dmax,V-1 cm³ (the RBE-weighted maximum dose in the remaining temporal lobe volume, excluding the volume of 1 cm³ with the highest dose) as an independent dosimetric variable. The resulting RBE-weighted tolerance doses were compared with those of the previous study to assess the clinical impact of LEM IV relative to LEM I. Results: The dose-response curve of LEM IV is shifted toward higher values compared to that of LEM I. The RBE-weighted tolerance dose for a 5% complication probability (TD₅) increases from 68.8 ± 3.3 to 78.3 ± 4.3 Gy (RBE) for LEM IV as compared to LEM I. Conclusions: LEM IV predicts a clinically significant increase of the RBE-weighted tolerance doses for the temporal lobe as compared to the currently applied LEM I. The limited available photon data do not allow a final conclusion as to whether the RBE predictions of LEM I or LEM IV better fit clinical experience in photon therapy. The decision about a future clinical application of LEM IV therefore requires additional analysis of temporal lobe reactions in a
Predicting Protein Secondary Structure with Markov Models
DEFF Research Database (Denmark)
Fischer, Paul; Larsen, Simon; Thomsen, Claus
2004-01-01
The primary structure of a protein is the sequence of its amino acids. The secondary structure describes structural properties of the molecule, such as which parts of it form sheets, helices or coils. Spatial and other properties are described by the higher-order structures. The classification task we are considering here is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained...
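The classification scheme this abstract describes can be sketched concisely: train one first-order Markov model per structural class, then assign a fragment to the class whose model gives the highest log-likelihood. The class labels, training fragments, and smoothing floor below are illustrative assumptions, not data from the paper.

```python
from collections import defaultdict
import math

def train_markov(sequences):
    """Estimate first-order transition log-probabilities from amino-acid
    fragments known to belong to one structural class."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    model = {}
    for a, nxt in counts.items():
        total = sum(nxt.values())
        model[a] = {b: math.log(c / total) for b, c in nxt.items()}
    return model

def score(model, fragment, floor=math.log(1e-6)):
    """Log-likelihood of a fragment under one class model; transitions
    unseen in training are smoothed with a small floor probability."""
    return sum(model.get(a, {}).get(b, floor)
               for a, b in zip(fragment, fragment[1:]))

def classify(models, fragment):
    """Pick the class (e.g. 'helix', 'sheet', 'coil') whose model scores highest."""
    return max(models, key=lambda name: score(models[name], fragment))
```

In practice one would slide a window over the unknown sequence and classify each window, which is how different parts of one sequence receive different structural labels.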
Hierarchical Model Predictive Control for Resource Distribution
DEFF Research Database (Denmark)
Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob
2010-01-01
This paper deals with hierarchical model predictive control (MPC) of distributed systems. A three-level hierarchical approach is proposed, consisting of a high-level MPC controller, a second level of so-called aggregators controlled by an online MPC-like algorithm, and a lower level of autonomous ... facilitates plug-and-play addition of subsystems without redesign of any controllers. The method is supported by a number of simulations featuring a three-level smart-grid power control system for a small isolated power grid.
Explicit model predictive control accuracy analysis
Knyazev, Andrew; Zhu, Peizhen; Di Cairano, Stefano
2015-01-01
Model Predictive Control (MPC) can efficiently control constrained systems in real-time applications. The MPC feedback law for a linear system with linear inequality constraints can be explicitly computed off-line, which results in an off-line partition of the state space into non-overlapping convex regions, with an affine control law associated with each region of the partition. An actual implementation of this explicit MPC in low-cost micro-controllers requires the data to be "quantized", i.e. repre...
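The run-time side of explicit MPC reduces to point location: find the region of the partition containing the current state and apply that region's affine law. A minimal sketch with a hypothetical one-dimensional, two-region partition (a real controller would use a search tree rather than a linear scan):

```python
import numpy as np

def explicit_mpc_control(x, regions):
    """Evaluate an explicit MPC law: each region is a tuple (A, b, F, g)
    describing the polyhedron {x : A x <= b} and the affine control
    u = F x + g that is optimal inside it."""
    for A, b, F, g in regions:
        if np.all(A @ x <= b + 1e-9):  # small tolerance for boundary states
            return F @ x + g
    raise ValueError("state lies outside the stored state-space partition")
```

Quantizing the stored A, b, F, g to fixed-point words, as the abstract discusses, perturbs both the region tests and the returned control values, which is where the accuracy analysis comes in.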
The ARIC predictive model reliably predicted risk of type II diabetes in Asian populations
Directory of Open Access Journals (Sweden)
Chin Calvin
2012-04-01
Background: Identification of high-risk individuals is crucial for effective implementation of type 2 diabetes mellitus prevention programs. Several studies have shown that multivariable predictive functions perform as well as the 2-hour post-challenge glucose in identifying these high-risk individuals. The performance of these functions in Asian populations, where the rise in prevalence of type 2 diabetes mellitus is expected to be the greatest in the next several decades, is relatively unknown. Methods: Using data from three Asian populations in Singapore, we compared the performance of three multivariate predictive models in terms of their discriminatory power and calibration quality: the San Antonio Health Study model, the Atherosclerosis Risk in Communities (ARIC) model and the Framingham model. Results: The San Antonio Health Study and ARIC models had better discriminative power than using only fasting plasma glucose or the 2-hour post-challenge glucose. However, the Framingham model did not perform significantly better than fasting glucose or the 2-hour post-challenge glucose. All published models suffered from poor calibration. After recalibration, the ARIC model achieved good calibration, the San Antonio Health Study model showed a significant lack of fit in females and the Framingham model showed a significant lack of fit in both females and males. Conclusions: We conclude that adoption of the ARIC model for Asian populations is feasible and highly recommended when local prospective data are unavailable.
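Recalibration of the kind reported here usually means refitting an intercept and slope on the published model's linear predictor in the local population. A sketch using plain gradient ascent on the Bernoulli log-likelihood; the data and learning rate are illustrative, not those of the ARIC model.

```python
import math

def risk(lp):
    """Logistic link: convert a linear predictor to a predicted probability."""
    return 1.0 / (1.0 + math.exp(-lp))

def recalibrate(lps, outcomes, lr=0.1, iters=5000):
    """Refit intercept a and slope b of p = logistic(a + b*lp) by gradient
    ascent, i.e. logistic recalibration of an existing risk score."""
    a, b = 0.0, 1.0
    n = len(lps)
    for _ in range(iters):
        ga = gb = 0.0
        for lp, y in zip(lps, outcomes):
            e = y - risk(a + b * lp)   # residual on the probability scale
            ga += e
            gb += e * lp
        a += lr * ga / n
        b += lr * gb / n
    return a, b
```

A fitted slope well below 1 indicates the original model's predictions are too extreme in the new population, while an intercept shift corrects calibration-in-the-large.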
Critical conceptualism in environmental modeling and prediction.
Christakos, G
2003-10-15
Many important problems in environmental science and engineering are of a conceptual nature. Research and development, however, often becomes so preoccupied with technical issues, which are themselves fascinating, that it neglects essential methodological elements of conceptual reasoning and theoretical inquiry. This work suggests that valuable insight into environmental modeling can be gained by means of critical conceptualism which focuses on the software of human reason and, in practical terms, leads to a powerful methodological framework of space-time modeling and prediction. A knowledge synthesis system develops the rational means for the epistemic integration of various physical knowledge bases relevant to the natural system of interest in order to obtain a realistic representation of the system, provide a rigorous assessment of the uncertainty sources, generate meaningful predictions of environmental processes in space-time, and produce science-based decisions. No restriction is imposed on the shape of the distribution model or the form of the predictor (non-Gaussian distributions, multiple-point statistics, and nonlinear models are automatically incorporated). The scientific reasoning structure underlying knowledge synthesis involves teleologic criteria and stochastic logic principles which have important advantages over the reasoning method of conventional space-time techniques. Insight is gained in terms of real world applications, including the following: the study of global ozone patterns in the atmosphere using data sets generated by instruments on board the Nimbus 7 satellite and secondary information in terms of total ozone-tropopause pressure models; the mapping of arsenic concentrations in the Bangladesh drinking water by assimilating hard and soft data from an extensive network of monitoring wells; and the dynamic imaging of probability distributions of pollutants across the Kalamazoo river.
Directory of Open Access Journals (Sweden)
TM Shafey
2015-12-01
The relationships between egg measurements [egg weight (EGWT), egg width (EGWD), egg shape index (EGSI), egg volume (EGV) and egg density (EGD)] and egg components [eggshell (SWT), yolk (YWT) and albumen (AWT)] were investigated in laying hens at 32, 45, and 59 weeks of age, with the objective of managing multicollinearity (MC) using stepwise regression (SR) and ridge regression (RR) analyses. There were significant correlations among egg traits that led to MC problems in all eggs. Hen age influenced egg characteristics and the magnitude of the correlations among them. Eggs produced at older ages had significantly (p<0.01) higher EGWT, EGWD, EGV, YWT and AWT than those produced at younger ages. The SR model alleviated the MC problem in eggs produced at 32 weeks, with a condition index greater than 30, and one predictor (EGWT) gave a model fit that predicted egg components with R2 ranging from 60 to 99%. The SR model for eggs produced at 45 and 59 weeks indicated an MC problem, with variance inflation factor (VIF) values greater than 10, and four predictors (EGWT, EGWD, EGV and EGD) gave a model fit that significantly predicted egg components with R2 ranging from 76 to 99%. The RR analysis provided VIF values lower than 10 and eliminated the MC problem for eggs produced at any age. It is concluded that RR analysis provides an ideal solution for managing the MC problem and successfully predicting the egg components of laying hens from egg measurements.
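The two diagnostics at the heart of this abstract, variance inflation factors and ridge estimation, can be sketched in a few lines of linear algebra; the synthetic data in the checks below stand in for the egg measurements.

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X: 1/(1 - R^2) from
    regressing that (centred) column on the remaining ones; values above
    10 conventionally flag multicollinearity."""
    Xc = X - X.mean(axis=0)
    out = []
    for j in range(Xc.shape[1]):
        y = Xc[:, j]
        Z = np.delete(Xc, j, axis=1)
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - (resid @ resid) / (y @ y)
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

def ridge(X, y, lam):
    """Ridge estimate (X'X + lam*I)^-1 X'y; the penalty lam shrinks the
    coefficients and stabilises them under multicollinearity."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
```

With lam = 0 ridge reduces to ordinary least squares; increasing lam trades a little bias for much lower coefficient variance when predictors are nearly collinear.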
Kassemi, Mohammad; Thompson, David
2016-09-01
An analytical Population Balance Equation model is developed and used to assess the risk of critical renal stone formation for astronauts during future space missions. The model uses the renal biochemical profile of the subject as input and predicts the steady-state size distribution of the nucleating, growing, and agglomerating calcium oxalate crystals during their transit through the kidney. The model is verified through comparison with published results of several crystallization experiments. Numerical results indicate that the model is successful in clearly distinguishing between 1-G normal and 1-G recurrent stone-former subjects based solely on their published 24-h urine biochemical profiles. Numerical case studies further show that the predicted renal calculi size distribution for a microgravity astronaut is closer to that of a recurrent stone former on Earth rather than to a normal subject in 1 G. This interestingly implies that the increase in renal stone risk level in microgravity is relatively more significant for a normal person than a stone former. However, numerical predictions still underscore that the stone-former subject carries by far the highest absolute risk of critical stone formation during space travel.
Predictive Capability Maturity Model for computational modeling and simulation.
Energy Technology Data Exchange (ETDEWEB)
Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.
2007-10-01
The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies specified application requirements.
Mathematical models for predicting indoor air quality from smoking activity.
Ott, W R
1999-05-01
Much progress has been made over four decades in developing, testing, and evaluating the performance of mathematical models for predicting pollutant concentrations from smoking in indoor settings. Although largely overlooked by the regulatory community, these models provide regulators and risk assessors with practical tools for quantitatively estimating the exposure level that people receive indoors for a given level of smoking activity. This article reviews the development of the mass balance model and its application to predicting indoor pollutant concentrations from cigarette smoke and derives the time-averaged version of the model from the basic laws of conservation of mass. A simple table is provided of computed respirable particulate concentrations for any indoor location for which the active smoking count, volume, and concentration decay rate (deposition rate combined with air exchange rate) are known. Using the indoor ventilatory air exchange rate causes slightly higher indoor concentrations and therefore errs on the side of protecting health, since it excludes particle deposition effects, whereas using the observed particle decay rate gives a more accurate prediction of indoor concentrations. This table permits easy comparisons of indoor concentrations with air quality guidelines and indoor standards for different combinations of active smoking counts and air exchange rates. The published literature on mathematical models of environmental tobacco smoke also is reviewed and indicates that these models generally give good agreement between predicted concentrations and actual indoor measurements.
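The time-averaged mass balance model reviewed here reduces, for a constant source in a well-mixed space, to a one-line steady-state expression. The numeric example is illustrative, not a value from the article.

```python
def steady_state_concentration(G, V, a, k=0.0):
    """Well-mixed single-compartment mass balance at steady state:
    C = G / (V * (a + k))  [mg/m^3]
    with source emission rate G (mg/h), room volume V (m^3), air exchange
    rate a (1/h) and particle deposition rate k (1/h). Setting k = 0 uses
    the ventilatory air exchange rate alone, which predicts slightly
    higher concentrations and so errs on the side of protecting health,
    as the abstract notes."""
    return G / (V * (a + k))
```

For example, an assumed source of 86 mg/h in a 100 m^3 room at one air change per hour gives 0.86 mg/m^3; adding a nonzero deposition rate lowers the prediction.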
A Predictive Maintenance Model for Railway Tracks
DEFF Research Database (Denmark)
Li, Rui; Wen, Min; Salling, Kim Bang
2015-01-01
For modern railways, maintenance is critical for ensuring safety, train punctuality and overall capacity utilization. The cost of railway maintenance in Europe is high, on average between 30,000 and 100,000 Euro per km per year [1]. Aiming to reduce such maintenance expenditure, this paper ... presents a mathematical model based on Mixed Integer Programming (MIP) which is designed to optimize the predictive railway tamping activities for ballasted track over a time horizon of up to four years. The objective function is set up to minimize the actual costs for the tamping machine (measured by time ... recovery of track quality after the tamping operation and (5) tamping machine operation factors. A Danish railway track between Odense and Fredericia, 57.2 km in length, is applied for a time period of two to four years in the proposed maintenance model. The total cost can be reduced by up to 50...
A predictive fitness model for influenza
Łuksza, Marta; Lässig, Michael
2014-03-01
The seasonal human influenza A/H3N2 virus undergoes rapid evolution, which produces significant year-to-year sequence turnover in the population of circulating strains. Adaptive mutations respond to human immune challenge and occur primarily in antigenic epitopes, the antibody-binding domains of the viral surface protein haemagglutinin. Here we develop a fitness model for haemagglutinin that predicts the evolution of the viral population from one year to the next. Two factors are shown to determine the fitness of a strain: adaptive epitope changes and deleterious mutations outside the epitopes. We infer both fitness components for the strains circulating in a given year, using population-genetic data of all previous strains. From fitness and frequency of each strain, we predict the frequency of its descendent strains in the following year. This fitness model maps the adaptive history of influenza A and suggests a principled method for vaccine selection. Our results call for a more comprehensive epidemiology of influenza and other fast-evolving pathogens that integrates antigenic phenotypes with other viral functions coupled by genetic linkage.
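The prediction step of such a fitness model is a discrete-generation update: each strain's frequency is reweighted by the exponential of its inferred fitness and the result renormalised. The fitness values in the check below are placeholders, not the epitope-based scores of the paper.

```python
import math

def predict_frequencies(freqs, fitness):
    """One-year-ahead prediction: x_i(t+1) is proportional to
    x_i(t) * exp(f_i), renormalised so the frequencies sum to one."""
    w = [x * math.exp(f) for x, f in zip(freqs, fitness)]
    z = sum(w)
    return [v / z for v in w]
```

In the paper each f_i combines a positive contribution from adaptive epitope changes and a negative one from deleterious mutations outside the epitopes.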
DEFF Research Database (Denmark)
Jørgensen, John Bagterp; Jørgensen, Sten Bay
2007-01-01
A prediction-error method tailored for model-based predictive control is presented. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state-space model. The linear discrete-time stochastic state-space model is realized from a continuous-discrete-time linear stochastic system specified using transfer functions with time delays. It is argued that the prediction-error criterion should be selected such that it is compatible with the objective function of the predictive controller in which the model...
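The one-step-ahead predictions that a prediction-error criterion compares with measured outputs can be generated by the standard Kalman recursion. The scalar system in the check below is a hypothetical example, not the paper's model class.

```python
import numpy as np

def kalman_predictor(A, B, C, Q, R, x0, P0, us, ys):
    """Return one-step-ahead output predictions yhat[k] = C x[k|k-1] for the
    model x[k+1] = A x[k] + B u[k] + w, y[k] = C x[k] + v; the residuals
    y - yhat are the innovations a prediction-error method minimises."""
    x, P = x0.copy(), P0.copy()
    preds = []
    for u, y in zip(us, ys):
        preds.append(C @ x)                    # predicted output
        S = C @ P @ C.T + R                    # innovation covariance
        K = P @ C.T @ np.linalg.inv(S)         # Kalman gain
        x = x + K @ (y - C @ x)                # measurement update
        P = (np.eye(len(x)) - K @ C) @ P
        x = A @ x + B @ u                      # time update
        P = A @ P @ A.T + Q
    return np.array(preds)
```

Tuning model parameters to minimise the sum of squared innovations is exactly the prediction-error fitting the abstract refers to.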
Methods for Handling Missing Variables in Risk Prediction Models
Held, Ulrike; Kessels, Alfons; Aymerich, Judith Garcia; Basagana, Xavier; ter Riet, Gerben; Moons, Karel G. M.; Puhan, Milo A.
2016-01-01
Prediction models should be externally validated before being used in clinical practice. Many published prediction models have never been validated. Uncollected predictor variables in otherwise suitable validation cohorts are the main factor precluding external validation. We used individual patient
Comparing Sediment Yield Predictions from Different Hydrologic Modeling Schemes
Dahl, T. A.; Kendall, A. D.; Hyndman, D. W.
2015-12-01
Sediment yield, or the delivery of sediment from the landscape to a river, is a difficult process to accurately model. It is primarily a function of hydrology and climate, but influenced by landcover and the underlying soils. These additional factors make it much more difficult to accurately model than water flow alone. It is not intuitive what impact different hydrologic modeling schemes may have on the prediction of sediment yield. Here, two implementations of the Modified Universal Soil Loss Equation (MUSLE) are compared to examine the effects of hydrologic model choice. Both the Soil and Water Assessment Tool (SWAT) and the Landscape Hydrology Model (LHM) utilize the MUSLE for calculating sediment yield. SWAT is a lumped parameter hydrologic model developed by the USDA, which is commonly used for predicting sediment yield. LHM is a fully distributed hydrologic model developed primarily for integrated surface and groundwater studies at the watershed to regional scale. SWAT and LHM models were developed and tested for two large, adjacent watersheds in the Great Lakes region; the Maumee River and the St. Joseph River. The models were run using a variety of single model and ensemble downscaled climate change scenarios from the Coupled Model Intercomparison Project 5 (CMIP5). The initial results of this comparison are discussed here.
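Both SWAT and LHM evaluate the same core equation, so their sediment predictions differ only through the hydrologic inputs they feed it. A sketch of the MUSLE in its common SWAT form (the coefficient 11.8 and exponent 0.56 are the standard published values; units should be checked carefully in any real application):

```python
def musle_sediment_yield(q_surf, q_peak, area_ha, K, C, P, LS, CFRG=1.0):
    """MUSLE: sed = 11.8 * (Q_surf * q_peak * A)^0.56 * K * C * P * LS * CFRG,
    with surface runoff volume Q_surf (mm H2O), peak runoff rate q_peak
    (m^3/s), area A (ha), and the usual USLE soil erodibility, cover,
    practice, topographic and coarse-fragment factors. Replacing rainfall
    energy with a runoff term is what couples the sediment estimate to the
    hydrologic model."""
    return 11.8 * (q_surf * q_peak * area_ha) ** 0.56 * K * C * P * LS * CFRG
```

Because Q_surf and q_peak come from the hydrologic scheme, a lumped model (SWAT) and a fully distributed one (LHM) can produce different yields from identical USLE factors.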
The predictive performance and stability of six species distribution models.
Duan, Ren-Yan; Kong, Xiao-Quan; Huang, Min-Yi; Fan, Wei-Yi; Wang, Zhi-Gao
2014-01-01
Predicting species' potential geographical ranges with species distribution models (SDMs) is central to understanding their ecological requirements. However, the effects of using different modeling techniques need further investigation. In order to improve the prediction effect, we need to assess the predictive performance and stability of different SDMs. We collected the distribution data of five common tree species (Pinus massoniana, Betula platyphylla, Quercus wutaishanica, Quercus mongolica and Quercus variabilis) and simulated their potential distribution areas using 13 environmental variables and six widely used SDMs: BIOCLIM, DOMAIN, MAHAL, RF, MAXENT, and SVM. Each model run was repeated 100 times (trials). We compared predictive performance by testing the consistency between observations and simulated distributions, and assessed stability by the standard deviation, coefficient of variation, and the 99% confidence interval of the Kappa and AUC values. The mean values of AUC and Kappa from MAHAL, RF, MAXENT, and SVM trials were similar and significantly higher than those from BIOCLIM and DOMAIN trials (p<0.05), while the associated standard deviations and coefficients of variation were larger for BIOCLIM and DOMAIN trials, and the 99% confidence intervals for AUC and Kappa values were narrower for MAHAL, RF, MAXENT, and SVM. Compared to BIOCLIM and DOMAIN, the other SDMs (MAHAL, RF, MAXENT, and SVM) had higher prediction accuracy, smaller confidence intervals, and were more stable and less affected by the random variable (randomly selected pseudo-absence points). According to the prediction performance and stability of the SDMs, we can divide these six SDMs into two categories: a high performance and stability group including MAHAL, RF, MAXENT, and SVM, and a low performance and stability group consisting of BIOCLIM and DOMAIN. We highlight that choosing appropriate SDMs to address a specific problem is an important part of the modeling process.
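The stability assessment described, repeated trials summarised by dispersion and a 99% confidence interval of Kappa, can be sketched with a bootstrap; the presence/absence vectors in the check below are synthetic.

```python
import random

def kappa(obs, pred):
    """Cohen's kappa for binary presence/absence predictions."""
    n = len(obs)
    po = sum(o == p for o, p in zip(obs, pred)) / n
    p1, q1 = sum(obs) / n, sum(pred) / n
    pe = p1 * q1 + (1 - p1) * (1 - q1)
    if pe == 1.0:                 # degenerate resample with a single class
        return 1.0 if po == 1.0 else 0.0
    return (po - pe) / (1 - pe)

def kappa_ci(obs, pred, trials=1000, alpha=0.01, seed=0):
    """Bootstrap (1 - alpha) confidence interval for kappa, mirroring the
    repeated-trial stability assessment in the abstract."""
    rng = random.Random(seed)
    n = len(obs)
    ks = []
    for _ in range(trials):
        idx = [rng.randrange(n) for _ in range(n)]
        ks.append(kappa([obs[i] for i in idx], [pred[i] for i in idx]))
    ks.sort()
    return ks[int(trials * alpha / 2)], ks[int(trials * (1 - alpha / 2)) - 1]
```

A narrow interval across trials is what separates the stable group (MAHAL, RF, MAXENT, SVM) from the unstable one (BIOCLIM, DOMAIN).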
Large eddy simulation subgrid model for soot prediction
El-Asrag, Hossam Abd El-Raouf Mostafa
Soot prediction in realistic systems is one of the most challenging problems in theoretical and applied combustion. Soot formation as a chemical process is very complicated and not fully understood. The major difficulty stems from the chemical complexity of the soot formation process as well as its strong coupling with the other thermochemical and fluid processes that occur simultaneously. Soot is a major byproduct of incomplete combustion, having a strong impact on the environment as well as on combustion efficiency. Therefore, innovative methods are needed to predict soot in realistic configurations in an accurate and yet computationally efficient way. In the current study, a new soot formation subgrid model is developed and reported here. The new model is designed to be used within the context of the Large Eddy Simulation (LES) framework, combined with Linear Eddy Mixing (LEM) as a subgrid combustion model. The final model can be applied equally to premixed and non-premixed flames over any required geometry and flow conditions in the free, transition, and continuum regimes. Soot dynamics are predicted using a Method of Moments approach with Lagrangian Interpolative Closure (MOMIC) for the fractional moments. Since no prior knowledge of the particle distribution is required, the model is generally applicable. The current model accounts for the basic soot transport phenomena, such as transport by molecular diffusion and thermophoretic forces. The model is first validated against experimental results for non-sooting swirling non-premixed and partially premixed flames. Next, a set of canonical premixed sooting flames is simulated, where the effects of turbulence, binary diffusivity and C/O ratio on soot formation are studied. Finally, the model is validated against a non-premixed sooting jet flame. The effect of the flame structure on the different soot formation stages as well as the particle size distribution is described. Good results are predicted with
Should we believe model predictions of future climate change? (Invited)
Knutti, R.
2009-12-01
for an effect to be real, but some features of the current models are perfectly robust yet known to be wrong. How much we can actually learn from more models of the same type is therefore an open question. A case is made here that the community must think harder about how to quantify the uncertainty and skill of its models, that making the models ever more complicated and expensive to run is unlikely to reduce uncertainties in predictions unless new data are used to constrain and calibrate the models, and that the demand for predictions and the data produced by the models is likely to quickly outgrow our capacity to understand the models and to analyze the results. More quantitative methods of assessing model performance are therefore critical to maximizing the value of climate change projections from global climate models.
Prediction model of conformance control effect based on BP neural network
Institute of Scientific and Technical Information of China (English)
刘宁; 刘士梦; 李明
2014-01-01
Oil production prediction is key to predicting or evaluating the effect of a conformance control scheme after its implementation. Based on BP neural network theory, and by analyzing the factors that influence conformance control effect, a neural network prediction model for conformance control was established using the Matlab Neural Network Toolbox functions. Analysis of the model's forecasting performance and its practical application showed that the oil production predicted by the BP neural network agreed well with the actual values, with relatively small error and high reliability. The model can therefore be used to predict oil production after conformance control.
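A minimal BP (backpropagation) network of the kind described can be written with numpy alone; the architecture, learning rate, and synthetic data below are illustrative assumptions, not the authors' Matlab configuration.

```python
import numpy as np

def train_bp(X, y, hidden=8, lr=0.1, epochs=3000, seed=0):
    """One-hidden-layer BP network (tanh hidden units, linear output)
    trained by full-batch gradient descent on squared error; returns a
    prediction function."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)          # forward pass
        out = H @ W2 + b2
        err = out - y                     # error, backpropagated below
        gW2 = H.T @ err / len(X)
        gb2 = err.mean(axis=0)
        dH = (err @ W2.T) * (1.0 - H**2)  # tanh derivative
        gW1 = X.T @ dH / len(X)
        gb1 = dH.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Z: np.tanh(Z @ W1 + b1) @ W2 + b2
```

In the application described, the inputs would be the factors influencing conformance control and the target the observed oil production.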
Mathematical modelling methodologies in predictive food microbiology: a SWOT analysis.
Ferrer, Jordi; Prats, Clara; López, Daniel; Vives-Rego, Josep
2009-08-31
Predictive microbiology is the area of food microbiology that attempts to forecast the quantitative evolution of microbial populations over time. This is achieved to a great extent through models that include the mechanisms governing population dynamics. Traditionally, the models used in predictive microbiology are whole-system continuous models that describe population dynamics by means of equations applied to extensive or averaged variables of the whole system. Many existing models can be classified by specific criteria. We can distinguish between survival and growth models by seeing whether they tackle mortality or cell duplication. We can distinguish between empirical (phenomenological) models, which mathematically describe specific behaviour, and theoretical (mechanistic) models with a biological basis, which search for the underlying mechanisms driving already observed phenomena. We can also distinguish between primary, secondary and tertiary models, by examining their treatment of the effects of external factors and constraints on the microbial community. Recently, the use of spatially explicit Individual-based Models (IbMs) has spread through predictive microbiology, due to the current technological capacity of performing measurements on single individual cells and thanks to the consolidation of computational modelling. Spatially explicit IbMs are bottom-up approaches to microbial communities that build bridges between the description of micro-organisms at the cell level and macroscopic observations at the population level. They provide greater insight into the mesoscale phenomena that link unicellular and population levels. Every model is built in response to a particular question and with different aims. Even so, in this research we conducted a SWOT (Strengths, Weaknesses, Opportunities and Threats) analysis of the different approaches (population continuous modelling and Individual-based Modelling), which we hope will be helpful for current and future
A COMPACT MODEL FOR PREDICTING ROAD TRAFFIC NOISE
Directory of Open Access Journals (Sweden)
R. Golmohammadi, M. Abbaspour, P. Nassiri, H. Mahjub
2009-07-01
Noise is one of the most important sources of pollution in metropolitan areas. The recognition of road traffic noise as one of the main sources of environmental pollution has led to the development of models that enable us to predict noise levels from fundamental variables. Traffic noise prediction models are required as aids in the design of roads and sometimes in the assessment of existing, or envisaged changes in, traffic noise conditions. The purpose of this study was to design a road traffic noise prediction model from traffic variables and transportation conditions in Iran. This paper is the result of research conducted in the city of Hamadan with the ultimate objective of setting up a traffic noise model based on the traffic conditions of Iranian cities. Noise levels and other variables were measured in 282 samples to develop a statistical regression model based on the A-weighted equivalent noise level for Iranian road conditions. The results revealed that the average LAeq across all stations was 69.04 ± 4.25 dB(A), the average speed of vehicles was 44.57 ± 11.46 km/h and the average traffic load was 1231.9 ± 910.2 V/h. The developed model has seven explanatory entrance variables in order to achieve a high regression coefficient (R2 = 0.901). Comparing the means of predicted and measured equivalent sound pressure levels (LAeq) showed small differences of less than -0.42 dB(A) and -0.77 dB(A) for the cities of Tehran and Hamadan, respectively. The suggested road traffic noise model can be effectively used as a decision support tool for predicting the equivalent sound pressure level index in the cities of Iran.
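A regression model of this shape is a least-squares fit of LAeq on traffic variables. The sketch below fits coefficients and reports R2; the predictors in the check are synthetic stand-ins, not the study's seven entrance variables.

```python
import numpy as np

def fit_noise_model(X, laeq):
    """Fit LAeq = b0 + b . x by least squares over traffic variables
    (e.g. traffic volume, mean speed, heavy-vehicle share) and return the
    coefficients together with R^2, the fit statistic quoted above."""
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, laeq, rcond=None)
    resid = laeq - A @ beta
    tss = (laeq - laeq.mean()) @ (laeq - laeq.mean())
    return beta, 1.0 - (resid @ resid) / tss
```

Comparing predicted and measured LAeq on held-out stations, as the study does for Tehran and Hamadan, is the natural check on such a fit.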
Xu, Yifang; Collins, Leslie M
2005-06-01
This work investigates dynamic range and intensity discrimination for electrical pulse-train stimuli that are modulated by noise using a stochastic auditory nerve model. Based on a hypothesized monotonic relationship between loudness and the number of spikes elicited by a stimulus, theoretical prediction of the uncomfortable level has previously been determined by comparing spike counts to a fixed threshold, Nucl. However, no specific rule for determining Nucl has been suggested. Our work determines the uncomfortable level based on the excitation pattern of the neural response in a normal ear. The number of fibers corresponding to the portion of the basilar membrane driven by a stimulus at an uncomfortable level in a normal ear is related to Nucl at an uncomfortable level of the electrical stimulus. Intensity discrimination limens are predicted using signal detection theory via the probability mass function of the neural response and via experimental simulations. The results show that the uncomfortable level for pulse-train stimuli increases slightly as noise level increases. Combining this with our previous threshold predictions, we hypothesize that the dynamic range for noise-modulated pulse-train stimuli should increase with additive noise. However, since our predictions indicate that intensity discrimination under noise degrades, overall intensity coding performance may not improve significantly.
P.R. Saxena (Pramod Ranjan); P.A.M. de Vries (Peter); W. Wang (Wei); J.P. Heiligers (Jan); A. Maassen VanDenBrink (Antoinette); W.A. Bax (Willem); F.D. Yocca (Frank)
1997-01-01
Several acutely acting antimigraine drugs, including ergotamine and sumatriptan, have the ability to constrict porcine arteriovenous anastomoses as well as the human isolated coronary artery. These two experimental models seem to serve as indicators, respectively, for the
Estimating the magnitude of prediction uncertainties for the APLE model
Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, we conduct an uncertainty analysis for the Annual P ...
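One common way to put numbers on such prediction uncertainty is Monte Carlo propagation: sample the model inputs around their nominal values and summarise the spread of the outputs. The model function and input names below are hypothetical, not the APLE equations.

```python
import math
import random

def monte_carlo_uncertainty(model, nominal, rel_sd, n=5000, seed=1):
    """Propagate input uncertainty through a P-loss model: perturb each
    input with multiplicative lognormal noise of relative spread rel_sd,
    and return an approximate 95% interval of the predictions."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n):
        sample = {k: v * math.exp(rng.gauss(0.0, rel_sd))
                  for k, v in nominal.items()}
        preds.append(model(sample))
    preds.sort()
    return preds[int(0.025 * n)], preds[int(0.975 * n) - 1]
```

Reporting such an interval alongside the point prediction is precisely the practice the paragraph notes is usually omitted.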
Predictive modeling of low solubility semiconductor alloys
Rodriguez, Garrett V.; Millunchick, Joanna M.
2016-09-01
GaAsBi is of great interest for applications in high efficiency optoelectronic devices due to its highly tunable bandgap. However, the experimental growth of high Bi content films has proven difficult. Here, we model GaAsBi film growth using a kinetic Monte Carlo simulation that explicitly takes cation and anion reactions into account. The unique behavior of Bi droplets is explored, and a sharp decrease in Bi content upon Bi droplet formation is demonstrated. The high mobility of simulated Bi droplets on GaAsBi surfaces is shown to produce phase separated Ga-Bi droplets as well as depressions on the film surface. A phase diagram for a range of growth rates that predicts both Bi content and droplet formation is presented to guide the experimental growth of high Bi content GaAsBi films.
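The engine of a kinetic Monte Carlo simulation like the one described is a rejection-free event loop: pick an event with probability proportional to its rate and advance the clock by an exponential waiting time. A generic sketch (the rates for Ga/As/Bi attachment, detachment, or droplet hops would come from the model's energy barriers, which are not reproduced here):

```python
import math
import random

def kmc_step(rates, t, rng):
    """One rejection-free KMC step: select event i with probability
    rates[i]/sum(rates), then advance time by an exponentially
    distributed waiting time with mean 1/sum(rates)."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            break
    dt = -math.log(rng.random()) / total
    return i, t + dt
```

Iterating this step while updating the rate list after each executed event yields the film-growth trajectory.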
Distributed model predictive control made easy
Negenborn, Rudy
2014-01-01
The rapid evolution of computer science, communication, and information technology has enabled the application of control techniques to systems beyond the possibilities of control theory just a decade ago. Critical infrastructures such as electricity, water, traffic and intermodal transport networks are now in the scope of control engineers. The sheer size of such large-scale systems requires the adoption of advanced distributed control approaches. Distributed model predictive control (MPC) is one of the promising control methodologies for control of such systems. This book provides a state-of-the-art overview of distributed MPC approaches, while at the same time making clear directions of research that deserve more attention. The core and rationale of 35 approaches are carefully explained. Moreover, detailed step-by-step algorithmic descriptions of each approach are provided. These features make the book a comprehensive guide both for those seeking an introduction to distributed MPC as well as for those ...
Leptogenesis in minimal predictive seesaw models
Björkeroth, Fredrik; de Anda, Francisco J.; de Medeiros Varzielas, Ivo; King, Stephen F.
2015-10-01
We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses, with Yukawa couplings to (ν_e, ν_μ, ν_τ) proportional to (0, 1, 1) and (1, n, n - 2), respectively, where n is a positive integer. The neutrino Yukawa matrix is therefore characterised by two proportionality constants, with their relative phase providing a leptogenesis-PMNS link, enabling the lightest right-handed neutrino mass to be determined from neutrino data and the observed BAU. We discuss an SU(5) SUSY GUT example, where A_4 vacuum alignment provides the required Yukawa structures with n = 3, while a Z_9 symmetry fixes the relative phase to be a ninth root of unity.
Hidden Semi-Markov Models for Predictive Maintenance
Directory of Open Access Journals (Sweden)
Francesco Cartella
2015-01-01
Realistic models are essential for the condition monitoring and predictive maintenance of industrial machines. In this work, we propose Hidden Semi-Markov Models (HSMMs) with (i) no constraints on the state duration density function and (ii) applicability to both continuous and discrete observations. To deal with this type of HSMM, we also propose modifications to the learning, inference, and prediction algorithms. Finally, automatic model selection is made possible using the Akaike Information Criterion. This paper describes the theoretical formalization of the model as well as several experiments performed on simulated and real data to validate the methodology. In all experiments, the model correctly estimates the current state and effectively predicts the time to a predefined event with a low overall average absolute error. Its applicability to real-world settings can therefore be beneficial, especially where the Remaining Useful Lifetime (RUL) of the machine must be calculated in real time.
Frequency weighted model predictive control of wind turbine
DEFF Research Database (Denmark)
Klauco, Martin; Poulsen, Niels Kjølstad; Mirzaei, Mahmood;
2013-01-01
This work focuses on applying frequency-weighted model predictive control (FMPC) to a three-blade horizontal-axis wind turbine (HAWT). A wind turbine is a very complex, non-linear system influenced by stochastic wind speed variation. The reduced dynamics considered in this work ... are the rotational degree of freedom of the rotor and the tower fore-aft movement. The MPC design is based on a receding horizon policy and a linearised model of the wind turbine. Because the dynamics change with wind speed, several linearisation points must be considered and the control design adjusted ... accordingly. In practice it is very hard to measure the effective wind speed, so this quantity is estimated using measurements from the turbine itself. For this purpose a stationary predictive Kalman filter has been used. Stochastic simulations of the wind turbine behaviour with applied frequency-weighted model ...
Adaptive quality prediction of batch processes based on PLS model
Institute of Scientific and Technical Information of China (English)
LI Chun-fu; ZHANG Jie; WANG Gui-zeng
2006-01-01
There are usually no on-line product quality measurements in batch and semi-batch processes, which makes the process control task very difficult. In this paper, a model for predicting the end-product quality from the available on-line process variables at the early stage of a batch is developed using the partial least squares (PLS) method. Furthermore, some available mid-course quality measurements are used to rectify the final prediction results. To deal with the problem that the process may change with time, a recursive PLS (RPLS) algorithm is used to update the model after each batch, based on the new batch data and the old model parameters. An application to a simulated batch MMA polymerization process demonstrates the effectiveness of the proposed method.
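The PLS regression step at the core of this approach can be sketched in a few lines. The following is a minimal single-response PLS1 fit via NIPALS with deflation, for illustration only; it is not the authors' RPLS, whose recursive update and mid-course rectification are omitted here:

```python
import numpy as np

def pls_fit(X, y, n_components):
    """Fit a PLS1 model via NIPALS with deflation (simplified sketch)."""
    xmean, ymean = X.mean(axis=0), y.mean()
    Xc, yc = X - xmean, y - ymean
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)      # weight vector
        t = Xc @ w                  # score vector
        tt = t @ t
        p = Xc.T @ t / tt           # X loading
        q = (yc @ t) / tt           # y loading
        Xc = Xc - np.outer(t, p)    # deflate X and y
        yc = yc - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P = np.array(W).T, np.array(P).T
    B = W @ np.linalg.solve(P.T @ W, np.array(Q))  # regression coefficients
    return B, xmean, ymean

def pls_predict(Xnew, B, xmean, ymean):
    return (Xnew - xmean) @ B + ymean
```

With as many components as predictor variables, this reduces to ordinary least squares; fewer components give the dimension reduction that makes PLS useful for collinear process data.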
Model Predictive Control of Wind Turbines using Uncertain LIDAR Measurements
DEFF Research Database (Denmark)
Mirzaei, Mahmood; Soltani, Mohsen; Poulsen, Niels Kjølstad
2013-01-01
The problem of model predictive control (MPC) of wind turbines using uncertain LIDAR (LIght Detection And Ranging) measurements is considered. A nonlinear dynamical model of the wind turbine is obtained. We linearize the obtained nonlinear model for different operating points, which are determined ... by the effective wind speed on the rotor disc. We take the wind speed as a scheduling variable. The wind speed is measurable ahead of the turbine using LIDARs; therefore, the scheduling variable is known for the entire prediction horizon. By taking advantage of having future values of the scheduling variable ... on wind speed estimation and measurements from the LIDAR is devised to find an estimate of the delay and compensate for it before it is used in the controller. Comparisons between the MPC with error compensation, the MPC without error compensation and an MPC with re-linearization at each sample point ...
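The receding-horizon policy underlying such an MPC can be illustrated on a toy scalar linear system. The coefficients below are hypothetical; this is not the paper's wind-turbine model or its gain-scheduled controller, just the bare mechanics of solving a finite-horizon problem and applying only the first move:

```python
import numpy as np

def mpc_control(x0, a, b, horizon, q=1.0, r=0.1):
    """Optimal input sequence for x_{k+1} = a x_k + b u_k over a finite horizon."""
    # Stack predictions: x = F x0 + G u, with x = [x_1, ..., x_N]^T
    F = np.array([a ** (k + 1) for k in range(horizon)])
    G = np.zeros((horizon, horizon))
    for i in range(horizon):
        for j in range(i + 1):
            G[i, j] = a ** (i - j) * b
    Q = q * np.eye(horizon)
    R = r * np.eye(horizon)
    # Minimise x^T Q x + u^T R u  =>  u = -(G^T Q G + R)^{-1} G^T Q F x0
    return -np.linalg.solve(G.T @ Q @ G + R, G.T @ Q @ F) * x0

def receding_horizon(x0, a, b, horizon, steps):
    """Closed loop: re-solve at every step, apply only the first input."""
    x, traj = x0, [x0]
    for _ in range(steps):
        u = mpc_control(x, a, b, horizon)[0]
        x = a * x + b * u
        traj.append(x)
    return traj
```

Even for an open-loop-unstable plant (|a| > 1), the closed loop converges toward the origin, which is the basic appeal of the receding-horizon policy.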
The effect of genealogy-based haplotypes on genomic prediction
DEFF Research Database (Denmark)
Edriss, Vahid; Fernando, Rohan L.; Su, Guosheng
2013-01-01
Background: Genomic prediction uses two sources of information: linkage disequilibrium between markers and quantitative trait loci, and additive genetic relationships between individuals. One way to increase the accuracy of genomic prediction is to capture more linkage disequilibrium by regression ... on haplotypes instead of regression on individual markers. The aim of this study was to investigate the accuracy of genomic prediction using haplotypes based on local genealogy information. Methods: A total of 4429 Danish Holstein bulls were genotyped with the 50K SNP chip. Haplotypes were constructed using ... local genealogical trees. Effects of haplotype covariates were estimated with two types of prediction models: (1) assuming that effects had the same distribution for all haplotype covariates, i.e. the GBLUP method, and (2) assuming that a large proportion (pi) of the haplotype covariates had zero effect ...
Regression Model to Predict Global Solar Irradiance in Malaysia
Directory of Open Access Journals (Sweden)
Hairuniza Ahmed Kutty
2015-01-01
A novel regression model is developed to estimate the monthly global solar irradiance in Malaysia. The model is developed based on different available meteorological parameters, including temperature, cloud cover, rain precipitation, relative humidity, wind speed, pressure, and gust speed, by implementing regression analysis. This paper reports the details of the analysis of the effect of each prediction parameter to identify the parameters that are relevant to estimating global solar irradiance. In addition, the proposed model is compared in terms of the root mean square error (RMSE), mean bias error (MBE), and coefficient of determination (R2) with other models available from literature studies. Seven models based on single parameters (PM1 to PM7) and five multiple-parameter models (PM8 to PM12) are proposed. The new models perform well, with RMSE ranging from 0.429% to 1.774%, R2 ranging from 0.942 to 0.992, and MBE ranging from −0.1571% to 0.6025%. In general, cloud cover significantly affects the estimation of global solar irradiance. However, cloud cover in Malaysia lacks sufficient influence when included in multiple-parameter models, although it performs fairly well in single-parameter prediction models.
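The three comparison metrics used in this evaluation are standard and easy to compute. A minimal sketch of the generic formulas (not tied to the paper's data):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error."""
    return np.sqrt(np.mean((y_pred - y_true) ** 2))

def mbe(y_true, y_pred):
    """Mean bias error: positive means systematic over-prediction."""
    return np.mean(y_pred - y_true)

def r2(y_true, y_pred):
    """Coefficient of determination."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot
```

RMSE penalises large errors quadratically, MBE exposes systematic bias that RMSE hides, and R2 normalises fit quality against the variance of the observations.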
State Predictive Model Following Control System for Linear Time Delays
Institute of Scientific and Technical Information of China (English)
Da-Zhong Wang; Shu-Jing Wu; Shigenori Okubo
2009-01-01
In this paper, we propose a new state predictive model following control system (MFCS) for systems with linear time delays. With the MFCS method, we obtain a simple control input law. The boundedness of the internal states under this control is established, which guarantees the utility of the design. Finally, an example illustrates the effectiveness of the proposed method.
More relaxed conditions for model predictive control with guaranteed stability
Institute of Scientific and Technical Information of China (English)
Bin LIU; Yugeng XI
2005-01-01
For model predictive controllers, a terminal state satisfying a certain inequality can guarantee stability, but this condition is somewhat conservative. In this paper, we give a more relaxed stability condition by considering the effect of the initial state. Based on that, we propose an algorithm that guarantees that the closed-loop system is asymptotically stable. Finally, the conclusions are verified by simulation.
Remaining Useful Lifetime (RUL) - Probabilistic Predictive Model
Directory of Open Access Journals (Sweden)
Ephraim Suhir
2011-01-01
Reliability evaluations and assurances cannot be delayed until the device (system) is fabricated and put into operation. Reliability of an electronic product should be conceived at the early stages of its design; implemented during manufacturing; evaluated (considering customer requirements and the existing specifications) by electrical, optical and mechanical measurements and testing; checked (screened) during manufacturing (fabrication); and, if necessary and appropriate, maintained in the field during the product's operation. A simple and physically meaningful probabilistic predictive model is suggested for the evaluation of the remaining useful lifetime (RUL) of an electronic device (system) after an appreciable deviation from its normal operation conditions has been detected, and the increase in the failure rate and the change in the configuration of the wear-out portion of the bathtub curve have been assessed. The general concepts are illustrated by numerical examples. The model can be employed, along with other PHM forecasting tools and means, to evaluate and maintain a high level of reliability (probability of non-failure) of a device (system) at the operation stage of its lifetime.
COGNITIVE MODELS OF PREDICTION THE DEVELOPMENT OF A DIVERSIFIED CORPORATION
Directory of Open Access Journals (Sweden)
Baranovskaya T. P.
2016-10-01
The application of classical forecasting methods to a diversified corporation faces certain difficulties due to its economic nature. Unlike other businesses, diversified corporations are characterized by multidimensional arrays of data with a high degree of distortion and fragmentation of information, due to the cumulative effect of the incompleteness and distortion of accounting information from the enterprises within them. Under these conditions, the applied methods and tools must have high resolution, work effectively with large databases with incomplete information, and ensure the correct, comparable quantitative processing of heterogeneous factors measured in different units. It is therefore necessary to select or develop methods that can handle complex, poorly formalized tasks. This substantiates the relevance of the problem of developing models, methods and tools for forecasting the development of diversified corporations, which is the subject of this work. The work aims to: (1) analyze forecasting methods to justify the choice of system-cognitive analysis as an effective method for the prediction of semi-structured tasks; (2) adapt and develop the method of system-cognitive analysis for forecasting the dynamics of the corporation's development subject to a scenario approach; (3) develop predictive model scenarios of changes in the basic economic indicators of the corporation's development and assess their credibility; (4) determine the analytical form of the dependence between past and future scenarios of various economic indicators; (5) develop analytical models weighing predictable scenarios, taking into account all prediction results with positive levels of similarity, to increase the reliability of forecasts; (6) develop a calculation procedure to assess the strength of influence on the corporation (sensitivity of its ...
Bayesian prediction of placebo analgesia in an instrumental learning model
Jung, Won-Mo; Lee, Ye-Seul; Wallraven, Christian; Chae, Younbyoung
2017-01-01
Placebo analgesia can be primarily explained by the Pavlovian conditioning paradigm in which a passively applied cue becomes associated with less pain. In contrast, instrumental conditioning employs an active paradigm that might be more similar to clinical settings. In the present study, an instrumental conditioning paradigm involving a modified trust game in a simulated clinical situation was used to induce placebo analgesia. Additionally, Bayesian modeling was applied to predict the placebo responses of individuals based on their choices. Twenty-four participants engaged in a medical trust game in which decisions to receive treatment from either a doctor (more effective with high cost) or a pharmacy (less effective with low cost) were made after receiving a reference pain stimulus. In the conditioning session, the participants received lower levels of pain following both choices, while high pain stimuli were administered in the test session even after making the decision. The choice-dependent pain in the conditioning session was modulated in terms of both intensity and uncertainty. Participants reported significantly less pain when they chose the doctor or the pharmacy for treatment compared to the control trials. The predicted pain ratings based on Bayesian modeling showed significant correlations with the actual reports from participants for both of the choice categories. The instrumental conditioning paradigm allowed for the active choice of optional cues and was able to induce the placebo analgesia effect. Additionally, Bayesian modeling successfully predicted pain ratings in a simulated clinical situation that fits well with placebo analgesia induced by instrumental conditioning. PMID:28225816
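The per-trial Bayesian prediction of perceived pain can be illustrated with the standard precision-weighted Gaussian update, in which a prior expectation and a noisy sensory observation are combined in proportion to their precisions. This is a generic sketch; the study's actual model and fitted parameters are not reproduced here:

```python
def gaussian_posterior(prior_mean, prior_var, obs_mean, obs_var):
    """Combine a Gaussian prior belief with a Gaussian-noise observation."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)   # precisions add
    post_mean = post_var * (prior_mean / prior_var + obs_mean / obs_var)
    return post_mean, post_var
```

With equal variances the posterior mean lies midway between prior and observation; increasing the observation's uncertainty pulls the estimate toward the prior, which is how expectation-driven analgesia can be expressed in this framework.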
Validating predictions from climate envelope models
Watling, J.; Bucklin, D.; Speroterra, C.; Brandt, L.; Cabal, C.; Romañach, Stephanie S.; Mazzotti, Frank J.
2013-01-01
Climate envelope models are a potentially important conservation tool, but their ability to accurately forecast species’ distributional shifts using independent survey data has not been fully evaluated. We created climate envelope models for 12 species of North American breeding birds previously shown to have experienced poleward range shifts. For each species, we evaluated three different approaches to climate envelope modeling that differed in the way they treated climate-induced range expansion and contraction, using random forests and maximum entropy modeling algorithms. All models were calibrated using occurrence data from 1967–1971 (t1) and evaluated using occurrence data from 1998–2002 (t2). Model sensitivity (the ability to correctly classify species presences) was greater using the maximum entropy algorithm than the random forest algorithm. Although sensitivity did not differ significantly among approaches, for many species, sensitivity was maximized using a hybrid approach that assumed range expansion, but not contraction, in t2. Species for which the hybrid approach resulted in the greatest improvement in sensitivity have been reported from more land cover types than species for which there was little difference in sensitivity between hybrid and dynamic approaches, suggesting that habitat generalists may be buffered somewhat against climate-induced range contractions. Specificity (the ability to correctly classify species absences) was maximized using the random forest algorithm and was lowest using the hybrid approach. Overall, our results suggest cautious optimism for the use of climate envelope models to forecast range shifts, but also underscore the importance of considering non-climate drivers of species range limits. The use of alternative climate envelope models that make different assumptions about range expansion and contraction is a new and potentially useful way to help inform our understanding of climate change effects on species.
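The sensitivity and specificity used to evaluate the models above reduce to simple confusion-matrix ratios. A generic sketch:

```python
def sensitivity(true_pos, false_neg):
    """Fraction of actual presences correctly classified."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of actual absences correctly classified."""
    return true_neg / (true_neg + false_pos)
```

The trade-off reported in the study (the hybrid approach raising sensitivity while lowering specificity) is exactly a movement along this pair of ratios as the model becomes more permissive about predicted presence.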
DEFF Research Database (Denmark)
Morin, Léo; Leblond, Jean Baptiste; Tvergaard, Viggo
2016-01-01
An extension of Gurson's famous model (Gurson, 1977) of porous plastic solids, incorporating void shape effects, has recently been proposed by Madou and Leblond (Madou and Leblond, 2012a, 2012b, 2013; Madou et al., 2013). In this extension the voids are no longer modelled as spherical but ellipsoidal ... and coworkers (Tvergaard, 2008, 2009, 2012, 2015a; Dahl et al., 2012; Nielsen et al., 2012) involving the shear loading of elementary porous cells, where softening due to changes of the void shape and orientation was very apparent. It is found that with a simple, heuristic modelling of the phenomenon ...
Hydrologic predictions in a changing environment: behavioral modeling
Directory of Open Access Journals (Sweden)
B. Schaefli
2010-10-01
Most hydrological models are valid at most only in a few places and cannot be reasonably transferred to other places or to far distant time periods. Transfer in space is difficult because the models are conditioned on past observations at particular places to define parameter values and unobservable processes that are needed to fully characterize the structure and functioning of the landscape. Transfer in time has to deal with the likely temporal changes to both parameters and processes under future changed conditions. This remains an important obstacle to addressing some of the most urgent prediction questions in hydrology, such as prediction in ungauged basins and prediction under global change. In this paper, we propose a new approach to catchment hydrological modeling, based on universal principles that do not change in time and that remain valid across many places. The key to this framework, which we call behavioral modeling, is to assume that these universal and time-invariant organizing principles can be used to identify the most appropriate model structure (including parameter values) and responses for a given ecosystem at a given moment in time. The organizing principles may be derived from fundamental physical or biological laws, or from empirical laws that have been demonstrated to be time-invariant and to hold at many places and scales. Much fundamental research remains to be undertaken to help discover these organizing principles on the basis of exploration of observed patterns of landscape structure and hydrological behavior and their interpretation as legacy effects of past co-evolution of climate, soils, topography, vegetation and humans. Our hope is that the new behavioral modeling framework will be a step forward towards a new vision for hydrology where models are capable of more confidently predicting the behavior of catchments beyond what has been observed or experienced before.
HESS Opinions: Hydrologic predictions in a changing environment: behavioral modeling
Directory of Open Access Journals (Sweden)
S. J. Schymanski
2011-02-01
Most hydrological models are valid at most only in a few places and cannot be reasonably transferred to other places or to far distant time periods. Transfer in space is difficult because the models are conditioned on past observations at particular places to define parameter values and unobservable processes that are needed to fully characterize the structure and functioning of the landscape. Transfer in time has to deal with the likely temporal changes to both parameters and processes under future changed conditions. This remains an important obstacle to addressing some of the most urgent prediction questions in hydrology, such as prediction in ungauged basins and prediction under global change. In this paper, we propose a new approach to catchment hydrological modeling, based on universal principles that do not change in time and that remain valid across many places. The key to this framework, which we call behavioral modeling, is to assume that there are universal and time-invariant organizing principles that can be used to identify the most appropriate model structure (including parameter values) and responses for a given ecosystem at a given moment in time. These organizing principles may be derived from fundamental physical or biological laws, or from empirical laws that have been demonstrated to be time-invariant and to hold at many places and scales. Much fundamental research remains to be undertaken to help discover these organizing principles on the basis of exploration of observed patterns of landscape structure and hydrological behavior and their interpretation as legacy effects of past co-evolution of climate, soils, topography, vegetation and humans. Our hope is that the new behavioral modeling framework will be a step forward towards a new vision for hydrology where models are capable of more confidently predicting the behavior of catchments beyond what has been observed or experienced before.
Improved Prediction of the Doppler Effect in TRISO Fuel
Energy Technology Data Exchange (ETDEWEB)
J. Ortensi; A.M. Ougouag
2009-05-01
The Doppler feedback mechanism is a major contributor to the passive safety of gas-cooled, graphite-moderated High Temperature Reactors that use fuel based on TRISO particles. It follows that the correct prediction of the magnitude and time-dependence of this feedback effect is essential to the conduct of safety analyses for these reactors. Since the effect is directly dependent on the actual temperature reached by the fuel during transients, the underlying phenomena of heat transfer and temperature rise must be correctly predicted. This paper presents an improved model for the TRISO particle and its thermal behavior during transients. The improved approach incorporates an explicit TRISO heat conduction model to better quantify the time dependence of the temperature in the various layers of the TRISO particle, including its fuel central zone. There follows a better treatment of the Doppler effect within said fuel zone. The new model is based on a 1-D analytic solution for composite media using the Green's function technique. The modeling improvement takes advantage of some of the physical behavior of TRISO fuel under irradiation and includes a distinctive look at the physics of the neutronic Doppler effect. The new methodology has been implemented within the coupled R-Z nodal diffusion code CYNOD-THERMIX. The new model has been applied to the analysis of earthquakes (presented in a companion paper). In this paper, the model is applied to the control rod ejection event, as specified in the OECD PBMR-400 benchmark, but with temperature-dependent thermal properties. The results obtained for this transient using the enhanced code are a considerable improvement over the predictions of the original code. The incorporation of the enhanced model shows that the Doppler effect plays a more significant role than predicted by the original unenhanced model based on the THERMIX homogenized fuel region model. The new model shows that the overall energy generation during the rod ...
An infinitesimal model for quantitative trait genomic value prediction.
Directory of Open Access Journals (Sweden)
Zhiqiu Hu
We developed a marker-based infinitesimal model for quantitative trait analysis. In contrast to the classical infinitesimal model, we now have new information about the segregation of every individual locus of the entire genome. Under this new model, we propose that the genetic effect of an individual locus is a function of the genome location (a continuous quantity). The overall genetic value of an individual is the weighted integral of the genetic effect function along the genome. Numerical integration is performed to find the integral, which requires partitioning the entire genome into a finite number of bins. Each bin may contain many markers. The integral is approximated by the weighted sum of all the bin effects. We thus turn the problem of marker analysis into bin analysis, so that the model dimension decreases from a virtual infinity to a finite number of bins. This new approach can efficiently handle a virtually unlimited number of markers without marker selection. The marker-based infinitesimal model requires high linkage disequilibrium of all markers within a bin. For populations with low or no linkage disequilibrium, we develop an adaptive infinitesimal model. Both the original and the adaptive models are tested using simulated data as well as beef cattle data. The simulated data analysis shows that there is always an optimal number of bins at which the predictability of the bin model is much greater than that of the original marker analysis. The beef cattle data analysis indicates that the bin model can increase the predictability from 10% (multiple marker analysis) to 33% (multiple bin analysis). The marker-based infinitesimal model paves a way towards the solution of genetic mapping and genomic selection using whole genome sequence data.
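The bin construction described above (partitioning markers into bins and approximating the genome-wide integral by a weighted sum of bin effects) can be sketched as follows; the genotype coding and bin effects here are hypothetical, and estimating the effects themselves is the part this sketch leaves out:

```python
import numpy as np

def bin_genotypes(geno, n_bins):
    """Average marker codes within each bin: (n_ind, n_markers) -> (n_ind, n_bins)."""
    marker_bins = np.array_split(np.arange(geno.shape[1]), n_bins)
    return np.column_stack([geno[:, idx].mean(axis=1) for idx in marker_bins])

def genomic_value(bin_geno, bin_effects):
    """Weighted sum of bin effects, approximating the genome-wide integral."""
    return bin_geno @ bin_effects
```

Collapsing many markers into one bin covariate is what reduces the model dimension from the number of markers to the chosen number of bins.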
Modelling Chemical Reasoning to Predict and Invent Reactions.
Segler, Marwin H S; Waller, Mark P
2016-11-11
The ability to reason beyond established knowledge allows organic chemists to solve synthetic problems and invent novel transformations. Herein, we propose a model that mimics chemical reasoning and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outperforms a rule-based expert system in the reaction prediction task for 180,000 randomly selected binary reactions. The data-driven model generalises even beyond known reaction types, and is thus capable of effectively (re-)discovering novel transformations (even including transition-metal-catalysed reactions). Our model enables computers to infer hypotheses about reactivity and reactions by considering only the intrinsic local structure of the graph. Because each single reaction prediction is typically achieved in a sub-second time frame, the model can be used as a high-throughput generator of reaction hypotheses for reaction discovery.
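Scoring missing links from local graph structure can be illustrated with the simplest local heuristic, the common-neighbours score. This toy sketch is not the paper's actual model, only an illustration of link prediction from local structure:

```python
def common_neighbour_score(adj, u, v):
    """Score a candidate link (u, v) by the number of shared neighbours."""
    return len(adj.get(u, set()) & adj.get(v, set()))

def rank_candidates(adj, u, candidates):
    """Rank candidate partners for node u by descending link score."""
    return sorted(candidates, key=lambda v: -common_neighbour_score(adj, u, v))
```

In a reaction knowledge graph, a high score between two molecule nodes suggests a plausible but unrecorded transformation, which is the intuition the abstract describes at far larger scale.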
Modeling and predicting historical volatility in exchange rate markets
Lahmiri, Salim
2017-04-01
Volatility modeling and forecasting of currency exchange rates is an important task in several business risk management areas, including treasury risk management, derivatives pricing, and portfolio risk evaluation. The purpose of this study is to present a simple and effective approach for predicting the historical volatility of currency exchange rates. The approach is based on a limited set of technical indicators used as inputs to artificial neural networks (ANN). To show the effectiveness of the proposed approach, it was applied to forecast US/Canada and US/Euro exchange rate volatilities. The forecasting results show that our simple approach outperformed the conventional GARCH and EGARCH models with different distribution assumptions, as well as hybrid GARCH and EGARCH with ANN, in terms of mean absolute error, mean squared error, and Theil's inequality coefficient. Because of its simplicity and effectiveness, the approach is promising for US currency volatility prediction tasks.
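Historical volatility, the quantity being forecast, is conventionally computed as the sample standard deviation of log-returns over a rolling window. A generic sketch (the paper's technical indicators and ANN are not reproduced here):

```python
import numpy as np

def historical_volatility(prices, window):
    """Rolling sample standard deviation of log-returns."""
    returns = np.diff(np.log(np.asarray(prices, dtype=float)))
    return np.array([returns[i - window + 1:i + 1].std(ddof=1)
                     for i in range(window - 1, len(returns))])
```

A series of such rolling volatilities is the target that GARCH-type models and the ANN approach described above each try to predict one step ahead.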
Directory of Open Access Journals (Sweden)
Jing Lu
2014-11-01
We propose a weather prediction model in this article based on a neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the "fuzzy rule-based neural network", which simulates sequential relations among fuzzy sets using an artificial neural network; and the second part is the "neural fuzzy inference system", which is based on the first part but can learn new fuzzy rules from the previous ones according to the algorithm we propose. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. It is well known that the need for accurate weather prediction is apparent when considering the benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the "accurate" prediction results meaningless, and numerical prediction models are often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we make the predicted outcomes of precipitation more accurate and the prediction methods simpler than using complex numerical forecasting models that occupy large computation resources, are time-consuming and have low predictive accuracy. Accordingly, we achieve more accurate predictive precipitation results than using traditional artificial neural networks that have low predictive accuracy.
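Fuzzy sets such as the precipitation categories above are typically represented by simple membership functions. A triangular membership function is a minimal sketch; the paper's actual fuzzy sets and rule base are not specified here:

```python
def tri_membership(x, a, b, c):
    """Triangular fuzzy membership: 0 at a and c, rising to 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)
```

A crisp precipitation value then belongs to each category (e.g. "light", "moderate", "heavy") to a degree between 0 and 1, which is what the fuzzy rule base operates on.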
RFI modeling and prediction approach for SATOP applications: RFI prediction models
Nguyen, Tien M.; Tran, Hien T.; Wang, Zhonghai; Coons, Amanda; Nguyen, Charles C.; Lane, Steven A.; Pham, Khanh D.; Chen, Genshe; Wang, Gang
2016-05-01
This paper describes a technical approach for the development of RFI prediction models using carrier synchronization loop when calculating Bit or Carrier SNR degradation due to interferences for (i) detecting narrow-band and wideband RFI signals, and (ii) estimating and predicting the behavior of the RFI signals. The paper presents analytical and simulation models and provides both analytical and simulation results on the performance of USB (Unified S-Band) waveforms in the presence of narrow-band and wideband RFI signals. The models presented in this paper will allow the future USB command systems to detect the RFI presence, estimate the RFI characteristics and predict the RFI behavior in real-time for accurate assessment of the impacts of RFI on the command Bit Error Rate (BER) performance. The command BER degradation model presented in this paper also allows the ground system operator to estimate the optimum transmitted SNR to maintain a required command BER level in the presence of both friendly and un-friendly RFI sources.
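The basic SNR degradation caused by an added interferer can be illustrated with a power-ratio toy model; the paper's carrier-synchronization-loop analysis is far more detailed, so this sketch shows only the noise-floor effect:

```python
import math

def snr_db(signal_w, noise_w):
    """Signal-to-noise ratio in dB, from powers in watts."""
    return 10.0 * math.log10(signal_w / noise_w)

def snr_degradation_db(signal_w, noise_w, interference_w):
    """Drop in effective SNR when interference power adds to the noise floor."""
    return snr_db(signal_w, noise_w) - 10.0 * math.log10(signal_w / (noise_w + interference_w))
```

Doubling the noise floor with an equal-power interferer costs about 3 dB of effective SNR, which is the kind of budget item the BER degradation model in the paper tracks far more precisely.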
Prediction models : the right tool for the right problem
Kappen, Teus H.; Peelen, Linda M.
2016-01-01
PURPOSE OF REVIEW: Perioperative prediction models can help to improve personalized patient care by providing individual risk predictions to both patients and providers. However, the scientific literature on prediction model development and validation can be quite technical and challenging to understand.
Joint effect of multiple common SNPs predicts melanoma susceptibility.
Directory of Open Access Journals (Sweden)
Shenying Fang
Single genetic variants discovered so far have been only weakly associated with melanoma. This study aims to use multiple single nucleotide polymorphisms (SNPs) jointly to obtain a larger genetic effect and to improve the predictive value of a conventional phenotypic model. We analyzed 11 SNPs that were associated with melanoma risk in previous studies and were genotyped in MD Anderson Cancer Center (MDACC) and Harvard Medical School investigations. Participants with ≥15 risk alleles were 5-fold more likely to have melanoma than those carrying ≤6. Compared to a model using the most significant single variant, rs12913832, the increase in predictive value for the model using a polygenic risk score (PRS) comprising 11 SNPs was 0.07 (95% CI, 0.05-0.07). The overall predictive value of the PRS together with conventional phenotypic factors in the MDACC population was 0.69 (95% CI, 0.64-0.69). The PRS significantly improved risk prediction and reclassification in melanoma compared with the conventional model. Our study suggests that a polygenic profile can improve the predictive value of an individual gene polymorphism and may significantly improve predictive value beyond conventional phenotypic melanoma risk factors.
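In its simplest unweighted form, the joint-SNP idea reduces to counting risk alleles across loci. A minimal sketch follows; the ≥15 / ≤6 cut-points are taken from the abstract, while a real PRS would typically weight each SNP by its estimated effect size.

```python
def polygenic_risk_score(genotypes):
    """Unweighted PRS: total risk-allele count; each genotype is 0, 1 or 2."""
    if any(g not in (0, 1, 2) for g in genotypes):
        raise ValueError("genotypes must be 0, 1 or 2 risk alleles")
    return sum(genotypes)

def risk_category(score, low=6, high=15):
    """Cut-points from the abstract: >=15 alleles vs <=6 alleles."""
    if score >= high:
        return "high"
    if score <= low:
        return "low"
    return "intermediate"
```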
Predictability of the Indian Ocean Dipole in the coupled models
Liu, Huafeng; Tang, Youmin; Chen, Dake; Lian, Tao
2017-03-01
In this study, the predictability of the Indian Ocean Dipole (IOD), measured by the Dipole Mode Index (DMI), is comprehensively examined at the seasonal time scale, including its actual prediction skill and potential predictability, using the ENSEMBLES multi-model ensembles and the recently developed information-based theoretical framework of predictability. All model predictions were found to have useful skill, normally defined as an anomaly correlation coefficient larger than 0.5, only at around 2-3 month leads. This is mainly because false alarms in predictions become more frequent as lead time increases. DMI predictability shows significant seasonal variation, and predictions whose target seasons are boreal summer (JJA) and autumn (SON) are more reliable than those for other seasons. All of the models fail to predict the IOD onset before May and suffer from a winter (DJF) predictability barrier. The potential predictability analysis indicates that, with model development and improved initialization, prediction of IOD onset is likely to improve, but the winter barrier cannot be overcome. IOD predictability also shows decadal variation, with high skill during the 1960s and the early 1990s and low skill during the early 1970s and early 1980s, which is very consistent with the potential predictability. The main factors controlling IOD predictability, including its seasonal and decadal variations, are also analyzed.
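The skill threshold used above is an anomaly correlation coefficient (ACC) of 0.5. A minimal sketch of the ACC between a forecast and an observed index series (centred correlation; the input series are hypothetical):

```python
import math

def anomaly_correlation(forecast, observed):
    """Anomaly correlation coefficient between forecast and observed series."""
    fm = sum(forecast) / len(forecast)
    om = sum(observed) / len(observed)
    fa = [f - fm for f in forecast]     # forecast anomalies
    oa = [o - om for o in observed]     # observed anomalies
    num = sum(f * o for f, o in zip(fa, oa))
    den = math.sqrt(sum(f * f for f in fa) * sum(o * o for o in oa))
    return num / den
```

A lead time would be judged "useful" in the abstract's sense when this value exceeds 0.5.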
Effects of Weak Ties on Epidemic Predictability in Community Networks
Shu, Panpan; Gong, Kai; Liu, Ying
2012-01-01
Weak ties play a significant role in the structure and dynamics of community networks. Based on the susceptible-infected model of the contact process, we study numerically how weak ties influence the predictability of epidemic dynamics. We first investigate the effects of different kinds of weak ties on the variability of both the arrival time and the prevalence of disease, and find that bridgeness with small degree can enhance the predictability of epidemic spreading. Once weak ties are settled, the variability of prevalence displays a changing trend diametrically opposed to that of the arrival time with respect to both the distance of the initial seed from the bridgeness and the degree of the initial seed. More specifically, a greater distance and a larger degree of the initial seed induce better predictability of arrival time and worse predictability of prevalence. Moreover, we discuss the effects of the number of weak ties on the epidemic variability. As community strength becomes ver...
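The kind of variability measured above can be reproduced in miniature: simulate a discrete-time susceptible-infected process on a toy two-community network joined by one weak tie, and compute the relative spread of the time for the infection to cover the network. The network, infection rate, and seed below are illustrative choices, not those of the paper.

```python
import random

def cover_time(adj, seed_node, beta, rng):
    """Discrete-time SI dynamics; returns steps until all nodes are infected."""
    infected = {seed_node}
    t = 0
    while len(infected) < len(adj):
        t += 1
        newly = {v for u in infected for v in adj[u]
                 if v not in infected and rng.random() < beta}
        infected |= newly
    return t

# two 4-node cliques bridged by the single weak tie 3-4
adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2, 4],
       4: [3, 5, 6, 7], 5: [4, 6, 7], 6: [4, 5, 7], 7: [4, 5, 6]}
rng = random.Random(42)
times = [cover_time(adj, 0, 0.5, rng) for _ in range(200)]
mean = sum(times) / len(times)
# relative standard deviation as a simple variability measure
variability = (sum((t - mean) ** 2 for t in times) / len(times)) ** 0.5 / mean
```

Repeating this with the seed moved closer to or farther from the bridge node 3 gives a toy version of the distance effect the abstract describes.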
Nonconvex model predictive control for commercial refrigeration
Gybel Hovgaard, Tobias; Boyd, Stephen; Larsen, Lars F. S.; Bagterp Jørgensen, John
2013-08-01
We consider the control of a commercial multi-zone refrigeration system, consisting of several cooling units that share a common compressor and are used to cool multiple areas or rooms. In each time period we choose a cooling capacity for each unit and a common evaporation temperature. The goal is to minimise the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear and the constraints are convex. The cost function, however, is nonconvex due to the temperature dependence of thermodynamic efficiency. To handle this nonconvexity we propose a sequential convex optimisation method, which typically converges in about 5 iterations or fewer. We employ a fast convex quadratic programming solver to carry out the iterations, which is more than fast enough to run in real time. We demonstrate our method on a realistic model, with a full-year simulation and 15-minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost savings, on the order of 30%, compared to a standard thermostat-based control system. Perhaps more importantly, the method exhibits a sophisticated response to real-time variations in electricity prices. This demand response is critical to help balance real-time uncertainties in generation capacity associated with large penetration of intermittent renewable energy sources in a future smart grid.
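Sequential convex optimisation in this sense can be sketched with the convex-concave procedure on a one-dimensional difference-of-convex cost; the toy problem and its closed-form subproblem are our illustration, not the refrigeration model. Write f(x) = x⁴ - 2x² + 1 as g(x) - h(x) with g(x) = x⁴ + 1 and h(x) = 2x², linearise the concave part -h at the current iterate, and solve each convex subproblem exactly.

```python
import math

def ccp_minimize(x0, iters=30):
    """Convex-concave procedure for f(x) = x**4 - 2*x**2 + 1.
    Split f = g - h with g(x) = x**4 + 1 and h(x) = 2*x**2, both convex.
    Linearising h at x_k gives the convex subproblem
        minimise  x**4 + 1 - h'(x_k) * x,
    whose stationarity condition 4*x**3 = h'(x_k) = 4*x_k
    solves in closed form to x = cbrt(x_k)."""
    x = x0
    for _ in range(iters):
        x = math.copysign(abs(x) ** (1.0 / 3.0), x)
    return x
```

The iterates converge to a global minimiser (±1, where f = 0) in a handful of steps, echoing the fast convergence reported in the abstract; note that starting exactly at the stationary maximiser x = 0 leaves the method stuck there, since the procedure only finds stationary points.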
Leptogenesis in minimal predictive seesaw models
Energy Technology Data Exchange (ETDEWEB)
Björkeroth, Fredrik [School of Physics and Astronomy, University of Southampton,Southampton, SO17 1BJ (United Kingdom); Anda, Francisco J. de [Departamento de Física, CUCEI, Universidad de Guadalajara,Guadalajara (Mexico); Varzielas, Ivo de Medeiros; King, Stephen F. [School of Physics and Astronomy, University of Southampton,Southampton, SO17 1BJ (United Kingdom)
2015-10-15
We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses, with Yukawa couplings to (ν_e, ν_μ, ν_τ) proportional to (0,1,1) and (1,n,n−2), respectively, where n is a positive integer. The neutrino Yukawa matrix is therefore characterised by two proportionality constants, with their relative phase providing a leptogenesis-PMNS link, enabling the lightest right-handed neutrino mass to be determined from neutrino data and the observed BAU. We discuss an SU(5) SUSY GUT example, where A_4 vacuum alignment provides the required Yukawa structures with n=3, while a ℤ_9 symmetry fixes the relative phase to be a ninth root of unity.
QSPR Models for Octane Number Prediction
Directory of Open Access Journals (Sweden)
Jabir H. Al-Fahemi
2014-01-01
Quantitative structure-property relationship (QSPR) analysis is performed as a means to predict the octane number of hydrocarbons by correlating properties with parameters calculated from molecular structure; such parameters are molecular mass (M), hydration energy (EH), boiling point (BP), octanol/water distribution coefficient (logP), molar refractivity (MR), critical pressure (CP), critical volume (CV), and critical temperature (CT). Principal component analysis (PCA) and multiple linear regression (MLR) were performed to examine the relationship between these parameters and the octane number of hydrocarbons; correlation coefficients were calculated using MS Excel. The results of the PCA explain the interrelationships between octane number and the different variables. The data set was split into a training set of 40 hydrocarbons and a validation set of 25 hydrocarbons. The linear relationship between the selected descriptors and the octane number has a coefficient of determination R² = 0.932, statistical significance F = 53.21, and standard error s = 7.7. The obtained QSPR model, applied to the validation set, gives R²CV = 0.942 and s = 6.328.
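The MLR step can be sketched with ordinary least squares via the normal equations; the tiny two-descriptor dataset below is invented for illustration, whereas the study used 40 training hydrocarbons and eight descriptors.

```python
def ols_fit(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y.
    X holds rows of descriptors; an intercept column of 1s is prepended."""
    rows = [[1.0] + list(r) for r in X]
    d = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(d)] for i in range(d)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(d)]
    # Gaussian elimination with partial pivoting
    for c in range(d):
        p = max(range(c, d), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, d):
            f = A[r][c] / A[c][c]
            for k in range(c, d):
                A[r][k] -= f * A[c][k]
            b[r] -= f * b[c]
    coef = [0.0] * d
    for r in reversed(range(d)):
        coef[r] = (b[r] - sum(A[r][k] * coef[k] for k in range(r + 1, d))) / A[r][r]
    return coef            # [intercept, slope_1, slope_2, ...]

def r_squared(X, y, coef):
    """Coefficient of determination R^2 of the fitted model."""
    pred = [coef[0] + sum(c * x for c, x in zip(coef[1:], r)) for r in X]
    ym = sum(y) / len(y)
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
    ss_tot = sum((yi - ym) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# illustrative data generated from y = 2 + 3*x0 - x1 (exactly linear)
X_demo = [[0, 0], [1, 0], [0, 1], [1, 1], [2, 1], [1, 2]]
y_demo = [2 + 3 * a - b for a, b in X_demo]
coef = ols_fit(X_demo, y_demo)
r2 = r_squared(X_demo, y_demo, coef)
```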
Predictive models for population performance on real biological fitness landscapes.
Rowe, William; Wedge, David C; Platt, Mark; Kell, Douglas B; Knowles, Joshua
2010-09-01
Directed evolution, in addition to its principal application of obtaining novel biomolecules, offers significant potential as a vehicle for obtaining useful information about the topologies of biomolecular fitness landscapes. In this article, we make use of a special type of model of fitness landscapes, based on finite state machines, which can be inferred from directed evolution experiments. Importantly, the model is constructed only from the fitness data and phylogeny, not sequence or structural information, which is often absent. The model, called a landscape state machine (LSM), has already been used successfully in the evolutionary computation literature to model the landscapes of artificial optimization problems. Here, we use the method for the first time to simulate a biological fitness landscape based on experimental evaluation. We demonstrate in this study that LSMs are capable not only of representing the structure of model fitness landscapes such as NK-landscapes, but also the fitness landscape of real DNA oligomers binding to a protein (allophycocyanin), data we derived from experimental evaluations on microarrays. The LSMs prove adept at modelling the progress of evolution as a function of various controlling parameters, as validated by evaluations on the real landscapes. Specifically, the ability of the model to 'predict' optimal mutation rates and other parameters of the evolution is demonstrated. A modification to the standard LSM also proves accurate at predicting the effects of recombination on the evolution.
The Prediction Model of Dam Uplift Pressure Based on Random Forest
Li, Xing; Su, Huaizhi; Hu, Jiang
2017-09-01
The prediction of dam uplift pressure is of great significance in dam safety monitoring. Based on comprehensive consideration of various factors, 18 parameters are selected as the main factors affecting uplift pressure, and the actual monitoring data of uplift pressure serve as evaluation factors for the prediction model. Dam uplift pressure prediction models are built using the random forest algorithm and support vector machines, and the predictive performance of the two models is compared and analyzed. At the same time, based on the established random forest prediction model, the significance of each factor is analyzed, and the importance of each factor is calculated by the importance function. Results show that: (1) the RF prediction model can quickly and accurately predict the uplift pressure value from the influence factors, with average prediction accuracy above 96%; compared with the support vector machine (SVM) model, the random forest model has better robustness, better prediction precision, and faster convergence, and is more robust to missing and unbalanced data. (2) Water level has the largest effect on uplift pressure, and rainfall the smallest among the considered factors.
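The random-forest side can be sketched with its two standard ingredients, bootstrap sampling and random feature subsets, using one-split regression trees ("stumps") for brevity; real forests grow deep trees, and the data below is synthetic rather than monitoring data.

```python
import random
from statistics import fmean

def fit_stump(X, y, feats):
    """Best single-split regression tree over a feature subset (min SSE)."""
    best = None
    for f in feats:
        for t in sorted({row[f] for row in X})[:-1]:
            left = [yi for row, yi in zip(X, y) if row[f] <= t]
            right = [yi for row, yi in zip(X, y) if row[f] > t]
            ml, mr = fmean(left), fmean(right)
            sse = sum((yi - ml) ** 2 for yi in left) + sum((yi - mr) ** 2 for yi in right)
            if best is None or sse < best[0]:
                best = (sse, f, t, ml, mr)
    return best[1:]

def fit_forest(X, y, n_trees=50, rng=None):
    """Bagged stumps: bootstrap rows, random feature subset per tree."""
    rng = rng or random.Random(0)
    n, d = len(X), len(X[0])
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(n) for _ in range(n)]        # bootstrap sample
        feats = rng.sample(range(d), max(1, d // 2))      # random feature subset
        forest.append(fit_stump([X[i] for i in idx], [y[i] for i in idx], feats))
    return forest

def predict(forest, row):
    """Average the stump predictions."""
    return fmean(ml if row[f] <= t else mr for f, t, ml, mr in forest)

# synthetic demo: only feature 0 drives the target
rng = random.Random(7)
X = [[rng.random() for _ in range(3)] for _ in range(30)]
y = [2.0 * row[0] for row in X]
forest = fit_forest(X, y, n_trees=50, rng=random.Random(0))
pred_low = predict(forest, [0.0, 0.5, 0.5])
pred_high = predict(forest, [1.0, 0.5, 0.5])
```

Counting how often each feature is chosen, weighted by its SSE reduction, gives a crude analogue of the importance function mentioned in the abstract.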
Kriging with mixed effects models
Directory of Open Access Journals (Sweden)
Alessio Pollice
2007-10-01
In this paper, the effectiveness of mixed effects models for estimation and prediction in spatial statistics for continuous data is reviewed in the classical and Bayesian frameworks. A case study on agricultural data is also provided.
Optimality principles for model-based prediction of human gait.
Ackermann, Marko; van den Bogert, Antonie J
2010-04-19
Although humans have a large repertoire of potential movements, gait patterns tend to be stereotypical and appear to be selected according to optimality principles such as minimal energy. When applied to dynamic musculoskeletal models such optimality principles might be used to predict how a patient's gait adapts to mechanical interventions such as prosthetic devices or surgery. In this paper we study the effects of different performance criteria on predicted gait patterns using a 2D musculoskeletal model. The associated optimal control problem for a family of different cost functions was solved utilizing the direct collocation method. It was found that fatigue-like cost functions produced realistic gait, with stance phase knee flexion, as opposed to energy-related cost functions which avoided knee flexion during the stance phase. We conclude that fatigue minimization may be one of the primary optimality principles governing human gait.
Energy Technology Data Exchange (ETDEWEB)
Archbold, T.F.; Bower, R.B.; Polonis, D.H.
1982-04-01
The 1977 version of the Simpson-Puls-Dutton model appears to be the most amenable with respect to utilizing known or readily estimated quantities. The Pardee-Paton model requires extensive calculations involving estimated quantities. Recent observations by Koike and Suzuki on vanadium support the general assumption that crack growth in hydride-forming metals is determined by the rate of hydride formation, and their hydrogen atmosphere-displacive transformation model is of potential interest in explaining hydrogen embrittlement in ferrous alloys as well as hydride formers. The discontinuous nature of cracking due to hydrogen embrittlement appears to depend very strongly on localized stress intensities, thereby pointing to the role of microstructure in influencing crack initiation, fracture mode and crack path. The initiation of hydrogen-induced failures over relatively short periods of time can be characterized with fair reliability using measurements of the threshold stress intensity. The experimental conditions for determining K_Th and ΔK_Th are designed to ensure plane strain conditions in most cases. Plane strain test conditions may be viewed as a conservative basis for predicting delayed failure. The physical configuration of nuclear waste canisters may involve elastic/plastic conditions rather than a state of plane strain, especially with thin-walled vessels. Under these conditions, alternative predictive tests may be considered, including COD and R-curve methods. The double cantilever beam technique employed by Boyer and Spurr on titanium alloys offers advantages for examining hydrogen-induced delayed failure over long periods of time. 88 references. (DLC)
Peacor, Scott D; Peckarsky, Barbara L; Trussell, Geoffrey C; Vonesh, James R
2013-01-01
Defensive modifications in prey traits that reduce predation risk can also have negative effects on prey fitness. Such nonconsumptive effects (NCEs) of predators are common, often quite strong, and can even dominate the net effect of predators. We develop an intuitive graphical model to identify and explore the conditions promoting strong NCEs. The model illustrates two conditions necessary and sufficient for large NCEs: (1) trait change has a large cost, and (2) the benefit of reduced predation outweighs the costs, such as reduced growth rate. A corollary condition is that potential predation in the absence of trait change must be large. In fact, the sum total of the consumptive effects (CEs) and NCEs may be any value bounded by the magnitude of the predation rate in the absence of the trait change. The model further illustrates how, depending on the effect of increased trait change on resulting costs and benefits, any combination of strong and weak NCEs and CEs is possible. The model can also be used to examine how changes in environmental factors (e.g., refuge safety) or variation among predator-prey systems (e.g., different benefits of a prey trait change) affect NCEs. Results indicate that simple rules of thumb may not apply; factors that increase the cost of trait change or that increase the degree to which an animal changes a trait, can actually cause smaller (rather than larger) NCEs. We provide examples of how this graphical model can provide important insights for empirical studies from two natural systems. Implementation of this approach will improve our understanding of how and when NCEs are expected to dominate the total effect of predators. Further, application of the models will likely promote a better linkage between experimental and theoretical studies of NCEs, and foster synthesis across systems.
Data-Driven Modeling and Prediction of Arctic Sea Ice
Kondrashov, Dmitri; Chekroun, Mickael; Ghil, Michael
2016-04-01
We present results of data-driven predictive analyses of sea ice over the main Arctic regions. Our approach relies on the Multilayer Stochastic Modeling (MSM) framework of Kondrashov, Chekroun and Ghil [Physica D, 2015] and it leads to probabilistic prognostic models of sea ice concentration (SIC) anomalies on seasonal time scales. This approach is applied to monthly time series of state-of-the-art data-adaptive decompositions of SIC and selected climate variables over the Arctic. We evaluate the predictive skill of MSM models by performing retrospective forecasts with "no-look ahead" for up to 6-months ahead. It will be shown in particular that the memory effects included intrinsically in the formulation of our non-Markovian MSM models allow for improvements of the prediction skill of large-amplitude SIC anomalies in certain Arctic regions on the one hand, and of September Sea Ice Extent, on the other. Further improvements allowed by the MSM framework will adopt a nonlinear formulation and explore next-generation data-adaptive decompositions, namely modification of Principal Oscillation Patterns (POPs) and rotated Multichannel Singular Spectrum Analysis (M-SSA).
Grundy, John G; Shedden, Judith M
2014-05-01
In the present study, we examine electrophysiological correlates of factors influencing an adjustment in cognitive control known as the bivalency effect. During task switching, the occasional presence of bivalent stimuli in a block of univalent trials is enough to elicit response slowing on all subsequent univalent trials. Bivalent stimuli can be congruent or incongruent with respect to the response afforded by the irrelevant stimulus feature. Here we show that the incongruent bivalency effect, the congruent bivalency effect, and the effect of a simple violation of expectancy are captured at a frontal ERP component (between 300 and 550 ms) associated with ACC activity, and that the unexpectedness effect is distinguished from both congruent and incongruent bivalency effects at an earlier component (100-120 ms) associated with the temporal parietal junction. We suggest that the frontal component reflects the dACC's role in predicting future cognitive load based on recent history. In contrast, the posterior component may index early visual feature extraction in response to bivalent stimuli that cue currently ongoing tasks; dACC activity may trigger the temporal parietal activity only when specific task cueing is involved and not for simple violations of expectancy.
Institute of Scientific and Technical Information of China (English)
刘建; 王琪洁; 张昊
2013-01-01
To address the edge effect that arises when predicting length-of-day (LOD) variations with the least squares plus autoregressive (LS+AR) model, we employed a time series analysis model to extrapolate the LOD series and produce a new series. We then used the new series to solve for the coefficients of the LS model, and finally used the LS+AR model to predict the original LOD series. Comparing the accuracy of LOD prediction by the edge-effect-corrected LS+AR model with that of the standard LS+AR model, we conclude that the correction improves prediction accuracy, especially for medium-term and long-term predictions.
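The LS+AR combination itself (separate from the edge-effect correction, which is the paper's contribution) can be sketched as: fit a deterministic least-squares part, fit an AR model to its residuals, and add the two forecasts. For brevity the LS part here is only a linear trend and the AR part is AR(1) fitted by Yule-Walker; real LOD work uses harmonic terms and higher AR orders.

```python
def fit_ls_ar1(series):
    """Least-squares linear trend plus AR(1) residual model (Yule-Walker)."""
    n = len(series)
    t_mean = (n - 1) / 2
    y_mean = sum(series) / n
    s_tt = sum((t - t_mean) ** 2 for t in range(n))
    slope = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series)) / s_tt
    intercept = y_mean - slope * t_mean
    resid = [y - (intercept + slope * t) for t, y in enumerate(series)]
    # Yule-Walker for AR(1): phi is the lag-1 autocorrelation of the residuals
    phi = sum(a * b for a, b in zip(resid, resid[1:])) / sum(r * r for r in resid)
    return intercept, slope, phi, resid[-1]

def ls_ar_forecast(intercept, slope, phi, last_resid, n, steps):
    """Extrapolate the LS part and damp the last residual with the AR(1)."""
    out = []
    r = last_resid
    for k in range(1, steps + 1):
        r *= phi
        out.append(intercept + slope * (n - 1 + k) + r)
    return out

# demo series: linear trend plus an exactly AR(1)-like residual (-0.5)**t
series = [2 + 0.5 * t + (-0.5) ** t for t in range(20)]
intercept, slope, phi, last = fit_ls_ar1(series)
preds = ls_ar_forecast(intercept, slope, phi, last, len(series), 3)
```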
Modeling and simulation for heavy-duty mecanum wheel platform using model predictive control
Fuad, A. F. M.; Mahmood, I. A.; Ahmad, S.; Norsahperi, N. M. H.; Toha, S. F.; Akmeliawati, R.; Darsivan, F. J.
2017-03-01
This paper presents a study of a control system for a heavy-duty four-Mecanum-wheel platform. A mathematical model of the system is synthesized for the purpose of examining system behavior, including the Mecanum wheel kinematics, AC servo motor, gearbox, and heavy-duty load. The system is tested for velocity control using model predictive control (MPC) and compared with a traditional PID setup. The controller parameters are determined by manual tuning. Model predictive control was found to be more effective at tracking a linear velocity reference.
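The PID baseline can be sketched on a first-order velocity model of a single wheel drive; the motor gain and time constant below are invented for illustration, and the MPC side would add a prediction horizon and constraints on top of the same model.

```python
def simulate_pid(setpoint, kp, ki, kd, steps=200, dt=0.01, tau=0.1, gain=2.0):
    """Discrete PID velocity loop on a first-order drive: dv/dt = (gain*u - v)/tau."""
    v = 0.0
    integral = 0.0
    prev_err = setpoint - v
    for _ in range(steps):
        err = setpoint - v
        integral += err * dt
        derivative = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * derivative
        prev_err = err
        v += dt * (gain * u - v) / tau   # forward-Euler plant update
    return v
```

With these plant numbers, a proportional-only controller settles with a steady-state offset, and adding the integral term removes it, which is the usual motivation for the full PID loop used as the comparison baseline.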
Predicting diabetic nephropathy using a multifactorial genetic model.
Directory of Open Access Journals (Sweden)
Ilana Blech
AIMS: The tendency to develop diabetic nephropathy is, in part, genetically determined; however, this genetic risk is largely undefined. In this proof-of-concept study, we tested the hypothesis that combined analysis of multiple genetic variants can improve prediction. METHODS: Based on previous reports, we selected 27 SNPs in 15 genes from metabolic pathways involved in the pathogenesis of diabetic nephropathy and genotyped them in 1274 Ashkenazi or Sephardic Jewish patients with Type 1 or Type 2 diabetes of >10 years duration. A logistic regression model was built using a backward selection algorithm and SNPs nominally associated with nephropathy in our population. The model was validated by using random "training" (75%) and "test" (25%) subgroups of the original population and by applying the model to an independent dataset of 848 Ashkenazi patients. RESULTS: The logistic model based on 5 SNPs in 5 genes (HSPG2, NOS3, ADIPOR2, AGER, and CCL5) and 5 conventional variables (age, sex, ethnicity, diabetes type, and duration), allowing for all possible two-way interactions, predicted nephropathy in our initial population (C-statistic = 0.672) better than a model based on conventional variables only (C = 0.569). In the independent replication dataset, although the C-statistic of the genetic model decreased (0.576), it remained highly associated with diabetic nephropathy (χ² = 17.79, p<0.0001). In the replication dataset, the model based on conventional variables only was not associated with nephropathy (χ² = 3.2673, p = 0.07). CONCLUSION: In this proof-of-concept study, we developed and validated a genetic model in the Ashkenazi/Sephardic population that predicts nephropathy more effectively than a similarly constructed non-genetic model. Further testing is required to determine if this modeling approach, using an optimally selected panel of genetic markers, can provide clinically useful prediction and if generic models can be
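The two quality measures used above, a fitted logistic model and its C-statistic, can be sketched in a few lines; the gradient-descent fit and the one-feature toy data are our simplifications, whereas the study used backward selection with interaction terms and a validation split.

```python
import math

def fit_logistic(X, y, lr=0.1, epochs=500):
    """Logistic regression by batch gradient descent; returns [bias, w1, ...]."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for row, yi in zip(X, y):
            z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], row))
            p = 1 / (1 + math.exp(-z))
            err = p - yi
            grad[0] += err
            for j, xi in enumerate(row, start=1):
                grad[j] += err * xi
        w = [wi - lr * g / len(X) for wi, g in zip(w, grad)]
    return w

def c_statistic(scores, labels):
    """C-statistic (AUC): probability a random case outscores a random control."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# toy, perfectly separable data
X = [[0.0], [1.0], [2.0], [3.0]]
y = [0, 0, 1, 1]
w = fit_logistic(X, y)
scores = [w[0] + w[1] * row[0] for row in X]
auc = c_statistic(scores, y)
```

A C-statistic of 0.5 corresponds to chance; the abstract's values (0.672 vs 0.569) are modest improvements over that baseline.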
Keye, Stefan; Togiti, Vamish; Eisfeld, Bernhard; Brodersen, Olaf P.; Rivers, Melissa B.
2013-01-01
The accurate calculation of aerodynamic forces and moments is of significant importance during the design phase of an aircraft. Reynolds-averaged Navier-Stokes (RANS) based Computational Fluid Dynamics (CFD) has developed strongly over the last two decades in robustness, efficiency, and capability for aerodynamically complex configurations. Incremental aerodynamic coefficients of different designs can be calculated with acceptable reliability at the cruise design point of transonic aircraft for non-separated flows. But regarding absolute values, as well as increments at off-design conditions, significant challenges still exist in computing aerodynamic data and the underlying flow physics with the required accuracy. In addition to drag, pitching moments are difficult to predict because small deviations in the pressure distributions, e.g. due to neglecting wing bending and twisting caused by the aerodynamic loads, can result in large discrepancies compared to experimental data. Flow separations that start to develop at off-design conditions, e.g. in corner flows, at trailing edges, or shock-induced, can also have a strong impact on the prediction of aerodynamic coefficients. Based on these challenges faced by the CFD community, a working group of the AIAA Applied Aerodynamics Technical Committee initiated the CFD Drag Prediction Workshop (DPW) series in 2001, resulting in five international workshops. The results of the participants and the committee are summarized in more than 120 papers. The latest, fifth workshop took place in June 2012 in conjunction with the 30th AIAA Applied Aerodynamics Conference. This paper evaluates the influence of static aeroelastic wing deformations on pressure distributions and overall aerodynamic coefficients, based on the NASA finite element structural model and the common grids.
Predictability in models of the atmospheric circulation.
Houtekamer, P.L.
1992-01-01
It will be clear from the above discussions that skill forecasts are still in their infancy. Operational skill predictions do not exist. One is still struggling to prove that skill predictions, at any range, have any quality at all. It is not clear what the statistics of the analysis error are. The
Inverter Modeling For Accurate Energy Predictions Of Tracking HCPV Installations
Bowman, J.; Jensen, S.; McDonald, Mark
2010-10-01
High efficiency high concentration photovoltaic (HCPV) solar plants of megawatt scale are now operational, and opportunities for expanded adoption are plentiful. However, effective bidding for sites requires reliable prediction of energy production. HCPV module nameplate power is rated for specific test conditions; however, instantaneous HCPV power varies due to site specific irradiance and operating temperature, and is degraded by soiling, protective stowing, shading, and electrical connectivity. These factors interact with the selection of equipment typically supplied by third parties, e.g., wire gauge and inverters. We describe a time sequence model accurately accounting for these effects that predicts annual energy production, with specific reference to the impact of the inverter on energy output and interactions between system-level design decisions and the inverter. We will also show two examples, based on an actual field design, of inverter efficiency calculations and the interaction between string arrangements and inverter selection.
Addressing Conceptual Model Uncertainty in the Evaluation of Model Prediction Errors
Carrera, J.; Pool, M.
2014-12-01
Model predictions are uncertain because of errors in model parameters, future forcing terms, and model concepts. The latter remain the largest and most difficult to assess source of uncertainty in long-term model predictions. We first review existing methods to evaluate conceptual model uncertainty. We argue that they are highly sensitive to the ingenuity of the modeler, in the sense that they rely on the modeler's ability to propose alternative model concepts. Worse, we find that the standard practice of stochastic methods leads to poor, potentially biased, and often too optimistic estimation of actual model errors. This is bad news because stochastic methods are purported to properly represent uncertainty. We contend that the problem does not lie in the stochastic approach itself, but in the way it is applied. Specifically, stochastic inversion methodologies, which demand quantitative information, tend to ignore geological understanding, which is conceptually rich. We illustrate some of these problems with an application to the Mar del Plata aquifer, where extensive data are available for nearly a century. Geologically based models, where spatial variability is handled through zonation, yield calibration fits similar to geostatistically based models, but much better predictions. In fact, the appearance of the stochastic T fields resembles the geologically based models only in areas with a high density of data. We take this finding to illustrate the ability of stochastic models to accommodate many data, but also, ironically, their inability to address conceptual model uncertainty. In fact, stochastic model realizations tend to be too close to the "most likely" one (i.e., they do not really realize the full conceptual uncertainty). The second part of the presentation argues that acknowledging model uncertainty may lead to qualitatively different decisions than just working with "most likely" model predictions. Therefore, efforts should concentrate on
Dinucleotide controlled null models for comparative RNA gene prediction
Directory of Open Access Journals (Sweden)
Gesell Tanja
2008-05-01
Background: Comparative prediction of RNA structures can be used to identify functional noncoding RNAs in genomic screens. It was shown recently by Babak et al. [BMC Bioinformatics 8:33] that RNA gene prediction programs can be biased by the genomic dinucleotide content, in particular those programs using a thermodynamic folding model that includes stacking energies. As a consequence, dinucleotide-preserving control strategies are needed to assess the significance of such predictions. While randomization algorithms for single sequences have existed for many years, the problem has remained challenging for multiple alignments, and no algorithm was previously available. Results: We present a program called SISSIz that simulates multiple alignments of a given average dinucleotide content. Meeting additional requirements of an accurate null model, the randomized alignments are on average of the same sequence diversity and preserve local conservation and gap patterns. We make use of a phylogenetic substitution model that includes overlapping dependencies and site-specific rates. Using fast heuristics and a distance-based approach, a tree is estimated under this model and used to guide the simulations. The new algorithm is tested on vertebrate genomic alignments, and the effect on RNA structure predictions is studied. In addition, we directly combined the new null model with the RNAalifold consensus folding algorithm, giving a new variant of a thermodynamic-structure-based RNA gene finding program that is not biased by dinucleotide content. Conclusion: SISSIz implements an efficient algorithm to randomize multiple alignments while preserving dinucleotide content. It can be used to obtain more accurate estimates of false positive rates of existing programs, to produce negative controls for the training of machine-learning-based programs, or as a standalone RNA gene finding program. Other applications in comparative genomics that require
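The bias SISSIz addresses is easy to demonstrate: a plain mononucleotide shuffle preserves base composition but destroys dinucleotide content, which is exactly what stacking-energy-based predictors are sensitive to. (SISSIz itself uses a phylogenetic simulation, not this naive shuffle; the sequence below is a contrived example.)

```python
import random
from collections import Counter

def dinucleotide_counts(seq):
    """Counts of overlapping dinucleotides, e.g. 'GCGC' -> {GC: 2, CG: 1}."""
    return Counter(seq[i:i + 2] for i in range(len(seq) - 1))

seq = "GCGCGCATATAT" * 5          # strongly structured dinucleotide content
chars = list(seq)
random.Random(1).shuffle(chars)   # mononucleotide shuffle
shuffled = "".join(chars)
```

A dinucleotide-preserving control must therefore do more than permute characters: it has to conserve the full table of adjacent-pair counts, which is what motivates the Eulerian-path-style shuffles used for single sequences and SISSIz's simulation approach for alignments.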
A Model to Predict the Risk of Keratinocyte Carcinomas.
Whiteman, David C; Thompson, Bridie S; Thrift, Aaron P; Hughes, Maria-Celia; Muranushi, Chiho; Neale, Rachel E; Green, Adele C; Olsen, Catherine M
2016-06-01
Basal cell and squamous cell carcinomas of the skin are the commonest cancers in humans, yet no validated tools exist to estimate future risks of developing keratinocyte carcinomas. To develop a prediction tool, we used baseline data from a prospective cohort study (n = 38,726) in Queensland, Australia, and used data linkage to capture all surgically excised keratinocyte carcinomas arising within the cohort. Predictive factors were identified through stepwise logistic regression models. In secondary analyses, we derived separate models within strata of prior skin cancer history, age, and sex. The primary model included terms for 10 items. Factors with the strongest effects were >20 prior skin cancers excised (odds ratio 8.57, 95% confidence interval [95% CI] 6.73-10.91), >50 skin lesions destroyed (odds ratio 3.37, 95% CI 2.85-3.99), age ≥ 70 years (odds ratio 3.47, 95% CI 2.53-4.77), and fair skin color (odds ratio 1.75, 95% CI 1.42-2.15). Discrimination in the validation dataset was high (area under the receiver operator characteristic curve 0.80, 95% CI 0.79-0.81) and the model appeared well calibrated. Among those reporting no prior history of skin cancer, a similar model with 10 factors predicted keratinocyte carcinoma events with reasonable discrimination (area under the receiver operator characteristic curve 0.72, 95% CI 0.70-0.75). Algorithms using self-reported patient data have high accuracy for predicting risks of keratinocyte carcinomas.
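A crude way to read the reported odds ratios together is to multiply them (i.e., add on the log-odds scale), ignoring interactions and calibration; the actual tool is a fitted 10-item model, so this sketch only illustrates the arithmetic behind combining the strongest factors.

```python
import math

# odds ratios reported in the abstract (primary model, strongest factors)
ODDS_RATIOS = {
    ">20 prior skin cancers excised": 8.57,
    ">50 skin lesions destroyed": 3.37,
    "age >= 70 years": 3.47,
    "fair skin color": 1.75,
}

def combined_odds_ratio(factors, table=ODDS_RATIOS):
    """Odds relative to baseline, assuming independent multiplicative effects."""
    return math.exp(sum(math.log(table[f]) for f in factors))
```

Converting such odds into an absolute risk additionally requires the model intercept (baseline risk), which the abstract does not report.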
Initial Integration of Noise Prediction Tools for Acoustic Scattering Effects
Nark, Douglas M.; Burley, Casey L.; Tinetti, Ana; Rawls, John W.
2008-01-01
This effort provides an initial glimpse at NASA capabilities available in predicting the scattering of fan noise from a non-conventional aircraft configuration. The Aircraft NOise Prediction Program, Fast Scattering Code, and the Rotorcraft Noise Model were coupled to provide increased-fidelity models of scattering effects on engine fan noise sources. The integration of these codes led to the identification of several key issues entailed in applying such multi-fidelity approaches. In particular, for prediction at noise certification points, the inclusion of distributed sources leads to complications with the source semi-sphere approach. Computational resource requirements limit the use of the higher fidelity scattering code to predicting radiated sound pressure levels for full scale configurations at relevant frequencies. Finally, the ability to more accurately represent complex shielding surfaces in current lower fidelity models is necessary for general application to scattering predictions. This initial step in determining the potential benefits/costs of these new methods over the existing capabilities illustrates a number of the issues that must be addressed in the development of next-generation aircraft system noise prediction tools.
Lepilleur, Carole; Mullay, John; Kyer, Carol; McCalister, Pam; Clifford, Ted
2011-01-01
Formulation composition has a dramatic influence on coacervate formation in conditioning shampoo. The purpose of this study is to correlate the amount of coacervate formed by novel cationic cassia polymers with the corresponding conditioning profiles on European brown hair, using silicone deposition, cationic polymer deposition and sensory evaluation. A design of experiments was conducted by varying the levels of three surfactants (sodium lauryl ether sulfate, sodium lauryl sulfate, and cocamidopropyl betaine) in formulations containing cationic cassia polymers of different cationic charge densities (1.7 and 3.0 mEq/g). The results show that formulation composition dramatically affects physical properties, coacervation, silicone deposition, cationic polymer deposition and hair sensory attributes. In particular, three parameters are important in determining silicone deposition: polymer charge, surfactant (micelle) charge and total amount of surfactant (micelle aspect ratio). Both sensory panel testing and silicone deposition results can be predicted with a high confidence level using statistical models that incorporate these parameters.
Survival model construction guided by fit and predictive strength.
Chauvel, Cécile; O'Quigley, John
2016-10-05
Survival model construction can be guided by goodness-of-fit techniques as well as measures of predictive strength. Here, we aim to bring together these distinct techniques within a single framework. The goal is to best characterize and code the effects of the variables, in particular time dependencies, when taken either singly or in combination with other related covariates. Simple graphical techniques can provide an immediate visual indication of goodness-of-fit and, in cases of departure from model assumptions, will point in the direction of a more involved and richer alternative model. These techniques appear to be intuitive. This intuition is backed up by formal theorems that underlie the process of building richer models from simpler ones. Measures of predictive strength are used in conjunction with these goodness-of-fit techniques and, again, formal theorems show that these measures can help identify the models closest to the unknown non-proportional hazards mechanism that we may suppose generates the observations. Illustrations from studies in breast cancer show how these tools can help guide the practical problem of efficient model construction for survival data.
Allostasis: a model of predictive regulation.
Sterling, Peter
2012-04-12
The premise of the standard regulatory model, "homeostasis", is flawed: the goal of regulation is not to preserve constancy of the internal milieu. Rather, it is to continually adjust the milieu to promote survival and reproduction. Regulatory mechanisms need to be efficient, but homeostasis (error-correction by feedback) is inherently inefficient. Thus, although feedbacks are certainly ubiquitous, they could not possibly serve as the primary regulatory mechanism. A newer model, "allostasis", proposes that efficient regulation requires anticipating needs and preparing to satisfy them before they arise. The advantages: (i) errors are reduced in magnitude and frequency; (ii) response capacities of different components are matched -- to prevent bottlenecks and reduce safety factors; (iii) resources are shared between systems to minimize reserve capacities; (iv) errors are remembered and used to reduce future errors. This regulatory strategy requires a dedicated organ, the brain. The brain tracks multitudinous variables and integrates their values with prior knowledge to predict needs and set priorities. The brain coordinates effectors to mobilize resources from modest bodily stores and enforces a system of flexible trade-offs: from each organ according to its ability, to each organ according to its need. The brain also helps regulate the internal milieu by governing anticipatory behavior. Thus, an animal conserves energy by moving to a warmer place - before it cools, and it conserves salt and water by moving to a cooler one before it sweats. The behavioral strategy requires continuously updating a set of specific "shopping lists" that document the growing need for each key component (warmth, food, salt, water). These appetites funnel into a common pathway that employs a "stick" to drive the organism toward filling the need, plus a "carrot" to relax the organism when the need is satisfied. The stick corresponds broadly to the sense of anxiety, and the carrot broadly to
Muñoz-Rojas, Miriam; Doro, Luca; Ledda, Luigi; Francaviglia, Rosa
2014-05-01
CarboSOIL is an empirical model based on regression techniques and developed to predict soil organic carbon contents (SOC) at standard soil depths of 0-25, 25-50 and 50-75 cm (Muñoz-Rojas et al., 2013). The model was applied to a study area of north-eastern Sardinia (Italy) (40° 46'N, 9° 10'E, mean altitude 285 m a.s.l.), characterized by extensive agro-silvo-pastoral systems which are typical of similar areas of the Mediterranean basin (e.g. the Iberian peninsula). The area has the same soil type (Haplic Endoleptic Cambisols, Dystric according to WRB), while cork oak forest (Quercus suber L.) is the potential native vegetation which has been converted to managed land with pastures and vineyards in recent years (Lagomarsino et al., 2011; Francaviglia et al., 2012; Bagella et al., 2013; Francaviglia et al., 2014). Six land uses with different levels of cropping intensification were compared: Tilled vineyards (TV); No-tilled grassed vineyards (GV); Hay crop (HC); Pasture (PA); Cork oak forest (CO) and Semi-natural systems (SN). The HC land use includes oats, Italian ryegrass and annual clovers or vetch for 5 years and intercropped by spontaneous herbaceous vegetation in the sixth year. The PA land use is 5 years of spontaneous herbaceous vegetation, and one year of intercropping with oats, Italian ryegrass and annual clovers or vetch cultivated as a hay crop. The SN land use (scrublands, Mediterranean maquis and Helichrysum meadows) arises from the natural re-vegetation of former vineyards which have been set aside, probably due to the low grape yields and the high cost of modern tillage equipment. Both PA and HC are grazed for some months during the year, and include scattered cork-oak trees, which are key components of the 'Dehesa'-type landscape (grazing system with Quercus L.) typical of this area of Sardinia and other areas of southern Mediterranean Europe. Dehesas are often converted to more profitable land uses such as vineyards (Francaviglia et al., 2012; Mu
Required Collaborative Work in Online Courses: A Predictive Modeling Approach
Smith, Marlene A.; Kellogg, Deborah L.
2015-01-01
This article describes a predictive model that assesses whether a student will have greater perceived learning in group assignments or in individual work. The model produces correct classifications 87.5% of the time. The research is notable in that it is the first in the education literature to adopt a predictive modeling methodology using data…
A prediction model for assessing residential radon concentration in Switzerland
Hauri, D.D.; Huss, A.; Zimmermann, F.; Kuehni, C.E.; Roosli, M.
2012-01-01
Indoor radon is regularly measured in Switzerland. However, a nationwide model to predict residential radon levels has not been developed. The aim of this study was to develop a prediction model to assess indoor radon concentrations in Switzerland. The model was based on 44,631 measurements from the
Study of Red Tide Prediction Model for the Changjiang Estuary
Institute of Scientific and Technical Information of China (English)
Anonymous
2000-01-01
Based on field data from red tide water quality monitoring at the Changjiang River mouth and the Hutoudu mariculture area in Zhejiang Province (May to August 1995 and May to September 1996), this paper presents an effective model for short-term prediction of red tide in the Changjiang Estuary. The measured parameters include: depth, temperature, color, diaphaneity, density, DO, COD and nutrients (PO4-P, NO2-N, NO3-N, NH4-N). The model was checked against field-test data and compared with other related models. The model Z = SAL - 3.95 DO - 2.974 PH - 5.421 PO4-P is suitable for application to the Shengsi aquaculture area near the Changjiang Estuary.
DEFF Research Database (Denmark)
Gao, Jie; Wang, Yi; Wargocki, Pawel
2015-01-01
In this paper, a comparative analysis was performed on the human thermal sensation estimated by modified predicted mean vote (PMV) models and modified standard effective temperature (SET) models in naturally ventilated buildings; the data were collected in a field study. These prediction models were developed on the basis of the original PMV/SET models and consider the influence of occupants' expectations and human adaptive functions, including the extended PMV/SET models and the adaptive PMV/SET models. The results showed that when the indoor air velocity ranged from 0 to 0.2 m/s and from 0.2 to 0.8 m... the difference between the measured and predicted values using the modified PMV models exceeded 25%, while the difference between the measured thermal sensation and the predicted thermal sensation using modified SET models was approximately less than 25%. It is concluded that the modified SET models can predict human...
Predicting the Probability of Lightning Occurrence with Generalized Additive Models
Fabsic, Peter; Mayr, Georg; Simon, Thorsten; Zeileis, Achim
2017-04-01
This study investigates the predictability of lightning in complex terrain. The main objective is to estimate the probability of lightning occurrence in the Alpine region during summertime afternoons (12-18 UTC) at a spatial resolution of 64 × 64 km². Lightning observations are obtained from the ALDIS lightning detection network. The probability of lightning occurrence is estimated using generalized additive models (GAM). GAMs provide a flexible modelling framework to estimate the relationship between covariates and the observations. The covariates, besides spatial and temporal effects, include numerous meteorological fields from the ECMWF ensemble system. The optimal model is chosen based on a forward selection procedure with out-of-sample mean squared error as a performance criterion. Our investigation shows that convective precipitation and mid-layer stability are the most influential meteorological predictors. Both exhibit intuitive, non-linear trends: higher values of convective precipitation indicate higher probability of lightning, and large values of the mid-layer stability measure imply low lightning potential. The performance of the model was evaluated against a climatology model containing both spatial and temporal effects. Taking the climatology model as a reference forecast, our model attains a Brier Skill Score of approximately 46%. The model's performance can be further enhanced by incorporating the information about lightning activity from the previous time step, which yields a Brier Skill Score of 48%. These scores show that the method is able to extract valuable information from the ensemble to produce reliable spatial forecasts of the lightning potential in the Alps.
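The Brier Skill Score used above has a simple concrete form: one minus the ratio of the model's Brier score to that of a reference forecast. The outcomes and forecast probabilities below are toy numbers, with climatology (the base rate) as the reference.

```python
import numpy as np

def brier(p, y):
    """Mean squared error of probability forecasts against 0/1 outcomes."""
    p, y = np.asarray(p, float), np.asarray(y, float)
    return np.mean((p - y) ** 2)

def brier_skill_score(p_model, p_ref, y):
    """BSS = 1 - BS_model / BS_ref; positive values mean skill over the reference."""
    return 1.0 - brier(p_model, y) / brier(p_ref, y)

y = np.array([1, 0, 0, 1, 1, 0, 0, 0])               # lightning yes/no
p_clim = np.full_like(y, 3 / 8, dtype=float)          # climatological base rate
p_gam = np.array([0.9, 0.2, 0.1, 0.7, 0.8, 0.3, 0.1, 0.2])  # hypothetical GAM output
bss = brier_skill_score(p_gam, p_clim, y)
```

A BSS near the paper's 46% would mean the model roughly halves the climatology's Brier score.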
Predicting plants - modeling traits as a function of environment
Franklin, Oskar
2016-04-01
A central problem in understanding and modeling vegetation dynamics is how to represent the variation in plant properties and function across different environments. Addressing this problem, there is a strong trend towards trait-based approaches, where vegetation properties are functions of the distributions of functional traits rather than of species. Recently there has been enormous progress in quantifying trait variability and its drivers and effects (Van Bodegom et al. 2012; Adler et al. 2014; Kunstler et al. 2015) based on wide-ranging datasets on a small number of easily measured traits, such as specific leaf area (SLA), wood density and maximum plant height. However, plant function depends on many other traits, and while the commonly measured trait data are valuable, they are not sufficient for driving predictive and mechanistic models of vegetation dynamics, especially under novel climate or management conditions. For this purpose we need a model to predict functional traits, also those not easily measured, and how they depend on the plants' environment. Here I present such a mechanistic model based on fitness concepts and focused on traits related to water and light limitation of trees, including: wood density, drought response, allocation to defense, and leaf traits. The model is able to predict observed patterns of variability in these traits in relation to growth and mortality, and their responses to a gradient of water limitation. The results demonstrate that it is possible to mechanistically predict plant traits as a function of the environment based on an eco-physiological model of plant fitness. References Adler, P.B., Salguero-Gómez, R., Compagnoni, A., Hsu, J.S., Ray-Mukherjee, J., Mbeau-Ache, C. et al. (2014). Functional traits explain variation in plant life-history strategies. Proc. Natl. Acad. Sci. U. S. A., 111, 740-745. Kunstler, G., Falster, D., Coomes, D.A., Hui, F., Kooyman, R.M., Laughlin, D.C. et al. (2015). Plant functional traits
Testing 40 Predictions from the Transtheoretical Model Again, with Confidence
Velicer, Wayne F.; Brick, Leslie Ann D.; Fava, Joseph L.; Prochaska, James O.
2013-01-01
Testing Theory-based Quantitative Predictions (TTQP) represents an alternative to traditional Null Hypothesis Significance Testing (NHST) procedures and is more appropriate for theory testing. The theory generates explicit effect size predictions and these effect size estimates, with related confidence intervals, are used to test the predictions.…
Effectiveness of predictive factors of canine intubation
Directory of Open Access Journals (Sweden)
Víctor Molina D
2017-01-01
Objective. To determine the effectiveness of predictors of airway intubation and their prognostic value according to skull morphology in dogs. Materials and methods. We performed a descriptive, observational study in two veterinary clinics in the city of Medellín, Colombia. 74 dogs were evaluated randomly. All underwent the Mallampati, Patil-Aldreti and Cormack-Lehane scales and sternum-to-chin distance measurement, stratified by skull morphology. Tukey tests (p≤0.05) were performed on skull morphology and the predictive scales, in addition to principal component analysis of predictive scale and breed. Results. The Mallampati scale showed statistically significant differences among brachycephalic, dolichocephalic and mesocephalic dogs (p=0.00), with brachycephalic dogs showing difficult intubation; similar Cormack-Lehane differences were found between brachycephalic dogs and the other two groups, with brachycephalic dogs again exhibiting difficult intubation. The Patil-Aldreti test revealed that brachycephalic dogs differed statistically from the others, with moderate difficulty. The sternum-to-chin distance showed no divergence among the three groups. Assessment of intubation predictors found that 13.51% of dogs had difficult intubation, 37.83% moderate difficulty, and 47.29% slight difficulty. The average number of intubation attempts was 1.83 and the average time was 123.43 sec. Principal component analysis determined that the Bulldog breed is predicted by Mallampati and Patil-Aldreti, while the Pinscher is predicted by Cormack-Lehane. Mixed breeds were not influenced by any predictor. Conclusions. Brachycephalic canines are those with the greatest difficulty of intubation, and Mallampati is the main predictive factor.
Prediction for Major Adverse Outcomes in Cardiac Surgery: Comparison of Three Prediction Models
Directory of Open Access Journals (Sweden)
Cheng-Hung Hsieh
2007-09-01
Conclusion: The Parsonnet score performed as well as the logistic regression models in predicting major adverse outcomes. The Parsonnet score appears to be a very suitable model for clinicians to use in risk stratification of cardiac surgery.
On hydrological model complexity, its geometrical interpretations and prediction uncertainty
Arkesteijn, E.C.M.M.; Pande, S.
2013-01-01
Knowledge of hydrological model complexity can aid selection of an optimal prediction model out of a set of available models. Optimal model selection is formalized as selection of the least complex model out of a subset of models that have lower empirical risk. This may be considered equivalent to
Predicting life satisfaction of the Angolan elderly: a structural model.
Gutiérrez, M; Tomás, J M; Galiana, L; Sancho, P; Cebrià, M A
2013-01-01
Satisfaction with life is of particular interest in the study of old age well-being because it has arisen as an important component of old age. A considerable amount of research has been done to explain life satisfaction in the elderly, and there is growing empirical evidence on the best predictors of life satisfaction. This research evaluates the predictive power of some aging process variables on Angolan elderly people's life satisfaction, while including perceived health in the model. Data for this research come from a cross-sectional survey of elderly people living in the capital of Angola, Luanda. A total of 1003 Angolan elderly were surveyed on socio-demographic information, perceived health, active engagement, generativity, and life satisfaction. A Multiple Indicators Multiple Causes model was built to test the variables' predictive power on life satisfaction. The estimated theoretical model fitted the data well. The main predictors were those related to active engagement with others. Perceived health also had a significant and positive effect on life satisfaction. Several processes together may predict life satisfaction in the elderly population of Angola, and the variance accounted for is large enough to be considered relevant. The key factor associated with life satisfaction seems to be active engagement with others.
Learning Predictive Movement Models From Fabric-Mounted Wearable Sensors.
Michael, Brendan; Howard, Matthew
2016-12-01
The measurement and analysis of human movement for applications in clinical diagnostics or rehabilitation are often performed in a laboratory setting using static motion capture devices. A growing interest in analyzing movement in everyday environments (such as the home) has prompted the development of "wearable sensors", with the most recent wearable sensors being embedded directly into clothing. A major issue, however, with the use of these fabric-embedded sensors is the undesired effect of fabric motion artefacts corrupting movement signals. In this paper, a nonparametric method is presented for learning body movements, viewing the undesired motion as stochastic perturbations to the sensed motion, and using orthogonal regression techniques to form predictive models of the wearer's motion that eliminate these errors in the learning process. Experiments in this paper show that standard nonparametric learning techniques underperform in this fabric motion context and that improved prediction accuracy can be achieved by using orthogonal regression techniques. Modelling this motion artefact problem as a stochastic learning problem shows an average 77% decrease in prediction error in a body pose task using fabric-embedded sensors, compared to a kinematic model.
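Orthogonal regression differs from ordinary least squares in minimizing perpendicular rather than vertical distances, which is what makes it appropriate when the inputs themselves are noisy, as with fabric-perturbed sensor readings. A minimal total-least-squares line fit via SVD, on toy data (this is a generic sketch, not the paper's full method):

```python
import numpy as np

def tls_line(x, y):
    """Fit y = a*x + b minimizing orthogonal (perpendicular) distances."""
    X = np.column_stack([x - x.mean(), y - y.mean()])
    # The right singular vector with the smallest singular value is the
    # normal of the best-fit line through the centroid.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    nx, ny = vt[-1]
    a = -nx / ny
    b = y.mean() - a * x.mean()
    return a, b

rng = np.random.default_rng(2)
x_true = np.linspace(0.0, 1.0, 200)
y_true = 2.0 * x_true + 0.5
x = x_true + rng.normal(scale=0.01, size=200)   # noise in the input too
y = y_true + rng.normal(scale=0.01, size=200)
a, b = tls_line(x, y)
```

With noise in both coordinates, the orthogonal fit recovers the underlying slope and intercept without the attenuation bias ordinary least squares would introduce.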
An approach to model validation and model-based prediction -- polyurethane foam case study.
Energy Technology Data Exchange (ETDEWEB)
Dowding, Kevin J.; Rutherford, Brian Milne
2003-07-01
analyses and hypothesis tests as a part of the validation step to provide feedback to analysts and modelers. Decisions on how to proceed in making model-based predictions are made based on these analyses together with the application requirements. Updating, modifying, and understanding the boundaries associated with the model are also assisted through this feedback. (4) We include a "model supplement term" when model problems are indicated. This term provides a (bias) correction to the model so that it will better match the experimental results and more accurately account for uncertainty. Presumably, as the models continue to develop and are used for future applications, the causes for these apparent biases will be identified and the need for this supplementary modeling will diminish. (5) We use a response-modeling approach for our predictions that allows for general types of prediction and for assessment of prediction uncertainty. This approach is demonstrated through a case study supporting the assessment of a weapon's response when subjected to a hydrocarbon fuel fire. The foam decomposition model provides an important element of the response of a weapon system in this abnormal thermal environment. Rigid foam is used to encapsulate critical components in the weapon system, providing the needed mechanical support as well as thermal isolation. Because the foam begins to decompose at temperatures above 250 C, modeling the decomposition is critical to assessing a weapon's response. In the validation analysis it is indicated that the model tends to "exaggerate" the effect of temperature changes when compared to the experimental results. The data, however, are too few and too restricted in terms of experimental design to make confident statements regarding modeling problems. For illustration, we assume these indications are correct and compensate for this apparent bias by constructing a model supplement term for use in the model
Jones, L.; Muhlfeld, C.; Marshall, L. A.
2013-12-01
Climate trends and projections have prompted interest in assessing the thermal sensitivity of aquatic species. How species will adapt and respond to these changes is uncertain; however, climatic and hydrologic changes may shift species habitat distributions and physiological functions both spatially and temporally. This is particularly true for salmonids (e.g., trout, char, and salmon), which are cold-water species strongly influenced by changes in temperature, flow, and physical habitat conditions. Therefore, understanding how habitats are likely to change and how species may respond to changes in climatic conditions is critical for developing conservation and management strategies. The purpose of this study is to develop a high-resolution stream temperature model for the Crown of the Continent Ecosystem (CCE) to simulate potential climate change impacts on thermal regimes throughout the riverscape. A spatially explicit statistical regression model is coupled with high-resolution climate data such as air temperature, precipitation, solar radiation, baseflow and surface runoff. This empirically based model is used to predict daily stream temperatures under historic, current and forecasted climate conditions. The model is parameterized with empirical stream temperature data, which have been gathered from agencies across the region. The current database of empirical stream temperature data consists of over 800 sites throughout the CCE, which provide time series data to the model application. The biological integration and application of this model is on bull trout (Salvelinus confluentus) populations within the CCE. The model will be used to assess species vulnerabilities caused by spatial and temporal changes in stream temperature and hydrology. By evaluating the magnitude, timing and duration of climatic changes on the riverscape, we can more accurately assess potential vulnerabilities of critical life history traits, such as growth potential, spawning migrations
Predictive modeling of dental pain using neural network.
Kim, Eun Yeob; Lim, Kun Ok; Rhee, Hyun Sill
2009-01-01
The mouth, used for ingesting food, is a basic and important part of the body. In this study, dental pain was predicted with a neural network model. The fit of the predictive model of dental pain factors was 80.0%. For people identified by the neural network model as likely to experience dental pain, preventive measures including proper eating habits, education on oral hygiene, and stress release should precede any dental treatment.
Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics
Directory of Open Access Journals (Sweden)
Cecilia Noecker
2015-03-01
Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1–2 weeks post-infection. Several interventions against HIV, including vaccines and antiretroviral prophylaxis, target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because predictions of such models have rarely been compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the "standard" mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to provide the observed rate of SIV load increase in the blood of infected animals. Interestingly, and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode of virus production. Second, this mathematical model was not able to accurately describe the change in experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral
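The "standard" model extended with an eclipse class, as described above, can be sketched as a small ODE system: target cells T, eclipse-phase infected cells E transitioning into producers I, and free virus V. All parameter values here are illustrative placeholders, not fitted SIV estimates.

```python
import numpy as np
from scipy.integrate import solve_ivp

def viral_dynamics(t, s, beta, k, delta, p, c):
    T, E, I, V = s
    dT = -beta * T * V          # target cells infected by free virus
    dE = beta * T * V - k * E   # eclipse phase: infected but not yet producing
    dI = k * E - delta * I      # productively infected cells
    dV = p * I - c * V          # virion production and clearance
    return [dT, dE, dI, dV]

# Start from 10^6 target cells and a single unit of virus; 14-day window.
sol = solve_ivp(viral_dynamics, (0.0, 14.0), [1e6, 0.0, 0.0, 1.0],
                args=(1e-7, 1.0, 0.5, 100.0, 3.0),
                rtol=1e-8, atol=1e-8)
V_peak = sol.y[3].max()         # peak viral load over the simulated window
```

With these placeholder parameters the basic reproduction number exceeds one, so the virus expands and target cells are depleted, mimicking the early growth phase the abstract discusses.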
Contact prediction in protein modeling: Scoring, folding and refinement of coarse-grained models
Directory of Open Access Journals (Sweden)
Kolinski Andrzej
2008-08-01
Abstract Background Several different methods for contact prediction succeeded within the Sixth Critical Assessment of Techniques for Protein Structure Prediction (CASP6). The most relevant were non-local contact predictions for targets from the most difficult categories: fold recognition-analogy and new fold. Such contacts could provide valuable structural information in cases where a template structure cannot be found in the PDB. Results We described comprehensive tests of the effectiveness of contact data in various aspects of de novo modeling with CABS, an algorithm which was used successfully in CASP6 by the Kolinski-Bujnicki group. We used the predicted contacts in a simple scoring function for the post-simulation ranking of protein models and as a soft bias in the folding simulations and in the fold-refinement procedure. The latter approach turned out to be the most successful. The CABS force field used in the Replica Exchange Monte Carlo simulations cooperated with the true contacts and discriminated the false ones, which resulted in an improvement of the majority of Kolinski-Bujnicki's protein models. In the modeling we tested different sets of predicted contact data submitted to the CASP6 server. According to our results, the best performing were the contacts with the accuracy balanced with the coverage, obtained either from the best two predictors only or by a consensus from as many predictors as possible. Conclusion Our tests have shown that theoretically predicted contacts can be very beneficial for protein structure prediction. Depending on the protein modeling method, a contact data set applied should be prepared with differently balanced coverage and accuracy of predicted contacts. Namely, high coverage of contact data is important for the model ranking and high accuracy for the folding simulations.
Regional differences in prediction models of lung function in Germany
Directory of Open Access Journals (Sweden)
Schäper Christoph
2010-04-01
Abstract Background Little is known about the influencing potential of specific characteristics on lung function in different populations. The aim of this analysis was to determine whether lung function determinants differ between subpopulations within Germany and whether prediction equations developed for one subpopulation are also adequate for another subpopulation. Methods Within three studies (KORA C, SHIP-I, ECRHS-I) in different areas of Germany, 4059 adults performed lung function tests. The available data consisted of forced expiratory volume in one second, forced vital capacity and peak expiratory flow rate. For each study multivariate regression models were developed to predict lung function, and Bland-Altman plots were established to evaluate the agreement between predicted and measured values. Results The final regression equations for FEV1 and FVC showed adjusted r-squared values between 0.65 and 0.75, and for PEF they were between 0.46 and 0.61. In all studies gender, age, height and pack-years were significant determinants, each with a similar effect size. Regarding other predictors there were some, although not statistically significant, differences between the studies. Bland-Altman plots indicated that the regression models for each individual study adequately predict medium (i.e. normal, but not extremely high or low) lung function values in the whole study population. Conclusions Simple models with gender, age and height explain a substantial part of lung function variance, whereas further determinants add less than 5% to the total explained r-squared, at least for FEV1 and FVC. Thus, for different adult subpopulations of Germany one simple model for each lung function measure is still sufficient.
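The Bland-Altman comparison used in the study reduces to two numbers: the mean difference (bias) between predicted and measured values and the 95% limits of agreement around it. The FEV1-like values below are simulated for illustration.

```python
import numpy as np

def bland_altman(measured, predicted):
    """Return bias and 95% limits of agreement for predicted vs. measured."""
    measured = np.asarray(measured, float)
    predicted = np.asarray(predicted, float)
    diff = predicted - measured
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

rng = np.random.default_rng(3)
measured = rng.normal(3.5, 0.6, size=300)                 # e.g. FEV1 in litres
predicted = measured + rng.normal(0.05, 0.2, size=300)    # small systematic bias
bias, loa_low, loa_high = bland_altman(measured, predicted)
```

A prediction equation "adequate for another subpopulation" would show a bias near zero and narrow limits of agreement when applied to that population's measurements.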
Modeling of Pressure Effects in HVDC Cables
DEFF Research Database (Denmark)
Szabo, Peter; Hassager, Ole; Strøbech, Esben
1999-01-01
A model is developed for the prediction of pressure effects in HVDC mass impregnated cables as a result of temperature changes. To test the model assumptions, experiments were performed in cable-like geometries. It is concluded that the model may predict the formation of gas cavities.
George, David L.; Iverson, Richard M.
2014-01-01
We evaluate a new depth-averaged mathematical model that is designed to simulate all stages of debris-flow motion, from initiation to deposition. A companion paper shows how the model’s five governing equations describe simultaneous evolution of flow thickness, solid volume fraction, basal pore-fluid pressure, and two components of flow momentum. Each equation contains a source term that represents the influence of state-dependent granular dilatancy. Here we recapitulate the equations and analyze their eigenstructure to show that they form a hyperbolic system with desirable stability properties. To solve the equations we use a shock-capturing numerical scheme with adaptive mesh refinement, implemented in an open-source software package we call D-Claw. As tests of D-Claw, we compare model output with results from two sets of large-scale debris-flow experiments. One set focuses on flow initiation from landslides triggered by rising pore-water pressures, and the other focuses on downstream flow dynamics, runout, and deposition. D-Claw performs well in predicting evolution of flow speeds, thicknesses, and basal pore-fluid pressures measured in each type of experiment. Computational results illustrate the critical role of dilatancy in linking coevolution of the solid volume fraction and pore-fluid pressure, which mediates basal Coulomb friction and thereby regulates debris-flow dynamics.
Using Topic Modeling and Text Embeddings to Predict Deleted Tweets
Energy Technology Data Exchange (ETDEWEB)
Potash, Peter J.; Bell, Eric B.; Harrison, Joshua J.
2016-02-29
Predictive models for tweet deletion have been a relatively unexplored area of Twitter-related computational research. We first approach the deletion of tweets as a spam detection problem, applying a small set of handcrafted features to improve upon the current state-of-the-art in predicting deleted tweets. Next, we apply our approach to a dataset of deleted tweets that better reflects the current deletion rate. Since tweets are deleted for reasons beyond just the presence of spam, we apply topic modeling and text embeddings in order to capture the semantic content of tweets that can lead to tweet deletion. Our goal is to create an effective model that has a low-dimensional feature space and is also language-independent. A lean model would be computationally advantageous when processing high volumes of Twitter data, which can reach 9,885 tweets per second. Our results show that a small set of spam-related features combined with word topics and character-level text embeddings provide the best F1 when trained with a random forest model. The highest precision of the deleted tweet class is achieved by a modification of paragraph2vec to capture author identity.
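The low-dimensional, language-independent feature space the authors aim for can be approximated generically by hashing character n-grams into a fixed-width count vector. This is a sketch of the idea only, not the paper's actual feature set; the hash width (64) and n-gram length (3) are arbitrary choices here.

```python
from zlib import crc32

def char_ngram_features(text, n=3, dim=64):
    """Hash character n-grams into a fixed-width count vector:
    low-dimensional and independent of any specific language."""
    vec = [0] * dim
    text = text.lower()
    for i in range(len(text) - n + 1):
        gram = text[i:i + n]
        vec[crc32(gram.encode("utf-8")) % dim] += 1
    return vec

features = char_ngram_features("free followers click here!!")
```

Vectors like this can be concatenated with topic proportions and fed to any off-the-shelf classifier such as a random forest.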
Prediction of peptide bonding affinity: kernel methods for nonlinear modeling
Bergeron, Charles; Sundling, C Matthew; Krein, Michael; Katt, Bill; Sukumar, Nagamani; Breneman, Curt M; Bennett, Kristin P
2011-01-01
This paper presents regression models obtained from a process of blind prediction of peptide binding affinity from provided descriptors for several distinct datasets as part of the 2006 Comparative Evaluation of Prediction Algorithms (COEPRA) contest. This paper finds that kernel partial least squares, a nonlinear partial least squares (PLS) algorithm, outperforms PLS, and that the incorporation of transferable atom equivalent features improves predictive capability.
Werneke, Mark W; Edmond, Susan; Deutscher, Daniel; Ward, Jason; Grigsby, David; Young, Michelle; McGill, Troy; McClenahan, Brian; Weinberg, Jon; Davidow, Amy L
2016-09-01
Study Design Retrospective cohort. Background Patient-classification subgroupings may be important prognostic factors explaining outcomes. Objectives To determine the effects of adding classification variables (McKenzie syndrome and pain patterns, including centralization and directional preference; Symptom Checklist Back Pain Prediction Model [SCL BPPM]; and the Fear-Avoidance Beliefs Questionnaire subscales of work and physical activity) to a baseline risk-adjusted model predicting functional status (FS) outcomes. Methods Consecutive patients completed a battery of questionnaires that gathered information on 11 risk-adjustment variables. Physical therapists trained in Mechanical Diagnosis and Therapy methods classified each patient by McKenzie syndrome and pain pattern. Functional status was assessed at discharge by patient-reported outcomes. Only patients with complete data were included. Risk of selection bias was assessed. Prediction of discharge FS was assessed using linear stepwise regression models, allowing 13 variables to enter the model. Significant variables were retained in subsequent models. Model power (R(2)) and beta coefficients for model variables were estimated. Results Two thousand sixty-six patients with lumbar impairments were evaluated. Of those, 994 (48%) had complete data and were included. Adding classification variables to the baseline model did not result in significant increases in R(2). McKenzie syndrome or pain pattern explained 2.8% and 3.0% of the variance, respectively. When pain pattern and SCL BPPM were added simultaneously, the overall model R(2) increased to 0.44. Although none of these increases in R(2) were significant, some classification variables were stronger predictors than some of the variables included in the baseline model. Conclusion The small added prognostic capabilities identified when combining McKenzie or pain-pattern classifications with the SCL BPPM classification did not significantly improve prediction of FS outcomes in this study. Additional research is
Predictive Model of Graphene Based Polymer Nanocomposites: Electrical Performance
Manta, Asimina; Gresil, Matthieu; Soutis, Constantinos
2017-04-01
In this computational work, a new simulation tool for the electrical response of graphene/polymer nanocomposites is developed based on the finite element method (FEM). The approach is built in a multi-scale, multi-physics format, consisting of a unit cell and a representative volume element (RVE). The FE methodology proves to be a reliable and flexible tool for simulating the electrical response without the complexity of raw programming code, while it is able to model any geometry, and thus the response of any component. This is supported by its ability, at a preliminary stage, to accurately predict the percolation threshold of experimental material structures and by its sensitivity to the effect of different manufacturing methodologies. In particular, the percolation thresholds of two material structures with the same constituents (PVDF/graphene) prepared by different methods were predicted, highlighting the effect of material preparation on the filler distribution, percolation probability and percolation threshold. The assumption of a random filler distribution proved adequate for modelling material structures obtained by solution methods, while a through-the-thickness normal particle distribution was more appropriate for nanocomposites produced by film hot-pressing. Moreover, the parametric analysis examines the effect of each parameter on the variables of the percolation law. These graphs could be used as a preliminary design tool for more effective material system manufacturing.
Comparisons of Faulting-Based Pavement Performance Prediction Models
Directory of Open Access Journals (Sweden)
Weina Wang
2017-01-01
Faulting prediction is at the core of concrete pavement maintenance and design. Highway agencies are often faced with low prediction accuracy, which leads to costly maintenance. Although many researchers have developed performance prediction models, prediction accuracy has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Three models, the multivariate nonlinear regression (MNLR) model, the artificial neural network (ANN) model, and the Markov chain (MC) model, are then tested and compared using a set of actual pavement survey data taken on an interstate highway with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and is not explicitly related to quantitative physical parameters. This paper then suggests that a further direction for developing performance prediction models is to combine the advantages and disadvantages of the different models to obtain better accuracy.
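A minimal sketch of the Markov chain (MC) idea: pavement sections move between discrete condition states according to a transition matrix, and the state distribution after several years is obtained by repeated multiplication. The three-state matrix below is hypothetical, not calibrated to the survey data in the paper.

```python
def evolve(dist, P, years):
    """Propagate a condition-state distribution through a
    Markov transition matrix P for the given number of years."""
    for _ in range(years):
        dist = [sum(dist[i] * P[i][j] for i in range(len(P)))
                for j in range(len(P))]
    return dist

# Hypothetical 3-state model: good, fair, poor (poor is absorbing).
P = [[0.80, 0.15, 0.05],
     [0.00, 0.85, 0.15],
     [0.00, 0.00, 1.00]]
state = evolve([1.0, 0.0, 0.0], P, years=5)
```

Transition probabilities in practice come from visual condition surveys, which is exactly the limitation the review notes: the states are not tied to quantitative physical parameters.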
Jack, Brady Michael; Lee, Ling; Yang, Kuay-Keng; Lin, Huann-shyang
2016-08-01
This study showcases the Science for Citizenship Model (SCM) as a new instructional methodology for presenting, to secondary students, science-related technology content related to the use of science in society not taught in the science curriculum, and a new approach for assessing the intercorrelations among three independent variables (benefits, risks, and trust) to predict the dependent variable of triggered interest in learning science. Utilizing a 50-minute instructional presentation on nanotechnology for citizenship, data were collected from 301 Taiwanese high school students. Structural equation modeling (SEM) and paired-samples t-tests were used to analyze the fitness of data to SCM and the extent to which a 50-minute class presentation of nanotechnology for citizenship affected students' awareness of benefits, risks, trust, and triggered interest in learning science. Results of SCM on pre-tests and post-tests revealed acceptable model fit to data and demonstrated that the strongest predictor of students' triggered interest in nanotechnology was their trust in science. Paired-samples t-test results on students' understanding of nanotechnology and their self-evaluated awareness of the benefits and risks of nanotechnology, trust in scientists, and interest in learning science revealed small but statistically significant differences between pre-test and post-test. These results provide evidence that a short 50-minute presentation on an emerging science not normally addressed within the traditional science curriculum had a significant yet limited impact on students' learning of nanotechnology in the classroom. Finally, we suggest why the results of this study may be important to science education instruction and research for understanding how the integration into classroom science education of short presentations of cutting-edge science and emerging technologies in support of the science for citizenship enterprise might be accomplished through future investigations.
Energy Technology Data Exchange (ETDEWEB)
Garisto, N.C.; Chambers, D.B.; Davis, M.W. [SENES Consultants Limited (Canada); Takala, J.M. [Cameco Corp., Saskatchewan (Canada); Krochak, D. [TAEM, (United States); Barsi, R. [Cogema Resources Inc., Saskatoon, Saskatchewan (Canada); Bartell, S.M. [SENES Oak Ridge, Oak Ridge, TN (United States)
1997-07-01
Considerable effort has been devoted to identifying and evaluating potential impacts from uranium mining on people and the environment. This includes field and laboratory experiments as well as pathways modelling and ecological risk assessment. Studies to date generally indicate that unless biota reside within a tailings waste management area, there is little incremental ecological impact (observed or calculated). Furthermore, there are no significant population-level or community-level impacts on biota in the vicinity of uranium mining and milling operations. The practical experience gained from these studies shows that it is advantageous to exploit the complementary nature of data and models in designing monitoring plans for potential ecological impacts. In particular, the effectiveness of environmental monitoring can be enhanced by providing a feedback loop from the modelling results to the monitoring plan. (author)
Prediction using patient comparison vs. modeling: a case study for mortality prediction.
Hoogendoorn, Mark; El Hassouni, Ali; Mok, Kwongyen; Ghassemi, Marzyeh; Szolovits, Peter
2016-08-01
Information in Electronic Medical Records (EMRs) can be used to generate accurate predictions for the occurrence of a variety of health states, which can contribute to more pro-active interventions. The very nature of EMRs does make the application of off-the-shelf machine learning techniques difficult. In this paper, we study two approaches to making predictions that have hardly been compared in the past: (1) extracting high-level (temporal) features from EMRs and building a predictive model, and (2) defining a patient similarity metric and predicting based on the outcome observed for similar patients. We analyze and compare both approaches on the MIMIC-II ICU dataset to predict patient mortality and find that the patient similarity approach does not scale well and results in a less accurate model (AUC of 0.68) compared to the modeling approach (0.84). We also show that mortality can be predicted within a median of 72 hours.
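The patient-similarity approach compared above can be illustrated with a plain k-nearest-neighbour predictor: the mortality estimate for a new patient is the outcome rate among the k most similar patients. The feature vectors and outcomes below are toy values, not MIMIC-II data.

```python
import math

def knn_predict(query, patients, outcomes, k=3):
    """Mortality probability = outcome rate among the k patients
    whose feature vectors are closest (Euclidean) to the query."""
    order = sorted(range(len(patients)),
                   key=lambda i: math.dist(query, patients[i]))
    return sum(outcomes[i] for i in order[:k]) / k

# Toy features (e.g. age decile, lactate) and outcomes; illustrative only.
patients = [(3, 1.0), (4, 1.2), (8, 4.0), (7, 3.5)]
outcomes = [0, 0, 1, 1]
risk = knn_predict((7.5, 3.8), patients, outcomes, k=2)
```

The sort over all stored patients for every query is also why this approach scales poorly compared with fitting a single predictive model once, as the paper reports.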
Directory of Open Access Journals (Sweden)
Mihaela Simionescu
2014-12-01
There are many types of econometric models used to predict the inflation rate, but in this study we used a Bayesian shrinkage combination approach. This methodology is used in order to improve prediction accuracy by including information that is not captured by the econometric models. Therefore, experts' forecasts are utilized as prior information; for Romania these predictions are provided by the Institute for Economic Forecasting (Dobrescu macromodel), the National Commission for Prognosis and the European Commission. The empirical results for Romanian inflation show the superiority of a fixed effects model compared to other types of econometric models such as VAR, Bayesian VAR, simultaneous equations models, dynamic models and log-linear models. The Bayesian combinations that used experts' predictions as priors, when the shrinkage parameter tends to infinity, improved the accuracy of all forecasts based on individual models, outperforming also zero- and equal-weights predictions and naïve forecasts.
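The shrinkage combination can be sketched as a weighted average of the model forecast and the expert prior, governed by a shrinkage parameter lam: lam = 0 keeps the pure model forecast, and as lam grows the combination collapses onto the prior, matching the limiting case discussed above. The inflation figures below are illustrative only, not the paper's data.

```python
def shrinkage_combine(model_forecast, expert_prior, lam):
    """Shrink a model-based forecast toward an expert prior;
    lam = 0 returns the model forecast, lam -> infinity the prior."""
    return (model_forecast + lam * expert_prior) / (1.0 + lam)

# Illustrative inflation forecasts (%), not actual Romanian data.
inflation_model, inflation_expert = 4.2, 3.5
combined = shrinkage_combine(inflation_model, inflation_expert, lam=3.0)
```

In practice lam would be chosen (or sent to its limit) based on out-of-sample accuracy, which is how the paper compares the combinations against zero- and equal-weights schemes.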
Directory of Open Access Journals (Sweden)
Kennedy Curtis E
2011-10-01
Abstract Background Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds who are at high risk of cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units, though, where the risk of an arrest is 10 times higher than in standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that allows time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. Methods We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Results Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. The steps include: (1) selecting candidate variables; (2) specifying measurement parameters; (3) defining data format; (4) defining time window duration and resolution; (5) calculating latent variables for candidate variables not directly measured; (6) calculating time series features as latent variables; (7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; (8)
Fuzzy predictive filtering in nonlinear economic model predictive control for demand response
DEFF Research Database (Denmark)
Santos, Rui Mirra; Zong, Yi; Sousa, Joao M. C.
2016-01-01
The performance of a model predictive controller (MPC) is highly correlated with the model's accuracy. This paper introduces an economic model predictive control (EMPC) scheme based on a nonlinear model, which uses a branch-and-bound tree search for solving the inherent non-convex optimization...... problem. Moreover, to reduce the computation time and improve the controller's performance, a fuzzy predictive filter is introduced. With the purpose of testing the developed EMPC, a simulation controlling the temperature levels of an intelligent office building (PowerFlexHouse), with and without fuzzy...
Predictive modeling and reducing cyclic variability in autoignition engines
Energy Technology Data Exchange (ETDEWEB)
Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob
2016-08-30
Methods and systems are provided for controlling a vehicle engine to reduce cycle-to-cycle combustion variation. A predictive model is applied to predict cycle-to-cycle combustion behavior of an engine based on observed engine performance variables. Conditions are identified, based on the predicted cycle-to-cycle combustion behavior, that indicate high cycle-to-cycle combustion variation. Corrective measures are then applied to prevent the predicted high cycle-to-cycle combustion variation.
DEFF Research Database (Denmark)
Rosthøj, Susanne; Keiding, Niels
2004-01-01
When studying a regression model measures of explained variation are used to assess the degree to which the covariates determine the outcome of interest. Measures of predictive accuracy are used to assess the accuracy of the predictions based on the covariates and the regression model. We give...... a detailed and general introduction to the two measures and the estimation procedures. The framework we set up allows for a study of the effect of misspecification on the quantities estimated. We also introduce a generalization to survival analysis....
Weber, Olaf; Willmann, Stefan; Bischoff, Hilmar; Li, Volkhart; Vakalopoulos, Alexandros; Lustig, Klemens; Hafner, Frank-Thorsten; Heinig, Roland; Schmeck, Carsten; Buehner, Klaus
2012-01-01
AIMS The purpose of this work was to support the prediction of a potentially effective dose for the CETP-inhibitor, BAY 60–5521, in humans. METHODS A combination of allometric scaling of the pharmacokinetics of the CETP-inhibitor BAY 60–5521 with pharmacodynamic studies in CETP-transgenic mice and in human plasma with physiologically-based pharmacokinetic (PBPK) modelling was used to support the selection of the first-in-man dose. RESULTS The PBPK approach predicts a greater extent of distribution for BAY 60–5521 in humans compared with the allometric scaling method as reflected by a larger predicted volume of distribution and longer elimination half-life. The combined approach led to an estimate of a potentially effective dose for BAY 60–5521 of 51 mg in humans. CONCLUSION The approach described in this paper supported the prediction of a potentially effective dose for the CETP-inhibitor BAY 60–5521 in humans. Confirmation of the dose estimate was obtained in a first-in-man study. PMID:21762205
A Model for Flooding Prediction in Circular Tubes
Institute of Scientific and Technical Information of China (English)
G.P.Celate; S.Banerjee; et al.
1992-01-01
The flooding phenomenon limits the stability and the flow of a liquid film falling along the walls of a channel in which a gas is flowing upwards. As is known, the entrainment effect can completely prevent the liquid from falling under its natural flow. The present work proposes a new mechanistic model for the prediction of the onset of flooding in vertical and inclined pipes in the presence of obstructions, as well as taking the viscosity effect into account. The good performance of the model under different geometrical conditions and for variable viscosities of the liquid component supports the hypothesis that the instability of a wavelike disturbance limits the countercurrent flow in a channel.
Exchange Rate Prediction using Neural – Genetic Model
Directory of Open Access Journals (Sweden)
MECHGOUG Raihane
2012-10-01
Neural networks have been used successfully for exchange rate forecasting. However, due to the large number of parameters to be estimated empirically, it is not a simple task to select the appropriate neural network architecture for an exchange rate forecasting problem. Researchers often overlook the effect of neural network parameters on the performance of neural network forecasting. The performance of a neural network is critically dependent on the learning algorithm, the network architecture and the choice of the control parameters. Even when a suitable setting of parameters (weights) can be found, the ability of the resulting network to generalize to data not seen during learning may be far from optimal. For these reasons it seems logical and attractive to apply genetic algorithms. Genetic algorithms may provide a useful tool for automating the design of neural networks. The empirical results on foreign exchange rate prediction indicate that the proposed hybrid model exhibits effectively improved accuracy when compared with some other time series forecasting models.
Intelligent predictive model of ventilating capacity of imperial smelt furnace
Institute of Scientific and Technical Information of China (English)
唐朝晖; 胡燕瑜; 桂卫华; 吴敏
2003-01-01
In order to determine the ventilating capacity of an imperial smelt furnace (ISF) and increase the output of lead, an intelligent modeling method based on grey theory and artificial neural networks (ANN) is proposed, in which the weight values in the integrated model can be adjusted automatically. An intelligent predictive model of the ventilating capacity of the ISF is established and analyzed with this method. The simulation results and industrial applications demonstrate that the predictive model is close to the real plant: the relative predictive error is 0.72%, which is 50% less than that of the single model, leading to a notable increase in the output of lead.
A Prediction Model of the Capillary Pressure J-Function
Xu, W. S.; Luo, P. Y.; Sun, L.; Lin, N.
2016-01-01
The capillary pressure J-function is a dimensionless measure of the capillary pressure of a fluid in a porous medium. The function was derived based on a capillary bundle model. However, the dependence of the J-function on the saturation Sw is not well understood. A prediction model for it is presented based on a capillary pressure model, and the J-function prediction model is a power function instead of an exponential or polynomial function. Relative permeability is calculated with the J-function prediction model, resulting in an easier calculation and results that are more representative. PMID:27603701
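Since the proposed J-function model is a power function, J(Sw) = a * Sw**b, its parameters can be recovered from data by ordinary least squares in log-log coordinates. The sketch below uses noise-free synthetic data rather than the paper's measurements, and the coefficients (0.5, -1.5) are arbitrary.

```python
import math

def fit_power_law(sw, j):
    """Least-squares fit of J(Sw) = a * Sw**b in log-log coordinates."""
    xs = [math.log(s) for s in sw]
    ys = [math.log(v) for v in j]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = math.exp(ybar - b * xbar)
    return a, b

# Synthetic, noise-free data generated from J = 0.5 * Sw**-1.5.
sw = [0.2, 0.4, 0.6, 0.8]
a, b = fit_power_law(sw, [0.5 * s ** -1.5 for s in sw])
```

A straight line in log-log space is also a quick diagnostic that a power law, rather than an exponential or polynomial, fits a measured J(Sw) curve.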
Adaptation of Predictive Models to PDA Hand-Held Devices
Directory of Open Access Journals (Sweden)
Lin, Edward J
2008-01-01
Prediction models using multiple logistic regression are appearing with increasing frequency in the medical literature. Problems associated with these models include the complexity of computations when applied in their pure form and lack of availability at the bedside. Personal digital assistant (PDA) hand-held devices equipped with spreadsheet software offer the clinician a readily available and easily applied means of applying predictive models at the bedside. The purposes of this article are to briefly review regression as a means of creating predictive models and to describe a method of choosing and adapting logistic regression models to emergency department (ED) clinical practice.
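The spreadsheet adaptation amounts to evaluating the fitted logistic model cell by cell: a linear predictor built from published coefficients, then the logistic transform. A minimal sketch follows; the intercept and coefficients below are hypothetical, not from any actual ED model.

```python
import math

def logistic_predict(intercept, coeffs, features):
    """Probability from published logistic-regression coefficients --
    the same arithmetic a spreadsheet cell would carry out."""
    z = intercept + sum(c * x for c, x in zip(coeffs, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical two-predictor model (e.g. age, abnormal-vital flag).
p = logistic_predict(-2.0, [0.04, 1.1], [50.0, 0.0])
```

Because the formula needs only multiplication, addition and one exponential, it runs comfortably in any spreadsheet, which is the article's point about bedside availability.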
A model to predict the power output from wind farms
Energy Technology Data Exchange (ETDEWEB)
Landberg, L. [Risø National Lab., Roskilde (Denmark)]
1997-12-31
This paper describes a model that can predict the power output from wind farms. To give examples of input, the model is applied to a wind farm in Texas. The predictions are generated from forecasts from the NGM model of NCEP. These predictions are made valid at individual sites (wind farms) by applying a matrix calculated by the sub-models of WAsP (Wind Atlas Analysis and Application Program). The actual wind farm production is calculated using the Risø PARK model. Because of the preliminary nature of the results, they will not be given. However, similar results from Europe will be given.
Modelling microbial interactions and food structure in predictive microbiology
Malakar, P.K.
2002-01-01
Keywords: modelling, dynamic models, microbial interactions, diffusion, microgradients, colony growth, predictive microbiology. Growth response of microorganisms in foods is a complex process. Innovations in food production and preservation techniques have resulted in adoption of new technologies
Predictive modeling of gingivitis severity and susceptibility via oral microbiota.
Huang, Shi; Li, Rui; Zeng, Xiaowei; He, Tao; Zhao, Helen; Chang, Alice; Bo, Cunpei; Chen, Jie; Yang, Fang; Knight, Rob; Liu, Jiquan; Davis, Catherine; Xu, Jian
2014-09-01
Predictive modeling of human disease based on the microbiota holds great potential yet remains challenging. Here, 50 adults underwent controlled transitions from naturally occurring gingivitis, to healthy gingivae (baseline), and to experimental gingivitis (EG). In diseased plaque microbiota, 27 bacterial genera changed in relative abundance and functional genes including 33 flagellar biosynthesis-related groups were enriched. Plaque microbiota structure exhibited a continuous gradient along the first principal component, reflecting transition from healthy to diseased states, which correlated with Mazza Gingival Index. We identified two host types with distinct gingivitis sensitivity. Our proposed microbial indices of gingivitis classified host types with 74% reliability, and, when tested on another 41-member cohort, distinguished healthy from diseased individuals with 95% accuracy. Furthermore, the state of the microbiota in naturally occurring gingivitis predicted the microbiota state and severity of subsequent EG (but not the state of the microbiota during the healthy baseline period). Because the effect of disease is greater than interpersonal variation in plaque, in contrast to the gut, plaque microbiota may provide advantages in predictive modeling of oral diseases.
Predicting Career Advancement with Structural Equation Modelling
Heimler, Ronald; Rosenberg, Stuart; Morote, Elsa-Sofia
2012-01-01
Purpose: The purpose of this paper is to use the authors' prior findings concerning basic employability skills in order to determine which skills best predict career advancement potential. Design/methodology/approach: Utilizing survey responses of human resource managers, the employability skills showing the largest relationships to career…
Preservation engineering assets developed from an oxidation predictive model
Directory of Open Access Journals (Sweden)
Coutelieris Frank A.
2016-01-01
A previously developed model, which effectively predicts the probability of olive oil reaching the end of its shelf-life within a certain time frame, was tested for its response when the convective diffusion of oxygen through the packaging material is taken into account. Darcy's law was used to correlate the packaging permeability with the oxygen flow through the packaging materials. Mass transport within the food-packaging system was considered transient, and the corresponding one-dimensional differential equations, along with appropriate initial and boundary conditions, were solved numerically. When the Peclet (Pe) number was used to validate the significance of the oxygen transport mechanism through the packaging, the model results confirmed the Arrhenius-type dependency of diffusion, where the slope of the line per material indicated its -Ea/R. Furthermore, Pe could not be correlated to the hexanal produced in samples stored under light. Photo-oxidation has a significant role in the oxidative degradation of olive oil, as confirmed by the shelf-life assessment test. The validity of the model for oxygen-diffusion-driven systems was also confirmed, and the predictive boundaries were set accordingly. The results indicate the value of applying such a self-assessment process to confirm the packaging selection for oxygen-sensitive foods via this model.
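The two quantities the analysis leans on are straightforward to compute: an Arrhenius-type diffusivity, whose ln D versus 1/T slope is -Ea/R, and the Peclet number comparing convective and diffusive oxygen transport. The numerical values below (D0, Ea, film thickness, velocity) are placeholders for illustration, not the paper's fitted parameters.

```python
import math

def arrhenius_diffusivity(d0, ea, temp_k, r=8.314):
    """D = D0 * exp(-Ea / (R*T)); ln D vs 1/T has slope -Ea/R."""
    return d0 * math.exp(-ea / (r * temp_k))

def peclet(velocity, length, diffusivity):
    """Pe = u*L/D: relative weight of convection vs diffusion."""
    return velocity * length / diffusivity

# Placeholder numbers: oxygen through a thin packaging film at 298 K.
d = arrhenius_diffusivity(d0=1e-6, ea=35e3, temp_k=298.0)
pe = peclet(velocity=1e-7, length=1e-4, diffusivity=d)
```

A small Pe indicates diffusion-dominated transport, the regime in which the shelf-life model above was validated.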
Hologram QSAR model for the prediction of human oral bioavailability.
Moda, Tiago L; Montanari, Carlos A; Andricopulo, Adriano D
2007-12-15
A drug intended for use in humans should have an ideal balance of pharmacokinetics and safety, as well as potency and selectivity. Unfavorable pharmacokinetics can negatively affect the clinical development of many otherwise promising drug candidates. A variety of in silico ADME (absorption, distribution, metabolism, and excretion) models are receiving increased attention due to a better appreciation that pharmacokinetic properties should be considered in early phases of the drug discovery process. Human oral bioavailability is an important pharmacokinetic property, which is directly related to the amount of drug available in the systemic circulation to exert pharmacological and therapeutic effects. In the present work, hologram quantitative structure-activity relationships (HQSAR) were performed on a training set of 250 structurally diverse molecules with known human oral bioavailability. The most significant HQSAR model (q(2)=0.70, r(2)=0.93) was obtained using atoms, bonds, connections, and chirality as fragment distinction. The predictive ability of the model was evaluated by an external test set containing 52 molecules not included in the training set, and the predicted values were in good agreement with the experimental values. The HQSAR model should be useful for the design of new drug candidates having increased bioavailability as well as in the process of chemical library design, virtual screening, and high-throughput screening.
A nonlinear regression model-based predictive control algorithm.
Dubay, R; Abu-Ayyad, M; Hernandez, J M
2009-04-01
This paper presents a unique approach for designing a nonlinear regression model-based predictive controller (NRPC) for single-input-single-output (SISO) and multi-input-multi-output (MIMO) processes that are common in industrial applications. The innovation of this strategy is that the controller structure allows nonlinear open-loop modeling to be conducted while closed-loop control is executed every sampling instant. Consequently, the system matrix is regenerated every sampling instant using a continuous function, providing a more accurate prediction of the plant. Computer simulations carried out on nonlinear plants demonstrate that the new approach is easily implemented and provides tight control. The proposed algorithm is also implemented on two real-time SISO applications, a DC motor and a plastic injection molding machine, and on a nonlinear MIMO thermal system comprising three temperature zones to be controlled with interacting effects. The experimental closed-loop responses of the proposed algorithm were compared to those of a multi-model dynamic matrix controller (MPC), with improved results for various set-point trajectories. Good disturbance rejection was attained, resulting in improved tracking of multi-set-point profiles in comparison to multi-model MPC.
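The receding-horizon idea underlying the NRPC, re-predicting the plant over a horizon each sampling instant and applying only the first optimal input, can be sketched for a scalar linear plant; this toy grid search stands in for the paper's nonlinear regression model and optimizer:

```python
def mpc_step(x, setpoint, a, b, horizon=10, candidates=None):
    """One receding-horizon step for the scalar plant x+ = a*x + b*u:
    pick the constant input over the horizon that minimizes the squared
    tracking error, and return only its first element (to be applied)."""
    if candidates is None:
        candidates = [i * 0.1 for i in range(-20, 21)]  # crude input grid
    best_u, best_cost = 0.0, float("inf")
    for u in candidates:
        xp, cost = x, 0.0
        for _ in range(horizon):
            xp = a * xp + b * u          # predicted plant update
            cost += (setpoint - xp) ** 2  # accumulated tracking error
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u
```

In a closed loop this function would be called at every sampling instant with the newly measured state, mirroring the paper's re-generation of the prediction each step.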
Prediction Model of Sewing Technical Condition by Grey Neural Network
Institute of Scientific and Technical Information of China (English)
DONG Ying; FANG Fang; ZHANG Wei-yuan
2007-01-01
The grey system theory and artificial neural network technology were applied to predict the sewing technical condition. Representative parameters, such as needle and stitch, were selected. The prediction model was established based on the mechanical properties of different fabrics measured with a KES instrument. Grey relational degree analysis was applied to choose the input parameters of the neural network. The results showed that the prediction model has good precision: the average relative error was 4.08% for needle and 4.25% for stitch.
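Grey relational degree analysis, used here to select the network inputs, ranks candidate input sequences by their closeness to a reference sequence; a minimal sketch of Deng's formulation (`rho` is the conventional distinguishing coefficient, and the sequences are assumed pre-normalized):

```python
def grey_relational_grade(reference, comparison, rho=0.5):
    """Grey relational grade between a reference sequence and one
    comparison sequence: mean of the pointwise grey relational
    coefficients. Higher grade = stronger relation."""
    deltas = [abs(r - c) for r, c in zip(reference, comparison)]
    d_min, d_max = min(deltas), max(deltas)
    if d_max == 0:          # identical sequences
        return 1.0
    coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in deltas]
    return sum(coeffs) / len(coeffs)
```

Candidate inputs would be ranked by grade against the target sequence, and the top-ranked ones fed to the neural network.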
Active diagnosis of hybrid systems - A model predictive approach
2009-01-01
A method for active diagnosis of hybrid systems is proposed. The main idea is to predict the future output of both the normal and faulty models of the system; at each time step an optimization problem is then solved with the objective of maximizing the difference between the predicted normal and faulty outputs, constrained by tolerable performance requirements. As in standard model predictive control, the first element of the optimal input is applied to the system and the whole procedure is repeated.
Evaluation of Fast-Time Wake Vortex Prediction Models
Proctor, Fred H.; Hamilton, David W.
2009-01-01
Current fast-time wake models are reviewed and three basic types are defined. Predictions from several of the fast-time models are compared. Previous statistical evaluations of the APA-Sarpkaya and D2P fast-time models are discussed. Root Mean Square errors between fast-time model predictions and Lidar wake measurements are examined for a 24 hr period at Denver International Airport. Shortcomings in current methodology for evaluating wake errors are also discussed.
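The root-mean-square error used above to compare fast-time model predictions against lidar wake measurements is the standard quantity:

```python
import math

def rmse(predicted, observed):
    """Root-mean-square error between paired prediction/measurement
    sequences (e.g. fast-time model output vs. lidar observations)."""
    n = len(predicted)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n)
```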
Comparison of Simple Versus Performance-Based Fall Prediction Models
Directory of Open Access Journals (Sweden)
Shekhar K. Gadkaree BS
2015-05-01
Full Text Available Objective: To compare the predictive ability of standard falls prediction models based on physical performance assessments with more parsimonious prediction models based on self-reported data. Design: We developed a series of fall prediction models progressing in complexity and compared the area under the receiver operating characteristic curve (AUC) across models. Setting: National Health and Aging Trends Study (NHATS), which surveyed a nationally representative sample of Medicare enrollees (age ≥65) at baseline (Round 1: 2011-2012) and 1-year follow-up (Round 2: 2012-2013). Participants: In all, 6,056 community-dwelling individuals participated in Rounds 1 and 2 of NHATS. Measurements: Primary outcomes were 1-year incidence of “any fall” and “recurrent falls.” Prediction models were compared and validated in development and validation sets, respectively. Results: A prediction model that included demographic information, self-reported problems with balance and coordination, and previous fall history was the most parsimonious model that optimized AUC for both any fall (AUC = 0.69, 95% confidence interval [CI] = [0.67, 0.71]) and recurrent falls (AUC = 0.77, 95% CI = [0.74, 0.79]) in the development set. Physical performance testing provided marginal additional predictive value. Conclusion: A simple clinical prediction model that does not include physical performance testing could facilitate routine, widespread falls risk screening in the ambulatory care setting.
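The AUC statistic used to compare the fall-prediction models equals the probability that a randomly chosen case (faller) receives a higher risk score than a randomly chosen control; a brute-force sketch of that rank interpretation:

```python
def auc(scores_pos, scores_neg):
    """AUC as the probability that a positive case outranks a negative
    one; ties count as 0.5 (the Mann-Whitney U interpretation)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

Perfect separation gives 1.0; a model no better than chance gives 0.5, which is why AUC = 0.69 and 0.77 represent modest and good discrimination, respectively.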
Testing and analysis of internal hardwood log defect prediction models
R. Edward. Thomas
2011-01-01
The severity and location of internal defects determine the quality and value of lumber sawn from hardwood logs. Models have been developed to predict the size and position of internal defects based on external defect indicator measurements. These models were shown to predict approximately 80% of all internal knots based on external knot indicators. However, the size...
Refining the Committee Approach and Uncertainty Prediction in Hydrological Modelling
Kayastha, N.
2014-01-01
Due to the complexity of hydrological systems, a single model may be unable to capture the full range of a catchment response and accurately predict the streamflows. The multi-modelling approach opens up possibilities for handling such difficulties and allows improving the predictive capability of models.
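The committee (multi-model) approach combines the outputs of several hydrological models, typically weighting each by its historical skill; a minimal sketch with hypothetical weights (the thesis itself refines how such weights are derived):

```python
def committee_prediction(model_outputs, weights):
    """Weighted committee of individual model predictions for one time
    step; weights could reflect each model's historical skill, e.g.
    inverse validation error."""
    total = sum(weights)
    return sum(w * y for w, y in zip(weights, model_outputs)) / total
```

With equal weights this reduces to the ensemble mean; skill-based weights let better models dominate the combined streamflow prediction.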
Modeling and Model Predictive Power and Rate Control of Wireless Communication Networks
Directory of Open Access Journals (Sweden)
Cunwu Han
2014-01-01
Full Text Available A novel power and rate control system model for wireless communication networks is presented, which includes uncertainties, input constraints, and time-varying delays in both state and control input. A robust delay-dependent model predictive power and rate control method is proposed, and the state feedback control law is obtained by solving an optimization problem that is derived by using linear matrix inequality (LMI techniques. Simulation results are given to illustrate the effectiveness of the proposed method.
Institute of Scientific and Technical Information of China (English)
钟伟民; 何国龙; 皮道映; 孙优贤
2005-01-01
A support vector machine (SVM) with a quadratic polynomial kernel function, used as the basis of a nonlinear one-step-ahead model predictive controller, is presented. The SVM-based predictive model is established with a black-box identification method. By solving a cubic equation in the feature space, an explicit predictive control law is obtained through the predictive control mechanism. The effectiveness of the controller is demonstrated on a recognized benchmark problem and on the control of a continuous stirred-tank reactor (CSTR). Simulation results show that the SVM-based predictive controller with a quadratic polynomial kernel function can be well applied to nonlinear systems, with good performance in following reference trajectories as well as in disturbance rejection.
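The predictive model inside such a controller is a kernel expansion over support vectors; the quadratic polynomial kernel and the one-step-ahead prediction can be sketched as follows (the trained coefficients `alphas` and `bias` here are hypothetical placeholders, not values from the paper):

```python
def poly2_kernel(x, z, c=1.0):
    """Quadratic polynomial kernel K(x, z) = (x . z + c)^2."""
    dot = sum(a * b for a, b in zip(x, z))
    return (dot + c) ** 2

def svm_predict(support_vectors, alphas, bias, x, c=1.0):
    """One-step-ahead SVM prediction: weighted sum of kernel values
    between the query point and the stored support vectors."""
    return sum(a * poly2_kernel(sv, x, c)
               for a, sv in zip(alphas, support_vectors)) + bias
```

Because the kernel is quadratic, the prediction is polynomial in the input, which is what makes the explicit control law obtainable by solving a cubic equation in the feature space.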
Energy Technology Data Exchange (ETDEWEB)
Salazar, Ramon B., E-mail: ramon@purdue.edu, E-mail: hilatikh@purdue.edu; Appenzeller, Joerg [Birck Nanotechnology Center, Purdue University, 1205 W. State Street, West Lafayette, Indiana 47907 (United States); Ilatikhameneh, Hesameddin, E-mail: ramon@purdue.edu, E-mail: hilatikh@purdue.edu; Rahman, Rajib; Klimeck, Gerhard [Network for Computational Nanotechnology, 207 S. Martin Jischke Drive, West Lafayette, Indiana 47907 (United States)
2015-10-28
A new compact modeling approach is presented which describes the full current-voltage (I-V) characteristic of high-performance (aggressively scaled-down) tunneling field-effect transistors (TFETs) based on homojunction direct-bandgap semiconductors. The model is based on an analytic description of two key features, which capture the main physical phenomena related to TFETs: (1) the potential profile from source to channel and (2) the elliptic curvature of the complex bands in the bandgap region. It is proposed to use 1D Poisson's equations in the source and the channel to describe the potential profile in homojunction TFETs. This makes it possible to quantify the impact of source/drain doping on device performance, an aspect usually ignored in TFET modeling but highly relevant in ultra-scaled devices. The compact model is validated by comparison with state-of-the-art quantum transport simulations using a 3D full-band atomistic approach based on non-equilibrium Green's functions (NEGF). It is shown that the model reproduces with good accuracy the data obtained from the simulations in all regions of operation: the on/off states and the n/p branches of conduction. This approach allows calculation of energy-dependent band-to-band tunneling currents in TFETs, a feature that provides deep insight into the underlying device physics. The simplicity and accuracy of the approach provide a powerful tool to explore in a quantitative manner how a wide variety of parameters (material-, size-, and/or geometry-dependent) impact TFET performance under any bias conditions. The proposed model thus presents a practical complement to computationally expensive simulations such as the 3D NEGF approach.
Predictive Multiscale Modeling of Nanocellulose Based Materials and Systems
Kovalenko, Andriy
2014-08-01
Cellulose Nanocrystals (CNC) is a renewable, biodegradable biopolymer with outstanding mechanical properties made from a highly abundant natural source, and is therefore very attractive as a reinforcing additive to replace petroleum-based plastics in biocomposite materials, foams, and gels. Large-scale applications of CNC are currently limited due to its low solubility in the non-polar organic solvents used in existing polymerization technologies. The solvation properties of CNC can be improved by chemical modification of its surface. Development of effective surface modifications has been rather slow because extensive chemical modifications destabilize the hydrogen-bonding network of cellulose and deteriorate the mechanical properties of CNC. We employ predictive multiscale theory, modeling, and simulation to gain fundamental insight into the effect of CNC surface modifications on hydrogen bonding, CNC crystallinity, solvation thermodynamics, and CNC compatibilization with existing polymerization technologies, so as to rationally design green nanomaterials with improved solubility in non-polar solvents, controlled liquid-crystal ordering, and optimized extrusion properties. An essential part of this multiscale modeling approach is the statistical-mechanical 3D-RISM-KH molecular theory of solvation, coupled with quantum mechanics, molecular mechanics, and multistep molecular dynamics simulation. The 3D-RISM-KH theory provides predictive modeling of both polar and non-polar solvents, solvent mixtures, and electrolyte solutions in a wide range of concentrations and thermodynamic states. It properly accounts for effective interactions in solution such as steric effects, hydrophobicity and hydrophilicity, hydrogen bonding, salt bridges, buffer, and co-solvent, and successfully predicts solvation effects and processes in bulk liquids, solvation layers at solid surfaces, and in pockets and other inner spaces of macromolecules and supramolecular assemblies. This methodology
Impact of modellers' decisions on hydrological a priori predictions
Holländer, H. M.; Bormann, H.; Blume, T.; Buytaert, W.; Chirico, G. B.; Exbrayat, J.-F.; Gustafsson, D.; Hölzel, H.; Krauße, T.; Kraft, P.; Stoll, S.; Blöschl, G.; Flühler, H.
2014-06-01
In practice, the catchment hydrologist is often confronted with the task of predicting discharge without having the records needed for calibration. Here, we report the discharge predictions of 10 modellers - using the model of their choice - for the man-made Chicken Creek catchment (6 ha, northeast Germany; Gerwin et al., 2009b), and we analyse how well they improved their predictions in three steps, with information added prior to each step. The modellers predicted the catchment's hydrological response in its initial phase without having access to the observed records. They used conceptually different physically based models, and their modelling experience differed largely. Hence, they encountered two problems: (i) simulating discharge for an ungauged catchment and (ii) using models developed for catchments that are not in a state of landscape transformation. The prediction exercise was organized in three steps: (1) for the first prediction the modellers received a basic data set describing the catchment to a degree somewhat more complete than usually available for a priori predictions of ungauged catchments; they did not obtain information on stream flow, soil moisture, or groundwater response and therefore had to guess the initial conditions; (2) before the second prediction they inspected the catchment on-site and discussed their first prediction attempt; (3) for their third prediction they were offered additional data by charging them pro forma with the costs of obtaining this additional information. Holländer et al. (2009) discussed the range of predictions obtained in step (1). Here, we detail the modellers' assumptions and decisions in accounting for the various processes. We document the prediction progress as well as the learning process resulting from the availability of added information. For the second and third steps, the progress in prediction quality is evaluated in relation to individual modelling experience and costs of
Econometric models for predicting confusion crop ratios
Umberger, D. E.; Proctor, M. H.; Clark, J. E.; Eisgruber, L. M.; Braschler, C. B. (Principal Investigator)
1979-01-01
Results for both the United States and Canada show that econometric models can provide estimates of confusion crop ratios that are more accurate than historical ratios. Whether these models can support the LACIE 90/90 accuracy criterion is uncertain. In the United States, experimenting with additional model formulations could provide improved models in some CRDs, particularly in winter wheat. Improved models may also be possible for the Canadian CDs. The more aggregated province/state models outperformed the individual CD/CRD models. This result was expected, partly because acreage statistics are based on sampling procedures, and the sampling precision declines from the province/state to the CD/CRD level. Declining sampling precision and the need to substitute province/state data for CD/CRD data introduced measurement error into the CD/CRD models.
PEEX Modelling Platform for Seamless Environmental Prediction
Baklanov, Alexander