Wessler, Benjamin S; Lai Yh, Lana; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S; Kent, David M
2015-07-01
Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease, numerous CPMs are available, although the extent of this literature is not well described. We conducted a systematic review for articles containing CPMs for cardiovascular disease published between January 1990 and May 2012. Cardiovascular disease includes coronary heart disease, heart failure, arrhythmias, stroke, venous thromboembolism, and peripheral vascular disease. We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year has increased steadily over time. Seven hundred seventeen (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions, including 215 CPMs for patients with coronary artery disease, 168 CPMs for population samples, and 79 models for patients with heart failure. There are 77 distinct index/outcome pairings. Of the de novo models in this database, 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. There is an abundance of CPMs available for a wide assortment of cardiovascular disease conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models, and the actual and potential clinical impact of this body of literature are poorly understood. © 2015 American Heart Association, Inc.
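Since the review reports that only 63% of de novo CPMs provide a c-statistic, a brief illustration of what that metric measures may help. The sketch below (plain Python, with invented toy risks and outcomes, unrelated to the review's database) computes the c-statistic as a concordance probability:

```python
def c_statistic(risks, outcomes):
    """Concordance (c-statistic): the probability that a randomly chosen
    case (outcome = 1) is assigned a higher predicted risk than a randomly
    chosen non-case (outcome = 0). Ties count as half-concordant."""
    cases = [r for r, y in zip(risks, outcomes) if y == 1]
    controls = [r for r, y in zip(risks, outcomes) if y == 0]
    pairs = concordant = 0.0
    for rc in cases:
        for rn in controls:
            pairs += 1
            if rc > rn:
                concordant += 1
            elif rc == rn:
                concordant += 0.5
    return concordant / pairs

# A model that ranks every case above every control is perfectly
# concordant (c = 1.0); random ranking gives c ≈ 0.5.
print(c_statistic([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))  # → 1.0
```

For large data sets this O(cases × controls) loop would be replaced by a rank-based computation, but the definition is the same.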
Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.
Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F
2013-04-01
In biomedical research, it is often of interest to characterize the biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference while making as few restrictive parametric assumptions as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates, but limited attention has been directed to the consequences of how that dependence is introduced. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models in one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.
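The weights-versus-locations distinction can be made concrete with a deliberately simplified two-component normal mixture. The component means, standard deviations, and logistic weight function below are invented for illustration and are far simpler than the discrete random probability measures the article studies:

```python
import math

def normal_pdf(x, mu, sd):
    """Density of N(mu, sd^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def predictive_weights_depend(x, cov):
    """Covariate enters the WEIGHTS: the components stay fixed at ±2,
    and the covariate shifts mass between them via a logistic link."""
    w = 1 / (1 + math.exp(-2 * (cov - 0.5)))
    return w * normal_pdf(x, 2.0, 0.5) + (1 - w) * normal_pdf(x, -2.0, 0.5)

def predictive_locations_depend(x, cov):
    """Covariate enters the LOCATIONS: the mixture stays 50/50 while the
    components slide with the covariate."""
    return 0.5 * normal_pdf(x, 2.0 * cov, 0.5) + 0.5 * normal_pdf(x, -2.0 * cov, 0.5)
```

At a covariate value of 1, the weight-dependent predictive density concentrates near +2, while the location-dependent one remains bimodal and symmetric — the kind of qualitative difference in predictive densities the article examines.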
Nonlinear mixed-effects modeling: individualization and prediction.
Olofsen, Erik; Dinges, David F; Van Dongen, Hans P A
2004-03-01
The development of biomathematical models for the prediction of fatigue and performance relies on statistical techniques to analyze experimental data and model simulations. Statistical models of empirical data have adjustable parameters with a priori unknown values. Interindividual variability in estimates of those values requires a form of smoothing. This traditionally consists of averaging observations across subjects, or fitting a model to the data of individual subjects first and subsequently averaging the parameter estimates. However, the standard errors of the parameter estimates are assessed inaccurately by such averaging methods. The reason is that intra- and inter-individual variabilities are intertwined. They can be separated by mixed-effects modeling in which model predictions are not only determined by fixed effects (usually constant parameters or functions of time) but also by random effects, describing the sampling of subject-specific parameter values from probability distributions. By estimating the parameters of the distributions of the random effects, mixed-effects models can describe experimental observations involving multiple subjects properly (i.e., yielding correct estimates of the standard errors) and parsimoniously (i.e., estimating no more parameters than necessary). Using a Bayesian approach, mixed-effects models can be "individualized" as observations are acquired that capture the unique characteristics of the individual at hand. Mixed-effects models, therefore, have unique advantages in research on human neurobehavioral functions, which frequently show large inter-individual differences. To illustrate this we analyzed laboratory neurobehavioral performance data acquired during sleep deprivation, using a nonlinear mixed-effects model. The results serve to demonstrate the usefulness of mixed-effects modeling for data-driven development of individualized predictive models of fatigue and performance.
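The "individualization" step the authors describe — updating subject-specific parameters as observations accrue — can be sketched with the simplest conjugate case: a normal population distribution for a random effect and a normal likelihood for the subject's data. This is a toy illustration, not the nonlinear model fitted in the paper:

```python
def individualize(pop_mean, pop_var, obs_mean, obs_var):
    """Posterior (shrinkage) estimate of a subject-specific parameter:
    a precision-weighted average of the population mean (the random-effect
    prior) and the subject's own data. As subject data accrue and obs_var
    shrinks, the estimate moves from the population toward the individual."""
    w = (1 / obs_var) / (1 / obs_var + 1 / pop_var)
    post_mean = w * obs_mean + (1 - w) * pop_mean
    post_var = 1 / (1 / obs_var + 1 / pop_var)
    return post_mean, post_var

# Equal precisions: the estimate lands halfway between population (10)
# and subject (14); the posterior variance is halved.
m, v = individualize(10.0, 4.0, 14.0, 4.0)  # → (12.0, 2.0)
```

The same precision-weighting logic underlies empirical-Bayes prediction in nonlinear mixed-effects software, where it is applied to the full parameter vector rather than a single scalar.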
Modeling the prediction of business intelligence system effectiveness.
Weng, Sung-Shun; Yang, Ming-Hsien; Koo, Tian-Lih; Hsiao, Pei-I
2016-01-01
Although business intelligence (BI) technologies are continually evolving, the capability to apply BI technologies has become an indispensable resource for enterprises running in today's complex, uncertain and dynamic business environment. This study performed pioneering work by constructing models and rules for the prediction of business intelligence system effectiveness (BISE) in relation to the implementation of BI solutions. For enterprises, effectively managing the critical attributes that determine BISE, and developing prediction models with a set of rules for self-evaluation of the effectiveness of BI solutions, is necessary to improve BI implementation and ensure its success. The main study findings identified the critical prediction indicators of BISE that are important to forecasting BI performance, and highlighted five classification and prediction rules of BISE derived from decision tree structures, as well as a refined regression prediction model with four critical prediction indicators constructed by logistic regression analysis. These results can help enterprises improve BISE while effectively managing BI solution implementation, and offer a theoretical contribution for academics.
Modelling the electrical properties of concrete for shielding effectiveness prediction
International Nuclear Information System (INIS)
Sandrolini, L; Reggiani, U; Ogunsola, A
2007-01-01
Concrete is a porous, heterogeneous material whose abundant use in numerous applications demands a detailed understanding of its electrical properties. Besides experimental measurements, material theoretical models can be useful to investigate its behaviour with respect to frequency, moisture content or other factors. These models can be used in electromagnetic compatibility (EMC) to predict the shielding effectiveness of a concrete structure against external electromagnetic waves. This paper presents the development of a dispersive material model for concrete out of experimental measurement data to take account of the frequency dependence of concrete's electrical properties. The model is implemented into a numerical simulator and compared with the classical transmission-line approach in shielding effectiveness calculations of simple concrete walls of different moisture content. The comparative results show good agreement in all cases; a possible relation between shielding effectiveness and the electrical properties of concrete and the limits of the proposed model are discussed.
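As a rough illustration of the classical transmission-line approach mentioned above, the following sketch computes the plane-wave shielding effectiveness of a single homogeneous slab from its complex permittivity. The material values are assumptions for moist concrete, not the measured data or the dispersive model developed in the paper:

```python
import cmath
import math

EPS0 = 8.854e-12                      # vacuum permittivity, F/m
MU0 = 4e-7 * math.pi                  # vacuum permeability, H/m
Z0 = math.sqrt(MU0 / EPS0)            # free-space impedance, ≈ 376.7 Ω

def shielding_effectiveness(freq, thickness, eps_r, sigma):
    """Plane-wave shielding effectiveness (dB) of a homogeneous slab via
    the transmission-line formulation (includes reflection, absorption,
    and multiple-reflection terms). eps_r and sigma are the slab's
    relative permittivity and conductivity at this frequency."""
    w = 2 * math.pi * freq
    eps_c = EPS0 * eps_r - 1j * sigma / w        # complex permittivity
    gamma = 1j * w * cmath.sqrt(MU0 * eps_c)     # propagation constant
    zw = cmath.sqrt(MU0 / eps_c)                 # wave impedance in slab
    t = thickness
    num = (zw + Z0) ** 2 * cmath.exp(gamma * t) \
        - (zw - Z0) ** 2 * cmath.exp(-gamma * t)
    return 20 * math.log10(abs(num / (4 * zw * Z0)))

# 20 cm wall, eps_r = 6, sigma = 0.05 S/m (assumed moist concrete), 100 MHz:
se = shielding_effectiveness(100e6, 0.20, 6.0, 0.05)
```

Raising the assumed conductivity (wetter concrete) increases the predicted shielding, consistent with the moisture dependence the paper investigates.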
Effect of misreported family history on Mendelian mutation prediction models.
Katki, Hormuzd A
2006-06-01
People with familial history of disease often consult with genetic counselors about their chance of carrying mutations that increase disease risk. To aid them, genetic counselors use Mendelian models that predict whether the person carries deleterious mutations based on their reported family history. Such models rely on accurate reporting of each member's diagnosis and age of diagnosis, but this information may be inaccurate. Commonly encountered errors in family history can significantly distort predictions, and thus can alter the clinical management of people undergoing counseling, screening, or genetic testing. We derive general results about the distortion in the carrier probability estimate caused by misreported diagnoses in relatives. We show that the Bayes factor that channels all family history information has a convenient and intuitive interpretation. We focus on the ratio of the carrier odds given correct diagnosis versus given misreported diagnosis to measure the impact of errors. We derive the general form of this ratio and approximate it in realistic cases. Misreported age of diagnosis usually causes less distortion than misreported diagnosis. This is the first systematic quantitative assessment of the effect of misreported family history on mutation prediction. We apply the results to the BRCAPRO model, which predicts the risk of carrying a mutation in the breast and ovarian cancer genes BRCA1 and BRCA2.
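The central quantities — the Bayes factor that channels the family-history information, and the odds ratio used to measure distortion — can be sketched as follows. The numeric Bayes factors are hypothetical, not values from BRCAPRO:

```python
def carrier_posterior(prior, bf):
    """Posterior carrier probability from the prior probability and the
    Bayes factor channelling the family history:
    posterior odds = BF × prior odds."""
    odds = bf * prior / (1 - prior)
    return odds / (1 + odds)

def odds_ratio_misreport(bf_correct, bf_misreported):
    """Ratio of carrier odds given the correct vs the misreported
    diagnosis: the paper's measure of distortion. Values far from 1
    indicate that the misreport materially changes the prediction."""
    return bf_correct / bf_misreported

# Hypothetical example: a correctly reported affected relative multiplies
# the carrier odds by 5; the misreported history only by 1.2.
p_true = carrier_posterior(0.01, 5.0)
p_mis = carrier_posterior(0.01, 1.2)
```

Because family history acts only through the Bayes factor, the distortion from misreporting is fully captured by this odds ratio, independent of the prior.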
Use of nonlinear dose-effect models to predict consequences
International Nuclear Information System (INIS)
Seiler, F.A.; Alvarez, J.L.
1996-01-01
The linear dose-effect relationship was introduced as a model for the induction of cancer from exposure to nuclear radiation. Subsequently, it has been used by analogy to assess the risk of chemical carcinogens also. Recently, however, the model for radiation carcinogenesis has come increasingly under attack because its calculations contradict the epidemiological data, such as cancer in atomic bomb survivors. Even so, its proponents vigorously defend it, often with arguments that are not purely scientific but a mix of scientific, societal, and political considerations. At least in part, the resilience of the linear model is due to two convenient properties that are exclusive to linearity: First, the risk of an event is determined solely by the event dose; second, the total risk of a population group depends only on the total population dose. In reality, the linear model has been conclusively falsified; i.e., it has been shown to make wrong predictions, and once this fact is generally realized, the scientific method calls for a new paradigm model. As all alternative models are by necessity nonlinear, all the convenient properties of the linear model are invalid, and calculational procedures have to be used that are appropriate for nonlinear models.
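The second "convenient property" — that total population risk depends only on collective dose — holds for the linear model but fails for any nonlinear one. A minimal numeric sketch (the coefficient values are arbitrary illustrations, not fitted risk coefficients):

```python
def risk_linear(dose, alpha=5e-2):
    """Linear no-threshold: risk proportional to dose alone."""
    return alpha * dose

def risk_linear_quadratic(dose, alpha=5e-2, beta=1e-2):
    """A common nonlinear alternative (linear-quadratic): risk per unit
    dose now depends on the dose itself, so total population risk no
    longer depends on collective dose only."""
    return alpha * dose + beta * dose ** 2

# Two populations with the same collective dose (100 person-Gy):
# 100 people at 1 Gy vs 1000 people at 0.1 Gy.
lin_a = 100 * risk_linear(1.0)
lin_b = 1000 * risk_linear(0.1)          # equal to lin_a: linearity
lq_a = 100 * risk_linear_quadratic(1.0)
lq_b = 1000 * risk_linear_quadratic(0.1)  # differs from lq_a
```

Under the linear model the two populations carry identical total risk; under the nonlinear model the concentrated exposure carries more, which is exactly why nonlinear consequence calculations must track the dose distribution, not just its total.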
Modeling for prediction of restrained shrinkage effect in concrete repair
International Nuclear Information System (INIS)
Yuan Yingshu; Li Guo; Cai Yue
2003-01-01
A general model of autogenous shrinkage caused by chemical reaction (chemical shrinkage) is developed by means of Arrhenius' law and a degree of chemical reaction. Models of tensile creep and relaxation modulus are built based on a viscoelastic, three-element model. Tests of free shrinkage and tensile creep were carried out to determine some coefficients in the models. Two-dimensional FEM analysis based on the models and other constitutive relations can predict the development of tensile strength and cracking. Three groups of patch-repaired beams were designed for analysis and testing. The prediction from the analysis shows agreement with the test results. The cracking mechanism after repair is discussed.
ALE: Additive Latent Effect Models for Grade Prediction
Ren, Zhiyun; Ning, Xia; Rangwala, Huzefa
2018-01-01
The past decade has seen a growth in the development and deployment of educational technologies for assisting college-going students in choosing majors, selecting courses and acquiring feedback based on past academic performance. Grade prediction methods seek to estimate a grade that a student may achieve in a course that she may take in the future (e.g., next term). Accurate and timely prediction of students' academic grades is important for developing effective degree planners and early war...
Effects of uncertainty in model predictions of individual tree volume on large area volume estimates
Ronald E. McRoberts; James A. Westfall
2014-01-01
Forest inventory estimates of tree volume for large areas are typically calculated by adding model predictions of volumes for individual trees. However, the uncertainty in the model predictions is generally ignored with the result that the precision of the large area volume estimates is overestimated. The primary study objective was to estimate the effects of model...
Modelling the Effects of a Predictable Money Supply of Bitcoin
Directory of Open Access Journals (Sweden)
Jakub Jedlinský
2017-12-01
The paper examines effects of a predefined and immutable money supply using a simulation performed in Minsky. It uses the cryptocurrency Bitcoin as an example and compares its settings and outcomes with the euro, a credit-based fiat currency. Minsky is a specialized software package for creating SFC (stock-flow consistent) economic models; it operates in continuous time. Unlike the euro, Bitcoin is a non-liability currency: it is not issued against debt and it does not allow a fiduciary issue. The study examines the economy of the EU as a whole, focusing on its monetary system, using Eurostat data. It then changes the rules of the system so that they comply with the rules of Bitcoin's protocol. The performed simulations show different effects of these monetary settings on wealth distribution among particular groups of economic subjects, as well as on the stability of the economy as a whole after some time has passed.
Romañach, Stephanie; Watling, James I.; Fletcher, Robert J.; Speroterra, Carolina; Bucklin, David N.; Brandt, Laura A.; Pearlstine, Leonard G.; Escribano, Yesenia; Mazzotti, Frank J.
2014-01-01
Climate change poses new challenges for natural resource managers. Predictive modeling of species–environment relationships using climate envelope models can enhance our understanding of climate change effects on biodiversity, assist in assessment of invasion risk by exotic organisms, and inform life-history understanding of individual species. While increasing interest has focused on the role of uncertainty in future conditions on model predictions, models also may be sensitive to the initial conditions on which they are trained. Although climate envelope models are usually trained using data on contemporary climate, we lack systematic comparisons of model performance and predictions across alternative climate data sets available for model training. Here, we seek to fill that gap by comparing variability in predictions between two contemporary climate data sets to variability in spatial predictions among three alternative projections of future climate. Overall, correlations between monthly temperature and precipitation variables were very high for both contemporary and future data. Model performance varied across algorithms, but not between two alternative contemporary climate data sets. Spatial predictions varied more among alternative general-circulation models describing future climate conditions than between contemporary climate data sets. However, we did find that climate envelope models with low Cohen's kappa scores made more discrepant spatial predictions between climate data sets for the contemporary period than did models with high Cohen's kappa scores. We suggest conservation planners evaluate multiple performance metrics and be aware of the importance of differences in initial conditions for spatial predictions from climate envelope models.
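Cohen's kappa — the performance metric the authors flag as diagnostic of discrepant spatial predictions — is chance-corrected agreement between two binary maps. A minimal sketch for presence/absence data (toy inputs, not the study's species data):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two binary presence/absence vectors, e.g. a
    climate-envelope prediction vs observed occurrences: observed
    agreement corrected for the agreement expected by chance."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n      # observed agreement
    pa1 = sum(a) / n                                # prevalence in map a
    pb1 = sum(b) / n                                # prevalence in map b
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)          # chance agreement
    return (po - pe) / (1 - pe)

# Identical maps give kappa = 1; completely opposite maps give -1.
print(cohens_kappa([1, 0, 1, 0], [1, 0, 1, 0]))  # → 1.0
```

Models with kappa near zero agree with observations little better than chance, which is consistent with the study's finding that low-kappa models were also the least stable across climate data sets.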
K. S. Reddy; P Karthikeyan
2010-01-01
A model to predict the effective thermal conductivity of heterogeneous materials is proposed based on unit cell approach. The model is combined with four fundamental effective thermal conductivity models (Parallel, Series, Maxwell-Eucken-I, and Maxwell-Eucken-II) to evolve a unifying equation for the estimation of effective thermal conductivity of porous and nonporous food materials. The effect of volume fraction (ν) on the structure composition factor (ψ) of the food materials is studied. Th...
Bekele, Rahel; McPherson, Maggie
2011-01-01
This research work presents a Bayesian Performance Prediction Model that was created in order to determine the strength of personality traits in predicting the level of mathematics performance of high school students in Addis Ababa. It is an automated tool that can be used to collect information from students for the purpose of effective group…
Refinement of the Arc-Habcap model to predict habitat effectiveness for elk
Lakhdar Benkobi; Mark A. Rumble; Gary C. Brundige; Joshua J. Millspaugh
2004-01-01
Wildlife habitat modeling is increasingly important for managers who need to assess the effects of land management activities. We evaluated the performance of a spatially explicit deterministic habitat model (Arc-Habcap) that predicts habitat effectiveness for elk. We used five years of radio-telemetry locations of elk from Custer State Park (CSP), South Dakota, to...
The effects of model and data complexity on predictions from species distributions models
DEFF Research Database (Denmark)
García-Callejas, David; Bastos, Miguel
2016-01-01
by their geometrical properties. Tests involved analysis of models' ability to predict virtual species distributions in the same region and the same time as used for training the models, and to project distributions in different times under climate change. Of the eight species distribution models analyzed five (Random...
Directory of Open Access Journals (Sweden)
Dr. Kamal Mohammed Alhendawi
2018-02-01
Information systems (IS) assessment studies still rely on traditional tools such as questionnaires to evaluate dependent variables, especially the effectiveness of systems. Artificial neural networks (ANNs) have recently been accepted as an effective alternative for modeling complicated systems and are widely used for forecasting, yet very little is known about their use in predicting IS effectiveness. This study is therefore among the few to investigate the efficiency and capability of ANNs for forecasting user perceptions of IS effectiveness; MATLAB was used to build and train the neural network model. A dataset of 175 subjects collected from an international organization was used for ANN learning, where each subject consists of 6 features (5 quality factors as inputs and one Boolean output); 75% of the subjects were used in the training phase. The results provide evidence that ANN models have reasonable accuracy in forecasting IS effectiveness. For prediction, ANNs with PURELIN (ANNP) and TANSIG (ANNTS) transfer functions were used. Both models give reasonable predictions, but the accuracy of the ANNTS model is better than that of the ANNP model (88.6% and 70.4%, respectively). As the study proposes a new model for predicting IS dependent variables, it could save the considerable cost of sample data collection in quantitative studies in science, management, education, the arts, and other fields.
Directory of Open Access Journals (Sweden)
K. S. Reddy
2010-01-01
A model to predict the effective thermal conductivity of heterogeneous materials is proposed based on a unit cell approach. The model is combined with four fundamental effective thermal conductivity models (Parallel, Series, Maxwell-Eucken-I, and Maxwell-Eucken-II) to evolve a unifying equation for the estimation of effective thermal conductivity of porous and nonporous food materials. The effect of volume fraction (ν) on the structure composition factor (ψ) of the food materials is studied. The models are compared with the experimental data of various foods at the initial freezing temperature. The effective thermal conductivity estimated by the Maxwell-Eucken-I + Present model shows good agreement with the experimental data, with a minimum average deviation of ±8.66%; the Series + Present model shows the maximum deviation of ±42.76%. The combined models have advantages over other empirical and semiempirical models.
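The four fundamental models combined in the paper have simple closed forms. A sketch (with assumed, not measured, conductivities for a water phase dispersed in a solid food matrix) shows the Maxwell-Eucken estimate falling between the Series and Parallel bounds:

```python
def k_parallel(k1, k2, v2):
    """Parallel model (upper bound); v2 = volume fraction of phase 2."""
    return (1 - v2) * k1 + v2 * k2

def k_series(k1, k2, v2):
    """Series model (lower bound)."""
    return 1.0 / ((1 - v2) / k1 + v2 / k2)

def k_maxwell_eucken(k_cont, k_disp, v_disp):
    """Maxwell-Eucken model with `k_disp` dispersed in a continuous phase
    `k_cont`; ME-I and ME-II differ only in which phase is continuous."""
    num = k_cont * (2 * k_cont + k_disp - 2 * v_disp * (k_cont - k_disp))
    den = 2 * k_cont + k_disp + v_disp * (k_cont - k_disp)
    return num / den

# Assumed values: solid matrix 0.26 W/m·K, water 0.57 W/m·K, 30% water.
bounds_ok = (k_series(0.26, 0.57, 0.3)
             <= k_maxwell_eucken(0.26, 0.57, 0.3)
             <= k_parallel(0.26, 0.57, 0.3))
```

Any unifying equation of the kind the paper proposes must likewise return estimates inside the Series-Parallel envelope for physically meaningful inputs.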
An Analytical Model for Fatigue Crack Propagation Prediction with Overload Effect
Directory of Open Access Journals (Sweden)
Shan Jiang
2014-01-01
In this paper a theoretical model was developed to predict fatigue crack growth behavior under constant amplitude loading with a single overload. In the proposed model, crack growth retardation was accounted for by using crack closure and the plastic zone. The virtual crack annealing model, modified by the Bauschinger effect, was used to calculate the crack closure level outside the retardation effect region, and the Dugdale plastic zone model was employed to estimate the size of the retardation effect region. An equation was developed to calculate the variation of crack closure within the retardation region. Model validation was performed on D16 aluminum alloy and 350WT steel specimens subjected to constant amplitude load with single or multiple overloads. The predictions of the proposed model were contrasted with experimental data, and fairly good agreement was observed.
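A much simpler stand-in for the paper's retardation mechanism is a Paris-law integration with a scalar retardation factor applied inside the overload-affected region (Wheeler-style). The constants below are illustrative, not fitted to the D16 or 350WT data:

```python
import math

def paris_growth(a0, cycles, d_stress, C=1e-11, m=3.0, retard=1.0):
    """Cycle-by-cycle Paris-law integration for a centre crack of
    half-length a (m), with ΔK = Δσ·sqrt(πa) in MPa·√m. `retard` < 1
    slows growth, mimicking post-overload retardation; the paper models
    this effect mechanistically via crack closure instead."""
    a = a0
    for _ in range(cycles):
        dk = d_stress * math.sqrt(math.pi * a)  # stress intensity range
        a += retard * C * dk ** m               # Paris law increment
    return a

# 1 mm initial crack, 10,000 cycles at Δσ = 100 MPa, with and without
# a 50% retardation factor:
a_plain = paris_growth(0.001, 10_000, 100.0)
a_retarded = paris_growth(0.001, 10_000, 100.0, retard=0.5)
```

The retarded run grows less over the same cycle count, the qualitative behavior the crack-closure model reproduces quantitatively.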
Meta-analysis of choice set generation effects on route choice model estimates and predictions
DEFF Research Database (Denmark)
Prato, Carlo Giacomo
2012-01-01
Large scale applications of behaviorally realistic transport models pose several challenges to transport modelers on both the demand and the supply sides. On the supply side, path-based solutions to the user assignment equilibrium problem help modelers in enhancing route choice behavior modeling, but require them to generate choice sets by selecting a path generation technique and its parameters according to personal judgments. This paper proposes a methodology and an experimental setting to provide general indications about objective judgments for effective route choice set generation. The generated choice sets are applied for model estimation and the results are compared to the 'true model estimates'. Last, predictions from the simulation of models estimated with objective choice sets are compared to the 'postulated predicted routes'. A meta-analytical approach allows synthesizing the effect of these judgments.
Gaussian covariance graph models accounting for correlated marker effects in genome-wide prediction.
Martínez, C A; Khare, K; Rahman, S; Elzo, M A
2017-10-01
Several statistical models used in genome-wide prediction assume uncorrelated marker allele substitution effects, but it is known that these effects may be correlated. In statistics, graphical models have been identified as a useful tool for covariance estimation in high-dimensional problems, and it is an area that has recently experienced a great expansion. In Gaussian covariance graph models (GCovGM), the joint distribution of a set of random variables is assumed to be Gaussian and the pattern of zeros of the covariance matrix is encoded in terms of an undirected graph G. In this study, methods adapting the theory of GCovGM to genome-wide prediction were developed (Bayes GCov, Bayes GCov-KR and Bayes GCov-H). In simulated data sets, improvements in correlation between phenotypes and predicted breeding values and in accuracies of predicted breeding values were found. Our models account for correlation of marker effects and can accommodate general covariance structures, as opposed to models proposed in previous studies, which consider spatial correlation only. In addition, they allow incorporation of biological information in the prediction process through its use when constructing graph G, and their extension to the multi-allelic loci case is straightforward. © 2017 Blackwell Verlag GmbH.
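The defining feature of a GCovGM — zeros of the covariance matrix encoded by an undirected graph G — can be sketched directly. The entries below are placeholder values; a real application must also ensure the resulting matrix is positive definite:

```python
def graph_covariance(n, edges, diag=1.0, offdiag=0.3):
    """Covariance matrix of marker effects under a Gaussian covariance
    graph model: entry (i, j) is nonzero only if {i, j} is an edge of
    the undirected graph G (the diagonal is always free). Biological
    information, e.g. markers in the same gene, determines `edges`."""
    S = [[0.0] * n for _ in range(n)]
    for i in range(n):
        S[i][i] = diag
    for i, j in edges:
        S[i][j] = S[j][i] = offdiag
    return S

# Markers 0-1 and 2-3 correlated (say, shared genes); no cross-correlation.
S = graph_covariance(4, [(0, 1), (2, 3)])
```

Estimation in the Bayes GCov methods then amounts to learning the nonzero entries (and possibly G itself) from data rather than fixing them as here.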
A Unified Model of Performance for Predicting the Effects of Sleep and Caffeine
Ramakrishnan, Sridhar; Wesensten, Nancy J.; Kamimori, Gary H.; Moon, James E.; Balkin, Thomas J.; Reifman, Jaques
2016-01-01
Study Objectives: Existing mathematical models of neurobehavioral performance cannot predict the beneficial effects of caffeine across the spectrum of sleep loss conditions, limiting their practical utility. Here, we closed this research gap by integrating a model of caffeine effects with the recently validated unified model of performance (UMP) into a single, unified modeling framework. We then assessed the accuracy of this new UMP in predicting performance across multiple studies. Methods: We hypothesized that the pharmacodynamics of caffeine vary similarly during both wakefulness and sleep, and that caffeine has a multiplicative effect on performance. Accordingly, to represent the effects of caffeine in the UMP, we multiplied a dose-dependent caffeine factor (which accounts for the pharmacokinetics and pharmacodynamics of caffeine) to the performance estimated in the absence of caffeine. We assessed the UMP predictions in 14 distinct laboratory- and field-study conditions, including 7 different sleep-loss schedules (from 5 h of sleep per night to continuous sleep loss for 85 h) and 6 different caffeine doses (from placebo to repeated 200 mg doses to a single dose of 600 mg). Results: The UMP accurately predicted group-average psychomotor vigilance task performance data across the different sleep loss and caffeine conditions. Overall, accounting for the effects of caffeine resulted in improved predictions (after caffeine consumption) by up to 70%. Conclusions: The UMP provides the first comprehensive tool for accurate selection of combinations of sleep schedules and caffeine countermeasure strategies to optimize neurobehavioral performance. Citation: Ramakrishnan S, Wesensten NJ, Kamimori GH, Moon JE, Balkin TJ, Reifman J. A unified model of performance for predicting the effects of sleep and caffeine. SLEEP 2016;39(10):1827–1841. PMID:27397562
The Predictive Effect of Big Five Factor Model on Social Reactivity ...
African Journals Online (AJOL)
The Predictive Effect of Big Five Factor Model on Social Reactivity among Adolescents in Cross River State, Nigeria: Personality Assessment and Basis for Counselling.
Directory of Open Access Journals (Sweden)
Fitri Yakub
2016-01-01
We present a comparative study of model predictive control approaches for two-wheel steering, four-wheel steering, and a combination of two-wheel steering with direct yaw moment control manoeuvres for path-following control in autonomous vehicle dynamics systems. Single-track mode, based on a linearized vehicle and tire model, is used. Based on a given trajectory, we drove the vehicle at low and high forward speeds and on low- and high-friction road surfaces for a double-lane change scenario, in order to follow the desired trajectory as closely as possible while rejecting the effects of wind gusts. We compared controllers based on both simple and complex bicycle models, without and with roll vehicle dynamics, for different types of model predictive control manoeuvres. The simulation results showed that model predictive control gave better performance in terms of robustness for both forward speed and road surface variation in autonomous path-following control. It also demonstrated that model predictive control is useful for maintaining vehicle stability along the desired path and can eliminate the crosswind effect.
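The receding-horizon idea behind model predictive control can be sketched with a brute-force variant for a double-integrator "vehicle": at each step, candidate inputs are simulated forward over the horizon with the linear model, and the cheapest one is applied. Real MPC (as in the study) solves a constrained optimization over a vehicle/tire model instead; all numbers here are illustrative:

```python
def mpc_step(x, v, ref, horizon=10, dt=0.1,
             candidates=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    """One receding-horizon step for a double integrator (position x,
    velocity v): try each constant acceleration over the horizon,
    predict with the linear model, and return the input with the lowest
    predicted tracking cost. A brute-force stand-in for the quadratic
    program solved in real MPC."""
    def cost(u):
        xp, vp, c = x, v, 0.0
        for _ in range(horizon):
            vp += u * dt                      # linear prediction model
            xp += vp * dt
            c += (xp - ref) ** 2 + 0.01 * u ** 2  # tracking + effort
        return c
    return min(candidates, key=cost)
```

Only the first input of the optimized sequence is applied before re-optimizing at the next step, which is what gives MPC its robustness to disturbances such as the wind gusts considered in the paper.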
Moeck, Christian; Von Freyberg, Jana; Schrimer, Maria
2016-04-01
An important question in recharge impact studies is how model choice, structure and calibration period affect recharge predictions. It is still unclear if a certain model type or structure is less affected by running the model on time periods with different hydrological conditions compared to the calibration period. This aspect, however, is crucial to ensure reliable predictions of groundwater recharge. In this study, we quantify and compare the effect of groundwater recharge model choice, model parametrization and calibration period in a systematic way. This analysis was possible thanks to a unique data set from a large-scale lysimeter in a pre-alpine catchment where daily long-term recharge rates are available. More specifically, the following issues are addressed: We systematically evaluate how the choice of hydrological models influences predictions of recharge. We assess how different parameterizations of models due to parameter non-identifiability affect predictions of recharge by applying a Monte Carlo approach. We systematically assess how the choice of calibration periods influences predictions of recharge within a differential split sample test focusing on the model performance under extreme climatic and hydrological conditions. Results indicate that all applied models (simple lumped to complex physically based models) were able to simulate the observed recharge rates for five different calibration periods. However, there was a marked impact of the calibration period when the complete 20 years validation period was simulated. Both, seasonal and annual differences between simulated and observed daily recharge rates occurred when the hydrological conditions were different to the calibration period. These differences were, however, less distinct for the physically based models, whereas the simpler models over- or underestimate the observed recharge depending on the considered season. It is, however, possible to reduce the differences for the simple models by
Cultural Resource Predictive Modeling
2017-10-01
refining formal, inductive predictive models is the quality of the archaeological and environmental data. To build models efficiently, relevant... geomorphology, and historic information. Lessons Learned: The original model was focused on the identification of prehistoric resources. This... system but uses predictive modeling informally. For example, there is no probability for buried archaeological deposits on the Burton Mesa, but there is
A Unified Model of Performance for Predicting the Effects of Sleep and Caffeine.
Ramakrishnan, Sridhar; Wesensten, Nancy J; Kamimori, Gary H; Moon, James E; Balkin, Thomas J; Reifman, Jaques
2016-10-01
Existing mathematical models of neurobehavioral performance cannot predict the beneficial effects of caffeine across the spectrum of sleep loss conditions, limiting their practical utility. Here, we closed this research gap by integrating a model of caffeine effects with the recently validated unified model of performance (UMP) into a single, unified modeling framework. We then assessed the accuracy of this new UMP in predicting performance across multiple studies. We hypothesized that the pharmacodynamics of caffeine vary similarly during both wakefulness and sleep, and that caffeine has a multiplicative effect on performance. Accordingly, to represent the effects of caffeine in the UMP, we multiplied a dose-dependent caffeine factor (which accounts for the pharmacokinetics and pharmacodynamics of caffeine) by the performance estimated in the absence of caffeine. We assessed the UMP predictions in 14 distinct laboratory- and field-study conditions, including 7 different sleep-loss schedules (from 5 h of sleep per night to continuous sleep loss for 85 h) and 6 different caffeine doses (from placebo to repeated 200 mg doses to a single dose of 600 mg). The UMP accurately predicted group-average psychomotor vigilance task performance data across the different sleep loss and caffeine conditions, with greater accuracy for less severe sleep loss conditions than for more severe cases. Overall, accounting for the effects of caffeine improved predictions (after caffeine consumption) by up to 70%. The UMP provides the first comprehensive tool for accurate selection of combinations of sleep schedules and caffeine countermeasure strategies to optimize neurobehavioral performance. © 2016 Associated Professional Sleep Societies, LLC.
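The multiplicative structure described in the abstract can be sketched in a few lines: a dose-dependent pharmacokinetic curve feeds a factor that scales the caffeine-free impairment prediction. The rate constants, the factor's functional form, and the slope below are illustrative stand-ins, not the UMP's actual parameters.

```python
import math

# Hedged sketch of a multiplicative caffeine effect on a performance-impairment
# score. A one-compartment oral-dose (Bateman-shaped) concentration curve
# drives a factor that shrinks the caffeine-free prediction toward zero.
# ka, ke and slope are illustrative values, not the published model's.

def caffeine_concentration(dose_mg, t_h, ka=2.0, ke=0.2):
    """Relative plasma concentration t_h hours after an oral dose."""
    return dose_mg * (ka / (ka - ke)) * (math.exp(-ke * t_h) - math.exp(-ka * t_h))

def predicted_impairment(baseline_impairment, dose_mg, t_h, slope=0.002):
    """Multiplicative effect: higher concentration -> smaller impairment."""
    factor = 1.0 / (1.0 + slope * caffeine_concentration(dose_mg, t_h))
    return baseline_impairment * factor
```

With a zero dose the factor is exactly 1, so the model reduces to the caffeine-free prediction, which is the property that lets a single framework cover both placebo and dosed conditions.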
Predictive modeling of complications.
Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P
2016-09-01
Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.
Energy Technology Data Exchange (ETDEWEB)
Jack Istok; Melora Park; James McKinley; Chongxuan Liu; Lee Krumholz; Anne Spain; Aaron Peacock; Brett Baldwin
2007-04-19
The overall goal of this project is to develop and test a thermodynamic network model for predicting the effects of substrate additions and environmental perturbations on microbial growth, community composition and system geochemistry. The hypothesis is that a thermodynamic analysis of the energy-yielding growth reactions performed by defined groups of microorganisms can be used to make quantitative and testable predictions of the change in microbial community composition that will occur when a substrate is added to the subsurface or when environmental conditions change.
Modelling the cutting edge radius size effect for force prediction in micro milling
DEFF Research Database (Denmark)
Bissacco, Giuliano; Hansen, Hans Nørgaard; Jan, Slunsky
2008-01-01
This paper presents a theoretical model for cutting force prediction in micro milling, taking into account the cutting edge radius size effect, the tool run-out and the deviation of the chip flow angle from the inclination angle. A parameterization according to the ratio of uncut chip thickness to cutting edge radius is used for the parameters involved in the force calculation. The model was verified by means of cutting force measurements in micro milling. The results show good agreement between predicted and measured forces. It is also demonstrated that the use of Stabler's rule is a reasonable approximation and that micro end mill run-out is effectively compensated by the deflections induced by the cutting forces.
Bayerstadler, Andreas; Benstetter, Franz; Heumann, Christian; Winter, Fabian
2014-09-01
Predictive Modeling (PM) techniques are gaining importance in the worldwide health insurance business. Modern PM methods are used for customer relationship management, risk evaluation and medical management. This article illustrates a PM approach that enables the economic potential of (cost-)effective disease management programs (DMPs) to be fully exploited by optimized candidate selection, as an example of successful data-driven business management. The approach is based on a Generalized Linear Model (GLM) that is easy for health insurance companies to apply. By means of a small portfolio from an emerging country, we show that our GLM approach is stable compared to more sophisticated regression techniques in spite of the difficult data environment. Additionally, we demonstrate for this setting that our model can compete with the expensive solutions offered by professional PM vendors and outperforms non-predictive standard approaches for DMP selection commonly used in the market.
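The candidate-selection step the article describes can be sketched as scoring each insured person with a log-link GLM prediction and enrolling the top-ranked slice. The features, coefficients and the `predicted_cost` form below are invented for illustration; the article's actual GLM specification is not given in the abstract.

```python
import math

# Illustrative candidate selection for a disease management program (DMP):
# score people with a log-link GLM-style cost prediction, then take the top n.
# Coefficients and features are made up for this sketch.

def predicted_cost(age, prior_cost, beta=(5.0, 0.01, 0.4)):
    """Log-link prediction: cost = exp(b0 + b_age*age + b_prior*log(1+prior_cost))."""
    b0, b_age, b_prior = beta
    return math.exp(b0 + b_age * age + b_prior * math.log1p(prior_cost))

def select_candidates(people, top_n):
    """people: list of (person_id, age, prior_cost); returns ids ranked by predicted cost."""
    ranked = sorted(people, key=lambda p: predicted_cost(p[1], p[2]), reverse=True)
    return [p[0] for p in ranked[:top_n]]
```

Ranking by a predicted quantity rather than by a fixed rule (e.g. "everyone above a cost threshold") is what distinguishes the predictive approach from the non-predictive standard approaches the abstract mentions.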
Ma, Zhuanglin; Zhang, Honglu; Chien, Steven I-Jy; Wang, Jin; Dong, Chunjiao
2017-01-01
To investigate the relationship between crash frequency and potential influence factors, accident data for events occurring on a 50 km long expressway in China, including 567 crash records (2006-2008), were collected and analyzed. Both the fixed-length and the homogeneous longitudinal grade methods were applied to divide the study expressway section into segments. A negative binomial (NB) model and a random effect negative binomial (RENB) model were developed to predict crash frequency. The parameters of both models were determined using the maximum likelihood (ML) method, and a mixed stepwise procedure was applied to examine the significance of explanatory variables. Three explanatory variables, including longitudinal grade, road width, and ratio of longitudinal grade to curve radius (RGR), were found to significantly affect crash frequency. The marginal effects of the significant explanatory variables on crash frequency were analyzed. Model performance was assessed by the relative prediction error and the cumulative standardized residual. The results show that the RENB model outperforms the NB model. It was also found that model performance with the fixed-length segment method is superior to that with the homogeneous longitudinal grade segment method. Copyright © 2016. Published by Elsevier Ltd.
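For a count model with a log link, such as the NB and RENB models above, the expected crash frequency is the exponential of the linear predictor, and the marginal effect of a covariate is its coefficient times the expectation. The coefficient values below are invented for the sketch; the paper's estimates are not reported in the abstract.

```python
import math

# Illustrative log-link crash-frequency predictor with the three covariates
# named in the abstract. Coefficients are made up, not the paper's estimates.

def expected_crashes(grade_pct, road_width_m, rgr, beta=(-1.0, 0.15, 0.02, 0.5)):
    """mu = exp(b0 + b_grade*grade + b_width*width + b_rgr*RGR)."""
    b0, b_grade, b_width, b_rgr = beta
    return math.exp(b0 + b_grade * grade_pct + b_width * road_width_m + b_rgr * rgr)

def marginal_effect(beta_k, mu):
    """d mu / d x_k for a log-link count model: beta_k * mu."""
    return beta_k * mu
```

Because the marginal effect scales with the predicted mean, the same coefficient implies a larger absolute change in crash frequency on segments that are already high-risk.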
Watt, James; Webster, Thomas F; Schlezinger, Jennifer J
2016-09-01
The vast array of potential environmental toxicant combinations necessitates the development of efficient strategies for predicting the toxic effects of mixtures. Current practices emphasize the use of concentration addition to predict joint effects of endocrine disrupting chemicals in coexposures. Generalized concentration addition (GCA) is one such method for predicting joint effects of coexposures to chemicals and has the advantage of allowing for mixture components to differ in efficacy (i.e., dose-response curve maxima). Peroxisome proliferator-activated receptor gamma (PPARγ) is a nuclear receptor that plays a central role in regulating lipid homeostasis, insulin sensitivity, and bone quality and is the target of an increasing number of environmental toxicants. Here, we tested the applicability of GCA in predicting mixture effects of therapeutic (rosiglitazone and a nonthiazolidinedione partial agonist) and environmental PPARγ ligands (phthalate compounds identified using EPA's ToxCast database). Transcriptional activation of human PPARγ1 by individual compounds and mixtures was assessed using a peroxisome proliferator response element-driven luciferase reporter. Using individual dose-response parameters and GCA, we generated predictions of PPARγ activation by the mixtures, and we compared these predictions with the empirical data. At high concentrations, GCA provided a better estimation of the experimental response compared with 3 alternative models: toxic equivalency factor, effect summation and independent action. These alternatives provided reasonable fits to the data at low concentrations in this system. These experiments support the implementation of GCA in mixtures analysis with endocrine disrupting compounds and establish PPARγ as an important target for further studies of chemical mixtures. © The Author 2016. Published by Oxford University Press on behalf of the Society of Toxicology. All rights reserved.
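For agonists described by Hill functions with unit slope, GCA has a closed form: the mixture effect is the efficacy-weighted sum of scaled concentrations divided by one plus the unweighted sum. The sketch below implements that textbook form; the concentrations, EC50s and maximal effects in the usage are illustrative, not the paper's fitted values.

```python
# Generalized concentration addition (GCA) for unit-slope Hill agonists:
#   effect = sum(a_i * c_i / K_i) / (1 + sum(c_i / K_i))
# where a_i is ligand i's maximal effect and K_i its EC50. This allows partial
# agonists (small a_i) to dilute the response of full agonists, which simple
# concentration addition cannot represent.

def gca_effect(concentrations, ec50s, max_effects):
    num = sum(a * c / k for c, k, a in zip(concentrations, ec50s, max_effects))
    den = 1.0 + sum(c / k for c, k in zip(concentrations, ec50s))
    return num / den
```

Two sanity checks fall out of the formula: a single compound at its EC50 produces half its maximal effect, and at very high concentration a partial agonist's mixture effect approaches its own (lower) maximum.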
Predicting the Effects of Interventions: A Tutorial on the Disequilibrium Model.
Jacobs, Kenneth W; Morford, Zachary H; King, James E; Hayes, Linda J
2017-06-01
The disequilibrium approach to reinforcement and punishment, derived from the probability-differential hypothesis and response deprivation hypothesis, provides a number of potentially useful mathematical models for practitioners. The disequilibrium approach and its accompanying models have proven effective in the prediction and control of behavior, yet they have not been fully espoused and integrated into clinical practice. The purpose of this tutorial is to detail the disequilibrium approach and adapt its mathematical models for use as a tool in applied settings. The disequilibrium models specify how to arrange contingencies and predict the effects of those contingencies. We aggregate these models, and provide them as a single tool, in the form of a Microsoft Excel® spreadsheet that calculates the direction and magnitude of behavior change based on baseline measures and a practitioner's choice of intervention parameters. How practitioners take baseline measures and select intervention parameters in accordance with disequilibrium models is explicated. The proposed tool can be accessed and downloaded for use at https://osf.io/knf7x/.
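The direction-of-change prediction at the core of the disequilibrium approach can be sketched from the response deprivation hypothesis: a schedule requiring O_i units of the instrumental response per O_c units of the contingent response is predicted to increase instrumental responding when it restricts the contingent response below its baseline ratio, and to decrease it when it forces an excess. This is a simplified sketch of the direction test only, not the spreadsheet's magnitude calculation.

```python
from fractions import Fraction

# Response-deprivation direction test (simplified). b_i, b_c are baseline
# counts of the instrumental and contingent responses; o_i, o_c are the
# schedule's required amounts. Exact rational arithmetic avoids float ties.

def predicted_direction(b_i, b_c, o_i, o_c):
    baseline, schedule = Fraction(b_c, b_i), Fraction(o_c, o_i)
    if schedule < baseline:
        return "increase"   # contingent response deprived -> reinforcement
    if schedule > baseline:
        return "decrease"   # contingent response in excess -> punishment
    return "no change"      # schedule matches the baseline ratio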
Archaeological predictive model set.
2015-03-01
This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to : develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...
Predictive and Descriptive CoMFA Models: The Effect of Variable Selection.
Sepehri, Bakhtyar; Omidikia, Nematollah; Kompany-Zareh, Mohsen; Ghavami, Raouf
2018-01-01
Aims & Scope: In this research, 8 variable selection approaches were used to investigate the effect of variable selection on the predictive power and stability of CoMFA models. Three data sets, including 36 EPAC antagonists, 79 CD38 inhibitors and 57 ATAD2 bromodomain inhibitors, were modelled by CoMFA. First, for all three data sets, CoMFA models with all CoMFA descriptors were created; then, by applying each variable selection method, a new CoMFA model was developed, so for each data set 9 CoMFA models were built. The results show that noisy and uninformative variables affect CoMFA results. Based on the created models, applying 5 variable selection approaches, including FFD, SRD-FFD, IVE-PLS, SRD-UVE-PLS and SPA-jackknife, increases the predictive power and stability of CoMFA models significantly. Among them, SPA-jackknife removes most of the variables while FFD retains most of them. FFD and IVE-PLS are time-consuming processes, while SRD-FFD and SRD-UVE-PLS run in a few seconds. Applying FFD, SRD-FFD, IVE-PLS and SRD-UVE-PLS also preserves CoMFA contour map information for both fields. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Levy, R.; Mcginness, H.
1976-01-01
Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
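An "interim"-style simulation like the one described, producing uncorrelated hourly speed samples that reproduce a site's statistical wind distribution, can be sketched by drawing from a Weibull distribution (a common fit for wind-speed records) and converting speed to available power, which scales with speed cubed. The Weibull scale/shape values and the turbine constants below are illustrative, not Goldstone's fitted parameters.

```python
import random

# Sketch of an uncorrelated hourly wind-speed simulator plus a power
# conversion. Scale/shape and turbine constants are illustrative.

def hourly_speeds(n_hours, scale=6.5, shape=2.0, seed=0):
    """Draw n_hours independent Weibull-distributed wind speeds (m/s)."""
    rng = random.Random(seed)
    return [rng.weibullvariate(scale, shape) for _ in range(n_hours)]

def wind_power_w(speed_ms, rho=1.2, rotor_area_m2=100.0, efficiency=0.4):
    """Extracted power: 0.5 * rho * A * v^3 * Cp."""
    return 0.5 * rho * rotor_area_m2 * speed_ms ** 3 * efficiency
```

The abstract's stochastic model goes further by also reproducing the hour-to-hour correlation of speeds, which independent draws like these deliberately ignore.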
The effects of microphysical parameterization on model predictions of sulfate production in clouds
HEGG, DEAN A.; LARSON, TIMOTHY V.
2011-01-01
Model predictions of sulfate production by an explicit cloud chemistry parameterization are compared with corresponding predictions by a bulk chemistry model. Under conditions of high SO2 and H2O2, the various model predictions are in reasonable agreement. For conditions of low H2O2, the explicit microphysical model predicts sulfate production as much as 30 times higher than the bulk model, though more commonly the difference is of the order of a factor of 3. The differences arise because of ...
The effect of scaling physiological cross-sectional area on musculoskeletal model predictions.
Bolsterlee, Bart; Vardy, Alistair N; van der Helm, Frans C T; Veeger, H E J DirkJan
2015-07-16
Personalisation of model parameters is likely to improve biomechanical model predictions and could allow models to be used for subject- or patient-specific applications. This study evaluates the effect of personalising physiological cross-sectional areas (PCSA) in a large-scale musculoskeletal model of the upper extremity. Muscle volumes obtained from MRI were used to scale the PCSAs of five subjects, for whom the maximum forces they could exert in six different directions on a handle held by the hand were also recorded. The effect of PCSA scaling was evaluated by calculating the lowest maximum muscle stress (σmax, a constant for human skeletal muscle) required by the model to reproduce these forces. When the original cadaver-based PCSA values were used, strongly different between-subject σmax values were found (σmax = 106.1 ± 39.9 N cm(-2)). A relatively simple, uniform scaling routine reduced this variation substantially (σmax = 69.4 ± 9.4 N cm(-2)) and led to results similar to those obtained with a more detailed, muscle-specific scaling routine (σmax = 71.2 ± 10.8 N cm(-2)). Using subject-specific PCSA values to simulate a shoulder abduction task changed muscle force predictions for the subscapularis and the pectoralis major on average by 33% and 21%, respectively. We conclude that individualisation of the model's strength can most easily be done by scaling PCSA with a single factor that can be derived from muscle volume data or, alternatively, from maximum force measurements. However, since PCSA scaling only marginally changed muscle and joint contact force predictions for submaximal tasks, the need for PCSA scaling remains debatable. Copyright © 2015 Elsevier Ltd. All rights reserved.
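The uniform-scaling idea can be sketched directly: multiply every muscle's cadaver-based PCSA by one subject-specific factor (for example, the ratio of the subject's muscle volume to the cadaver dataset's), then obtain each muscle's maximum force as σmax times PCSA. The factor definition, muscle names and numbers below are illustrative, not the study's data.

```python
# Sketch of uniform PCSA scaling: one subject-specific factor applied to all
# cadaver-based PCSA values, with maximum force = sigma_max * PCSA.
# The volume-ratio factor and sigma_max value are illustrative assumptions.

def scale_pcsas(cadaver_pcsas_cm2, subject_volume_cm3, cadaver_volume_cm3):
    """Scale every muscle's PCSA by the subject/cadaver muscle-volume ratio."""
    factor = subject_volume_cm3 / cadaver_volume_cm3
    return {muscle: pcsa * factor for muscle, pcsa in cadaver_pcsas_cm2.items()}

def max_muscle_force_n(pcsa_cm2, sigma_max_n_per_cm2=70.0):
    """Maximum isometric force for a given PCSA at a fixed maximum stress."""
    return sigma_max_n_per_cm2 * pcsa_cm2
```

Because a single factor preserves the relative strength of muscles, this routine only rescales overall strength, which is consistent with the study's finding that it performs nearly as well as muscle-specific scaling.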
Urbanization Impacts on Mammals across Urban-Forest Edges and a Predictive Model of Edge Effects
Villaseñor, Nélida R.; Driscoll, Don A.; Escobar, Martín A. H.; Gibbons, Philip; Lindenmayer, David B.
2014-01-01
With accelerating rates of urbanization worldwide, a better understanding of ecological processes at the wildland-urban interface is critical to conserve biodiversity. We explored the effects of high- and low-density housing developments on forest-dwelling mammals. Based on habitat characteristics, we expected a gradual decline in species abundance across forest-urban edges and an increased decline rate in higher-contrast edges. We surveyed arboreal mammals in sites of high and low housing density along 600 m transects that spanned urban areas and adjacent native forest. We also surveyed forest controls to test whether edge effects extended beyond our edge transects. We fitted models describing richness, total abundance and individual species abundance. Low-density housing developments provided suitable habitat for most arboreal mammals. In contrast, high-density housing developments had lower species richness, total abundance and individual species abundance, but supported the highest abundances of an urban adapter (Trichosurus vulpecula). We did not find the predicted gradual decline in species abundance. Of four species analysed, three exhibited no response to the proximity of urban boundaries, but spilled over into adjacent urban habitat to differing extents. One species (Petaurus australis) had an extended negative response to urban boundaries, suggesting that urban development has impacts beyond 300 m into adjacent forest. Our empirical work demonstrates that high-density housing developments have negative effects on both community- and species-level responses, except for one urban adapter. We developed a new predictive model of edge effects based on our results and the literature. To predict animal responses across edges, our framework integrates for the first time: (1) habitat quality/preference, (2) species response to the proximity of the adjacent habitat, and (3) spillover extent/sensitivity to adjacent habitat boundaries. This framework will
Zephyr - the prediction models
DEFF Research Database (Denmark)
Nielsen, Torben Skov; Madsen, Henrik; Nielsen, Henrik Aalborg
2001-01-01
This paper briefly describes new models and methods for predicting the wind power output from wind farms. The system is being developed in a project which has the research organization Risø and the Department of Informatics and Mathematical Modelling (IMM) as the modelling team and all the Danish utilities as partners and users. The new models are evaluated for five wind farms in Denmark as well as one wind farm in Spain. It is shown that the predictions based on conditional parametric models are superior to the predictions obtained by state-of-the-art parametric models.
McCormick, Keith; Wei, Bowen
2017-01-01
IBM SPSS Modeler allows quick, efficient predictive analytics and insight building from your data, and is a popularly used data mining tool. This book will guide you through the data mining process, and presents relevant statistical methods which are used to build predictive models and conduct other analytic tasks using IBM SPSS Modeler. From ...
Potter, Gail E; Smieszek, Timo; Sailer, Kerstin
2015-09-01
Face-to-face social contacts are potentially important transmission routes for acute respiratory infections, and understanding the contact network can improve our ability to predict, contain, and control epidemics. Although workplaces are important settings for infectious disease transmission, few studies have collected workplace contact data and estimated workplace contact networks. We use contact diaries, architectural distance measures, and institutional structures to estimate social contact networks within a Swiss research institute. Some contact reports were inconsistent, indicating reporting errors. We adjust for this with a latent variable model, jointly estimating the true (unobserved) network of contacts and duration-specific reporting probabilities. We find that contact probability decreases with distance, and that research group membership, role, and shared projects are strongly predictive of contact patterns. Estimated reporting probabilities were low only for 0-5 min contacts. Adjusting for reporting error changed the estimate of the duration distribution, but did not change the estimates of covariate effects and had little effect on epidemic predictions. Our epidemic simulation study indicates that inclusion of network structure based on architectural and organizational structure data can improve the accuracy of epidemic forecasting models.
Effects of soil data resolution on SWAT model stream flow and water quality predictions.
Geza, Mengistu; McCray, John E
2008-08-01
The prediction accuracy of agricultural nonpoint source pollution models such as the Soil and Water Assessment Tool (SWAT) depends on how well model input spatial parameters describe the characteristics of the watershed. The objective of this study was to assess the effects of different soil data resolutions on stream flow, sediment and nutrient predictions when used as input for SWAT. SWAT model predictions were compared for two US Department of Agriculture soil databases with different resolutions, namely the State Soil Geographic database (STATSGO) and the Soil Survey Geographic database (SSURGO). The same number of sub-basins was used in the watershed delineation; however, the numbers of hydrologic response units (HRUs) generated when STATSGO and SSURGO soil data were used were 261 and 1301, respectively. SSURGO, with the highest spatial resolution, has 51 unique soil types in the watershed distributed across 1301 HRUs, while STATSGO has only three distributed across 261 HRUs. As a result of its low resolution, STATSGO assigns a single classification to areas that may have different soil types if SSURGO were used. SSURGO included HRUs with soil types that were generalized to one soil group in STATSGO. The difference in the number and size of HRUs also has an effect on sediment yield parameters (slope and slope length). Thus, as a result of the discrepancies in soil type and HRU size, the predicted stream flow was higher when SSURGO was used compared to STATSGO. SSURGO predicted less stream loading than STATSGO in terms of sediment and sediment-attached nutrient components, and vice versa for dissolved nutrients. When compared to mean daily measured flow, STATSGO performed better relative to SSURGO before calibration. SSURGO provided better results after calibration as evaluated by the R(2) value (0.74 compared to 0.61 for STATSGO) and the Nash-Sutcliffe coefficient of efficiency (NSE) values (0.70 and 0.61 for SSURGO and STATSGO, respectively), although both are in the same satisfactory
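The two goodness-of-fit measures used to compare the STATSGO and SSURGO runs are directly computable: Nash-Sutcliffe efficiency is one minus the ratio of residual variance to the variance of the observations, and R² here is the squared correlation between observed and simulated series. The function names are ours; the data in the usage are illustrative.

```python
# Goodness-of-fit measures for hydrological model evaluation.

def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs-sim)^2) / sum((obs-mean(obs))^2); 1 is a perfect fit."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

def r_squared(observed, simulated):
    """Squared Pearson correlation between observed and simulated series."""
    n = len(observed)
    mo, ms = sum(observed) / n, sum(simulated) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(observed, simulated))
    vo = sum((o - mo) ** 2 for o in observed)
    vs = sum((s - ms) ** 2 for s in simulated)
    return cov * cov / (vo * vs)
```

A useful contrast between the two: predicting the observed mean everywhere gives NSE = 0, while R² only measures linear association and can be high even for a biased model.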
Inverse and Predictive Modeling
Energy Technology Data Exchange (ETDEWEB)
Syracuse, Ellen Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-09-27
The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple – one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions – to the complex – multidimensional models that are constrained by several types of data and result in more accurate predictions. While team members typically build models of geophysical characteristics of Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.
Model predictions and analysis of enhanced biological effectiveness at low dose rates
International Nuclear Information System (INIS)
Watt, D.E.; Sykes, C.E.; Younis, A.-R.S.
1988-01-01
A severe challenge to all models purporting to describe the biological effects of ionizing radiation has arisen with the discovery of two phenomena: the anomalous trend with dose rate of the frequency of neoplastic transformation of mammalian cells and the apparent excessive damaging power of electron-capture radionuclides when incorporated into cell nuclei. A new model is proposed which predicts and enables interpretation of these phenomena. Radiation effectiveness is found to be expressible absolutely in terms of the geometrical cross-sectional area of the radiosensitive sites, the duration of the irradiation, the mean free path for ionization, the influence of particles in the slowing-down spectrum pertaining in the medium, and two collective time factors determining the mean repair rate and the mean lifetime of unidentified reactive chemical species
Lopresto, Vanni; Pinto, Rosanna; Farina, Laura; Cavagnaro, Marta
2017-08-01
Microwave thermal ablation (MTA) therapy for cancer treatment relies on the absorption of electromagnetic energy at microwave frequencies to induce a very high and localized temperature increase, which causes irreversible thermal damage in the target zone. Treatment planning in MTA is based on experimental observations of ablation zones in ex vivo tissue, while prediction of treatment outcomes could be greatly improved by reliable numerical models. In this work, a fully dynamical simulation model is exploited to examine the effects of temperature-dependent variations in the dielectric and thermal properties of the targeted tissue on the prediction of the temperature increase and the extension of the thermally coagulated zone. In particular, the influence of measurement uncertainty in tissue parameters on the numerical results is investigated. Numerical data were compared with data from MTA experiments performed on ex vivo bovine liver tissue at 2.45 GHz, with a power of 60 W applied for 10 min. By including in the simulation model an uncertainty budget (CI = 95%) of ±25% in the properties of the tissue due to measurement inaccuracy, numerical results were achieved in the range of experimental data. The results also showed that the specific heat especially influences the extension of the thermally coagulated zone, with an increase of 27% in length and 7% in diameter when a variation of -25% is considered with respect to the value of the reference simulation model. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
Melanoma risk prediction models
Directory of Open Access Journals (Sweden)
Nikolić Jelena
2014-01-01
Background/Aim. The lack of effective therapy for advanced stages of melanoma emphasizes the importance of preventive measures and screening of populations at risk. Identifying individuals at high risk should allow targeted screening and follow-up involving those who would benefit most. The aim of this study was to identify the most significant factors for melanoma prediction in our population and to create prognostic models for identification and differentiation of individuals at risk. Methods. This case-control study included 697 participants (341 patients and 356 controls) who underwent an extensive interview and skin examination in order to check risk factors for melanoma. Pairwise univariate statistical comparison was used for the coarse selection of the most significant risk factors. These factors were fed into logistic regression (LR) and alternating decision tree (ADT) prognostic models that were assessed for their usefulness in identifying patients at risk of developing melanoma. Validation of the LR model was done by the Hosmer-Lemeshow test, whereas the ADT was validated by 10-fold cross-validation. The achieved sensitivity, specificity, accuracy and AUC for both models were calculated. A melanoma risk score (MRS) based on the outcome of the LR model is presented. Results. The LR model showed that the following risk factors were associated with melanoma: sunbeds (OR = 4.018; 95% CI 1.724-9.366 for those who sometimes used sunbeds), solar damage of the skin (OR = 8.274; 95% CI 2.661-25.730 for those with severe solar damage), hair color (OR = 3.222; 95% CI 1.984-5.231 for light brown/blond hair), the number of common naevi (over 100 naevi: OR = 3.57; 95% CI 1.427-8.931), the number of dysplastic naevi (1 to 10 dysplastic naevi: OR = 2.672; 95% CI 1.572-4.540; more than 10: OR = 6.487; 95% CI 1.993-21.119), Fitzpatrick's phototype and the presence of congenital naevi. Red hair, phototype I and large congenital naevi were
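A logistic-regression risk score of the kind described is assembled by taking each factor's coefficient as the natural log of its odds ratio, summing the coefficients for a subject's factors, and mapping through the logistic function. The odds ratios below echo those in the abstract, but the intercept and the single-factor-per-category simplification are illustrative assumptions, so the probabilities produced are not the study's calibrated risks.

```python
import math

# Illustrative melanoma risk score from logistic-regression odds ratios.
# Coefficients = ln(OR) for ORs quoted in the abstract; the intercept is an
# assumed value, so outputs are for demonstration only.

LOG_ODDS = {
    "sunbeds_sometimes": math.log(4.018),
    "severe_solar_damage": math.log(8.274),
    "light_hair": math.log(3.222),
    "over_100_naevi": math.log(3.57),
}

def melanoma_risk(factors, intercept=-3.0):
    """Probability via the logistic function of the summed linear predictor."""
    z = intercept + sum(LOG_ODDS[f] for f in factors)
    return 1.0 / (1.0 + math.exp(-z))
```

Because coefficients add on the log-odds scale, each extra factor multiplies the subject's odds by that factor's OR, which is why such scores are often reported as simple point sums.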
The Effect of Nondeterministic Parameters on Shock-Associated Noise Prediction Modeling
Dahl, Milo D.; Khavaran, Abbas
2010-01-01
Engineering applications for aircraft noise prediction contain models for physical phenomena that enable solutions to be computed quickly. These models contain parameters whose uncertainty is not accounted for in the solution. To include uncertainty in the solution, nondeterministic computational methods are applied. Using prediction models for supersonic jet broadband shock-associated noise, fixed model parameters are replaced by probability distributions to illustrate one of these methods. The results show the impact of using nondeterministic parameters both on estimating the model output uncertainty and on the model spectral level prediction. In addition, a global sensitivity analysis is used to determine the influence of the model parameters on the output and to identify the parameters with the least influence on model output.
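The nondeterministic approach can be sketched with a toy Monte Carlo propagation: fixed parameters of a purely illustrative spectral model are replaced by probability distributions, and the spread of the sampled outputs estimates the model output uncertainty. The model form, means and standard deviations below are assumptions, not the paper's shock-noise model:

```python
import random
random.seed(0)

def noise_model(freq, amplitude, peak_freq):
    # Toy stand-in for a shock-noise spectrum: a simple peaked curve.
    return amplitude / (1.0 + (freq - peak_freq) ** 2)

# Deterministic run: fixed parameters give a single spectral level.
fixed = noise_model(2.0, amplitude=10.0, peak_freq=2.5)

# Nondeterministic run: parameters drawn from probability distributions.
samples = [noise_model(2.0,
                       amplitude=random.gauss(10.0, 1.0),
                       peak_freq=random.gauss(2.5, 0.2))
           for _ in range(5000)]
mean = sum(samples) / len(samples)
spread = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
```

The sample mean need not equal the deterministic output (the model is nonlinear in its parameters), and the sample standard deviation quantifies the output uncertainty that the fixed-parameter run hides.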
New Indicated Mean Effective Pressure (IMEP) model for predicting crankshaft movement
International Nuclear Information System (INIS)
Omran, Rabih; Younes, Rafic; Champoussin, Jean-Claude; Outbib, Rachid
2011-01-01
Highlights: → IMEP is essential to estimate the indicated torque in an internal combustion engine. → We proposed a model which describes the IMEP-Low pressure and the IMEP-High pressure separately. → We studied the evolution of the IMEP with respect to the engine's variables. → We deduced the variables of influence that can be used to develop the models. → The IMEP model is compared to transient experimental New European Driving Cycle data. - Abstract: Indicated Mean Effective Pressure (IMEP) models are essential to estimate the indicated torque in an internal combustion engine; they also provide important information about the mechanical efficiency of the engine's thermodynamic cycle, which describes the conversion of fuel combustion energy into mechanical work. In the past, much research was devoted to improving IMEP prediction and measurement techniques at different engine operating conditions. In this paper, we propose a detailed IMEP model which separately describes the IMEP-Low pressure and the IMEP-High pressure of a modern diesel engine; the IMEP is the difference between these two variables. We first studied the evolution of the IMEP HP and IMEP LP with respect to the engine's variables and then deduced the variables of influence and the form of the equations that can be used to develop the models. Finally, the models' coefficients were determined from experimental data collected on a steady-state test bench, using the least-squares regression method. In addition, the IMEP HP model results were compared to transient experimental data collected on a chassis dynamometer test bench; the model results are in excellent agreement with the experimental data.
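A hedged sketch of the fitting step: coefficients of a linear IMEP-HP model determined by least-squares regression on synthetic steady-state data, with net IMEP obtained by subtracting the low-pressure (pumping) part. The regressors and coefficients here are invented for illustration; the paper's actual variables of influence and model form differ:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical steady-state test-bench data: engine speed and fuel mass
# as regressors (stand-ins for the paper's variables of influence).
speed = rng.uniform(1000, 4000, 50)   # rpm
fuel = rng.uniform(5, 40, 50)         # mg/cycle
imep_hp = 0.002 * speed + 0.15 * fuel + rng.normal(0, 0.1, 50)  # bar

# Determine the IMEP-HP model coefficients by least-squares regression.
X = np.column_stack([speed, fuel, np.ones_like(speed)])
coeffs, *_ = np.linalg.lstsq(X, imep_hp, rcond=None)

# Net IMEP is the high-pressure part minus the low-pressure (pumping) part.
imep_lp = 0.5 * np.ones_like(imep_hp)  # illustrative constant losses, bar
imep = imep_hp - imep_lp
```

On this synthetic data the recovered coefficients land close to the generating values, which is the basic check one would run before trusting the fit on transient cycles.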
Fog modelling during the ParisFog campaign: predictive approach and spatial heterogeneity effect
International Nuclear Information System (INIS)
Zhang, Xiaojing
2010-01-01
In fog and low-cloud modeling, an accurate understanding of the interaction among turbulence, microphysics and radiation remains an important issue for improving the quality of numerical prediction. Better fog modeling matters both for transportation forecasting and for industry, because of atmospheric discharges (cooling towers, smog...). The 1D version of Code-Saturne was used for numerical simulation with observational data from the ParisFog campaign, which took place at the SIRTA site during the 2006-2007 winter. Comparison between simulation and observation shows that the model is able to reproduce the fog evolution correctly from formation to dissipation. A sensitivity analysis of the behavior of the different parameterizations shows that the fog dynamics are sensitive to the turbulence closure, the fog water content to the sedimentation processes, and the fog droplet spectrum to the nucleation scheme. A long-period simulation in forecasting mode demonstrates the robustness of the model and the contribution of coupling to a mesoscale model by nudging for forecasts 36 hours in advance. The 3D version of Code-Saturne allows us to study the effect of spatial heterogeneity on fog formation. First, simulations were performed on a horizontally homogeneous domain in RANS mode; subsequently, the surface roughness of different surface types and the built-up area are taken into account. (author) [fr
A mathematical model to predict the effect of heat recovery on the wastewater temperature in sewers.
Dürrenmatt, David J; Wanner, Oskar
2014-01-01
Raw wastewater contains considerable amounts of energy that can be recovered by means of a heat pump and a heat exchanger installed in the sewer. The technique is well established, and approximately 50 facilities in Switzerland have been using it successfully for years. The planning of new facilities requires predictions of the effect of heat recovery on the wastewater temperature in the sewer, because altered wastewater temperatures may cause problems for the biological processes used in wastewater treatment plants and receiving waters. A mathematical model is presented that calculates the discharge in a sewer conduit and the spatial profiles and dynamics of the temperature in the wastewater, sewer headspace, pipe, and surrounding soil. The model was implemented in the simulation program TEMPEST and was used to evaluate measured time series of discharge and temperatures. It was found that the model adequately reproduces the measured data and that the temperature and thermal conductivity of the soil and the distance between the sewer pipe and undisturbed soil are the most sensitive model parameters. The temporary storage of heat in the pipe wall and the exchange of heat between wastewater and the pipe wall are the most important processes for heat transfer. The model can be used as a tool to determine the optimal site for heat recovery and the maximal amount of extractable heat. Copyright © 2013 Elsevier Ltd. All rights reserved.
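As a rough sanity check of the planning question, the immediate temperature drop caused by heat extraction can be estimated from a first-order energy balance. This is a deliberate simplification: the TEMPEST model described above additionally tracks heat exchange with the pipe wall, headspace and soil.

```python
def temperature_drop(heat_extracted_kw, discharge_l_per_s):
    """Wastewater temperature drop across a heat exchanger.

    First-order energy balance dT = P / (m_dot * c_p), ignoring the
    pipe-wall, headspace and soil heat exchange that a full sewer
    temperature model resolves.
    """
    c_p = 4.18                       # kJ/(kg K), specific heat of water
    m_dot = discharge_l_per_s * 1.0  # kg/s (1 L of wastewater ~ 1 kg)
    return heat_extracted_kw / (m_dot * c_p)

# Extracting 200 kW from a 50 L/s dry-weather flow:
dT = temperature_drop(200.0, 50.0)
```

The example yields a drop just under 1 K, the order of magnitude at which effects on downstream treatment-plant biology start to matter.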
Energy Technology Data Exchange (ETDEWEB)
Drover, Damion Ryan
2011-12-01
One of the largest exports in the Southeast U.S. is forest products. Interest in biofuels using forest biomass has increased recently, leading to more research into improved forest management best practices (BMPs). The USDA Forest Service, along with the Oak Ridge National Laboratory, the University of Georgia and Oregon State University, is researching the impacts of intensive forest management for biofuels on water quality and quantity at the Savannah River Site in South Carolina. Surface runoff from saturated areas, transporting excess nutrients and contaminants, is a potential water quality issue under investigation. Detailed maps of variable source areas and soil characteristics would therefore be helpful prior to treatment. The availability of remotely sensed and computed digital elevation models (DEMs) and spatial analysis tools makes it easy to calculate terrain attributes. These terrain attributes can be used in models to predict saturated areas or other attributes in the landscape. With laser altimetry, an area can be flown to produce very high resolution data, and the resulting data can be resampled into any desired DEM resolution. Additionally, many existing maps come in various DEM resolutions, such as those acquired from the U.S. Geological Survey. Problems arise when using maps derived from different resolution DEMs. For example, saturated areas can be under- or overestimated depending on the resolution used. The purpose of this study was to examine the effects of DEM resolution on the calculation of topographic wetness indices used to predict variable source areas of saturation, and to find the best resolutions to produce prediction maps of soil attributes like nitrogen, carbon, bulk density and soil texture for low-relief, humid-temperate forested hillslopes. Topographic wetness indices were calculated based on the derived terrain attributes, slope and specific catchment area, from five different DEM resolutions. The DEMs were resampled from LiDAR, which is a
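The topographic wetness index referred to above is commonly computed as TWI = ln(a / tan β), where a is the specific catchment area and β the local slope. A minimal sketch (the input values are illustrative) of how coarsening the DEM, which typically flattens slopes and enlarges catchment areas, inflates the index:

```python
import math

def topographic_wetness_index(specific_catchment_area, slope_deg):
    """TWI = ln(a / tan(beta)); high values flag likely saturated areas."""
    beta = math.radians(slope_deg)
    return math.log(specific_catchment_area / math.tan(beta))

# The same hillslope cell summarized at two DEM resolutions: coarser
# grids smooth slopes and enlarge catchment areas (values invented).
fine = topographic_wetness_index(specific_catchment_area=30.0, slope_deg=8.0)
coarse = topographic_wetness_index(specific_catchment_area=120.0, slope_deg=4.0)
```

Because both resolution effects push TWI upward, saturated-area maps built from coarse DEMs tend to overestimate wet extents, which is exactly the resolution sensitivity the study set out to quantify.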
Directory of Open Access Journals (Sweden)
Mindy M Syfert
Species distribution models (SDMs) trained on presence-only data are frequently used in ecological research and conservation planning. However, users of SDM software are faced with a variety of options, and it is not always obvious how selecting one option over another will affect model performance. Working with MaxEnt software and with tree fern presence data from New Zealand, we assessed whether (a) choosing to correct for geographical sampling bias and (b) using complex environmental response curves have strong effects on goodness of fit. SDMs were trained on tree fern data, obtained from an online biodiversity data portal, with two sources that differed in size and geographical sampling bias: a small, widely-distributed set of herbarium specimens and a large, spatially clustered set of ecological survey records. We attempted to correct for geographical sampling bias by incorporating sampling bias grids in the SDMs, created from all georeferenced vascular plants in the datasets, and explored model complexity issues by fitting a wide variety of environmental response curves (known as "feature types" in MaxEnt). In each case, goodness of fit was assessed by comparing predicted range maps with tree fern presences and absences using an independent national dataset to validate the SDMs. We found that correcting for geographical sampling bias led to major improvements in goodness of fit, but did not entirely resolve the problem: predictions made with clustered ecological data were inferior to those made with the herbarium dataset, even after sampling bias correction. We also found that the choice of feature type had negligible effects on predictive performance, indicating that simple feature types may be sufficient once sampling bias is accounted for. Our study emphasizes the importance of reducing geographical sampling bias, where possible, in datasets used to train SDMs, and the effectiveness and necessity of sampling bias correction within MaxEnt.
Syfert, Mindy M; Smith, Matthew J; Coomes, David A
2013-01-01
Directory of Open Access Journals (Sweden)
Francesco Cozzoli
Human infrastructures can modify ecosystems, thereby affecting the occurrence and spatial distribution of organisms as well as ecosystem functionality. Sustainable development requires the ability to predict responses of species to anthropogenic pressures. We investigated the large-scale, long-term effect of important human alterations of benthic habitats with an integrated approach combining engineering and ecological modelling. We focused our analysis on the Oosterschelde basin (The Netherlands), which was partially embanked by a storm surge barrier (Oosterscheldekering, 1986). We made use of (1) a prognostic (numerical) environmental (hydrodynamic) model and (2) a novel application of quantile regression to Species Distribution Modeling (SDM) to simulate both the realized and the potential (habitat suitability) abundance of four macrozoobenthic species: Scoloplos armiger, Peringia ulvae, Cerastoderma edule and Lanice conchilega. The analysis shows that part of the fluctuations in macrozoobenthic biomass stocks during the last decades is related to the effect of the coastal defense infrastructures on the basin morphology and hydrodynamics. The methodological framework we propose is particularly suitable for the analysis of large abundance datasets combined with high-resolution environmental data. Our analysis provides useful information on future changes in ecosystem functionality induced by human activities.
International Nuclear Information System (INIS)
Massoud, J.P.; Bugat, St.; Marini, B.; Lidbury, D.; Van Dyck, St.; Debarberis, L.
2008-01-01
In nuclear PWRs, materials undergo degradation due to severe irradiation conditions that may limit their operational life. Utilities operating these reactors must quantify the aging and potential degradation of reactor pressure vessels and of internal structures to ensure safe and reliable plant operation. The EURATOM 6th Framework Integrated Project PERFECT (Prediction of Irradiation Damage Effects in Reactor Components) addresses irradiation damage in RPV materials and components by multi-scale modelling. This state-of-the-art approach offers potential advantages over the conventional empirical methods used in current practice of nuclear plant lifetime management. Launched in January 2004, this 48-month project focuses on two main components of nuclear power plants that are subject to irradiation damage: the ferritic steel reactor pressure vessel and the austenitic steel internals. The project is also an opportunity to integrate the fragmented research and experience that currently exist within Europe in the field of numerical simulation of radiation damage, and to create links with international organisations involved in similar projects throughout the world. Continuous progress in the physical understanding of the phenomena involved in irradiation damage and in computer science makes possible the development of multi-scale numerical tools able to simulate the effects of irradiation on material microstructure. The consequences of irradiation for the mechanical and corrosion properties of materials are also tentatively modelled using such multi-scale modelling. This requires developing different mechanistic models at different levels of physics and engineering and extending the state of knowledge in several scientific fields; the links between these different kinds of models are particularly delicate to handle and need specific work. Practically, the main objective of PERFECT is to build
Pankatz, Klaus; Kerkweg, Astrid
2015-04-01
The work presented is part of the joint project "DecReg" ("Regional decadal predictability"), which is in turn part of the project "MiKlip" ("Decadal predictions"), an effort funded by the German Federal Ministry of Education and Research to improve decadal predictions on global and regional scales. In MiKlip, one big question is whether regional climate modeling shows "added value", i.e. whether regional climate models (RCMs) produce better results than the driving models. However, the scope of this study is to look more closely at the setup-specific details of regional climate modeling. As regional models only simulate a small domain, they have to inherit information about the state of the atmosphere at their lateral boundaries from external data sets. There are many unresolved questions concerning the setup of lateral boundary conditions (LBCs). External data sets come from global models or from global reanalysis data sets. A temporal resolution of six hours is common for this kind of data, mainly because storage space is a limiting factor, especially for climate simulations. Theoretically, however, the coupling frequency could be as high as the time step of the driving model, and it is unclear whether a more frequent update of the LBCs has a significant effect on the climate in the domain of the RCM. The first study examines how the RCM reacts to a higher update frequency. It is based on a 30-year time slice experiment for three update frequencies of the LBCs, namely six hours, one hour and six minutes. The evaluation of means, standard deviations and statistics of the climate in the regional domain shows only small deviations, some statistically significant though, in 2 m temperature, sea level pressure and precipitation. The second part of the first study assesses parameters linked to cyclone activity, which is affected by the LBC update frequency. Differences in track density and strength are found when comparing the simulations
Effective high-order solver with thermally perfect gas model for hypersonic heating prediction
International Nuclear Information System (INIS)
Jiang, Zhenhua; Yan, Chao; Yu, Jian; Qu, Feng; Ma, Libin
2016-01-01
Highlights: • Design of a proper numerical flux for thermally perfect gas. • Line-implicit LUSGS enhances efficiency without extra memory consumption. • A unified framework for both second-order MUSCL and fifth-order WENO. • The designed gas model can be applied to a much wider temperature range. - Abstract: An effective high-order solver based on a thermally perfect gas model has been developed for hypersonic heat transfer computation. A polynomial curve fit coupled to the thermodynamic equations is used to establish the current model, and particular attention has been paid to the design of a proper numerical flux for thermally perfect gas. We present procedures that unify the fifth-order WENO (Weighted Essentially Non-Oscillatory) scheme within the existing second-order finite volume framework, and a line-implicit method that improves computational efficiency without increasing memory consumption. A variety of hypersonic viscous flows are computed to examine the capability of the resulting high-order thermally perfect gas solver. Numerical results demonstrate its superior performance compared to a low-order calorically perfect gas method and indicate its potential application to hypersonic heating predictions for real-life problems.
DEFF Research Database (Denmark)
Hadrup, Niels; Taxvig, Camilla; Pedersen, Mikael
2013-01-01
and compared to the experimental mixture data. Mixture 1 contained environmental chemicals adjusted in ratio according to human exposure levels. Mixture 2 was a potency adjusted mixture containing five pesticides. Prediction of testosterone effects coincided with the experimental Mixture 1 data. In contrast...... was the predominant but not sole driver of the mixtures, suggesting that one chemical alone was not responsible for the mixture effects. In conclusion, the GCA model seemed to be superior to the CA and IA models for the prediction of testosterone effects. A situation with chemicals exerting opposing effects...
Xiong, Dapeng; Liu, Rongjie; Xiao, Fen; Gao, Xieping
2014-12-01
Core promoters play significant and extensive roles in the initiation and regulation of DNA transcription. The identification of core promoters remains one of the most challenging problems. Due to the diverse nature of core promoters, the results obtained through existing computational approaches are not satisfactory. None of them considered the potential influence on predictive performance of interference between neighboring TSSs in TSS clusters. In this paper, we explicitly considered this factor and proposed an approach to locate potential TSS clusters according to the correlation of regional profiles of DNA and TSS clusters. On this basis, we further present a novel computational approach (ProMT) for promoter prediction using a Markov chain model and predicted TSS clusters based on structural properties of DNA. Extensive experiments demonstrate that ProMT can significantly improve predictive performance. Considering interference between neighboring TSSs is therefore essential for a wider range of promoter prediction.
Effects of modeled tropical sea surface temperature variability on coral reef bleaching predictions
van Hooidonk, R.; Huber, M.
2012-03-01
Future widespread coral bleaching and subsequent mortality has been projected using sea surface temperature (SST) data derived from global, coupled ocean-atmosphere general circulation models (GCMs). While these models possess fidelity in reproducing many aspects of climate, they vary in their ability to correctly capture such features as the tropical ocean seasonal cycle and El Niño Southern Oscillation (ENSO) variability. Such weaknesses most likely reduce the accuracy of predicting coral bleaching, but little attention has been paid to the important issue of understanding potential errors and biases, the interaction of these biases with trends, and their propagation in predictions. To analyze the relative importance of various types of model errors and biases in predicting coral bleaching, various intra- and inter-annual frequency bands of observed SSTs were replaced with those frequencies from the 20th-century simulations of 24 GCMs included in the Intergovernmental Panel on Climate Change (IPCC) 4th assessment report. Subsequent thermal stress was calculated and predictions of bleaching were made. These predictions were compared with observations of coral bleaching in the period 1982-2007 to calculate accuracy using an objective measure of forecast quality, the Peirce skill score (PSS). Major findings are that: (1) predictions are most sensitive to the seasonal cycle and inter-annual variability in the ENSO 24-60 month frequency band and (2) because models tend to understate the seasonal cycle at reef locations, they systematically underestimate future bleaching. The methodology we describe can be used to improve the accuracy of bleaching predictions by characterizing the errors and uncertainties involved in the predictions.
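The Peirce skill score used here is computed from a 2×2 contingency table of forecast versus observed bleaching events: hit rate minus false-alarm rate. A minimal sketch with illustrative counts (not taken from the study):

```python
def peirce_skill_score(hits, misses, false_alarms, correct_negatives):
    """PSS = hit rate - false-alarm rate, from a 2x2 contingency table.

    1 is a perfect forecast, 0 is no skill, negative is worse than chance.
    """
    hit_rate = hits / (hits + misses)
    false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
    return hit_rate - false_alarm_rate

# Illustrative bleaching forecasts verified against observations:
pss = peirce_skill_score(hits=30, misses=10, false_alarms=20,
                         correct_negatives=140)
```

Unlike raw accuracy, the PSS is unaffected by the large number of correct "no bleaching" cases, which is why it suits rare-event verification such as this one.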
Directory of Open Access Journals (Sweden)
Leslie M. Collins
2005-11-01
Cochlear implants can provide partial restoration of hearing, even with limited spectral resolution and loss of fine temporal structure, to severely deafened individuals. Studies have indicated that background noise has significant deleterious effects on the speech recognition performance of cochlear implant patients. This study investigates the effects of noise on speech recognition using acoustic models of two cochlear implant speech processors and several predictive signal-processing-based analyses. The results of a listening test for vowel and consonant recognition in noise are presented and analyzed using the rate of phonemic feature transmission for each acoustic model. Three methods for predicting patterns of consonant and vowel confusion, based on signal processing techniques that calculate a quantitative difference between speech tokens, are developed and tested using the listening test results. Results of the listening test and confusion predictions are discussed in terms of comparisons between acoustic models and confusion prediction performance.
Angelieri, Cintia Camila Silva; Adams-Hosking, Christine; Ferraz, Katia Maria Paschoaletto Micchi de Barros; de Souza, Marcelo Pereira; McAlpine, Clive Alexander
2016-01-01
A mosaic of intact native and human-modified vegetation can provide important habitat for top predators such as the puma (Puma concolor), avoiding negative effects on other species and ecological processes due to cascading trophic interactions. This study investigates the effects of restoration scenarios on the puma's habitat suitability in the most developed Brazilian region (São Paulo State). Species Distribution Models incorporating restoration scenarios were developed using the species' occurrence information to (1) map habitat suitability of pumas in São Paulo State, Southeast Brazil; (2) test the relative contribution of environmental variables ecologically relevant to the species' habitat suitability and (3) project the predicted habitat suitability onto future native vegetation restoration scenarios. The Maximum Entropy algorithm was used (test AUC of 0.84 ± 0.0228) based on seven non-correlated environmental variables and non-autocorrelated presence-only records (n = 342). The percentage of native vegetation (positive influence), elevation (positive influence) and density of roads (negative influence) were the most important environmental variables in the model. Model projections onto restoration scenarios reflected the strong positive relationship between pumas and native vegetation. These projections identified new high-suitability areas for pumas (probability of presence >0.5) in highly deforested regions. High-suitability areas increased from 5.3% to 8.5% of the total State extension when landscapes were restored to at least the minimum native vegetation cover (20%) established by the Brazilian Forest Code for private lands. This study highlights the importance of a landscape planning approach to improve the conservation outlook for pumas and other species, including not only the establishment and management of protected areas, but also habitat restoration on private lands. Importantly, the results may inform environmental
Effects of turbulence model selection on the prediction of complex aerodynamic flows
Coakley, T. J.; Bergmann, M. Y.
1979-01-01
Numerical simulations of viscous transonic flow over a circular-arc airfoil and in a diffuser are described. The simulations are made with a new computer program designed to serve as a tool in the development of improved turbulence models for complex flows. The program incorporates zero-, one-, and two-equation eddy viscosity models and includes a variety of subsonic and supersonic boundary conditions. The airfoil flow contains a shock-separated boundary-layer interaction that has resisted previous attempts at simulation. The diffuser flow also contains a shock-boundary-layer interaction, which has not been simulated previously. Calculations using standard turbulence models, developed originally for incompressible unseparated flows, are described. Results indicate that although there are interesting differences in predictions between the various models, none of them predict the flows accurately. Suggestions for improved turbulence models are discussed.
Campbell, William; Ganna, Andrea; Ingelsson, Erik; Janssens, A Cecile J W
2016-01-01
We propose a new measure of assessing the performance of risk models, the area under the prediction impact curve (auPIC), which quantifies the performance of risk models in terms of their average health impact in the population. Using simulated data, we explain how the prediction impact curve (PIC) estimates the percentage of events prevented when a risk model is used to assign high-risk individuals to an intervention. We apply the PIC to the Atherosclerosis Risk in Communities (ARIC) Study to illustrate its application toward prevention of coronary heart disease. We estimated that if the ARIC cohort received statins at baseline, 5% of events would be prevented when the risk model was evaluated at a cutoff threshold of 20% predicted risk compared to 1% when individuals were assigned to the intervention without the use of a model. By calculating the auPIC, we estimated that an average of 15% of events would be prevented when considering performance across the entire interval. We conclude that the PIC is a clinically meaningful measure for quantifying the expected health impact of risk models that supplements existing measures of model performance. Copyright © 2016 Elsevier Inc. All rights reserved.
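A hedged sketch of the core PIC calculation as described: the percentage of events prevented when everyone above a predicted-risk cutoff receives an intervention with a given relative risk reduction. The toy data and the 25% risk reduction below are assumptions for illustration, not ARIC values:

```python
def events_prevented_pct(predicted_risks, outcomes, cutoff,
                         relative_risk_reduction):
    """Percentage of events prevented when everyone at or above `cutoff`
    predicted risk receives an intervention with the given relative risk
    reduction (a simplified reading of the PIC idea)."""
    total_events = sum(outcomes)
    treated_events = sum(y for r, y in zip(predicted_risks, outcomes)
                         if r >= cutoff)
    prevented = relative_risk_reduction * treated_events
    return 100.0 * prevented / total_events

# Toy cohort: predicted risks and observed events (1 = event occurred).
risks = [0.05, 0.10, 0.25, 0.30, 0.40, 0.08, 0.22, 0.35]
events = [0, 0, 1, 0, 1, 0, 0, 1]
pct = events_prevented_pct(risks, events, cutoff=0.20,
                           relative_risk_reduction=0.25)
```

Sweeping the cutoff over its full range and averaging this quantity gives the area under the prediction impact curve (auPIC) that the paper proposes.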
Refining Sunrise/set Prediction Models by Accounting for the Effects of Refraction
Wilson, Teresa; Bartlett, Jennifer L.
2016-01-01
Current atmospheric models used to predict the times of sunrise and sunset have an error of one to four minutes at mid-latitudes (0° - 55° N/S). At higher latitudes, slight changes in refraction may cause significant discrepancies, including determining even whether the Sun appears to rise or set. While different components of refraction are known, how they affect predictions of sunrise/set has not yet been quantified. A better understanding of the contributions from the temperature profile, pressure, humidity, and aerosols could significantly improve the standard prediction. Because sunrise/set times and meteorological data from multiple locations will be necessary for a thorough investigation of the problem, we will collect these data using smartphones as part of a citizen science project. This analysis will lead to more complete models that will provide more accurate times for navigators and outdoorsmen alike.
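The sensitivity of rise/set times to refraction can be illustrated with the standard hour-angle formula, where the Sun's altitude at geometric rise/set is set to minus the horizontal refraction plus the 16′ solar semidiameter. The 34′ standard refraction and the comparison values below are illustrative of the conventional model, not results from this work:

```python
import math

def sunrise_hour_angle_deg(latitude_deg, declination_deg,
                           refraction_arcmin=34.0):
    """Hour angle of sunrise for a given horizontal refraction.

    Standard altitude h0 = -(refraction + 16' solar semidiameter);
    cos H = (sin h0 - sin(lat) sin(dec)) / (cos(lat) cos(dec)).
    """
    h0 = -math.radians((refraction_arcmin + 16.0) / 60.0)
    lat = math.radians(latitude_deg)
    dec = math.radians(declination_deg)
    cos_h = (math.sin(h0) - math.sin(lat) * math.sin(dec)) / (
        math.cos(lat) * math.cos(dec))
    return math.degrees(math.acos(cos_h))

# At 50 N on an equinox day, a few extra arcminutes of refraction
# push the hour angle further past 90 degrees (4 min of time ~ 1 deg).
standard = sunrise_hour_angle_deg(50.0, 0.0)
stronger = sunrise_hour_angle_deg(50.0, 0.0, refraction_arcmin=40.0)
```

At high latitudes the denominator shrinks, so the same refraction perturbation moves the hour angle much more, which is why the polar rise/no-rise question is so sensitive.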
Predictive Surface Complexation Modeling
Energy Technology Data Exchange (ETDEWEB)
Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences
2016-11-29
Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.
Are we ready to predict late effects?
DEFF Research Database (Denmark)
Salz, Talya; Baxi, Shrujal S; Raghunathan, Nirupa
2015-01-01
to patient characteristics, late effects, the prediction model and model evaluation. DATA SYNTHESIS: Across 14 studies identified for review, nine late effects were predicted: erectile dysfunction and urinary incontinence after prostate cancer; arm lymphoedema, psychological morbidity, cardiomyopathy...
The Predictive Effect of Big Five Factor Model on Social Reactivity ...
African Journals Online (AJOL)
The study tested a model of providing a predictive explanation of Big Five Factor on social reactivity among secondary school adolescents of Cross River State, Nigeria. A sample of 200 students randomly selected across 12 public secondary schools in the State participated in the study (120 male and 80 female). Data ...
White, Jeremy T.; Langevin, Christian D.; Hughes, Joseph D.
2010-01-01
Calibration of highly-parameterized numerical models typically requires explicit Tikhonov-type regularization to stabilize the inversion process. This regularization can take the form of a preferred parameter values scheme or preferred relations between parameters, such as the preferred equality scheme. The resulting parameter distributions calibrate the model to a user-defined acceptable level of model-to-measurement misfit, and also minimize regularization penalties on the total objective function. To evaluate the potential impact of these two regularization schemes on model predictive ability, a dataset generated from a synthetic model was used to calibrate a highly-parameterized variable-density SEAWAT model. The key prediction is the length of time a synthetic pumping well will produce potable water. A bi-objective Pareto analysis was used to explicitly characterize the relation between two competing objective function components: measurement error and regularization error. Results of the Pareto analysis indicate that both types of regularization schemes affect the predictive ability of the calibrated model.
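The preferred-parameter-values scheme described above can be illustrated with a minimal linearized example (a hypothetical sketch, not the PEST/SEAWAT setup used in the study): the objective combines a measurement-misfit term with a Tikhonov penalty that pulls parameters toward preferred values, and the regularization weight controls the trade-off between the two.

```python
import numpy as np

def tikhonov_solve(J, d, p_pref, lam):
    """Minimize ||J p - d||^2 + lam * ||p - p_pref||^2 for a linear(ized)
    model J: the normal equations give (J'J + lam*I) p = J'd + lam*p_pref."""
    n = J.shape[1]
    A = J.T @ J + lam * np.eye(n)
    b = J.T @ d + lam * p_pref
    return np.linalg.solve(A, b)

# Toy problem: 5 observations, 3 parameters, preferred values of zero.
rng = np.random.default_rng(0)
J = rng.normal(size=(5, 3))
p_true = np.array([1.0, 2.0, 3.0])
d = J @ p_true
p_pref = np.zeros(3)

p_weak = tikhonov_solve(J, d, p_pref, 1e-8)    # fits the data closely
p_strong = tikhonov_solve(J, d, p_pref, 1e3)   # pulled toward p_pref
```

Sweeping `lam` and plotting measurement misfit against regularization penalty traces exactly the kind of bi-objective Pareto front the study analyzes.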
Wang, Ming; Li, Zheng; Lee, Eun Young; Lewis, Mechelle M; Zhang, Lijun; Sterling, Nicholas W; Wagner, Daymond; Eslinger, Paul; Du, Guangwei; Huang, Xuemei
2017-09-25
It is challenging for current statistical models to predict clinical progression of Parkinson's disease (PD) because of the involvement of multiple domains and longitudinal data. Past univariate longitudinal or multivariate analyses from cross-sectional trials have limited power to predict individual outcomes or at a single moment. The multivariate generalized linear mixed-effect model (GLMM) under the Bayesian framework was proposed to study multi-domain longitudinal outcomes obtained at baseline, 18, and 36 months. The outcomes included motor, non-motor, and postural instability scores from the MDS-UPDRS, and demographic and standardized clinical data were utilized as covariates. The dynamic prediction was performed for both internal and external subjects using the samples from the posterior distributions of the parameter estimates and random effects, and predictive accuracy was evaluated based on the root mean square error (RMSE), absolute bias (AB), and the area under the receiver operating characteristic (ROC) curve. First, our prediction model identified clinical data that were differentially associated with motor, non-motor, and postural stability scores. Second, the predictive accuracy of our model for the training data was assessed, and improved prediction was gained, particularly for non-motor scores (RMSE and AB: 2.89 and 2.20), compared to univariate analysis (RMSE and AB: 3.04 and 2.35). Third, the individual-level predictions of longitudinal trajectories for the testing data were performed, with ~80% of observed values falling within the 95% credible intervals. Multivariate generalized mixed models hold promise to predict clinical progression of individual outcomes in PD. The data were obtained from Dr. Xuemei Huang's NIH grant R01 NS060722, part of the NINDS PD Biomarker Program (PDBP). All data were entered within 24 h of collection into the Data Management Repository (DMR), which is publicly available ( https://pdbp.ninds.nih.gov/data-management ).
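The RMSE and absolute-bias criteria used above to evaluate predictive accuracy are easy to compute directly; this sketch takes absolute bias to be the mean absolute prediction error (an assumption, since the abstract does not give the exact formula).

```python
import numpy as np

def rmse(y_obs, y_pred):
    """Root mean square error between observations and predictions."""
    e = np.asarray(y_obs, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean(e ** 2)))

def absolute_bias(y_obs, y_pred):
    """Mean absolute prediction error (one common reading of 'AB')."""
    e = np.asarray(y_obs, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs(e)))
```

In a Bayesian dynamic-prediction setting, these metrics would be applied to the posterior predictive means at each held-out visit.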
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
obtained from the iterative two-stage method also improved predictive performance of the individual models and model averaging in both synthetic and experimental studies.
Effects of lightning on trees: A predictive model based on in situ electrical resistivity.
Gora, Evan M; Bitzer, Phillip M; Burchfield, Jeffrey C; Schnitzer, Stefan A; Yanoviak, Stephen P
2017-10-01
The effects of lightning on trees range from catastrophic death to the absence of observable damage. Such differences may be predictable among tree species, and more generally among plant life history strategies and growth forms. We used field-collected electrical resistivity data in temperate and tropical forests to model how the distribution of power from a lightning discharge varies with tree size and identity, and with the presence of lianas. Estimated heating density (heat generated per volume of tree tissue) and maximum power (maximum rate of heating) from a standardized lightning discharge differed by 300% among tree species. Tree size and morphology also were important; the heating density of a hypothetical 10 m tall Alseis blackiana was 49 times greater than for a 30 m tall conspecific, and 127 times greater than for a 30 m tall Dipteryx panamensis. Lianas may protect trees from lightning by conducting electric current; estimated heating and maximum power were reduced by 60% (±7.1%) for trees with one liana and by 87% (±4.0%) for trees with three lianas. This study provides the first quantitative mechanism describing how differences among trees can influence lightning-tree interactions, and how lianas can serve as natural lightning rods for trees.
Candidate Prediction Models and Methods
DEFF Research Database (Denmark)
Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik
2005-01-01
This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...
Moor, Helen; Hylander, Kristoffer; Norberg, Jon
2015-01-01
Wetlands provide multiple ecosystem services, the sustainable use of which requires knowledge of the underlying ecological mechanisms. Functional traits, particularly the community-weighted mean trait (CWMT), provide a strong link between species communities and ecosystem functioning. We here combine species distribution modeling and plant functional traits to estimate the direction of change of ecosystem processes under climate change. We model changes in CWMT values for traits relevant to three key services, focusing on the regional species pool in the Norrström area (central Sweden) and three main wetland types. Our method predicts proportional shifts toward faster growing, more productive and taller species, which tend to increase CWMT values of specific leaf area and canopy height, whereas changes in root depth vary. The predicted changes in CWMT values suggest a potential increase in flood attenuation services, a potential increase in short (but not long)-term nutrient retention, and ambiguous outcomes for carbon sequestration.
What do saliency models predict?
Koehler, Kathryn; Guo, Fei; Zhang, Sheng; Eckstein, Miguel P.
2014-01-01
Saliency models have been frequently used to predict eye movements made during image viewing without a specified task (free viewing). Use of a single image set to systematically compare free viewing to other tasks has never been performed. We investigated the effect of task differences on the ability of three models of saliency to predict the performance of humans viewing a novel database of 800 natural images. We introduced a novel task where 100 observers made explicit perceptual judgments about the most salient image region. Other groups of observers performed a free viewing task, saliency search task, or cued object search task. Behavior on the popular free viewing task was not best predicted by standard saliency models. Instead, the models most accurately predicted the explicit saliency selections and eye movements made while performing saliency judgments. Observers' fixations varied similarly across images for the saliency and free viewing tasks, suggesting that these two tasks are related. The variability of observers' eye movements was modulated by the task (lowest for the object search task and greatest for the free viewing and saliency search tasks) as well as the clutter content of the images. Eye movement variability in saliency search and free viewing might also be limited by inherent variation of what observers consider salient. Our results contribute to understanding the tasks and behavioral measures for which saliency models are best suited as predictors of human behavior, the relationship across various perceptual tasks, and the factors contributing to observer variability in fixational eye movements. PMID:24618107
Lima, E A B F; Oden, J T; Wohlmuth, B; Shahmoradi, A; Hormuth, D A; Yankeelov, T E; Scarabosio, L; Horger, T
2017-12-01
The use of mathematical and computational models for reliable predictions of tumor growth and decline in living organisms is one of the foremost challenges in modern predictive science, as it must cope with uncertainties in observational data, model selection, model parameters, and model inadequacy, all for very complex physical and biological systems. In this paper, large classes of parametric models of tumor growth in vascular tissue are discussed including models for radiation therapy. Observational data is obtained from MRI of a murine model of glioma and observed over a period of about three weeks, with X-ray radiation administered 14.5 days into the experimental program. Parametric models of tumor proliferation and decline are presented based on the balance laws of continuum mixture theory, particularly mass balance, and from accepted biological hypotheses on tumor growth. Among these are new model classes that include characterizations of effects of radiation and simple models of mechanical deformation of tumors. The Occam Plausibility Algorithm (OPAL) is implemented to provide a Bayesian statistical calibration of the model classes, 39 models in all, as well as the determination of the most plausible models in these classes relative to the observational data, and to assess model inadequacy through statistical validation processes. Discussions of the numerical analysis of finite element approximations of the system of stochastic, nonlinear partial differential equations characterizing the model classes, as well as the sampling algorithms for Monte Carlo and Markov chain Monte Carlo (MCMC) methods employed in solving the forward stochastic problem, and in computing posterior distributions of parameters and model plausibilities are provided. The results of the analyses described suggest that the general framework developed can provide a useful approach for predicting tumor growth and the effects of radiation.
Yang, Qhi-xiao; Peng, Si-long; Shan, Peng; Bi, Yi-ming; Tang, Liang; Xie, Qiong
2015-05-01
In the present paper, a new model-based method was proposed for temperature prediction and correction. First, a temperature prediction model was obtained from training samples; then, the temperatures of the test samples were predicted; and finally, the correction model was used to reduce the nonlinear effects of spectra from temperature variations. Two experiments were used to verify the proposed method, including a water-ethanol mixture experiment and a ternary mixture experiment. The results show that, compared with classic methods such as continuous piecewise direct standardization (CPDS), our method is efficient for temperature correction. Furthermore, the temperatures of the test samples are not required by the proposed method, making it easier to use in real applications.
Smith, Kathleen S.; Ranville, James F.; Adams, M.; Choate, LaDonna M.; Church, Stan E.; Fey, David L.; Wanty, Richard B.; Crock, James G.
2006-01-01
The chemical speciation of metals influences their biological effects. The Biotic Ligand Model (BLM) is a computational approach to predict chemical speciation and acute toxicological effects of metals on aquatic biota. Recently, the U.S. Environmental Protection Agency incorporated the BLM into their regulatory water-quality criteria for copper. Results from three different laboratory copper toxicity tests were compared with BLM predictions for simulated test-waters. This was done to evaluate the ability of the BLM to accurately predict the effects of hardness and concentrations of dissolved organic carbon (DOC) and iron on aquatic toxicity. In addition, we evaluated whether the BLM and the three toxicity tests provide consistent results. Comparison of BLM predictions with two types of Ceriodaphnia dubia toxicity tests shows that there is fairly good agreement between predicted LC50 values computed by the BLM and LC50 values determined from the two toxicity tests. Specifically, the effect of increasing calcium concentration (and hardness) on copper toxicity appears to be minimal. Also, there is fairly good agreement between the BLM and the two toxicity tests for test solutions containing elevated DOC, for which the LC50 is 3-to-5 times greater (less toxic) than the LC50 for the lower-DOC test water. This illustrates the protective effects of DOC on copper toxicity and demonstrates the ability of the BLM to predict these protective effects. In contrast, for test solutions with added iron there is a decrease in LC50 values (increase in toxicity) in results from the two C. dubia toxicity tests, and the agreement between BLM LC50 predictions and results from these toxicity tests is poor. The inability of the BLM to account for competitive iron binding to DOC or DOC fractionation may be a significant shortcoming of the BLM for predicting site-specific water-quality criteria in streams affected by iron-rich acidic drainage in mined and mineralized areas.
Cai, Rong-Rong; Zhang, Li-Zhi; Yan, Yuying
2017-01-01
Fibrous filters have proved to be one of the most cost-effective ways of purifying particulate matter (specifically PM2.5). However, due to the complex structure of real fibrous filters, it is difficult to accurately predict the performance of PM2.5 removal. In this study, a new 3D filtration modeling approach is proposed to predict the removal efficiencies of particles by real fibrous filters, by taking the particle rebound effect into consideration. A real filter is considered and ...
Czech Academy of Sciences Publication Activity Database
Brabec, Marek; Konár, Ondřej; Pelikán, Emil; Malý, Marek
2008-01-01
Vol. 24, No. 4 (2008), pp. 659-678. ISSN 0169-2070. R&D Projects: GA AV ČR 1ET400300513. Institutional research plan: CEZ:AV0Z10300504. Keywords: individual gas consumption * nonlinear mixed effects model * ARIMAX * ARX * generalized linear mixed model * conditional modeling. Subject RIV: JE - Non-nuclear Energetics, Energy Consumption; Use. Impact factor: 1.685, year: 2008
Avsec, Žiga; Barekatain, Mohammadamin; Cheng, Jun; Gagneur, Julien
2017-11-16
Regulatory sequences are not solely defined by their nucleic acid sequence but also by their relative distances to genomic landmarks such as transcription start site, exon boundaries, or polyadenylation site. Deep learning has become the approach of choice for modeling regulatory sequences because of its strength to learn complex sequence features. However, modeling relative distances to genomic landmarks in deep neural networks has not been addressed. Here we developed spline transformation, a neural network module based on splines to flexibly and robustly model distances. Modeling distances to various genomic landmarks with spline transformations significantly increased state-of-the-art prediction accuracy of in vivo RNA-binding protein binding sites for 120 out of 123 proteins. We also developed a deep neural network for human splice branchpoint based on spline transformations that outperformed the current best, already distance-based, machine learning model. Compared to piecewise linear transformation, as obtained by composition of rectified linear units, spline transformation yields higher prediction accuracy as well as faster and more robust training. As spline transformation can be applied to further quantities beyond distances, such as methylation or conservation, we foresee it as a versatile component in the genomics deep learning toolbox. Spline transformation is implemented as a Keras layer in the CONCISE python package: https://github.com/gagneurlab/concise. Analysis code is available at goo.gl/3yMY5w. avsec@in.tum.de; gagneur@in.tum.de. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
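The core idea of spline transformation, expanding a scalar distance into a smooth basis whose weights the network then learns, can be sketched with SciPy's B-spline evaluator. The knot placement and log-scaling below are illustrative choices, not the CONCISE implementation.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(x, knots, degree=3):
    """Evaluate a clamped B-spline basis at positions x.
    Returns a (len(x), n_basis) design matrix; a downstream dense
    layer would learn one weight per basis function."""
    # Repeat boundary knots so the basis is well defined at the edges.
    t = np.concatenate([[knots[0]] * degree, knots, [knots[-1]] * degree])
    n_basis = len(t) - degree - 1
    B = np.empty((len(x), n_basis))
    for i in range(n_basis):
        c = np.zeros(n_basis)
        c[i] = 1.0
        B[:, i] = BSpline(t, c, degree)(x)
    return B

# Distances to a genomic landmark, log-scaled then spline-expanded.
d = np.array([10.0, 100.0, 1000.0, 5000.0])
x = np.log10(d)
knots = np.linspace(0, 4, 5)
B = bspline_basis(x, knots)
```

Within the knot range, a clamped B-spline basis forms a partition of unity, which keeps the learned transformation smooth and well scaled.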
Hu, Jianlin; Li, Xun; Huang, Lin; Ying, Qi; Zhang, Qiang; Zhao, Bin; Wang, Shuxiao; Zhang, Hongliang
2017-11-01
Accurate exposure estimates are required for health effect analyses of severe air pollution in China. Chemical transport models (CTMs) are widely used to provide spatial distribution, chemical composition, particle size fractions, and source origins of air pollutants. The accuracy of air quality predictions in China is greatly affected by the uncertainties of emission inventories. The Community Multiscale Air Quality (CMAQ) model with meteorological inputs from the Weather Research and Forecasting (WRF) model was used in this study to simulate air pollutants in China in 2013. Four simulations were conducted with four different anthropogenic emission inventories, including the Multi-resolution Emission Inventory for China (MEIC), the Emission Inventory for China by School of Environment at Tsinghua University (SOE), the Emissions Database for Global Atmospheric Research (EDGAR), and the Regional Emission inventory in Asia version 2 (REAS2). Model performance of each simulation was evaluated against available observation data from 422 sites in 60 cities across China. Model predictions of O3 and PM2.5 generally meet the model performance criteria, but performance differences exist in different regions, for different pollutants, and among inventories. Ensemble predictions were calculated by linearly combining the results from different inventories to minimize the sum of the squared errors between the ensemble results and the observations in all cities. The ensemble concentrations show improved agreement with observations in most cities. The mean fractional bias (MFB) and mean fractional errors (MFEs) of the ensemble annual PM2.5 in the 60 cities are -0.11 and 0.24, respectively, which are better than the MFB (-0.25 to -0.16) and MFE (0.26-0.31) of individual simulations. The ensemble annual daily maximum 1 h O3 (O3-1h) concentrations are also improved, with mean normalized bias (MNB) of 0.03 and mean normalized errors (MNE) of 0.14, compared to MNB of 0.06-0.19 and
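The ensemble step, linearly combining the per-inventory predictions to minimize the summed squared error against observations, reduces in its unconstrained form to ordinary least squares over the model outputs. This sketch is a simplification and omits any constraints on the weights.

```python
import numpy as np

def ensemble_weights(P, obs):
    """P: (n_obs, n_models) predictions from each emission-inventory run;
    obs: observed concentrations. Returns the least-squares combination
    weights minimizing ||P w - obs||^2 (no constraints applied here)."""
    w, *_ = np.linalg.lstsq(P, obs, rcond=None)
    return w

# Recover known mixing weights from synthetic "model" outputs.
rng = np.random.default_rng(1)
P = rng.normal(size=(20, 2))
obs = P @ np.array([0.3, 0.7])
w = ensemble_weights(P, obs)
```

With real data one would typically constrain the weights (e.g. nonnegative, summing to one) so the ensemble stays a physically interpretable blend of the inventory runs.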
Applying risk and resilience models to predicting the effects of media violence on development.
Prot, Sara; Gentile, Douglas A
2014-01-01
Although the effects of media violence on children and adolescents have been studied for over 50 years, they remain controversial. Much of this controversy is driven by a misunderstanding of causality that seeks the cause of atrocities such as school shootings. Luckily, several recent developments in risk and resilience theories offer a way out of this controversy. Four risk and resilience models are described, including the cascade model, dose-response gradients, pathway models, and turning-point models. Each is described and applied to the existing media effects literature. Recommendations for future research are discussed with regard to each model. In addition, we examine current developments in theorizing that stressors have sensitizing versus steeling effects and recent interest in biological and gene by environment interactions. We also discuss several of the cultural aspects that have supported the polarization and misunderstanding of the literature, and argue that applying risk and resilience models to the theories and data offers a more balanced way to understand the subtle effects of media violence on aggression within a multicausal perspective.
Confidence scores for prediction models
DEFF Research Database (Denmark)
Gerds, Thomas Alexander; van de Wiel, MA
2011-01-01
In medical statistics, many alternative strategies are available for building a prediction model based on training data. Prediction models are routinely compared by means of their prediction performance in independent validation data. If only one data set is available for training and validation…, then rival strategies can still be compared based on repeated bootstraps of the same data. Often, however, the overall performance of rival strategies is similar and it is thus difficult to decide for one model. Here, we investigate the variability of the prediction models that results when the same… to distinguish rival prediction models with similar prediction performances. Furthermore, on the subject level a confidence score may provide useful supplementary information for new patients who want to base a medical decision on predicted risk. The ideas are illustrated and discussed using data from cancer…
Computer model for predicting the effect of inherited sterility on population growth
International Nuclear Information System (INIS)
Carpenter, J.E.; Layton, R.C.
1993-01-01
A Fortran-based computer program was developed to facilitate modelling different inherited sterility data sets under various paradigms. The model was designed to allow variable input for several different parameters, such as rate of increase per generation, release ratio and initial population levels, reproductive rates and sex ratios resulting from different matings, and the number of nights a female is active in mating and oviposition. The model and computer program should be valuable tools for recognizing areas in which information is lacking and for identifying the effect that different parameters can have on the efficacy of the inherited sterility method. (author). 8 refs, 4 figs
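The kind of bookkeeping such a program performs can be sketched as a simple generational projection (a hypothetical toy, not the authors' Fortran model): with release ratio R, a wild female mates a wild male with probability 1/(1+R), and matings with irradiated males yield partially fertile F1 progeny, which is crudely folded into the growth rate here.

```python
def project_population(n0, rate, release_ratio, f1_fertility, generations):
    """Illustrative inherited-sterility projection.
    n0: initial population; rate: per-generation rate of increase;
    release_ratio: sterile males released per wild male;
    f1_fertility: relative fertility of progeny from irradiated matings."""
    n = n0
    history = [n]
    for _ in range(generations):
        p_wild = 1.0 / (1.0 + release_ratio)
        effective_rate = rate * (p_wild + (1.0 - p_wild) * f1_fertility)
        n = n * effective_rate
        history.append(n)
    return history

# No releases: unchecked growth. Heavy releases: population collapse.
no_release = project_population(1000.0, 5.0, 0.0, 0.05, 3)
with_release = project_population(1000.0, 5.0, 9.0, 0.05, 3)
```

Varying the release ratio and F1 fertility in such a recursion is exactly the kind of sensitivity exploration the abstract describes.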
Breuer, Aviva; Haj, Christeene G; Fogaça, Manoela V; Gomes, Felipe V; Silva, Nicole Rodrigues; Pedrazzi, João Francisco; Del Bel, Elaine A; Hallak, Jaime C; Crippa, José A; Zuardi, Antonio W; Mechoulam, Raphael; Guimarães, Francisco S
2016-01-01
Cannabidiol (CBD) is a major Cannabis sativa constituent, which does not cause the typical marijuana psychoactivity. However, it has been shown to be active in numerous pharmacological assays, including mice tests for anxiety, obsessive-compulsive disorder, depression and schizophrenia. In human trials the doses of CBD needed to achieve effects in anxiety and schizophrenia are high. We report now the synthesis of 3 fluorinated CBD derivatives, one of which, 4'-F-CBD (HUF-101) (1), is considerably more potent than CBD in behavioral assays in mice predictive of anxiolytic, antidepressant, antipsychotic and anti-compulsive activity. Similar to CBD, the anti-compulsive effects of HUF-101 depend on cannabinoid receptors.
Bouvet, J-M; Makouanzi, G; Cros, D; Vigneron, Ph
2016-02-01
Hybrids are broadly used in plant breeding and accurate estimation of variance components is crucial for optimizing genetic gain. Genome-wide information may be used to explore models designed to assess the extent of additive and non-additive variance and test their prediction accuracy for the genomic selection. Ten linear mixed models, involving pedigree- and marker-based relationship matrices among parents, were developed to estimate additive (A), dominance (D) and epistatic (AA, AD and DD) effects. Five complementary models, involving the gametic phase to estimate marker-based relationships among hybrid progenies, were developed to assess the same effects. The models were compared using tree height and 3303 single-nucleotide polymorphism markers from 1130 cloned individuals obtained via controlled crosses of 13 Eucalyptus urophylla females with 9 Eucalyptus grandis males. Akaike information criterion (AIC), variance ratios, asymptotic correlation matrices of estimates, goodness-of-fit, prediction accuracy and mean square error (MSE) were used for the comparisons. The variance components and variance ratios differed according to the model. Models with a parent marker-based relationship matrix performed better than those that were pedigree-based, that is, an absence of singularities, lower AIC, higher goodness-of-fit and accuracy and smaller MSE. However, AD and DD variances were estimated with high standard errors. Using the same criteria, progeny gametic phase-based models performed better in fitting the observations and predicting genetic values. However, DD variance could not be separated from the dominance variance and null estimates were obtained for AA and AD effects. This study highlighted the advantages of progeny models using genome-wide information.
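The parent marker-based additive relationship matrix such models rely on is commonly built with VanRaden's method. This sketch shows the additive G matrix only, with toy genotype codes; dominance and epistatic analogues are built from related matrices.

```python
import numpy as np

def vanraden_G(M):
    """VanRaden's genomic (additive) relationship matrix from a marker
    matrix M (individuals x markers, genotypes coded 0/1/2)."""
    p = M.mean(axis=0) / 2.0            # observed allele frequencies
    Z = M - 2.0 * p                     # center by twice the frequency
    denom = 2.0 * np.sum(p * (1.0 - p)) # scales G to an additive relationship
    return (Z @ Z.T) / denom

# Toy example: 3 individuals, 3 markers.
M = np.array([[0, 1, 2],
              [1, 1, 1],
              [2, 1, 0]], dtype=float)
G = vanraden_G(M)
```

G then replaces the pedigree-based numerator relationship matrix in the mixed-model equations, which is what distinguishes the marker-based models compared above.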
An effective finite element model for the prediction of hydrogen induced cracking in steel pipelines
Traidia, Abderrazak
2012-11-01
This paper presents a comprehensive finite element model for the numerical simulation of Hydrogen Induced Cracking (HIC) in steel pipelines exposed to sulphurous compounds, such as hydrogen sulphide (H2S). The model is able to mimic the pressure build-up mechanism related to the recombination of atomic hydrogen into hydrogen gas within the crack cavity. In addition, the strong couplings between non-Fickian hydrogen diffusion, pressure build-up and crack extension are accounted for. In order to enhance the predictive capabilities of the proposed model, problem boundary conditions are based on actual in-field operating parameters, such as pH and partial pressure of H2S. The computational results reported herein show that, during the extension phase, the propagating crack behaves like a trap attracting more hydrogen, and that the hydrostatic stress field at the crack tip speeds up HIC-related crack initiation and growth. In addition, HIC is reduced when the pH increases and the partial pressure of H2S decreases. Furthermore, the relation between the crack growth rate and (i) the initial crack radius and position, (ii) the pipe wall thickness and (iii) the fracture toughness, is also evaluated. Numerical results agree well with experimental data retrieved from the literature. Copyright © 2012, Hydrogen Energy Publications, LLC. Published by Elsevier Ltd. All rights reserved.
Li, Lianfa; Lurmann, Fred; Habre, Rima; Urman, Robert; Rappaport, Edward; Ritz, Beate; Chen, Jiu-Chiuan; Gilliland, Frank D; Wu, Jun
2017-09-05
Spatiotemporal models to estimate ambient exposures at high spatiotemporal resolutions are crucial in large-scale air pollution epidemiological studies that follow participants over extended periods. Previous models typically rely on central-site monitoring data and/or covered short periods, limiting their applications to long-term cohort studies. Here we developed a spatiotemporal model that can reliably predict nitrogen oxide concentrations with a high spatiotemporal resolution over a long time span (>20 years). Leveraging the spatially extensive highly clustered exposure data from short-term measurement campaigns across 1-2 years and long-term central site monitoring in 1992-2013, we developed an integrated mixed-effect model with uncertainty estimates. Our statistical model incorporated nonlinear and spatial effects to reduce bias. Identified important predictors included temporal basis predictors, traffic indicators, population density, and subcounty-level mean pollutant concentrations. Substantial spatial autocorrelation (11-13%) was observed between neighboring communities. Ensemble learning and constrained optimization were used to enhance reliability of estimation over a large metropolitan area and a long period. The ensemble predictions of biweekly concentrations resulted in an R2 of 0.85 (RMSE: 4.7 ppb) for NO2 and 0.86 (RMSE: 13.4 ppb) for NOx. Ensemble learning and constrained optimization generated stable time series, which notably improved the results compared with those from initial mixed-effects models.
The effect of loudness on the reverberance of music: reverberance prediction using loudness models.
Lee, Doheon; Cabrera, Densil; Martens, William L
2012-02-01
This study examines the auditory attribute that describes the perceived amount of reverberation, known as "reverberance." Listening experiments were performed using two signals commonly heard in auditoria: excerpts of orchestral music and western classical singing. Listeners adjusted the decay rate of room impulse responses prior to convolution with these signals, so as to match the reverberance of each stimulus to that of a reference stimulus. The analysis examines the hypothesis that reverberance is related to the loudness decay rate of the underlying room impulse response. This hypothesis is tested using computational models of time varying or dynamic loudness, from which parameters analogous to conventional reverberation parameters (early decay time and reverberation time) are derived. The results show that listening level significantly affects reverberance, and that the loudness-based parameters outperform related conventional parameters. Results support the proposed relationship between reverberance and the computationally predicted loudness decay function of sound in rooms. © 2012 Acoustical Society of America
International Nuclear Information System (INIS)
Van Winkle, W.
1977-01-01
Appropriate modeling techniques already exist for investigating some long-term consequences of the effects of radiation on natural aquatic populations and ecosystems, even if to date these techniques have not been used for this purpose. At the low levels of irradiation estimated to occur in natural aquatic systems, effects are difficult to detect even at the individual level, much less at the population or ecosystem level, where the subtle effects of radiation are likely to be completely overshadowed by the effects of other environmental factors and stresses and by the natural variability of the system. The claim that population and ecosystem models can be accurate and reliable predictive tools in assessing any stress has been oversold. Nonetheless, these tools can be useful for learning more about the effects of radioactive releases on aquatic populations and ecosystems.
DEFF Research Database (Denmark)
Olsen, Christina Kurre; Brennum, Lise Tøttrup; Kreilgaard, Mads
2008-01-01
response behaviour correlates well with the relationship between human dopamine D2 receptor occupancy and clinical effect. The aim of the present study was to evaluate how pharmacokinetic/pharmacodynamic (PK/PD) predictions of therapeutic effective steady-state plasma levels by means of conditioned… for sertindole (+dehydrosertindole) and olanzapine were 3-4-fold too high whereas for haloperidol, clozapine and risperidone the predicted steady-state EC50 in conditioned avoidance responding rats correlated well with the therapeutically effective plasma levels observed in patients. Accordingly, the proposed PK… of the present conditioned avoidance response procedure, in vivo striatal dopamine D2 receptor occupancy was determined in parallel using 3H-raclopride as the radioligand. The PK/PD relationship was established by modelling the time-response and time-plasma concentration data. We found the order of dopamine D2…
Chambers, Ute; Jones, Vincent P
2015-12-01
Orchard design and management practices can alter microclimate and, thus, potentially affect insect development. If sufficiently large, these deviations in microclimate can compromise the accuracy of phenology model predictions used in integrated pest management (IPM) programs. Sunburn causes considerable damage in the Pacific Northwest, United States, apple-producing region. Common prevention strategies include the use of fruit surface protectants, evaporative cooling (EC), or both. This study focused on the effect of EC on ambient temperatures and model predictions for four insects (codling moth, Cydia pomonella L.; Lacanobia fruitworm, Lacanobia subjuncta Grote and Robinson; oblique-banded leafroller, Choristoneura rosaceana Harris; and Pandemis leafroller, Pandemis pyrusana Kearfott). Over-tree EC was applied in July and August when daily maximum temperatures were predicted to be ≥30°C between 1200-1700 hours (15/15 min on/off interval) in 2011 and between 1200-1800 hours (15/10 min on/off interval, or continuous on) in 2012. Control plots were sprayed once with kaolin clay in early July. During interval and continuous cooling, over-tree cooling reduced average afternoon temperatures compared with the kaolin treatment by 2.1-3.2°C. Compared with kaolin-treated controls, codling moth and Lacanobia fruitworm egg hatch in EC plots was predicted to occur up to 2 d and 1 d late, respectively. The presence of fourth-instar oblique-banded leafroller and Pandemis leafroller was predicted to occur up to 2 d and 1 d earlier in EC plots, respectively. These differences in model predictions were negligible, suggesting that no adjustments in pest management timing are needed when using EC in high-density apple orchards. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Predicting the Effects of Man-Made Fishing Canals on Floodplain Inundation - A Modelling Study
Shastry, A. R.; Durand, M. T.; Neal, J. C.; Fernandez, A.; Hamilton, I.; Kari, S.; Laborde, S.; Mark, B. G.; Arabi, M.; Moritz, M.; Phang, S. C.
2016-12-01
The Logone floodplain in northern Cameroon is an excellent example of a coupled human-natural system because of strong couplings between the social, ecological and hydrologic systems. Overbank flow from the Logone River in September and October is essential for agriculture and fishing livelihoods. Fishers dig canals to catch fish during the flood's recession to the river in November and December by installing nets at the intersection of canals and the river. Fishing canals connect the river to natural depressions in the terrain and may serve as a man-made extension of the river drainage network. In the last four decades, there has been an exponential increase in the number of canals, which may affect flood hydraulics and the fishery. The goal of this study is to characterize the relationship between the fishing canals and flood dynamics in the Logone floodplain, specifically the timing of flooding and recession and the duration of inundation. To do so, we model the Bara region (~30 km²) of the floodplain using LISFLOOD-FP, a two-dimensional hydrodynamic model with sub-grid parameterizations of canals. We use a simplified version of the hydraulic system at a grid-cell size of 30 m, using synthetic topography, parameterized fishing canals, and representing fishnets as a combination of weirs and mesh screens. The inflow at Bara is obtained from a separate, lower-resolution (1-km grid-cell) model forced by daily discharge records obtained from Katoa, located 25 km upstream of Bara. Preliminary results show that more canals lead to earlier flood recession and a shorter duration of flood inundation. A shorter duration of flood inundation reduces the period of fish growth and will affect fishers' catch returns. Understanding the couplings within the system is important for predicting long-term dynamics and the impact of building more fishing canals.
Directory of Open Access Journals (Sweden)
Intamanee, J.
2004-03-01
Full Text Available Ammonia is a primary chemical used for preserving rubber latex. Consequently, the wastewater from concentrated rubber latex processing contains a high ammonia concentration. The volatilization of ammonia from such wastewater may cause air pollution problems such as the formation of acid rain or of aerosols of ammonium nitrate and ammonium sulfate, which can seriously affect the environment and human beings. To assess the air pollution problem posed by atmospheric ammonia volatilized from wastewater, a model for predicting the ammonia volatilization rate and flux is desirable. The purposes of this study were to investigate the effects of water temperature and pH on the ammonia volatilization process and to develop a model describing the ammonia volatilization rate and flux that includes these effects. Ammonia volatilization from water was studied using a volatilization tank (surface area = 780 cm², volume = 7 L) placed in a water bath in order to control the water temperature. Temperatures in the range of 25 to 50°C and pH values in the range of 5 to 11 were investigated. The overall mass transfer coefficients of ammonia were measured as a function of temperature and pH. Quadratic multiple regression was used to obtain a model for the mass transfer coefficient from the experimental data. The model suggests that the overall mass transfer coefficient of ammonia increases with increasing water temperature and pH, while the temperature-pH interaction retards this increase; thus the increase in the mass transfer coefficient at higher temperature and pH was slower than at lower values. A simple model for predicting the ammonia volatilization rate and flux was developed based on mass transfer theory and the mass transfer coefficient model obtained in this study. This simple ammonia emission model can be used to predict the ammonia volatilization rate and flux at any pH, water temperature and
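The quadratic multiple regression described above (mass transfer coefficient as a function of temperature, pH, and their interaction) can be sketched with ordinary least squares. The functional form follows the abstract; the coefficient values and synthetic data below are purely illustrative, not the study's measurements.

```python
import numpy as np

def fit_quadratic_k(temps, phs, k_obs):
    """Fit K = b0 + b1*T + b2*pH + b3*T*pH + b4*T^2 + b5*pH^2 by least squares.

    The quadratic-with-interaction form follows the abstract; which terms the
    original study retained is an assumption here.
    """
    X = np.column_stack([
        np.ones_like(temps), temps, phs,
        temps * phs, temps**2, phs**2,
    ])
    beta, *_ = np.linalg.lstsq(X, k_obs, rcond=None)
    return beta

def predict_k(beta, T, pH):
    """Evaluate the fitted surface at one (T, pH) point."""
    x = np.array([1.0, T, pH, T * pH, T**2, pH**2])
    return float(x @ beta)

# Synthetic data over the study's experimental range (25-50 degC, pH 5-11).
# The "true" surface below is invented; note the negative T*pH term, which
# mimics the retarding interaction the abstract describes.
rng = np.random.default_rng(0)
T = rng.uniform(25, 50, 60)
pH = rng.uniform(5, 11, 60)
true_k = 0.1 + 0.02 * T + 0.05 * pH - 0.001 * T * pH
k = true_k + rng.normal(0, 0.001, 60)

beta = fit_quadratic_k(T, pH, k)
```

With the interaction coefficient negative, the marginal gain in the transfer coefficient per degree of temperature shrinks as pH rises, which is the qualitative behavior the abstract reports.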
International Nuclear Information System (INIS)
Biguet, Alexandre; Hansen, Hubert; Brugiere, Timothee; Costa, Pedro; Borgnat, Pierre
2015-01-01
The measurement of the position of the chiral critical end point (CEP) in the QCD phase diagram is under debate. While it is possible to predict its position by using effective models specifically built to reproduce some of the features of the underlying theory (QCD), the quality of the predictions (e.g., the CEP position) obtained by such effective models depends on whether solving the model equations constitutes a well- or ill-posed inverse problem. Considering these predictions as inverse problems provides tools to evaluate whether the problem is ill-conditioned, meaning that infinitesimal variations of the inputs of the model can cause comparatively large variations of the predictions. If it is ill-conditioned, this has major consequences because of the finite variations that could come from experimental and/or theoretical errors. In the following, we apply such reasoning to the predictions of a particular Nambu-Jona-Lasinio model within the mean field + ring approximations, with special attention to the prediction of the chiral CEP position in the (T, μ) plane. We find that the problem is ill-conditioned (i.e. very sensitive to input variations) for the T-coordinate of the CEP, whereas it is well-posed for the μ-coordinate. As a consequence, when the chiral condensate varies in a 10 MeV range, μ_CEP varies far less. To illustrate how problematic this could be, we show that the main consequence of taking finite variations of the inputs into account is that the existence of the CEP itself can no longer be predicted: for a deviation as low as 0.6% with respect to vacuum phenomenology (well within the estimated size of the first correction to the ring approximation) the CEP may or may not exist. (orig.)
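The notion of an ill-conditioned inverse problem described above can be illustrated numerically. The toy Jacobian below is invented purely for illustration (it is not the NJL model's actual sensitivity matrix): one output row nearly cancels, so small input errors are amplified for that coordinate, mirroring the abstract's finding that T_CEP is far more sensitive than μ_CEP.

```python
import numpy as np

# Hypothetical local linearization: model inputs -> predicted (T_CEP, mu_CEP).
# The numbers are made up to exhibit the qualitative behavior in the abstract.
J = np.array([
    [50.0, -49.5],   # d(T_CEP)/d(inputs): near-cancellation amplifies errors
    [ 1.0,   0.5],   # d(mu_CEP)/d(inputs): mild sensitivity
])

# A large condition number flags that some output direction is fragile.
cond = np.linalg.cond(J)

# A small finite input perturbation (e.g. from theoretical error) along an
# unlucky direction moves T_CEP ~200x more than mu_CEP.
dx = np.array([0.01, -0.01])
dy = J @ dx   # dy[0]: shift in T_CEP, dy[1]: shift in mu_CEP
```

The point is diagnostic, not quantitative: checking the conditioning of the input-to-prediction map tells you which predicted coordinates can be trusted under realistic input uncertainty.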
Bootstrap prediction and Bayesian prediction under misspecified models
Fushiki, Tadayoshi
2005-01-01
We consider a statistical prediction problem under misspecified models. In a sense, Bayesian prediction is an optimal prediction method when an assumed model is true. Bootstrap prediction is obtained by applying Breiman's `bagging' method to a plug-in prediction. Bootstrap prediction can be considered to be an approximation to the Bayesian prediction under the assumption that the model is true. However, in applications, there are frequently deviations from the assumed model. In this paper, bo...
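The bagging construction described above (applying Breiman's bootstrap aggregation to a plug-in predictor) can be sketched as follows. The plug-in rule here is ordinary least squares, chosen only for concreteness; the paper's setting is more general.

```python
import numpy as np

def bagged_predict(x_train, y_train, x_new, n_boot=200, rng=None):
    """Bootstrap ('bagging') prediction for a plug-in linear predictor.

    Each bootstrap resample refits the plug-in rule (here OLS); the bagged
    prediction is the average over resamples, which approximates the Bayesian
    predictive mean under the assumed (possibly misspecified) model.
    """
    rng = rng or np.random.default_rng(0)
    n = len(y_train)
    X = np.column_stack([np.ones(n), x_train])
    preds = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample with replacement
        b, *_ = np.linalg.lstsq(X[idx], y_train[idx], rcond=None)
        preds.append(b[0] + b[1] * x_new)
    return float(np.mean(preds))

# Example: noisy data from y = 2x + 1; predict at x = 5.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 100)
y = 2 * x + 1 + rng.normal(0, 0.5, 100)
yhat = bagged_predict(x, y, x_new=5.0, rng=rng)
```

When the linear model is badly misspecified for the data at hand, the bagged and single-fit predictions can differ noticeably, which is the regime the paper studies.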
Sivapalan, Murugesu; Ruprecht, John K.; Viney, Neil R.
1996-03-01
A long-term water balance model has been developed to predict the hydrological effects of land-use change (especially forest clearing) in small experimental catchments in the south-west of Western Australia. This small catchment model has been used as the building block for the development of a large catchment-scale model, and has also formed the basis for a coupled water and salt balance model, developed to predict the changes in stream salinity resulting from land-use and climate change. The application of the coupled salt and water balance model to predict stream salinities in two small experimental catchments, and the application of the large catchment-scale model to predict changes in water yield in a medium-sized catchment that is being mined for bauxite, are presented in Parts 2 and 3, respectively, of this series of papers. The small catchment model has been designed as a simple, robust, conceptually based model of the basic daily water balance fluxes in forested catchments. The responses of the catchment to rainfall and pan evaporation are conceptualized in terms of three interdependent subsurface stores A, B and F. Store A depicts a near-stream perched aquifer system; B represents a deeper, permanent groundwater system; and F is an intermediate, unsaturated infiltration store. The responses of these stores are characterized by a set of constitutive relations which involves a number of conceptual parameters. These parameters are estimated by calibration by comparing observed and predicted runoff. The model has performed very well in simulations carried out on Salmon and Wights, two small experimental catchments in the Collie River basin in south-west Western Australia. The results from the application of the model to these small catchments are presented in this paper.
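The three-store structure described above can be sketched as a toy daily bucket model. The abstract does not give the constitutive relations, so the linear outflow rules and all parameter values below are invented stand-ins, intended only to show the A/B/F bookkeeping.

```python
def daily_water_balance(rain, pet, days, k_a=0.3, k_b=0.02, k_f=0.1):
    """Toy daily water balance with three conceptual stores.

    A: near-stream perched aquifer; B: deep permanent groundwater;
    F: intermediate unsaturated infiltration store. Linear outflows and the
    0.7/0.3 split between A and B are illustrative assumptions.
    """
    a, b, f = 0.0, 0.0, 5.0
    runoff = []
    for _ in range(days):
        f += rain                        # rainfall infiltrates into F
        perc = k_f * f                   # F percolates down to the aquifers
        et = min(pet, 0.5 * (f - perc))  # evaporation, limited by storage
        f -= perc + et
        a += 0.7 * perc                  # shallow store fills faster
        b += 0.3 * perc
        qa, qb = k_a * a, k_b * b        # linear store outflows to the stream
        a -= qa
        b -= qb
        runoff.append(qa + qb)
    return runoff

# Constant forcing drives the system to a steady state where daily runoff
# equals rainfall minus evaporation (5 - 2 = 3 units/day).
q = daily_water_balance(rain=5.0, pet=2.0, days=365)
```

In a real calibration, as in the paper, the store parameters would be tuned until predicted runoff matches the observed record rather than fixed a priori.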
Prediction models in complex terrain
DEFF Research Database (Denmark)
Marti, I.; Nielsen, Torben Skov; Madsen, Henrik
2001-01-01
The objective of the work is to investigate the performance of HIRLAM in complex terrain when used as input to energy production forecasting models, and to develop a statistical model to adapt HIRLAM predictions to the wind farm. The features of the terrain, especially the topography, influence...... the performance of HIRLAM, in particular with respect to wind predictions. To estimate the performance of the model, two spatial resolutions (0.5° and 0.2°) and different sets of HIRLAM variables were used to predict wind speed and energy production. The predictions of energy production for the wind farms...... are calculated using on-line measurements of power production as well as HIRLAM predictions as input, thus taking advantage of the auto-correlation present in the power production at shorter prediction horizons. Statistical models are used to describe the relationship between observed energy production......
MODEL PREDICTIVE CONTROL FUNDAMENTALS
African Journals Online (AJOL)
2012-07-02
Linear MPC: (1) uses a linear model, ẋ = Ax + Bu; (2) quadratic cost function, F = xᵀQx + uᵀRu; (3) linear constraints, Hx + Gu < 0; (4) solved as a quadratic program. Nonlinear MPC: (1) nonlinear model, ẋ = f(x, u); (2) cost function can be nonquadratic, F = F(x, u); (3) nonlinear constraints, h(x, u) < 0; (4) solved as a nonlinear program.
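The linear MPC recipe above can be sketched end to end. To keep the example self-contained, constraints are dropped, so the quadratic program collapses to a linear solve; the discrete-time double-integrator system and the weights are arbitrary illustrative choices.

```python
import numpy as np

def mpc_control(A, B, Q, R, x0, N=10):
    """Unconstrained linear MPC over horizon N for x+ = Ax + Bu.

    Stacks predictions x_k = A^k x0 + sum_j A^(k-1-j) B u_j and minimizes
    F = sum x_k^T Q x_k + u_k^T R u_k. Without constraints the quadratic
    program has the closed form U = -H^{-1} g.
    """
    n, m = B.shape
    # Prediction matrices: X = Phi x0 + Gamma U
    Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    Gamma = np.zeros((N * n, N * m))
    for k in range(N):
        for j in range(k + 1):
            Gamma[k*n:(k+1)*n, j*m:(j+1)*m] = (
                np.linalg.matrix_power(A, k - j) @ B)
    Qbar = np.kron(np.eye(N), Q)
    Rbar = np.kron(np.eye(N), R)
    H = Gamma.T @ Qbar @ Gamma + Rbar      # QP Hessian
    g = Gamma.T @ Qbar @ Phi @ x0          # QP linear term
    U = np.linalg.solve(H, -g)
    return U[:m]        # receding horizon: apply only the first input

# Double integrator (position/velocity), one control input.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
u0 = mpc_control(A, B, Q=np.eye(2), R=0.1 * np.eye(1), x0=np.array([1.0, 0.0]))
```

Adding the linear constraints Hx + Gu < 0 from the list above turns this solve into a genuine QP, typically handled by an off-the-shelf QP solver at every time step.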
Bergen, Silas; Sheppard, Lianne; Sampson, Paul D; Kim, Sun-Young; Richards, Mark; Vedal, Sverre; Kaufman, Joel D; Szpiro, Adam A
2013-09-01
Studies estimating health effects of long-term air pollution exposure often use a two-stage approach: building exposure models to assign individual-level exposures, which are then used in regression analyses. This requires accurate exposure modeling and careful treatment of exposure measurement error. To illustrate the importance of accounting for exposure model characteristics in two-stage air pollution studies, we considered a case study based on data from the Multi-Ethnic Study of Atherosclerosis (MESA). We built national spatial exposure models that used partial least squares and universal kriging to estimate annual average concentrations of four PM2.5 components: elemental carbon (EC), organic carbon (OC), silicon (Si), and sulfur (S). We predicted PM2.5 component exposures for the MESA cohort and estimated cross-sectional associations with carotid intima-media thickness (CIMT), adjusting for subject-specific covariates. We corrected for measurement error using recently developed methods that account for the spatial structure of predicted exposures. Our models performed well, with cross-validated R2 values ranging from 0.62 to 0.95. Naïve analyses that did not account for measurement error indicated statistically significant associations between CIMT and exposure to OC, Si, and S. EC and OC exhibited little spatial correlation, and the corrected inference was unchanged from the naïve analysis. The Si and S exposure surfaces displayed notable spatial correlation, resulting in corrected confidence intervals (CIs) that were 50% wider than the naïve CIs, but that were still statistically significant. The impact of correcting for measurement error on health effect inference is concordant with the degree of spatial correlation in the exposure surfaces. Exposure model characteristics must be considered when performing two-stage air pollution epidemiologic analyses because naïve health effect inference may be inappropriate.
Predictive Model for the Analysis of the Effects of Underwater Impulsive Sources on Marine Life
National Research Council Canada - National Science Library
Lazauski, Colin J
2007-01-01
A method is provided to predict the biological consequences to marine animals from exposure to multiple underwater impulsive sources by simulating underwater explosions over a defined period of time...
Model-based fault diagnosis framework for effective predictive maintenance / B.B. Akindele
Akindele, Babatunde Babajide
2010-01-01
Predictive maintenance is a proactive maintenance strategy that is aimed at preventing the unexpected failure of equipment through condition monitoring of the health and performance of the equipment. Incessant equipment outages resulting in low availability of production facilities are a major issue in the Nigerian manufacturing environment. Improving equipment availability in Nigerian industry through the institution of a full-featured predictive maintenance programme has been suggested by many authors. T...
Modelling bankruptcy prediction models in Slovak companies
Directory of Open Access Journals (Sweden)
Kovacova Maria
2017-01-01
Full Text Available Intensive research has been conducted by academics and practitioners on models for bankruptcy prediction and credit risk management. In spite of numerous studies focusing on forecasting bankruptcy using traditional statistical techniques (e.g. discriminant analysis and logistic regression) and early artificial intelligence models (e.g. artificial neural networks), there is a trend of transition to machine learning models (support vector machines, bagging, boosting, and random forests) to predict bankruptcy one year prior to the event. Comparing the performance of this unconventional approach with results obtained by discriminant analysis, logistic regression, and neural network applications, it has been found that bagging, boosting, and random forest models outperform the other techniques, and that overall prediction accuracy in the testing sample improves when additional variables are included. On the other hand, the prediction accuracy of older and well-known bankruptcy prediction models is quite high. Therefore, we aim to analyse these older models on a dataset of Slovak companies to validate their prediction ability under specific conditions. Furthermore, these models will be modelled according to new trends by calculating the influence of the elimination of selected variables on their overall prediction ability.
Directory of Open Access Journals (Sweden)
Mingjun Wang
Full Text Available Single amino acid variants (SAVs) are the most abundant form of known genetic variation associated with human disease. Successful prediction of the functional impact of SAVs from sequence can thus lead to an improved understanding of the underlying mechanisms of why a SAV may be associated with certain diseases. In this work, we constructed a high-quality structural dataset that contained 679 high-quality protein structures with 2,048 SAVs by collecting human genetic variant data from multiple resources and dividing them into two categories, i.e., disease-associated and neutral variants. We built a two-stage random forest (RF) model, termed FunSAV, to predict the functional effect of SAVs by combining sequence, structure and residue-contact network features with other additional features that were not explored in previous studies. Importantly, a two-step feature selection procedure was proposed to select the most important and informative features that contribute to the prediction of disease association of SAVs. In cross-validation experiments on the benchmark dataset, FunSAV achieved a good prediction performance with an area under the curve (AUC) of 0.882, which is competitive with, and in some cases better than, other existing tools including SIFT, SNAP, Polyphen2, PANTHER, nsSNPAnalyzer and PhD-SNP. The source code of FunSAV and the datasets can be downloaded at http://sunflower.kuicr.kyoto-u.ac.jp/sjn/FunSAV.
Melanoma Risk Prediction Models
Developing statistical models that estimate the probability of developing melanoma cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Directory of Open Access Journals (Sweden)
Omar Khaleel Ismael Al-Kubaisi
2018-05-01
Full Text Available Shallow foundations are usually used for structures with light to moderate loads where the soil underneath can carry them. In some cases, soil strength and/or other properties are not adequate and require improvement using one of the ground improvement techniques. The stone column is one of the common improvement techniques, in which a column of stone is installed vertically in clayey soils. Stone columns are usually used to increase soil strength and to accelerate soil consolidation by acting as vertical drains. Much research has been done to estimate the behavior of the improved soil; however, none of it considered the effect of stone column geometry on the behavior of a circular footing. In this research, finite element models were developed to evaluate the behavior of a circular footing with different stone column configurations. Moreover, an Artificial Neural Network (ANN) model was generated for predicting these effects. The results showed a reduction in the bending moment, the settlement, and the vertical stresses as the stone column length increased, while both the horizontal stress and the shear force increased. The ANN model showed good agreement between the predicted and the calculated results.
Predictive models of moth development
Degree-day models link ambient temperature to insect life-stages, making such models valuable tools in integrated pest management. These models increase management efficacy by predicting pest phenology. In Wisconsin, the top insect pest of cranberry production is the cranberry fruitworm, Acrobasis v...
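The degree-day logic described above can be sketched with the simple averaging method. The 10/30°C thresholds and the 150 degree-day event threshold are placeholders, not cranberry fruitworm parameters; published models use species-specific bases and often a single-sine calculation.

```python
def degree_days(t_min, t_max, base=10.0, upper=30.0):
    """Daily degree-days by the simple averaging method with thresholds.

    Temperatures below the base or above the upper threshold contribute no
    additional development. Threshold values here are illustrative only.
    """
    t_avg = (max(t_min, base) + min(t_max, upper)) / 2.0  # clipped average
    return max(0.0, t_avg - base)

def predict_event(daily_temps, threshold_dd):
    """Return the day index on which accumulated degree-days first reach
    the event threshold (e.g. predicted egg hatch), or None if never."""
    total = 0.0
    for day, (lo, hi) in enumerate(daily_temps):
        total += degree_days(lo, hi)
        if total >= threshold_dd:
            return day
    return None

# Hypothetical two months of identical spring days: min 8 C, max 22 C.
temps = [(8.0, 22.0)] * 60
day = predict_event(temps, threshold_dd=150.0)
```

Each day contributes 6 degree-days under these temperatures, so the 150 degree-day event is predicted on day index 24; this is the kind of phenology forecast that lets IPM programs time sprays to a vulnerable life stage.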
Radmerikhi, Samera; Tabatabaei, Seyed Vahid Ahmady; Jahani, Yunes; Mohseni, Mohabbat
2017-12-01
Changes in eating behavior can reduce the risk of developing cardiovascular disease. The aim of this study was to identify the factors that effectively predict eating behaviors relevant to the prevention of cardiovascular disease, using the PRECEDE model. This cross-sectional study was performed on 400 subjects aged from 20 to 60 years in Kerman, Iran in 2016. The participants were selected using a multistage random sampling method. A self-administered questionnaire including questions regarding demographic characteristics, eating behavior, and PRECEDE model constructs was completed by the participants. Data were analyzed using SPSS 22 and STATA 12. For data analysis, the Spearman correlation coefficient and univariate and multiple median regression were applied. The predictive power of the model constructs was determined by analysis of artificial neural networks. Among participants, the score of knowledge was high (84.15±10.7); the scores of perceived self-efficacy (59.1±16.57), reinforcing factors (60.66±14.01), enabling factors (56.5±12.91), and eating behavior (62.1±14.7) were intermediate; and the score of attitude was low (47.84±7.67). Attitude, perceived self-efficacy, enabling factors, and knowledge predicted 32%, 30%, 26%, and 0.93% of participants' eating behavior, respectively. The relationship between all variables and eating behavior was positive and significant, with reinforcing factors showing the least correlation with eating behavior. According to the results of this study, self-efficacy, attitude, and enabling factors were the main predictors of eating behaviors; therefore, to prevent cardiovascular disease and enhance healthy eating behavior, it is recommended to change attitudes and enhance self-efficacy and enabling factors in the community.
Xu, Yiming; Smith, Scot E; Grunwald, Sabine; Abd-Elrahman, Amr; Wani, Suhas P
2017-09-15
Major end users of Digital Soil Mapping (DSM), such as policy makers and agricultural extension workers, are faced with choosing the appropriate remote sensing data. The objective of this research is to analyze the effects of the spatial resolution of different remote sensing images on soil prediction models in two smallholder farms in Southern India, Kothapally (Telangana State) and Masuti (Karnataka State), and to provide empirical guidelines for choosing appropriate remote sensing images in DSM. Bayesian kriging (BK) was utilized to characterize the spatial pattern of exchangeable potassium (Kex) in the topsoil (0-15 cm) at different spatial resolutions by incorporating spectral indices from Landsat 8 (30 m), RapidEye (5 m), and WorldView-2/GeoEye-1/Pleiades-1A images (2 m). Some spectral indices, such as band reflectances, band ratios, the Crust Index and the Atmospherically Resistant Vegetation Index from multiple images, showed relatively strong correlations with soil Kex in the two study areas. The research also suggested that fine spatial resolution WorldView-2/GeoEye-1/Pleiades-1A-based and RapidEye-based soil prediction models would not necessarily have higher prediction performance than coarse spatial resolution Landsat 8-based soil prediction models. End users of DSM in smallholder farm settings need to select appropriate spectral indices and consider factors such as the spatial resolution, band width, spectral resolution, temporal frequency, cost, and processing time of different remote sensing images. Overall, remote sensing-based Digital Soil Mapping has the potential to be extended to smallholder farm settings all over the world and to help smallholder farmers implement sustainable and field-specific soil nutrient management schemes. Copyright © 2017 Elsevier Ltd. All rights reserved.
Predictive Models and Computational Embryology
EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...
International Nuclear Information System (INIS)
Winkler, David A.
2016-01-01
Nanomaterials research is one of the fastest growing contemporary research areas. The unprecedented properties of these materials have meant that they are being incorporated into products very quickly. Regulatory agencies are concerned they cannot assess the potential hazards of these materials adequately, as data on the biological properties of nanomaterials are still relatively limited and expensive to acquire. Computational modelling methods have much to offer in helping understand the mechanisms by which toxicity may occur, and in predicting the likelihood of adverse biological impacts of materials not yet tested experimentally. This paper reviews the progress these methods, particularly those QSAR-based, have made in understanding and predicting potentially adverse biological effects of nanomaterials, and also the limitations and pitfalls of these methods. - Highlights: • Nanomaterials regulators need good information to make good decisions. • Nanomaterials and their interactions with biology are very complex. • Computational methods use existing data to predict properties of new nanomaterials. • Statistical, data driven modelling methods have been successfully applied to this task. • Much more must be learnt before robust toolkits will be widely usable by regulators.
Wattez, Y.C.M.; Tenpierik, M.J.; Nijs, L.
2018-01-01
Over the last few years, reverberation times and sound pressure levels have been measured in many sports halls. Most of these halls, for instance those made from stony materials, perform as predicted. However, sports halls constructed with profiled perforated steel roof panels have an unexpected
Gailani, Joseph Z; Lackey, Tahirih C; King, David B; Bryant, Duncan; Kim, Sung-Chan; Shafer, Deborah J
2016-03-01
Model studies were conducted to investigate the potential coral reef sediment exposure from dredging associated with proposed development of a deepwater wharf in Apra Harbor, Guam. The Particle Tracking Model (PTM) was applied to quantify the exposure of coral reefs to material suspended by the dredging operations at two alternative sites. Key PTM features include the flexible capability of continuous multiple releases of sediment parcels, control of parcel/substrate interaction, and the ability to efficiently track vast numbers of parcels. This flexibility has facilitated simulating the combined effects of sediment released from clamshell dredging and chiseling within Apra Harbor. Because the rate of material released into the water column by some of the processes is not well understood or known a priori, the modeling approach was to bracket parameters within reasonable ranges to produce a suite of potential results from multiple model runs. Sensitivity analysis to model parameters is used to select the appropriate parameter values for bracketing. Data analysis results include mapping the time series and the maximum values of sedimentation, suspended sediment concentration, and deposition rate. Data were used to quantify various exposure processes that affect coral species in Apra Harbor. The goal of this research is to develop a robust methodology for quantifying and bracketing exposure mechanisms to coral (or other receptors) from dredging operations. These exposure values were utilized in an ecological assessment to predict effects (coral reef impacts) from various dredging scenarios. Copyright © 2015. Published by Elsevier Ltd.
Predictions models with neural nets
Directory of Open Access Journals (Sweden)
Vladimír Konečný
2008-01-01
Full Text Available This contribution addresses the prediction of basic trends in economic indicators using neural networks. The problems involved include the choice of a suitable model and, consequently, the configuration of the neural network, the choice of the neurons' computational (activation) functions, and the method of training for prediction. The contribution presents two basic models that use multilayer neural network structures and a way of determining their configuration. A simple rule for the training period of the neural network is postulated in order to obtain the most credible prediction. Experiments are carried out with real data on the evolution of the CZK/EUR exchange rate. The main reason for choosing this time series is its availability over a sufficiently long period. In the experiments, both basic kinds of prediction models are verified with the most frequently used neuron functions. The achieved prediction results are presented in both numerical and graphical form.
International Nuclear Information System (INIS)
Nielsen, Sven P.; Andersson, Kasper G.; Hansen, Hanne S.; Thoerring, Haavard; Joensen, Hans P.; Isaksson, Mats; Kostiainen, Eila; Suolanen, Vesa; Sigurgeirsson, Magnus A.; Palsson, Sigurour E.
2008-01-01
the northernmost areas used for grain crops, the crops are entirely spring grain crops, whereas in Denmark and Germany, winter crops are dominant. This gives large deviations in growth periods and development stages of the crops, particularly in the spring. This implies that first-year doses from the same contaminant plume can be very different in different Nordic countries. Thus we conclude that the food habits of the populations in the Nordic countries significantly affect the calculated ingestion doses. It is also important to ensure the use of state-of-the-art data for the more generic model parameters and to test the effect of other parameters in order to improve the decision support systems used in the Nordic countries. (author)
Directory of Open Access Journals (Sweden)
Achillopoulou Dimitra
2014-12-01
Full Text Available The study deals with the investigation, both experimental and analytical, of the effect of casting deficiencies on the axial yield load of reinforced concrete columns. It includes 6 specimens of square section (150×150×500 mm) of 24.37 MPa nominal concrete strength, with 4 longitudinal steel bars of 8 mm (500 MPa nominal strength) and a confinement ratio ωc = 0.15. During the casting procedure, the provisions defined by international standards were deliberately not applied strictly, in order to create construction deficiencies. These deficiencies are quantified geometrically, without the use of expensive non-destructive methods requiring expertise, and their effect on the axial load capacity of the concrete columns is calibrated through a novel, simplified prediction model extracted from an experimental and analytical investigation of the 6 specimens. It is concluded that: (a) even with suitable repair, a load reduction of up to 22% results from the presence of the initial construction damage; (b) the lowest dispersion is noted for the proposed section damage index; (c) extended damage alters the failure mode to brittle, accompanied by buckling of the longitudinal bars; and (d) the proposed model gives more than satisfactory predictions of the load capacity of repaired columns.
Directory of Open Access Journals (Sweden)
Stacey D Finley
2013-07-01
Full Text Available Angiogenesis, the formation of new blood vessels from existing vasculature, is important in tumor growth and metastasis. A key regulator of angiogenesis is vascular endothelial growth factor (VEGF), which has been targeted in numerous anti-angiogenic therapies aimed at inhibiting tumor angiogenesis. Systems biology approaches, including computational modeling, are useful for understanding this complex biological process and can aid in the development of novel and effective therapeutics that target the VEGF family of proteins and receptors. We have developed a computational model of VEGF transport and kinetics in the tumor-bearing mouse, which includes three compartments: normal tissue, blood, and tumor. The model simulates human tumor xenografts and includes human (VEGF121 and VEGF165) and mouse (VEGF120 and VEGF164) isoforms. The model incorporates molecular interactions between these VEGF isoforms and receptors (VEGFR1 and VEGFR2), as well as co-receptors (NRP1 and NRP2). We also include important soluble factors: soluble VEGFR1 (sFlt-1) and α-2-macroglobulin. The model accounts for transport via macromolecular transendothelial permeability, lymphatic flow, and plasma clearance. We have fit the model to available in vivo experimental data on the plasma concentration of free VEGF Trap and VEGF Trap bound to mouse and human VEGF in order to estimate the rates at which parenchymal cells (myocytes and tumor cells) and endothelial cells secrete VEGF. Interestingly, the predicted tumor VEGF secretion rates (0.007-0.023 molecules/cell/s, depending on the tumor microenvironment) are significantly lower than most reported in vitro measurements (0.03-2.65 molecules/cell/s). The optimized model is used to investigate the interstitial and plasma VEGF concentrations and the effect of the VEGF-neutralizing agent, VEGF Trap (aflibercept). This work complements experimental studies performed in mice and provides a framework with which to examine the effects of
Hajibozorgi, M; Arjmand, N
2016-04-11
Range of motion (ROM) of the thoracic spine has implications in patient discrimination for diagnostic purposes and in biomechanical models for predictions of spinal loads. The few previous studies available have reported quite different thoracic ROMs. Total (T1-T12), lower (T5-T12) and upper (T1-T5) thoracic, lumbar (T12-S1), pelvis, and entire trunk (T1) ROMs were measured using an inertial tracking device as asymptomatic subjects flexed forward from their neutral upright position to full forward flexion. Correlations between body height and the ROMs were evaluated. The effect of measurement errors in trunk flexion (T1) on the model-predicted spinal loads was also investigated. Mean peak voluntary total trunk flexion (T1) was 118.4 ± 13.9°, of which 20.5 ± 6.5° was generated by flexion of the T1 to T12 (thoracic ROM), and the remainder by flexion of the T12 to S1 (lumbar ROM) (50.2 ± 7.0°) and pelvis (47.8 ± 6.9°). Lower thoracic ROM was significantly larger than upper thoracic ROM (14.8 ± 5.4° versus 5.8 ± 3.1°). There were non-significant weak correlations between body height and the ROMs. The contribution of the pelvis to total trunk flexion increased from ~20% to 40% and that of the lumbar spine decreased from ~60% to 42% as subjects flexed forward from upright to maximal flexion, while that of the thoracic spine remained almost constant (~16% to 20%) during the entire movement. Small uncertainties (±5°) in the measurement of trunk flexion angle resulted in considerable errors (~27%) in the model-predicted spinal loads, but only in activities involving small trunk flexion. Copyright © 2015 Elsevier Ltd. All rights reserved.
Effect of Nordic diets on ECOSYS model predictions of ingestion doses
DEFF Research Database (Denmark)
Hansen, Hanne S.; Nielsen, Sven Poul; Andersson, Kasper Grann
2010-01-01
The ECOSYS model is used to estimate ingestion dose in the ARGOS and RODOS decision support systems for nuclear emergency management. It is recommended that nation-specific values for several parameters are used in the model. However, this is generally overlooked when the systems are used in prac...
Spatial scale effects on model parameter estimation and predictive uncertainty in ungauged basins
CSIR Research Space (South Africa)
Hughes, DA
2013-06-01
Full Text Available The most appropriate scale to use for hydrological modelling depends on the structure of the chosen model, the purpose of the results and the resolution of the available data used to quantify parameter values and provide the climatic forcing data...
Prediction of ionizing radiation effects in integrated circuits using black-box models
International Nuclear Information System (INIS)
Williamson, P.W.
1976-10-01
A method is described which allows general black-box modelling of integrated circuits as distinct from the existing method of deriving the radiation induced response of the model from actual terminal measurements on the device during irradiation. Both digital and linear circuits are discussed. (author)
Meads, C. A.; Cnossen, J. S.; Meher, S.; Juarez-Garcia, A.; ter Riet, G.; Duley, L.; Roberts, T. E.; Mol, B. W.; van der Post, J. A.; Leeflang, M. M.; Barton, P. M.; Hyde, C. J.; Gupta, J. K.; Khan, K. S.
2008-01-01
OBJECTIVES: To investigate the accuracy of predictive tests for pre-eclampsia and the effectiveness of preventative interventions for pre-eclampsia. Also to assess the cost-effectiveness of strategies (test-intervention combinations) to predict and prevent pre-eclampsia. DATA SOURCES: Major
Multi-model comparison highlights consistency in predicted effect of warming on a semi-arid shrub
Renwick, Katherine M.; Curtis, Caroline; Kleinhesselink, Andrew R.; Schlaepfer, Daniel R.; Bradley, Bethany A.; Aldridge, Cameron L.; Poulter, Benjamin; Adler, Peter B.
2018-01-01
A number of modeling approaches have been developed to predict the impacts of climate change on species distributions, performance, and abundance. The stronger the agreement from models that represent different processes and are based on distinct and independent sources of information, the greater the confidence we can have in their predictions. Evaluating the level of confidence is particularly important when predictions are used to guide conservation or restoration decisions. We used a multi-model approach to predict climate change impacts on big sagebrush (Artemisia tridentata), the dominant plant species on roughly 43 million hectares in the western United States and a key resource for many endemic wildlife species. To evaluate the climate sensitivity of A. tridentata, we developed four predictive models, two based on empirically derived spatial and temporal relationships, and two that applied mechanistic approaches to simulate sagebrush recruitment and growth. This approach enabled us to produce an aggregate index of climate change vulnerability and uncertainty based on the level of agreement between models. Despite large differences in model structure, predictions of sagebrush response to climate change were largely consistent. Performance, as measured by change in cover, growth, or recruitment, was predicted to decrease at the warmest sites, but increase throughout the cooler portions of sagebrush's range. A sensitivity analysis indicated that sagebrush performance responds more strongly to changes in temperature than precipitation. Most of the uncertainty in model predictions reflected variation among the ecological models, raising questions about the reliability of forecasts based on a single modeling approach. Our results highlight the value of a multi-model approach in forecasting climate change impacts and uncertainties and should help land managers to maximize the value of conservation investments.
Dutke, S.; Rinck, M.
2006-01-01
We investigated how the updating of spatial situation models during narrative comprehension depends on the interaction of cognitive abilities and text characteristics. Participants with low verbal and visuospatial abilities and participants with high abilities read narratives in which the
Using a Comprehensive Model to Test and Predict the Factors of Online Learning Effectiveness
He, Minyan
2013-01-01
As online learning is an important part of higher education, the effectiveness of online learning has been tested with different methods. Although the literature regarding online learning effectiveness has been related to various factors, a more comprehensive review of the factors may result in broader understanding of online learning…
Prediction of hysteretic effects in PZT stack actuators using a hybrid modeling strategy
Park, Jung-Kyu N.; Washington, Gregory N.
2004-07-01
In this paper, concepts associated with the Preisach model and nonlinear mapping functions (neural networks) are coupled to model the hysteretic behavior of piezoceramic actuators. Preisach concepts are utilized in choosing the initial data points and calculating the final displacements having nonlocal memory. In a traditional Preisach model generalization is typically handled by interpolation functions. These functions can lead to significant errors unless the number of data points is considerably high. In this study the generalization of all first order reversal curves is provided by a single neural network. The goal of this work was to enable real-time implementation and learning with a "limited" number of variables. Finally, a novel on-line training approach was developed to account for errors caused by frequency dependency and large variations of the input of the actuator. Results show excellent agreement between simulated and experimental results.
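The relay-hysteron structure that the Preisach concepts above build on can be sketched in a few lines. This is a generic illustration with hypothetical switching thresholds and uniform weights, not the actual actuator model or the neural-network generalization described in the paper:

```python
import numpy as np

def preisach_output(input_series, alphas, betas, weights, init=-1.0):
    """Discrete Preisach model: weighted sum of relay hysterons.

    Each hysteron switches to +1 when the input rises above alpha,
    to -1 when it falls below beta (beta < alpha); otherwise it keeps
    its previous state, which gives the model its nonlocal memory.
    """
    states = np.full(len(alphas), init, dtype=float)
    outputs = []
    for u in input_series:
        states = np.where(u >= alphas, 1.0,
                          np.where(u <= betas, -1.0, states))
        outputs.append(float(np.dot(weights, states)))
    return outputs

# Hypothetical hysteron thresholds (alpha > beta) with uniform weights.
alphas = np.array([0.2, 0.5, 0.8])
betas = np.array([-0.2, -0.5, -0.8])
weights = np.ones(3) / 3

# Same final input (0.0), different histories: outputs differ, i.e. hysteresis.
up = preisach_output([0.0, 0.6, 0.0], alphas, betas, weights)
down = preisach_output([0.0, -0.6, 0.0], alphas, betas, weights)
```

The two runs end at the same input value but different outputs, which is exactly the memory effect the first-order reversal curves characterize.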
Gordon M. Heisler; Richard H. Grant; David J. Nowak; Wei Gao; Daniel E. Crane; Jeffery T. Walton
2003-01-01
Evaluating the impact of ultraviolet-B radiation (UVB) on urban populations would be enhanced by improved predictions of the UVB radiation at the level of human activity. This paper reports the status of plans for incorporating a UVB prediction module into an existing Urban Forest Effects (UFORE) model. UFORE currently has modules to quantify urban forest structure,...
Schmutz, Joel A.
2009-01-01
Yellow-billed loons (Gavia adamsii) breed in low densities in northern tundra habitats in Alaska, Canada, and Russia. They migrate to coastal marine habitats at mid to high latitudes where they spend their winters. Harvest may occur throughout the annual cycle, but of particular concern are recent reports of harvest from the Bering Strait region, which lies between Alaska and Russia and is an area used by yellow-billed loons during migration. Annual harvest for this region was reported to be 317, 45, and 1,077 during 2004, 2005, and 2007, respectively. I developed a population model to assess the effect of this reported harvest on population size and trend of yellow-billed loons. Because of the uncertainty regarding actual harvest and definition of the breeding population(s) affected by this harvest, I considered 25 different scenarios. Predicted trends across these 25 scenarios ranged from stability to rapid decline (24 percent per year) with halving of the population in 3 years. Through an assessment of literature and unpublished satellite tracking data, I suggest that the most likely of these 25 scenarios is one where the migrant population subjected to harvest in the Bering Strait includes individuals from breeding populations in Alaska (Arctic coastal plain and the Kotzebue region) and eastern Russia, and for which the magnitude of harvest varies among years and emulates the annual variation of reported harvest during 2004-07 (317, 45, and 1,077 yellow-billed loons). This scenario, which assumes no movement of Canadian breeders through the Bering Strait, predicts a 4.6 percent rate of annual population decline, which would halve the populations in 15 years. Although these model outputs reflect the best available information, confidence in these predictions and applicable scenarios would be greatly enhanced by more information on harvest, rates of survival and reproduction, and migratory pathways.
The Predictive Effect of Big Five Factor Model on Social Reactivity ...
African Journals Online (AJOL)
Data collected involved NEO Five Factor and social reactivity scale, which are commonly used and have demonstrated acceptability and reliability. The data were analysed using multiple regression and path analysis in order to estimate the coefficient of structural equations of the hypothesized model. The result indicates ...
Hobbelen, P.H.F.; van Gestel, C.A.M.
2007-01-01
The aim of this study was to predict the dependence on temperature and food density of effects of Cu on the litter consumption by the earthworm Lumbricus rubellus, using a dynamic energy budget model (DEB-model). As a measure of the effects of Cu on food consumption, EC50s (soil concentrations
Wang, Xuejiao; Zhang, Lizhen; Evers, Jochem B.; Mao, Lili; Wei, Shoujun; Pan, Xuebiao; Zhao, Xinhua; van der Werf, Wopke; Li, Zhaohu
2014-01-01
In general, the quality of fruits depends on local conditions experienced by the fruit during its development. In cotton, fruit quality, and more specifically the quality of the fibre in the fruit, depends on interactions between fruit position in the plant architecture, temperature and agronomic practices, such as sowing time, mulching with plastic film and topping of the plant's main stem and branches. To quantify this response of cotton fibre quality to environment and management, we developed a simulation model of cotton growth and development, CottonXL. Simulation of cotton fibre quality (strength, length and micronaire) was implemented at the level of each individual fruit, in relation to thermal time (represented by physiological age of the fruit) and prevailing temperature during development of each fruit. Field experiments were conducted in China in 2007 to determine model parameters, and independent data on cotton fibre quality in three cotton producing regions in China were used for model validation. Simulated values for fibre quality closely corresponded to experimental data. Scenario studies simulating a range of management practices predicted that delaying topping times can significantly decrease fibre quality, while sowing date and film mulching had no significant effect. We conclude that CottonXL may be used to explore options for optimizing cotton fibre quality by matching cotton management to the environment, taking into account responses at the level of individual fruits. The model may be used at plant, crop and regional levels to address climate and land-use change scenarios. PMID:25011385
Effect of including decay chains on predictions of equilibrium-type terrestrial food chain models
International Nuclear Information System (INIS)
Kirchner, G.
1990-01-01
Equilibrium-type food chain models are commonly used for assessing the radiological impact to man from environmental releases of radionuclides. Usually these do not take into account build-up of radioactive decay products during environmental transport. This may be a potential source of underprediction. For estimating consequences of this simplification, the equations of an internationally recognised terrestrial food chain model have been extended to include decay chains of variable length. Example calculations show that for releases from light water reactors as expected both during routine operation and in the case of severe accidents, the build-up of decay products during environmental transport is generally of minor importance. However, a considerable number of radionuclides of potential radiological significance have been identified which show marked contributions of decay products to calculated contamination of human food and resulting radiation dose rates. (author)
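The build-up of decay products that the extended model accounts for follows the Bateman equations. A minimal sketch for a two-member chain with illustrative half-lives (not parameters from the cited model):

```python
import math

def two_member_chain(n1_0, lam1, lam2, t):
    """Bateman solution for a parent -> daughter decay chain with no
    daughter present initially (requires lam1 != lam2)."""
    n1 = n1_0 * math.exp(-lam1 * t)
    n2 = (n1_0 * lam1 / (lam2 - lam1)
          * (math.exp(-lam1 * t) - math.exp(-lam2 * t)))
    return n1, n2

# Illustrative half-lives: 8 days (parent), 2 days (daughter).
lam1 = math.log(2) / 8.0
lam2 = math.log(2) / 2.0
n1, n2 = two_member_chain(1.0e6, lam1, lam2, 4.0)
```

Equilibrium-type food chain assessments that track only `n1` would miss the contribution of `n2`, which is the underprediction mechanism the abstract describes.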
Directory of Open Access Journals (Sweden)
H Mohamadi Monavar
2017-10-01
Full Text Available Introduction Precision agriculture (PA) is a technology that measures and manages within-field variability, such as physical and chemical properties of soil. The nondestructive and rapid VIS-NIR technology has detected significant correlations between reflectance spectra and the physical and chemical properties of soil. On the other hand, quantitative prediction of soil factors such as nitrogen, carbon, cation exchange capacity and the amount of clay is very important in precision farming. The emphasis of this paper is on comparing different techniques for choosing calibration samples, such as random selection, selection based on chemical data, and selection based on PCA. Since increasing the number of samples is usually time-consuming and costly, this study identified the best sampling approach among the available methods for building calibration models. In addition, the effect of sample size on the accuracy of the calibration and validation models was analyzed. Materials and Methods Two hundred and ten soil samples were collected from a cultivated farm located in Avarzaman in Hamedan province, Iran. The crop rotation was mostly potato and wheat. Samples were collected from a depth of 20 cm, passed through a 2 mm sieve and air dried at room temperature. Chemical analysis was performed in the soil science laboratory, faculty of agriculture engineering, Bu-ali Sina University, Hamadan, Iran. Two spectrometers (AvaSpec-ULS 2048 UV-VIS and FT-NIR100N) were used to measure the spectral bands covering the UV-Vis and NIR region (220-2200 nm). Each soil sample was uniformly tiled in a petri dish and was scanned 20 times. Then the pre-processing methods of multivariate scatter correction (MSC) and baseline correction (BC) were applied to the raw signals using Unscrambler software. The samples were divided into two groups: one group of 105 samples for calibration and the second group for validation. Each time, 15 samples were selected randomly to test the accuracy of
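The multivariate scatter correction (MSC) step mentioned above has a compact standard form: each spectrum is regressed on the mean spectrum, and the fitted offset and slope are removed. A sketch on synthetic data (the study itself used Unscrambler for this preprocessing):

```python
import numpy as np

def msc(spectra):
    """Multiplicative scatter correction: regress each spectrum on the
    mean spectrum, then remove the fitted offset and slope."""
    ref = spectra.mean(axis=0)
    corrected = np.empty_like(spectra, dtype=float)
    for i, x in enumerate(spectra):
        slope, intercept = np.polyfit(ref, x, 1)  # x ~ slope * ref + intercept
        corrected[i] = (x - intercept) / slope
    return corrected

# Synthetic example: one base "spectrum" seen under two scatter conditions
# (different multiplicative and additive distortions).
base = np.sin(np.linspace(0.0, 3.0, 100))
spectra = np.vstack([1.2 * base + 0.3,
                     0.8 * base - 0.1])
out = msc(spectra)
```

After correction the two distorted copies collapse onto the same curve, which is why MSC is applied before building calibration models.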
Spatial Economics Model Predicting Transport Volume
Directory of Open Access Journals (Sweden)
Lu Bo
2016-10-01
Full Text Available It is extremely important to predict logistics requirements in a scientific and rational way. However, in recent years the prediction methods have not improved significantly, and the traditional statistical prediction methods suffer from low precision and poor interpretability: they can neither guarantee the generalization ability of the prediction model theoretically nor explain the model effectively. Therefore, in combination with the theories of spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, this study identifies the leading industries that produce large volumes of cargo and predicts the static logistics generation of Zhuanghe and its hinterland. By integrating the various factors that affect regional logistics requirements, the study establishes a logistics requirements potential model based on spatial economic principles, extending logistics requirements prediction from purely statistical methods into the new area of spatial and regional economics.
Urbanization Impacts on Mammals across Urban-Forest Edges and a Predictive Model of Edge Effects
Villaseñor, Nélida R.; Driscoll, Don A.; Escobar, Martín A. H.; Gibbons, Philip; Lindenmayer, David B.
2014-01-01
With accelerating rates of urbanization worldwide, a better understanding of ecological processes at the wildland-urban interface is critical to conserve biodiversity. We explored the effects of high and low-density housing developments on forest-dwelling mammals. Based on habitat characteristics, we expected a gradual decline in species abundance across forest-urban edges and an increased decline rate in higher contrast edges. We surveyed arboreal mammals in sites of high and low housing den...
Energy Technology Data Exchange (ETDEWEB)
Nam, Jin Hyun [School of Mechanical Engineering, Daegu University, Gyungsan (Korea, Republic of)
2017-04-15
The three-phase boundaries (TPBs) in the electrodes of solid oxide fuel cells (SOFCs) have different activity because of the distributed nature of the electrochemical reactions. The electrochemically active thickness (EAT) is a good measure to evaluate the extension of the active reaction zone into the electrode and the effective utilization of TPBs. In this study, an electrochemical reaction/charge conduction problem is formulated based on the Butler–Volmer reaction kinetics and then numerically solved to determine the EATs for the active electrode layers of SOFCs with various microstructural, dimensional, and property parameters. Thus, the EAT data and correlations presented in this study are expected to provide useful information for designing efficient electrodes of SOFCs.
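The Butler–Volmer kinetics underlying the reaction/charge-conduction formulation relate the local current density to the activation overpotential. A sketch with an illustrative exchange current density, transfer coefficients, and an SOFC-like operating temperature (none of these values are taken from the cited study):

```python
import math

def butler_volmer(eta, i0=1.0, alpha_a=0.5, alpha_c=0.5, T=1073.15):
    """Butler-Volmer current density as a function of activation
    overpotential eta (V). i0 (A/m^2), transfer coefficients and
    T (K, ~800 C) are illustrative placeholder values."""
    F = 96485.33212   # Faraday constant, C/mol
    R = 8.314462618   # gas constant, J/(mol K)
    f = F / (R * T)
    return i0 * (math.exp(alpha_a * f * eta) - math.exp(-alpha_c * f * eta))
```

At zero overpotential the anodic and cathodic terms cancel (zero net current); the exponential growth away from equilibrium is what concentrates the reaction near the electrolyte and motivates the electrochemically active thickness concept.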
Mohammad Safeeq; Guillaume S. Mauger; Gordon E. Grant; Ivan Arismendi; Alan F. Hamlet; Se-Yeun Lee
2014-01-01
Assessing uncertainties in hydrologic models can improve accuracy in predicting future streamflow. Here, simulated streamflows using the Variable Infiltration Capacity (VIC) model at coarse (1/16°) and fine (1/120°) spatial resolutions were evaluated against observed streamflows from 217 watersheds. In...
Bergstrom, Robert W.
1995-01-01
The demands of accurate predictions of radiative transfer for climate applications are well-documented. While much effort is being devoted to evaluating the accuracy of the GCM radiative transfer schemes, the problem of developing accurate, computationally efficient schemes for climate models still remains. This paper discusses our efforts in developing accurate and fast computational methods for global and regional climate models.
Chipps, S.R.; Einfalt, L.M.; Wahl, David H.
2000-01-01
We measured growth of age-0 tiger muskellunge as a function of ration size (25, 50, 75, and 100% C(max)) and water temperature (7.5-25°C) and compared experimental results with those predicted from a bioenergetic model. Discrepancies between actual and predicted values varied appreciably with water temperature and growth rate. On average, model output overestimated winter consumption rates at 10 and 7.5°C by 113 to 328%, respectively, whereas model predictions in summer and autumn (20-25°C) were in better agreement with actual values (4 to 58%). We postulate that variation in model performance was related to seasonal changes in esocid metabolic rate, which were not accounted for in the bioenergetic model. Moreover, accuracy of model output varied with feeding and growth rate of tiger muskellunge. The model performed poorly for fish fed low rations compared with estimates based on fish fed ad libitum rations and was attributed, in part, to the influence of growth rate on the accuracy of bioenergetic predictions. Based on modeling simulations, we found that errors associated with bioenergetic parameters had more influence on model output when growth rate was low, which is consistent with our observations. In addition, reduced conversion efficiency at high ration levels may contribute to variable model performance, thereby implying that waste losses should be modeled as a function of ration size for esocids. Our findings support earlier field tests of the esocid bioenergetic model and indicate that food consumption is generally overestimated by the model, particularly in winter months and for fish exhibiting low feeding and growth rates.
Kleandrova, Valeria V; Luan, Feng; Speck-Planche, Alejandro; Cordeiro, M Natália D S
2015-01-01
The assessment of acute toxicity is one of the most important stages to ensure the safety of chemicals with potential applications in pharmaceutical sciences, biomedical research, or any other industrial branch. A huge and indiscriminate number of toxicity assays have been carried out on laboratory animals. In this sense, computational approaches involving models based on quantitative-structure activity/toxicity relationships (QSAR/QSTR) can help to rationalize time and financial costs. Here, we discuss the most significant advances in the last 6 years focused on the use of QSAR/QSTR models to predict acute toxicity of drugs/chemicals in laboratory animals, employing large and heterogeneous datasets. The advantages and drawbacks of the different QSAR/QSTR models are analyzed. As a contribution to the field, we introduce the first multitasking (mtk) QSTR model for simultaneous prediction of acute toxicity of compounds by considering different routes of administration, diverse breeds of laboratory animals, and the reliability of the experimental conditions. The mtk-QSTR model was based on artificial neural networks (ANN), allowing the classification of compounds as toxic or non-toxic. This model correctly classified more than 94% of the 1646 cases present in the whole dataset, and its applicability was demonstrated by performing predictions of different chemicals such as drugs, dietary supplements, and molecules which could serve as nanocarriers for drug delivery. The predictions given by the mtk-QSTR model are in very good agreement with the experimental results.
Model complexity control for hydrologic prediction
Schoups, G.; van de Giesen, N. C.; Savenije, H. H. G.
2008-12-01
A common concern in hydrologic modeling is overparameterization of complex models given limited and noisy data. This leads to problems of parameter nonuniqueness and equifinality, which may negatively affect prediction uncertainties. A systematic way of controlling model complexity is therefore needed. We compare three model complexity control methods for hydrologic prediction, namely, cross validation (CV), Akaike's information criterion (AIC), and structural risk minimization (SRM). Results show that simulation of water flow using non-physically-based models (polynomials in this case) leads to increasingly better calibration fits as the model complexity (polynomial order) increases. However, prediction uncertainty worsens for complex non-physically-based models because of overfitting of noisy data. Incorporation of physically based constraints into the model (e.g., storage-discharge relationship) effectively bounds prediction uncertainty, even as the number of parameters increases. The conclusion is that overparameterization and equifinality do not lead to a continued increase in prediction uncertainty, as long as models are constrained by such physical principles. Complexity control of hydrologic models reduces parameter equifinality and identifies the simplest model that adequately explains the data, thereby providing a means of hydrologic generalization and classification. SRM is a promising technique for this purpose, as it (1) provides analytic upper bounds on prediction uncertainty, hence avoiding the computational burden of CV, and (2) extends the applicability of classic methods such as AIC to finite data. The main hurdle in applying SRM is the need for an a priori estimation of the complexity of the hydrologic model, as measured by its Vapnik-Chervonenkis (VC) dimension. Further research is needed in this area.
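The complexity-control comparison above can be illustrated with the simplest of the three criteria. A sketch of AIC-based order selection for polynomial (non-physically-based) models on synthetic data; the Gaussian-error AIC form and the toy data are assumptions for illustration, not the paper's hydrologic setup:

```python
import numpy as np

def aic_gaussian(y, y_hat, k):
    """AIC for a least-squares fit with k parameters, assuming Gaussian
    residuals (constant terms dropped, which does not affect ranking)."""
    n = len(y)
    rss = float(np.sum((y - y_hat) ** 2))
    return n * np.log(rss / n) + 2 * k

# Synthetic data from a quadratic with small noise: the true order is 2.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0.0, 0.05, x.size)

scores = {}
for order in range(1, 8):
    coeffs = np.polyfit(x, y, order)          # calibration fit
    y_hat = np.polyval(coeffs, x)
    scores[order] = aic_gaussian(y, y_hat, order + 1)

best = min(scores, key=scores.get)
```

Higher orders always reduce the calibration residual, but the 2k penalty stops the selected order from growing with the noise, which is the overfitting control the abstract compares across CV, AIC, and SRM.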
Directory of Open Access Journals (Sweden)
Anna-Sofie Stensgaard
2016-03-01
Full Text Available Currently, two broad types of approach for predicting the impact of climate change on vector-borne diseases can be distinguished: (i) empirical-statistical (correlative) approaches that use statistical models of relationships between vector and/or pathogen presence and environmental factors; and (ii) process-based (mechanistic) approaches that seek to simulate detailed biological or epidemiological processes that explicitly describe system behavior. Both have advantages and disadvantages, but it is generally acknowledged that both approaches have value in assessing the response of species in general to climate change. Here, we combine a previously developed dynamic, agent-based model of the temperature-sensitive stages of the Schistosoma mansoni and intermediate host snail lifecycles, with a statistical model of snail habitat suitability for eastern Africa. Baseline model output compared to empirical prevalence data suggests that the combined model performs better than a temperature-driven model alone, and highlights the importance of including snail habitat suitability when modeling schistosomiasis risk. There was general agreement among models in predicting changes in risk, with 24-36% of the eastern Africa region predicted to experience an increase in risk of up to 20% as a result of increasing temperatures over the next 50 years. Vice versa, the models predicted a general decrease in risk in 30-37% of the study area. The snail habitat suitability models also suggest that anthropogenically altered habitat plays a vital role in the current distribution of the intermediate snail host, and hence we stress the importance of accounting for land use changes in models of future changes in schistosomiasis risk.
Sivapalan, Murugesu; Viney, Neil R.; Jeevaraj, Charles G.
1996-03-01
This paper presents an application of a long-term, large catchment-scale, water balance model developed to predict the effects of forest clearing in the south-west of Western Australia. The conceptual model simulates the basic daily water balance fluxes in forested catchments before and after clearing. The large catchment is divided into a number of sub-catchments (1-5 km2 in area), which are taken as the fundamental building blocks of the large catchment model. The responses of the individual subcatchments to rainfall and pan evaporation are conceptualized in terms of three inter-dependent subsurface stores A, B and F, which are considered to represent the moisture states of the subcatchments. Details of the subcatchment-scale water balance model have been presented earlier in Part 1 of this series of papers. The response of any subcatchment is a function of its local moisture state, as measured by the local values of the stores. The variations of the initial values of the stores among the subcatchments are described in the large catchment model through simple, linear equations involving a number of similarity indices representing topography, mean annual rainfall and level of forest clearing. The model is applied to the Conjurunup catchment, a medium-sized (39.6 km2) catchment in the south-west of Western Australia. The catchment has been heterogeneously (in space and time) cleared for bauxite mining and subsequently rehabilitated. For this application, the catchment is divided into 11 subcatchments. The model parameters are estimated by calibration, by comparing observed and predicted runoff values, over an 18-year period, for the large catchment and two of the subcatchments. Excellent fits are obtained.
Wang, Xuejiao; Zhang, Lizhen; Evers, Jochem B; Mao, Lili; Wei, Shoujun; Pan, Xuebiao; Zhao, Xinhua; van der Werf, Wopke; Li, Zhaohu
2014-07-09
In general, the quality of fruits depends on local conditions experienced by the fruit during its development. In cotton, fruit quality, and more specifically the quality of the fibre in the fruit, depends on interactions between fruit position in the plant architecture, temperature and agronomic practices, such as sowing time, mulching with plastic film and topping of the plant's main stem and branches. To quantify this response of cotton fibre quality to environment and management, we developed a simulation model of cotton growth and development, CottonXL. Simulation of cotton fibre quality (strength, length and micronaire) was implemented at the level of each individual fruit, in relation to thermal time (represented by physiological age of the fruit) and prevailing temperature during development of each fruit. Field experiments were conducted in China in 2007 to determine model parameters, and independent data on cotton fibre quality in three cotton producing regions in China were used for model validation. Simulated values for fibre quality closely corresponded to experimental data. Scenario studies simulating a range of management practices predicted that delaying topping times can significantly decrease fibre quality, while sowing date and film mulching had no significant effect. We conclude that CottonXL may be used to explore options for optimizing cotton fibre quality by matching cotton management to the environment, taking into account responses at the level of individual fruits. The model may be used at plant, crop and regional levels to address climate and land-use change scenarios. Published by Oxford University Press on behalf of the Annals of Botany Company.
Energy Technology Data Exchange (ETDEWEB)
Bahiraei, Mehdi [Kermanshah University of Technology, Kermanshah (Iran, Islamic Republic of); Hosseinalipour, Seyed Mostafa [Iran University of Science and Technology, Tehran (Iran, Islamic Republic of)
2013-08-15
A thermal dispersion model is utilized for simulation of convective heat transfer of water-TiO2 nanofluid in laminar flow in a circular tube. The concentration distribution at the cross section of the tube was obtained considering the effects of particle migration, and this concentration distribution was applied in the numerical solution. The numerical solution was performed at Reynolds numbers of 500 to 2000 and mean concentrations of 0.5 to 3%. Meanwhile, an experimental study was conducted to investigate the accuracy of the results obtained from the numerical solution. Non-uniformity of the concentration distribution increases with increasing mean concentration and Reynolds number. For a mean concentration of 3%, at Reynolds numbers of 500 and 2000, the concentration from wall to center of the tube increases by 2.6 and 30.9%, respectively. In the dispersion model, application of the non-uniform concentration distribution improves the accuracy of the predicted convective heat transfer coefficient in comparison with applying a uniform concentration.
Li, Guangjie; Cardiff University
2009-01-01
We study how stock return’s predictability and model uncertainty affect a rational buy-and-hold investor’s decision to allocate her wealth for different lengths of investment horizons in the UK market. We consider the FTSE All-Share Index as the risky asset, and the UK Treasury bill as the risk free asset in forming the investor’s portfolio. We identify the most powerful predictors of the stock return by accounting for model uncertainty. We find that though stock return predictability is weak...
Predictive Validation of an Influenza Spread Model
Hyder, Ayaz; Buckeridge, David L.; Leung, Brian
2013-01-01
Background Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. Methods and Findings We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998–1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks in advance with reasonable reliability, depending on the method of forecasting (static or dynamic). Conclusions Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive
Coughtrie, A R; Borman, D J; Sleigh, P A
2013-06-01
Flow in a gas-lift digester with a central draft-tube was investigated using computational fluid dynamics (CFD) and different turbulence closure models. The k-ω Shear-Stress-Transport (SST), Renormalization-Group (RNG) k-∊, Linear Reynolds-Stress-Model (RSM) and Transition-SST models were tested for a gas-lift loop reactor under Newtonian flow conditions, validated against published experimental work. The results identify that flow predictions within the reactor (where flow is transitional) are particularly sensitive to the turbulence model implemented; the Transition-SST model was found to be the most robust for capturing mixing behaviour and predicting separation reliably. Therefore, Transition-SST is recommended over k-∊ models for use in comparable mixing problems. A comparison of results obtained using multiphase Euler-Lagrange and single-phase approaches is presented. The results support the validity of the single-phase modelling assumptions in obtaining reliable predictions of the reactor flow. Solver independence of results was verified by comparing two independent finite-volume solvers (Fluent-13.0sp2 and OpenFOAM-2.0.1). Copyright © 2013 Elsevier Ltd. All rights reserved.
Simoneau, Martin; Guillaud, Étienne; Blouin, Jean
2013-06-12
Rotation of the torso while reaching produces torques (e.g., the Coriolis torque) that deviate the arm from its planned trajectory. To ensure an accurate reaching movement, the brain may take these perturbing torques into account during movement planning or, alternatively, it may correct hand trajectory during movement execution. Irrespective of the process selected, an underestimation of trunk rotation would likely induce inaccurate shoulder and elbow torques, resulting in hand deviation. Nonetheless, it is still undetermined to what extent a small error in the perception of trunk rotation, translating into an inappropriate selection of motor commands, would affect reaching accuracy. To investigate this, we adapted a biomechanical model (J Neurophysiol 89: 276-289, 2003) to predict the consequences of underestimating trunk rotation on right-hand reaching movements performed during either clockwise or counterclockwise torso rotations. The results revealed that regardless of the degree to which the torso rotation was underestimated, the amplitude of hand deviation was much larger for counterclockwise rotations than for clockwise rotations. This was attributed to the fact that the Coriolis and centripetal joint torques acted in the same direction during counterclockwise rotations yet in opposite directions during clockwise rotations, effectively cancelling each other out. These findings suggest that in order to anticipate and compensate for the interaction torques generated during torso rotation while reaching, the brain must have an accurate prediction of torso rotation kinematics. The present study proposes that when designing upper-limb prosthesis controllers, adding a sensor to monitor trunk kinematics may improve prosthesis control and performance.
Directory of Open Access Journals (Sweden)
T. Sigi Hale
2015-05-01
Background: We previously hypothesized that poor task-directed sensory information processing should be indexed by increased weighting of right-hemisphere (RH) biased attention and visuo-perceptual brain functions during task operations, and have demonstrated this phenotype in ADHD across multiple studies, using multiple methodologies. However, in our recent Distributed Effects Model of ADHD, we surmised that this phenotype is not ADHD-specific, but rather more broadly reflective of any circumstance that disrupts the induction and maintenance of an emergent task-directed neural architecture. Under this view, increased weighting of RH-biased attention and visuo-perceptual brain functions is expected to generally index neurocognitive sets that are not optimized for task-directed thought and action and, when durably expressed, liability for ADHD. Method: The current study tested this view by examining whether previously identified rightward parietal EEG asymmetry in ADHD was associated with common ADHD characteristics and comorbidities (i.e., ADHD risk factors). Results: Barring one exception (non-right-handedness), we found that it was. Rightward parietal asymmetry was associated with carrying the DRD4-7R risk allele, being male, having mood disorder, and having anxiety disorder. However, differences in the specific expression of rightward parietal asymmetry were observed, which are discussed in relation to possible unique mechanisms underlying ADHD liability in different ADHD risk factors. Conclusion: Rightward parietal asymmetry appears to be a durable feature of ADHD liability, as predicted by the Distributed Effects Perspective Model of ADHD. Moreover, variability in the expression of this phenotype may shed light on different sources of ADHD liability.
Posterior predictive checking of multiple imputation models.
Nguyen, Cattram D; Lee, Katherine J; Carlin, John B
2015-07-01
Multiple imputation is gaining popularity as a strategy for handling missing data, but there is a scarcity of tools for checking imputation models, a critical step in model fitting. Posterior predictive checking (PPC) has been recommended as an imputation diagnostic. PPC involves simulating "replicated" data from the posterior predictive distribution of the model under scrutiny. Model fit is assessed by examining whether the analysis from the observed data appears typical of results obtained from the replicates produced by the model. A proposed diagnostic measure is the posterior predictive "p-value", an extreme value of which (i.e., a value close to 0 or 1) suggests a misfit between the model and the data. The aim of this study was to evaluate the performance of the posterior predictive p-value as an imputation diagnostic. Using simulation methods, we deliberately misspecified imputation models to determine whether posterior predictive p-values were effective in identifying these problems. When estimating the regression parameter of interest, we found that more extreme p-values were associated with poorer imputation model performance, although the results highlighted that traditional thresholds for classical p-values do not apply in this context. A shortcoming of the PPC method was its reduced ability to detect misspecified models with increasing amounts of missing data. Despite the limitations of posterior predictive p-values, they appear to have a valuable place in the imputer's toolkit. In addition to automated checking using p-values, we recommend imputers perform graphical checks and examine other summaries of the test quantity distribution. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Ronald E. McRoberts; Veronica C. Lessard
2001-01-01
Uncertainty in diameter growth predictions is attributed to three general sources: measurement error or sampling variability in predictor variables, parameter covariances, and residual or unexplained variation around model expectations. Using measurement error and sampling variability distributions obtained from the literature and Monte Carlo simulation methods, the...
Model predictive control of smart microgrids
DEFF Research Database (Denmark)
Hu, Jiefeng; Zhu, Jianguo; Guerrero, Josep M.
2014-01-01
required to realise high-performance of distributed generations and will realise innovative control techniques utilising model predictive control (MPC) to assist in coordinating the plethora of generation and load combinations, thus enable the effective exploitation of the clean renewable energy sources...
Hara, Yuko; Pestilli, Franco; Gardner, Justin L
2014-01-01
Single-unit measurements have reported many different effects of attention on contrast-response (e.g., contrast-gain, response-gain, additive-offset dependent on visibility), while functional imaging measurements have more uniformly reported increases in response across all contrasts (additive-offset). The normalization model of attention elegantly predicts the diversity of effects of attention reported in single-units well-tuned to the stimulus, but what predictions does it make for more realistic populations of neurons with heterogeneous tuning? Are predictions in accordance with population-scale measurements? We used functional imaging data from humans to determine a realistic ratio of attention-field to stimulus-drive size (a key parameter for the model) and predicted effects of attention in a population of model neurons with heterogeneous tuning. We found that within the population, neurons well-tuned to the stimulus showed a response-gain effect, while less-well-tuned neurons showed a contrast-gain effect. Averaged across the population, these disparate effects of attention gave rise to additive-offsets in contrast-response, similar to reports in human functional imaging as well as population averages of single-units. Differences in predictions for single-units and populations were observed across a wide range of model parameters (ratios of attention-field to stimulus-drive size and the amount of baseline response modifiable by attention), offering an explanation for disparity in physiological reports. Thus, by accounting for heterogeneity in tuning of realistic neuronal populations, the normalization model of attention can not only predict responses of well-tuned neurons, but also the activity of large populations of neurons. More generally, computational models can unify physiological findings across different scales of measurement, and make links to behavior, but only if factors such as heterogeneous tuning within a population are properly accounted for.
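A reduced, single-neuron version of the normalization model conveys the key mechanism: attention scales the excitatory drive before divisive normalization. This sketch is a simplification of the full spatial model (it omits attention-field and stimulus-drive sizes, the key parameter discussed above), with illustrative parameter values.

```python
import numpy as np

def normalized_response(contrast, attn_gain, sigma=0.1):
    """Single-neuron reduction of the normalization model of attention:
    attention multiplies the stimulus drive, which is then divided by
    the suppressive drive plus the semisaturation constant sigma."""
    drive = attn_gain * contrast
    return drive / (sigma + drive)

contrasts = np.logspace(-2, 0, 6)
unattended = normalized_response(contrasts, attn_gain=1.0)
attended = normalized_response(contrasts, attn_gain=4.0)
```

In this reduction, attention acts like a contrast-gain change (the curve shifts leftward while the asymptote barely moves); in the full model the balance between contrast-gain and response-gain depends on the ratio of attention-field to stimulus-drive size, which is the population-level effect examined above.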
Pirolli, Peter; Mohan, Shiwali; Venkatakrishnan, Anusha; Nelson, Les; Silva, Michael; Springer, Aaron
2017-11-30
participant had a marginal effect on daily goal success (coefficient=0.0694, SE=0.0410, OR=1.0717, 95% CI -0.01116 to 0.1505, P=.09), and the time since acknowledging receipt of a reminder was highly significant (coefficient=-0.0490, SE=0.0104, OR=0.9522, 95% CI -0.0700 to -0.2852). A dual-system ACT-R mathematical model was fit to individuals' daily goal successes and reminder acknowledgments: a goal-striving system dependent on declarative memory plus a habit-forming system that acquires automatic procedures for performance of behavioral goals. Computational cognitive theory such as ACT-R can be used to make precise quantitative predictions concerning daily health behavior goal success in response to implementation intentions and the dosing schedules of reminders. ©Peter Pirolli, Shiwali Mohan, Anusha Venkatakrishnan, Les Nelson, Michael Silva, Aaron Springer. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 30.11.2017.
Nagasawa, Tsuyoshi; Hanamura, Katsunori
2017-06-01
The reliability of an analytical model for hydrogen oxidation at the Ni/YSZ anode in solid oxide fuel cells, known as the species territory adsorption model, has been improved by introducing reference thermodynamic and kinetic parameters predicted by density functional theory calculations. The model can explicitly predict anode overpotential given three unknown quantities: the quantities of state for the oxygen migration process in YSZ near a triple phase boundary (TPB), the frequency factor for hydrogen oxidation, and the effective anode thickness. The former two are determined through a careful fitting process between the predicted and experimental results for Ni/YSZ cermet and Ni-patterned anodes. This makes it possible to estimate the effective anode thickness, which tends to increase with temperature across six Ni/YSZ anodes reported in the literature. In addition, comparison of the proposed model with a published numerical simulation indicates that the model predicts the dependence of anode overpotential on steam partial pressure more precisely than a Butler-Volmer equation with empirical exchange current density. Introducing the present model into numerical simulation in place of the Butler-Volmer equation can therefore give more accurate prediction of anode polarization.
Predictive analytics can support the ACO model.
Bradley, Paul
2012-04-01
Predictive analytics can be used to rapidly spot hard-to-identify opportunities to better manage care--a key tool in accountable care. When considering analytics models, healthcare providers should: Make value-based care a priority and act on information from analytics models. Create a road map that includes achievable steps, rather than major endeavors. Set long-term expectations and recognize that the effectiveness of an analytics program takes time, unlike revenue cycle initiatives that may show a quick return.
Directory of Open Access Journals (Sweden)
Kyung Ah Koo
2015-04-01
Alpine, subalpine and boreal tree species, of low genetic diversity and adapted to low optimal temperatures, are vulnerable to the warming effects of global climate change. The accurate prediction of these species’ distributions in response to climate change is critical for effective planning and management. The goal of this research is to predict climate change effects on the distribution of red spruce (Picea rubens Sarg.) in the Great Smoky Mountains National Park (GSMNP), eastern USA. Climate change is, however, conflated with other environmental factors, making its assessment a complex systems problem in which indirect effects are significant in causality. Predictions were made by linking a tree growth simulation model, red spruce growth model (ARIM.SIM), to a GIS spatial model, red spruce habitat model (ARIM.HAB). ARIM.SIM quantifies direct and indirect interactions between red spruce and its growth factors, revealing the latter to be dominant. ARIM.HAB spatially distributes the ARIM.SIM simulations under the assumption that greater growth reflects higher probabilities of presence. ARIM.HAB predicts the future habitat suitability of red spruce based on growth predictions of ARIM.SIM under climate change and three air pollution scenarios: 10% increase, no change and 10% decrease. Results show that suitable habitats shrink most when air pollution increases. Higher temperatures cause losses of most low-elevation habitats. Increased precipitation and air pollution produce acid rain, which causes loss of both low- and high-elevation habitats. The general prediction is that climate change will cause contraction of red spruce habitats at both lower and higher elevations in GSMNP, and the effects will be exacerbated by increased air pollution. These predictions provide valuable information for understanding potential impacts of global climate change on the spatiotemporal distribution of red spruce habitats in GSMNP.
Accessible tools to quantify adverse outcomes pathways (AOPs) that can predict the ecological effects of chemicals and other stressors are a major goal of Chemical Safety and Sustainability research within US EPA’s Office of Research and Development. To address this goal, w...
Chandramouli, Bharadwaj; Jang, Myoseon; Kamens, Richard M.
The partitioning of a diverse set of semivolatile organic compounds (SOCs) on a variety of organic aerosols was studied using smog chamber experimental data. Existing data on the partitioning of SOCs on aerosols from wood combustion, diesel combustion, and the α-pinene-O3 reaction were augmented by carrying out smog chamber partitioning experiments on aerosols from meat cooking and from catalyzed and uncatalyzed gasoline engine exhaust. Model compositions for aerosols from meat cooking and gasoline combustion emissions were used to calculate activity coefficients for the SOCs in the organic aerosols, and the Pankow absorptive gas/particle partitioning model was used to calculate the partitioning coefficient Kp and quantify the predictive improvement gained by using the activity coefficient. The slope of the log Kp vs. log p°L correlation for partitioning on aerosols from meat cooking improved from -0.81 to -0.94 after incorporation of the activity coefficients γom. A stepwise regression analysis of the partitioning model revealed that, for the data set used in this study, partitioning predictions on α-pinene-O3 secondary aerosol and wood combustion aerosol showed statistically significant improvement after incorporation of γom, which can be attributed to their overall polarity. The partitioning model was sensitive to changes in aerosol composition when updated compositions for α-pinene-O3 aerosol and wood combustion aerosol were used. The effectiveness of the octanol-air partitioning coefficient (KOA) as a partitioning correlator over a variety of aerosol types was evaluated. The slope of the log Kp vs. log KOA correlation was not constant over the aerosol types and SOCs used in the study, and the use of KOA for partitioning correlations can potentially lead to significant deviations, especially for polar aerosols.
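The Pankow absorptive partitioning coefficient used above has a standard closed form, Kp = f_om·760·R·T / (MW_om·γ·p°L·10⁶), with R = 8.2×10⁻⁵ m³·atm·mol⁻¹·K⁻¹ and p°L in torr, giving Kp in m³/µg. The sketch below evaluates it for hypothetical inputs; the numerical values are placeholders, not data from the study.

```python
def pankow_kp(f_om, mw_om, gamma_om, p_l0_torr, temp_k=298.15):
    """Absorptive gas/particle partitioning coefficient Kp (m3/ug):
    Kp = f_om * 760 * R * T / (mw_om * gamma_om * p_l0 * 1e6)."""
    R = 8.2e-5  # m3 atm mol-1 K-1
    return (f_om * 760.0 * R * temp_k) / (mw_om * gamma_om * p_l0_torr * 1e6)

# Hypothetical aerosol: 80% absorbing organic matter, mean MW 200 g/mol,
# for an SOC with subcooled liquid vapor pressure 1e-5 torr.
kp_ideal = pankow_kp(f_om=0.8, mw_om=200.0, gamma_om=1.0, p_l0_torr=1e-5)
kp_nonideal = pankow_kp(f_om=0.8, mw_om=200.0, gamma_om=2.0, p_l0_torr=1e-5)
```

Doubling the activity coefficient halves Kp, which is why neglecting γom biases partitioning predictions most strongly for polar aerosols.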
Modelling language evolution: Examples and predictions
Gong, Tao; Shuai, Lan; Zhang, Menghan
2014-06-01
We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolutions of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines.
Berlin, William H.; Brooke, L.T.; Stone, Linda J.
1977-01-01
Eggs stripped from lake whitefish (Coregonus clupeaformis) spawning in Lake Michigan were incubated in the laboratory at temperatures similar to those on whitefish spawning grounds in Lake Michigan during December-April. Observed times from fertilization to attainment of each of 21 developmental stages were used to test a model that predicts the rate of development at daily fluctuating temperatures; the model relates the rate of development for any given stage j, expressed as the reciprocal of time (Rj), to temperature (T). The generalized equation for a developmental stage is Rj = ab^T c^(T²).
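The fluctuating-temperature prediction works by accumulating daily development increments Rj(T) until their sum reaches 1 (Rj being the reciprocal of development time in days). The sketch below uses a hypothetical rate function with made-up constants, not the fitted whitefish parameters.

```python
def days_to_stage(daily_temps, rate):
    """Accumulate daily development increments rate(T); the stage is
    reached on the first day the summed fraction reaches 1."""
    total = 0.0
    for day, temp in enumerate(daily_temps, start=1):
        total += rate(temp)
        if total >= 1.0:
            return day
    return None  # stage not reached within the temperature record

# Illustrative exponential rate function with hypothetical constants.
rate = lambda t: 0.02 * 1.1 ** t

cold = days_to_stage([4.0] * 100, rate)    # constant 4 deg C
warm = days_to_stage([10.0] * 100, rate)   # constant 10 deg C
```

As expected for a thermal-sum model, warmer incubation reaches the stage sooner (here, day 20 versus day 35).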
A predictive model for dimensional errors in fused deposition modeling
DEFF Research Database (Denmark)
Stolfi, A.
2015-01-01
This work concerns the effect of deposition angle (a) and layer thickness (L) on the dimensional performance of FDM parts, using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole a range from 0° to 177° in 3° steps and two values of L (0.254 mm, 0.330 mm) was produced by comparing predicted values with external face-to-face measurements. After removing outliers, the results show that the developed two-parameter model can serve as a tool for modeling the FDM dimensional behavior over a wide range of deposition angles.
Walsh, Colin; Hripcsak, George
2014-12-01
Hospital readmission risk prediction remains an active area of investigation and operations in light of the hospital readmissions reduction program through CMS. Multiple models of risk have been reported with variable discriminatory performance, and it remains unclear how design factors affect performance. We studied the effects of varying three factors of model development on the prediction of risk based on health record data: (1) reason for readmission (primary readmission diagnosis); (2) available data and data types (e.g. visit history, laboratory results); and (3) cohort selection. Regularized regression (LASSO) was used to generate predictions of readmission risk with prevalence sampling; a Support Vector Machine (SVM) was used for comparison in cohort selection testing, and models were calibrated by refitting to outcome prevalence. Predicting readmission risk across multiple reasons for readmission resulted in ROC areas ranging from 0.92 for readmission for congestive heart failure to 0.71 for syncope and 0.68 for all-cause readmission. Visit history and laboratory tests contributed the most predictive value; contributions varied by readmission diagnosis. Cohort definition affected performance for both parametric and nonparametric algorithms. Compared to all patients, limiting the cohort to patients whose index admission and readmission diagnoses matched resulted in a decrease in average ROC from 0.78 to 0.55 (difference in ROC 0.23, p value 0.01). Calibration plots demonstrate good calibration with low mean squared error. Targeting reason for readmission in risk prediction impacted discriminatory performance. In general, laboratory data and visit history data contributed the most to prediction; data source contributions varied by reason for readmission. Cohort selection had a large impact on model performance, and these results demonstrate the difficulty of comparing results across different studies of predictive risk modeling. Copyright © 2014 Elsevier Inc. All rights reserved.
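The regularized-regression step can be sketched with a minimal stand-in for the LASSO approach: an L1-penalized logistic model fit by proximal gradient descent on synthetic data. The predictors, penalty, and cohort below are invented for illustration and bear no relation to the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic cohort: 500 index admissions x 6 candidate predictors
# (e.g., visit-history counts, lab flags); only the first two carry signal.
X = rng.normal(size=(500, 6))
true_logits = 1.5 * X[:, 0] - 1.0 * X[:, 1]
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-true_logits))).astype(float)

def l1_logistic(X, y, lam=0.05, lr=0.1, n_iter=2000):
    """L1-regularized logistic regression via proximal gradient descent:
    a gradient step on the log-loss followed by soft-thresholding,
    which shrinks uninformative predictors toward zero."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (p - y) / len(y)
        w -= lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return w

w = l1_logistic(X, y)
```

The penalty drives the four noise coefficients toward zero while retaining the two informative ones, which is the property that makes LASSO attractive when candidate predictors (visit history, labs, demographics) are numerous.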
Hedrick, Mark S; Moon, Il Joon; Woo, Jihwan; Won, Jong Ho
2016-01-01
Previous studies have shown that concurrent vowel identification improves with increasing temporal onset asynchrony of the vowels, even if the vowels have the same fundamental frequency. The current study investigated the possible underlying neural processing involved in concurrent vowel perception. The individual vowel stimuli from a previously published study were used as inputs for a phenomenological auditory-nerve (AN) model. Spectrotemporal representations of simulated neural excitation patterns were constructed (i.e., neurograms) and then matched quantitatively with the neurograms of the single vowels using the Neurogram Similarity Index Measure (NSIM). A novel computational decision model was used to predict concurrent vowel identification. To facilitate optimum matches between the model predictions and the behavioral human data, internal noise was added at either neurogram generation or neurogram matching using the NSIM procedure. The best fit to the behavioral data was achieved with a signal-to-noise ratio (SNR) of 8 dB for internal noise added at the neurogram but with a much smaller amount of internal noise (SNR of 60 dB) for internal noise added at the level of the NSIM computations. The results suggest that accurate modeling of concurrent vowel data from listeners with normal hearing may partly depend on internal noise and where internal noise is hypothesized to occur during the concurrent vowel identification process.
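The neurogram matching step can be sketched with an SSIM-style index: a crude stand-in for the NSIM used in the study, computed here globally over the whole matrix rather than over local windows, with arbitrary stabilizing constants.

```python
import numpy as np

def nsim_like(a, b, c1=0.01, c2=0.03):
    """SSIM-style similarity between two neurograms (frequency x time
    matrices), combining mean, variance, and covariance terms;
    identical inputs score exactly 1."""
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a**2 + mu_b**2 + c1) * (var_a + var_b + c2))

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 64))  # e.g., 32 frequency bands x 64 time bins
shuffled = rng.permutation(x.ravel()).reshape(x.shape)

same = nsim_like(x, x)
different = nsim_like(x, shuffled)
```

Internal noise of the kind hypothesized in the study would be modeled by adding Gaussian noise to one neurogram before matching, lowering the index in proportion to the noise level.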
Abazari, Alireza; Thompson, Richard B; Elliott, Janet A W; McGann, Locksley E
2012-03-21
Knowledge of the spatial and temporal distribution of cryoprotective agent (CPA) is necessary for the cryopreservation of articular cartilage. Cartilage dehydration and shrinkage, as well as the change in extracellular osmolality, may have a significant impact on chondrocyte survival during and after CPA loading, freezing, and thawing, and during CPA unloading. In the literature, Fick's law of diffusion is commonly used to predict the spatial distribution and overall concentration of the CPA in the cartilage matrix, and the shrinkage and stress-strain in the cartilage matrix during CPA loading are neglected. In this study, we used a previously described biomechanical model to predict the spatial and temporal distributions of CPA during loading. We measured the intrinsic inhomogeneities in initial water and fixed charge densities in the cartilage using magnetic resonance imaging and introduced them into the model as initial conditions. We then compared the prediction results with the results obtained using uniform initial conditions. The simulation results in this study demonstrate the presence of a significant mechanical strain in the matrix of the cartilage, within all layers, during CPA loading. The osmotic response of the chondrocytes to the cartilage dehydration during CPA loading was also simulated. The results reveal that a transient shrinking occurs to different levels, and the chondrocytes experience a significant decrease in volume, particularly in the middle and deep zones of articular cartilage, during CPA loading. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Ulutan, Durul
2013-01-01
In the aerospace industry, titanium and nickel-based alloys are frequently used for critical structural components, especially because of their higher strength at both low and high temperatures and their higher resistance to wear and chemical degradation. However, because of their unfavorable thermal properties, deformation and friction-induced microstructural changes prevent the end products from having good surface integrity. In addition to surface roughness, microhardness changes, and microstructural alterations, the machining-induced residual stress profiles of titanium and nickel-based alloys contribute to the surface integrity of these products. Therefore, it is essential to create a comprehensive method that predicts the residual stress outcomes of machining processes and to understand how machining parameters (cutting speed, uncut chip thickness, depth of cut, etc.) or tool parameters (tool rake angle, cutting edge radius, tool material/coating, etc.) affect machining-induced residual stresses. Since experiments involve a certain amount of measurement error, physics-based simulation experiments should likewise carry an uncertainty in the predicted values, and a rich set of simulation experiments is utilized to create an expected value and variance for predictions. As the first part of this research, a method to determine the friction coefficients during machining from practical experiments was introduced. Using these friction coefficients, finite element-based simulation experiments were utilized to determine the flow stress characteristics of the materials and then to predict the machining-induced forces and residual stresses, and the results were validated against the experimental findings. A sensitivity analysis on the numerical parameters was conducted to understand the effect of changing physical and numerical parameters, increasing the confidence in the selected parameters, and the effect of machining parameters on machining-induced forces and residual
International Nuclear Information System (INIS)
O'Donoghue, J.A.
1986-01-01
These letters discuss the problems arising because the normal-tissue isoeffect formulae based on the Ellis equation (1969) do not correctly account for the late-occurring effects of fractionated radiotherapy, and the extension of the linear quadratic model to continuous low dose-rate radiotherapy with constant or decaying sources by R.G. Dale (1985). J.A. O'Donoghue points out that the 'late effects' and CRE curves correspond closely, whilst the 'acute effects' and CRE curves are in obvious disagreement. For continuous low dose-rate radiotherapy, the CRE and late-effects quadratic models are in agreement. Useful bibliography. (U.K.)
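The contrast drawn in the letters can be made concrete with the standard linear quadratic expressions for biologically effective dose (BED): the familiar fractionated form, and Dale's (1985) extension to continuous low dose-rate irradiation with mono-exponential repair. The parameter values below (α/β, repair constant μ, dose rate) are illustrative only.

```python
import math

def bed_fractionated(n, d, alpha_beta):
    """BED for n fractions of size d (Gy): n*d*(1 + d/(alpha/beta))."""
    return n * d * (1.0 + d / alpha_beta)

def bed_continuous(dose_rate, duration_h, alpha_beta, mu=1.4):
    """Dale's LQ extension for continuous irradiation at constant dose
    rate R (Gy/h) over time T (h), with repair constant mu (1/h):
    BED = R*T * (1 + (2R / (mu*(alpha/beta))) * (1 - (1 - exp(-mu*T)) / (mu*T)))."""
    total = dose_rate * duration_h
    g = 1.0 - (1.0 - math.exp(-mu * duration_h)) / (mu * duration_h)
    return total * (1.0 + (2.0 * dose_rate / (mu * alpha_beta)) * g)

bed_fx = bed_fractionated(30, 2.0, alpha_beta=10.0)    # 30 x 2 Gy
bed_ldr = bed_continuous(0.5, 140.0, alpha_beta=10.0)  # ~70 Gy over ~6 days
```

At low dose rates the quadratic term is suppressed by repair during irradiation, so the continuous BED lies only slightly above the physical dose, as the LQ model predicts.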
Genetic models of homosexuality: generating testable predictions
Gavrilets, Sergey; Rice, William R
2006-01-01
Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality including: (i) chromosomal location, (ii) dominance among segregating alleles and (iii) effect sizes that distinguish between the two major models for their polymorphism: the overdominance and sexual antagonism models. We conclude that the measurement of the genetic characteristics of quantitative trait loci (QTLs) found in genomic screens for genes influencing homosexuality can be highly informative in resolving the form of natural selection maintaining their polymorphism. PMID:17015344
Soheyli, Saeed; Khanlari, Marzieh Varasteh
2016-04-01
Effects of the various neutron emission energy spectra, as well as the influence of the angular momentum of pre-scission neutrons, on theoretical predictions of fission fragment angular anisotropies are considered for several heavy-ion induced fission systems. Although theoretical calculations of angular anisotropy are very sensitive to the neutron emission correction, the effects of the different kinetic energies of neutrons emitted before reaching the saddle point, as derived from the various neutron emission energy spectra, on the predicted fission fragment angular distribution are not significant and can be neglected: for a wide range of fissility parameters and compound-nucleus excitation energies, these effects on the angular anisotropies are no more than 10%. Furthermore, the theoretical prediction of fission fragment angular anisotropy is not sensitive to the angular momentum of the emitted neutrons.
Liaw, Horng-Jang; Wang, Tzu-Ai
2007-03-06
Flash point is one of the major quantities used to characterize the fire and explosion hazard of liquids. Liquids with dissolved salts are encountered in salt-distillation processes for separating close-boiling or azeotropic systems, and the addition of salts to a liquid may reduce its fire and explosion hazard. In this study, we have modified a previously proposed model for predicting the flash point of miscible mixtures to extend its application to solvent/salt mixtures. This modified model was verified by comparison with experimental data for organic solvent/salt and aqueous-organic solvent/salt mixtures to confirm its efficacy in predicting the flash points of these mixtures. The experimental results confirm markedly greater increases in liquid flash point with the addition of inorganic salts than with supplementation by equivalent quantities of water. Based on this evidence, it appears reasonable to suggest potential application of the model in assessing the fire and explosion hazard of solvent/salt mixtures and, further, that the addition of inorganic salts may prove useful for hazard reduction in flammable liquids.
Link Prediction via Sparse Gaussian Graphical Model
Directory of Open Access Journals (Sweden)
Liangliang Zhang
2016-01-01
Link prediction is an important task in complex network analysis. Traditional link prediction methods are limited by network topology and the lack of node property information, which makes predicting links challenging. In this study, we address link prediction using a sparse Gaussian graphical model and demonstrate its theoretical and practical effectiveness. In theory, link prediction is executed by estimating the inverse covariance matrix of samples to overcome information limits. The proposed method was evaluated with four small and four large real-world datasets. The experimental results show that the area under the curve (AUC) value obtained by the proposed method improved by an average of 3% and 12.5% on the small and large datasets, respectively, compared with 13 mainstream similarity methods. This method outperforms the baseline method, and its prediction accuracy is superior to mainstream methods when using only 80% of the training set. The method also provides significantly higher AUC values when using only 60% of the training set on the Dolphin and Taro datasets. Furthermore, the error rate of the proposed method demonstrates superior performance on all datasets compared to mainstream methods.
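The precision-matrix idea behind this record can be sketched compactly. The snippet below is a simplified stand-in, not the paper's estimator: it uses a ridge-regularized inverse covariance rather than an actual graphical-lasso fit, and the chain-structured toy data are invented for illustration. Candidate links are scored by partial correlation, so a pair connected only indirectly should score near zero.

```python
import numpy as np

def partial_correlations(samples, ridge=0.1):
    """Score candidate links by partial correlation, derived from a
    regularized inverse covariance matrix (a simple stand-in for a
    sparse graphical-lasso estimate)."""
    S = np.cov(samples, rowvar=False)
    theta = np.linalg.inv(S + ridge * np.eye(S.shape[0]))
    d = np.sqrt(np.diag(theta))
    p = -theta / np.outer(d, d)        # normalize precision entries
    np.fill_diagonal(p, 1.0)
    return p

# Toy chain 0 -> 1 -> 2: the 0-2 dependence is entirely mediated by node 1,
# so the 0-2 partial correlation should be much weaker than 0-1.
rng = np.random.default_rng(0)
n = 3000
x0 = rng.normal(size=n)
x1 = 0.8 * x0 + rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(size=n)
p = partial_correlations(np.column_stack([x0, x1, x2]))
```

A real sparse estimator would additionally drive small entries exactly to zero, which is what makes the predicted link set sparse.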
Boonpawa, Rungnapa
2017-01-01
Flavonoids, abundantly present in fruits and vegetables, have been reported to exert various positive health effects based on in vitro bioassays. However, effects detected in in vitro models cannot be directly correlated to human health as most in vitro studies have been performed using flavonoid
Romain, Ahmed Jérôme; Horwath, Caroline; Bernard, Paquito
2018-01-01
The purpose of the present study was to compare the prediction of physical activity (PA) by experiential or behavioral processes of change (POCs) or by an interaction between both types of processes. This cross-sectional study was conducted using an online questionnaire. A total of 394 participants (244 women, 150 men), with a mean age of 35.12 ± 12.04 years and a mean body mass index of 22.97 ± 4.25 kg/m², were included. Participants completed the Processes of Change and Stages of Change questionnaires, and the International Physical Activity Questionnaire to evaluate self-reported PA level (total, vigorous, and moderate PA). Hierarchical multiple regression models were used to test the prediction of PA level. For both total PA (β = .261; P processes are most prominent in PA behavior. Nevertheless, it is of interest to note that the interaction between experiential and behavioral POCs was the only element predicting moderate PA level. Experiential processes were not associated with PA level.
Pieters, Sigrid; Saeys, Wouter; Van den Kerkhof, Tom; Goodarzi, Mohammad; Hellings, Mario; De Beer, Thomas; Heyden, Yvan Vander
2013-01-25
Owing to spectral variations from other sources than the component of interest, large investments in the NIR model development may be required to obtain satisfactory and robust prediction performance. To make the NIR model development for routine active pharmaceutical ingredient (API) prediction in tablets more cost-effective, alternative modelling strategies were proposed. They used a massive amount of prior spectral information on intra- and inter-batch variation and the pure component spectra to define a clutter, i.e., the detrimental spectral information. This was subsequently used for artificial data augmentation and/or orthogonal projections. The model performance improved statistically significantly, with a 34-40% reduction in RMSEP while needing fewer model latent variables, by applying the following procedure before PLS regression: (1) augmentation of the calibration spectra with the spectral shapes from the clutter, and (2) net analyte pre-processing (NAP). The improved prediction performance was not compromised when reducing the variability in the calibration set, making exhaustive calibration unnecessary. Strong water content variations in the tablets caused frequency shifts of the API absorption signals that could not be included in the clutter. Updating the model for this kind of variation demonstrated that the completeness of the clutter is critical for the performance of these models and that the model will only be more robust for spectral variation that is not co-linear with the one from the property of interest. Copyright © 2012 Elsevier B.V. All rights reserved.
An Anisotropic Hardening Model for Springback Prediction
International Nuclear Information System (INIS)
Zeng, Danielle; Xia, Z. Cedric
2005-01-01
As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closure panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture the realistic Bauschinger effect at reverse loading, such as when material passes through die radii or a drawbead during the sheet metal forming process. This model accounts for the material's anisotropic yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent the Bauschinger effect. The effectiveness of the model is demonstrated by comparison of numerical and experimental springback results for a DP600 straight U-channel test
A statistical model for predicting muscle performance
Byerly, Diane Leslie De Caix
The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th order Autoregressive (AR) model and statistical regression analysis. It was determined that an AR derived parameter, the mean average magnitude of AR poles, significantly correlated with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-space flight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned enhancing astronaut performance and safety. Potential ground-based, medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing safety guidelines in the industry for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing
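The fatigue parameter in this record, the mean magnitude of the poles of a 5th-order AR model, is straightforward to compute once the model is fit. A minimal sketch, assuming a Yule-Walker fit to a synthetic stationary signal standing in for an SEMG epoch (the study's exact estimator, preprocessing, and scaling are not specified in the abstract):

```python
import numpy as np

def yule_walker(x, order=5):
    """Estimate AR(order) coefficients from the biased autocorrelation."""
    x = np.asarray(x, float) - np.mean(x)
    r = np.array([x[: len(x) - k] @ x[k:] for k in range(order + 1)]) / len(x)
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1 : order + 1])

def mean_pole_magnitude(a):
    """Poles are the roots of z^p - a1*z^(p-1) - ... - ap."""
    poles = np.roots(np.concatenate([[1.0], -np.asarray(a)]))
    return float(np.mean(np.abs(poles)))

# Synthetic AR(2) signal as a stand-in for one exercise repetition's SEMG
rng = np.random.default_rng(1)
e = rng.normal(size=2000)
x = np.zeros(2000)
for t in range(2, 2000):
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + e[t]
m = mean_pole_magnitude(yule_walker(x, order=5))
```

The Yule-Walker estimate built from the biased autocorrelation always yields a stable model, so the mean pole magnitude stays below one for stationary input; tracking how it evolves across repetitions is what the abstract correlates with Rmax.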
Iowa calibration of MEPDG performance prediction models.
2013-06-01
This study aims to improve the accuracy of AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG) pavement performance predictions for Iowa pavement systems through local calibration of MEPDG prediction models. A total of 130 representative p...
Foundation Settlement Prediction Based on a Novel NGM Model
Directory of Open Access Journals (Sweden)
Peng-Yu Chen
2014-01-01
Prediction of foundation or subgrade settlement is very important during engineering construction. Given that many settlement-time sequences follow a nonhomogeneous index trend, a novel grey forecasting model called the NGM(1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM(1,1,k,c) model has the property of white exponential law coincidence and can predict a pure nonhomogeneous index sequence precisely. We used two case studies to verify the predictive effect of the NGM(1,1,k,c) model for settlement prediction. The results show that this model can achieve excellent prediction accuracy; thus, the model is quite suitable for simulation and prediction of approximately nonhomogeneous index sequences and has excellent application value in settlement prediction.
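The discrete form of such a grey model can be sketched directly: after 1-AGO accumulation, the parameters (a, b, c) of x0(k) + a*z1(k) = b*k + c are estimated by least squares, and predictions follow from a recursion. This is a hedged reconstruction from the standard grey-model literature, not the authors' exact optimized whitenization variant; the settlement numbers are replaced by a synthetic nonhomogeneous index sequence.

```python
import numpy as np

def fit_ngm(x0):
    """Least-squares parameters (a, b, c) of x0(k) + a*z1(k) = b*k + c."""
    x1 = np.cumsum(x0)                       # 1-AGO accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])            # background values, k = 2..n
    k = np.arange(2, len(x0) + 1)
    B = np.column_stack([-z1, k, np.ones(len(x0) - 1)])
    a, b, c = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    return a, b, c

def predict_ngm(x0, a, b, c, horizon=0):
    """Recursion x1(k)(1+a/2) = x1(k-1)(1-a/2) + b*k + c, then 1-IAGO."""
    x1 = [float(x0[0])]
    for k in range(2, len(x0) + horizon + 1):
        x1.append(((1 - a / 2) * x1[-1] + b * k + c) / (1 + a / 2))
    return np.diff(np.concatenate([[0.0], x1]))

# A nonhomogeneous index sequence (exponential plus constant) is fit exactly
k = np.arange(1, 11)
x0 = 2.0 * 0.8 ** k + 3.0
pred = predict_ngm(x0, *fit_ngm(x0), horizon=3)
```

Sequences of the form A*r^k + B satisfy the discrete equation exactly, which is why this model family suits settlement curves with a nonhomogeneous index trend.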
International Nuclear Information System (INIS)
Monte, Luigi
2013-01-01
The present work describes the application of a non-linear Leslie model for predicting the effects of ionising radiation on wild populations. The model assumes that, for protracted chronic irradiation, the effect-dose relationship is linear. In particular, the effects of radiation are modelled by relating the increase in the mortality rates of the individuals to the dose rates through a proportionality factor C. The model was tested using independent data and information from a series of experiments that were aimed at assessing the response to radiation of wild populations of meadow voles and whose results were described in the international literature. The comparison of the model results with the data selected from the above mentioned experiments showed that the model overestimated the detrimental effects of radiation on the size of irradiated populations when the values of C were within the range derived from the median lethal dose (L50) for small mammals. The described non-linear model suggests that the non-expressed biotic potential of the species whose growth is limited by processes of environmental resistance, such as the competition among the individuals of the same or of different species for the exploitation of the available resources, can be a factor that determines a more effective response of population to the radiation effects. -- Highlights: • A model to assess the radiation effects on wild population is described. • The model is based on non-linear Leslie matrix. • The model is applied to small mammals living in an irradiated meadow. • Model output is conservative if effect-dose factor estimated from L50 is used. • Systemic response to stress of populations in competitive conditions may be more effective
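The projection core of a Leslie-matrix model like this is compact. A minimal linear sketch (the paper's density-dependent, non-linear terms are omitted, and all vital rates, the effect-dose factor C, and the dose rate below are illustrative numbers, not the study's): chronic irradiation is represented by scaling survival down in proportion to C times the dose rate.

```python
import numpy as np

def leslie_matrix(fecundity, survival):
    """Top row holds age-class fecundities; the subdiagonal holds survival."""
    n = len(fecundity)
    L = np.zeros((n, n))
    L[0, :] = fecundity
    L[np.arange(1, n), np.arange(n - 1)] = survival
    return L

def project(L, n0, steps):
    n = np.asarray(n0, float)
    for _ in range(steps):
        n = L @ n
    return n

fec = np.array([0.0, 1.2, 1.0])           # illustrative vital rates
surv = np.array([0.6, 0.4])
C, dose_rate = 0.05, 2.0                  # assumed effect-dose factor and dose rate
surv_irr = surv * (1.0 - C * dose_rate)   # radiation raises mortality linearly
n0 = [100.0, 50.0, 20.0]
pop_control = project(leslie_matrix(fec, surv), n0, steps=10).sum()
pop_irradiated = project(leslie_matrix(fec, surv_irr), n0, steps=10).sum()
```

Comparing the projected totals with and without the dose term is the basic experiment the paper performs against the meadow-vole data.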
International Nuclear Information System (INIS)
Jung, Eui Guk; Boo, Joon Hong
2008-01-01
This study deals with a mathematical modeling for the steady-state temperature characteristics of an entire loop heat pipe. The lumped layer model was applied to each node for temperature analysis. The flat type evaporator and condenser in the model had planar dimensions of 40 mm (W) x 50 mm (L). The wick material was a sintered metal and the working fluid was methanol. The molecular kinetic theory was employed to model the phase change phenomena in the evaporator and the condenser. Liquid-vapor interface configuration was expressed by the thin film theories available in the literature. Effects of design factors of loop heat pipe on the thermal performance were investigated by the modeling proposed in this study
Directory of Open Access Journals (Sweden)
Katya L Masconi
Imputation techniques used to handle missing data are based on the principle of replacement. It is widely advocated that multiple imputation is superior to other imputation methods; however, studies have suggested that simple methods for filling missing data can be just as accurate as complex methods. The objective of this study was to implement a number of simple and more complex imputation methods, and to assess the effect of these techniques on the performance of undiagnosed diabetes risk prediction models during external validation. Data from the Cape Town Bellville-South cohort served as the basis for this study. Imputation methods and models were identified via recent systematic reviews. Models' discrimination was assessed and compared using the C-statistic and non-parametric methods, before and after recalibration through simple intercept adjustment. The study sample consisted of 1256 individuals, of whom 173 were excluded due to previously diagnosed diabetes. Of the final 1083 individuals, 329 (30.4%) had missing data. Family history had the highest proportion of missing data (25%). Imputation of the outcome, undiagnosed diabetes, was highest in stochastic regression imputation (163 individuals). Overall, deletion resulted in the lowest model performances, while simple imputation yielded the highest C-statistic for the Cambridge Diabetes Risk model, Kuwaiti Risk model, Omani Diabetes Risk model and Rotterdam Predictive model. Multiple imputation only yielded the highest C-statistic for the Rotterdam Predictive model, and this was matched by simpler imputation methods. Deletion was confirmed as a poor technique for handling missing data. However, despite the emphasized disadvantages of simpler imputation methods, this study showed that implementing these methods results in similar predictive utility for undiagnosed diabetes when compared to multiple imputation.
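The comparison at the heart of this study, model discrimination under different missing-data strategies, can be illustrated on synthetic data. The sketch below is not the Bellville-South analysis: the risk model, its coefficients, and the 30% missingness rate are invented for illustration, and the C-statistic is computed directly as the concordance probability.

```python
import numpy as np

def c_statistic(score, y):
    """Probability that a random case outranks a random non-case (AUC)."""
    pos, neg = score[y == 1], score[y == 0]
    diff = pos[:, None] - neg[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / (len(pos) * len(neg))

rng = np.random.default_rng(0)
n = 400
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(1.2 * x1 + 0.8 * x2)))).astype(int)

miss = rng.random(n) < 0.3                 # 30% of x2 missing at random
x2_obs = np.where(miss, np.nan, x2)

# Strategy 1: deletion, i.e. drop incomplete rows before scoring
keep = ~miss
c_deletion = c_statistic(1.2 * x1[keep] + 0.8 * x2_obs[keep], y[keep])

# Strategy 2: simple (mean) imputation, filling gaps with the observed mean
x2_imp = np.where(miss, np.nanmean(x2_obs), x2_obs)
c_imputed = c_statistic(1.2 * x1 + 0.8 * x2_imp, y)
```

With data missing completely at random, both strategies retain most of the discrimination, which mirrors the study's finding that simple imputation can match more complex approaches; deletion additionally shrinks the validation sample.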
Model complexity control for hydrologic prediction
Schoups, G.; Van de Giesen, N.C.; Savenije, H.H.G.
2008-01-01
A common concern in hydrologic modeling is overparameterization of complex models given limited and noisy data. This leads to problems of parameter nonuniqueness and equifinality, which may negatively affect prediction uncertainties. A systematic way of controlling model complexity is therefore
Zhang, Dan; Chen, Anqiang; Zhao, Jixia; Lu, Chuanhao; Liu, Gangcai
2017-10-01
Rock decay is mainly the result of the combined effects of moisture content and temperature, but little is known about the quantitative relationship between these variables and the rate of rock decay. In this study we develop quantitative calculation models of rock decay rate under laboratory conditions and validate the efficiency of these models by comparing the predicted rock decay mass and that measured for rock exposed in the field. Rainfall and temperature data in the field were standardised to a dimensionless moisture content and temperature variables, respectively, and then the predicted rock decay mass was calculated by the models. The measured rock decay mass was determined by manual sieving. Based on our previously determined relationship between a single factor (moisture content or temperature) and the rate of rock decay in the laboratory, power function models are developed. Results show that the rock decay mass calculated by the model was comparable with field data, with averaged relative errors of 1.53%, 9.00% and 11.82% for the Tuodian group (J3t), Matoushan group (K2m) and Lufeng group (J1l), respectively, which are mainly due to inaccurate transformation of field rainfall into the rock moisture content and artificial disturbance when the samples were sieved in the field. Our results show that the developed models based on laboratory-derived rates can accurately predict the decay rates of mudstones exposed in the field.
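The power-function form used for the laboratory decay-rate relationships can be fit by ordinary least squares in log-log space. A minimal sketch with made-up numbers (the paper's actual coefficients and its standardisation of rainfall and temperature into dimensionless variables are not reproduced here):

```python
import numpy as np

def fit_power(x, y):
    """Fit y = a * x**b by linear least squares on log-transformed data."""
    b, log_a = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(log_a), b

# Illustrative decay-rate data following an exact power law
moisture = np.array([0.1, 0.2, 0.4, 0.8])     # dimensionless moisture content
rate = 2.5 * moisture ** 1.3                   # decay rate, arbitrary units
a, b = fit_power(moisture, rate)
relative_error = np.abs(a * moisture ** b - rate) / rate
```

The paper's averaged relative errors (1.53% to 11.82%) come from exactly this kind of comparison between model-predicted and sieved decay mass, after field rainfall and temperature are mapped onto the laboratory variables.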
Directory of Open Access Journals (Sweden)
Katsutoshi Yoshizato
2009-01-01
Preclinical studies to predict the efficacy and safety of drugs have conventionally been conducted almost exclusively in mice and rats as rodents, despite the differences in drug metabolism between humans and rodents. Furthermore, human (h) viruses such as hepatitis viruses do not infect the rodent liver. A mouse bearing a liver in which the hepatocytes have been largely repopulated with h-hepatocytes would overcome some of these disadvantages. We have established a practical, efficient, and large-scale production system for such mice. Accumulated evidence has demonstrated that these hepatocyte-humanized mice are a useful and reliable animal model, exhibiting h-type responses in a series of in vivo drug processing (absorption, distribution, metabolism, excretion) experiments and in the infection and propagation of hepatic viruses. In this review, we present the current status of studies on chimeric mice and describe their usefulness in the study of peroxisome proliferator-activated receptors.
Persing, T. Ray; Bellish, Christine A.; Brandon, Jay; Kenney, P. Sean; Carzoo, Susan; Buttrill, Catherine; Guenther, Arlene
2005-01-01
Several aircraft airframe modeling approaches are currently being used in the DoD community for acquisition, threat evaluation, training, and other purposes. To date there has been no clear empirical study of the impact of airframe simulation fidelity on piloted real-time aircraft simulation study results, or when use of a particular level of fidelity is indicated. This paper documents a series of piloted simulation studies using three different levels of airframe model fidelity. This study was conducted using the NASA Langley Differential Maneuvering Simulator. Evaluations were conducted with three pilots for scenarios requiring extensive maneuvering of the airplanes during air combat. In many cases, a low-fidelity modified point-mass model may be sufficient to evaluate the combat effectiveness of the aircraft. However, in cases where high angle-of-attack flying qualities and aerodynamic performance are a factor or when precision tracking ability of the aircraft must be represented, use of high-fidelity models is indicated.
Staying Power of Churn Prediction Models
Risselada, Hans; Verhoef, Peter C.; Bijmolt, Tammo H. A.
In this paper, we study the staying power of various churn prediction models. Staying power is defined as the predictive performance of a model in a number of periods after the estimation period. We examine two methods, logit models and classification trees, both with and without applying a bagging
Liberti, Maria V; Dai, Ziwei; Wardell, Suzanne E; Baccile, Joshua A; Liu, Xiaojing; Gao, Xia; Baldi, Robert; Mehrmohamadi, Mahya; Johnson, Marc O; Madhukar, Neel S; Shestov, Alexander A; Chio, Iok I Christine; Elemento, Olivier; Rathmell, Jeffrey C; Schroeder, Frank C; McDonnell, Donald P; Locasale, Jason W
2017-10-03
Targeted cancer therapies that use genetics are successful, but principles for selectively targeting tumor metabolism that is also dependent on the environment remain unknown. We now show that differences in rate-controlling enzymes during the Warburg effect (WE), the most prominent hallmark of cancer cell metabolism, can be used to predict a response to targeting glucose metabolism. We establish a natural product, koningic acid (KA), to be a selective inhibitor of GAPDH, an enzyme we characterize to have differential control properties over metabolism during the WE. With machine learning and integrated pharmacogenomics and metabolomics, we demonstrate that KA efficacy is not determined by the status of individual genes, but by the quantitative extent of the WE, leading to a therapeutic window in vivo. Thus, the basis of targeting the WE can be encoded by molecular principles that extend beyond the status of individual genes. Copyright © 2017 Elsevier Inc. All rights reserved.
Directory of Open Access Journals (Sweden)
Arefa Jafarzadeh Kohneloo
2015-09-01
Background: Recent studies have shown that genes affecting the survival time of cancer patients play an important role as risk or preventive factors. The present study was designed to determine genes affecting the survival time of patients with diffuse large B-cell lymphoma and to predict survival time using these selected genes. Materials & Methods: This cohort study was conducted on 40 patients with diffuse large B-cell lymphoma. For these patients, the expression of 2042 genes was measured. To predict survival time, a semi-parametric additive survival model was combined with two gene selection methods, elastic net and Lasso. The two methods were evaluated by plotting the area under the ROC curve over time and calculating the integral of this curve. Results: Based on our findings, the elastic net method identified 10 genes, and the Lasso-Cox method identified 7 genes. GENE3325X increased the survival time (P=0.006), whereas GENE3980X and GENE377X reduced the survival time (P=0.004). These three genes were selected as important genes in both methods. Conclusion: This study showed that the elastic net method outperformed the common Lasso method in terms of predictive power. Moreover, applying the additive model instead of Cox regression and using microarray data is a usable way to predict the survival time of patients.
Directory of Open Access Journals (Sweden)
K. Lee
2002-01-01
This paper reports the application to vegetation canopies of a coherent model for the propagation of electromagnetic radiation through a stratified medium. The resulting multi-layer vegetation model is plausibly realistic in that it recognises the dielectric permittivity of the vegetation matter, the mixing of the dielectric permittivities for vegetation and air within the canopy and, in simplified terms, the overall vertical distribution of dielectric permittivity and temperature through the canopy. Any sharp changes in the dielectric profile of the canopy resulted in interference effects manifested as oscillations in the microwave brightness temperature as a function of canopy height or look angle. However, when Gaussian broadening of the top and bottom of the canopy (reflecting the natural variability between plants) was included within the model, these oscillations were eliminated. The model parameters required to specify the dielectric profile within the canopy, particularly the parameters that quantify the dielectric mixing between vegetation and air in the canopy, are not usually available in typical field experiments. Thus, the feasibility of specifying these parameters using an advanced single-criterion, multiple-parameter optimisation technique was investigated by automatically minimizing the difference between the modelled and measured brightness temperatures. The results imply that the mixing parameters can be so determined but only if other parameters that specify vegetation dry matter and water content are measured independently. The new model was then applied to investigate the sensitivity of microwave emission to specific vegetation parameters. Keywords: passive microwave, soil moisture, vegetation, SMOS, retrieval
Comparison of Prediction-Error-Modelling Criteria
DEFF Research Database (Denmark)
Jørgensen, John Bagterp; Jørgensen, Sten Bay
2007-01-01
Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...
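For a scalar linear state-space model, the single-step prediction-error (maximum-likelihood) criterion compared in this record reduces to summing normalized squared Kalman innovations plus a log-variance term. A minimal sketch (the model, noise levels, and candidate parameters are illustrative, not the paper's):

```python
import numpy as np

# Simulate x[k+1] = a*x[k] + w,  y[k] = x[k] + v
rng = np.random.default_rng(1)
a_true, q, r, n = 0.9, 0.5, 1.0, 300
x = np.zeros(n)
for k in range(1, n):
    x[k] = a_true * x[k - 1] + rng.normal(scale=np.sqrt(q))
y = x + rng.normal(scale=np.sqrt(r), size=n)

def pem_criterion(a):
    """Gaussian ML prediction-error criterion from one-step Kalman innovations."""
    xhat, P, J = 0.0, 1.0, 0.0
    for yk in y:
        e = yk - xhat                  # innovation (one-step prediction error)
        S = P + r                      # innovation variance
        J += e * e / S + np.log(S)
        K = P / S                      # Kalman gain (measurement update)
        xhat = a * (xhat + K * e)      # time update with candidate dynamics a
        P = a * a * (1.0 - K) * P + q
    return J

J_true, J_wrong = pem_criterion(0.9), pem_criterion(0.5)
```

Minimizing this criterion over the model parameters is the estimation step; multi-step variants replace the one-step Kalman predictor with j-step predictors, which is the comparison the abstract describes.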
Directory of Open Access Journals (Sweden)
Kimberly J Van Meter
Nutrient legacies in anthropogenic landscapes, accumulated over decades of fertilizer application, lead to time lags between implementation of conservation measures and improvements in water quality. Quantification of such time lags has remained difficult, however, due to an incomplete understanding of controls on nutrient depletion trajectories after changes in land-use or management practices. In this study, we have developed a parsimonious watershed model for quantifying catchment-scale time lags based on both soil nutrient accumulations (biogeochemical legacy) and groundwater travel time distributions (hydrologic legacy). The model accurately predicted the time lags observed in an Iowa watershed that had undergone a 41% conversion of area from row crop to native prairie. We explored the time scales of change for stream nutrient concentrations as a function of both natural and anthropogenic controls, from topography to spatial patterns of land-use change. Our results demonstrate that the existence of biogeochemical nutrient legacies increases time lags beyond those due to hydrologic legacy alone. In addition, we show that the maximum concentration reduction benefits vary according to the spatial pattern of intervention, with preferential conversion of land parcels having the shortest catchment-scale travel times providing proportionally greater concentration reductions as well as faster response times. In contrast, a random pattern of conversion results in a 1:1 relationship between percent land conversion and percent concentration reduction, irrespective of denitrification rates within the landscape. Our modeling framework allows for the quantification of tradeoffs between costs associated with implementation of conservation measures and the time needed to see the desired concentration reductions, making it of great value to decision makers regarding optimal implementation of watershed conservation measures.
DEFF Research Database (Denmark)
Berning, Torsten
2014-01-01
The micro-porous layer (MPL) in a proton exchange membrane fuel cell is frequently believed to constitute a barrier for the liquid water owing to its low hydraulic permeability compared to the porous substrate. When micro-channels are carved into the MPL on the side facing the catalyst layer...... conditions. This modeling study investigates the effect of such micro-channels on the predicted membrane hydration level for a predetermined set of operating conditions with a three-dimensional computational fluid dynamics model that utilizes the multi-fluid approach....
Models for predicting fuel consumption in sagebrush-dominated ecosystems
Clinton S. Wright
2013-01-01
Fuel consumption predictions are necessary to accurately estimate or model fire effects, including pollutant emissions during wildland fires. Fuel and environmental measurements on a series of operational prescribed fires were used to develop empirical models for predicting fuel consumption in big sagebrush (Artemisia tridentata Nutt.) ecosystems....
Calibration of PMIS pavement performance prediction models.
2012-02-01
Improve the accuracy of TxDOT's existing pavement performance prediction models through calibrating these models using actual field data obtained from the Pavement Management Information System (PMIS). Ensure logical performance superiority patte...
Predictive Model Assessment for Count Data
National Research Council Canada - National Science Library
Czado, Claudia; Gneiting, Tilmann; Held, Leonhard
2007-01-01
.... In case studies, we critique count regression models for patent data, and assess the predictive performance of Bayesian age-period-cohort models for larynx cancer counts in Germany. Key words: Calibration...
Modeling and Prediction Using Stochastic Differential Equations
DEFF Research Database (Denmark)
Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp
2016-01-01
deterministic and can predict the future perfectly. A more realistic approach would be to allow for randomness in the model due to e.g., the model be too simple or errors in input. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs......) for modeling and forecasting. It is argued that this gives models and predictions which better reflect reality. The SDE approach also offers a more adequate framework for modeling and a number of efficient tools for model building. A software package (CTSM-R) for SDE-based modeling is briefly described....... that describes the variation between subjects. The ODE setup implies that the variation for a single subject is described by a single parameter (or vector), namely the variance (covariance) of the residuals. Furthermore the prediction of the states is given as the solution to the ODEs and hence assumed...
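SDE-based models of the kind handled by CTSM-R are typically simulated with the Euler-Maruyama scheme, which adds a scaled Wiener increment to each explicit Euler step. A minimal sketch (not CTSM-R itself; the drift and diffusion functions are illustrative), shown on dX = -X dt + g dW:

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, t_grid, rng):
    """Simulate dX = f(X) dt + g(X) dW on a fixed time grid."""
    x = np.empty(len(t_grid))
    x[0] = x0
    for i in range(1, len(t_grid)):
        dt = t_grid[i] - t_grid[i - 1]
        dw = rng.normal(scale=np.sqrt(dt))   # Wiener increment, var = dt
        x[i] = x[i - 1] + drift(x[i - 1]) * dt + diffusion(x[i - 1]) * dw
    return x

t = np.linspace(0.0, 1.0, 1001)
rng = np.random.default_rng(0)
# With zero diffusion the scheme reduces to explicit Euler for dx/dt = -x
path = euler_maruyama(lambda x: -x, lambda x: 0.0, 1.0, t, rng)
```

Setting the diffusion to a positive constant yields the randomness the abstract argues for: many simulated paths then describe a predictive distribution rather than a single deterministic forecast.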
Wu, Ming-Chang; Lin, Gwo-Fong; Feng, Lei; Hwang, Gong-Do
2017-04-01
In Taiwan, heavy rainfall brought by typhoons often causes serious disasters and leads to loss of life and property. In order to reduce the impact of these disasters, accurate rainfall forecasts are always important for civil protection authorities to prepare proper measures in advance. In this study, a methodology is proposed for providing very short-term (1- to 6-h ahead) rainfall forecasts in a basin-scale area. The proposed methodology is developed based on the use of analogy reasoning approach to effectively integrate the ensemble precipitation forecasts from a numerical weather prediction system in Taiwan. To demonstrate the potential of the proposed methodology, an application to a basin-scale area (the Choshui River basin located in west-central Taiwan) during five typhoons is conducted. The results indicate that the proposed methodology yields more accurate hourly rainfall forecasts, especially the forecasts with a lead time of 1 to 3 hours. On average, improvement of the Nash-Sutcliffe efficiency coefficient is about 14% due to the effective use of the ensemble forecasts through the proposed methodology. The proposed methodology is expected to be useful for providing accurate very short-term rainfall forecasts during typhoons.
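The skill score reported here, the Nash-Sutcliffe efficiency (NSE), compares forecast errors against the observed mean as a baseline: NSE is 1 for a perfect forecast and 0 for a forecast no better than the mean. A minimal sketch (the rainfall numbers are illustrative):

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """NSE = 1 - SSE(model) / SSE(mean baseline)."""
    obs = np.asarray(observed, float)
    sim = np.asarray(simulated, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([3.0, 7.0, 12.0, 9.0, 4.0])              # hourly rainfall, illustrative
perfect = nash_sutcliffe(obs, obs)                       # perfect forecast
baseline = nash_sutcliffe(obs, np.full(5, obs.mean()))  # mean-only forecast
```

A 14% improvement in this coefficient, as the abstract reports, therefore reflects a direct reduction in squared forecast error relative to the climatological baseline.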
Directory of Open Access Journals (Sweden)
Woochul Nam
Full Text Available Kinesins are molecular motors which walk along microtubules by moving their heads to different binding sites. The motion of kinesin is realized by a conformational change in the structure of the kinesin molecule and by a diffusion of one of its two heads. In this study, a novel model is developed to account for the 2D diffusion of kinesin heads to several neighboring binding sites (near the surface of microtubules). To determine the direction of the next step of a kinesin molecule, this model considers the extension in the neck linkers of kinesin and the dynamic behavior of the coiled-coil structure of the kinesin neck. Also, the mechanical interference between kinesins and obstacles anchored on the microtubules is characterized. The model predicts that both the kinesin velocity and run length (i.e., the walking distance before detaching from the microtubule) are reduced by static obstacles. The run length is decreased more significantly by static obstacles than the velocity. Moreover, our model is able to predict the motion of kinesin when other (several) motors also move along the same microtubule. Furthermore, it suggests that the effect of mechanical interaction/interference between motors is much weaker than the effect of static obstacles. Our newly developed model can be used to address unanswered questions regarding degraded transport caused by the presence of excessive tau proteins on microtubules.
Predictive models for arteriovenous fistula maturation.
Al Shakarchi, Julien; McGrogan, Damian; Van der Veer, Sabine; Sperrin, Matthew; Inston, Nicholas
2016-05-07
Haemodialysis (HD) is a lifeline therapy for patients with end-stage renal disease (ESRD). A critical factor in the survival of renal dialysis patients is the surgical creation of vascular access, and international guidelines recommend arteriovenous fistulas (AVF) as the gold standard of vascular access for haemodialysis. Despite this, AVFs have been associated with high failure rates. Although risk factors for AVF failure have been identified, their utility for predicting AVF failure through predictive models remains unclear. The objectives of this review are to systematically and critically assess the methodology and reporting of studies developing prognostic predictive models for AVF outcomes and assess them for suitability in clinical practice. Electronic databases were searched for studies reporting prognostic predictive models for AVF outcomes. Dual review was conducted to identify studies that reported on the development or validation of a model constructed to predict AVF outcome following creation. Data were extracted on study characteristics, risk predictors, statistical methodology, model type, as well as validation process. We included four different studies reporting five different predictive models. Parameters identified that were common to all scoring systems were age and cardiovascular disease. This review has found a small number of predictive models in vascular access. The disparity between each study limits the development of a unified predictive model.
Model Predictive Control Fundamentals | Orukpe | Nigerian Journal ...
African Journals Online (AJOL)
Model Predictive Control (MPC) has developed considerably over the last two decades, both within the research control community and in industries. MPC strategy involves the optimization of a performance index with respect to some future control sequence, using predictions of the output signal based on a process model, ...
Unreachable Setpoints in Model Predictive Control
DEFF Research Database (Denmark)
Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp
2008-01-01
In this work, a new model predictive controller is developed that handles unreachable setpoints better than traditional model predictive control methods. The new controller induces an interesting fast/slow asymmetry in the tracking response of the system. Nominal asymptotic stability of the optim...
Micro-mechanical studies on graphite strength prediction models
Kanse, Deepak; Khan, I. A.; Bhasin, V.; Vaze, K. K.
2013-06-01
The influence of the type of loading and of size effects on the failure strength of graphite was studied using the Weibull model. It was observed that this model over-predicts the size effect in tension. However, incorporation of the grain size effect in the Weibull model allows a more realistic simulation of size effects. Numerical prediction of the strength of a four-point bend specimen was made using the Weibull parameters obtained from tensile test data. Effective volume calculations were carried out and the predicted strength was subsequently compared with experimental data. It was found that the Weibull model can predict mean flexural strength with reasonable accuracy even when the grain size effect was not incorporated. In addition, the effects of microstructural parameters on failure strength were analyzed using the Rose and Tucker model. Uni-axial tensile, three-point bend and four-point bend strengths were predicted using this model and compared with the experimental data. It was found that this model predicts flexural strength within 10%. For uni-axial tensile strength, the difference was 22%, which can be attributed to the smaller number of tests on tensile specimens. In order to develop a failure surface of graphite under a multi-axial state of stress, an open-ended hollow tube of graphite was subjected to internal pressure and axial load, and the Batdorf model was employed to calculate the failure probability of the tube. A bi-axial failure surface was generated in the first and fourth quadrants for 50% failure probability by varying both internal pressure and axial load.
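The weakest-link size scaling that underlies the Weibull predictions above can be sketched numerically. This is a generic Python illustration of the two-parameter Weibull model with effective-volume scaling; the characteristic strength, Weibull modulus, and volumes are made-up illustration values, not the paper's fitted graphite parameters.

```python
import math

def weibull_failure_prob(stress, sigma0, m, v_eff, v0=1.0):
    """Weakest-link failure probability with effective-volume scaling:
    Pf = 1 - exp(-(V/V0) * (stress/sigma0)^m)."""
    return 1.0 - math.exp(-(v_eff / v0) * (stress / sigma0) ** m)

def mean_strength(sigma0, m, v_eff, v0=1.0):
    """Mean strength of a specimen of effective volume v_eff
    (mean of the corresponding Weibull distribution)."""
    return sigma0 * (v0 / v_eff) ** (1.0 / m) * math.gamma(1.0 + 1.0 / m)

# hypothetical parameters: smaller effective volume -> higher mean strength,
# which is the tension-vs-bending size effect discussed in the abstract
tensile = mean_strength(sigma0=30.0, m=10.0, v_eff=1.0)
bend = mean_strength(sigma0=30.0, m=10.0, v_eff=0.1)
```

Because the bend specimen stresses only a fraction of its volume, its effective volume is smaller and its predicted mean strength higher, exactly the size effect the model is used to transfer from tensile data to flexural predictions.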
Hybrid approaches to physiologic modeling and prediction
Olengü, Nicholas O.; Reifman, Jaques
2005-05-01
This paper explores how the accuracy of a first-principles physiological model can be enhanced by integrating data-driven, "black-box" models with the original model to form a "hybrid" model system. Both linear (autoregressive) and nonlinear (neural network) data-driven techniques are separately combined with a first-principles model to predict human body core temperature. Rectal core temperature data from nine volunteers, subject to four 30/10-minute cycles of moderate exercise/rest regimen in both CONTROL and HUMID environmental conditions, are used to develop and test the approach. The results show significant improvements in prediction accuracy, with average improvements of up to 30% for prediction horizons of 20 minutes. The models developed from one subject's data are also used in the prediction of another subject's core temperature. Initial results for this approach for a 20-minute horizon show no significant improvement over the first-principles model by itself.
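The hybrid idea, letting a data-driven model forecast the residual of a first-principles model and adding the correction back, can be sketched in a few lines. This is a minimal AR(1) residual corrector on an invented toy temperature series, standing in for the paper's autoregressive variant (the neural-network variant and the physiological model itself are not reproduced here).

```python
def ar1_fit(resid):
    """Least-squares AR(1) coefficient of a residual series."""
    num = sum(resid[t] * resid[t - 1] for t in range(1, len(resid)))
    den = sum(r * r for r in resid[:-1])
    return num / den

def hybrid_predict(physical_pred, last_resid, phi):
    """First-principles prediction plus the AR(1) forecast of its residual."""
    return physical_pred + phi * last_resid

# toy data: core temperature drifts above a constant physical model (37.0)
observed = [37.0 + 0.1 * t for t in range(10)]
resid = [y - 37.0 for y in observed]
phi = ar1_fit(resid)
next_pred = hybrid_predict(37.0, resid[-1], phi)  # forecast for the next step
```

On this toy series the next observation would be 38.0; the hybrid forecast lands far closer to it than the uncorrected physical model, which is the mechanism behind the reported accuracy gains.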
Directory of Open Access Journals (Sweden)
Erol Muzır
2010-09-01
Full Text Available This paper is prepared to test the common opinion that multifactor asset pricing models produce superior predictions compared to single-factor models, and to evaluate the performance of the Arbitrage Pricing Theory (APT) and the Capital Asset Pricing Model (CAPM). For this purpose, the monthly return data from January 1996 to December 2004 of the stocks of 45 firms listed on the Istanbul Stock Exchange were used. Our factor analysis results show that 68.3% of the return variation can be explained by five factors. Although the APT model has generated a low coefficient of determination, 28.3%, it proves to be more competent in explaining stock return changes when compared to the CAPM, which has an inferior explanatory power, 5.4%. Furthermore, we have observed that the APT is more robust also in capturing the effects of any economic crisis on return variations.
Lucretia Olson; M. Schwartz
2013-01-01
Many species at high trophic levels are predicted to be impacted by shifts in habitat associated with climate change. While temperate coniferous forests are predicted to be one of the least affected ecosystems, the impact of shifting habitat on terrestrial carnivores that live within these ecosystems may depend on the dispersal rates of the species and the patchiness...
Wang, Xuedong; Ji, Dongxue; Chen, Xiaolin; Ma, Yibing; Yang, Junxing; Ma, Jingxing; Li, Xiaoxiu
2017-11-01
Current risk assessment models for metals such as the biotic ligand model (BLM) are usually applied to individual metals, yet toxic metals are rarely found singly in the environment. In the present research, the toxicity of Cu and Zn alone and together was studied in wheat (Triticum aestivum L.) using different Ca2+ and Mg2+ concentrations, pH levels and Zn:Cu concentration ratios. The aim of the study was to better understand the toxicity effects of these two metals using BLMs and toxic units (TUs) from single and combined metal toxicity data. The results of single-metal toxicity tests showed that the toxicity of Cu and Zn tended to decrease with increasing Ca2+ or Mg2+ concentrations, and that the effects of pH on Cu and Zn toxicity were related not only to free Cu2+ and Zn2+ activity, respectively, but also to other inorganic metal complex species. For the metal mixture, Cu-Zn interactions based on free ion activities were primarily additive for the different Ca2+ and Mg2+ concentrations and levels of pH. The toxicity data of individual metals derived by the BLM, which incorporated Ca2+ and Mg2+ competition and the toxicity of inorganic metal complexes in a single-metal toxicity assessment, could predict the combined toxicity as a function of TU. There was good agreement between the predicted and observed effects (root mean square error [RMSE] = 7.15, R2 = 0.97) compared to that using a TU method with a model based on free ion activity (RMSE = 14.29, R2 = 0.86). The overall findings indicated that bioavailability models that include those biochemistry processes may accurately predict the toxicity of metal mixtures. Copyright © 2017 Elsevier Ltd. All rights reserved.
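The toxic-unit bookkeeping used to combine single-metal toxicities can be sketched as follows. The EC50 values and the logistic dose-response shape are hypothetical illustration choices, and the real BLM additionally accounts for Ca2+/Mg2+ competition and metal speciation, which this sketch omits.

```python
import math

def toxic_units(conc, ec50):
    """TU of one metal: exposure concentration over its EC50."""
    return conc / ec50

def mixture_effect(tu_total, slope=1.0):
    """Concentration-addition dose-response: 50% effect at total TU = 1
    (logistic in log-TU; the slope is a hypothetical shape parameter)."""
    return 1.0 / (1.0 + math.exp(-slope * math.log(tu_total)))

# Cu and Zn each at half their (hypothetical) EC50: a strictly additive mixture
tu_total = toxic_units(5.0, 10.0) + toxic_units(5.0, 10.0)
effect = mixture_effect(tu_total)
```

Under strict additivity, two metals each at half their EC50 sum to one toxic unit and are predicted to produce the median effect, which is the baseline against which the paper judges whether Cu-Zn interactions are additive.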
Cangioli, Filippo; Pennacchi, Paolo; Vannini, Giuseppe; Ciuchicchi, Lorenzo
2018-01-01
The influence of sealing components on the rotordynamic stability of turbomachinery has become a key topic because the oil and gas market is increasingly demanding high rotational speeds and high efficiency. This leads turbomachinery manufacturers to design higher flexibility ratios and to reduce the clearance of the seals. Accurate prediction of the effective damping of seals is critical to avoid instability problems; in recent years, "negative-swirl" swirl brakes have been used to reverse the circumferential direction of the inlet flow, which changes the sign of the cross-coupled stiffness coefficients and generates stabilizing forces. Experimental tests for a teeth-on-stator labyrinth seal were performed by manufacturers with positive and negative pre-swirl values to investigate the pre-swirl effect on the cross-coupled stiffness coefficient. Those results are used as a benchmark in this paper. To analyse the rotor-fluid interaction in seals, the bulk-flow numeric approach is more time efficient than computational fluid dynamics (CFD). Although the accuracy of coefficient prediction in bulk-flow models is satisfactory for liquid-phase applications, the accuracy of the results strongly depends on the operating conditions in the case of the gas phase. In this paper, the authors propose an improvement to the state-of-the-art bulk-flow model by introducing the effect of the energy equation in the zeroth-order solution to better characterize real gas properties due to the enthalpy variation along the seal cavities. Consideration of the energy equation allows for a better estimation of the coefficients in the case of a negative pre-swirl ratio; therefore, it extends the prediction fidelity over a wide range of operating conditions. The numeric results are also compared to the state-of-the-art bulk-flow model, which highlights the improvement in the model.
Waterman, R C; Caton, J S; Löest, C A; Petersen, M K; Roberts, A J
2014-07-01
Interannual variation of forage quantity and quality driven by precipitation events influence beef livestock production systems within the Southern and Northern Plains and Pacific West, which combined represent 60% (approximately 17.5 million) of the total beef cows in the United States. The beef cattle requirements published by the NRC are an important tool and excellent resource for both professionals and producers to use when implementing feeding practices and nutritional programs within the various production systems. The objectives of this paper include evaluation of the 1996 Beef NRC model in terms of effectiveness in predicting extensive range beef cow performance within arid and semiarid environments using available data sets, identifying model inefficiencies that could be refined to improve the precision of predicting protein supply and demand for range beef cows, and last, providing recommendations for future areas of research. An important addition to the current Beef NRC model would be to allow users to provide region-specific forage characteristics and the ability to describe supplement composition, amount, and delivery frequency. Beef NRC models would then need to be modified to account for the N recycling that occurs throughout a supplementation interval and the impact that this would have on microbial efficiency and microbial protein supply. The Beef NRC should also consider the role of ruminal and postruminal supply and demand of specific limiting AA. Additional considerations should include the partitioning effects of nitrogenous compounds under different physiological production stages (e.g., lactation, pregnancy, and periods of BW loss). The intent of information provided is to aid revision of the Beef NRC by providing supporting material for changes and identifying gaps in existing scientific literature where future research is needed to enhance the predictive precision and application of the Beef NRC models.
Directory of Open Access Journals (Sweden)
Obaid ur Rehman
2017-12-01
Full Text Available The performance of a proton exchange membrane (PEM) fuel cell relies largely on the properties of the gas diffusion layer (GDL), which supports heat and mass transfer across the membrane electrode assembly. A novel approach is adopted in this work to analyze the activity of the GDL during fuel cell operation on a large-scale model. The model, with a mesh size of 1.3 million computational cells for a 50 cm2 active area, was simulated by a parallel computing technique via a computer cluster. A grid independence study showed less than 5% deviation in the criterion parameter as the mesh size was increased to 1.8 million cells. Good approximation was achieved as the model was validated against experimental data for a Pt loading of 1 mg cm-2. The results showed that a GDL with higher thermal conductivity prevented the PEM from drying and led to improved protonic conduction. A GDL with higher porosity enhanced the reaction but resulted in low output voltage, which demonstrated the effect of contact resistance. In addition, reduced porosity under the rib regions was significant, which resulted in lower gas diffusion and heat and water accumulation.
Evaluating the Predictive Value of Growth Prediction Models
Murphy, Daniel L.; Gaertner, Matthew N.
2014-01-01
This study evaluates four growth prediction models--projection, student growth percentile, trajectory, and transition table--commonly used to forecast (and give schools credit for) middle school students' future proficiency. Analyses focused on vertically scaled summative mathematics assessments, and two performance standards conditions (high…
Model predictive control classical, robust and stochastic
Kouvaritakis, Basil
2016-01-01
For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered and the state of the art in computationally tractable methods based on uncertainty tubes presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...
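The receding-horizon principle described above reduces to a few lines for an unconstrained scalar linear system. This horizon-one sketch (the plant parameters a, b and input weight r are arbitrary illustration values) shows the core loop that all of classical, robust and stochastic MPC elaborates: minimize a predicted cost, apply the first input, re-measure, repeat.

```python
def mpc_input(x, a, b, r):
    """Horizon-1 MPC for x+ = a*x + b*u with cost (x+)^2 + r*u^2.
    The unconstrained minimizer is available in closed form."""
    return -a * b * x / (b * b + r)

def closed_loop(x0, a=1.2, b=1.0, r=0.1, steps=20):
    """Receding horizon: optimize over the prediction, apply the
    first input, measure the new state, and repeat."""
    x, traj = x0, [x0]
    for _ in range(steps):
        u = mpc_input(x, a, b, r)
        x = a * x + b * u
        traj.append(x)
    return traj

traj = closed_loop(5.0)  # open-loop-unstable plant (a > 1) regulated toward 0
```

Constraints, additive disturbances, and parametric uncertainty (the subjects of the robust and stochastic chapters) all complicate the inner optimization, but the outer receding-horizon loop is unchanged.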
Directory of Open Access Journals (Sweden)
Vincent Frappier
2014-04-01
Full Text Available Normal mode analysis (NMA) methods are widely used to study dynamic aspects of protein structures. Two critical components of NMA methods are the level of coarse-graining used to represent protein structures and the choice of potential energy functional form. There is a trade-off between speed and accuracy in different choices. At one extreme one finds accurate but slow molecular-dynamics based methods with all-atom representations and detailed atomic potentials. At the other extreme are fast elastic network model (ENM) methods with Cα-only representations and simplified potentials based on geometry alone, and thus oblivious to protein sequence. Here we present ENCoM, an Elastic Network Contact Model that employs a potential energy function including a pairwise atom-type non-bonded interaction term, and thus makes it possible to consider the effect of the specific nature of amino acids on dynamics within the context of NMA. ENCoM is as fast as existing ENM methods and outperforms such methods in the generation of conformational ensembles. Here we introduce a new application for NMA methods with the use of ENCoM in the prediction of the effect of mutations on protein stability. While existing methods are based on machine learning or enthalpic considerations, the use of ENCoM, based on vibrational normal modes, rests on entropic considerations. This represents a novel area of application for NMA methods and a novel approach for the prediction of the effect of mutations. We compare ENCoM to a large number of methods in terms of accuracy and self-consistency. We show that the accuracy of ENCoM is comparable to that of the best existing methods. We show that existing methods are biased towards the prediction of destabilizing mutations and that ENCoM is less biased at predicting stabilizing mutations.
Combining GPS measurements and IRI model predictions
International Nuclear Information System (INIS)
Hernandez-Pajares, M.; Juan, J.M.; Sanz, J.; Bilitza, D.
2002-01-01
The free electrons distributed in the ionosphere (between one hundred and thousands of km in height) produce a frequency-dependent effect on Global Positioning System (GPS) signals: a delay in the pseudorange and an advance in the carrier phase. These effects are proportional to the columnar electron density between the satellite and receiver, i.e. the integrated electron density along the ray path. Global ionospheric TEC (total electron content) maps can be obtained with GPS data from a network of ground IGS (International GPS Service) reference stations with an accuracy of a few TEC units. The comparison with the TOPEX TEC, mainly measured over the oceans far from the IGS stations, shows a mean bias and standard deviation of about 2 and 5 TECUs, respectively. The discrepancies between the STEC predictions and the observed values show an RMS typically below 5 TECUs (which also includes the alignment code noise). The existence of a growing database of 2-hourly global TEC maps with a resolution of 5x2.5 degrees in longitude and latitude can be used to improve the IRI prediction capability for TEC. When the IRI predictions and the GPS estimations are compared for a three-month period around the Solar Maximum, they are in good agreement for middle latitudes. An overestimation of IRI TEC has been found at the extreme latitudes, the IRI predictions being typically two times higher than the GPS estimations. Finally, local fits of the IRI model can be done by tuning the SSN from STEC GPS observations.
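The frequency-dependent group delay mentioned above follows the standard first-order relation d = 40.3 * TEC / f^2, with the delay in metres, TEC in electrons per m^2 (1 TECU = 1e16 el/m^2), and f in Hz. A quick Python check of the magnitudes involved:

```python
def iono_delay_m(tec_tecu, freq_hz):
    """First-order ionospheric group delay in metres:
    d = 40.3 * TEC / f^2, TEC given in TECU (1 TECU = 1e16 el/m^2).
    The carrier phase is advanced by the same magnitude."""
    return 40.3 * tec_tecu * 1e16 / freq_hz ** 2

GPS_L1 = 1575.42e6  # Hz
GPS_L2 = 1227.60e6  # Hz
delay_l1 = iono_delay_m(10.0, GPS_L1)  # 10 TECU of slant TEC, roughly 1.6 m
```

The 1/f^2 dependence is what lets dual-frequency GPS receivers solve for TEC directly: the same electron column delays L2 more than L1, and the difference isolates the ionospheric term.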
A Global Model for Bankruptcy Prediction.
Alaminos, David; Del Castillo, Agustín; Fernández, Manuel Ángel
2016-01-01
The recent world financial crisis has increased the number of bankruptcies in numerous countries and has resulted in a new area of research which responds to the need to predict this phenomenon, not only at the level of individual countries, but also at a global level, offering explanations of the common characteristics shared by the affected companies. Nevertheless, few studies focus on the prediction of bankruptcies globally. In order to compensate for this lack of empirical literature, this study has used a methodological framework of logistic regression to construct predictive bankruptcy models for Asia, Europe and America, and other global models for the whole world. The objective is to construct a global model with a high capacity for predicting bankruptcy in any region of the world. The results obtained have allowed us to confirm the superiority of the global model in comparison to regional models over periods of up to three years prior to bankruptcy.
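A minimal version of the logistic-regression machinery such bankruptcy models rest on can be sketched in a few lines. The single leverage-ratio feature and the six observations are invented toy data, not the study's multi-region sample or covariates.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.5, epochs=2000):
    """Single-feature logistic regression fitted by plain gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(w * x + b) - y  # prediction error for this firm
            gw += err * x
            gb += err
        w -= lr * gw / len(xs)
        b -= lr * gb / len(xs)
    return w, b

# invented toy sample: leverage ratio -> bankrupt (1) or solvent (0)
leverage = [0.2, 0.3, 0.4, 0.6, 0.7, 0.9]
bankrupt = [0, 0, 0, 1, 1, 1]
w, b = fit_logistic(leverage, bankrupt)
```

The fitted model maps any firm's leverage to a bankruptcy probability via sigmoid(w*x + b); the study's global-versus-regional comparison amounts to fitting such models on different samples and comparing their out-of-sample discrimination.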
Soniat, Thomas M.; Conzelmann, Craig P.; Byrd, Jason D.; Roszell, Dustin P.; Bridevaux, Joshua L.; Suir, Kevin J.; Colley, Susan B.
2013-01-01
In an attempt to decelerate the rate of coastal erosion and wetland loss, and protect human communities, the state of Louisiana developed its Comprehensive Master Plan for a Sustainable Coast. The master plan proposes a combination of restoration efforts including shoreline protection, marsh creation, sediment diversions, and ridge, barrier island, and hydrological restoration. Coastal restoration projects, particularly the large-scale diversions of fresh water from the Mississippi River, needed to supply sediment to an eroding coast potentially impact oyster populations and oyster habitat. An oyster habitat suitability index model is presented that evaluates the effects of a proposed sediment and freshwater diversion into Lower Breton Sound. Voluminous freshwater, needed to suspend and broadly distribute river sediment, will push optimal salinities for oysters seaward and beyond many of the existing reefs. Implementation and operation of the Lower Breton Sound diversion structure as proposed would render about 6,173 ha of hard bottom immediately east of the Mississippi River unsuitable for the sustained cultivation of oysters. If historical harvests are to be maintained in this region, a massive and unprecedented effort to relocate private leases and restore oyster bottoms would be required. Habitat suitability index model results indicate that the appropriate location for such efforts is to the east and north of the Mississippi River Gulf Outlet.
Energy Technology Data Exchange (ETDEWEB)
Little, C.A.; Fields, D.E.; Emerson, C.J.; Hiromoto, G.
1981-09-01
PRESTO (Prediction of Radiation Effects from Shallow Trench Operations) is a computer code developed under US Environmental Protection Agency (EPA) funding to evaluate possible health effects from shallow land burial trenches. The model is intended to be generic and to assess radionuclide transport, ensuing exposure, and health impact to a static local population for a 1000-y period following the end of burial operations. Human exposure scenarios considered by the model include normal releases (including leaching and operational spillage), human intrusion, and site farming or reclamation. Pathways and processes of transit from the trench to an individual or population include: groundwater transport, overland flow, erosion, surface water dilution, resuspension, atmospheric transport, deposition, inhalation, and ingestion of contaminated beef, milk, crops, and water. Both population doses and individual doses are calculated as well as doses to the intruder and farmer.
Fingerprint verification prediction model in hand dermatitis.
Lee, Chew K; Chang, Choong C; Johor, Asmah; Othman, Puwira; Baba, Roshidah
2015-07-01
Fingerprint changes associated with hand dermatitis are a significant problem and affect fingerprint verification processes. This study was done to develop a clinically useful prediction model for fingerprint verification in patients with hand dermatitis. A case-control study involving 100 patients with hand dermatitis was conducted. All patients verified their thumbprints against their identity card. Registered fingerprints were randomized into a model derivation and a model validation group. The predictive model was derived using multiple logistic regression. Validation was done using the goodness-of-fit test. The fingerprint verification prediction model consists of a major criterion (fingerprint dystrophy area of ≥ 25%) and two minor criteria (long horizontal lines and long vertical lines). The presence of the major criterion predicts that verification will almost always fail, while the presence of both minor criteria and the presence of one minor criterion predict high and low risk of fingerprint verification failure, respectively. When none of the criteria are met, the fingerprint almost always passes the verification. The area under the receiver operating characteristic curve was 0.937, and the goodness-of-fit test showed agreement between the observed and expected numbers (P = 0.26). The derived fingerprint verification failure prediction model is validated and highly discriminatory in predicting risk of fingerprint verification failure in patients with hand dermatitis. © 2014 The International Society of Dermatology.
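The decision rule described in the abstract (one major criterion, two minor criteria) translates directly into code. The sketch below is a straightforward transcription of the stated rule; the category labels paraphrase the abstract's wording.

```python
def verification_risk(dystrophy_area_pct, long_horizontal_lines, long_vertical_lines):
    """Risk category per the derived model: one major criterion
    (dystrophy area >= 25%) and two minor criteria."""
    if dystrophy_area_pct >= 25:
        return "almost always fails"
    minors = int(long_horizontal_lines) + int(long_vertical_lines)
    if minors == 2:
        return "high risk of failure"
    if minors == 1:
        return "low risk of failure"
    return "almost always passes"
```

Such a tiered rule is easy to apply at the point of care, which is presumably why the authors chose a small criteria set over the full logistic score.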
Massive Predictive Modeling using Oracle R Enterprise
CERN. Geneva
2014-01-01
R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...
Marco-Rius, Francisco; Caballero, Pablo; Morán, Paloma; Garcia de Leaniz, Carlos
2013-01-01
Fish growth is commonly used as a proxy for fitness but this is only valid if individual growth variation can be interpreted in relation to conspecifics' performance. Unfortunately, assessing individual variation in growth rates is problematic under natural conditions because subjects typically need to be marked, repeated measurements of body size are difficult to obtain in the field, and recaptures may be limited to a few time events which will generally vary among individuals. The analysis of consecutive growth rings (circuli) found on scales and other hard structures offers an alternative to mark and recapture for examining individual growth variation in fish and other aquatic vertebrates where growth rings can be visualized, but accounting for autocorrelations and seasonal growth stanzas has proved challenging. Here we show how mixed-effects modelling of scale growth increments (inter-circuli spacing) can be used to reconstruct the growth trajectories of sea trout (Salmo trutta) and correctly classify 89% of individuals into early or late seaward migrants (smolts). Early migrants grew faster than late migrants during their first year of life in freshwater in two natural populations, suggesting that migration into the sea was triggered by ontogenetic (intrinsic) drivers, rather than by competition with conspecifics. Our study highlights the profound effects that early growth can have on age at migration of a paradigmatic fish migrant and illustrates how the analysis of inter-circuli spacing can be used to reconstruct the detailed growth of individuals when these cannot be marked or are only caught once. PMID:23613922
International Nuclear Information System (INIS)
Palacios, Elias; Ferreri, J.C.
1985-01-01
Safety analyses associated with the disposal of radioactive wastes in rock masses assume, in all cases, the existence of waters which will corrode the waste canisters, producing the liberation of radionuclides in the rock mass and their ultimate migration towards the biosphere. A conceptual discussion is presented which allows one to specify the requirements to be met by the models for the prediction of the effects of the emplacement of a high-level waste repository located at a depth of 500 m in granitic rock. Furthermore, the radionuclides giving the largest contribution to the radiological impact are identified. (Author) [es
Risk prediction model: Statistical and artificial neural network approach
Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim
2017-04-01
Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand the suitable approach, development and validation process of a risk prediction model. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was done. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, artificial neural network approaches to developing prediction models were more accurate than statistical approaches. However, currently only limited published literature discusses which approach is more accurate for risk prediction model development.
Fabry, D A; Van Tasell, D J
1990-12-01
The Articulation Index (AI) was used to evaluate an "adaptive frequency response" (AFR) hearing aid with amplification characteristics that automatically change to become more high-pass with increasing levels of background noise. Speech intelligibility ratings of connected discourse by normal-hearing subjects were predicted well by an empirically derived AI transfer function. That transfer function was used to predict aided speech intelligibility ratings by 12 hearing-impaired subjects wearing a master hearing aid with the Argosy Manhattan Circuit enabled (AFR-on) or disabled (AFR-off). For all subjects, the AI predicted no improvements in speech intelligibility for the AFR-on versus AFR-off condition, and no significant improvements in rated intelligibility were observed. The ability of the AI to predict aided speech intelligibility varied across subjects. However, ratings from every hearing-impaired subject were related monotonically to AI. Therefore, AI calculations may be used to predict relative--but not absolute--levels of speech intelligibility produced under different amplification conditions.
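The AI computation behind such predictions can be sketched with the classic band-audibility form: each frequency band contributes its importance weight times the audible fraction of the speech range in that band. The equal importance weights and the SNR values below are illustrative; published AI procedures define specific band importance functions and transfer functions.

```python
def articulation_index(band_snr_db, band_importance):
    """Band-audibility AI: each band contributes its importance weight
    times (SNR + 15)/30 clipped to [0, 1], summed over bands."""
    ai = 0.0
    for snr, weight in zip(band_snr_db, band_importance):
        audibility = min(1.0, max(0.0, (snr + 15.0) / 30.0))
        ai += weight * audibility
    return ai

# five equally weighted bands with illustrative SNRs (dB)
ai = articulation_index([30.0, 15.0, 0.0, -10.0, -20.0], [0.2] * 5)
```

An amplification scheme like AFR changes the per-band SNRs; if the recomputed AI does not rise, the model predicts no intelligibility benefit, which matches the null result reported above.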
Predictive Model of Systemic Toxicity (SOT)
In an effort to ensure chemical safety in light of regulatory advances away from reliance on animal testing, USEPA and L’Oréal have collaborated to develop a quantitative systemic toxicity prediction model. Prediction of human systemic toxicity has proved difficult and remains a ...
Ronald E. McRoberts; Paolo Moser; Laio Zimermann Oliveira; Alexander C. Vibrans
2015-01-01
Forest inventory estimates of tree volume for large areas are typically calculated by adding the model predictions of volumes for individual trees at the plot level, calculating the mean over plots, and expressing the result on a per unit area basis. The uncertainty in the model predictions is generally ignored, with the result that the precision of the large-area...
Testicular Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing testicular cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Pancreatic Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing pancreatic cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Colorectal Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Prostate Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing prostate cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Bladder Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing bladder cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Esophageal Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing esophageal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Cervical Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing cervical cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Breast Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing breast cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Lung Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing lung cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Liver Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing liver cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Ovarian Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing ovarian cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Model-based uncertainty in species range prediction
DEFF Research Database (Denmark)
Pearson, R. G.; Thuiller, Wilfried; Bastos Araujo, Miguel
2006-01-01
Aim Many attempts to predict the potential range of species rely on environmental niche (or 'bioclimate envelope') modelling, yet the effects of using different niche-based methodologies require further investigation. Here we investigate the impact that the choice of model can have on predictions, identify key reasons why model output may differ and discuss the implications that model uncertainty has for policy-guiding applications. Location The Western Cape of South Africa. Methods We applied nine of the most widely used modelling techniques to model potential distributions under current ... algorithm when extrapolating beyond the range of data used to build the model. The effects of these factors should be carefully considered when using this modelling approach to predict species ranges. Main conclusions We highlight an important source of uncertainty in assessments of the impacts of climate ...
Posterior Predictive Model Checking in Bayesian Networks
Crawford, Aaron
2014-01-01
This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex…
Predicting and Modeling RNA Architecture
Westhof, Eric; Masquida, Benoît; Jossinet, Fabrice
2011-01-01
SUMMARY A general approach for modeling the architecture of large and structured RNA molecules is described. The method exploits the modularity and the hierarchical folding of RNA architecture that is viewed as the assembly of preformed double-stranded helices defined by Watson-Crick base pairs and RNA modules maintained by non-Watson-Crick base pairs. Despite the extensive molecular neutrality observed in RNA structures, specificity in RNA folding is achieved through global constraints like lengths of helices, coaxiality of helical stacks, and structures adopted at the junctions of helices. The Assemble integrated suite of computer tools allows for sequence and structure analysis as well as interactive modeling by homology or ab initio assembly with possibilities for fitting within electronic density maps. The local key role of non-Watson-Crick pairs guides RNA architecture formation and offers metrics for assessing the accuracy of three-dimensional models in a more useful way than usual root mean square deviation (RMSD) values. PMID:20504963
Demonstrating the improvement of predictive maturity of a computational model
Energy Technology Data Exchange (ETDEWEB)
Hemez, Francois M [Los Alamos National Laboratory; Unal, Cetin [Los Alamos National Laboratory; Atamturktur, Huriye S [CLEMSON UNIV.
2010-01-01
We demonstrate an improvement of predictive capability brought to a non-linear material model using a combination of test data, sensitivity analysis, uncertainty quantification, and calibration. A model that captures increasingly complicated phenomena, such as plasticity, temperature and strain rate effects, is analyzed. Predictive maturity is defined, here, as the accuracy of the model to predict multiple Hopkinson bar experiments. A statistical discrepancy quantifies the systematic disagreement (bias) between measurements and predictions. Our hypothesis is that improving the predictive capability of a model should translate into better agreement between measurements and predictions. This agreement, in turn, should lead to a smaller discrepancy. We have recently proposed to use discrepancy and coverage, that is, the extent to which the physical experiments used for calibration populate the regime of applicability of the model, as a basis to define a Predictive Maturity Index (PMI). It was shown that predictive maturity could be improved when additional physical tests are made available to increase coverage of the regime of applicability. This contribution illustrates how the PMI changes as 'better' physics are implemented in the model. The application is the non-linear Preston-Tonks-Wallace (PTW) strength model applied to Beryllium metal. We demonstrate that our framework tracks the evolution of maturity of the PTW model. Robustness of the PMI with respect to the selection of coefficients needed in its definition is also studied.
Multiple Steps Prediction with Nonlinear ARX Models
Zhang, Qinghua; Ljung, Lennart
2007-01-01
NLARX (NonLinear AutoRegressive with eXogenous inputs) models are frequently used in black-box nonlinear system identification. Though it is easy to make one-step-ahead predictions with such models, multiple-step prediction is far from trivial. The main difficulty is that in general there is no easy way to compute the mathematical expectation of an output conditioned on past measurements. An optimal solution would require intensive numerical computations related to nonlinear filtering. The pur...
Predictability of extreme values in geophysical models
Directory of Open Access Journals (Sweden)
A. E. Sterk
2012-09-01
Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical models. We study whether finite-time Lyapunov exponents are larger or smaller for initial conditions leading to extremes. General statements on whether extreme values are better or less predictable are not possible: the predictability of extreme values depends on the observable, the attractor of the system, and the prediction lead time.
Applications of modeling in polymer-property prediction
Case, F. H.
1996-08-01
A number of molecular modeling techniques have been applied for the prediction of polymer properties and behavior. Five examples illustrate the range of methodologies used. A simple atomistic simulation of small polymer fragments is used to estimate drug compatibility with a polymer matrix. The analysis of molecular dynamics results from a more complex model of a swollen hydrogel system is used to study gas diffusion in contact lenses. Statistical mechanics are used to predict conformation dependent properties — an example is the prediction of liquid-crystal formation. The effect of the molecular weight distribution on phase separation in polyalkanes is predicted using thermodynamic models. In some cases, the properties of interest cannot be directly predicted using simulation methods or polymer theory. Correlation methods may be used to bridge the gap between molecular structure and macroscopic properties. The final example shows how connectivity-indices-based quantitative structure-property relationships were used to predict properties for candidate polyimids in an electronics application.
International Nuclear Information System (INIS)
Ko, Junghyuk; Mohtaram, Nima Khadem; Willerth, Stephanie M; Jun, Martin B G; Lee, Patrick C
2015-01-01
Melt electrospinning can be used to fabricate fibrous biomaterial scaffolds with a range of mechanical and topographical properties for different applications, such as tissue scaffolds and filtration, making it a powerful technique. The topography of such electrospun microfibers can be engineered by tuning the operational parameters of the process. Recent experimental studies have shown promising results for fabricating various topographies, but no body of work has focused on using mathematical models of this technique to further understand the effect of operational parameters on the properties of microfiber scaffolds. In this study, we developed a novel mathematical model using numerical simulations to demonstrate the effect of temperature, feed rate, and flow rate on controlling topographical properties such as the fiber diameter of these spun fibrous scaffolds. These modelling results are also compared to our previous and current experimental results. Overall, we show that our novel mathematical model can predict the topographical properties affected by key operational parameters such as changes in temperature, flow rate, and feed rate, and this model could serve as a promising strategy for controlling the topographical properties of such structures for different applications. (paper)
Quantifying predictive accuracy in survival models.
Lirette, Seth T; Aban, Inmaculada
2017-12-01
For time-to-event outcomes in medical research, survival models are the most appropriate to use. Unlike logistic regression models, quantifying the predictive accuracy of these models is not a trivial task. We present the classes of concordance (C) statistics and R² statistics often used to assess the predictive ability of these models. The discussion focuses on Harrell's C, Kent and O'Quigley's R², and Royston and Sauerbrei's R². We present similarities and differences between the statistics, discuss the software options from the most widely used statistical analysis packages, and give a practical example using the Worcester Heart Attack Study dataset.
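As a concrete illustration of the concordance statistics discussed above, here is a minimal pure-Python sketch of Harrell's C for right-censored data. The toy data and the simplified tie handling are our own; consult the paper and standard survival-analysis software for the full definitions.

```python
from itertools import combinations

def harrells_c(times, events, risk_scores):
    """Harrell's concordance index for right-censored survival data.

    A pair (i, j) is comparable when the subject with the shorter
    observed time actually experienced the event. The pair is
    concordant when that subject also has the higher predicted risk;
    ties in risk score count as 1/2.
    """
    concordant, comparable = 0.0, 0
    for i, j in combinations(range(len(times)), 2):
        # order the pair so that subject `a` has the shorter observed time
        a, b = (i, j) if times[i] < times[j] else (j, i)
        if times[a] == times[b] or not events[a]:
            continue  # not comparable: tied times or earlier time censored
        comparable += 1
        if risk_scores[a] > risk_scores[b]:
            concordant += 1.0
        elif risk_scores[a] == risk_scores[b]:
            concordant += 0.5
    return concordant / comparable

# Toy data: higher risk score should mean an earlier event.
times  = [2, 4, 6, 8]
events = [1, 1, 0, 1]          # 1 = event observed, 0 = censored
risks  = [0.9, 0.7, 0.4, 0.2]
print(harrells_c(times, events, risks))  # perfectly concordant -> 1.0
```

A value of 0.5 corresponds to random ordering, 1.0 to perfect discrimination.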
Predictive power of nuclear-mass models
Directory of Open Access Journals (Sweden)
Yu. A. Litvinov
2013-12-01
Ten different theoretical models are tested for their predictive power in the description of nuclear masses. Two sets of experimental masses are used for the test: the older set of 2003 and the newer one of 2011. The predictive power is studied in two regions of nuclei: the global region (Z, N ≥ 8) and the heavy-nuclei region (Z ≥ 82, N ≥ 126). No clear correlation is found between the predictive power of a model and the accuracy of its description of the masses.
Return Predictability, Model Uncertainty, and Robust Investment
DEFF Research Database (Denmark)
Lukas, Manuel
Stock return predictability is subject to great uncertainty. In this paper we use the model confidence set approach to quantify uncertainty about expected utility from investment, accounting for potential return predictability. For monthly US data and six representative return prediction models, we find that confidence sets are very wide, change significantly with the predictor variables, and frequently include expected utilities for which the investor prefers not to invest. The latter motivates a robust investment strategy maximizing the minimal element of the confidence set. The robust investor allocates a much lower share of wealth to stocks compared to a standard investor.
Spiliopoulou, Athina; Nagy, Reka; Bermingham, Mairead L.; Huffman, Jennifer E.; Hayward, Caroline; Vitart, Veronique; Rudan, Igor; Campbell, Harry; Wright, Alan F.; Wilson, James F.; Pong-Wong, Ricardo; Agakov, Felix; Navarro, Pau; Haley, Chris S.
2015-01-01
We explore the prediction of individuals' phenotypes for complex traits using genomic data. We compare several widely used prediction models, including Ridge Regression, LASSO and Elastic Nets estimated from cohort data, and polygenic risk scores constructed using published summary statistics from genome-wide association meta-analyses (GWAMA). We evaluate the interplay between relatedness, trait architecture and optimal marker density, by predicting height, body mass index (BMI) and high-density lipoprotein level (HDL) in two data cohorts, originating from Croatia and Scotland. We empirically demonstrate that dense models are better when all genetic effects are small (height and BMI) and target individuals are related to the training samples, while sparse models predict better in unrelated individuals and when some effects have moderate size (HDL). For HDL sparse models achieved good across-cohort prediction, performing similarly to the GWAMA risk score and to models trained within the same cohort, which indicates that, for predicting traits with moderately sized effects, large sample sizes and familial structure become less important, though still potentially useful. Finally, we propose a novel ensemble of whole-genome predictors with GWAMA risk scores and demonstrate that the resulting meta-model achieves higher prediction accuracy than either model on its own. We conclude that although current genomic predictors are not accurate enough for diagnostic purposes, performance can be improved without requiring access to large-scale individual-level data. Our methodologically simple meta-model is a means of performing predictive meta-analysis for optimizing genomic predictions and can be easily extended to incorporate multiple population-level summary statistics or other domain knowledge. PMID:25918167
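To make the shrinkage idea behind Ridge Regression concrete, here is a minimal single-predictor sketch with a closed-form solution. The toy data are our own; the study itself works with genome-wide marker matrices, where the same penalty is applied across thousands of predictors.

```python
def ridge_1d(x, y, lam):
    """Closed-form ridge estimate for a single centred predictor:
    w = sum(x*y) / (sum(x^2) + lam).
    lam = 0 recovers ordinary least squares; larger lam shrinks the
    coefficient toward zero, trading bias for lower variance."""
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    return sxy / (sxx + lam)

x = [-2.0, -1.0, 1.0, 2.0]
y = [-4.0, -2.0, 2.0, 4.0]   # exact relationship y = 2x
print(ridge_1d(x, y, 0.0))   # 2.0  (the OLS solution)
print(ridge_1d(x, y, 10.0))  # 1.0  (coefficient shrunk by the penalty)
```

LASSO and Elastic Net replace or mix the squared penalty with an absolute-value penalty, which is what drives some coefficients exactly to zero and yields the sparse models discussed above.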
Accuracy assessment of landslide prediction models
International Nuclear Information System (INIS)
Othman, A N; Mohd, W M N W; Noraini, S
2014-01-01
The increasing population and expansion of settlements over hilly areas has greatly increased the impact of natural disasters such as landslides. Therefore, it is important to develop models which can accurately predict landslide hazard zones. Over the years, various techniques and models have been developed for this purpose. The aim of this paper is to assess the accuracy of landslide prediction models developed by the authors. The methodology involved the selection of the study area, data acquisition, data processing, model development, and data analysis. The models are based on nine different landslide-inducing parameters: slope, land use, lithology, soil properties, geomorphology, flow accumulation, aspect, proximity to river, and proximity to road. Rank sum, rating, pairwise comparison, and AHP techniques are used to determine the weights for each of the parameters. Four different models, which consider different parameter combinations, are developed by the authors. Results are compared to landslide history; the accuracies for Model 1, Model 2, Model 3, and Model 4 are 66.7%, 66.7%, 60%, and 22.9%, respectively. From the results, rank sum, rating, and pairwise comparison can be useful techniques to predict landslide hazard zones.
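The pairwise-comparison (AHP) weighting mentioned above can be sketched as follows. The three-parameter comparison matrix is hypothetical (the paper uses nine parameters), and the row geometric mean is used as a common approximation to the principal-eigenvector weights.

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority weights from a reciprocal pairwise
    comparison matrix using the row geometric mean, a standard
    approximation to the principal eigenvector."""
    gms = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
    total = sum(gms)
    return [g / total for g in gms]

# Hypothetical 3-parameter comparison (say slope vs land use vs lithology):
# slope judged 3x as important as land use, 5x as important as lithology.
matrix = [
    [1.0,     3.0,     5.0],
    [1 / 3.0, 1.0,     3.0],
    [1 / 5.0, 1 / 3.0, 1.0],
]
w = ahp_weights(matrix)
print([round(v, 3) for v in w])  # weights sum to 1, slope dominates
```

Each parameter's weight then multiplies its rating in the hazard-zone score; rank sum and rating methods derive the weights from ordinal ranks instead of pairwise judgments.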
Digital Repository Service at National Institute of Oceanography (India)
Patil, S.G.; Mandal, S.; Hegde, A.V.
and yi is the predicted value. Step 3. (New population): In this step a new population is created by repeating the following steps until the new population is complete: i) [Selection]: In the present study, two parent chromosomes are selected from the population according to the fitness function (eqn. 14). The roulette wheel selection principle (Holland, 1975) is used to select chromosomes for reproduction. ii) [Crossover]: With a given crossover probability, crossover of the parents is performed to form new offspring...
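The selection and crossover steps described above can be sketched in Python as follows. The chromosome encoding, fitness values, and crossover probability are illustrative assumptions, not taken from the record.

```python
import random

def roulette_select(population, fitnesses, rng):
    """Roulette-wheel selection (Holland, 1975): pick a chromosome with
    probability proportional to its fitness."""
    total = sum(fitnesses)
    r = rng.uniform(0.0, total)
    running = 0.0
    for chrom, fit in zip(population, fitnesses):
        running += fit
        if r <= running:
            return chrom
    return population[-1]  # guard against floating-point round-off

def single_point_crossover(p1, p2, rng, p_cross=0.8):
    """With probability p_cross, cut both parents at a random point and
    swap the tails to form two offspring; otherwise copy the parents."""
    if rng.random() < p_cross:
        cut = rng.randrange(1, len(p1))
        return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
    return p1[:], p2[:]

rng = random.Random(42)
pop = [[0, 0, 0, 0], [1, 1, 1, 1], [1, 0, 1, 0]]
fit = [1.0, 3.0, 2.0]   # hypothetical fitness values
mum = roulette_select(pop, fit, rng)
dad = roulette_select(pop, fit, rng)
child1, child2 = single_point_crossover(mum, dad, rng)
print(child1, child2)   # two offspring, same length as the parents
```

Repeating these two steps (plus mutation) until the new population is full implements Step 3 of the procedure quoted above.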
Towards a generalized energy prediction model for machine tools.
Bhinge, Raunak; Park, Jinkyoo; Law, Kincho H; Dornfeld, David A; Helu, Moneer; Rachuri, Sudarsan
2017-04-01
Energy prediction of machine tools can deliver many advantages to a manufacturing enterprise, ranging from energy-efficient process planning to machine tool monitoring. Physics-based, energy prediction models have been proposed in the past to understand the energy usage pattern of a machine tool. However, uncertainties in both the machine and the operating environment make it difficult to predict the energy consumption of the target machine reliably. Taking advantage of the opportunity to collect extensive, contextual, energy-consumption data, we discuss a data-driven approach to develop an energy prediction model of a machine tool in this paper. First, we present a methodology that can efficiently and effectively collect and process data extracted from a machine tool and its sensors. We then present a data-driven model that can be used to predict the energy consumption of the machine tool for machining a generic part. Specifically, we use Gaussian Process (GP) Regression, a non-parametric machine-learning technique, to develop the prediction model. The energy prediction model is then generalized over multiple process parameters and operations. Finally, we apply this generalized model with a method to assess uncertainty intervals to predict the energy consumed to machine any part using a Mori Seiki NVD1500 machine tool. Furthermore, the same model can be used during process planning to optimize the energy-efficiency of a machining process.
Poisson Mixture Regression Models for Heart Disease Prediction
Erol, Hamza
2016-01-01
Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is addressed under two different classes: standard and concomitant-variable mixture regression models. Results show that a two-component concomitant-variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model, due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction over all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease component-wise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks component-wise using a Poisson mixture regression model. PMID:27999611
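The model choice by "low Bayesian Information Criterion value" reduces to a simple computation once each candidate's maximized log-likelihood and parameter count are known. The log-likelihoods and parameter counts below are hypothetical, purely to show the mechanics.

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian Information Criterion: k*ln(n) - 2*lnL.
    Lower values indicate a better fit/complexity trade-off."""
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

# Hypothetical fits on n = 200 observations: the second model fits
# better (higher log-likelihood) but pays for two extra parameters.
candidates = {
    "standard Poisson mixture":     bic(-512.3, 5, 200),
    "concomitant-variable mixture": bic(-498.1, 7, 200),
}
best = min(candidates, key=candidates.get)
print(best)  # → concomitant-variable mixture
```

Here the likelihood gain outweighs the ln(n)-scaled penalty for the extra parameters, so the concomitant-variable model is preferred, mirroring the comparison reported in the abstract.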
Adding propensity scores to pure prediction models fails to improve predictive performance
Directory of Open Access Journals (Sweden)
Amy S. Nowacki
2013-08-01
Background. Propensity score usage seems to be growing in popularity, leading researchers to question the possible role of propensity scores in prediction modeling, despite the lack of a theoretical rationale. It is suspected that such requests are due to the lack of differentiation regarding the goals of predictive modeling versus causal inference modeling. Therefore, the purpose of this study is to formally examine the effect of propensity scores on predictive performance. Our hypothesis is that a multivariable regression model that adjusts for all covariates will perform as well as or better than those models utilizing propensity scores with respect to model discrimination and calibration. Methods. The most commonly encountered statistical scenarios for medical prediction (logistic and proportional hazards regression) were used to investigate this research question. Random cross-validation was performed 500 times to correct for optimism. The multivariable regression models adjusting for all covariates were compared with models that included adjustment for or weighting with the propensity scores. The methods were compared based on three predictive performance measures: (1) concordance indices; (2) Brier scores; and (3) calibration curves. Results. Multivariable models adjusting for all covariates had the highest average concordance index, the lowest average Brier score, and the best calibration. Propensity score adjustment and inverse probability weighting models without adjustment for all covariates performed worse than full models and failed to improve predictive performance with full covariate adjustment. Conclusion. Propensity score techniques did not improve prediction performance measures beyond multivariable adjustment. Propensity scores are not recommended if the analytical goal is pure prediction modeling.
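Of the three performance measures, the Brier score is the simplest to illustrate. The two probability vectors below are hypothetical stand-ins for predictions from a full-covariate model and a propensity-score-based model; they are not the study's data.

```python
def brier_score(y_true, y_prob):
    """Mean squared difference between predicted probabilities and
    binary outcomes. Lower is better: 0 is perfect, and 0.25 is what a
    constant prediction of 0.5 scores on a balanced outcome."""
    n = len(y_true)
    return sum((p - y) ** 2 for y, p in zip(y_true, y_prob)) / n

outcomes   = [1, 0, 1, 0]
full_model = [0.9, 0.2, 0.8, 0.1]   # hypothetical full-covariate model
ps_model   = [0.7, 0.4, 0.6, 0.3]   # hypothetical propensity-score model
print(round(brier_score(outcomes, full_model), 6))  # → 0.025
print(round(brier_score(outcomes, ps_model), 6))    # → 0.125
```

In this toy comparison the sharper full-covariate predictions score lower (better), the same direction as the study's finding that full covariate adjustment outperformed propensity-score approaches.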
Kassemi, M.; Thompson, D.; Goodenow, D.; Gokoglu, S.; Myers, J.
2016-01-01
Renal stone disease is not only a concern on Earth but can conceivably pose a serious risk to astronauts' health and safety in space. In this work, two different deterministic models based on a Population Balance Equation (PBE) analysis of renal stone formation are developed to assess the risks of critical renal stone incidence for astronauts during space travel. In the first model, the nephron is treated as a continuous mixed-suspension, mixed-product-removal crystallizer, and the PBE for the nucleating, growing, and agglomerating renal calculi is coupled to speciation calculations performed by JESS. Predictions of stone size distributions in the kidney using this model indicate that the astronaut in microgravity is at noticeably greater but still subcritical risk, and recommend administration of citrate and augmented hydration as effective means of minimizing and containing this risk. In the second model, the PBE analysis is coupled to a Computational Fluid Dynamics (CFD) model for the flow of urine and the transport of calcium and oxalate in the nephron to predict the impact of gravity on the stone size distributions. Results presented for realistic 3D tubule and collecting duct geometries clearly indicate that agglomeration is the primary mode of size enhancement in both 1g and microgravity. 3D numerical simulations further indicate that an increased number of smaller stones will develop in microgravity and will likely pass through the nephron in the absence of wall adhesion. However, upon reentry into a 1g (Earth) or 3/8g (Mars) gravitational field, the renal calculi can lag behind the urinary flow in tubules that are adversely oriented with respect to the gravitational field and grow and agglomerate to large sizes that are sedimented near the wall, with increased propensity for wall adhesion, plaque formation, and risk to the astronauts.
Predicting Protein Secondary Structure with Markov Models
DEFF Research Database (Denmark)
Fischer, Paul; Larsen, Simon; Thomsen, Claus
2004-01-01
we are considering here, is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained in the Markov model for this task. Classifications that are purely based on statistical models might not always be biologically meaningful. We present combinatorial methods to incorporate biological background knowledge to enhance the prediction performance.
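A minimal sketch of the classification idea: train first-order transition probabilities per structural class, then assign a fragment to the class whose model gives it the higher log-likelihood. The toy "helix" and "sheet" training strings are invented for illustration and are not biologically realistic.

```python
import math
from collections import defaultdict

def train_markov(sequences):
    """Estimate first-order transition log-probabilities for one class,
    with add-one smoothing over the alphabet seen in training."""
    counts = defaultdict(lambda: defaultdict(int))
    alphabet = set()
    for seq in sequences:
        alphabet.update(seq)
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    logp = {}
    for a in alphabet:
        total = sum(counts[a].values()) + len(alphabet)
        for b in alphabet:
            logp[(a, b)] = math.log((counts[a][b] + 1) / total)
    return logp

def score(fragment, logp):
    """Log-likelihood of a fragment under a class model; transitions
    never seen in training get a flat floor penalty."""
    floor = math.log(1e-6)
    return sum(logp.get((a, b), floor) for a, b in zip(fragment, fragment[1:]))

# Toy training data (hypothetical): A/L repeats for helix, V/I for sheet.
helix_logp = train_markov(["ALALAALLA", "LAALALLAA"])
sheet_logp = train_markov(["VIVIVVIIV", "IVVIVIVII"])

fragment = "ALAL"
label = "helix" if score(fragment, helix_logp) > score(fragment, sheet_logp) else "sheet"
print(label)  # → helix
```

The transition probabilities encode the directional information the abstract refers to: P(A→L) need not equal P(L→A), so scanning a sequence left to right is informative.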
Energy based prediction models for building acoustics
DEFF Research Database (Denmark)
Brunskog, Jonas
2012-01-01
In order to reach robust and simplified yet accurate prediction models, energy-based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA), as well as more elaborate principles such as wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy-based prediction models are discussed and critically reviewed. Special attention is placed on underlying basic assumptions, such as diffuse fields, high modal overlap, resonant fields being dominant, etc., and the consequences of these in terms of limitations in the theory and in the practical use of the models.
Comparative Study of Bankruptcy Prediction Models
Directory of Open Access Journals (Sweden)
Isye Arieshanti
2013-09-01
Early indication of bankruptcy is important for a company. If a company is aware of its potential bankruptcy, it can take preventive action in anticipation. In order to detect potential bankruptcy, a company can utilize a bankruptcy prediction model. The prediction model can be built using machine learning methods. However, the choice of machine learning method should be made carefully, because the suitability of a model depends on the specific problem. Therefore, in this paper we perform a comparative study of several machine learning methods for bankruptcy prediction. According to the comparative study of models based on several machine learning methods (k-NN, fuzzy k-NN, SVM, Bagging Nearest Neighbour SVM, Multilayer Perceptron (MLP), and a hybrid of MLP + Multiple Linear Regression), the fuzzy k-NN method achieves the best performance with accuracy 77.5%.
Weis, A.E.; Hochberg, M.E.
2000-01-01
We constructed a model to investigate conditions under which intraspecific competition amplifies or diminishes the selective advantage to resistance. The growth trajectories of competing individual plants were depicted by logistic difference equations that incorporated basic costs (lowered growth
Development of the neurovascular unit (NVU) involves interactions between endothelial cells, pericytes, neuroprogenitor cells, and microglia. We constructed an in silico model of the developing neuroepithelium in CompuCell3D which recapitulated a suite of critical signaling pathw...
Prediction Models for Dynamic Demand Response
Energy Technology Data Exchange (ETDEWEB)
Aman, Saima; Frincu, Marc; Chelmis, Charalampos; Noor, Muhammad; Simmhan, Yogesh; Prasanna, Viktor K.
2015-11-02
As Smart Grids move closer to dynamic curtailment programs, Demand Response (DR) events will become necessary not only on fixed time intervals and weekdays predetermined by static policies, but also during changing decision periods and weekends to react to real-time demand signals. Unique challenges arise in this context vis-a-vis demand prediction and curtailment estimation and the transformation of such tasks into an automated, efficient dynamic demand response (D²R) process. While existing work has concentrated on increasing the accuracy of prediction models for DR, there is a lack of studies on prediction models for D²R, which we address in this paper. Our first contribution is the formal definition of D²R and the description of its challenges and requirements. Our second contribution is a feasibility analysis of very-short-term prediction of electricity consumption for D²R over a diverse, large-scale dataset that includes both small residential customers and large buildings. Our third and major contribution is a set of insights into the predictability of electricity consumption in the context of D²R. Specifically, we focus on prediction models that can operate at a very small data granularity (here 15-min intervals), for both weekdays and weekends - all conditions that characterize scenarios for D²R. We find that short-term time series and simple averaging models used by Independent Service Operators and utilities achieve superior prediction accuracy. We also observe that workdays are more predictable than weekends and holidays. Also, smaller customers have large variation in consumption and are less predictable than larger buildings. Key implications of our findings are that better models are required for small customers and for non-workdays, both of which are critical for D²R. Also, prediction models require just a few days' worth of data, indicating that small amounts of
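The "simple averaging" baseline found to perform well can be sketched as a same-slot average over past days. The 4-slot "days" below are a toy stand-in for the 96 fifteen-minute slots in a real day; the function name and data are illustrative assumptions, not the paper's implementation.

```python
def slot_average_forecast(history, slots_per_day=96):
    """Forecast each intra-day slot of the next day as the average of
    the same slot over all complete past days in `history` (a flat list
    of consumption readings, one per slot, oldest first)."""
    n_days = len(history) // slots_per_day
    forecast = []
    for slot in range(slots_per_day):
        vals = [history[d * slots_per_day + slot] for d in range(n_days)]
        forecast.append(sum(vals) / n_days)
    return forecast

# Two hypothetical days of 4-slot "days" to keep the example tiny.
history = [1.0, 2.0, 3.0, 4.0,    # day 1
           3.0, 4.0, 5.0, 6.0]    # day 2
print(slot_average_forecast(history, slots_per_day=4))  # → [2.0, 3.0, 4.0, 5.0]
```

Its appeal for D²R is exactly what the abstract notes: it operates at fine granularity and needs only a few days of history.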
Are animal models predictive for humans?
Directory of Open Access Journals (Sweden)
Greek Ray
2009-01-01
Abstract It is one of the central aims of the philosophy of science to elucidate the meanings of scientific terms and also to think critically about their application. The focus of this essay is the scientific term predict and whether there is credible evidence that animal models, especially in toxicology and pathophysiology, can be used to predict human outcomes. Whether animals can be used to predict human response to drugs and other chemicals is apparently a contentious issue. However, when one empirically analyzes animal models using scientific tools, they fall far short of being able to predict human responses. This is not surprising considering what we have learned from fields such as evolutionary and developmental biology, gene regulation and expression, epigenetics, complexity theory, and comparative genomics.
Evaluation of CASP8 model quality predictions
Cozzetto, Domenico
2009-01-01
The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.
Energy Technology Data Exchange (ETDEWEB)
Karuthapandi, Sripriyan; Thyla, P. R. [PSG College of Technology, Coimbatore (India); Ramu, Murugan [Amrita University, Ettimadai (India)
2017-05-15
This paper describes the relationships between the macrostructural characteristics of weld beads and the welding parameters in Gas metal arc welding (GMAW) using a flat wire electrode. Bead-on-plate welds were produced with a flat wire electrode and different combinations of input parameters (i.e., welding current, welding speed, and flat wire electrode orientation). The macrostructural characteristics of the weld beads, namely, deposition, bead width, total bead width, reinforcement height, penetration depth, and depth of HAZ were investigated. A mapping technique was employed to measure these characteristics in various segments of the weldment zones. Results show that the use of a flat wire electrode improves the depth-to-width (D/W) ratio by 16.5 % on average compared with the D/W ratio when a regular electrode is used in GMAW. Furthermore, a fuzzy logic model was established to predict the effects of the use of a flat electrode on the weldment shape profile with varying input parameters. The predictions of the model were compared with the experimental results.
Frimpter, M.H.; Donohue, J.J.; Rapacz, M.V.; Beye, H.G.
1990-01-01
A mass-balance accounting model can be used to guide the management of septic systems and fertilizers to control the degradation of groundwater quality in zones of an aquifer that contributes water to public supply wells. The nitrate nitrogen concentration of the mixture in the well can be predicted for steady-state conditions by calculating the concentration that results from the total weight of nitrogen and total volume of water entering the zone of contribution to the well. These calculations will allow water-quality managers to predict the nitrate concentrations that would be produced by different types and levels of development, and to plan development accordingly. Computations for different development schemes provide a technical basis for planners and managers to compare water quality effects and to select alternatives that limit nitrate concentration in wells. Appendix A contains tables of nitrate loads and water volumes from common sources for use with the accounting model. Appendix B describes the preparation of a spreadsheet for the nitrate loading calculations with a software package generally available for desktop computers. (USGS)
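The mass-balance accounting described above reduces to dividing the total weight of nitrogen entering the zone of contribution by the total volume of water entering it. A minimal sketch of that steady-state calculation, with illustrative numbers that are not taken from the report:

```python
def predicted_nitrate_mg_per_l(loads_kg_per_yr, volumes_m3_per_yr):
    """Steady-state mixed nitrate-N concentration at the well:
    total nitrogen load divided by total recharge volume.
    1 kg/m^3 = 1000 mg/L."""
    total_n_kg = sum(loads_kg_per_yr)
    total_v_m3 = sum(volumes_m3_per_yr)
    return 1000.0 * total_n_kg / total_v_m3

# Hypothetical zone of contribution: septic systems, lawn fertilizer,
# and background recharge (all figures illustrative)
loads = [120.0, 40.0, 5.0]           # kg N per year per source
volumes = [9000.0, 2000.0, 30000.0]  # m^3 recharge per year per source
print(round(predicted_nitrate_mg_per_l(loads, volumes), 2))  # 4.02
```

Comparing such totals across development schemes is what lets planners rank alternatives by the nitrate concentration each would produce at the well.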
Baciocchi, Renato; Berardi, Simona; Verginelli, Iason
2010-09-15
Clean-up of contaminated sites is usually based on a risk-based approach for the definition of remediation goals, which relies on the well-known ASTM-RBCA standard procedure. In this procedure, migration of contaminants is described through simple analytical models and the source contaminants' concentration is assumed to be constant throughout the entire exposure period, i.e., 25-30 years. The latter assumption may often prove over-protective of human health, leading to unrealistically low remediation goals. The aim of this work is to propose an alternative model that takes source depletion into account while keeping the original simplicity and analytical form of the ASTM-RBCA approach. The results obtained by the application of this model are compared with those provided by the traditional ASTM-RBCA approach, by a model based on the source depletion algorithm of the RBCA ToolKit software, and by a numerical model, allowing assessment of its feasibility for inclusion in risk analysis procedures. The results discussed in this work are limited to on-site exposure to contaminated water by ingestion, but the proposed approach can be extended to other exposure pathways. Copyright 2010 Elsevier B.V. All rights reserved.
Model predictive controller design of hydrocracker reactors
GÖKÇE, Dila
2014-01-01
This study summarizes the design of a Model Predictive Controller (MPC) in Tüpraş, İzmit Refinery Hydrocracker Unit Reactors. Hydrocracking process, in which heavy vacuum gasoil is converted into lighter and valuable products at high temperature and pressure is described briefly. Controller design description, identification and modeling studies are examined and the model variables are presented. WABT (Weighted Average Bed Temperature) equalization and conversion increase are simulate...
DEFF Research Database (Denmark)
May, Margaret; Sterne, Jonathan A C; Shipley, Martin
2007-01-01
Many HIV-infected patients on highly active antiretroviral therapy (HAART) experience metabolic complications including dyslipidaemia and insulin resistance, which may increase their coronary heart disease (CHD) risk. We developed a prognostic model for CHD tailored to the changes in risk factors...
Witte, J.P.M.; Bartholomeus, R.P.; van Bodegom, P.M.; Cirkel, D.G.; van Ek, R.; Janssen, G.M.C.M.; Spek, T.J.; Runhaar, H.
2015-01-01
Climate change may hamper the preservation of nature targets, but may create new potential hotspots of biodiversity as well. To timely design adequate measures, information is needed about the feasibility of nature targets under a future climate. Habitat distribution models may provide this, but
Advanced Refractive Effects Prediction System (AREPS)
National Research Council Canada - National Science Library
Patterson, Wayne L
2001-01-01
...), the world's first electromagnetic prediction system for shipboard use. Advances in research and technology have led to the replacement of IREPS with the Advanced Refractive Effects Prediction System (AREPS...
Model for predicting mountain wave field uncertainties
Damiens, Florentin; Lott, François; Millet, Christophe; Plougonven, Riwal
2017-04-01
Studying the propagation of acoustic waves through the troposphere requires knowledge of wind speed and temperature gradients from the ground up to about 10-20 km. Typical planetary boundary layer flows are known to present vertical low-level shears that can interact with mountain waves, thereby triggering small-scale disturbances. Resolving these fluctuations for long-range propagation problems is, however, not feasible because of computer memory/time restrictions and thus, they need to be parameterized. When the disturbances are small enough, these fluctuations can be described by linear equations. Previous works by co-authors have shown that the critical layer dynamics that occur near the ground produce large horizontal flows and buoyancy disturbances that result in intense downslope winds and gravity wave breaking. While these phenomena manifest almost systematically for high Richardson numbers and when the boundary layer depth is relatively small compared to the mountain height, the process by which static stability affects downslope winds remains unclear. In the present work, new linear mountain gravity wave solutions are tested against numerical predictions obtained with the Weather Research and Forecasting (WRF) model. For Richardson numbers typically larger than unity, the mesoscale model is used to quantify the effect of neglected nonlinear terms on downslope winds and mountain wave patterns. At these regimes, the large downslope winds transport warm air, a so-called "Foehn" effect that can impact sound propagation properties. The sensitivity of small-scale disturbances to Richardson number is quantified using two-dimensional spectral analysis. It is shown through a pilot study of subgrid scale fluctuations of boundary layer flows over realistic mountains that the cross-spectrum of the mountain wave field is made up of the same components found in WRF simulations. The impact of each individual component on acoustic wave propagation is discussed in terms of
Multi-Model Ensemble Wake Vortex Prediction
Koerner, Stephan; Holzaepfel, Frank; Ahmad, Nash'at N.
2015-01-01
Several multi-model ensemble methods are investigated for predicting wake vortex transport and decay. This study is a joint effort between National Aeronautics and Space Administration and Deutsches Zentrum fuer Luft- und Raumfahrt to develop a multi-model ensemble capability using their wake models. An overview of different multi-model ensemble methods and their feasibility for wake applications is presented. The methods include Reliability Ensemble Averaging, Bayesian Model Averaging, and Monte Carlo Simulations. The methodologies are evaluated using data from wake vortex field experiments.
Mendez, Javier; Monleon-Getino, Antonio; Jofre, Juan; Lucena, Francisco
2017-10-01
The present study aimed to establish the kinetics of the appearance of coliphage plaques using the double agar layer titration technique to evaluate the feasibility of using traditional coliphage plaque forming unit (PFU) enumeration as a rapid quantification method. Repeated measurements of the appearance of plaques of coliphages titrated according to ISO 10705-2 at different times were analysed using non-linear mixed-effects regression to determine the most suitable model of their appearance kinetics. Although this model is adequate, to simplify its applicability two linear models were developed to predict the numbers of coliphages reliably, using the PFU counts determined by the ISO method after only 3 hours of incubation. One linear model, for counts between 4 and 26 PFU after 3 hours, had a linear fit of (1.48 × Counts 3 h + 1.97); the other, for values >26 PFU, had a fit of (1.18 × Counts 3 h + 2.95). If the number of plaques detected after 3 hours was <4 PFU, we recommend incubation for (18 ± 3) hours. The study indicates that the traditional coliphage plating technique has reasonable potential to provide results in a single working day without the need to invest in additional laboratory equipment.
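The two reported linear fits can be combined into a small piecewise predictor. The fall-back below 4 PFU follows the abstract's recommendation to incubate fully; the function name and return convention are illustrative choices:

```python
def predict_final_pfu(counts_3h):
    """Predict the final PFU count from the 3 h count using the two
    linear fits reported in the abstract. Below 4 PFU the authors
    recommend full (18 +/- 3) h incubation instead of predicting."""
    if counts_3h < 4:
        return None  # too few plaques at 3 h: incubate fully
    if counts_3h <= 26:
        return 1.48 * counts_3h + 1.97
    return 1.18 * counts_3h + 2.95

print(round(predict_final_pfu(10), 2))  # 16.77
print(round(predict_final_pfu(40), 2))  # 50.15
```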
Mirmehrabi, Mahmoud; Rohani, Sohrab; Perry, Luisa
2006-04-01
A new activity coefficient model was developed from excess Gibbs free energy in the form G(ex) = cA(a) x(1)(b)...x(n)(b). The constants of the proposed model were considered to be functions of solute and solvent dielectric constants, Hildebrand solubility parameters, and specific volumes of solute and solvent molecules. The proposed model obeys the Gibbs-Duhem condition for activity coefficient models. To generalize the model and make it purely predictive, without any adjustable parameters, its constants were found using the experimental activity coefficients and physical properties of 20 vapor-liquid systems. The predictive capability of the proposed model was tested by calculating the activity coefficients of 41 binary vapor-liquid equilibrium systems; the results showed good agreement with the experimental data in comparison with two other predictive models, the UNIFAC and Hildebrand models. The only data used for the prediction of activity coefficients were dielectric constants, Hildebrand solubility parameters, and specific volumes of the solute and solvent molecules. Furthermore, the proposed model was used to predict the activity coefficient of stearic acid, an organic compound whose physical properties were available, in methanol and 2-butanone. The predicted activity coefficient, along with the thermal properties of stearic acid, was used to calculate the solubility of stearic acid in these two solvents and resulted in better agreement with the experimental data compared to the UNIFAC and Hildebrand predictive models.
PREDICTIVE CAPACITY OF ARCH FAMILY MODELS
Directory of Open Access Journals (Sweden)
Raphael Silveira Amaro
2016-03-01
Full Text Available In the last decades, a remarkable number of models, variants of the Autoregressive Conditional Heteroscedastic family, have been developed and empirically tested, making the process of choosing a particular model extremely complex. This research aims to compare the predictive capacity of five conditional heteroskedasticity models, using the Model Confidence Set procedure and considering eight different statistical probability distributions. The financial series used refer to the log-return series of the Bovespa index and the Dow Jones Industrial Index in the period between 27 October 2008 and 30 December 2014. The empirical evidence showed that, in general, the competing models have great homogeneity in making predictions, whether for the stock market of a developed country or for that of a developing country. An equivalent result can be inferred for the statistical probability distributions that were used.
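The models compared in the study share the conditional-variance recursion exemplified by GARCH(1,1), the workhorse of the ARCH family. A sketch of that recursion on synthetic returns; the parameter values are illustrative, not fitted to the Bovespa or Dow Jones series:

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance recursion of a GARCH(1,1) model:
    sigma2[t] = omega + alpha * r[t-1]^2 + beta * sigma2[t-1]."""
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()  # initialize at the sample variance
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

rng = np.random.default_rng(1)
rets = rng.normal(0, 0.01, 500)  # synthetic log-returns
s2 = garch11_variance(rets, omega=1e-6, alpha=0.05, beta=0.9)
print(s2.shape)  # (500,)
```

In practice the parameters are estimated by maximum likelihood under one of the candidate error distributions, which is where the eight probability distributions of the study enter.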
A revised prediction model for natural conception.
Bensdorp, Alexandra J; van der Steeg, Jan Willem; Steures, Pieternel; Habbema, J Dik F; Hompes, Peter G A; Bossuyt, Patrick M M; van der Veen, Fulco; Mol, Ben W J; Eijkemans, Marinus J C
2017-06-01
One of the aims in reproductive medicine is to differentiate between couples that have favourable chances of conceiving naturally and those that do not. Since the development of the prediction model of Hunault, characteristics of the subfertile population have changed. The objective of this analysis was to assess whether additional predictors can refine the Hunault model and extend its applicability. Consecutive subfertile couples with unexplained and mild male subfertility presenting in fertility clinics were asked to participate in a prospective cohort study. We constructed a multivariable prediction model with the predictors from the Hunault model and new potential predictors. The primary outcome, natural conception leading to an ongoing pregnancy, was observed in 1053 women of the 5184 included couples (20%). All predictors of the Hunault model were selected into the revised model, plus an additional seven (woman's body mass index, cycle length, basal FSH levels, tubal status, history of previous pregnancies in the current relationship (ongoing pregnancies after natural conception, fertility treatment or miscarriages), semen volume, and semen morphology). Predictions from the revised model seem to concur better with observed pregnancy rates compared with the Hunault model; c-statistic of 0.71 (95% CI 0.69 to 0.73) compared with 0.59 (95% CI 0.57 to 0.61). Copyright © 2017. Published by Elsevier Ltd.
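The c-statistic used to compare the revised and Hunault models is the probability that a randomly chosen conceiving couple receives a higher predicted probability than a randomly chosen non-conceiving one. A minimal sketch on toy data (the predictions and outcomes below are made up for illustration):

```python
def c_statistic(probs, outcomes):
    """Concordance (c-) statistic over all positive/negative pairs;
    ties in predicted probability count as half concordant."""
    pos = [p for p, y in zip(probs, outcomes) if y == 1]
    neg = [p for p, y in zip(probs, outcomes) if y == 0]
    pairs = concordant = 0.0
    for p in pos:
        for n in neg:
            pairs += 1
            if p > n:
                concordant += 1
            elif p == n:
                concordant += 0.5
    return concordant / pairs

# Toy example: two couples who conceived, two who did not
print(c_statistic([0.9, 0.8, 0.3, 0.2], [1, 0, 1, 0]))  # 0.75
```

A value of 0.5 means no discrimination and 1.0 perfect discrimination, which frames the reported improvement from 0.59 to 0.71.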
Comparing National Water Model Inundation Predictions with Hydrodynamic Modeling
Egbert, R. J.; Shastry, A.; Aristizabal, F.; Luo, C.
2017-12-01
The National Water Model (NWM) simulates the hydrologic cycle and produces streamflow forecasts, runoff, and other variables for 2.7 million reaches along the National Hydrography Dataset for the continental United States. NWM applies Muskingum-Cunge channel routing, which is based on the continuity equation. However, the momentum equation also needs to be considered to obtain better estimates of streamflow and stage in rivers, especially for applications such as flood inundation mapping. Simulation Program for River NeTworks (SPRNT) is a fully dynamic model for large-scale river networks that solves the full nonlinear Saint-Venant equations for 1D flow and stage height in river channel networks with non-uniform bathymetry. For the current work, the steady-state version of the SPRNT model was leveraged. An evaluation of SPRNT's and NWM's abilities to predict inundation was conducted for the record flood of Hurricane Matthew in October 2016 along the Neuse River in North Carolina. This event was known to have been influenced by backwater effects from the hurricane's storm surge. Retrospective NWM discharge predictions were converted to stage using synthetic rating curves. The stages from both models were used to produce flood inundation maps using the Height Above Nearest Drainage (HAND) method, which uses local relative heights to provide a spatial representation of inundation depths. To validate the inundation produced by the models, Sentinel-1A synthetic aperture radar data in the VV and VH polarizations, along with auxiliary data, were used to produce a reference inundation map. A preliminary, binary comparison of the inundation maps to the reference, limited to the five HUC-12 areas of Goldsboro, NC, showed flood inundation accuracies for NWM and SPRNT of 74.68% and 78.37%, respectively. The differences for all the relevant test statistics, including accuracy, true positive rate, true negative rate, and positive predictive value, were found
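The binary map comparison reduces to a confusion matrix over co-registered wet/dry pixels. A sketch of the four reported statistics on toy arrays (the actual maps are raster grids; these flattened arrays are illustrative):

```python
import numpy as np

def inundation_scores(pred, ref):
    """Compare a modeled inundation map against a reference
    (e.g., SAR-derived) map: accuracy, true positive rate,
    true negative rate, and positive predictive value."""
    pred, ref = np.asarray(pred, bool), np.asarray(ref, bool)
    tp = np.sum(pred & ref)    # wet in both
    tn = np.sum(~pred & ~ref)  # dry in both
    fp = np.sum(pred & ~ref)   # predicted wet, actually dry
    fn = np.sum(~pred & ref)   # predicted dry, actually wet
    return {
        "accuracy": (tp + tn) / pred.size,
        "tpr": tp / (tp + fn),
        "tnr": tn / (tn + fp),
        "ppv": tp / (tp + fp),
    }

pred = [1, 1, 0, 0, 1, 0]  # toy modeled map (1 = inundated)
ref  = [1, 0, 0, 0, 1, 1]  # toy reference map
print(inundation_scores(pred, ref))
```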
Directory of Open Access Journals (Sweden)
Saeid Komasi
2016-09-01
Full Text Available Introduction: Studies on behavioral patterns and personality traits play a critical role in the prediction of healthy or unhealthy behaviors and the identification of individuals at high risk for cardiovascular diseases (CVDs) in order to implement preventive strategies. This study aimed to compare personality types in individuals with and without CVD based on the enneagram of personality. Materials and Methods: This case-control study was conducted on 96 gender-matched participants (48 CVD patients and 48 healthy subjects). Data were collected using the Riso-Hudson Enneagram Type Indicator (RHETI). Data analysis was performed in SPSS V.20 using MANOVA, Chi-square, and t-test. Results: After adjustment for age and gender, there was a significant difference between the two groups in terms of personality types one and five. In CVD patients, the score of personality type one (F(1,94)=9.476, P=0.003) was significantly higher, while the score of personality type five was significantly lower (F(1,94)=6.231, P=0.014), compared to healthy subjects. However, this significant difference was only observed in the score of personality type one in female patients (F(1,66)=4.382, P=0.04). Conclusion: Identifying healthy personality type one individuals before CVD development, providing necessary training on the potential risk factors of CVDs, and implementing preventive strategies (e.g., anger management skills) could lead to positive outcomes for society and the healthcare system. It is recommended that further investigation be conducted in this regard.
Dermody, Sarah S; Thomas, Katherine M; Hopwood, Christopher J; Durbin, C Emily; Wright, Aidan G C
2017-06-01
This paper demonstrates a recently-popularized quantitative method, the time-varying effect model (TVEM), in describing dynamic, momentary interpersonal processes implicated by Interpersonal Theory. We investigated moment-to-moment complementarity in affiliation and control behaviors (i.e., correspondence in affiliation and reciprocity in control between married dyad members) in a five-minute interaction (N=135), and how complementarity changed over time. Overall, results supported complementarity in affiliation and control. Moreover, effects were time-varying: Complementarity in affiliation increased over time and complementary in control changed over time in a cyclical manner. Dyadic adjustment moderated the strength in complementarity in control during specific timeframes. We discuss implications of these results and future directions. The findings support the utility of TVEM for studying dynamic and time-dependent interpersonal processes.
Verwei, M.; Burgsteden, J.A. van; Krul, C.A.M.; Sandt, J.J.M. van de; Freidig, A.P.
2006-01-01
The new EU legislations for chemicals (Registration, Evaluation and Authorization of Chemicals, REACH) and cosmetics (Seventh Amendment) stimulate the acceptance of in vitro and in silico approaches to test chemicals for their potential to cause reproductive effects. In the current study seven
Probabilistic Modeling and Visualization for Bankruptcy Prediction
DEFF Research Database (Denmark)
Antunes, Francisco; Ribeiro, Bernardete; Pereira, Francisco Camara
2017-01-01
In accounting and finance domains, bankruptcy prediction is of great utility for all of the economic stakeholders. The challenge of accurate assessment of business failure prediction, specially under scenarios of financial crisis, is known to be complicated. Although there have been many successful......). Using real-world bankruptcy data, an in-depth analysis is conducted showing that, in addition to a probabilistic interpretation, the GP can effectively improve the bankruptcy prediction performance with high accuracy when compared to the other approaches. We additionally generate a complete graphical...... visualization to improve our understanding of the different attained performances, effectively compiling all the conducted experiments in a meaningful way. We complete our study with an entropy-based analysis that highlights the uncertainty handling properties provided by the GP, crucial for prediction tasks...
DEFF Research Database (Denmark)
Pietroni, Carlotta; Andersen, Jeppe D.; Johansen, Peter
2014-01-01
In two recent studies of Spanish individuals [1,2], gender was suggested as a factor that contributes to human eye colour variation. However, gender did not improve the predictive accuracy on blue, intermediate and brown eye colours when gender was included in the IrisPlex model [3]. In this study......, we investigate the role of gender as a factor that contributes to eye colour variation and suggest that the gender effect on eye colour is population specific. A total of 230 Italian individuals were typed for the six IrisPlex SNPs (rs12913832, rs1800407, rs12896399, rs1393350, rs16891982 and rs...... eye colour independently of ancestry. Furthermore, we found gender to be significantly associated with quantitative eye colour measurements in the Italian population sample. We found that the association was statistically significant only among Italian individuals typed as heterozygote GA for HERC2 rs...
On the Predictiveness of Single-Field Inflationary Models
Burgess, C.P.; Trott, Michael
2014-01-01
We re-examine the predictiveness of single-field inflationary models and discuss how an unknown UV completion can complicate determining inflationary model parameters from observations, even from precision measurements. Besides the usual naturalness issues associated with having a shallow inflationary potential, we describe another issue for inflation, namely, unknown UV physics modifies the running of Standard Model (SM) parameters and thereby introduces uncertainty into the potential inflationary predictions. We illustrate this point using the minimal Higgs Inflationary scenario, which is arguably the most predictive single-field model on the market, because its predictions for $A_s$, $r$ and $n_s$ are made using only one new free parameter beyond those measured in particle physics experiments, and run up to the inflationary regime. We find that this issue can already have observable effects. At the same time, this UV-parameter dependence in the Renormalization Group allows Higgs Inflation to occur (in prin...
Accident Prediction Models for Akure – Ondo Carriageway, Ondo ...
African Journals Online (AJOL)
FIRST LADY
traffic exposure and intersection effects as independent variables. They suggested that the Poisson distribution allows for the relationship between exposure and crashes to be more accurately modeled as opposed to. Accident Prediction Models for Akure-Ondo Carriageway…Using Multiple Linear Regression ...
Predictive QSAR Models for the Toxicity of Disinfection Byproducts.
Qin, Litang; Zhang, Xin; Chen, Yuhan; Mo, Lingyun; Zeng, Honghu; Liang, Yanpeng
2017-10-09
Several hundred disinfection byproducts (DBPs) in drinking water have been identified, and are known to have potentially adverse health effects. There are toxicological data gaps for most DBPs, and predictive methods may provide an effective way to address this. The development of in-silico models of toxicology endpoints of DBPs is rarely studied. The main aim of the present study is to develop predictive quantitative structure-activity relationship (QSAR) models for the reactive toxicities of 50 DBPs in the five bioassays of X-Microtox, GSH+, GSH-, DNA+ and DNA-. All-subset regression was used to select the optimal descriptors, and multiple linear-regression models were built. The developed QSAR models for the five endpoints satisfied the internal and external validation criteria: coefficient of determination (R²) > 0.7, explained variance in leave-one-out prediction (Q²LOO) and in leave-many-out prediction (Q²LMO) > 0.6, variance explained in external prediction (Q²F1, Q²F2, and Q²F3) > 0.7, and concordance correlation coefficient (CCC) > 0.85. The application domains and the meaning of the selected descriptors for the QSAR models are discussed. The obtained QSAR models can be used in predicting the toxicities of the 50 DBPs.
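The Q²LOO criterion reported above measures how well a regression predicts observations left out of the fit. A minimal sketch for an ordinary least-squares model on toy data; the descriptors and response are illustrative, not the study's:

```python
import numpy as np

def q2_loo(X, y):
    """Leave-one-out Q^2 for an ordinary least-squares model:
    Q^2 = 1 - PRESS / SS_tot, where each observation is
    predicted by a model refitted without it."""
    X = np.column_stack([np.ones(len(y)), np.asarray(X, float)])
    y = np.asarray(y, float)
    press = 0.0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        beta, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        press += (y[i] - X[i] @ beta) ** 2
    return 1 - press / np.sum((y - y.mean()) ** 2)

# Toy data with a near-linear descriptor/response relationship
x = np.arange(10, dtype=float)
y = 2 * x + 1 + np.array([0.1, -0.1] * 5)
print(round(q2_loo(x, y), 3))
```

Values near 1 indicate the model predicts held-out points almost as well as it fits the training data, which is the internal-validation property the study's > 0.6 threshold enforces.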
Directory of Open Access Journals (Sweden)
Peter Hoonakker
2014-01-01
Full Text Available High employee turnover has always been a major issue for Information Technology (IT). In particular, turnover of women is very high. In this study, we used the Job Demand/Resources (JD-R) model to examine the relationship between job demands and job resources, stress/burnout and job satisfaction/commitment, and turnover intention, and tested the model for gender differences. Data were collected in five IT companies. A sample of 624 respondents (return rate: 56%; 54% males; mean age: 39.7 years) was available for statistical analyses. Results of our study show that relationships between job demands and turnover intention are mediated by emotional exhaustion (burnout) and relationships between job resources and turnover intention are mediated by job satisfaction. We found noticeable gender differences in these relationships, which can explain differences in turnover intention between male and female employees. The results of our study have consequences for organizational retention strategies to keep men and women in the IT work force.
Tesfa, Belachew; Mishra, Rakesh; Gu, Fengshou; Powles, Nicholas
2010-01-01
Biodiesel is a promising non-toxic and biodegradable alternative fuel for the transport sector. Nevertheless, the higher viscosity and density of biodiesel pose some acute problems when it is used in an unmodified engine. Taking this into consideration, this study has been focused on two objectives. The first objective is to identify the effect of temperature on density and viscosity for a variety of biodiesels and also to develop a correlation between density and viscosity for thes...
Bayesian Genomic Prediction with Genotype × Environment Interaction Kernel Models
Cuevas, Jaime; Crossa, José; Montesinos-López, Osval A.; Burgueño, Juan; Pérez-Rodríguez, Paulino; de los Campos, Gustavo
2016-01-01
The phenomenon of genotype × environment (G × E) interaction in plant breeding decreases selection accuracy, thereby negatively affecting genetic gains. Several genomic prediction models incorporating G × E have been recently developed and used in genomic selection of plant breeding programs. Genomic prediction models for assessing multi-environment G × E interaction are extensions of a single-environment model, and have advantages and limitations. In this study, we propose two multi-environment Bayesian genomic models: the first model considers genetic effects (u) that can be assessed by the Kronecker product of variance–covariance matrices of genetic correlations between environments and genomic kernels through markers under two linear kernel methods, linear (genomic best linear unbiased predictors, GBLUP) and Gaussian (Gaussian kernel, GK). The other model has the same genetic component as the first model (u) plus an extra component, f, that captures random effects between environments that were not captured by the random effects u. We used five CIMMYT data sets (one maize and four wheat) that were previously used in different studies. Results show that models with G × E always have superior prediction ability than single-environment models, and the higher prediction ability of multi-environment models with u and f over the multi-environment model with only u occurred 85% of the time with GBLUP and 45% of the time with GK across the five data sets. The latter result indicated that including the random effect f is still beneficial for increasing prediction ability after adjusting by the random effect u. PMID:27793970
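The Kronecker construction of the genetic covariance described for the first model can be sketched with toy matrices; the line count, marker count, and environment correlation values below are illustrative, not from the CIMMYT data sets:

```python
import numpy as np

# Toy GBLUP-style genomic kernel for 4 lines from a centered marker
# matrix, and a genetic correlation matrix between 2 environments.
# The covariance of the stacked genetic effects u (environments x
# lines) is their Kronecker product, as in the first proposed model.
rng = np.random.default_rng(2)
M = rng.standard_normal((4, 50))        # centered marker matrix (toy)
G = M @ M.T / M.shape[1]                # genomic relationship kernel
E = np.array([[1.0, 0.6],               # between-environment genetic
              [0.6, 1.0]])              # correlations (illustrative)
K = np.kron(E, G)                       # covariance of u
print(K.shape)  # (8, 8)
```

Because both factors are positive semi-definite, so is their Kronecker product, which is what makes K a valid covariance for the random effect u; the second model adds a further random effect f on top of this structure.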
Bayesian Genomic Prediction with Genotype × Environment Interaction Kernel Models
Directory of Open Access Journals (Sweden)
Jaime Cuevas
2017-01-01
Full Text Available The phenomenon of genotype × environment (G × E interaction in plant breeding decreases selection accuracy, thereby negatively affecting genetic gains. Several genomic prediction models incorporating G × E have been recently developed and used in genomic selection of plant breeding programs. Genomic prediction models for assessing multi-environment G × E interaction are extensions of a single-environment model, and have advantages and limitations. In this study, we propose two multi-environment Bayesian genomic models: the first model considers genetic effects ( u that can be assessed by the Kronecker product of variance–covariance matrices of genetic correlations between environments and genomic kernels through markers under two linear kernel methods, linear (genomic best linear unbiased predictors, GBLUP and Gaussian (Gaussian kernel, GK. The other model has the same genetic component as the first model ( u plus an extra component, f, that captures random effects between environments that were not captured by the random effects u . We used five CIMMYT data sets (one maize and four wheat that were previously used in different studies. Results show that models with G × E always have superior prediction ability than single-environment models, and the higher prediction ability of multi-environment models with u and f over the multi-environment model with only u occurred 85% of the time with GBLUP and 45% of the time with GK across the five data sets. The latter result indicated that including the random effect f is still beneficial for increasing prediction ability after adjusting by the random effect u .
Bayesian Genomic Prediction with Genotype × Environment Interaction Kernel Models.
Cuevas, Jaime; Crossa, José; Montesinos-López, Osval A; Burgueño, Juan; Pérez-Rodríguez, Paulino; de Los Campos, Gustavo
2017-01-05
The phenomenon of genotype × environment (G × E) interaction in plant breeding decreases selection accuracy, thereby negatively affecting genetic gains. Several genomic prediction models incorporating G × E have been recently developed and used in genomic selection of plant breeding programs. Genomic prediction models for assessing multi-environment G × E interaction are extensions of a single-environment model, and have advantages and limitations. In this study, we propose two multi-environment Bayesian genomic models: the first model considers genetic effects (u) that can be assessed by the Kronecker product of variance-covariance matrices of genetic correlations between environments and genomic kernels through markers under two kernel methods, linear (genomic best linear unbiased predictors, GBLUP) and Gaussian (Gaussian kernel, GK). The other model has the same genetic component as the first model (u) plus an extra component, f, that captures random effects between environments that were not captured by the random effects u. We used five CIMMYT data sets (one maize and four wheat) that were previously used in different studies. Results show that models with G × E always have higher prediction ability than single-environment models, and the higher prediction ability of multi-environment models with u and f over the multi-environment model with only u occurred 85% of the time with GBLUP and 45% of the time with GK across the five data sets. The latter result indicated that including the random effect f is still beneficial for increasing prediction ability after adjusting by the random effect u. Copyright © 2017 Cuevas et al.
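The u-component shared by the two models above — genetic effects whose covariance is the Kronecker product of a between-environment genetic correlation matrix and a genomic kernel built from markers — can be sketched numerically. The dimensions, marker data, and correlation values below are invented for illustration and are not taken from the CIMMYT data sets:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: 3 environments, 50 lines, 200 markers.
n_env, n_lines, n_markers = 3, 50, 200
X = rng.standard_normal((n_lines, n_markers))

# Genomic relationship kernel (the GBLUP linear kernel): G = XX' / m.
G = X @ X.T / n_markers

# Genetic correlation matrix between environments (assumed values, for illustration).
E = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.0, 0.5],
              [0.3, 0.5, 1.0]])

# Covariance of the stacked genetic effects u across environment-line combinations:
# Cov(u) = E kron G, the multi-environment GBLUP structure described in the abstract.
K = np.kron(E, G)

assert K.shape == (n_env * n_lines, n_env * n_lines)
# The (i, j) environment block of K equals E[i, j] * G.
assert np.allclose(K[:n_lines, n_lines:2 * n_lines], E[0, 1] * G)
```

The Gaussian kernel (GK) variant would simply replace `G` with a kernel of pairwise marker distances; the Kronecker construction is unchanged.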
Wireless model predictive control: Application to water-level system
Directory of Open Access Journals (Sweden)
Ramdane Hedjar
2016-04-01
This article deals with wireless model predictive control of a water-level control system. The objective of the model predictive control algorithm is to constrain the control signal inside saturation limits and maintain the water level around the desired level. Linear modeling of any nonlinear plant leads to parameter uncertainties and non-modeled dynamics in the linearized mathematical model. These uncertainties induce a steady-state error in the output response of the water level. To eliminate this steady-state error and increase the robustness of the control algorithm, an integral action is included in the closed loop. To control the water-level system remotely, the communication between the controller and the process is performed using a radio channel. To validate the proposed scheme, simulation and real-time implementation of the algorithm have been conducted, and the results show the effectiveness of wireless model predictive control with integral action.
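The receding-horizon idea behind model predictive control — optimize a short input sequence against a plant model, apply only the first input, then re-optimize at the next sample — can be illustrated with a toy discrete-time tank model. The coefficients, horizon, and brute-force enumeration below are illustrative assumptions, not the controller from the article (which would typically solve a quadratic program):

```python
import numpy as np
from itertools import product

# Hypothetical linearized tank model: level x[k+1] = a*x[k] + b*u[k],
# with the pump input saturated at u_max.
a, b, u_max, target = 0.95, 0.08, 10.0, 4.0

def mpc_step(x, horizon=3, candidates=np.linspace(0.0, u_max, 11)):
    """Receding horizon: enumerate input sequences, return the first input of the best."""
    best_cost, best_u0 = np.inf, 0.0
    for seq in product(candidates, repeat=horizon):
        xk, cost = x, 0.0
        for u in seq:                       # forward-simulate the model
            xk = a * xk + b * u
            cost += (xk - target) ** 2 + 0.01 * u ** 2
        if cost < best_cost:
            best_cost, best_u0 = cost, seq[0]
    return best_u0

# Closed loop: apply only the first input each step, then re-optimize.
x = 0.0
for _ in range(60):
    u = mpc_step(x)
    assert 0.0 <= u <= u_max    # saturation limits respected by construction
    x = a * x + b * u

assert abs(x - target) < 0.5    # level settles near the set point
```

The input constraint is enforced simply by restricting the candidate set, which is the essential reason MPC handles saturation more naturally than an unconstrained linear controller.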
Pietroni, Carlotta; Andersen, Jeppe D; Johansen, Peter; Andersen, Mikkel M; Harder, Stine; Paulsen, Rasmus; Børsting, Claus; Morling, Niels
2014-07-01
In two recent studies of Spanish individuals, gender was suggested as a factor that contributes to human eye colour variation. However, gender did not improve the predictive accuracy on blue, intermediate and brown eye colours when gender was included in the IrisPlex model. In this study, we investigate the role of gender as a factor that contributes to eye colour variation and suggest that the gender effect on eye colour is population specific. A total of 230 Italian individuals were typed for the six IrisPlex SNPs (rs12913832, rs1800407, rs12896399, rs1393350, rs16891982 and rs12203592). A quantitative eye colour score (Pixel Index of the Eye: PIE-score) was calculated based on digital eye images using the custom-made DIAT software. The results were compared with those of Danish and Swedish population samples. As expected, we found HERC2 rs12913832 to be the main predictor of human eye colour independently of ancestry. Furthermore, we found gender to be significantly associated with quantitative eye colour measurements in the Italian population sample. We found that the association was statistically significant only among Italian individuals typed as heterozygote GA for HERC2 rs12913832. Interestingly, we did not observe the same association in the Danish and Swedish population samples. This indicated that the gender effect on eye colour is population specific. We estimated the effect of gender on quantitative eye colour in the Italian population sample to be 4.9%. Among gender and the IrisPlex SNPs, gender ranked as the second most important predictor of human eye colour variation in Italians after HERC2 rs12913832. We, furthermore, tested the five lower-ranked IrisPlex predictors, and evaluated all 3^6 (729) possible genotype combinations of the IrisPlex assay and their corresponding predictive values using the IrisPlex prediction model [4]. The results suggested that a maximum of three (rs12913832, rs1800407, rs16891982) of the six IrisPlex SNPs are useful in practical
Modelling the predictive performance of credit scoring
Directory of Open Access Journals (Sweden)
Shi-Wei Shen
2013-07-01
Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A goodness-of-fit test demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI) together with micro- and macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality of the study was that three models were developed to predict corporate firms’ defaults based on different microeconomic and macroeconomic factors such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit measures and receiver operating characteristics during the examination of the robustness of the predictive power of these factors.
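A minimal sketch of the kind of logistic-regression scoring model evaluated above, fitted and scored on synthetic data. The covariates and coefficients are invented placeholders standing in for factors such as the TCRI and macroeconomic variables; the fit uses plain gradient ascent rather than the study's estimation software:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic sample: 500 firms, 3 standardized covariates, illustrative true effects.
n, beta_true = 500, np.array([1.5, -1.0, 0.8])
p_default = 1.0 / (1.0 + np.exp(-((X := rng.standard_normal((n, 3))) @ beta_true - 1.0)))
y = rng.binomial(1, p_default)          # 1 = default observed

# Fit logistic regression by gradient ascent on the mean log-likelihood.
Xb = np.column_stack([np.ones(n), X])   # add intercept column
w = np.zeros(4)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(Xb @ w)))
    w += 0.5 * Xb.T @ (y - p) / n

# Predictive performance: AUC via the pairwise rank formula, as one of the
# "receiver operating characteristic" style checks the study mentions.
scores = Xb @ w
pos, neg = scores[y == 1], scores[y == 0]
auc = (pos[:, None] > neg[None, :]).mean()
assert auc > 0.7   # the fitted scorecard separates defaults from non-defaults
```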
Predicted solar cell edge radiation effects
International Nuclear Information System (INIS)
Gates, M.T.
1993-01-01
The Advanced Solar Cell Orbital Test (ASCOT) will test six types of solar cells in a high energy proton environment. During the design of the experiment a question was raised about the effects of proton radiation incident on the edge of the solar cells and whether edge radiation shielding was required. Historical geosynchronous data indicated that edge radiation damage is not detectable over the normal end of life solar cell degradation; however because the ASCOT radiation environment has a much higher and more energetic fluence of protons, considerably more edge damage is expected. A computer analysis of the problem was made by modeling the expected radiation damage at the cell edge and using a network model of small interconnected solar cells to predict degradation in the cell's electrical output. The model indicated that the deepest penetration of edge radiation was at the top of the cell near the junction where the protons have access to the cell through the low density cell/cover adhesive layer. The network model indicated that the cells could tolerate high fluences at their edge as long as there was high electrical resistance between the edge radiated region and the contact system on top of the cell. The predicted edge radiation related loss was less than 2% of maximum power for GaAs/Ge solar cells. As a result, no edge radiation protection was used for ASCOT.
Model Predictive Control of Sewer Networks
DEFF Research Database (Denmark)
Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik
2016-01-01
The developments in solutions for management of urban drainage are of vital importance, as the amount of sewer water from urban areas continues to increase due to the increase of the world’s population and the change in the climate conditions. How a sewer network is structured, monitored and controlled have thus become essential factors for efficient performance of waste water treatment plants. This paper examines methods for simplified modelling and controlling a sewer network. A practical approach to the problem is used by analysing a simplified design model, which is based on the Barcelona benchmark model. Due to the inherent constraints the applied approach is based on Model Predictive Control.
Bayesian Predictive Models for Rayleigh Wind Speed
DEFF Research Database (Denmark)
Shahirinia, Amir; Hajizadeh, Amin; Yu, David C
2017-01-01
One of the major challenges with the increase in wind power generation is the uncertain nature of wind speed. So far the uncertainty about wind speed has been presented through probability distributions. Also the existing models that consider the uncertainty of the wind speed primarily view … The predictive model of the wind speed aggregates the non-homogeneous distributions into a single continuous distribution. Therefore, the result is able to capture the variation among the probability distributions of the wind speeds at the turbines’ locations in a wind farm. More specifically, instead of using a wind speed distribution whose parameters are known or estimated, the parameters are considered as random whose variations are according to probability distributions. The Bayesian predictive model for a Rayleigh which only has a single model scale parameter has been proposed. Also closed-form posterior …
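For a Rayleigh likelihood the Bayesian treatment sketched above is available in closed form: an inverse-gamma prior on the squared scale parameter is conjugate, so the posterior is again inverse-gamma and predictive draws are straightforward. The prior values and the true scale in the sketch below are arbitrary choices for illustration, not figures from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated wind-speed data from a Rayleigh distribution (scale chosen arbitrarily).
sigma_true = 7.0
x = rng.rayleigh(scale=sigma_true, size=300)

# Conjugacy: with an inverse-gamma prior IG(a0, b0) on sigma^2, the Rayleigh
# likelihood gives the closed-form posterior IG(a0 + n, b0 + sum(x^2)/2).
a0, b0 = 2.0, 10.0                      # weak prior (assumed values)
a_n = a0 + x.size
b_n = b0 + np.sum(x**2) / 2.0

# Posterior mean of sigma^2 is b_n / (a_n - 1); it recovers the true scale.
sigma2_post_mean = b_n / (a_n - 1.0)
assert abs(np.sqrt(sigma2_post_mean) - sigma_true) < 1.0

# Predictive draws: sample sigma^2 from the posterior (b_n / Gamma(a_n) is
# inverse-gamma), then wind speed from the Rayleigh with that scale.
sigma2_draws = b_n / rng.gamma(a_n, 1.0, size=5000)
x_pred = rng.rayleigh(scale=np.sqrt(sigma2_draws))
assert x_pred.shape == (5000,)
```

Marginalizing over the scale in this way is what lets the predictive distribution aggregate parameter uncertainty into a single continuous wind-speed distribution.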
Comparison of two ordinal prediction models
DEFF Research Database (Denmark)
Kattan, Michael W; Gerds, Thomas A
2015-01-01
… system (i.e. old or new), such as the level of evidence for one or more factors included in the system or the general opinions of expert clinicians. However, given the major objective of estimating prognosis on an ordinal scale, we argue that the rival staging system candidates should be compared on their ability to predict outcome. We sought to outline an algorithm that would compare two rival ordinal systems on their predictive ability. RESULTS: We devised an algorithm based largely on the concordance index, which is appropriate for comparing two models in their ability to rank observations. We demonstrate our algorithm with a prostate cancer staging system example. CONCLUSION: We have provided an algorithm for selecting the preferred staging system based on prognostic accuracy. It appears to be useful for the purpose of selecting between two ordinal prediction models.
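The concordance index at the core of the algorithm above can be computed directly: over all patient pairs with different outcomes, count how often the staging system assigns the higher stage to the worse outcome, with ties counted as half. The staging data below are simulated purely for illustration, not the prostate cancer example from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

def concordance_index(stage, outcome):
    """Fraction of usable pairs where the higher stage goes with the worse outcome."""
    conc = ties = usable = 0
    n = len(stage)
    for i in range(n):
        for j in range(i + 1, n):
            if outcome[i] == outcome[j]:
                continue                  # pair carries no ranking information
            usable += 1
            # the worse outcome should receive the higher stage
            hi, lo = (i, j) if outcome[i] > outcome[j] else (j, i)
            if stage[hi] > stage[lo]:
                conc += 1
            elif stage[hi] == stage[lo]:
                ties += 1
    return (conc + 0.5 * ties) / usable

# Two hypothetical rival ordinal systems scored on the same 200 patients.
outcome = rng.integers(0, 2, size=200)              # 1 = event (e.g. recurrence)
stage_a = outcome * 2 + rng.integers(0, 2, 200)     # informative system
stage_b = rng.integers(1, 5, size=200)              # uninformative system

c_a = concordance_index(stage_a, outcome)
c_b = concordance_index(stage_b, outcome)
assert c_a > c_b     # prefer the system with the higher concordance index
```

A perfectly discriminating system scores 1.0 and a random one about 0.5, which is what makes the index a natural basis for choosing between rival staging systems.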
Predicting water main failures using Bayesian model averaging and survival modelling approach
International Nuclear Information System (INIS)
Kabir, Golam; Tesfamariam, Solomon; Sadiq, Rehan
2015-01-01
To develop an effective preventive or proactive repair and replacement action plan, water utilities often rely on water main failure prediction models. However, in predicting the failure of water mains, uncertainty is inherent regardless of the quality and quantity of data used in the model. To improve the understanding of water main failure, a Bayesian framework is developed for predicting the failure of water mains considering uncertainties. In this study, the Bayesian model averaging method (BMA) is presented to identify the influential pipe-dependent and time-dependent covariates considering model uncertainties, whereas the Bayesian Weibull Proportional Hazard Model (BWPHM) is applied to develop the survival curves and to predict the failure rates of water mains. To accredit the proposed framework, it is implemented to predict the failure of cast iron (CI) and ductile iron (DI) pipes of the water distribution network of the City of Calgary, Alberta, Canada. Results indicate that the predicted 95% uncertainty bounds of the proposed BWPHMs effectively capture the observed breaks for both CI and DI water mains. Moreover, the proposed BWPHMs perform better than the Cox Proportional Hazard Model (Cox-PHM), as they consider a Weibull distribution for the baseline hazard function and account for model uncertainties. - Highlights: • Prioritize rehabilitation and replacements (R/R) strategies of water mains. • Consider the uncertainties for the failure prediction. • Improve the prediction capability of the water mains failure models. • Identify the influential and appropriate covariates for different models. • Determine the effects of the covariates on failure
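The Weibull proportional hazards form behind the survival curves above multiplies a Weibull baseline hazard by an exponential function of the covariates, so the survival function is S(t|x) = exp(-(t/λ)^k · exp(xᵀβ)). The covariates and parameter values below are invented placeholders, not estimates from the Calgary network:

```python
import numpy as np

# Weibull proportional hazards survival function:
# S(t | x) = exp(-(t / lam)^k * exp(x @ beta)).
def survival(t, x, beta, shape_k, scale_lam):
    return np.exp(-((t / scale_lam) ** shape_k) * np.exp(x @ beta))

# Hypothetical parameters for a cast-iron main (illustrative, not fitted values):
beta = np.array([0.03, 0.5])      # effects of pipe length (per 10 m) and soil corrosivity
shape_k, scale_lam = 1.8, 60.0    # increasing hazard, characteristic life ~60 years

x_benign = np.array([2.0, 0.0])   # short pipe, benign soil
x_harsh = np.array([10.0, 1.0])   # long pipe, corrosive soil

t = np.linspace(0.0, 100.0, 101)
s_benign = survival(t, x_benign, beta, shape_k, scale_lam)
s_harsh = survival(t, x_harsh, beta, shape_k, scale_lam)

assert s_benign[0] == 1.0 and np.all(np.diff(s_benign) <= 0)  # valid survival curve
assert np.all(s_harsh[1:] < s_benign[1:])  # worse covariates -> earlier failures
```

With shape k > 1 the baseline hazard increases with pipe age, which is the usual modelling choice for ageing infrastructure.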
Predictive modeling in homogeneous catalysis: a tutorial
Maldonado, A.G.; Rothenberg, G.
2010-01-01
Predictive modeling has become a practical research tool in homogeneous catalysis. It can help to pinpoint ‘good regions’ in the catalyst space, narrowing the search for the optimal catalyst for a given reaction. Just like any other new idea, in silico catalyst optimization is accepted by some
Feedback model predictive control by randomized algorithms
Batina, Ivo; Stoorvogel, Antonie Arij; Weiland, Siep
2001-01-01
In this paper we present a further development of an algorithm for stochastic disturbance rejection in model predictive control with input constraints based on randomized algorithms. The algorithm presented in our work can solve the problem of stochastic disturbance rejection approximately but with
A Robustly Stabilizing Model Predictive Control Algorithm
Ackmece, A. Behcet; Carson, John M., III
2007-01-01
A model predictive control (MPC) algorithm that differs from prior MPC algorithms has been developed for controlling an uncertain nonlinear system. This algorithm guarantees the resolvability of an associated finite-horizon optimal-control problem in a receding-horizon implementation.
Hierarchical Model Predictive Control for Resource Distribution
DEFF Research Database (Denmark)
Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob
2010-01-01
This paper deals with hierarchical model predictive control (MPC) of distributed systems. A three-level hierarchical approach is proposed, consisting of a high level MPC controller, a second level of so-called aggregators, controlled by an online MPC-like algorithm, and a lower level of autonomous...
Model Predictive Control based on Finite Impulse Response Models
DEFF Research Database (Denmark)
Prasath, Guru; Jørgensen, John Bagterp
2008-01-01
We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations ...
Using Pareto points for model identification in predictive toxicology
2013-01-01
Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design or food protection to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature associated with the best practice of model generation and data integration but management and automated identification of relevant models from available collections of models is still an open problem. Currently, the decision on which model should be used for a new chemical compound is left to users. This paper intends to initiate the discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show a great potential for automated model identification methods in predictive toxicology. PMID:23517649
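Pareto optimality, the basis of the model-identification algorithm above, reduces to filtering a model collection down to its non-dominated candidates: a model is discarded only if some other model is at least as good on every criterion and strictly better on one. The two criteria and the scores below are made up for illustration, not the paper's actual descriptors:

```python
import numpy as np

def pareto_front(points):
    """Indices of non-dominated points when every coordinate is to be minimized."""
    idx = []
    for i, p in enumerate(points):
        dominated = any(np.all(q <= p) and np.any(q < p) for q in points)
        if not dominated:
            idx.append(i)
    return idx

# Hypothetical model collection: each candidate is scored on two criteria to
# minimize, e.g. prediction error near the query compound and model age.
scores = np.array([[0.10, 5.0],
                   [0.12, 2.0],
                   [0.30, 1.0],
                   [0.25, 4.0],   # dominated by [0.12, 2.0]
                   [0.40, 6.0]])  # dominated by several models

front = pareto_front(scores)
assert front == [0, 1, 2]   # only the non-dominated models remain candidates
```

Restricting the choice to the Pareto front is what lets the method recommend a model for a new chemical compound without collapsing the criteria into a single arbitrary weighting.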
Semantic Similarity, Predictability, and Models of Sentence Processing
Roland, Douglas; Yun, Hongoak; Koenig, Jean-Pierre; Mauner, Gail
2012-01-01
The effects of word predictability and shared semantic similarity between a target word and other words that could have taken its place in a sentence on language comprehension are investigated using data from a reading time study, a sentence completion study, and linear mixed-effects regression modeling. We find that processing is facilitated if…
Disease prediction models and operational readiness.
Directory of Open Access Journals (Sweden)
Courtney D Corley
The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. We define a disease event to be a biological event with focus on the One Health paradigm. These events are characterized by evidence of infection and/or disease condition. We reviewed models that attempted to predict a disease event, not merely its transmission dynamics, and we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). We searched commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. After removal of duplications and extraneous material, a core collection of 6,524 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. As a result, we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models, classified as one or more of the following: event prediction (4), spatial (26), ecological niche (28), diagnostic or clinical (6), spread or response (9), and reviews (3). The model parameters (e.g., etiology, climatic, spatial, cultural) and data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) were recorded and reviewed. A component of this review is the identification of verification and validation (V&V) methods applied to each model, if any V&V method was reported. All models were classified as either having undergone Some Verification or Validation method, or No Verification or Validation. We close by outlining an initial set of operational readiness level guidelines for disease prediction models based upon established Technology
Caries risk assessment models in caries prediction
Directory of Open Access Journals (Sweden)
Amila Zukanović
2013-11-01
Objective. The aim of this research was to assess the efficiency of different multifactor models in caries prediction. Material and methods. Data from the questionnaire and objective examination of 109 examinees was entered into the Cariogram, Previser and Caries-Risk Assessment Tool (CAT) multifactor risk assessment models. Caries risk was assessed with the help of all three models for each patient, classifying them as low, medium or high-risk patients. The development of new caries lesions over a period of three years [Decay Missing Filled Tooth (DMFT) increment = difference between Decay Missing Filled Tooth Surface (DMFTS) index at baseline and follow up] provided for examination of the predictive capacity of the different multifactor models. Results. The data gathered showed that different multifactor risk assessment models give significantly different results (Friedman test: Chi square = 100.073, p=0.000). Cariogram is the model which identified the majority of examinees as medium-risk patients (70%). The other two models were more radical in risk assessment, giving more unfavorable risk profiles for patients. In only 12% of the patients did the three multifactor models assess the risk in the same way. Previser and CAT gave the same results in 63% of cases – the Wilcoxon test showed that there is no statistically significant difference in caries risk assessment between these two models (Z = -1.805, p=0.071). Conclusions. Evaluation of three different multifactor caries risk assessment models (Cariogram, PreViser and CAT) showed that only the Cariogram can successfully predict new caries development in 12-year-old Bosnian children.
Denkins, P.; Badhwar, G.; Obot, V.; Wilson, B.; Jejelewo, O.
2001-01-01
NASA is very interested in improving its ability to monitor and forecast the radiation levels that pose a health risk to space-walking astronauts as they construct the International Space Station and astronauts that will participate in long-term and deep-space missions. Human exploratory missions to the moon and Mars within the next quarter century will expose crews to transient radiation from solar particle events which include high-energy galactic cosmic rays and high-energy protons. Because the radiation levels in space are high and solar activity is presently unpredictable, adequate shielding is needed to minimize the deleterious health effects of exposure to radiation. Today, numerous models have been developed and used to predict radiation exposure. Such a model is the Space Environment Information Systems (SPENVIS) modeling program, developed by the Belgian Institute for Space Aeronomy. SPENVIS, which has been assessed to be an excellent tool in characterizing the radiation environment for microelectronics and investigating orbital debris, is being evaluated for its usefulness in determining the dose and dose-equivalent for human exposure. Thus far, the calculations for dose-depth relations under varying shielding conditions have been in agreement with calculations done using HZETRN and PDOSE, which are well-known and widely used models for characterizing the environments for human exploratory missions. There is disagreement when assessing the impact of secondary radiation particles since SPENVIS does a crude estimation of the secondary radiation particles when calculating LET versus flux. SPENVIS was used to model dose-depth relations for the blood-forming organs. Radiation sickness and cancer are life-threatening consequences resulting from radiation exposure. In space, exposure to radiation generally includes all of the critical organs. Biological and toxicological impacts have been included for discussion along with alternative risk mitigation
Predicting_Systemic_Toxicity_Effects_ArchTox_2017_Data
U.S. Environmental Protection Agency — In an effort to address a major challenge in chemical safety assessment, alternative approaches for characterizing systemic effect levels, a predictive model was...
Penny, Melissa A; Verity, Robert; Bever, Caitlin A; Sauboin, Christophe; Galactionova, Katya; Flasche, Stefan; White, Michael T; Wenger, Edward A; Van de Velde, Nicolas; Pemberton-Ross, Peter; Griffin, Jamie T; Smith, Thomas A; Eckhoff, Philip A; Muhib, Farzana; Jit, Mark; Ghani, Azra C
2016-01-23
The phase 3 trial of the RTS,S/AS01 malaria vaccine candidate showed modest efficacy of the vaccine against Plasmodium falciparum malaria, but was not powered to assess mortality endpoints. Impact projections and cost-effectiveness estimates for longer timeframes than the trial follow-up and across a range of settings are needed to inform policy recommendations. We aimed to assess the public health impact and cost-effectiveness of routine use of the RTS,S/AS01 vaccine in African settings. We compared four malaria transmission models and their predictions to assess vaccine cost-effectiveness and impact. We used trial data for follow-up of 32 months or longer to parameterise vaccine protection in the group aged 5-17 months. Estimates of cases, deaths, and disability-adjusted life-years (DALYs) averted were calculated over a 15 year time horizon for a range of levels of Plasmodium falciparum parasite prevalence in 2-10 year olds (PfPR2-10; range 3-65%). We considered two vaccine schedules: three doses at ages 6, 7·5, and 9 months (three-dose schedule, 90% coverage) and including a fourth dose at age 27 months (four-dose schedule, 72% coverage). We estimated cost-effectiveness in the presence of existing malaria interventions for vaccine prices of US$2-10 per dose. In regions with a PfPR2-10 of 10-65%, RTS,S/AS01 is predicted to avert a median of 93,940 (range 20,490-126,540) clinical cases and 394 (127-708) deaths for the three-dose schedule, or 116,480 (31,450-160,410) clinical cases and 484 (189-859) deaths for the four-dose schedule, per 100,000 fully vaccinated children. A positive impact is also predicted at a PfPR2-10 of 5-10%, but there is little impact at a prevalence of lower than 3%. At $5 per dose and a PfPR2-10 of 10-65%, we estimated a median incremental cost-effectiveness ratio compared with current interventions of $30 (range 18-211) per clinical case averted and $80 (44-279) per DALY averted for the three-dose schedule, and of $25 (16-222) and $87 (48
Electrostatic ion thrusters - towards predictive modeling
Energy Technology Data Exchange (ETDEWEB)
Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)
2014-02-15
The development of electrostatic ion thrusters so far has mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This resulted in considerable effort regarding prototype design, construction and testing and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and space craft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules a new quality in the description of electrostatic thrusters can be reached. These open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)
Characterizing Attention with Predictive Network Models.
Rosenberg, M D; Finn, E S; Scheinost, D; Constable, R T; Chun, M M
2017-04-01
Recent work shows that models based on functional connectivity in large-scale brain networks can predict individuals' attentional abilities. While being some of the first generalizable neuromarkers of cognitive function, these models also inform our basic understanding of attention, providing empirical evidence that: (i) attention is a network property of brain computation; (ii) the functional architecture that underlies attention can be measured while people are not engaged in any explicit task; and (iii) this architecture supports a general attentional ability that is common to several laboratory-based tasks and is impaired in attention deficit hyperactivity disorder (ADHD). Looking ahead, connectivity-based predictive models of attention and other cognitive abilities and behaviors may potentially improve the assessment, diagnosis, and treatment of clinical dysfunction. Copyright © 2017 Elsevier Ltd. All rights reserved.
Prediction models : the right tool for the right problem
Kappen, Teus H.; Peelen, Linda M.
2016-01-01
PURPOSE OF REVIEW: Perioperative prediction models can help to improve personalized patient care by providing individual risk predictions to both patients and providers. However, the scientific literature on prediction model development and validation can be quite technical and challenging to
Plant water potential improves prediction of empirical stomatal models.
Directory of Open Access Journals (Sweden)
William R L Anderegg
Climate change is expected to lead to increases in drought frequency and severity, with deleterious effects on many ecosystems. Stomatal responses to changing environmental conditions form the backbone of all ecosystem models, but are based on empirical relationships and are not well-tested during drought conditions. Here, we use a dataset of 34 woody plant species spanning global forest biomes to examine the effect of leaf water potential on stomatal conductance and test the predictive accuracy of three major stomatal models and a recently proposed model. We find that current leaf-level empirical models have consistent biases of over-prediction of stomatal conductance during dry conditions, particularly at low soil water potentials. Furthermore, the recently proposed stomatal conductance model yields increases in predictive capability compared to current models, and with particular improvement during drought conditions. Our results reveal that including stomatal sensitivity to declining water potential and consequent impairment of plant water transport will improve predictions during drought conditions and show that many biomes contain a diversity of plant stomatal strategies that range from risky to conservative stomatal regulation during water stress. Such improvements in stomatal simulation are greatly needed to help unravel and predict the response of ecosystems to future climate extremes.
Ivankina, T. I.; Zel, I. Yu.; Lokajicek, T.; Kern, H.; Lobanov, K. V.; Zharikov, A. V.
2017-08-01
In this paper we present experimental and theoretical studies on a highly anisotropic layered rock sample characterized by alternating layers of biotite and muscovite (retrogressed from sillimanite) and plagioclase and quartz, respectively. We applied two different experimental methods to determine seismic anisotropy at pressures up to 400 MPa: (1) measurement of P- and S-wave phase velocities on a cube in three foliation-related orthogonal directions and (2) measurement of P-wave group velocities on a sphere in 132 directions. The combination of the spatial distribution of P-wave velocities on the sphere (converted to phase velocities) with S-wave velocities of three orthogonal structural directions on the cube made it possible to calculate the bulk elastic moduli of the anisotropic rock sample. On the basis of the crystallographic preferred orientations (CPOs) of major minerals obtained by time-of-flight neutron diffraction, effective media modeling was performed using different inclusion methods and averaging procedures. The implementation of a nonlinear approximation of the P-wave velocity-pressure relation was applied to estimate the mineral matrix properties and the orientation distribution of microcracks. Comparison of theoretical calculations of elastic properties of the mineral matrix with those derived from the nonlinear approximation showed discrepancies in elastic moduli and P-wave velocities of about 10%. The observed discrepancies between the effective media modeling and ultrasonic velocity data are a consequence of the inhomogeneous structure of the sample and inability to perform long-wave approximation. Furthermore, small differences between elastic moduli predicted by the different theoretical models, including specific fabric characteristics such as crystallographic texture, grain shape and layering were observed. It is shown that the bulk elastic anisotropy of the sample is basically controlled by the CPO of biotite and muscovite and their volume
Neuro-fuzzy modeling in bankruptcy prediction
Directory of Open Access Journals (Sweden)
Vlachos D.
2003-01-01
Full Text Available For the past 30 years the problem of bankruptcy prediction has been thoroughly studied. From the paper of Altman in 1968 to the recent papers in the '90s, the progress in prediction accuracy was not satisfactory. This paper investigates an alternative modeling of the system (firm), combining neural networks and fuzzy controllers, i.e. using neuro-fuzzy models. Classical modeling is based on mathematical models that describe the behavior of the firm under consideration. The main idea of fuzzy control, on the other hand, is to build a model of a human control expert who is capable of controlling the process without thinking in terms of a mathematical model. This control expert specifies his control actions in the form of linguistic rules. These control rules are translated into the framework of fuzzy set theory, providing a calculus which can simulate the behavior of the control expert and enhance its performance. The accuracy of the model is studied using datasets from previous research papers.
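The linguistic-rule idea sketched in this abstract can be made concrete with a toy zero-order Sugeno fuzzy system. The two financial ratios, the membership functions and the rule consequents below are illustrative assumptions, not values or rules from the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def bankruptcy_risk(liquidity, profitability):
    """Zero-order Sugeno inference: each linguistic rule fires with strength
    min(antecedent memberships) and contributes a crisp risk consequent."""
    low_liq   = tri(liquidity, -1.0, 0.0, 1.0)
    high_liq  = tri(liquidity,  0.5, 1.5, 2.5)
    low_prof  = tri(profitability, -0.3, 0.0, 0.3)
    high_prof = tri(profitability,  0.1, 0.3, 0.5)

    # Rule base: (firing strength, risk consequent)
    rules = [
        (min(low_liq,  low_prof),  0.9),  # low liquidity AND low profit -> high risk
        (min(high_liq, high_prof), 0.1),  # healthy firm -> low risk
        (min(low_liq,  high_prof), 0.5),  # mixed signals -> medium risk
        (min(high_liq, low_prof),  0.5),
    ]
    num = sum(w * r for w, r in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.5  # neutral default when no rule fires
```

In a neuro-fuzzy model, the membership parameters and consequents above would be tuned from data by a neural learning rule rather than fixed by hand.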
The predictive performance and stability of six species distribution models.
Duan, Ren-Yan; Kong, Xiao-Quan; Huang, Min-Yi; Fan, Wei-Yi; Wang, Zhi-Gao
2014-01-01
Predicting species' potential geographical ranges with species distribution models (SDMs) is central to understanding their ecological requirements. However, the effects of using different modeling techniques need further investigation. In order to improve the prediction effect, we need to assess the predictive performance and stability of different SDMs. We collected the distribution data of five common tree species (Pinus massoniana, Betula platyphylla, Quercus wutaishanica, Quercus mongolica and Quercus variabilis) and simulated their potential distribution areas using 13 environmental variables and six widely used SDMs: BIOCLIM, DOMAIN, MAHAL, RF, MAXENT, and SVM. Each model run was repeated 100 times (trials). We compared the predictive performance by testing the consistency between observations and simulated distributions and assessed the stability by the standard deviation, coefficient of variation, and the 99% confidence interval of Kappa and AUC values. The mean values of AUC and Kappa from MAHAL, RF, MAXENT, and SVM trials were similar and significantly higher than those from BIOCLIM and DOMAIN trials (p < 0.05). These four SDMs (MAHAL, RF, MAXENT, and SVM) had higher prediction accuracy, smaller confidence intervals, and were more stable and less affected by the random variable (randomly selected pseudo-absence points). According to the prediction performance and stability of the SDMs, we can divide these six SDMs into two categories: a high performance and stability group including MAHAL, RF, MAXENT, and SVM, and a low performance and stability group consisting of BIOCLIM and DOMAIN. We highlight that choosing appropriate SDMs to address a specific problem is an important part of the modeling process.
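The stability assessment described here, repeated trials with freshly drawn pseudo-absence points summarized by the coefficient of variation of AUC, can be sketched as follows. The rank-based AUC is standard; the score distributions, trial count and sample sizes are illustrative assumptions, not the paper's data:

```python
import random

def auc(pos_scores, neg_scores):
    """Rank-based AUC: probability that a presence outscores an absence
    (ties count one half)."""
    wins = ties = 0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

def auc_stability(n_trials=100, n_points=50, seed=0):
    """Repeat the evaluation with re-drawn pseudo-absence scores each trial,
    then summarize AUC by mean, standard deviation and coefficient of
    variation (the stability criteria used in the study)."""
    rng = random.Random(seed)
    aucs = []
    for _ in range(n_trials):
        presences = [rng.uniform(0.5, 1.0) for _ in range(n_points)]        # suitability at presences
        pseudo_absences = [rng.uniform(0.0, 0.7) for _ in range(n_points)]  # random background
        aucs.append(auc(presences, pseudo_absences))
    mean = sum(aucs) / n_trials
    sd = (sum((a - mean) ** 2 for a in aucs) / (n_trials - 1)) ** 0.5
    return mean, sd, sd / mean
```

A "stable" model in the paper's sense is one whose AUC varies little across such trials, i.e. has a small coefficient of variation.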
SHMF: Interest Prediction Model with Social Hub Matrix Factorization
Directory of Open Access Journals (Sweden)
Chaoyuan Cui
2017-01-01
Full Text Available With the development of social networks, microblog has become the major social communication tool. There is a lot of valuable information such as personal preference, public opinion, and marketing in microblog. Consequently, research on user interest prediction in microblog has a positive practical significance. In fact, how to extract information associated with user interest orientation from the constantly updated blog posts is not so easy. Existing prediction approaches based on probabilistic factor analysis use blog posts published by user to predict user interest. However, these methods are not very effective for the users who post less but browse more. In this paper, we propose a new prediction model, which is called SHMF, using social hub matrix factorization. SHMF constructs the interest prediction model by combining the information of blogs posts published by both user and direct neighbors in user’s social hub. Our proposed model predicts user interest by integrating user’s historical behavior and temporal factor as well as user’s friendships, thus achieving accurate forecasts of user’s future interests. The experimental results on Sina Weibo show the efficiency and effectiveness of our proposed model.
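The factorization core that SHMF builds on can be illustrated with a plain SGD matrix factorization over observed user-interest entries. The social-hub weighting, temporal factor and friendship terms of the actual model are omitted here, and all hyperparameters and the toy rating matrix are illustrative:

```python
import random

def factorize(R, k=2, lr=0.02, reg=0.02, epochs=1000, seed=0):
    """Plain SGD matrix factorization R ≈ U·Vᵀ, fitted on the observed
    (non-None) entries only; missing entries can then be predicted as
    dot products of the learned latent factors."""
    rng = random.Random(seed)
    n_users, n_items = len(R), len(R[0])
    U = [[rng.random() for _ in range(k)] for _ in range(n_users)]
    V = [[rng.random() for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for i in range(n_users):
            for j in range(n_items):
                if R[i][j] is None:
                    continue
                err = R[i][j] - sum(U[i][f] * V[j][f] for f in range(k))
                for f in range(k):
                    u, v = U[i][f], V[j][f]
                    U[i][f] += lr * (err * v - reg * u)  # regularized gradient step
                    V[j][f] += lr * (err * u - reg * v)
    return U, V

def predict(U, V, i, j):
    """Predicted interest of user i in topic j."""
    return sum(uf * vf for uf, vf in zip(U[i], V[j]))
```

SHMF extends this by letting posts from direct neighbors in the user's social hub inform the factors of users who post little but browse a lot.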
Wagner, David W; Reed, Matthew P; Chaffin, Don B
2010-11-01
Accurate prediction of foot placements in relation to hand locations during manual materials handling tasks is critical for prospective biomechanical analysis. To address this need, the effects of lifting task conditions and anthropometric variables on foot placements were studied in a laboratory experiment. In total, 20 men and women performed two-handed object transfers that required them to walk to a shelf, lift an object from the shelf at waist height and carry the object to a variety of locations. Five different changes in the direction of progression following the object pickup were used, ranging from 45° to 180° relative to the approach direction. Object weights of 1.0 kg, 4.5 kg, 13.6 kg were used. Whole-body motions were recorded using a 3-D optical retro-reflective marker-based camera system. A new parametric system for describing foot placements, the Quantitative Transition Classification System, was developed to facilitate the parameterisation of foot placement data. Foot placements chosen by the subjects during the transfer tasks appeared to facilitate a change in the whole-body direction of progression, in addition to aiding in performing the lift. Further analysis revealed that five different stepping behaviours accounted for 71% of the stepping patterns observed. More specifically, the most frequently observed behaviour revealed that the orientation of the lead foot during the actual lifting task was primarily affected by the amount of turn angle required after the lift (R(2) = 0.53). One surprising result was that the object mass (scaled by participant body mass) was not found to significantly affect any of the individual step placement parameters. Regression models were developed to predict the most prevalent step placements and are included in this paper to facilitate more accurate human motion simulations and ergonomics analyses of manual material lifting tasks. STATEMENT OF RELEVANCE: This study proposes a method for parameterising the steps
McNellis, B.; Hudiburg, T. W.
2017-12-01
Tree mortality due to drought is predicted to have increasing impacts on ecosystem structure and function during the 21st century. Models can attempt to predict which forests are most at risk from drought, but novel environments may preclude analysis that relies on past observations. The inclusion of more mechanistic detail may reduce uncertainty in predictions, but can also compound model complexity, especially in global models. The Community Land Model version 5 (CLM5), itself a component of the Community Earth System Model (CESM), has recently integrated cohort-based demography into its dynamic vegetation component and is in the process of coupling this demography to a model of plant hydraulic physiology (FATES-Hydro). Previous treatment of drought stress and plant mortality within CLM has been relatively broad, but a detailed hydraulics module represents a key step towards accurate mortality prognosis. Here, we examine the structure of FATES-Hydro with respect to two key physiological attributes: tissue osmotic potentials and embolism refilling. Specifically, we ask how FATES-Hydro captures mechanistic realism within each attribute and how much support there is within the physiological literature for its further elaboration within the model structure. Additionally, connections to broader aspects of carbon metabolism within FATES are explored to better resolve emergent consequences of drought stress on ecosystem function and tree demographics. An on-going field experiment in managed stands of Pinus ponderosa and mixed conifers is assessed for model parameterization and performance across PNW forests, with important implications for future forest management strategy.
Predictive Models for Carcinogenicity and Mutagenicity ...
Mutagenicity and carcinogenicity are endpoints of major environmental and regulatory concern. These endpoints are also important targets for development of alternative methods for screening and prediction due to the large number of chemicals of potential concern and the tremendous cost (in time, money, animals) of rodent carcinogenicity bioassays. Both mutagenicity and carcinogenicity involve complex, cellular processes that are only partially understood. Advances in technologies and generation of new data will permit a much deeper understanding. In silico methods for predicting mutagenicity and rodent carcinogenicity based on chemical structural features, along with current mutagenicity and carcinogenicity data sets, have performed well for local prediction (i.e., within specific chemical classes), but are less successful for global prediction (i.e., for a broad range of chemicals). The predictivity of in silico methods can be improved by improving the quality of the data base and endpoints used for modelling. In particular, in vitro assays for clastogenicity need to be improved to reduce false positives (relative to rodent carcinogenicity) and to detect compounds that do not interact directly with DNA or have epigenetic activities. New assays emerging to complement or replace some of the standard assays include VitotoxTM, GreenScreenGC, and RadarScreen. The needs of industry and regulators to assess thousands of compounds necessitate the development of high-t
Dedes, I.; Dudek, J.
2018-03-01
We examine the effects of the parametric correlations on the predictive capacities of the theoretical modelling keeping in mind the nuclear structure applications. The main purpose of this work is to illustrate the method of establishing the presence and determining the form of parametric correlations within a model as well as an algorithm of elimination by substitution (see text) of parametric correlations. We examine the effects of the elimination of the parametric correlations on the stabilisation of the model predictions further and further away from the fitting zone. It follows that the choice of the physics case and the selection of the associated model are of secondary importance in this case. Under these circumstances we give priority to the relative simplicity of the underlying mathematical algorithm, provided the model is realistic. Following such criteria, we focus specifically on an important but relatively simple case of doubly magic spherical nuclei. To profit from the algorithmic simplicity we chose working with the phenomenological spherically symmetric Woods–Saxon mean-field. We employ two variants of the underlying Hamiltonian, the traditional one involving both the central and the spin orbit potential in the Woods–Saxon form and the more advanced version with the self-consistent density-dependent spin–orbit interaction. We compare the effects of eliminating of various types of correlations and discuss the improvement of the quality of predictions (‘predictive power’) under realistic parameter adjustment conditions.
Disease Prediction Models and Operational Readiness
Energy Technology Data Exchange (ETDEWEB)
Corley, Courtney D.; Pullum, Laura L.; Hartley, David M.; Benedum, Corey M.; Noonan, Christine F.; Rabinowitz, Peter M.; Lancaster, Mary J.
2014-03-19
INTRODUCTION: The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. One of the primary goals of this research was to characterize the viability of biosurveillance models to provide operationally relevant information for decision makers and to identify areas for future research. Two critical characteristics differentiate this work from other infectious disease modeling reviews. First, we reviewed models that attempted to predict the disease event, not merely its transmission dynamics. Second, we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). METHODS: We searched dozens of commercial and government databases and harvested Google search results for eligible models, utilizing terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. The publication dates of the search results are bounded by the dates of coverage of each database and the date on which the search was performed; however, all searching was completed by December 31, 2010. This returned 13,767 webpages and 12,152 citations. After de-duplication and removal of extraneous material, a core collection of 6,503 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. Next, PNNL's IN-SPIRE visual analytics software was used to cross-correlate these publications with the definition of a biosurveillance model, resulting in the selection of 54 documents that matched the criteria. Ten of these documents, however, dealt purely with disease spread models, inactivation of bacteria, or the modeling of human immune system responses to pathogens rather than predicting disease events. As a result, we systematically reviewed 44 papers and the
Nonlinear model predictive control theory and algorithms
Grüne, Lars
2017-01-01
This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...
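The receding-horizon idea at the heart of NMPC can be shown in a few lines: optimize a control sequence over a short horizon, apply only the first input, then repeat from the new state. The plant dynamics, cost weights and control grid below are toy assumptions, and exhaustive enumeration stands in for the nonlinear optimization routine the book discusses:

```python
from itertools import product

def nmpc_step(x, f, horizon=3, candidates=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    """One receding-horizon step: enumerate short control sequences,
    score them with a quadratic stage cost, and return only the first
    input of the best sequence."""
    best_u, best_cost = 0.0, float("inf")
    for seq in product(candidates, repeat=horizon):
        xi, cost = x, 0.0
        for u in seq:
            xi = f(xi, u)
            cost += xi ** 2 + 0.1 * u ** 2  # state penalty + control effort
        if cost < best_cost:
            best_cost, best_u = cost, seq[0]
    return best_u

def run_closed_loop(x0=1.0, steps=30):
    """Closed loop on a toy nonlinear sampled-data plant (illustrative)."""
    f = lambda x, u: x + 0.2 * u + 0.05 * x ** 2  # mildly unstable without control
    x = x0
    for _ in range(steps):
        x = f(x, nmpc_step(x, f))
    return x
```

This sketch has no stabilizing terminal constraint; the book's analysis explains when such unconstrained schemes are nevertheless closed-loop stable.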
Stochastic models for predicting pitting corrosion damage of HLRW containers
International Nuclear Information System (INIS)
Henshall, G.A.
1991-10-01
Stochastic models for predicting aqueous pitting corrosion damage of high-level radioactive-waste containers are described. These models could be used to predict the time required for the first pit to penetrate a container and the increase in the number of breaches at later times, both of which would be useful in the repository system performance analysis. Monte Carlo implementations of the stochastic models are described, and predictions of induction time, survival probability and pit depth distributions are presented. These results suggest that the pit nucleation probability decreases with exposure time and that pit growth may be a stochastic process. The advantages and disadvantages of the stochastic approach, methods for modeling the effects of environment, and plans for future work are discussed
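A minimal Monte Carlo implementation in the spirit described, with a pit nucleation probability that decays with exposure time and stochastic pit growth, might look like the following. Every parameter value is illustrative, not an HLRW container property:

```python
import random

def first_penetration_times(n_containers=200, n_sites=100, wall=10.0,
                            nucleation_p0=0.05, decay=0.9,
                            growth_mu=0.5, growth_sigma=0.2,
                            t_max=200, seed=1):
    """Each surface site may nucleate a pit in year t with probability
    nucleation_p0 * decay**t (decreasing with exposure time, as the
    abstract suggests); existing pits grow by a random yearly increment.
    Returns, per container, the year the first pit breaches the wall
    (None if no breach by t_max)."""
    rng = random.Random(seed)
    times = []
    for _ in range(n_containers):
        depth = [None] * n_sites  # None = pit not yet nucleated
        breach = None
        for t in range(1, t_max + 1):
            p_nucleate = nucleation_p0 * (decay ** t)
            for s in range(n_sites):
                if depth[s] is None:
                    if rng.random() < p_nucleate:
                        depth[s] = 0.0
                else:
                    depth[s] += max(0.0, rng.gauss(growth_mu, growth_sigma))
                    if depth[s] >= wall:
                        breach = t
                        break
            if breach is not None:
                break
        times.append(breach)
    return times

def survival_probability(times, t):
    """Fraction of containers not yet breached by time t."""
    return sum(1 for b in times if b is None or b > t) / len(times)
```

From the sampled breach times one can read off the induction-time distribution and the survival curve used in performance analysis.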
A predictive model for dimensional errors in fused deposition modeling
DEFF Research Database (Denmark)
Stolfi, A.
2015-01-01
values of L (0.254 mm, 0.330 mm) was produced by comparing predicted values with external face-to-face measurements. After removing outliers, the results show that the developed two-parameter model can serve as tool for modeling the FDM dimensional behavior in a wide range of deposition angles....
Predictive Modeling in Actinide Chemistry and Catalysis
Energy Technology Data Exchange (ETDEWEB)
Yang, Ping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-05-16
These are slides from a presentation on predictive modeling in actinide chemistry and catalysis. The following topics are covered in these slides: Structures, bonding, and reactivity (bonding can be quantified by optical probes and theory, and electronic structures and reaction mechanisms of actinide complexes); Magnetic resonance properties (transition metal catalysts with multi-nuclear centers, and NMR/EPR parameters); Moving to more complex systems (surface chemistry of nanomaterials, and interactions of ligands with nanoparticles); Path forward and conclusions.
Predictive modelling of evidence informed teaching
Zhang, Dell; Brown, C.
2017-01-01
In this paper, we analyse the questionnaire survey data collected from 79 English primary schools about the situation of evidence informed teaching, where the evidences could come from research journals or conferences. Specifically, we build a predictive model to see what external factors could help to close the gap between teachers’ belief and behaviour in evidence informed teaching, which is the first of its kind to our knowledge. The major challenge, from the data mining perspective, is th...
A Predictive Model for Cognitive Radio
2006-09-14
response in a given situation. Vadde et al. have applied response surface methodology to factors of interest and produce a model for prediction of the response. ... service configurations to those that best meet our communication needs ... resulting set of configurations randomly or apply additional screening criteria. [3] K. K. Vadde and V. R. Syrotiuk, "Factor interaction on service delivery in mobile ad ...," 2004. [4] K. K. Vadde, M.-V. R. Syrotiuk, and D. C. Montgomery
Tectonic predictions with mantle convection models
Coltice, Nicolas; Shephard, Grace E.
2018-04-01
Over the past 15 yr, numerical models of convection in Earth's mantle have made a leap forward: they can now produce self-consistent plate-like behaviour at the surface together with deep mantle circulation. These digital tools provide a new window into the intimate connections between plate tectonics and mantle dynamics, and can therefore be used for tectonic predictions, in principle. This contribution explores this assumption. First, initial conditions at 30, 20, 10 and 0 Ma are generated by driving a convective flow with imposed plate velocities at the surface. We then compute instantaneous mantle flows in response to the guessed temperature fields without imposing any boundary conditions. Plate boundaries self-consistently emerge at correct locations with respect to reconstructions, except for small plates close to subduction zones. As already observed for other types of instantaneous flow calculations, the structure of the top boundary layer and upper-mantle slab is the dominant character that leads to accurate predictions of surface velocities. Perturbations of the rheological parameters have little impact on the resulting surface velocities. We then compute fully dynamic model evolution from 30 and 10 to 0 Ma, without imposing plate boundaries or plate velocities. Contrary to instantaneous calculations, errors in kinematic predictions are substantial, although the plate layout and kinematics in several areas remain consistent with the expectations for the Earth. For these calculations, varying the rheological parameters makes a difference for plate boundary evolution. Also, identified errors in initial conditions contribute to first-order kinematic errors. This experiment shows that the tectonic predictions of dynamic models over 10 My are highly sensitive to uncertainties of rheological parameters and initial temperature field in comparison to instantaneous flow calculations. Indeed, the initial conditions and the rheological parameters can be good enough
International Nuclear Information System (INIS)
Al Mazouzi, A.; Alamo, A.; Lidbury, D.; Moinereau, D.; Van Dyck, S.
2011-01-01
Highlights: → Multi-scale and multi-physics modelling are adopted by PERFORM 60 to predict irradiation damage in nuclear structural materials. → PERFORM 60 helps consolidate the community and improve the interaction between universities, industry and safety authorities. → Experimental validation at the relevant scale is a key for developing the multi-scale modelling methodology. - Abstract: In nuclear power plants, materials undergo degradation due to severe irradiation conditions that may limit their operational lifetime. Utilities that operate these reactors need to quantify the ageing and potential degradation of certain essential structures of the power plant to ensure their safe and reliable operation. So far, the monitoring and mitigation of these degradation phenomena rely mainly on long-term irradiation programs in test reactors as well as on mechanical or corrosion testing in specialized hot cells. Continuous progress in the physical understanding of the phenomena involved in irradiation damage and progress in computer sciences have now made possible the development of multi-scale numerical tools able to simulate the materials behaviour in a nuclear environment. Indeed, within the PERFECT project of the EURATOM framework program (FP6), a first step has been successfully reached through the development of a simulation platform that contains several advanced numerical tools aiming at the prediction of irradiation damage in both the reactor pressure vessel (RPV) and its internals using available, state-of-the-art knowledge. These tools allow simulation of irradiation effects on the nanostructure and the constitutive behaviour of the RPV low alloy steels, as well as their fracture mechanics properties. For the more highly irradiated reactor internals, which are commonly produced using austenitic stainless steels, the first partial models were established, describing radiation effects on the nanostructure and providing a first description of the
Prediction error, ketamine and psychosis: An updated model.
Corlett, Philip R; Honey, Garry D; Fletcher, Paul C
2016-11-01
In 2007, we proposed an explanation of delusion formation as aberrant prediction error-driven associative learning. Further, we argued that the NMDA receptor antagonist ketamine provided a good model for this process. Subsequently, we validated the model in patients with psychosis, relating aberrant prediction error signals to delusion severity. During the ensuing period, we have developed these ideas, drawing on the simple principle that brains build a model of the world and refine it by minimising prediction errors, as well as using it to guide perceptual inferences. While previously we focused on the prediction error signal per se, an updated view takes into account its precision, as well as the precision of prior expectations. With this expanded perspective, we see several possible routes to psychotic symptoms - which may explain the heterogeneity of psychotic illness, as well as the fact that other drugs, with different pharmacological actions, can produce psychotomimetic effects. In this article, we review the basic principles of this model and highlight specific ways in which prediction errors can be perturbed, in particular considering the reliability and uncertainty of predictions. The expanded model explains hallucinations as perturbations of the uncertainty mediated balance between expectation and prediction error. Here, expectations dominate and create perceptions by suppressing or ignoring actual inputs. Negative symptoms may arise due to poor reliability of predictions in service of action. By mapping from biology to belief and perception, the account proffers new explanations of psychosis. However, challenges remain. We attempt to address some of these concerns and suggest future directions, incorporating other symptoms into the model, building towards better understanding of psychosis. © The Author(s) 2016.
Predictive Modeling of the CDRA 4BMS
Coker, Robert F.; Knox, James C.
2016-01-01
As part of NASA's Advanced Exploration Systems (AES) program and the Life Support Systems Project (LSSP), fully predictive models of the Four Bed Molecular Sieve (4BMS) of the Carbon Dioxide Removal Assembly (CDRA) on the International Space Station (ISS) are being developed. This virtual laboratory will be used to help reduce mass, power, and volume requirements for future missions. In this paper we describe current and planned modeling developments in the area of carbon dioxide removal to support future crewed Mars missions as well as the resolution of anomalies observed in the ISS CDRA.
Park, Shin Young; Ha, Sang-Do
2018-02-01
In this study, a predictive growth model of generic Escherichia coli in Garaetteok at a range of storage temperatures (T, 10-40 °C) was developed. The primary models of specific growth rate (SGR) and lag time (LT) fit well (R² ≥ 0.985) using a Gompertz equation. Secondary polynomial models were obtained by non-linear regression and calculated as SGR = −0.01570 + 0.0183T + 0.000008T²; LT = 43.2064 − 2.4824T + 0.0355T². The appropriateness of the secondary models was verified by mean square error (MSE; 0.0006 for SGR, 0.282 for LT), bias factor (B_f; 0.948 for SGR, 0.942 for LT), accuracy factor (A_f; 1.163 for SGR, 1.355 for LT), and coefficient of determination (r²; 0.986 for SGR, 0.996 for LT), and these models were found to be in good agreement with the experimental values used for validation. The secondary models developed in this study may thus be used as practical prediction models for generic E. coli growth in Garaetteok. These newly developed secondary models of SGR and LT for generic E. coli in Garaetteok may thus be incorporated into tertiary modeling programs such as the Korea Pathogen Modeling Program, in which they can easily be used to predict the growth kinetics of E. coli as a function of storage temperature. Ultimately, the model developed in this study may be a vital tool for the reduction of E. coli levels in food production, processing, and distribution processes, which in turn will lead to enhanced safety of rice products.
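The two secondary models can be evaluated directly as polynomials in storage temperature. The coefficients are taken from the abstract; the first SGR coefficient, whose printing is ambiguous, is read here as −0.01570:

```python
def sgr(T):
    """Secondary model for specific growth rate at storage temperature T (°C)."""
    return -0.01570 + 0.0183 * T + 0.000008 * T ** 2

def lag_time(T):
    """Secondary model for lag time at storage temperature T (°C)."""
    return 43.2064 - 2.4824 * T + 0.0355 * T ** 2
```

As expected for a mesophile, growth rate rises and lag time shrinks across the validated 10-40 °C range; e.g. sgr(10) ≈ 0.168 while lag_time(10) ≈ 21.93.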
Huang, Yanqi; He, Lan; Dong, Di; Yang, Caiyun; Liang, Cuishan; Chen, Xin; Ma, Zelan; Huang, Xiaomei; Yao, Su; Liang, Changhong; Tian, Jie; Liu, Zaiyi
2018-02-01
To develop and validate a radiomics prediction model for individualized prediction of perineural invasion (PNI) in colorectal cancer (CRC). After computed tomography (CT) radiomics features extraction, a radiomics signature was constructed in derivation cohort (346 CRC patients). A prediction model was developed to integrate the radiomics signature and clinical candidate predictors [age, sex, tumor location, and carcinoembryonic antigen (CEA) level]. Apparent prediction performance was assessed. After internal validation, independent temporal validation (separate from the cohort used to build the model) was then conducted in 217 CRC patients. The final model was converted to an easy-to-use nomogram. The developed radiomics nomogram that integrated the radiomics signature and CEA level showed good calibration and discrimination performance [Harrell's concordance index (c-index): 0.817; 95% confidence interval (95% CI): 0.811-0.823]. Application of the nomogram in validation cohort gave a comparable calibration and discrimination (c-index: 0.803; 95% CI: 0.794-0.812). Integrating the radiomics signature and CEA level into a radiomics prediction model enables easy and effective risk assessment of PNI in CRC. This stratification of patients according to their PNI status may provide a basis for individualized auxiliary treatment.
Rutten, M.J.M.; Bovenhuis, H.; Arendonk, van J.A.M.
2010-01-01
Fourier transform infrared spectroscopy is a suitable method to determine bovine milk fat composition. However, the determination of fat composition by gas chromatography, required for calibration of the infrared prediction model, is expensive and labor intensive. It has recently been shown that the
Austin, Peter C; Steyerberg, Ewout W
2013-02-20
The change in c-statistic is frequently used to summarize the change in predictive accuracy when a novel risk factor is added to an existing logistic regression model. We explored the relationship between the absolute change in the c-statistic, Brier score, generalized R(2) , and the discrimination slope when a risk factor was added to an existing model in an extensive set of Monte Carlo simulations. The increase in model accuracy due to the inclusion of a novel marker was proportional to both the prevalence of the marker and to the odds ratio relating the marker to the outcome but inversely proportional to the accuracy of the logistic regression model with the marker omitted. We observed greater improvements in model accuracy when the novel risk factor or marker was uncorrelated with the existing predictor variable compared with when the risk factor has a positive correlation with the existing predictor variable. We illustrated these findings by using a study on mortality prediction in patients hospitalized with heart failure. In conclusion, the increase in predictive accuracy by adding a marker should be considered in the context of the accuracy of the initial model. Copyright © 2012 John Wiley & Sons, Ltd.
Cardiopulmonary Circuit Models for Predicting Injury to the Heart
Ward, Richard; Wing, Sarah; Bassingthwaighte, James; Neal, Maxwell
2004-11-01
Circuit models have been used extensively in physiology to describe cardiopulmonary function. Such models are being used in the DARPA Virtual Soldier (VS) Project* to predict the response to injury or physiological stress. The most complex model consists of systemic circulation, pulmonary circulation, and a four-chamber heart sub-model. This model also includes baroreceptor feedback, airway mechanics, gas exchange, and pleural pressure influence on the circulation. As part of the VS Project, Oak Ridge National Laboratory has been evaluating various cardiopulmonary circuit models for predicting the effects of injury to the heart. We describe, from a physicist's perspective, the concept of building circuit models, discuss both unstressed and stressed models, and show how the stressed models are used to predict effects of specific wounds. *This work was supported by a grant from the DARPA, executed by the U.S. Army Medical Research and Materiel Command/TATRC Cooperative Agreement, Contract # W81XWH-04-2-0012. The submitted manuscript has been authored by the U.S. Department of Energy, Office of Science of the Oak Ridge National Laboratory, managed for the U.S. DOE by UT-Battelle, LLC, under contract No. DE-AC05-00OR22725. Accordingly, the U.S. Government retains a non-exclusive, royalty-free license to publish or reproduce the published form of this contribution, or allow others to do so, for U.S. Government purpose.
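The simplest member of this model family is the two-element Windkessel, which treats the arterial tree as a single resistance and compliance. It is a toy stand-in for the multi-compartment VS models described above, with illustrative rather than physiological parameter values:

```python
def windkessel(q_in, R=1.0, C=1.0, p0=80.0, dt=0.01, steps=1000):
    """Two-element Windkessel, C·dP/dt = Q_in(t) − P/R, integrated with
    forward Euler. With constant inflow the pressure relaxes toward Q·R
    with time constant R·C."""
    p = p0
    trace = [p]
    for i in range(steps):
        p += dt * (q_in(i * dt) - p / R) / C
        trace.append(p)
    return trace
```

In the same spirit as the stressed models described above, injury or physiological stress can be mimicked by perturbing R (vascular resistance) or C (compliance) and observing the pressure response.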
Adaptive Gaussian Predictive Process Models for Large Spatial Datasets
Guhaniyogi, Rajarshi; Finley, Andrew O.; Banerjee, Sudipto; Gelfand, Alan E.
2011-01-01
Large point referenced datasets occur frequently in the environmental and natural sciences. Use of Bayesian hierarchical spatial models for analyzing these datasets is undermined by onerous computational burdens associated with parameter estimation. Low-rank spatial process models attempt to resolve this problem by projecting spatial effects to a lower-dimensional subspace. This subspace is determined by a judicious choice of “knots” or locations that are fixed a priori. One such representation yields a class of predictive process models (e.g., Banerjee et al., 2008) for spatial and spatial-temporal data. Our contribution here expands upon predictive process models with fixed knots to models that accommodate stochastic modeling of the knots. We view the knots as emerging from a point pattern and investigate how such adaptive specifications can yield more flexible hierarchical frameworks that lead to automated knot selection and substantial computational benefits. PMID:22298952
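The fixed-knot predictive process underlying this work replaces the parent process w(s) by its kriging projection onto a knot set S*, whose variance c(s,S*)ᵀ C*⁻¹ c(s,S*) never exceeds the parent variance C(s,s). A small one-dimensional sketch follows; the exponential kernel and knot locations are illustrative choices:

```python
import math

def k(s, t, phi=1.0):
    """Exponential covariance kernel on the line."""
    return math.exp(-phi * abs(s - t))

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def predictive_process_var(s, knots, phi=1.0):
    """Variance of the predictive-process approximation at location s:
    c(s,S*)ᵀ C*⁻¹ c(s,S*), a Schur-complement bound on C(s,s)."""
    c_star = [[k(a, b, phi) for b in knots] for a in knots]
    c_s = [k(s, t, phi) for t in knots]
    w = solve(c_star, c_s)
    return sum(ci * wi for ci, wi in zip(c_s, w))
```

The adaptive extension in the paper treats the knot locations themselves as random, rather than fixing them a priori as here.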
Predictive Modeling by the Cerebellum Improves Proprioception
Bhanpuri, Nasir H.; Okamura, Allison M.
2013-01-01
Because sensation is delayed, real-time movement control requires not just sensing, but also predicting limb position, a function hypothesized for the cerebellum. Such cerebellar predictions could contribute to perception of limb position (i.e., proprioception), particularly when a person actively moves the limb. Here we show that human cerebellar patients have proprioceptive deficits compared with controls during active movement, but not when the arm is moved passively. Furthermore, when healthy subjects move in a force field with unpredictable dynamics, they have active proprioceptive deficits similar to cerebellar patients. Therefore, muscle activity alone is likely insufficient to enhance proprioception and predictability (i.e., an internal model of the body and environment) is important for active movement to benefit proprioception. We conclude that cerebellar patients have an active proprioceptive deficit consistent with disrupted movement prediction rather than an inability to generally enhance peripheral proprioceptive signals during action and suggest that active proprioceptive deficits should be considered a fundamental cerebellar impairment of clinical importance. PMID:24005283
Chen, Yang; Zhao, Kaijing; Liu, Fei; Li, Ying; Zhong, Zeyu; Hong, Shijin; Liu, Xiaodong; Liu, Li
2018-04-04
Anti-tumor evaluation in tumor-bearing mice is time- and energy-consuming. We aimed to investigate whether in vivo anti-tumor efficacy could be predicted from in vitro pharmacodynamics, using deoxypodophyllotoxin (DPT), a developing anti-tumor candidate, as a model compound. Proliferation kinetics of monolayer-cultivated NCI-H460 cells under various DPT concentrations were quantitatively investigated alongside calibration curves. Koch's two-phase natural growth model combined with a sigmoid Emax model, i.e. dM/dt = 2λ0λ1M/(λ1 + 2λ0M) − EmaxC^γ/(EC50^γ + C^γ)·M, was introduced to describe cell proliferation (M) over time under DPT treatment (C). Estimated in vitro pharmacodynamic parameters were: EC50 = 8.97 nM, Emax = 0.820 day−1, and γ = 7.13. A physiologically based pharmacokinetic (PBPK) model including a tumor compartment was introduced, which could predict DPT disposition in plasma, tumor tissue and main normal tissues of NCI-H460 tumor-bearing mice following a single dose. The in vivo pharmacodynamic model and parameters were assumed to be the same as the in vitro ones and were linked with tumor pharmacokinetic profiles simulated by the PBPK model to build a physiologically based pharmacokinetic-pharmacodynamic (PBPK-PD) model. After estimating the natural growth parameters (λ0 and λ1), the PBPK-PD model satisfactorily predicted tumor growth in NCI-H460 tumor-bearing mice during multi-dose DPT treatment, both in this study and in the literature. The model was further successfully applied to predict tumor growth in SGC-7901 tumor-bearing mice. These data indicated that in vivo anti-tumor efficacy might be predicted from in vitro cytotoxic assays via a PBPK-PD model approach. The approach was demonstrated to be reasonable and applicable, and might facilitate and accelerate anti-cancer candidate screening and dose regimen design in the drug discovery process. The American Society for Pharmacology and Experimental Therapeutics.
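The growth-and-kill equation in the abstract can be integrated numerically in a few lines. The sketch below is illustrative only: EC50, Emax and γ are the reported in vitro values, but LAM0 and LAM1 (λ0, λ1) are placeholder values rather than the study's estimates, and simple forward-Euler integration under a constant concentration stands in for the full PBPK-PD machinery.

```python
# In vitro pharmacodynamic parameters reported in the abstract
EC50 = 8.97    # nM
EMAX = 0.820   # day^-1
GAMMA = 7.13

# Natural growth parameters: illustrative placeholders, not the study's estimates
LAM0 = 0.5     # day^-1, exponential-phase rate
LAM1 = 2.0     # linear-phase rate

def dM_dt(M, C):
    """Koch two-phase natural growth minus the sigmoid Emax kill term."""
    growth = 2 * LAM0 * LAM1 * M / (LAM1 + 2 * LAM0 * M)
    kill = EMAX * C**GAMMA / (EC50**GAMMA + C**GAMMA) * M
    return growth - kill

def simulate(M0, C, days, dt=0.01):
    """Forward-Euler integration of tumor burden M under constant exposure C (nM)."""
    M = M0
    for _ in range(int(days / dt)):
        M += dt * dM_dt(M, C)
    return M

untreated = simulate(0.1, 0.0, 10)    # no drug
treated = simulate(0.1, 50.0, 10)     # concentration well above EC50
```

Under these assumptions the kill term caps tumor burden where growth and kill balance, while the untreated trajectory transitions from exponential to near-linear growth, as Koch's two-phase model intends.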
Predicted and measured velocity distribution in a model heat exchanger
International Nuclear Information System (INIS)
Rhodes, D.B.; Carlucci, L.N.
1984-01-01
This paper presents a comparison between numerical predictions, using the porous media concept, and measurements of the two-dimensional isothermal shell-side velocity distributions in a model heat exchanger. Computations and measurements were done with and without tubes present in the model. The effect of tube-to-baffle leakage was also investigated. The comparison was made to validate certain porous media concepts used in a computer code being developed to predict the detailed shell-side flow in a wide range of shell-and-tube heat exchanger geometries
Prediction of Chemical Function: Model Development and ...
The United States Environmental Protection Agency’s Exposure Forecaster (ExpoCast) project is developing both statistical and mechanism-based computational models for predicting exposures to thousands of chemicals, including those in consumer products. The high-throughput (HT) screening-level exposures developed under ExpoCast can be combined with HT screening (HTS) bioactivity data for the risk-based prioritization of chemicals for further evaluation. The functional role (e.g. solvent, plasticizer, fragrance) that a chemical performs can drive both the types of products in which it is found and the concentration at which it is present, thereby impacting exposure potential. However, critical chemical use information (including functional role) is lacking for the majority of commercial chemicals for which exposure estimates are needed. A suite of machine-learning-based models for classifying chemicals in terms of their likely functional roles in products based on structure was developed. This effort required collection, curation, and harmonization of publicly available data sources of chemical functional use information from government and industry bodies. Physicochemical and structure descriptor data were generated for chemicals with function data. Machine-learning classifier models for function were then built in a cross-validated manner from the descriptor/function data using the method of random forests. The models were applied to: 1) predict chemi
Gamma-Ray Pulsars Models and Predictions
Harding, A K
2001-01-01
Pulsed emission from gamma-ray pulsars originates inside the magnetosphere, from radiation by charged particles accelerated near the magnetic poles or in the outer gaps. In polar cap models, the high energy spectrum is cut off by magnetic pair production above an energy that is dependent on the local magnetic field strength. While most young pulsars with surface fields in the range B = 10^{12} - 10^{13} G are expected to have high energy cutoffs around several GeV, the gamma-ray spectra of old pulsars having lower surface fields may extend to 50 GeV. Although the gamma-ray emission of older pulsars is weaker, detecting pulsed emission at high energies from nearby sources would be an important confirmation of polar cap models. Outer gap models predict more gradual high-energy turnovers at around 10 GeV, but also predict an inverse Compton component extending to TeV energies. Detection of pulsed TeV emission, which would not survive attenuation at the polar caps, is thus an important test of outer gap models. N...
A prediction model for Clostridium difficile recurrence
Directory of Open Access Journals (Sweden)
Francis D. LaBarbera
2015-02-01
Full Text Available Background: Clostridium difficile infection (CDI is a growing problem in the community and hospital setting. Its incidence has been on the rise over the past two decades, and it is quickly becoming a major concern for the health care system. High rate of recurrence is one of the major hurdles in the successful treatment of C. difficile infection. There have been few studies that have looked at patterns of recurrence. The studies currently available have shown a number of risk factors associated with C. difficile recurrence (CDR; however, there is little consensus on the impact of most of the identified risk factors. Methods: Our study was a retrospective chart review of 198 patients diagnosed with CDI via Polymerase Chain Reaction (PCR from February 2009 to Jun 2013. In our study, we decided to use a machine learning algorithm called the Random Forest (RF to analyze all of the factors proposed to be associated with CDR. This model is capable of making predictions based on a large number of variables, and has outperformed numerous other models and statistical methods. Results: We came up with a model that was able to accurately predict the CDR with a sensitivity of 83.3%, specificity of 63.1%, and area under curve of 82.6%. Like other similar studies that have used the RF model, we also had very impressive results. Conclusions: We hope that in the future, machine learning algorithms, such as the RF, will see a wider application.
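The sensitivity and specificity reported above come from a classifier's confusion matrix. A minimal sketch of that computation, using invented toy labels rather than the study's data (1 = recurrence):

```python
def confusion_metrics(y_true, y_pred):
    """Sensitivity and specificity from binary labels (1 = recurrence)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn), tn / (tn + fp)

# toy labels, hypothetical and not from the study
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
sens, spec = confusion_metrics(y_true, y_pred)
```

For a random forest, y_pred would be the majority vote of the trees, and the area under the curve would be obtained by sweeping the vote-fraction threshold.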
Artificial Neural Network Model for Predicting Compressive
Directory of Open Access Journals (Sweden)
Salim T. Yousif
2013-05-01
Full Text Available Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural-network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing the model with unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20% and 88% of the output results have absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results showed that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
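A minimal back-propagation regressor illustrates the training loop such a model relies on. Everything here is synthetic and hypothetical: a single input (w/c ratio standing in for the full mix-proportion vector), four hidden tanh units, and made-up data; it is a sketch of the technique, not the paper's network.

```python
import math
import random

random.seed(0)

# synthetic data: normalized strength falling with water/cement ratio (illustrative)
data = [(0.30 + 0.05 * i, (60.0 - 40.0 * (0.30 + 0.05 * i)) / 60.0) for i in range(9)]

H = 4  # hidden units
w1 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b2 = 0.0

def forward(x):
    """One hidden tanh layer, linear output."""
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return h, sum(w2[j] * h[j] for j in range(H)) + b2

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

initial_mse = mse()
lr = 0.1
for _ in range(2000):                # plain per-sample back-propagation (SGD)
    for x, t in data:
        h, y = forward(x)
        dy = 2.0 * (y - t)
        for j in range(H):
            g = dy * w2[j] * (1.0 - h[j] ** 2)   # gradient at the hidden pre-activation
            w2[j] -= lr * dy * h[j]
            w1[j] -= lr * g * x
            b1[j] -= lr * g
        b2 -= lr * dy
final_mse = mse()
```

The paper's model follows the same pattern with more inputs (mix proportions, MAS, slump) and a larger training set drawn from the literature.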
Evaluating predictive models of software quality
International Nuclear Information System (INIS)
Ciaschini, V; Canaparo, M; Ronchieri, E; Salomoni, D
2014-01-01
Applications from the High Energy Physics scientific community are constantly growing and are implemented by a large number of developers. This implies a strong churn on the code and an associated risk of faults, which is unavoidable as long as the software undergoes active evolution. However, the necessities of production systems run counter to this. Stability and predictability are of paramount importance; in addition, a short turn-around time for the defect discovery-correction-deployment cycle is required. A way to reconcile these opposite foci is to use a software quality model to obtain an approximation of the risk before releasing a program, so as to only deliver software with a risk lower than an agreed threshold. In this article we evaluate two quality predictive models to identify the operational risk and the quality of some software products. We applied these models to the development history of several EMI packages with the intent to discover the risk factor of each product and compare it with its real history. We attempted to determine whether the models reasonably map reality for the applications under evaluation, and we conclude by suggesting directions for further studies.
A generative model for predicting terrorist incidents
Verma, Dinesh C.; Verma, Archit; Felmlee, Diane; Pearson, Gavin; Whitaker, Roger
2017-05-01
A major concern in coalition peace-support operations is the incidence of terrorist activity. In this paper, we propose a generative model for the occurrence of terrorist incidents, and illustrate that an increase in diversity, as measured by the number of different social groups to which an individual belongs, is inversely correlated with the likelihood of a terrorist incident in the society. A generative model is one that can predict the likelihood of events in new contexts, as opposed to statistical models, which are used to predict future incidents based on the history of incidents in an existing context. Generative models can be useful in planning for persistent Intelligence, Surveillance and Reconnaissance (ISR) since they allow an estimation of regions in the theater of operation where terrorist incidents may arise, and thus can be used to better allocate the assignment and deployment of ISR assets. In this paper, we present a taxonomy of terrorist incidents, identify factors related to the occurrence of terrorist incidents, and provide a mathematical analysis calculating the likelihood of occurrence of terrorist incidents in three common real-life scenarios arising in peace-keeping operations.
PREDICTION MODELS OF GRAIN YIELD AND CHARACTERIZATION
Directory of Open Access Journals (Sweden)
Narciso Ysac Avila Serrano
2009-06-01
Full Text Available With the objective to characterize the grain yield of five cowpea cultivars and to find linear regression models to predict it, a study was developed in La Paz, Baja California Sur, Mexico. A complete randomized blocks design was used. Simple and multivariate analyses of variance were carried out using the canonical variables to characterize the cultivars. The variables clusters per plant, pods per plant, pods per cluster, seed weight per plant, seed hectoliter weight, 100-seed weight, seed length, seed width, seed thickness, pod length, pod width, pod weight, seeds per pod, and seed weight per pod showed significant differences (P ≤ 0.05) among cultivars. The Paceño and IT90K-277-2 cultivars showed the highest seed weight per plant. The linear regression models showed correlation coefficients ≥ 0.92. In these models, seed weight per plant, pods per cluster, pods per plant, clusters per plant, and pod length showed significant correlations (P ≤ 0.05). In conclusion, the results showed that grain yield differs among cultivars and, for its estimation, the prediction models showed highly dependable determination coefficients.
Aggarwal, Yogender; Karan, Bhuwan Mohan; Das, Barda Nand; Sinha, Rakesh Kumar
2010-05-01
The present work models the molecular signalling pathway for vasodilation and predicts the resting forearm blood flow of young humans under heat stress. A mechanistic electronic modelling technique was designed and implemented using MULTISIM 8.0, with an assumption of 1 V/°C for prediction of forearm blood flow, and digital logic was used to design the molecular signalling pathway for vasodilation. The minimum forearm blood flow was observed at 35 °C (0 ml 100 ml(-1)min(-1)) and the maximum at 42 °C (18.7 ml 100 ml(-1)min(-1)) environmental temperature, with respect to the base value of 2 ml 100 ml(-1)min(-1). This model may also enable the identification of therapeutic targets for the treatment of inflammations and disorders due to heat-related illnesses. 2010 Elsevier Ltd. All rights reserved.
Predictions of titanium alloy properties using thermodynamic modeling tools
Zhang, F.; Xie, F.-Y.; Chen, S.-L.; Chang, Y. A.; Furrer, D.; Venkatesh, V.
2005-12-01
Thermodynamic modeling tools have become essential in understanding the effect of alloy chemistry on the final microstructure of a material. Implementation of such tools to improve titanium processing via parameter optimization has resulted in significant cost savings through the elimination of shop/laboratory trials and tests. In this study, a thermodynamic modeling tool developed at CompuTherm, LLC, is being used to predict β transus, phase proportions, phase chemistries, partitioning coefficients, and phase boundaries of multicomponent titanium alloys. This modeling tool includes Pandat, software for multicomponent phase equilibrium calculations, and PanTitanium, a thermodynamic database for titanium alloys. Model predictions are compared with experimental results for one α-β alloy (Ti-64) and two near-β alloys (Ti-17 and Ti-10-2-3). The alloying elements, especially the interstitial elements O, N, H, and C, have been shown to have a significant effect on the β transus temperature, and are discussed in more detail herein.
Predictive Models for Normal Fetal Cardiac Structures.
Krishnan, Anita; Pike, Jodi I; McCarter, Robert; Fulgium, Amanda L; Wilson, Emmanuel; Donofrio, Mary T; Sable, Craig A
2016-12-01
Clinicians rely on age- and size-specific measures of cardiac structures to diagnose cardiac disease. No universally accepted normative data exist for fetal cardiac structures, and most fetal cardiac centers do not use the same standards. The aim of this study was to derive predictive models for Z scores for 13 commonly evaluated fetal cardiac structures using a large heterogeneous population of fetuses without structural cardiac defects. The study used archived normal fetal echocardiograms in representative fetuses aged 12 to 39 weeks. Thirteen cardiac dimensions were remeasured by a blinded echocardiographer from digitally stored clips. Studies with inadequate imaging views were excluded. Regression models were developed to relate each dimension to estimated gestational age (EGA) by dates, biparietal diameter, femur length, and estimated fetal weight by the Hadlock formula. Dimension outcomes were transformed (e.g., using the logarithm or square root) as necessary to meet the normality assumption. Higher-order terms, quadratic or cubic, were added as needed to improve model fit. Information criteria and adjusted R2 values were used to guide final model selection. Each Z-score equation is based on measurements derived from 296 to 414 unique fetuses. EGA yielded the best predictive model for the majority of dimensions; adjusted R2 values ranged from 0.72 to 0.893. However, each of the other highly correlated (r > 0.94) biometric parameters was an acceptable surrogate for EGA. In most cases, the best-fitting model included squared and cubic terms to introduce curvilinearity. For each dimension, models based on EGA provided the best fit for determining normal measurements of fetal cardiac structures. Nevertheless, other biometric parameters, including femur length, biparietal diameter, and estimated fetal weight, provided results that were nearly as good. Comprehensive Z-score results are available on the basis of highly predictive models derived from gestational age.
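Given a fitted mean model and a residual SD, the Z score itself is a one-liner. The cubic-in-EGA form below mirrors the models described above, but the coefficients and residual SD are hypothetical values for illustration, not the paper's published equations.

```python
def predicted_mean(ega, coeffs):
    """Cubic-in-gestational-age mean model: c0 + c1*ega + c2*ega^2 + c3*ega^3."""
    c0, c1, c2, c3 = coeffs
    return c0 + c1 * ega + c2 * ega ** 2 + c3 * ega ** 3

def z_score(measured, ega, coeffs, residual_sd):
    """Standardized deviation of a measured dimension from the model mean."""
    return (measured - predicted_mean(ega, coeffs)) / residual_sd

# hypothetical coefficients for a valve-annulus-like dimension in mm (not the paper's values)
coeffs = (-1.0, 0.20, 0.0, 0.0)
z = z_score(5.0, 28.0, coeffs, 0.8)   # 5.0 mm measured at 28 weeks EGA
```

In practice the residual SD may itself vary with gestational age, which is why transformed outcomes (log, square root) are used to stabilize the variance before fitting.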
Model Predictive Control of Wind Turbines using Uncertain LIDAR Measurements
DEFF Research Database (Denmark)
Mirzaei, Mahmood; Soltani, Mohsen; Poulsen, Niels Kjølstad
2013-01-01
The problem of model predictive control (MPC) of wind turbines using uncertain LIDAR (LIght Detection And Ranging) measurements is considered. A nonlinear dynamical model of the wind turbine is obtained. We linearize the obtained nonlinear model for different operating points, which are determined by the effective wind speed on the rotor disc. We take the wind speed as a scheduling variable. The wind speed is measurable ahead of the turbine using LIDARs; therefore, the scheduling variable is known for the entire prediction horizon. By taking advantage of having future values of the scheduling variable, we simplify state prediction for the MPC. Consequently, the control problem of the nonlinear system is simplified into a quadratic programming. We consider uncertainty in the wind propagation time, which is the traveling time of wind from the LIDAR measurement point to the rotor. An algorithm based...
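The benefit of previewed wind can be shown with a scalar toy system far simpler than the gain-scheduled MPC above: when the future disturbance (here standing in for wind seen by the LIDAR before it reaches the rotor) is known, a feedforward term can cancel it entirely. All numbers and the deadbeat control law are illustrative assumptions, not the paper's controller.

```python
A, B = 0.9, 1.0                       # scalar plant x[k+1] = A x[k] + B u[k] + w[k]
wind = [0.3, -0.2, 0.5, 0.1, -0.4]    # previewed disturbance sequence (illustrative)

def simulate(preview):
    """Total absolute state excursion with or without disturbance preview."""
    x, total = 1.0, 0.0
    for w in wind:
        w_known = w if preview else 0.0
        u = -(A * x + w_known) / B    # deadbeat feedback + optional feedforward
        x = A * x + B * u + w
        total += abs(x)
    return total

with_preview = simulate(True)      # disturbance cancelled each step
without_preview = simulate(False)  # each gust shows up in the state
```

In the full problem the preview enters through the scheduling variable and the QP constraints rather than through exact cancellation, but the mechanism, knowing the disturbance over the horizon before it acts, is the same.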
Ben Yaghlene, H; Leguerinel, I; Hamdi, M; Mafart, P
2009-07-31
In this study, predictive microbiology and food engineering were combined in order to develop a new analytical model predicting bacterial growth under dynamic temperature conditions. The proposed model associates a simplified primary bacterial growth model without lag, the secondary Ratkowsky "square root" model and a simplified two-parameter heat transfer model regarding an infinite slab. The model takes into consideration the product thickness, its thermal properties, the ambient air temperature, the convective heat transfer coefficient and the growth parameters of the microorganism of concern. For the validation of the overall model, five different combinations of ambient air temperature (ranging from 8 degrees C to 12 degrees C), product thickness (ranging from 1 cm to 6 cm) and convective heat transfer coefficient (ranging from 8 W/(m(2) K) to 60 W/(m(2) K)) were tested during a cooling procedure. Moreover, three different ambient air temperature scenarios assuming alternated cooling and heating stages, drawn from real refrigerated food processes, were tested. General agreement between predicted and observed bacterial growth was obtained and less than 5% of the experimental data fell outside the 95% confidence bands estimated by the bootstrap percentile method, at all the tested conditions. Accordingly, the overall model was successfully validated for isothermal and dynamic refrigeration cycles allowing for dynamic temperature changes at the centre and at the surface of the product. The major impact of the convective heat transfer coefficient and the product thickness on bacterial growth during product cooling was demonstrated. For instance, the time needed for the same level of bacterial growth to be reached at the product's half thickness was estimated to be 5 and 16.5 h at low and high convection level, respectively. Moreover, simulation results demonstrated that the predicted bacterial growth at the air ambient temperature cannot be assumed to be
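The coupling of the secondary Ratkowsky model to a dynamic temperature history reduces to a simple quadrature. In this sketch the slope B, minimum growth temperature T_MIN and cooling constants are hypothetical placeholders, and first-order cooling toward ambient stands in for the infinite-slab heat transfer model.

```python
import math

B = 0.023      # Ratkowsky slope, hypothetical value
T_MIN = -2.0   # notional minimum growth temperature in degrees C, hypothetical

def mu_max(T):
    """Secondary 'square root' model: sqrt(mu_max) = B * (T - T_MIN)."""
    return (B * (T - T_MIN)) ** 2 if T > T_MIN else 0.0

def log_growth(T_profile, dt):
    """ln(N/N0) for a lag-free primary model, summed over a temperature history."""
    return sum(mu_max(T) * dt for T in T_profile)

dt = 0.1                                 # hours
times = [i * dt for i in range(240)]     # a 24 h cooling window
# first-order cooling from 20 C toward an 8 C ambient; the rate constant lumps
# thickness, thermal properties and the convective heat transfer coefficient
slow = [8.0 + 12.0 * math.exp(-0.05 * t) for t in times]   # thick product / low h
fast = [8.0 + 12.0 * math.exp(-1.0 * t) for t in times]    # thin product / high h

growth_slow = log_growth(slow, dt)
growth_fast = log_growth(fast, dt)
```

The slow-cooling profile spends longer at elevated temperature and accumulates more growth, which is the effect of thickness and convection level that the validation experiments demonstrate.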
A simplified building airflow model for agent concentration prediction.
Jacques, David R; Smith, David A
2010-11-01
A simplified building airflow model is presented that can be used to predict the spread of a contaminant agent from a chemical or biological attack. If the dominant means of agent transport throughout the building is an air-handling system operating at steady-state, a linear time-invariant (LTI) model can be constructed to predict the concentration in any room of the building as a result of either an internal or external release. While the model does not capture weather-driven and other temperature-driven effects, it is suitable for concentration predictions under average daily conditions. The model is easily constructed using information that should be accessible to a building manager, supplemented with assumptions based on building codes and standard air-handling system design practices. The results of the model are compared with a popular multi-zone model for a simple building and are demonstrated for building examples containing one or more air-handling systems. The model can be used for rapid concentration prediction to support low-cost placement strategies for chemical and biological detection sensors.
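A two-zone version of such an LTI model fits in a few lines. The exchange and dilution rates below are invented for illustration; real values would come from the building's air-handling design and standard design practices, as the abstract describes.

```python
# two well-mixed zones coupled by an air handler, plus fresh-air dilution (toy rates)
Q = 2.0   # interzonal exchange rate, air changes per hour
F = 0.5   # fresh-air dilution rate, 1/h

def step(c1, c2, dt):
    """One Euler step of the linear time-invariant zone mass balance."""
    d1 = Q * (c2 - c1) - F * c1
    d2 = Q * (c1 - c2) - F * c2
    return c1 + dt * d1, c2 + dt * d2

dt = 0.01                    # hours
c1, c2 = 1.0, 0.0            # unit agent release in zone 1
for _ in range(100):         # simulate one hour
    c1, c2 = step(c1, c2, dt)
```

The agent spreads into zone 2 through the air handler while fresh-air dilution removes mass overall; the same structure, with one state per room, is what makes rapid sensor-placement studies cheap.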
An analytical model for climatic predictions
International Nuclear Information System (INIS)
Njau, E.C.
1990-12-01
A climatic model based upon analytical expressions is presented. This model is capable of making long-range predictions of heat energy variations on regional or global scales. These variations can then be transformed into corresponding variations of some other key climatic parameters, since weather and climatic changes are basically driven by differential heating and cooling around the earth. On the basis of the mathematical expressions upon which the model is based, it is shown that the global heat energy structure (and hence the associated climatic system) is characterized by zonally as well as latitudinally propagating fluctuations at frequencies downward of 0.5 day-1. We have calculated the propagation speeds for those particular frequencies that are well documented in the literature. The calculated speeds are in excellent agreement with the measured speeds. (author). 13 refs
A novel Bayesian hierarchical model for road safety hotspot prediction.
Fawcett, Lee; Thorpe, Neil; Matthews, Joseph; Kremer, Karsten
2017-02-01
In this paper, we propose a Bayesian hierarchical model for predicting accident counts in future years at sites within a pool of potential road safety hotspots. The aim is to inform road safety practitioners of the location of likely future hotspots to enable a proactive, rather than reactive, approach to road safety scheme implementation. A feature of our model is the ability to rank sites according to their potential to exceed, in some future time period, a threshold accident count which may be used as a criterion for scheme implementation. Our model specification enables the classical empirical Bayes formulation - commonly used in before-and-after studies, wherein accident counts from a single before period are used to estimate counterfactual counts in the after period - to be extended to incorporate counts from multiple time periods. This allows site-specific variations in historical accident counts (e.g. locally-observed trends) to offset estimates of safety generated by a global accident prediction model (APM), which itself is used to help account for the effects of global trend and regression-to-mean (RTM). The Bayesian posterior predictive distribution is exploited to formulate predictions and to properly quantify our uncertainty in these predictions. The main contributions of our model include (i) the ability to allow accident counts from multiple time-points to inform predictions, with counts in more recent years lending more weight to predictions than counts from time-points further in the past; (ii) where appropriate, the ability to offset global estimates of trend by variations in accident counts observed locally, at a site-specific level; and (iii) the ability to account for unknown/unobserved site-specific factors which may affect accident counts. We illustrate our model with an application to accident counts at 734 potential hotspots in the German city of Halle; we also propose some simple diagnostics to validate the predictive capability of our model.
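The classical empirical Bayes formulation that this model extends is a precision-weighted blend of the APM prediction and the locally observed count: with a Poisson likelihood and a gamma prior of mean mu (from the APM) and shape phi (the dispersion parameter, here assumed known), the posterior mean has a closed form. A sketch:

```python
def eb_estimate(x_obs, mu_apm, phi):
    """Posterior mean of a Poisson rate under a gamma prior with mean mu_apm
    and shape phi: a weighted blend of the APM prediction and the observed count."""
    w = phi / (phi + mu_apm)            # weight on the model prediction
    return w * mu_apm + (1.0 - w) * x_obs

blended = eb_estimate(10, 4.0, 2.0)     # observed 10 accidents, APM predicts 4
```

A small phi (high overdispersion) lets the local data dominate, pulling the estimate toward the observed count; a large phi trusts the global APM, which is how the formulation guards against regression-to-mean. The paper's hierarchical model generalizes this blend to counts from multiple time periods with recency weighting.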
Energy Technology Data Exchange (ETDEWEB)
Johnson, Traci L.; Sharon, Keren, E-mail: tljohn@umich.edu [University of Michigan, Department of Astronomy, 1085 South University Avenue, Ann Arbor, MI 48109-1107 (United States)
2016-11-20
Until now, systematic errors in strong gravitational lens modeling have been acknowledged but have never been fully quantified. Here, we launch an investigation into the systematics induced by constraint selection. We model the simulated cluster Ares 362 times using random selections of image systems with and without spectroscopic redshifts and quantify the systematics using several diagnostics: image predictability, accuracy of model-predicted redshifts, enclosed mass, and magnification. We find that for models with >15 image systems, the image plane rms does not decrease significantly when more systems are added; however, the rms values quoted in the literature may be misleading as to the ability of a model to predict new multiple images. The mass is well constrained near the Einstein radius in all cases, and systematic error drops to <2% for models using >10 image systems. Magnification errors are smallest along the straight portions of the critical curve, and the value of the magnification is systematically lower near curved portions. For >15 systems, the systematic error on magnification is ∼2%. We report no trend in magnification error with the fraction of spectroscopic image systems when selecting constraints at random; however, when using the same selection of constraints, increasing this fraction up to ∼0.5 will increase model accuracy. The results suggest that the selection of constraints, rather than quantity alone, determines the accuracy of the magnification. We note that spectroscopic follow-up of at least a few image systems is crucial because models without any spectroscopic redshifts are inaccurate across all of our diagnostics.
Modelling personality, plasticity and predictability in shelter dogs
2017-01-01
Behavioural assessments of shelter dogs (Canis lupus familiaris) typically comprise standardized test batteries conducted at one time point, but test batteries have shown inconsistent predictive validity. Longitudinal behavioural assessments offer an alternative. We modelled longitudinal observational data on shelter dog behaviour using the framework of behavioural reaction norms, partitioning variance into personality (i.e. inter-individual differences in average behaviour), plasticity (i.e. inter-individual differences in behavioural change) and predictability (i.e. individual differences in residual intra-individual variation). We analysed data on interactions of 3263 dogs (n = 19 281) with unfamiliar people during their first month after arrival at the shelter. Accounting for personality, plasticity (linear and quadratic trends) and predictability improved the predictive accuracy of the analyses compared to models quantifying personality and/or plasticity only. While dogs were, on average, highly sociable with unfamiliar people and sociability increased over days since arrival, group averages were unrepresentative of all dogs and predictions made at the individual level entailed considerable uncertainty. Effects of demographic variables (e.g. age) on personality, plasticity and predictability were observed. Behavioural repeatability was higher one week after arrival compared to arrival day. Our results highlight the value of longitudinal assessments on shelter dogs and identify measures that could improve the predictive validity of behavioural assessments in shelters. PMID:28989764
Directory of Open Access Journals (Sweden)
TM Shafey
2015-12-01
Full Text Available ABSTRACT The relationships between egg measurements [egg weight (EGWT), egg width (EGWD), egg shape index (EGSI), egg volume (EGV) and egg density (EGD)] and egg components [eggshell (SWT), yolk (YWT) and albumen (AWT)] were investigated in laying hens at 32, 45, and 59 weeks of age, with the objective of managing multicollinearity (MC) using stepwise regression (SR) and ridge regression (RR) analyses. There were significant correlations among egg traits that led to MC problems in all eggs. Hen age influenced egg characteristics and the magnitude of the correlations among egg characteristics. Eggs produced at older ages had significantly (p<0.01) higher EGWT, EGWD, EGV, YWT and AWT than those produced at younger ages. The SR model alleviated the MC problem in eggs produced at 32 weeks, with a condition index greater than 30, and one predictor, EGWT, gave a model fit that predicted egg components with R2 ranging from 60 to 99%. The SR model of eggs produced at 45 and 59 weeks indicated an MC problem, with variance inflation factor (VIF) values greater than 10, and 4 predictors (EGWT, EGWD, EGV and EGD) gave a model fit that significantly predicted egg components with R2 ranging from 76 to 99%. The RR analysis provided VIF values lower than 10 and eliminated the MC problem for eggs produced at any age group. It is concluded that the RR analysis provided an ideal solution for managing the MC problem and successfully predicting egg components of laying hens from egg measurements.
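Both diagnostics used here are easy to sketch. For two correlated predictors the VIF reduces to 1/(1 − r²), and a one-predictor ridge estimate shows the shrinkage that tames multicollinearity. All data below are toy values, not the egg measurements.

```python
def vif_two_predictors(r):
    """Variance inflation factor when a predictor's R^2 against the other is r^2."""
    return 1.0 / (1.0 - r * r)

def ridge_1d(x, y, lam):
    """One-predictor ridge coefficient on centered data: sum(xy) / (sum(x^2) + lam).
    lam = 0 recovers ordinary least squares."""
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    return sxy / (sxx + lam)

x = [-2.0, -1.0, 0.0, 1.0, 2.0]    # centered egg-weight-like predictor (toy)
y = [-4.1, -2.0, 0.1, 2.0, 4.0]    # centered component weight (toy)
ols = ridge_1d(x, y, 0.0)          # ordinary least squares estimate
shrunk = ridge_1d(x, y, 5.0)       # ridge shrinks the coefficient toward zero
high_vif = vif_two_predictors(0.95)  # r = 0.95 between predictors pushes VIF past 10
```

The penalty lam trades a small bias for a large variance reduction, which is why the RR fits report VIF values below 10 where the SR fits do not.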
Web tools for predictive toxicology model building.
Jeliazkova, Nina
2012-07-01
The development and use of web tools in chemistry has accumulated more than 15 years of history already. Powered by the advances in the Internet technologies, the current generation of web systems are starting to expand into areas, traditional for desktop applications. The web platforms integrate data storage, cheminformatics and data analysis tools. The ease of use and the collaborative potential of the web is compelling, despite the challenges. The topic of this review is a set of recently published web tools that facilitate predictive toxicology model building. The focus is on software platforms, offering web access to chemical structure-based methods, although some of the frameworks could also provide bioinformatics or hybrid data analysis functionalities. A number of historical and current developments are cited. In order to provide comparable assessment, the following characteristics are considered: support for workflows, descriptor calculations, visualization, modeling algorithms, data management and data sharing capabilities, availability of GUI or programmatic access and implementation details. The success of the Web is largely due to its highly decentralized, yet sufficiently interoperable model for information access. The expected future convergence between cheminformatics and bioinformatics databases provides new challenges toward management and analysis of large data sets. The web tools in predictive toxicology will likely continue to evolve toward the right mix of flexibility, performance, scalability, interoperability, sets of unique features offered, friendly user interfaces, programmatic access for advanced users, platform independence, results reproducibility, curation and crowdsourcing utilities, collaborative sharing and secure access.
[Endometrial cancer: Predictive models and clinical impact].
Bendifallah, Sofiane; Ballester, Marcos; Daraï, Emile
2017-12-01
In France, in 2015, endometrial cancer (CE) is the first gynecological cancer in terms of incidence and the fourth cause of cancer of the woman. About 8151 new cases and nearly 2179 deaths have been reported. Treatments (surgery, external radiotherapy, brachytherapy and chemotherapy) are currently delivered on the basis of an estimation of the recurrence risk, an estimation of lymph node metastasis or an estimate of survival probability. This risk is determined on the basis of prognostic factors (clinical, histological, imaging, biological) taken alone or grouped together in the form of classification systems, which are currently insufficient to account for the evolutionary and prognostic heterogeneity of endometrial cancer. For endometrial cancer, the concept of mathematical modeling and its application to prediction have developed in recent years. These biomathematical tools have opened a new era of care oriented towards the promotion of targeted therapies and personalized treatments. Many predictive models have been published to estimate the risk of recurrence and lymph node metastasis, but a tiny fraction of them is sufficiently relevant and of clinical utility. The optimization tracks are multiple and varied, suggesting the possibility in the near future of a place for these mathematical models. The development of high-throughput genomics is likely to offer a more detailed molecular characterization of the disease and its heterogeneity. Copyright © 2017 Société Française du Cancer. Published by Elsevier Masson SAS. All rights reserved.
Predictive Capability Maturity Model for computational modeling and simulation.
Energy Technology Data Exchange (ETDEWEB)
Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.
2007-10-01
The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution of partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements of M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies specified application requirements.
Predictions of models for environmental radiological assessment
International Nuclear Information System (INIS)
Peres, Sueli da Silva; Lauria, Dejanira da Costa; Mahler, Claudio Fernando
2011-01-01
In the field of environmental impact assessment, models are used for estimating the source term, environmental dispersion and transfer of radionuclides, exposure pathways, radiation dose and the risk to human beings. Although it is recognized that specific local data are important for improving the quality of dose assessment results, obtaining such data can in fact be very difficult and expensive. Sources of uncertainty are numerous, among them the subjectivity of modelers, exposure scenarios and pathways, the codes used, and general parameters. The various models available use different mathematical approaches of differing complexity, which can result in different predictions. Thus, for the same inputs, different models can produce very different outputs. This paper briefly presents the main advances in the field of environmental radiological assessment that aim to improve the reliability of the models used in assessing environmental radiological impact. A model intercomparison exercise produced incompatible results for 137Cs and 60Co, underscoring the need to develop reference methodologies for environmental radiological assessment that allow dose estimates to be confronted on a common basis of comparison. The results of the intercomparison exercise are presented briefly. (author)
Phytoadaptation in Desert Soil Prediction Using Fuzzy Logic Modeling
S. Bouharati; F. Allag; M. Belmahdi; M. Bounechada
2014-01-01
To forecast the ecological effects of desertification, the purpose of this study is to develop a predictive model of the growth and adaptation of species under arid environmental and bioclimatic conditions. The impact of climate change and desertification phenomena is the result of the combined magnitude and frequency of these phenomena. Because the data involved in the phytopathogenic process and bacterial growth in arid soil occur in an uncertain environment owing to their complexity, it ...
Genomic value prediction for quantitative traits under the epistatic model
Directory of Open Access Journals (Sweden)
Xu Shizhong
2011-01-01
Background: Most quantitative traits are controlled by multiple quantitative trait loci (QTL). The contribution of each locus may be negligible, but the collective contribution of all loci is usually significant. Genome selection, which uses markers of the entire genome to predict the genomic values of individual plants or animals, can be more efficient than selection on phenotypic values and pedigree information alone for genetic improvement. When a quantitative trait is contributed by epistatic effects, using all markers (main effects) and marker pairs (epistatic effects) to predict the genomic values of plants can achieve the maximum efficiency for genetic improvement. Results: In this study, we created 126 recombinant inbred lines of soybean and genotyped 80 markers across the genome. We applied the genome selection technique to predict the genomic value of somatic embryo number (a quantitative trait) for each line. Cross-validation analysis showed that the squared correlation coefficient between the observed and predicted embryo numbers was 0.33 when only main (additive) effects were used for prediction. When the interaction (epistatic) effects were also included in the model, the squared correlation coefficient reached 0.78. Conclusions: This study provides an excellent example of the application of genome selection to plant breeding.
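The gain from adding marker-pair terms can be illustrated with a toy simulation (hypothetical markers and effect sizes, not the soybean data): when a trait is driven by an A×B interaction, the product feature recovers far more of the signal than either main effect alone.

```python
import random

random.seed(1)

# Hypothetical biallelic markers coded 0/1; the trait is driven purely
# by an epistatic (product) effect between marker A and marker B.
n = 200
A = [random.randint(0, 1) for _ in range(n)]
B = [random.randint(0, 1) for _ in range(n)]
y = [2.0 * a * b + random.gauss(0, 0.3) for a, b in zip(A, B)]

def r2(x, y):
    """Squared correlation between a single predictor and the trait."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

main_A = r2(A, y)                                 # main effect of A alone
main_B = r2(B, y)                                 # main effect of B alone
epi = r2([a * b for a, b in zip(A, B)], y)        # A x B interaction term

print(f"R^2 using A only:   {main_A:.2f}")
print(f"R^2 using B only:   {main_B:.2f}")
print(f"R^2 using A*B term: {epi:.2f}")
```

The interaction feature captures most of the trait variance that the single-marker regressions miss, mirroring the jump from 0.33 to 0.78 reported in the study (though with invented numbers).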
Li, Jin; Tran, Maggie; Siwabessy, Justy
2016-01-01
Spatially continuous predictions of seabed hardness are important baseline environmental information for sustainable management of Australia's marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy and can be inferred from underwater video footage at limited locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (i.e., hard90 and hard70). We developed optimal predictive models to predict seabed hardness using random forest (RF) based on the point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods, namely variable importance (VI), averaged variable importance (AVI), knowledge-informed AVI (KIAVI), Boruta, and regularized RF (RRF), were tested based on predictive accuracy. The effects of highly correlated, important and unimportant predictors on the accuracy of RF predictive models were examined. Finally, the spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach used to pre-select predictive variables by excluding highly correlated variables needs to be re-examined; 4) the identification of the important and unimportant predictors provides useful guidelines for further improving predictive models; 5) FS methods select the most accurate predictive model(s) instead of the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to 'small p and large n' problems in environmental sciences. Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency and
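One of the FS building blocks above, variable importance, can be sketched as permutation importance: shuffle one predictor and measure how much accuracy drops. The snippet below substitutes a fixed stand-in scorer for a trained RF and uses made-up features, so it shows only the mechanics, not the study's results.

```python
import random

random.seed(2)

# Toy data: x0 is informative; x1 is a noisy copy of x0 (highly
# correlated); x2 is pure noise. All values are invented.
n = 300
x0 = [random.gauss(0, 1) for _ in range(n)]
x1 = [v + random.gauss(0, 0.1) for v in x0]
x2 = [random.gauss(0, 1) for _ in range(n)]
y = [1 if v > 0 else 0 for v in x0]
X = list(zip(x0, x1, x2))

def model(row):
    # Stand-in for rf.predict(); relies only on the informative feature.
    return 1 if row[0] > 0 else 0

def accuracy(X, y):
    return sum(model(r) == t for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, col, trials=20):
    """Mean accuracy drop when one column is randomly shuffled."""
    base = accuracy(X, y)
    drops = []
    for _ in range(trials):
        perm = [r[col] for r in X]
        random.shuffle(perm)
        Xp = [tuple(p if j == col else v for j, v in enumerate(r))
              for r, p in zip(X, perm)]
        drops.append(base - accuracy(Xp, y))
    return sum(drops) / trials

for col in range(3):
    print(f"feature {col}: importance = {permutation_importance(X, y, col):.3f}")
```

Shuffling the feature the scorer actually uses destroys accuracy, while shuffling the correlated copy or the noise feature changes nothing here; with a real RF, the correlated copy would typically share (and dilute) the importance, which is exactly the issue point 3) raises.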
Mathematical models for indoor radon prediction
International Nuclear Information System (INIS)
Malanca, A.; Pessina, V.; Dallara, G.
1995-01-01
It is known that indoor radon (Rn) concentrations can be predicted by means of mathematical models. The simplest model relies on only two variables: the Rn source strength and the air exchange rate. In the Lawrence Berkeley Laboratory (LBL) model, several environmental parameters are combined into a complex equation; in addition, a correlation between the ventilation rate and the Rn entry rate from the soil is assumed. The measurements were carried out using activated carbon canisters. Seventy-five measurements of Rn concentration were made inside two rooms on the second floor of a building block. One room had a single-glazed window, whereas the other had a double-pane window. During three different experimental protocols, the mean Rn concentration was always higher in the room with the double-glazed window. This behavior can be accounted for by the simplest model. A further set of 450 Rn measurements was collected inside a ground-floor room with a grounding well in it. This trend may be accounted for by the LBL model
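The "simplest model" mentioned above is a two-variable mass balance, which a few lines can reproduce. The entry rate, room volume, and air exchange rates below are illustrative guesses, not values from the paper; the point is only that a tighter (double-glazed) room with a lower exchange rate accumulates more radon.

```python
# Hypothetical inputs -- illustrative values, not from the study.
entry_rate = 800.0   # Rn entry rate into the room, Bq/h
volume = 40.0        # room volume, m^3

def steady_state_radon(entry_rate_bq_h, volume_m3, ach):
    """Steady-state indoor Rn concentration (Bq/m^3) from the
    two-variable mass balance: source strength / (volume * air
    exchange rate). Radioactive decay (~7.6e-3 per hour) is small
    next to typical air exchange rates and is omitted here."""
    return entry_rate_bq_h / (volume_m3 * ach)

single_glazed = steady_state_radon(entry_rate, volume, ach=0.8)
double_glazed = steady_state_radon(entry_rate, volume, ach=0.3)

print(f"single-glazed (0.8 ACH): {single_glazed:.1f} Bq/m^3")
print(f"double-glazed (0.3 ACH): {double_glazed:.1f} Bq/m^3")
```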
A Predictive Maintenance Model for Railway Tracks
DEFF Research Database (Denmark)
Li, Rui; Wen, Min; Salling, Kim Bang
2015-01-01
presents a mathematical model based on Mixed Integer Programming (MIP), designed to optimize predictive railway tamping activities for ballasted track over a time horizon of up to four years. The objective function is set up to minimize the actual costs for the tamping machine (measured by time…). Five technical and economic aspects are taken into account in scheduling tamping: (1) degradation of the standard deviation of the track's longitudinal level over time; (2) track geometrical alignment; (3) track quality thresholds based on train speed limits; (4) the dependency of the track quality… recovery on the track quality after the tamping operation; and (5) tamping machine operation factors. A Danish railway track between Odense and Fredericia, 57.2 km in length, is used over a time period of two to four years in the proposed maintenance model. The total cost can be reduced by up to 50…
An Operational Model for the Prediction of Jet Blast
2012-01-09
This paper presents an operational model for the prediction of jet blast. The model was developed based upon three modules: a jet exhaust model, a jet centerline decay model and an aircraft motion model. The final analysis was compared with d...
Septiani, Eka Lutfi; Widiyastuti, W.; Winardi, Sugeng; Machmudah, Siti; Nurtono, Tantular; Kusdianto
2016-02-01
Flame-assisted spray dryers are widely used for large-scale production of nanoparticles because of their capability. A numerical approach is needed to predict combustion and particle production in scale-up and optimization, owing to the difficulty of experimental observation and its relatively high cost. Computational Fluid Dynamics (CFD) can provide the momentum, energy and mass transfer, making CFD more time- and cost-efficient than experiments. Here, two turbulence models, k-ε and Large Eddy Simulation, were compared and applied to a flame-assisted spray dryer system. The energy source for particle drying was obtained from combustion between LPG as fuel and air as oxidizer and carrier gas, modelled as non-premixed combustion in the simulation. Silica particles formed from a silica sol precursor were used for the particle modelling. From several points of comparison, i.e. flame contour, temperature distribution and particle size distribution, the Large Eddy Simulation turbulence model provided the closest agreement with the experimental results.
[A predictive model on turnover intention of nurses in Korea].
Moon, Sook Ja; Han, Sang Sook
2011-10-01
The purpose of this study was to propose and test a predictive model that could explain and predict Korean nurses' turnover intentions. A survey using a structured questionnaire was conducted with 445 nurses in Korea. Six instruments were used in this model. The data were analyzed using the SPSS 15.0 and Amos 7.0 programs. Based on the constructed model, organizational commitment and burnout were found to have significant direct effects on nurses' turnover intention. In addition, factors such as empowerment, job satisfaction, and organizational commitment were found to indirectly affect nurses' turnover intention. The final modified model (χ²=402.30) explained turnover intention in Korean nurses. Findings from this study can be used to design appropriate strategies to further decrease nurses' turnover intention in Korea.
DEFF Research Database (Denmark)
Jørgensen, John Bagterp; Jørgensen, Sten Bay
2007-01-01
A prediction-error method tailored for model-based predictive control is presented. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state-space model. The linear discrete-time stochastic state-space… model is realized from a continuous-discrete-time linear stochastic system specified using transfer functions with time delays. It is argued that the prediction-error criterion should be selected such that it is compatible with the objective function of the predictive controller in which the model…
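The prediction-error idea can be sketched with a scalar Kalman predictor: run the filter under a candidate model, accumulate squared innovations (one-step prediction errors), and prefer the parameter with the smaller criterion. The system, noise levels, and parameter values below are invented for illustration.

```python
import random

random.seed(3)

# Scalar linear stochastic system (a stand-in for the state-space
# model in the paper): x[k+1] = a*x[k] + w,  y[k] = x[k] + v.
a_true, q, r = 0.9, 0.05, 0.1   # true dynamics, process & measurement var

def simulate(n):
    x, ys = 0.0, []
    for _ in range(n):
        ys.append(x + random.gauss(0, r ** 0.5))
        x = a_true * x + random.gauss(0, q ** 0.5)
    return ys

def pem_criterion(a, ys):
    """Sum of squared one-step prediction errors (innovations)
    from a scalar Kalman filter run with model parameter a."""
    xhat, p, J = 0.0, 1.0, 0.0
    for y in ys:
        e = y - xhat              # innovation: y[k] minus its prediction
        k = p / (p + r)           # Kalman gain
        J += e * e
        xhat_f = xhat + k * e     # measurement update
        p_f = (1 - k) * p
        xhat = a * xhat_f         # time update (one-step prediction)
        p = a * a * p_f + q
    return J

ys = simulate(500)
# The criterion should be smaller near the true dynamics (a = 0.9)
# than for a badly mismatched model.
print("J(a=0.9):", pem_criterion(0.9, ys))
print("J(a=0.2):", pem_criterion(0.2, ys))
```

Minimizing J over the model parameters is the prediction-error method; the paper's point is that J should be chosen to match the predictive controller's own objective.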
The predictive performance and stability of six species distribution models.
Directory of Open Access Journals (Sweden)
Ren-Yan Duan
Predicting species' potential geographical ranges with species distribution models (SDMs) is central to understanding their ecological requirements. However, the effects of using different modeling techniques need further investigation. In order to improve prediction, we need to assess the predictive performance and stability of different SDMs. We collected distribution data for five common tree species (Pinus massoniana, Betula platyphylla, Quercus wutaishanica, Quercus mongolica and Quercus variabilis) and simulated their potential distribution areas using 13 environmental variables and six widely used SDMs: BIOCLIM, DOMAIN, MAHAL, RF, MAXENT, and SVM. Each model run was repeated 100 times (trials). We compared predictive performance by testing the consistency between observations and simulated distributions, and assessed stability by the standard deviation, coefficient of variation, and 99% confidence interval of the Kappa and AUC values. The mean AUC and Kappa values from the MAHAL, RF, MAXENT, and SVM trials were similar and significantly higher than those from the BIOCLIM and DOMAIN trials (p<0.05), while the associated standard deviations and coefficients of variation were larger for the BIOCLIM and DOMAIN trials (p<0.05), and the 99% confidence intervals for the AUC and Kappa values were narrower for MAHAL, RF, MAXENT, and SVM. Compared to BIOCLIM and DOMAIN, the other SDMs (MAHAL, RF, MAXENT, and SVM) had higher prediction accuracy, smaller confidence intervals, and were more stable and less affected by the random variable (randomly selected pseudo-absence points). According to the prediction performance and stability of the SDMs, we can divide these six SDMs into two categories: a high-performance and high-stability group comprising MAHAL, RF, MAXENT, and SVM, and a low-performance and low-stability group consisting of BIOCLIM and DOMAIN. We highlight that choosing appropriate SDMs to address a specific problem is an important part of the modeling process.
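The two agreement measures used above, AUC and Kappa, are straightforward to compute from predictions and observations. A minimal sketch with made-up presence/absence data and suitability scores:

```python
def auc(scores, labels):
    """Rank-based AUC: probability that a random positive outranks a
    random negative (ties count half)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def kappa(pred, obs):
    """Cohen's kappa for binary presence/absence predictions:
    observed agreement corrected for chance agreement."""
    n = len(obs)
    po = sum(p == o for p, o in zip(pred, obs)) / n
    p1 = sum(pred) / n            # fraction predicted present
    o1 = sum(obs) / n             # fraction observed present
    pe = p1 * o1 + (1 - p1) * (1 - o1)
    return (po - pe) / (1 - pe)

# Hypothetical observations and model suitability scores.
obs = [1, 1, 1, 0, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.7, 0.1]
pred = [1 if s >= 0.5 else 0 for s in scores]

print("AUC:  ", auc(scores, obs))    # 0.9375 on this toy data
print("Kappa:", kappa(pred, obs))    # 0.5 on this toy data
```

AUC scores the continuous ranking, while Kappa scores the thresholded map; the study tracks both, plus their spread across 100 trials, to separate performance from stability.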
Mathematical modelling methodologies in predictive food microbiology: a SWOT analysis.
Ferrer, Jordi; Prats, Clara; López, Daniel; Vives-Rego, Josep
2009-08-31
Predictive microbiology is the area of food microbiology that attempts to forecast the quantitative evolution of microbial populations over time. This is achieved to a great extent through models that include the mechanisms governing population dynamics. Traditionally, the models used in predictive microbiology are whole-system continuous models that describe population dynamics by means of equations applied to extensive or averaged variables of the whole system. Many existing models can be classified by specific criteria. We can distinguish between survival and growth models by seeing whether they tackle mortality or cell duplication. We can distinguish between empirical (phenomenological) models, which mathematically describe specific behaviour, and theoretical (mechanistic) models with a biological basis, which search for the underlying mechanisms driving already observed phenomena. We can also distinguish between primary, secondary and tertiary models, by examining their treatment of the effects of external factors and constraints on the microbial community. Recently, the use of spatially explicit Individual-based Models (IbMs) has spread through predictive microbiology, due to the current technological capacity of performing measurements on single individual cells and thanks to the consolidation of computational modelling. Spatially explicit IbMs are bottom-up approaches to microbial communities that build bridges between the description of micro-organisms at the cell level and macroscopic observations at the population level. They provide greater insight into the mesoscale phenomena that link unicellular and population levels. Every model is built in response to a particular question and with different aims. Even so, in this research we conducted a SWOT (Strength, Weaknesses, Opportunities and Threats) analysis of the different approaches (population continuous modelling and Individual-based Modelling), which we hope will be helpful for current and future
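As a concrete instance of a primary, whole-population growth model of the kind discussed above, here is a logistic curve; the parameters are hypothetical, not taken from any dataset in the review.

```python
import math

def logistic_growth(t, n0, nmax, mu):
    """Primary predictive-microbiology model: logistic growth of a
    microbial population. mu is the maximum specific growth rate (1/h),
    n0 the initial load, nmax the carrying capacity."""
    return nmax / (1 + (nmax / n0 - 1) * math.exp(-mu * t))

# Hypothetical parameters, for illustration only.
n0, nmax, mu = 1e3, 1e9, 0.6   # CFU/g, CFU/g, 1/h

for t in (0, 10, 20, 40):
    print(f"t = {t:2d} h: N = {logistic_growth(t, n0, nmax, mu):.3g} CFU/g")
```

In the review's terminology this is a primary model; a secondary model would then express mu as a function of external factors such as temperature, and a tertiary model would package both into software.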
International Nuclear Information System (INIS)
Moon, Jin Woo; Yoon, Younju; Jeon, Young-Hoon; Kim, Sooyoung
2017-01-01
Highlights: • An initial ANN model was developed for predicting the time to the setback temperature. • The initial model was optimized to produce accurate output. • The optimized model proved its prediction accuracy. • ANN-based algorithms were developed and their performance tested. • The ANN-based algorithms presented superior thermal comfort or energy efficiency. - Abstract: In this study, a temperature control algorithm was developed to apply a setback temperature predictively for the cooling system of a residential building during periods occupied by residents. An artificial neural network (ANN) model was developed to determine the time required to raise the current indoor temperature to the setback temperature. This study involved three phases: development of the initial ANN-based prediction model, optimization and testing of the initial model, and development and testing of three control algorithms. The development and performance testing of the model and algorithms were conducted using TRNSYS and MATLAB. Through the development and optimization process, the final ANN model employed the indoor temperature and the difference between the current and target setback temperatures as its two input neurons. The optimal number of hidden layers, number of neurons, learning rate, and momentum were determined to be 4, 9, 0.6, and 0.9, respectively. Tangent-sigmoid and pure-linear transfer functions were used in the hidden and output neurons, respectively. The ANN model used 100 training data sets with a sliding-window method for data management. The Levenberg-Marquardt training method was employed for model training. The optimized model had a root mean square error of 0.9097 when its predictions were compared with the simulated results. Employing the ANN model, the ANN-based algorithms kept indoor temperatures within the target ranges better. Compared to the conventional algorithm, the ANN-based algorithms reduced the duration of time in which the indoor temperature
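The final model's structure (two temperature inputs, tangent-sigmoid hidden units, a pure-linear output) can be sketched as a forward pass. The weights below are random placeholders rather than the trained values, and a single hidden layer of 9 neurons stands in for the paper's four, so this shows only the architecture, not the reported accuracy.

```python
import math
import random

random.seed(4)

# Structure mirroring the paper's model inputs/outputs: 2 inputs
# (current indoor temperature, difference to the setback target),
# tanh hidden units, one linear output (predicted time to setback).
n_in, n_hidden = 2, 9

W1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
b1 = [random.uniform(-1, 1) for _ in range(n_hidden)]
W2 = [random.uniform(-1, 1) for _ in range(n_hidden)]
b2 = random.uniform(-1, 1)

def predict(inputs):
    """One forward pass: tanh hidden layer, pure-linear output."""
    hidden = [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
              for row, b in zip(W1, b1)]
    return sum(w * h for w, h in zip(W2, hidden)) + b2

print(predict([26.0, 2.0]))   # e.g. indoor 26 C, 2 C below the target
```

With trained weights (e.g. via Levenberg-Marquardt, as in the paper), the same forward pass would yield the required pre-cooling lead time for the control algorithm.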
Planetary wave prediction: Benefits of tropical data and global models
Somerville, R. C. J.
1985-01-01
Skillful numerical predictions of midlatitude atmospheric planetary waves generally require both tropical data for the initial conditions and a global domain for the forecast model. The lack of either adequate tropical observations or a global domain typically leads to a significant degradation of forecast skill in middle latitudes within the first one to three days of the forecast period. These effects were first discovered by numerical experimentation. They were subsequently explained theoretically, and their importance for practical forecasting was confirmed in a series of prediction experiments using FGGE data.
Predictive modeling: potential application in prevention services.
Wilson, Moira L; Tumen, Sarah; Ota, Rissa; Simmers, Anthony G
2015-05-01
In 2012, the New Zealand Government announced a proposal to introduce predictive risk models (PRMs) to help professionals identify and assess children at risk of abuse or neglect as part of a preventive early intervention strategy, subject to further feasibility study and trialing. The purpose of this study is to examine technical feasibility and predictive validity of the proposal, focusing on a PRM that would draw on population-wide linked administrative data to identify newborn children who are at high priority for intensive preventive services. Data analysis was conducted in 2013 based on data collected in 2000-2012. A PRM was developed using data for children born in 2010 and externally validated for children born in 2007, examining outcomes to age 5 years. Performance of the PRM in predicting administratively recorded substantiations of maltreatment was good compared to the performance of other tools reviewed in the literature, both overall, and for indigenous Māori children. Some, but not all, of the children who go on to have recorded substantiations of maltreatment could be identified early using PRMs. PRMs should be considered as a potential complement to, rather than a replacement for, professional judgment. Trials are needed to establish whether risks can be mitigated and PRMs can make a positive contribution to frontline practice, engagement in preventive services, and outcomes for children. Deciding whether to proceed to trial requires balancing a range of considerations, including ethical and privacy risks and the risk of compounding surveillance bias. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.
Estimation and prediction under local volatility jump-diffusion model
Kim, Namhyoung; Lee, Younhee
2018-02-01
Volatility is an important factor in operating a company and managing risk. In portfolio optimization and in risk hedging using options, the value of the option is evaluated using a volatility model. Various attempts have been made to predict option values. Recent studies have shown that stochastic volatility models and jump-diffusion models reflect stock price movements accurately. However, these models have practical limitations. Combining them with the local volatility model, which is widely used among practitioners, may lead to better performance. In this study, we propose a more effective and efficient method of estimating option prices by combining the local volatility model with the jump-diffusion model, and we apply it to both artificial and actual market data to evaluate its performance. The calibration process for estimating the jump parameters and local volatility surfaces is divided into three stages. We apply the local volatility model, the stochastic volatility model, and the local volatility jump-diffusion model estimated by the proposed method to KOSPI 200 index option pricing. The proposed method displays good estimation and prediction performance.
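A jump-diffusion price path of the kind combined with local volatility above can be simulated with a simple Euler scheme. The sketch below uses a constant σ in place of a local-volatility surface σ(S, t), a Bernoulli approximation of Poisson jump arrivals per step, and hypothetical parameters throughout.

```python
import math
import random

random.seed(5)

# Merton-style jump-diffusion parameters (all illustrative).
s0, mu, sigma = 100.0, 0.05, 0.2           # spot, drift, diffusion vol
lam, jump_mu, jump_sig = 0.5, -0.05, 0.1   # jump intensity & log-size
T, steps = 1.0, 252
dt = T / steps

def simulate_path():
    """One terminal price from an Euler scheme; in a local-volatility
    variant, sigma would be evaluated as sigma(s, t) each step."""
    s = s0
    for _ in range(steps):
        z = random.gauss(0.0, 1.0)
        s *= math.exp((mu - 0.5 * sigma ** 2) * dt
                      + sigma * math.sqrt(dt) * z)
        if random.random() < lam * dt:      # approximate Poisson arrival
            s *= math.exp(random.gauss(jump_mu, jump_sig))
    return s

paths = [simulate_path() for _ in range(2000)]
print("mean terminal price:", sum(paths) / len(paths))
```

Averaging discounted payoffs over such paths is the standard Monte Carlo route to option prices once the jump parameters and volatility surface have been calibrated.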
Jensen's Inequality Predicts Effects of Environmental Variation
Jonathan J. Ruel; Matthew P. Ayres
1999-01-01
Many biologists now recognize that environmental variance can exert important effects on patterns and processes in nature that are independent of average conditions. Jensen's inequality is a mathematical proof that is seldom mentioned in the ecological literature but which provides a powerful tool for predicting some direct effects of environmental variance in...
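Jensen's inequality is easy to verify numerically: for a convex function f, E[f(X)] ≥ f(E[X]), so the mean response under a variable environment exceeds the response at the mean environment. The response curve and distribution below are arbitrary choices for illustration.

```python
import random

random.seed(6)

def f(x):
    """A convex 'response curve' (any convex f works for Jensen)."""
    return x * x

# A variable environment: fluctuating condition X with mean 10, sd 3.
xs = [random.gauss(10.0, 3.0) for _ in range(10000)]

mean_of_f = sum(f(x) for x in xs) / len(xs)   # E[f(X)]
f_of_mean = f(sum(xs) / len(xs))              # f(E[X])

print(f"E[f(X)] = {mean_of_f:.1f}")
print(f"f(E[X]) = {f_of_mean:.1f}")
# For f(x) = x^2 the gap E[f(X)] - f(E[X]) equals Var(X), about 9 here.
```

For a concave response curve the inequality flips, which is why variance can either raise or lower mean performance depending on the curvature of the organism's response.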
Nonlinear Model Predictive Control with Constraint Satisfactions for a Quadcopter
Wang, Ye; Ramirez-Jaime, Andres; Xu, Feng; Puig, Vicenç
2017-01-01
This paper presents a nonlinear model predictive control (NMPC) strategy combined with constraint satisfactions for a quadcopter. The full dynamics of the quadcopter describing the attitude and position are nonlinear, which are quite sensitive to changes of inputs and disturbances. By means of constraint satisfactions, partial nonlinearities and modeling errors of the control-oriented model of full dynamics can be transformed into the inequality constraints. Subsequently, the quadcopter can be controlled by an NMPC controller with the updated constraints generated by constraint satisfactions. Finally, the simulation results applied to a quadcopter simulator are provided to show the effectiveness of the proposed strategy.
Shim, Jaemin; Hwang, Minki; Song, Jun-Seop; Lim, Byounghyun; Kim, Tae-Hoon; Joung, Boyoung; Kim, Sung-Hwan; Oh, Yong-Seog; Nam, Gi-Byung; On, Young Keun; Oh, Seil; Kim, Young-Hoon; Pak, Hui-Nam
2017-01-01
Objective: Radiofrequency catheter ablation for persistent atrial fibrillation (PeAF) still has a substantial recurrence rate. This study aims to investigate whether an AF ablation lesion set chosen using in-silico ablation (V-ABL) is clinically feasible and more effective than an empirically chosen ablation lesion set (Em-ABL) in patients with PeAF. Methods: We prospectively included 108 patients with antiarrhythmic drug-resistant PeAF (77.8% men, age 60.8 ± 9.9 years), and randomly assigned them to the V-ABL (n = 53) and Em-ABL (n = 55) groups. Five different in-silico ablation lesion sets [1 pulmonary vein isolation (PVI), 3 linear ablations, and 1 electrogram-guided ablation] were compared using heart-CT integrated AF modeling. We evaluated the feasibility, safety, and efficacy of V-ABL compared with that of Em-ABL. Results: The pre-procedural computing time for five different ablation strategies was 166 ± 11 min. In the Em-ABL group, the earliest terminating blinded in-silico lesion set matched the Em-ABL lesion set in 21.8%. V-ABL was not inferior to Em-ABL in terms of procedure time (p = 0.403), ablation time (p = 0.510), and major complication rate (p = 0.900). During 12.6 ± 3.8 months of follow-up, the clinical recurrence rate was 14.0% in the V-ABL group and 18.9% in the Em-ABL group (p = 0.538). In the Em-ABL group, the clinical recurrence rate was significantly lower after PVI + posterior box + anterior linear ablation, which showed the most frequent termination during in-silico ablation (log-rank p = 0.027). Conclusions: V-ABL was feasible in clinical practice, not inferior to Em-ABL, and predicts the most effective ablation lesion set in patients who underwent PeAF ablation.
Heuristic Modeling for TRMM Lifetime Predictions
Jordan, P. S.; Sharer, P. J.; DeFazio, R. L.
1996-01-01
Analysis times for computing the expected mission lifetimes of proposed frequently maneuvering, tightly altitude-constrained, Earth-orbiting spacecraft have been significantly reduced by means of a heuristic modeling method implemented in a commercial off-the-shelf spreadsheet product (QuattroPro) running on a personal computer (PC). The method uses a look-up table to estimate the maneuver frequency per month as a function of the spacecraft ballistic coefficient and the solar flux index, then computes the associated fuel use with a simple engine model. Maneuver frequency data points are produced by means of a single one-month run of traditional mission analysis software for each of the 12 to 25 data points required for the table. As the data-point computations are required only at mission design start-up and on the occasion of significant mission redesigns, the dependence on time-consuming traditional modeling methods is dramatically reduced. Results to date have agreed with traditional methods to within 1 to 1.5 percent. The spreadsheet approach is applicable to a wide variety of Earth-orbiting spacecraft with tight altitude constraints. It will be particularly useful for missions such as the Tropical Rainfall Measuring Mission, scheduled for launch in 1997, whose mission lifetime calculations are heavily dependent on frequently revised solar flux predictions.
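The look-up-table-plus-engine-model approach can be sketched directly. The table entries, interpolation over solar flux only, and the propellant cost per maneuver below are invented placeholders (a real table would also vary with ballistic coefficient), not TRMM values.

```python
from bisect import bisect_right

# Hypothetical maneuver-frequency look-up table (maneuvers/month)
# indexed by solar flux index -- illustrative numbers only.
flux_levels = [70, 100, 150, 200, 250]
maneuvers_per_month = [0.5, 1.0, 2.5, 5.0, 9.0]

def lookup_maneuver_rate(flux):
    """Piecewise-linear interpolation in the look-up table."""
    if flux <= flux_levels[0]:
        return maneuvers_per_month[0]
    if flux >= flux_levels[-1]:
        return maneuvers_per_month[-1]
    i = bisect_right(flux_levels, flux)
    x0, x1 = flux_levels[i - 1], flux_levels[i]
    y0, y1 = maneuvers_per_month[i - 1], maneuvers_per_month[i]
    return y0 + (y1 - y0) * (flux - x0) / (x1 - x0)

def months_of_fuel(fuel_kg, flux, kg_per_maneuver=0.2):
    """Simple engine model: constant propellant cost per maneuver."""
    return fuel_kg / (lookup_maneuver_rate(flux) * kg_per_maneuver)

print("rate at flux 125:", lookup_maneuver_rate(125))
print("months at flux 125 with 50 kg:", months_of_fuel(50.0, 125))
```

Stepping such a calculation month by month against a (frequently revised) solar flux forecast, and summing fuel use until the tank is dry, gives the spreadsheet-style lifetime estimate the abstract describes.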
A Computational Model for Predicting Gas Breakdown
Gill, Zachary
2017-10-01
Pulsed-inductive discharges are a common method of producing a plasma. They provide a mechanism for quickly and efficiently generating a large volume of plasma for rapid use and are seen in applications including propulsion, fusion power, and high-power lasers. However, some common designs see a delayed response time due to the plasma forming when the magnitude of the magnetic field in the thruster is at a minimum. New designs are difficult to evaluate due to the amount of time needed to construct a new geometry and the high monetary cost of changing the power generation circuit. To more quickly evaluate new designs and better understand the shortcomings of existing designs, a computational model is developed. This model uses a modified single-electron model as the basis for a Mathematica code to determine how the energy distribution in a system changes with regards to time and location. By analyzing this energy distribution, the approximate time and location of initial plasma breakdown can be predicted. The results from this code are then compared to existing data to show its validity and shortcomings. Missouri S&T APLab.
Distributed model predictive control made easy
Negenborn, Rudy
2014-01-01
The rapid evolution of computer science, communication, and information technology has enabled the application of control techniques to systems beyond the possibilities of control theory just a decade ago. Critical infrastructures such as electricity, water, traffic and intermodal transport networks are now in the scope of control engineers. The sheer size of such large-scale systems requires the adoption of advanced distributed control approaches. Distributed model predictive control (MPC) is one of the promising control methodologies for control of such systems. This book provides a state-of-the-art overview of distributed MPC approaches, while at the same time making clear directions of research that deserve more attention. The core and rationale of 35 approaches are carefully explained. Moreover, detailed step-by-step algorithmic descriptions of each approach are provided. These features make the book a comprehensive guide both for those seeking an introduction to distributed MPC as well as for those ...
Using Empirical Models for Communication Prediction of Spacecraft
Quasny, Todd
2015-01-01
A viable communication path to a spacecraft is vital for its successful operation. For human spaceflight, a reliable and predictable communication link between the spacecraft and the ground is essential not only for the safety of the vehicle and the success of the mission, but for the safety of the humans on board as well. However, analytical models of these communication links are challenged by unique characteristics of space and the vehicle itself. For example, radio-frequency effects during high-energy solar events, while the signal travels through a solar array of the spacecraft, can be difficult to model, and thus to predict. This presentation covers the use of empirical methods of communication link prediction, using the International Space Station (ISS) and its associated historical data as the verification platform and test bed. These empirical methods can then be incorporated into communication prediction and automation tools for the ISS in order to better understand the quality of the communication path given a myriad of variables, including solar array positions, line of sight to satellites, position of the sun, and other dynamic structures on the outside of the ISS. The image on the left below shows the current analytical model of one of the communication systems on the ISS. The image on the right shows a rudimentary empirical model of the same system based on historical archived data from the ISS.
Hidden Semi-Markov Models for Predictive Maintenance
Directory of Open Access Journals (Sweden)
Francesco Cartella
2015-01-01
Realistic predictive maintenance approaches are essential for condition monitoring and predictive maintenance of industrial machines. In this work, we propose Hidden Semi-Markov Models (HSMMs) with (i) no constraints on the state duration density function and (ii) applicability to continuous or discrete observations. To deal with this type of HSMM, we also propose modifications to the learning, inference, and prediction algorithms. Finally, automatic model selection has been made possible using the Akaike Information Criterion. This paper describes the theoretical formalization of the model as well as several experiments performed on simulated and real data with the aim of validating the methodology. In all performed experiments, the model is able to correctly estimate the current state and to effectively predict the time to a predefined event with a low overall average absolute error. As a consequence, its applicability to real-world settings can be beneficial, especially where the Remaining Useful Lifetime (RUL) of the machine must be calculated in real time.
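The automatic model selection mentioned above reduces to computing AIC = 2k − 2·ln L for each candidate model and keeping the minimum. A minimal sketch follows; the candidate log-likelihoods and parameter counts are illustrative placeholders, not values from the paper.

```python
def aic(log_likelihood, n_params):
    """Akaike Information Criterion: lower is better."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical candidates: (log-likelihood on training data, free parameters)
candidates = {
    "3-state HSMM": (-1204.5, 18),
    "4-state HSMM": (-1190.2, 28),
    "5-state HSMM": (-1188.9, 40),
}

# Pick the model with the smallest AIC.
best = min(candidates, key=lambda m: aic(*candidates[m]))
```

Note how the 5-state model, despite the higher likelihood, is penalized for its extra parameters.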
Tollenaar, N.; van der Heijden, P.G.M.
2012-01-01
Using criminal population conviction histories of recent offenders, prediction models are developed that predict three types of criminal recidivism: general recidivism, violent recidivism and sexual recidivism. The research question is whether prediction techniques from modern statistics, data mining and machine learning provide an improvement in predictive performance over classical statistical methods, namely logistic regression and linear discriminant analysis. These models are compared ...
International Nuclear Information System (INIS)
Archbold, T.F.; Bower, R.B.; Polonis, D.H.
1982-04-01
The 1977 version of the Simpson-Puls-Dutton model appears to be the most amenable with respect to utilizing known or readily estimated quantities. The Pardee-Paton model requires extensive calculations involving estimated quantities. Recent observations by Koike and Suzuki on vanadium support the general assumption that crack growth in hydride forming metals is determined by the rate of hydride formation, and their hydrogen atmosphere-displacive transformation model is of potential interest in explaining hydrogen embrittlement in ferrous alloys as well as hydride formers. The discontinuous nature of cracking due to hydrogen embrittlement appears to depend very strongly on localized stress intensities, thereby pointing to the role of microstructure in influencing crack initiation, fracture mode and crack path. The initiation of hydrogen induced failures over relatively short periods of time can be characterized with fair reliability using measurements of the threshold stress intensity. The experimental conditions for determining K_Th and ΔK_Th are designed to ensure plane strain conditions in most cases. Plane strain test conditions may be viewed as a conservative basis for predicting delayed failure. The physical configuration of nuclear waste canisters may involve elastic/plastic conditions rather than a state of plane strain, especially with thin-walled vessels. Under these conditions, alternative predictive tests may be considered, including COD and R-curve methods. The double cantilever beam technique employed by Boyer and Spurr on titanium alloys offers advantages for examining hydrogen induced delayed failure over long periods of time. 88 references
Frequency weighted model predictive control of wind turbine
DEFF Research Database (Denmark)
Klauco, Martin; Poulsen, Niels Kjølstad; Mirzaei, Mahmood
2013-01-01
This work is focused on applying frequency weighted model predictive control (FMPC) on a three-blade horizontal axis wind turbine (HAWT). A wind turbine is a very complex, non-linear system influenced by stochastic wind speed variation. The reduced dynamics considered in this work ... are the rotational degree of freedom of the rotor and the tower fore-aft movement. The MPC design is based on a receding horizon policy and a linearised model of the wind turbine. Due to the change of dynamics according to wind speed, several linearisation points must be considered and the control design adjusted ... accordingly. In practice it is very hard to measure the effective wind speed; this quantity will be estimated using measurements from the turbine itself. For this purpose a stationary predictive Kalman filter has been used. Stochastic simulations of the wind turbine behaviour with applied frequency weighted model...
Regression Model to Predict Global Solar Irradiance in Malaysia
Directory of Open Access Journals (Sweden)
Hairuniza Ahmed Kutty
2015-01-01
A novel regression model is developed to estimate the monthly global solar irradiance in Malaysia. The model is developed based on different available meteorological parameters, including temperature, cloud cover, rain precipitate, relative humidity, wind speed, pressure, and gust speed, by implementing regression analysis. This paper reports the details of the analysis of the effect of each prediction parameter to identify the parameters that are relevant to estimating global solar irradiance. In addition, the proposed model is compared in terms of the root mean square error (RMSE), mean bias error (MBE), and the coefficient of determination (R2) with other models available from the literature. Seven models based on single parameters (PM1 to PM7) and five multiple-parameter models (PM8 to PM12) are proposed. The new models perform well, with RMSE ranging from 0.429% to 1.774%, R2 ranging from 0.942 to 0.992, and MBE ranging from −0.1571% to 0.6025%. In general, cloud cover significantly affects the estimation of global solar irradiance. However, cloud cover in Malaysia lacks sufficient influence when included in multiple-parameter models, although it performs fairly well in single-parameter prediction models.
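The three comparison criteria used above (RMSE, MBE, and R2) have standard definitions that are easy to state in code. A plain-Python sketch, with made-up observation/prediction pairs for illustration:

```python
import math

def rmse(obs, pred):
    """Root mean square error between observed and predicted values."""
    return math.sqrt(sum((p - o) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mbe(obs, pred):
    """Mean bias error: positive means the model over-predicts on average."""
    return sum(p - o for o, p in zip(obs, pred)) / len(obs)

def r2(obs, pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot
```

A model that always predicts the mean of the observations scores R2 = 0; a perfect model scores R2 = 1 with zero RMSE and MBE.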
Comparison of joint modeling and landmarking for dynamic prediction under an illness-death model.
Suresh, Krithika; Taylor, Jeremy M G; Spratt, Daniel E; Daignault, Stephanie; Tsodikov, Alexander
2017-11-01
Dynamic prediction incorporates time-dependent marker information accrued during follow-up to improve personalized survival prediction probabilities. At any follow-up, or "landmark", time, the residual time distribution for an individual, conditional on their updated marker values, can be used to produce a dynamic prediction. To satisfy a consistency condition that links dynamic predictions at different time points, the residual time distribution must follow from a prediction function that models the joint distribution of the marker process and time to failure, such as a joint model. To circumvent the assumptions and computational burden associated with a joint model, approximate methods for dynamic prediction have been proposed. One such method is landmarking, which fits a Cox model at a sequence of landmark times, and thus is not a comprehensive probability model of the marker process and the event time. Considering an illness-death model, we derive the residual time distribution and demonstrate that the structure of the Cox model baseline hazard and covariate effects under the landmarking approach do not have simple form. We suggest some extensions of the landmark Cox model that should provide a better approximation. We compare the performance of the landmark models with joint models using simulation studies and cognitive aging data from the PAQUID study. We examine the predicted probabilities produced under both methods using data from a prostate cancer study, where metastatic clinical failure is a time-dependent covariate for predicting death following radiation therapy. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Fuzzy predictive filtering in nonlinear economic model predictive control for demand response
DEFF Research Database (Denmark)
Santos, Rui Mirra; Zong, Yi; Sousa, Joao M. C.
2016-01-01
The performance of a model predictive controller (MPC) is highly correlated with the model's accuracy. This paper introduces an economic model predictive control (EMPC) scheme based on a nonlinear model, which uses a branch-and-bound tree search for solving the inherent non-convex optimization...
Fridgeirsdottir, Gudrun A; Harris, Robert J; Dryden, Ian L; Fischer, Peter M; Roberts, Clive J
2018-03-29
Solid dispersions can be a successful way to enhance the bioavailability of poorly soluble drugs. Here 60 solid dispersion formulations were produced using ten chemically diverse, neutral, poorly soluble drugs, three commonly used polymers, and two manufacturing techniques, spray-drying and melt extrusion. Each formulation underwent a six-month stability study at accelerated conditions, 40 °C and 75% relative humidity (RH). Significant differences in times to crystallization (onset of crystallization) were observed between both the different polymers and the two processing methods. Stability from zero days to over one year was observed. The extensive experimental data set obtained from this stability study was used to build multiple linear regression models to correlate physicochemical properties of the active pharmaceutical ingredients (API) with the stability data. The purpose of these models is to indicate which combination of processing method and polymer carrier is most likely to give a stable solid dispersion. Six quantitative mathematical multiple linear regression-based models were produced based on selection of the most influential independent physical and chemical parameters from a set of 33 possible factors, one model for each combination of polymer and processing method, with good predictability of stability. Three general rules are proposed from these models for the formulation development of suitably stable solid dispersions. Namely, increased stability is correlated with increased glass transition temperature (Tg) of solid dispersions, as well as decreased number of H-bond donors and increased molecular flexibility (such as rotatable bonds and ring count) of the drug molecule.
Directory of Open Access Journals (Sweden)
Nataša Šarlija
2017-01-01
This study sheds light on the most common issues related to applying logistic regression in prediction models for company growth. The purpose of the paper is (1) to provide a detailed demonstration of the steps in developing a growth prediction model based on logistic regression analysis, (2) to discuss common pitfalls and methodological errors in developing a model, and (3) to provide solutions and possible ways of overcoming these issues. Special attention is devoted to the question of satisfying logistic regression assumptions, selecting and defining dependent and independent variables, using classification tables and ROC curves for reporting model strength, interpreting odds ratios as effect measures, and evaluating performance of the prediction model. Development of a logistic regression model in this paper focuses on a prediction model of company growth. The analysis is based on predominantly financial data from a sample of 1471 small and medium-sized Croatian companies active between 2009 and 2014. The financial data are presented in the form of financial ratios divided into nine main groups depicting the following areas of business: liquidity, leverage, activity, profitability, research and development, investing and export. The growth prediction model indicates aspects of a business critical for achieving high growth. In that respect, the contribution of this paper is twofold: first, methodological, in terms of pointing out pitfalls and potential solutions in logistic regression modelling, and second, theoretical, in terms of identifying factors responsible for high growth of small and medium-sized companies.
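The classification table the study emphasizes is just a 2×2 cross-tabulation of thresholded predicted probabilities against observed outcomes, from which sensitivity and specificity follow. A minimal sketch (the labels and probabilities below are illustrative, not the study's data):

```python
def classification_table(y_true, p_hat, threshold=0.5):
    """Cross-tabulate observed 0/1 outcomes against thresholded probabilities."""
    tp = fp = tn = fn = 0
    for y, p in zip(y_true, p_hat):
        pred = 1 if p >= threshold else 0
        if pred == 1 and y == 1:
            tp += 1
        elif pred == 1 and y == 0:
            fp += 1
        elif pred == 0 and y == 0:
            tn += 1
        else:
            fn += 1
    sens = tp / (tp + fn) if (tp + fn) else 0.0  # true positive rate
    spec = tn / (tn + fp) if (tn + fp) else 0.0  # true negative rate
    return {"tp": tp, "fp": fp, "tn": tn, "fn": fn,
            "sensitivity": sens, "specificity": spec}
```

Sweeping the threshold from 0 to 1 and plotting sensitivity against 1 − specificity traces the ROC curve the paper uses for reporting model strength.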
Validating predictions from climate envelope models
Watling, J.; Bucklin, D.; Speroterra, C.; Brandt, L.; Cabal, C.; Romañach, Stephanie S.; Mazzotti, Frank J.
2013-01-01
Climate envelope models are a potentially important conservation tool, but their ability to accurately forecast species’ distributional shifts using independent survey data has not been fully evaluated. We created climate envelope models for 12 species of North American breeding birds previously shown to have experienced poleward range shifts. For each species, we evaluated three different approaches to climate envelope modeling that differed in the way they treated climate-induced range expansion and contraction, using random forests and maximum entropy modeling algorithms. All models were calibrated using occurrence data from 1967–1971 (t1) and evaluated using occurrence data from 1998–2002 (t2). Model sensitivity (the ability to correctly classify species presences) was greater using the maximum entropy algorithm than the random forest algorithm. Although sensitivity did not differ significantly among approaches, for many species, sensitivity was maximized using a hybrid approach that assumed range expansion, but not contraction, in t2. Species for which the hybrid approach resulted in the greatest improvement in sensitivity have been reported from more land cover types than species for which there was little difference in sensitivity between hybrid and dynamic approaches, suggesting that habitat generalists may be buffered somewhat against climate-induced range contractions. Specificity (the ability to correctly classify species absences) was maximized using the random forest algorithm and was lowest using the hybrid approach. Overall, our results suggest cautious optimism for the use of climate envelope models to forecast range shifts, but also underscore the importance of considering non-climate drivers of species range limits. The use of alternative climate envelope models that make different assumptions about range expansion and contraction is a new and potentially useful way to help inform our understanding of climate change effects on species.
HESS Opinions: Hydrologic predictions in a changing environment: behavioral modeling
Directory of Open Access Journals (Sweden)
S. J. Schymanski
2011-02-01
Most hydrological models are valid in at most a few places and cannot be reasonably transferred to other places or to far distant time periods. Transfer in space is difficult because the models are conditioned on past observations at particular places to define parameter values and unobservable processes that are needed to fully characterize the structure and functioning of the landscape. Transfer in time has to deal with the likely temporal changes to both parameters and processes under future changed conditions. This remains an important obstacle to addressing some of the most urgent prediction questions in hydrology, such as prediction in ungauged basins and prediction under global change. In this paper, we propose a new approach to catchment hydrological modeling, based on universal principles that do not change in time and that remain valid across many places. The key to this framework, which we call behavioral modeling, is to assume that there are universal and time-invariant organizing principles that can be used to identify the most appropriate model structure (including parameter values) and responses for a given ecosystem at a given moment in time. These organizing principles may be derived from fundamental physical or biological laws, or from empirical laws that have been demonstrated to be time-invariant and to hold at many places and scales. Much fundamental research remains to be undertaken to help discover these organizing principles on the basis of exploration of observed patterns of landscape structure and hydrological behavior and their interpretation as legacy effects of past co-evolution of climate, soils, topography, vegetation and humans. Our hope is that the new behavioral modeling framework will be a step towards a new vision for hydrology in which models are capable of more confidently predicting the behavior of catchments beyond what has been observed or experienced before.
Classification models for the prediction of clinicians' information needs.
Del Fiol, Guilherme; Haug, Peter J
2009-02-01
Clinicians face numerous information needs during patient care activities and most of these needs are not met. Infobuttons are information retrieval tools that help clinicians to fulfill their information needs by providing links to on-line health information resources from within an electronic medical record (EMR) system. The aim of this study was to produce classification models based on medication infobutton usage data to predict the medication-related content topics (e.g., dose, adverse effects, drug interactions, patient education) that a clinician is most likely to choose while entering medication orders in a particular clinical context. We prepared a dataset with 3078 infobutton sessions and 26 attributes describing characteristics of the user, the medication, and the patient. In these sessions, users selected one out of eight content topics. Automatic attribute selection methods were then applied to the dataset to eliminate redundant and useless attributes. The reduced dataset was used to produce nine classification models from a set of state-of-the-art machine learning algorithms. Finally, the performance of the models was measured and compared. The outcome measures were the area under the ROC curve (AUC) and the agreement (kappa) between the content topics predicted by the models and those chosen by clinicians in each infobutton session. The performance of the models ranged from 0.49 to 0.56 (kappa). The AUC of the best model ranged from 0.73 to 0.99. The best performance was achieved when predicting choice of the adult dose, pediatric dose, patient education, and pregnancy category content topics. The results suggest that classification models based on infobutton usage data are a promising method for the prediction of content topics that a clinician would choose to answer patient care questions while using an EMR system.
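The kappa statistic reported above is Cohen's kappa: observed agreement between predicted and clinician-chosen topics, corrected for the agreement expected by chance. A plain-Python sketch, with illustrative topic labels:

```python
def cohens_kappa(a, b):
    """Cohen's kappa between two equal-length label sequences."""
    labels = sorted(set(a) | set(b))
    n = len(a)
    # Observed agreement: fraction of positions where the labels match.
    po = sum(1 for x, y in zip(a, b) if x == y) / n
    # Chance agreement: product of each label's marginal frequencies.
    pe = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (po - pe) / (1 - pe)
```

Kappa of 0 means no better than chance; 1 means perfect agreement, which is why the paper reports model performance of 0.49 to 0.56 as moderate agreement.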
Modelling Chemical Reasoning to Predict and Invent Reactions.
Segler, Marwin H S; Waller, Mark P
2017-05-02
The ability to reason beyond established knowledge allows organic chemists to solve synthetic problems and invent novel transformations. Herein, we propose a model that mimics chemical reasoning, and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outperforms a rule-based expert system in the reaction prediction task for 180 000 randomly selected binary reactions. The data-driven model generalises even beyond known reaction types, and is thus capable of effectively (re-)discovering novel transformations (even including transition metal-catalysed reactions). Our model enables computers to infer hypotheses about reactivity and reactions by only considering the intrinsic local structure of the graph and because each single reaction prediction is typically achieved in a sub-second time frame, the model can be used as a high-throughput generator of reaction hypotheses for reaction discovery. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Modeling and predicting historical volatility in exchange rate markets
Lahmiri, Salim
2017-04-01
Volatility modeling and forecasting of currency exchange rates is an important component of several business risk management tasks, including treasury risk management, derivatives pricing, and portfolio risk evaluation. The purpose of this study is to present a simple and effective approach for predicting the historical volatility of currency exchange rates. The approach is based on a limited set of technical indicators used as inputs to artificial neural networks (ANN). To show the effectiveness of the proposed approach, it was applied to forecast US/Canada and US/Euro exchange rate volatilities. The forecasting results show that our simple approach outperformed the conventional GARCH and EGARCH with different distribution assumptions, and also the hybrid GARCH and EGARCH with ANN, in terms of mean absolute error, mean squared error, and Theil's inequality coefficient. Because of its simplicity and effectiveness, the approach is promising for US currency volatility prediction tasks.
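The three accuracy criteria named above (MAE, MSE, and Theil's inequality coefficient, often called Theil's U) can be sketched directly; the volatility series used in the test are made up for illustration.

```python
import math

def mae(obs, pred):
    """Mean absolute error."""
    return sum(abs(p - o) for o, p in zip(obs, pred)) / len(obs)

def mse(obs, pred):
    """Mean squared error."""
    return sum((p - o) ** 2 for o, p in zip(obs, pred)) / len(obs)

def theil_u(obs, pred):
    """Theil's inequality coefficient: 0 = perfect forecast, 1 = worst case."""
    num = math.sqrt(mse(obs, pred))
    den = (math.sqrt(sum(o * o for o in obs) / len(obs))
           + math.sqrt(sum(p * p for p in pred) / len(pred)))
    return num / den
```

Because Theil's U is bounded on [0, 1], it allows comparison across series of different scales, which is convenient when evaluating volatility forecasts for different currency pairs.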
Model of lifetime prediction - Study of the behaviour of polymers and organic matrix composites
International Nuclear Information System (INIS)
Colin, X.
2009-01-01
The team 'Aging of Organic Materials' of the Process and Engineering Laboratory in Mechanics and Materials (Arts et Metiers, ParisTech) has developed a lifetime prediction model for the behaviour of polymers and organic-matrix composites. This model has already proven to be a genuinely predictive means for various industrial applications, for instance the prediction of rupture under the coupled effect of a mechanical load and chemical degradation. (O.M.)
A multifocal electroretinogram model predicting the development of diabetic retinopathy.
Bearse, Marcus A; Adams, Anthony J; Han, Ying; Schneck, Marilyn E; Ng, Jason; Bronson-Castain, Kevin; Barez, Shirin
2006-09-01
The prevalence of diabetes has been accelerating at an alarming rate in the last decade; some describe it as an epidemic. Diabetic eye complications are the leading cause of blindness in adults aged 25-74 in the United States. Early diagnosis and development of effective preventatives and treatments of diabetic retinopathy are essential to save sight. We describe efforts to establish functional indicators of retinal health and predictors of diabetic retinopathy. These indicators and predictors will be needed as markers of the efficacy of new therapies. Clinical trials aimed at either prevention or early treatments will rely heavily on the discovery of sensitive methods to identify patients and retinal locations at risk, as well as to evaluate treatment effects. We report on recent success in revealing local functional changes of the retina with the multifocal electroretinogram (mfERG). This objective measure allows the simultaneous recording of responses from over 100 small retinal patches across the central 45 degrees field. We describe the sensitivity of mfERG implicit time measurement for revealing functional alterations of the retina in diabetes, the local correspondence between functional (mfERG) and structural (vascular) abnormalities in eyes with early nonproliferative retinopathy, and longitudinal studies to formulate models to predict the retinal sites of future retinopathic signs. A multivariate model including mfERG implicit time delays and 'person' risk factors achieved 86% sensitivity and 84% specificity for prediction of new retinopathy development over one year at specific locations in eyes with some retinopathy at baseline. A preliminary test of the model yielded very positive results. This model appears to be the first to predict, quantitatively, the retinal locations of new nonproliferative diabetic retinopathy development over a one-year period. In a separate study, the predictive power of a model was assessed over one- and two-year follow
Model Predictive Control for an Industrial SAG Mill
DEFF Research Database (Denmark)
Ohan, Valeriu; Steinke, Florian; Metzger, Michael
2012-01-01
We discuss Model Predictive Control (MPC) based on ARX models and a simple lower-order disturbance model. The advantage of this MPC formulation is that it has few tuning parameters and is based on an ARX prediction model that can readily be identified using standard technologies from system identic...
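The ARX prediction model underlying this kind of MPC expresses the next output as a linear combination of past outputs and past inputs. A minimal one-step-prediction sketch; the coefficients in the test are illustrative, not identified from a real SAG mill.

```python
def arx_predict(y_past, u_past, a, b):
    """One-step ARX prediction.

    y_past, u_past: past outputs/inputs, oldest first.
    a, b: ARX coefficients, a[0] multiplying y[k-1], b[0] multiplying u[k-1].
    Returns y[k] = -sum_i a[i]*y[k-1-i] + sum_j b[j]*u[k-1-j].
    """
    return (-sum(ai * yi for ai, yi in zip(a, reversed(y_past)))
            + sum(bj * uj for bj, uj in zip(b, reversed(u_past))))
```

In an MPC loop, this prediction is applied recursively over the horizon, feeding each predicted output back in as the newest "past" value, and the optimizer chooses the input sequence minimizing a cost over that horizon.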
Improved Prediction of the Doppler Effect in TRISO Fuel
Energy Technology Data Exchange (ETDEWEB)
J. Ortensi; A.M. Ougouag
2009-05-01
The Doppler feedback mechanism is a major contributor to the passive safety of gas-cooled, graphite-moderated High Temperature Reactors that use fuel based on TRISO particles. It follows that the correct prediction of the magnitude and time-dependence of this feedback effect is essential to the conduct of safety analyses for these reactors. Since the effect is directly dependent on the actual temperature reached by the fuel during transients, the underlying phenomena of heat transfer and temperature rise must be correctly predicted. This paper presents an improved model for the TRISO particle and its thermal behavior during transients. The improved approach incorporates an explicit TRISO heat conduction model to better quantify the time dependence of the temperature in the various layers of the TRISO particle, including its fuel central zone. There follows a better treatment of the Doppler Effect within said fuel zone. The new model is based on a 1-D analytic solution for composite media using the Green’s function technique. The modeling improvement takes advantage of some of the physical behavior of TRISO fuel under irradiation and includes a distinctive look at the physics of the neutronic Doppler Effect. The new methodology has been implemented within the coupled R-Z nodal diffusion code CYNOD-THERMIX. The new model has been applied to the analysis of earthquakes (presented in a companion paper). In this paper, the model is applied to the control rod ejection event, as specified in the OECD PBMR-400 benchmark, but with temperature dependent thermal properties. The results obtained for this transient using the enhanced code are a considerable improvement over the predictions of the original code. The incorporation of the enhanced model shows that the Doppler Effect plays a more significant role than predicted by the original unenhanced model based on the THERMIX homogenized fuel region model. The new model shows that the overall energy generation during the rod
Uncertainties in spatially aggregated predictions from a logistic regression model
Horssen, P.W. van; Pebesma, E.J.; Schot, P.P.
2002-01-01
This paper presents a method to assess the uncertainty of an ecological spatial prediction model which is based on logistic regression models, using data from the interpolation of explanatory predictor variables. The spatial predictions are presented as approximate 95% prediction intervals. The
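A standard way to obtain the approximate 95% prediction intervals mentioned above is to form the interval on the linear (logit) scale, where normality is more plausible, and back-transform through the inverse logit. The sketch below illustrates that mechanic only; the linear predictor and its standard error are hypothetical inputs, not values from the paper.

```python
import math

def invlogit(eta):
    """Inverse logit: map a linear predictor to a probability."""
    return 1.0 / (1.0 + math.exp(-eta))

def logistic_interval(eta_hat, se_eta, z=1.96):
    """Approximate 95% interval for a logistic-regression probability.

    eta_hat: fitted linear predictor; se_eta: its standard error.
    The interval is built on the logit scale and back-transformed.
    """
    p = invlogit(eta_hat)
    lo = invlogit(eta_hat - z * se_eta)
    hi = invlogit(eta_hat + z * se_eta)
    return p, lo, hi
```

Back-transforming keeps both interval endpoints inside (0, 1), which an interval built directly on the probability scale does not guarantee.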
Dealing with missing predictor values when applying clinical prediction models.
Janssen, K.J.; Vergouwe, Y.; Donders, A.R.T.; Harrell Jr, F.E.; Chen, Q.; Grobbee, D.E.; Moons, K.G.
2009-01-01
BACKGROUND: Prediction models combine patient characteristics and test results to predict the presence of a disease or the occurrence of an event in the future. In the event that test results (predictor) are unavailable, a strategy is needed to help users applying a prediction model to deal with
Modeling of Pressure Effects in HVDC Cables
DEFF Research Database (Denmark)
Szabo, Peter; Hassager, Ole; Strøbech, Esben
1999-01-01
A model is developed for the prediction of pressure effects in HVDC mass impregnated cables as a result of temperature changes. To test the model assumptions, experiments were performed in cable-like geometries. It is concluded that the model may predict the formation of gas cavities....
Directory of Open Access Journals (Sweden)
Jing Lu
2014-11-01
Full Text Available We propose a weather prediction model in this article based on a neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the “fuzzy rule-based neural network”, which simulates sequential relations among fuzzy sets using an artificial neural network; and the second part is the “neural fuzzy inference system”, which is based on the first part, but can learn new fuzzy rules from the previous ones according to the algorithm we propose. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. The need for accurate weather prediction is apparent when considering its benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the “accurate” prediction results meaningless, and numerical prediction models are often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we make the predicted outcomes of precipitation more accurate and the prediction methods simpler than the complex numerical forecasting models, which occupy large computational resources, are time-consuming and have a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than by using traditional artificial neural networks that have low predictive accuracy.
Predictions and the Limiting Effects of Prequestions.
Shanahan, Timothy
A study examined the effects of teacher questioning and student prediction (purpose-setting procedures) upon the reading comprehension of 188 students in grades 3 through 6. Thirty-two constructed-answer questions were developed for use with an article about kangaroos, written in an expository style and approximately 900 words in length. Half of…
Predictive capabilities of various constitutive models for arterial tissue.
Schroeder, Florian; Polzer, Stanislav; Slažanský, Martin; Man, Vojtěch; Skácel, Pavel
2018-02-01
The aim of this study is to validate some constitutive models by assessing their capabilities in describing and predicting uniaxial and biaxial behavior of porcine aortic tissue. 14 samples from porcine aortas were used to perform 2 uniaxial and 5 biaxial tensile tests. Transversal strains were furthermore recorded for the uniaxial data. The experimental data were fitted by four constitutive models: the Holzapfel-Gasser-Ogden model (HGO), a model based on the generalized structure tensor (GST), the Four-Fiber-Family model (FFF) and the Microfiber model. Fitting was performed to the uniaxial and biaxial data sets separately and the descriptive capabilities of the models were compared. Their predictive capabilities were assessed in two ways. Firstly, each model was fitted to the biaxial data and its accuracy (in terms of R² and NRMSE) in predicting both uniaxial responses was evaluated. Then this procedure was performed conversely: each model was fitted to both uniaxial tests and its accuracy in predicting the 5 biaxial responses was observed. The descriptive capabilities of all models were excellent. In predicting the uniaxial response from biaxial data, the Microfiber model was the most accurate, while the other models also showed reasonable accuracy. The Microfiber and FFF models were able to reasonably predict biaxial responses from uniaxial data, while the HGO and GST models failed completely in this task. The HGO and GST models cannot predict biaxial arterial wall behavior, while the FFF model is the most robust of the investigated constitutive models. Knowledge of transversal strains in uniaxial tests improves the robustness of constitutive models. Copyright © 2017 Elsevier Ltd. All rights reserved.
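The accuracy measures used in this comparison (R² and NRMSE) are simple to compute from paired measured/predicted values. A minimal sketch with illustrative stress values, not the porcine aorta data:

```python
import math

def r_squared(measured, predicted):
    """Coefficient of determination between measured and predicted values."""
    mean = sum(measured) / len(measured)
    ss_res = sum((m - p) ** 2 for m, p in zip(measured, predicted))
    ss_tot = sum((m - mean) ** 2 for m in measured)
    return 1.0 - ss_res / ss_tot

def nrmse(measured, predicted):
    """Root-mean-square error normalised by the measured range."""
    mse = sum((m - p) ** 2 for m, p in zip(measured, predicted)) / len(measured)
    return math.sqrt(mse) / (max(measured) - min(measured))

# Illustrative stress values (kPa), not the experimental data
measured = [10.0, 20.0, 35.0, 55.0, 80.0]
predicted = [11.0, 19.0, 36.0, 53.0, 82.0]
```

R² near 1 and NRMSE near 0 indicate a close fit; the two metrics penalise errors on different scales, which is why both are reported.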
Robust Model Predictive Control of a Wind Turbine
DEFF Research Database (Denmark)
Mirzaei, Mahmood; Poulsen, Niels Kjølstad; Niemann, Hans Henrik
2012-01-01
In this work the problem of robust model predictive control (robust MPC) of a wind turbine in the full load region is considered. A minimax robust MPC approach is used to tackle the problem. Nonlinear dynamics of the wind turbine are derived by combining blade element momentum (BEM) theory and first principle modeling of the turbine flexible structure. Thereafter the nonlinear model is linearized using Taylor series expansion around system operating points. Operating points are determined by effective wind speed, and an extended Kalman filter (EKF) is employed to estimate this. In addition, a new sensor is introduced in the EKF to give faster estimations. Wind speed estimation error is used to assess uncertainties in the linearized model. Significant uncertainties are considered to be in the gain of the system (B matrix of the state space model). Therefore this special structure...
MOTORCYCLE CRASH PREDICTION MODEL FOR NON-SIGNALIZED INTERSECTIONS
Directory of Open Access Journals (Sweden)
S. HARNEN
2003-01-01
Full Text Available This paper attempts to develop a prediction model for motorcycle crashes at non-signalized intersections on urban roads in Malaysia. The Generalized Linear Modeling approach was used to develop the model. The final model revealed that an increase in motorcycle and non-motorcycle flows entering an intersection is associated with an increase in motorcycle crashes. Non-motorcycle flow on the major road had the greatest effect on the probability of motorcycle crashes. Approach speed, lane width, number of lanes, shoulder width and land use were also found to be significant in explaining motorcycle crashes. The model should assist traffic engineers in deciding on appropriate intersection treatments specifically designed for non-exclusive motorcycle lane facilities.
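Crash prediction models of this kind are typically log-link generalized linear models, with expected crash frequency multiplicative in the entering flows. The coefficients below are invented for illustration only, not the fitted Malaysian model:

```python
import math

def expected_crashes(q_motorcycle, q_other, speed=50.0,
                     beta0=-8.0, b_mc=0.6, b_other=0.7, b_speed=0.02):
    """Log-link GLM crash frequency:
    mu = exp(beta0 + b_mc*ln(Qmc) + b_other*ln(Qother) + b_speed*speed).
    All coefficients are hypothetical placeholders."""
    eta = (beta0 + b_mc * math.log(q_motorcycle)
           + b_other * math.log(q_other) + b_speed * speed)
    return math.exp(eta)

low = expected_crashes(500, 2000)    # lighter flows
high = expected_crashes(1000, 4000)  # heavier flows
```

The log link guarantees a positive expected frequency and makes the flow effects multiplicative, matching the finding that crashes rise with both entering flows.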
Predictive models for moving contact line flows
Rame, Enrique; Garoff, Stephen
2003-01-01
Modeling flows with moving contact lines poses the formidable challenge that the usual assumptions of Newtonian fluid and no-slip condition give rise to a well-known singularity. This singularity prevents one from satisfying the contact angle condition to compute the shape of the fluid-fluid interface, a crucial calculation without which design parameters such as the pressure drop needed to move an immiscible 2-fluid system through a solid matrix cannot be evaluated. Some progress has been made for low Capillary number spreading flows. Combining experimental measurements of fluid-fluid interfaces very near the moving contact line with an analytical expression for the interface shape, we can determine a parameter that forms a boundary condition for the macroscopic interface shape when Ca ≪ 1. This parameter, which plays the role of an "apparent" or macroscopic dynamic contact angle, is shown by the theory to depend on the system geometry through the macroscopic length scale. This theoretically established dependence on geometry allows this parameter to be "transferable" from the geometry of the measurement to any other geometry involving the same material system. Unfortunately this prediction of the theory cannot be tested on Earth.
Toward, Martin G R; Griffin, Michael J
2010-01-01
Models of the vertical apparent mass of the human body are mostly restricted to a sitting posture unsupported by a backrest and ignore the variations in apparent mass associated with changes in posture and changes in the magnitude of vibration. Using findings from experimental research, this study fitted a single degree-of-freedom lumped parameter model to the vertical apparent mass of the body measured with a range of sitting postures and vibration magnitudes. The resulting model reflects the effects of reclining a rigid backrest or reclining a foam backrest (from 0 to 30 degrees), the effects of moving the hands from the lap to a steering wheel, the effects of moving the horizontal position of the feet, and the effects of vibration magnitude (from 0.125 to 1.6 ms(-2) r.m.s.). The error between the modelled and the measured apparent mass was minimised, for both the apparent masses of individual subjects and the median apparent masses of groups of 12 subjects, for each sitting posture and each vibration magnitude. Trends in model parameters, the damping ratios, and the damped natural frequencies were identified as a function of the model variables and show the effects of posture and vibration magnitude on body dynamics. For example, contact with a rigid backrest increased the derived damped natural frequency of the principal resonance as a result of reduced moving mass and increased stiffness. When the rigid backrest was reclined from 0 to 30°, the damping decreased and the resonance frequency increased as a result of reduced moving mass. It is concluded that, by appropriate variations in model parameters, a single degree-of-freedom model can provide a useful fit to the vertical apparent mass of the human body over a wide range of postures and vibration magnitudes. When measuring or modelling seat transmissibility, it may be difficult to justify an apparent mass model with more than a single degree-of-freedom if it does not reflect the large influences of
Developmental prediction model for early alcohol initiation in Dutch adolescents
Geels, L.M.; Vink, J.M.; Beijsterveldt, C.E.M. van; Bartels, M.; Boomsma, D.I.
2013-01-01
Objective: Multiple factors predict early alcohol initiation in teenagers. Among these are genetic risk factors, childhood behavioral problems, life events, lifestyle, and family environment. We constructed a developmental prediction model for alcohol initiation below the Dutch legal drinking age
Enhanced PID vs Model Predictive Control Applied to BLDC Motor
Gaya, M. S.; Muhammad, Auwal; Aliyu Abdulkadir, Rabiu; Salim, S. N. S.; Madugu, I. S.; Tijjani, Aminu; Aminu Yusuf, Lukman; Dauda Umar, Ibrahim; Khairi, M. T. M.
2018-01-01
Brushless Direct Current (BLDC) motor is a multivariable and highly complex nonlinear system. Variation of internal parameter values with environment or reference signal increases the difficulty in controlling the BLDC effectively. Advanced control strategies (like model predictive control) often have to be integrated to satisfy the control requirements. Enhancing or properly tuning a conventional algorithm can also achieve the desired performance. This paper presents a performance comparison of Enhanced PID and Model Predictive Control (MPC) applied to a brushless direct current motor. The simulation results demonstrated that the PSO-PID is slightly better than the PID and MPC in tracking the trajectory of the reference signal. The proposed scheme could be a useful algorithm for the system.
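The conventional PID baseline in such comparisons can be sketched in a few lines. The gains and the first-order "motor" below are arbitrary placeholders, not the tuned (or PSO-tuned) values from the study:

```python
class PID:
    """Discrete PID controller; gains are illustrative, not tuned values."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy first-order "motor": speed relaxes toward the control input each step
pid = PID(kp=2.0, ki=0.5, kd=0.05, dt=0.01)
speed = 0.0
for _ in range(5000):
    u = pid.step(setpoint=100.0, measurement=speed)
    speed += (u - speed) * 0.05  # crude plant update
```

The integral term drives the steady-state error to zero, so after enough steps the toy motor settles at the 100 rad/s setpoint.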
Addressing Conceptual Model Uncertainty in the Evaluation of Model Prediction Errors
Carrera, J.; Pool, M.
2014-12-01
Model predictions are uncertain because of errors in model parameters, future forcing terms, and model concepts. The latter remain the largest and most difficult to assess source of uncertainty in long term model predictions. We first review existing methods to evaluate conceptual model uncertainty. We argue that they are highly sensitive to the ingenuity of the modeler, in the sense that they rely on the modeler's ability to propose alternative model concepts. Worse, we find that the standard practice of stochastic methods leads to poor, potentially biased and often too optimistic, estimation of actual model errors. This is bad news because stochastic methods are purported to properly represent uncertainty. We contend that the problem does not lie in the stochastic approach itself, but in the way it is applied. Specifically, stochastic inversion methodologies, which demand quantitative information, tend to ignore geological understanding, which is conceptually rich. We illustrate some of these problems with the application to the Mar del Plata aquifer, where extensive data are available for nearly a century. Geologically based models, where spatial variability is handled through zonation, yield calibration fits similar to geostatistically based models, but much better predictions. In fact, the appearance of the stochastic T fields is similar to the geologically based models only in areas with a high density of data. We take this finding to illustrate the ability of stochastic models to accommodate many data, but also, ironically, their inability to address conceptual model uncertainty. In fact, stochastic model realizations tend to be too close to the "most likely" one (i.e., they do not really realize the full conceptual uncertainty). The second part of the presentation is devoted to arguing that acknowledging model uncertainty may lead to qualitatively different decisions than just working with "most likely" model predictions. Therefore, efforts should concentrate on
Probabilistic predictive modelling of carbon nanocomposites for medical implants design.
Chua, Matthew; Chui, Chee-Kong
2015-04-01
Modelling of the mechanical properties of carbon nanocomposites based on input variables like percentage weight of Carbon Nanotubes (CNT) inclusions is important for the design of medical implants and other structural scaffolds. Current constitutive models for the mechanical properties of nanocomposites may not predict well due to differences in conditions, fabrication techniques and inconsistencies in reagents properties used across industries and laboratories. Furthermore, the mechanical properties of the designed products are not deterministic, but exist as a probabilistic range. A predictive model based on a modified probabilistic surface response algorithm is proposed in this paper to address this issue. Tensile testing of three groups of different CNT weight fractions of carbon nanocomposite samples displays scattered stress-strain curves, with the instantaneous stresses assumed to vary according to a normal distribution at a specific strain. From the probabilistic density function of the experimental data, a two factors Central Composite Design (CCD) experimental matrix based on strain and CNT weight fraction input with their corresponding stress distribution was established. Monte Carlo simulation was carried out on this design matrix to generate a predictive probabilistic polynomial equation. The equation and method was subsequently validated with more tensile experiments and Finite Element (FE) studies. The method was subsequently demonstrated in the design of an artificial tracheal implant. Our algorithm provides an effective way to accurately model the mechanical properties in implants of various compositions based on experimental data of samples. Copyright © 2015 Elsevier Ltd. All rights reserved.
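The Monte Carlo step over a fitted response surface can be sketched simply: sample the stress, assumed normally distributed at a given strain and CNT weight fraction, and read off an empirical probabilistic range. The mean and spread surfaces below are invented stand-ins for the CCD-fitted polynomials:

```python
import random

random.seed(0)  # reproducible sampling

def stress_distribution(strain, cnt_wt, n=10000):
    """Sample instantaneous stress, assumed normal at a given strain, from
    hypothetical mean/spread response surfaces (illustrative only)."""
    mean = 40.0 * strain + 15.0 * cnt_wt * strain  # MPa, invented surface
    sd = 0.05 * mean + 0.5                         # heteroscedastic spread
    return [random.gauss(mean, sd) for _ in range(n)]

samples = sorted(stress_distribution(strain=0.05, cnt_wt=2.0))
# Empirical 95% probabilistic range of the predicted stress
lo = samples[int(0.025 * len(samples))]
hi = samples[int(0.975 * len(samples))]
```

Reporting the empirical 2.5th and 97.5th percentiles rather than a single value reflects the paper's point that the mechanical properties exist as a probabilistic range.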
Data-Driven Modeling and Prediction of Arctic Sea Ice
Kondrashov, Dmitri; Chekroun, Mickael; Ghil, Michael
2016-04-01
We present results of data-driven predictive analyses of sea ice over the main Arctic regions. Our approach relies on the Multilayer Stochastic Modeling (MSM) framework of Kondrashov, Chekroun and Ghil [Physica D, 2015] and it leads to probabilistic prognostic models of sea ice concentration (SIC) anomalies on seasonal time scales. This approach is applied to monthly time series of state-of-the-art data-adaptive decompositions of SIC and selected climate variables over the Arctic. We evaluate the predictive skill of MSM models by performing retrospective forecasts with "no-look ahead" for up to 6-months ahead. It will be shown in particular that the memory effects included intrinsically in the formulation of our non-Markovian MSM models allow for improvements of the prediction skill of large-amplitude SIC anomalies in certain Arctic regions on the one hand, and of September Sea Ice Extent, on the other. Further improvements allowed by the MSM framework will adopt a nonlinear formulation and explore next-generation data-adaptive decompositions, namely modification of Principal Oscillation Patterns (POPs) and rotated Multichannel Singular Spectrum Analysis (M-SSA).
Seasonal predictability of Kiremt rainfall in coupled general circulation models
Gleixner, Stephanie; Keenlyside, Noel S.; Demissie, Teferi D.; Counillon, François; Wang, Yiguo; Viste, Ellen
2017-11-01
The Ethiopian economy and population is strongly dependent on rainfall. Operational seasonal predictions for the main rainy season (Kiremt, June-September) are based on statistical approaches with Pacific sea surface temperatures (SST) as the main predictor. Here we analyse dynamical predictions from 11 coupled general circulation models for the Kiremt seasons from 1985-2005 with the forecasts starting from the beginning of May. We find skillful predictions from three of the 11 models, but no model beats a simple linear prediction model based on the predicted Niño3.4 indices. The skill of the individual models for dynamically predicting Kiremt rainfall depends on the strength of the teleconnection between Kiremt rainfall and concurrent Pacific SST in the models. Models that do not simulate this teleconnection fail to capture the observed relationship between Kiremt rainfall and the large-scale Walker circulation.
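The simple linear benchmark referred to here amounts to ordinary least squares on a single predictor (the predicted Niño3.4 index). A sketch with invented index/rainfall-anomaly pairs reflecting the negative El Niño-Kiremt teleconnection:

```python
def fit_line(x, y):
    """Ordinary least squares for a single predictor: y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Invented standardized Nino3.4 values vs. Kiremt rainfall anomalies
nino = [-1.5, -0.8, -0.2, 0.3, 0.9, 1.6]
rain = [0.9, 0.5, 0.2, -0.1, -0.6, -1.0]  # El Nino years tend to be drier
a, b = fit_line(nino, rain)
predicted = a + b * 1.0  # rainfall anomaly forecast for a moderate El Nino
```

A negative slope encodes the teleconnection; a dynamical model only adds value if it beats this one-parameter baseline, which is the comparison the abstract describes.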
Ozturk, Ismet; Tornuk, Fatih; Sagdic, Osman; Kisi, Ozgur
2012-07-01
In this study, we studied the effects of some plant hydrosols obtained from bay leaf, black cumin, rosemary, sage, and thyme in reducing Listeria monocytogenes on the surface of fresh-cut apple cubes. Adaptive neuro-fuzzy inference system (ANFIS), artificial neural network (ANN), and multiple linear regression (MLR) models were used for describing the behavior of L. monocytogenes against the hydrosol treatments. Approximately 1-1.5 log CFU/g decreases in L. monocytogenes counts were observed after individual hydrosol treatments for 20 min. By extending the treatment time to 60 min, thyme, sage, or rosemary hydrosols eliminated L. monocytogenes, whereas black cumin and bay leaf hydrosols did not lead to additional reductions. In addition to antibacterial measurements, the abilities of ANFIS, ANN, and MLR models were compared with respect to estimation of the survival of L. monocytogenes. The root mean square error, mean absolute error, and determination coefficient statistics were used as comparison criteria. The comparison results indicated that the ANFIS model performed the best for estimating the effects of the plant hydrosols on L. monocytogenes counts. The ANN model was also effective; the MLR model was found to be poor at estimating L. monocytogenes numbers.
MODELLING OF DYNAMIC SPEED LIMITS USING THE MODEL PREDICTIVE CONTROL
Directory of Open Access Journals (Sweden)
Andrey Borisovich Nikolaev
2017-09-01
Full Text Available The article considers the issues of traffic management using the intelligent system “Car-Road” (IVHS), which consists of interacting intelligent vehicles (IV) and intelligent roadside controllers. Vehicles are organized in convoys with small distances between them. All vehicles are assumed to be fully automated (throttle control, braking, steering). Approaches are proposed for determining speed limits for cars on the motorway using model predictive control (MPC). The article proposes an approach to dynamic speed limits that minimizes the downtime of vehicles in traffic.
On the predictiveness of single-field inflationary models
Burgess, C. P.; Patil, Subodh P.; Trott, Michael
2014-06-01
We re-examine the predictiveness of single-field inflationary models and discuss how an unknown UV completion can complicate determining inflationary model parameters from observations, even from precision measurements. Besides the usual naturalness issues associated with having a shallow inflationary potential, we describe another issue for inflation, namely, unknown UV physics modifies the running of Standard Model (SM) parameters and thereby introduces uncertainty into the potential inflationary predictions. We illustrate this point using the minimal Higgs Inflationary scenario, which is arguably the most predictive single-field model on the market, because its predictions for A_s, r and n_s are made using only one new free parameter beyond those measured in particle physics experiments, and run up to the inflationary regime. We find that this issue can already have observable effects. At the same time, this UV-parameter dependence in the Renormalization Group allows Higgs Inflation to occur (in principle) for a slightly larger range of Higgs masses. We comment on the origin of the various UV scales that arise at large field values for the SM Higgs, clarifying cut-off scale arguments by further developing the formalism of a non-linear realization of SU_L(2) × U(1) in curved space. We discuss the interesting fact that, outside of Higgs Inflation, the effect of a non-minimal coupling to gravity, even in the SM, results in a non-linear EFT for the Higgs sector. Finally, we briefly comment on post BICEP2 attempts to modify the Higgs Inflation scenario.
The effect of genealogy-based haplotypes on genomic prediction
DEFF Research Database (Denmark)
Edriss, Vahid; Fernando, Rohan L.; Su, Guosheng
2013-01-01
…on haplotypes instead of regression on individual markers. The aim of this study was to investigate the accuracy of genomic prediction using haplotypes based on local genealogy information. Methods: A total of 4429 Danish Holstein bulls were genotyped with the 50K SNP chip. Haplotypes were constructed using local genealogical trees. Effects of haplotype covariates were estimated with two types of prediction models: (1) assuming that effects had the same distribution for all haplotype covariates, i.e. the GBLUP method, and (2) assuming that a large proportion (pi) of the haplotype covariates had zero effect, i.e. a Bayesian mixture method. Results: About 7.5 times more covariate effects were estimated when fitting haplotypes based on local genealogical trees compared to fitting individual markers. Genealogy-based haplotype clustering slightly increased the accuracy of genomic prediction and, in some…
Muñoz-Rojas, Miriam; Doro, Luca; Ledda, Luigi; Francaviglia, Rosa
2014-05-01
CarboSOIL is an empirical model based on regression techniques and developed to predict soil organic carbon contents (SOC) at standard soil depths of 0-25, 25-50 and 50-75 cm (Muñoz-Rojas et al., 2013). The model was applied to a study area of north-eastern Sardinia (Italy) (40° 46'N, 9° 10'E, mean altitude 285 m a.s.l.), characterized by extensive agro-silvo-pastoral systems which are typical of similar areas of the Mediterranean basin (e.g. the Iberian peninsula). The area has the same soil type (Haplic Endoleptic Cambisols, Dystric according to WRB), while cork oak forest (Quercus suber L.) is the potential native vegetation which has been converted to managed land with pastures and vineyards in recent years (Lagomarsino et al., 2011; Francaviglia et al., 2012; Bagella et al, 2013; Francaviglia et al., 2014). Six land uses with different levels of cropping intensification were compared: Tilled vineyards (TV); No-tilled grassed vineyards (GV); Hay crop (HC); Pasture (PA); Cork oak forest (CO) and Semi-natural systems (SN). The HC land use includes oats, Italian ryegrass and annual clovers or vetch for 5 years and intercropped by spontaneous herbaceous vegetation in the sixth year. The PA land use is 5 years of spontaneous herbaceous vegetation, and one year of intercropping with oats, Italian ryegrass and annual clovers or vetch cultivated as a hay crop. The SN land use (scrublands, Mediterranean maquis and Helichrysum meadows) arise from the natural re-vegetation of former vineyards which have been set-aside probably due to the low grape yields and the high cost of modern tillage equipment. Both PA and HC are grazed for some months during the year, and include scattered cork-oak trees, which are key components of the 'Dehesa'-type landscape (grazing system with Quercus L.) typical of this area of Sardinia and other areas of southern Mediterranean Europe. Dehesas are often converted to more profitable land uses such as vineyards (Francaviglia et al., 2012; Mu
Link Prediction in Weighted Networks: A Weighted Mutual Information Model.
Directory of Open Access Journals (Sweden)
Boyao Zhu
Full Text Available The link-prediction problem is an open issue in data mining and knowledge discovery, which attracts researchers from disparate scientific communities. A wealth of methods have been proposed to deal with this problem. Among these approaches, most are applied in unweighted networks, with only a few taking the weights of links into consideration. In this paper, we present a weighted model for undirected and weighted networks based on the mutual information of local network structures, where link weights are applied to further enhance the distinguishable extent of candidate links. Empirical experiments are conducted on four weighted networks, and results show that the proposed method can provide more accurate predictions than not only traditional unweighted indices but also typical weighted indices. Furthermore, some in-depth discussions on the effects of weak ties in link prediction as well as the potential to predict link weights are also given. This work may shed light on the design of algorithms for link prediction in weighted networks.
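A simpler weighted index of the kind the authors compare against sums link weights through shared neighbours. This sketch is a weighted common-neighbours score, not the paper's mutual-information model:

```python
from collections import defaultdict

def weighted_common_neighbours(edges):
    """Score each non-adjacent node pair by the summed link weights through
    their shared neighbours (a basic weighted similarity index)."""
    adj = defaultdict(dict)
    for u, v, w in edges:
        adj[u][v] = w
        adj[v][u] = w
    scores = {}
    nodes = sorted(adj)
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if v in adj[u]:
                continue  # pair already linked: not a prediction candidate
            common = set(adj[u]) & set(adj[v])
            scores[(u, v)] = sum(adj[u][z] + adj[v][z] for z in common)
    return scores

# Tiny illustrative weighted network
edges = [("a", "b", 3.0), ("b", "c", 1.0), ("a", "d", 2.0),
         ("d", "c", 4.0), ("b", "e", 5.0)]
scores = weighted_common_neighbours(edges)
```

Pairs connected through heavier shared links score higher and are predicted first; the paper's contribution is to weight this kind of local evidence by mutual information instead of a plain sum.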
Vehicle Driving Risk Prediction Based on Markov Chain Model
Directory of Open Access Journals (Sweden)
Xiaoxia Xiong
2018-01-01
Full Text Available A driving risk status prediction algorithm based on a Markov chain is presented. Driving risk states are classified using clustering techniques based on feature variables describing the instantaneous risk levels within time windows, where instantaneous risk levels are determined in the two-dimensional time-to-collision/time-headway plane. Multinomial logistic models with a recursive feature variable estimation method are developed to improve the traditional state transition probability estimation, which also takes into account the combined effects of driving behavior, traffic, and road environment factors on the evolution of driving risk status. The “100-car” naturalistic driving data from Virginia Tech are employed for the training and validation of the prediction model. The results show that, at a 5% false positive rate, the prediction algorithm achieves a high prediction accuracy for future medium-to-high driving risks and meets the timeliness requirement of collision avoidance warning. The algorithm could contribute to timely warnings or auxiliary corrections for drivers in the approaching-danger state.
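The Markov chain backbone of such an algorithm is a transition matrix estimated by counting observed state changes. A toy sketch with a made-up low/medium/high risk-state sequence, not the 100-car data or the paper's multinomial-logistic refinement:

```python
from collections import Counter, defaultdict

def transition_matrix(states):
    """Maximum-likelihood transition probabilities from an observed
    sequence of discrete driving-risk states."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(states, states[1:]):
        counts[cur][nxt] += 1
    return {s: {t: c / sum(row.values()) for t, c in row.items()}
            for s, row in counts.items()}

# Toy risk-state sequence (L=low, M=medium, H=high), invented for illustration
seq = ["L", "L", "M", "L", "M", "H", "M", "L", "L", "M", "M", "H", "M", "L"]
P = transition_matrix(seq)
next_state = max(P["M"], key=P["M"].get)  # most likely state after Medium
```

The refinement the paper proposes replaces these fixed counts with transition probabilities conditioned on behaviour, traffic, and road covariates.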
Development of a Predictive Model for Induction Success of Labour
Directory of Open Access Journals (Sweden)
Cristina Pruenza
2018-03-01
Full Text Available Induction of the labour process is an extraordinarily common procedure in some pregnancies. Obstetricians face the need to end a pregnancy, usually for medical reasons (maternal or fetal requirements) or, less frequently, social ones (elective inductions for convenience). The success of the induction procedure is conditioned by a multitude of maternal and fetal variables that appear before or during pregnancy or the birth process, each with low predictive value. Failure of the induction process entails performing a caesarean section. This project arises from the clinical need to resolve a situation of uncertainty that occurs frequently in our clinical practice. Since the clinical variables are not adequately weighted, it is very valuable to know a priori the probability of a successful induction, so that inductions with a high probability of failure can be dismissed, avoiding unnecessary procedures or postponing the end of pregnancy where possible. We developed a predictive model of induced labour success as a support tool in clinical decision making. Improving the predictability of a successful induction is one of the current challenges of obstetrics because of the negative impact of failure. Identifying patients with high chances of failure will allow us to offer them better care, improving their health outcomes (adverse perinatal outcomes for mother and newborn), costs (medication, hospitalization, qualified staff) and patient-perceived quality. Therefore a Clinical Decision Support System was developed to give support to obstetricians. In this article, we propose a robust method to explore and model a source of clinical information with the purpose of obtaining all possible knowledge. Generally, in classification models it is difficult to know the contribution that each attribute provides to the model. We have worked in this direction to offer transparency to models that may otherwise be considered black boxes. The positive results obtained from both the
Predictability in models of the atmospheric circulation
Houtekamer, P.L.
1992-01-01
It will be clear from the above discussions that skill forecasts are still in their infancy. Operational skill predictions do not exist. One is still struggling to prove that skill predictions, at any range, have any quality at all. It is not clear what the statistics of the analysis error
Required Collaborative Work in Online Courses: A Predictive Modeling Approach
Smith, Marlene A.; Kellogg, Deborah L.
2015-01-01
This article describes a predictive model that assesses whether a student will have greater perceived learning in group assignments or in individual work. The model produces correct classifications 87.5% of the time. The research is notable in that it is the first in the education literature to adopt a predictive modeling methodology using data…
Models for predicting compressive strength and water absorption of ...
African Journals Online (AJOL)
This work presents a mathematical model for predicting the compressive strength and water absorption of laterite-quarry dust cement block using augmented Scheffe's simplex lattice design. The statistical models developed can predict the mix proportion that will yield the desired property. The models were tested for lack of ...
Predictive model for determining the quality of a call
Voznak, M.; Rozhon, J.; Partila, P.; Safarik, J.; Mikulec, M.; Mehic, M.
2014-05-01
In this paper the predictive model for speech quality estimation is described. This model allows its user to gain information about the speech quality in VoIP networks without performing the actual call and the consequent time-consuming sound-file evaluation. This greatly increases the usability of speech quality measurement, especially in highly loaded networks, where the actual processing of all calls is difficult or even impossible. The model can produce results that are highly conformant with the PESQ algorithm based only on network state parameters that are easily obtainable with commonly used software tools. Experiments were carried out to investigate whether different languages (English, Czech) have an effect on perceived voice quality under the same network conditions, and the language factor was incorporated directly into the model.
Srinivasan, M; Shetty, N; Gadekari, S; Thunga, G; Rao, K; Kunhikatta, V
2017-07-01
Severity or mortality prediction of nosocomial pneumonia could aid in the effective triage of patients and assist physicians. To compare various severity assessment scoring systems for predicting intensive care unit (ICU) mortality in nosocomial pneumonia patients, a prospective cohort study was conducted in a tertiary care university-affiliated hospital in Manipal, India. One hundred patients with nosocomial pneumonia, admitted to the ICUs, who developed pneumonia after >48h of admission, were included. The Nosocomial Pneumonia Mortality Prediction (NPMP) model, developed in our hospital, was compared with Acute Physiology and Chronic Health Evaluation II (APACHE II), Mortality Probability Model II (MPM72 II), Simplified Acute Physiology Score II (SAPS II), Multiple Organ Dysfunction Score (MODS), Sequential Organ Failure Assessment (SOFA), Clinical Pulmonary Infection Score (CPIS), and Ventilator-Associated Pneumonia Predisposition, Insult, Response, Organ dysfunction (VAP-PIRO). Data and clinical variables were collected on the day of pneumonia diagnosis. The outcome for the study was ICU mortality. The sensitivity and specificity of the various scoring systems were analysed by plotting receiver operating characteristic (ROC) curves and computing the area under the curve for each of the mortality predicting tools. NPMP, APACHE II, SAPS II, MPM72 II, SOFA, and VAP-PIRO were found to have similar and acceptable discrimination power as assessed by the area under the ROC curve. The AUC values for the above scores ranged from 0.735 to 0.762. CPIS and MODS showed the least discrimination. NPMP is a specific tool to predict mortality in nosocomial pneumonia and is comparable to other standard scores. Copyright © 2017 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.
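The discrimination comparison rests on the area under the ROC curve, which can be computed directly via the Mann-Whitney rank identity. The scores and outcomes below are hypothetical, not the Manipal cohort:

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    the fraction of positive/negative pairs ranked correctly, ties = 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical severity scores (higher = sicker) and ICU mortality labels
scores = [12, 25, 18, 30, 9, 22, 27, 14]
labels = [0, 1, 0, 1, 0, 1, 0, 0]
```

An AUC of 0.5 means chance-level discrimination and 1.0 perfect ranking, which is why the 0.735-0.762 range reported above counts as acceptable but not strong.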
Analytical model for local scour prediction around hydrokinetic turbine foundations
Musa, M.; Heisel, M.; Hill, C.; Guala, M.
2017-12-01
Marine and hydrokinetic renewable energy is an emerging sustainable and secure technology which produces clean energy by harnessing water currents from mostly tidal and fluvial waterways. Hydrokinetic turbines are typically anchored at the bottom of the channel, which can be erodible or non-erodible. Recent experiments demonstrated the interactions between operating turbines and an erodible surface with sediment transport, resulting in a remarkable localized erosion-deposition pattern significantly larger than those observed around static in-river structures such as bridge piers. Predicting local scour geometry at the base of hydrokinetic devices is extremely important for foundation design, installation, operation, and maintenance (IO&M), and long-term structural integrity. An analytical modeling framework is proposed applying the phenomenological theory of turbulence to the flow structures that promote the scouring process at the base of a turbine. The evolution of scour is directly linked to device operating conditions through the turbine drag force, which is inferred to locally dictate the energy dissipation rate in the scour region. The predictive model is validated using experimental data obtained at the University of Minnesota's St. Anthony Falls Laboratory (SAFL), covering two sediment mobility regimes (clear water and live bed), different turbine designs, hydraulic parameters, grain size distributions and bedform types. The model is applied to a potential prototype-scale deployment in the lower Mississippi River, demonstrating its practical relevance and endorsing the feasibility of hydrokinetic energy power plants in large sandy rivers. Multi-turbine deployments are further studied experimentally by monitoring both local and non-local geomorphic effects introduced by a twelve-turbine staggered array model installed in a wide channel at SAFL. Local scour behind each turbine is well captured by the theoretical predictive model. However, multi
A Theoretical Model for the Prediction of Siphon Breaking Phenomenon
Energy Technology Data Exchange (ETDEWEB)
Bae, Youngmin; Kim, Young-In; Seo, Jae-Kwang; Kim, Keung Koo; Yoon, Juhyeon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2014-10-15
A siphon phenomenon, or siphoning, often refers to the movement of liquid from a higher elevation to a lower one through a tube in an inverted U shape (whose top is typically located above the liquid surface) under the action of gravity, and has been used in a variety of real-life applications such as the toilet bowl and the greedy cup. However, liquid drainage due to siphoning sometimes needs to be prevented. For example, a siphon breaker, which is designed to limit the siphon effect by allowing gas entrainment into a siphon line, is installed in order to maintain the pool water level above the reactor core when a loss of coolant accident (LOCA) occurs in an open-pool type research reactor. In this paper, a theoretical model to predict the siphon breaking phenomenon is developed. It is shown that the present model predicts well the fundamental features of the siphon breaking phenomenon and the undershooting height.
Enhanced regime predictability in atmospheric low-order models due to stochastic forcing.
Kwasniok, Frank
2014-06-28
Regime predictability in atmospheric low-order models augmented with stochastic forcing is studied. Atmospheric regimes are identified as persistent or metastable states using a hidden Markov model analysis. A somewhat counterintuitive, coherence resonance-like effect is observed: regime predictability increases with increasing noise level up to an intermediate optimal value, before decreasing when further increasing the noise level. The enhanced regime predictability is due to increased persistence of the regimes. The effect is found in the Lorenz '63 model and a low-order model of barotropic flow over topography. The increased predictability is only present in the regime dynamics, that is, in a coarse-grained view of the system; predictability of individual trajectories decreases monotonically with increasing noise level. A possible explanation for the phenomenon is given and implications of the finding for weather and climate modelling and prediction are discussed. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics
Directory of Open Access Journals (Sweden)
Cecilia Noecker
2015-03-01
Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1–2 weeks post-infection. Several interventions against HIV, including vaccines and antiretroviral prophylaxis, target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because predictions of such models have rarely been compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the "standard" mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to provide the observed rate of SIV load increase in the blood of infected animals. Interestingly, and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode. Second, this mathematical model was not able to accurately describe the change in the experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral
Regression models for predicting anthropometric measurements of ...
African Journals Online (AJOL)
measure anthropometric dimensions to predict difficult-to-measure dimensions required for ergonomic design of school furniture. A total of 143 students aged between 16 and 18 years from eight public secondary schools in Ogbomoso, Nigeria ...
FINITE ELEMENT MODEL FOR PREDICTING RESIDUAL ...
African Journals Online (AJOL)
direction (σx) had a maximum value of 375 MPa (tensile) and minimum value of ... These results show that the residual stresses obtained by prediction from the finite element method are in fair agreement with the experimental results.
Regional differences in prediction models of lung function in Germany
Directory of Open Access Journals (Sweden)
Schäper Christoph
2010-04-01
Abstract Background Little is known about the influence of specific characteristics on lung function in different populations. The aim of this analysis was to determine whether lung function determinants differ between subpopulations within Germany and whether prediction equations developed for one subpopulation are also adequate for another. Methods Within three studies (KORA C, SHIP-I, ECRHS-I) in different areas of Germany, 4059 adults performed lung function tests. The available data consisted of forced expiratory volume in one second (FEV1), forced vital capacity (FVC) and peak expiratory flow rate (PEF). For each study, multivariate regression models were developed to predict lung function, and Bland-Altman plots were established to evaluate the agreement between predicted and measured values. Results The final regression equations for FEV1 and FVC showed adjusted r-square values between 0.65 and 0.75, and for PEF they were between 0.46 and 0.61. In all studies gender, age, height and pack-years were significant determinants, each with a similar effect size. Regarding other predictors there were some, although not statistically significant, differences between the studies. Bland-Altman plots indicated that the regression models for each individual study adequately predict medium (i.e. normal, but not extremely high or low) lung function values in the whole study population. Conclusions Simple models with gender, age and height explain a substantial part of lung function variance, whereas further determinants add less than 5% to the total explained r-squared, at least for FEV1 and FVC. Thus, for the different adult subpopulations of Germany, one simple model for each lung function measure is still sufficient.
Prediction for Major Adverse Outcomes in Cardiac Surgery: Comparison of Three Prediction Models
Directory of Open Access Journals (Sweden)
Cheng-Hung Hsieh
2007-09-01
Conclusion: The Parsonnet score performed as well as the logistic regression models in predicting major adverse outcomes. The Parsonnet score appears to be a very suitable model for clinicians to use in risk stratification of cardiac surgery.
Jack, Brady Michael; Lee, Ling; Yang, Kuay-Keng; Lin, Huann-shyang
2017-10-01
This study showcases the Science for Citizenship Model (SCM) as a new instructional methodology for presenting to secondary students science-related technology content about the use of science in society that is not taught in the science curriculum, and a new approach for assessing the intercorrelations among three independent variables (benefits, risks, and trust) to predict the dependent variable of triggered interest in learning science. Utilizing a 50-minute instructional presentation on nanotechnology for citizenship, data were collected from 301 Taiwanese high school students. Structural equation modeling (SEM) and paired-samples t-tests were used to analyze the fit of the data to SCM and the extent to which a 50-minute class presentation on nanotechnology for citizenship affected students' awareness of benefits, risks, trust, and triggered interest in learning science. Results of SCM on pre-tests and post-tests revealed acceptable model fit to the data and demonstrated that the strongest predictor of students' triggered interest in nanotechnology was their trust in science. Paired-samples t-test results on students' understanding of nanotechnology and their self-evaluated awareness of the benefits and risks of nanotechnology, trust in scientists, and interest in learning science revealed small but significant differences between pre-test and post-test. These results provide evidence that a short 50-minute presentation on an emerging science not normally addressed within the traditional science curriculum had a significant yet limited impact on students' learning of nanotechnology in the classroom. Finally, we suggest why the results of this study may be important to science education instruction and research for understanding how short presentations of cutting-edge science and emerging technologies might be integrated into classroom science education in support of the science-for-citizenship enterprise in future investigations.
Xiong, Qin-xue; Liu, Zhang-yong; Yao, Gui-zhi; Li, Ben-zhou
2010-09-01
Based on data from field experiments on the hillside croplands of Danjiangkou, Hubei Province of China, input files of crop characteristics, management measures, slope gradient and length, and soil properties for running the WEPP model (Hillslope version) were established. Combined with local weather data, a simulation study with the model was made on the runoff and soil loss of croplands protected by four kinds of hedgerows (Amorpha fruticosa, Lonicera japonica, Hemerocallis fulva, and Poa sphondylodes) in the Danjiangkou area. The results showed that the WEPP model could accurately simulate the anti-erosion effect of hedgerows in hillside farmlands in the study area. Using this model not only reduced the number of tests required, but also saved time and effort, providing a scientific basis for the popularization and application of hedgerows. Among the four hedgerows, Amorpha fruticosa had the best anti-erosion effect. According to the simulation, the optimal planting density of A. fruticosa hedgerows in the farmlands was 1 m x 15 m at a slope gradient of 5 degrees, 1 m x 10 m at 15 degrees, and 1 m x 3 m at 25 degrees.
Predictive models for practical use in aquatic radioecology
International Nuclear Information System (INIS)
Haakanson, L.
1997-01-01
A given fallout of radiocaesium will be distributed and taken up by biota differently in various types of lakes. Thus, lakes have different "sensitivities" to radiocaesium. Important environmental factors regulating the biouptake are the water retention time and the K-concentration. Several practically useful and ecologically relevant methods exist to remediate lakes, e.g., liming, potash treatment and fertilization of low-productive lakes. The basic aim of this paper (which is a brief version of paper I) is to use the VAMP model, first to illustrate the fact that different lakes have different "sensitivities", and then to simulate the effects of alternative remedial methods. The VAMP model has been validated against an extensive set of data from seven European lakes. It has been shown that the VAMP model yields just as good predictions as parallel sets of empirical data, and this is as good as any model can do (II). The main objective of the model is to predict radiocaesium in predatory fish (used for human consumption) and in lake water (used for irrigation, drinking water, etc.)
From Predictive Models to Instructional Policies
Rollinson, Joseph; Brunskill, Emma
2015-01-01
At their core, Intelligent Tutoring Systems consist of a student model and a policy. The student model captures the state of the student and the policy uses the student model to individualize instruction. Policies require different properties from the student model. For example, a mastery threshold policy requires the student model to have a way…
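The mastery-threshold policy mentioned in this record can be sketched on top of standard Bayesian Knowledge Tracing (BKT), a common student model in intelligent tutoring systems: the tutor keeps assigning practice items until the estimated probability of mastery crosses a threshold. The parameter values below (slip, guess, learn rates, prior) are hypothetical, chosen only for illustration.

```python
# Sketch of a mastery-threshold policy on top of Bayesian Knowledge Tracing.
# All parameter values are hypothetical.

def bkt_update(p_mastery, correct, slip=0.1, guess=0.2, learn=0.15):
    """Posterior P(mastered) after one observed response, then a learning step."""
    if correct:
        evidence = p_mastery * (1 - slip) / (
            p_mastery * (1 - slip) + (1 - p_mastery) * guess)
    else:
        evidence = p_mastery * slip / (
            p_mastery * slip + (1 - p_mastery) * (1 - guess))
    # Transition: an unmastered skill may be learned after this practice step.
    return evidence + (1 - evidence) * learn

def practice_until_mastered(responses, threshold=0.95, p0=0.3):
    """Mastery-threshold policy: stop assigning items once P(mastered) > threshold."""
    p = p0
    items_given = 0
    for correct in responses:
        if p > threshold:
            break
        p = bkt_update(p, correct)
        items_given += 1
    return items_given, p

n, p = practice_until_mastered([True, True, True, True, True, True])
```

With these illustrative parameters, three consecutive correct answers push the mastery estimate past 0.95, so the policy stops assigning further items.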
Mathematical modeling and computational prediction of cancer drug resistance.
Sun, Xiaoqiang; Hu, Bin
2017-06-23
Diverse forms of resistance to anticancer drugs can lead to the failure of chemotherapy. Drug resistance is one of the most intractable issues for successfully treating cancer in current clinical practice. Effective clinical approaches that could counter drug resistance by restoring the sensitivity of tumors to the targeted agents are urgently needed. As numerous experimental results on resistance mechanisms have been obtained and a mass of high-throughput data has been accumulated, mathematical modeling and computational predictions using systematic and quantitative approaches have become increasingly important, as they can potentially provide deeper insights into resistance mechanisms, generate novel hypotheses or suggest promising treatment strategies for future testing. In this review, we first briefly summarize the current progress of experimentally revealed resistance mechanisms of targeted therapy, including genetic mechanisms, epigenetic mechanisms, posttranslational mechanisms, cellular mechanisms, microenvironmental mechanisms and pharmacokinetic mechanisms. Subsequently, we list several currently available databases and Web-based tools related to drug sensitivity and resistance. Then, we focus primarily on introducing some state-of-the-art computational methods used in drug resistance studies, including mechanism-based mathematical modeling approaches (e.g. molecular dynamics simulation, kinetic model of molecular networks, ordinary differential equation model of cellular dynamics, stochastic model, partial differential equation model, agent-based model, pharmacokinetic-pharmacodynamic model, etc.) and data-driven prediction methods (e.g. omics data-based conventional screening approach for node biomarkers, static network approach for edge biomarkers and module biomarkers, dynamic network approach for dynamic network biomarkers and dynamic module network biomarkers, etc.). Finally, we discuss several further questions and future directions for the use of
Comparisons of Faulting-Based Pavement Performance Prediction Models
Directory of Open Access Journals (Sweden)
Weina Wang
2017-01-01
Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with the problem of low prediction accuracy, which causes costly maintenance. Although many researchers have developed performance prediction models, the accuracy of prediction has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Three models, including a multivariate nonlinear regression (MNLR) model, an artificial neural network (ANN) model, and a Markov chain (MC) model, are then tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and is not explicitly related to quantitative physical parameters. This paper then suggests that the further direction for developing performance prediction models is to combine the advantages and disadvantages of different models to obtain better accuracy.
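A Markov-chain performance model of the kind compared in this record propagates a distribution over discrete condition states through a transition matrix, one inspection period at a time. The transition probabilities below are hypothetical, not values from the paper; they only illustrate the mechanics.

```python
# Minimal Markov-chain deterioration sketch (hypothetical transition probabilities).
# Condition states 0 (good) .. 3 (failed); each year a section either stays in
# its state or drops to the next worse one; the failed state is absorbing.

P = [
    [0.85, 0.15, 0.00, 0.00],
    [0.00, 0.80, 0.20, 0.00],
    [0.00, 0.00, 0.70, 0.30],
    [0.00, 0.00, 0.00, 1.00],
]

def step(dist, P):
    """One-year propagation of the condition-state distribution."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0, 0.0, 0.0]  # new pavement: all probability mass in state 0
for _ in range(5):
    dist = step(dist, P)
```

After five years, `dist` gives the expected share of sections in each condition state, which is the kind of output agencies use to schedule maintenance when only visual-inspection data are available.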
Directory of Open Access Journals (Sweden)
Mihaela Simionescu
2014-12-01
There are many types of econometric models used in predicting the inflation rate, but in this study we used a Bayesian shrinkage combination approach. This methodology is used in order to improve prediction accuracy by including information that is not captured by the econometric models. Therefore, experts' forecasts are utilized as prior information; for Romania these predictions are provided by the Institute for Economic Forecasting (Dobrescu macromodel), the National Commission for Prognosis and the European Commission. The empirical results for Romanian inflation show the superiority of a fixed effects model compared to other types of econometric models such as VAR, Bayesian VAR, a simultaneous equations model, a dynamic model, and a log-linear model. The Bayesian combinations that used experts' predictions as priors, when the shrinkage parameter tends to infinity, improved the accuracy of all forecasts based on individual models, also outperforming zero and equal weights predictions and naïve forecasts.
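The shrinkage combination described in this record can be sketched as a convex combination of a model forecast and an expert prior, where the weight on the prior grows with the shrinkage parameter; in the limit the combined forecast coincides with the expert prediction. The weighting scheme and the numbers below are illustrative assumptions, not the study's actual estimator.

```python
# Sketch of a shrinkage combination of a model forecast with an expert prior.
# lam = 0 returns the model forecast unchanged; as lam grows, the combination
# is pulled toward the expert prior. Values are hypothetical.

def shrinkage_combine(model_forecast, expert_prior, lam):
    w = lam / (1.0 + lam)  # weight on the prior, in [0, 1)
    return (1.0 - w) * model_forecast + w * expert_prior

model_inflation = 4.2   # e.g. a fixed-effects model forecast, percent
expert_inflation = 3.5  # e.g. an expert/institutional projection, percent

no_shrinkage = shrinkage_combine(model_inflation, expert_inflation, 0.0)
heavy_shrinkage = shrinkage_combine(model_inflation, expert_inflation, 1e9)
```

With `lam = 0` the combination equals the model forecast (4.2); with a very large shrinkage parameter it approaches the expert prior (3.5), mirroring the limit case the abstract reports.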
Computational models for predicting drug responses in cancer research.
Azuaje, Francisco
2017-09-01
The computational prediction of drug responses based on the analysis of multiple types of genome-wide molecular data is vital for accomplishing the promise of precision medicine in oncology. This will benefit cancer patients by matching their tumor characteristics to the most effective therapy available. As larger and more diverse layers of patient-related data become available, further demands for new bioinformatics approaches and expertise will arise. This article reviews key strategies, resources and techniques for the prediction of drug sensitivity in cell lines and patient-derived samples. It discusses major advances and challenges associated with the different model development steps. This review highlights major trends in this area, and will assist researchers in the assessment of recent progress and in the selection of approaches to emerging applications in oncology. © The Author 2016. Published by Oxford University Press.
Mass-balance model for predicting nitrate in ground water
Frimpter, Michael H.; Donohue, John J.; Rapacz, Michael V.
1990-01-01
A mass-balance accounting model can be used to guide the management of septic systems and fertilizers to control the degradation of ground-water quality in zones of an aquifer that contribute water to public-supply wells. The nitrate concentration of the mixture in the well can be predicted for steady-state conditions by calculating the concentration that results from the total weight of nitrogen and total volume of water entering the zone of contribution to the well. These calculations will allow water-quality managers to predict the nitrate concentrations that would be produced by different types and levels of development, and to plan development accordingly. Computations for different development schemes provide a technical basis for planners and managers to compare water-quality effects and to select alternatives that limit nitrate concentration in wells.
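The steady-state calculation described in this record is simple arithmetic: the predicted nitrate-nitrogen concentration at the well is the total nitrogen load divided by the total volume of water entering the zone of contribution. The loads and recharge figures below are hypothetical, only to show the unit handling.

```python
# Mass-balance sketch: steady-state nitrate-N concentration at a supply well.
# Loads and recharge figures are hypothetical.

def nitrate_n_mg_per_l(nitrogen_kg_per_yr, water_m3_per_yr):
    # kg/yr divided by m^3/yr gives kg/m^3; 1 kg/m^3 = 1000 mg/L.
    return nitrogen_kg_per_yr / water_m3_per_yr * 1000.0

septic_load = 200 * 4.0   # 200 septic systems at ~4 kg N per system per year
fertilizer_load = 300.0   # kg N per year reaching the water table
recharge = 2.0e6 * 0.25   # 2 km^2 contributing area, 0.25 m/yr recharge (m^3/yr)

c = nitrate_n_mg_per_l(septic_load + fertilizer_load, recharge)
print(round(c, 2))  # → 2.2
```

Rerunning the calculation for alternative development schemes (more septic systems, different fertilizer application) is exactly the comparison the abstract says planners can use to keep well concentrations within limits.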
A model to predict the beginning of the pollen season
DEFF Research Database (Denmark)
Toldam-Andersen, Torben Bo
1991-01-01
In order to predict the beginning of the pollen season, a model comprising the Utah phenoclimatography Chill Unit (CU) and ASYMCUR-Growing Degree Hour (GDH) submodels was used to predict the first bloom in Alnus, Ulmus and Betula. The model relates environmental temperatures to rest completion… and bud development. As phenologic parameter, 14 years of pollen counts were used. The observed dates for the beginning of the pollen seasons were defined from the pollen counts and compared with the model prediction. The CU and GDH submodels were used as: 1. A fixed day model, using only the GDH model… for fruit trees are generally applicable, and give a reasonable description of the growth processes of other trees. This type of model can therefore be of value in predicting the start of the pollen season. The predicted dates were generally within 3-5 days of the observed. Finally the possibility of frost…
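The GDH submodel in this record accumulates heat forcing hour by hour: temperatures above a base temperature contribute to a running total, and bloom (here, the start of the pollen season) is predicted when that total crosses a species-specific requirement. The base temperature, requirement, and readings below are hypothetical, and this linear form is a simplification of the ASYMCUR curve.

```python
# Growing Degree Hour (GDH) accumulation sketch (hypothetical parameters;
# linear response instead of the ASYMCUR asymmetric curve).

def gdh(hourly_temps, base=4.0):
    """Total forcing accumulated over a sequence of hourly temperatures (deg C)."""
    return sum(max(0.0, t - base) for t in hourly_temps)

def hours_to_requirement(hourly_temps, requirement, base=4.0):
    """First hour at which accumulated GDH meets the requirement, else None."""
    total = 0.0
    for hour, t in enumerate(hourly_temps, start=1):
        total += max(0.0, t - base)
        if total >= requirement:
            return hour
    return None

temps = [2.0, 5.0, 8.0, 10.0, 9.0, 6.0]  # six hypothetical hourly readings
print(gdh(temps))  # → 18.0
```

In the full model a Chill Unit requirement (rest completion) must be satisfied first; only then does GDH accumulation toward bloom begin.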
Evaluation of the US Army fallout prediction model
International Nuclear Information System (INIS)
Pernick, A.; Levanon, I.
1987-01-01
The US Army fallout prediction method was evaluated against an advanced fallout prediction model, SIMFIC (Simplified Fallout Interpretive Code). The danger zone areas of the US Army method were found to be significantly greater (up to a factor of 8) than the areas of corresponding radiation hazard as predicted by SIMFIC. Nonetheless, because the US Army method predicts danger zone lengths that are commonly shorter than the corresponding hot line distances of SIMFIC, the US Army method is not reliably conservative.
International Nuclear Information System (INIS)
Salazar, Ramon B.; Appenzeller, Joerg; Ilatikhameneh, Hesameddin; Rahman, Rajib; Klimeck, Gerhard
2015-01-01
A new compact modeling approach is presented which describes the full current-voltage (I-V) characteristic of high-performance (aggressively scaled-down) tunneling field-effect transistors (TFETs) based on homojunction direct-bandgap semiconductors. The model is based on an analytic description of two key features, which capture the main physical phenomena related to TFETs: (1) the potential profile from source to channel and (2) the elliptic curvature of the complex bands in the bandgap region. It is proposed to use 1D Poisson's equations in the source and the channel to describe the potential profile in homojunction TFETs. This makes it possible to quantify the impact of source/drain doping on device performance, an aspect usually ignored in TFET modeling but highly relevant in ultra-scaled devices. The compact model is validated by comparison with state-of-the-art quantum transport simulations using a 3D full-band atomistic approach based on non-equilibrium Green's functions. It is shown that the model reproduces with good accuracy the data obtained from the simulations in all regions of operation: the on/off states and the n/p branches of conduction. This approach allows calculation of energy-dependent band-to-band tunneling currents in TFETs, a feature that allows gaining deep insight into the underlying device physics. The simplicity and accuracy of the approach provide a powerful tool to explore in a quantitative manner how a wide variety of parameters (material-, size-, and/or geometry-dependent) impact TFET performance under any bias conditions. The proposed model thus presents a practical complement to computationally expensive simulations such as the 3D NEGF approach.
Comparative Evaluation of Some Crop Yield Prediction Models ...
African Journals Online (AJOL)
A computer program was adopted from the work of Hill et al. (1982) to calibrate and test three of the existing yield prediction models using tropical cowpea yield–weather data. The models tested were the Hanks Model (first and second versions), the Stewart Model (first and second versions) and the Hall–Butcher Model. Three sets of cowpea yield-water use and weather data were collected.
Prediction methods environmental-effect reporting
International Nuclear Information System (INIS)
Jonker, R.J.; Koester, H.W.
1987-12-01
This report provides a survey of prediction methods which can be applied to the calculation of emissions in nuclear-reactor accidents, in the framework of environmental-effect reports (Dutch m.e.r.) or risk analyses. Emissions during normal operation are also important for the m.e.r.; these can be derived from measured emissions of power plants in operation, and data concerning the latter are reported. The report consists of an introduction to reactor technology, including a description of some reactor types, the corresponding fuel cycle and dismantling scenarios; a discussion of risk analyses for nuclear power plants and the physical processes which can play a role during accidents; a discussion of the prediction methods to be employed and the expected developments in this area; and some background information. (author). 145 refs.; 21 figs.; 20 tabs.
Prediction of speech intelligibility based on an auditory preprocessing model
DEFF Research Database (Denmark)
Christiansen, Claus Forup Corlin; Pedersen, Michael Syskind; Dau, Torsten
2010-01-01
Classical speech intelligibility models, such as the speech transmission index (STI) and the speech intelligibility index (SII) are based on calculations on the physical acoustic signals. The present study predicts speech intelligibility by combining a psychoacoustically validated model of auditory...
Modelling microbial interactions and food structure in predictive microbiology
Malakar, P.K.
2002-01-01
Keywords: modelling, dynamic models, microbial interactions, diffusion, microgradients, colony growth, predictive microbiology.
Growth response of microorganisms in foods is a complex process. Innovations in food production and preservation techniques have resulted in adoption of
Ocean wave prediction using numerical and neural network models
Digital Repository Service at National Institute of Oceanography (India)
Mandal, S.; Prabaharan, N.
This paper presents an overview of the development of the numerical wave prediction models and recently used neural networks for ocean wave hindcasting and forecasting. The numerical wave models express the physical concepts of the phenomena...
A Prediction Model of the Capillary Pressure J-Function.
Directory of Open Access Journals (Sweden)
W S Xu
The capillary pressure J-function is a dimensionless measure of the capillary pressure of a fluid in a porous medium. The function was derived based on a capillary bundle model. However, the dependence of the J-function on the saturation Sw is not well understood. A prediction model for it is presented based on a capillary pressure model; the J-function prediction model is a power function instead of an exponential or polynomial function. Relative permeability is calculated with the J-function prediction model, resulting in an easier calculation and results that are more representative.
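A power-function model of the kind this record proposes, J(Sw) = a·Sw^b, can be fitted by ordinary least squares in log-log space, since log J = log a + b·log Sw is linear. The data points and coefficients below are synthetic, not from the paper.

```python
# Sketch: fitting the power-law form J(Sw) = a * Sw**b by least squares in
# log-log coordinates. Data points are synthetic.
import math

def fit_power(sw, j):
    xs = [math.log(s) for s in sw]
    ys = [math.log(v) for v in j]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    # Slope of the log-log regression line is the exponent b.
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    a = math.exp(ybar - b * xbar)  # intercept recovers the prefactor a
    return a, b

# Synthetic data generated from J = 0.3 * Sw**-0.6
sw = [0.2, 0.4, 0.6, 0.8, 1.0]
j = [0.3 * s ** -0.6 for s in sw]
a, b = fit_power(sw, j)
```

Because the synthetic data follow the power law exactly, the fit recovers a = 0.3 and b = -0.6; with measured capillary pressure data the same procedure gives the best-fitting power-law coefficients.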
Modeling and Model Predictive Power and Rate Control of Wireless Communication Networks
Directory of Open Access Journals (Sweden)
Cunwu Han
2014-01-01
A novel power and rate control system model for wireless communication networks is presented, which includes uncertainties, input constraints, and time-varying delays in both the state and the control input. A robust delay-dependent model predictive power and rate control method is proposed, and the state feedback control law is obtained by solving an optimization problem derived using linear matrix inequality (LMI) techniques. Simulation results are given to illustrate the effectiveness of the proposed method.
Predictive Models of Cognitive Outcomes of Developmental Insults
Chan, Yupo; Bouaynaya, Nidhal; Chowdhury, Parimal; Leszczynska, Danuta; Patterson, Tucker A.; Tarasenko, Olga
2010-04-01
Representatives of Arkansas medical, research and educational institutions have gathered over the past four years to discuss the relationship between functional developmental perturbations and their neurological consequences. We wish to track the effects of developmental perturbations on the nervous system over time and across species. Except for perturbations, the sequence of events that occurs during neural development was found to be remarkably conserved across mammalian species. The tracking includes consequences on anatomical regions and behavioral changes. The ultimate goal is to develop a predictive model of long-term genotypic and phenotypic outcomes that includes developmental insults. Such a model can subsequently be fostered into an educated intervention for therapeutic purposes. Several datasets were identified to test plausible hypotheses, ranging from evoked potential datasets to sleep-disorder datasets. An initial model may be mathematical and conceptual. However, we expect to see rapid progress as large-scale gene expression studies in the mammalian brain permit genome-wide searches to discover genes that are uniquely expressed in brain circuits and regions. These genes ultimately control behavior. By using a validated model we endeavor to make useful predictions.
Hologram QSAR model for the prediction of human oral bioavailability.
Moda, Tiago L; Montanari, Carlos A; Andricopulo, Adriano D
2007-12-15
A drug intended for use in humans should have an ideal balance of pharmacokinetics and safety, as well as potency and selectivity. Unfavorable pharmacokinetics can negatively affect the clinical development of many otherwise promising drug candidates. A variety of in silico ADME (absorption, distribution, metabolism, and excretion) models are receiving increased attention due to a better appreciation that pharmacokinetic properties should be considered in the early phases of the drug discovery process. Human oral bioavailability is an important pharmacokinetic property, which is directly related to the amount of drug available in the systemic circulation to exert pharmacological and therapeutic effects. In the present work, hologram quantitative structure-activity relationships (HQSAR) were performed on a training set of 250 structurally diverse molecules with known human oral bioavailability. The most significant HQSAR model (q² = 0.70, r² = 0.93) was obtained using atoms, bonds, connections, and chirality as fragment distinctions. The predictive ability of the model was evaluated on an external test set containing 52 molecules not included in the training set, and the predicted values were in good agreement with the experimental values. The HQSAR model should be useful for the design of new drug candidates with increased bioavailability as well as in chemical library design, virtual screening, and high-throughput screening.
Predictive modeling of gingivitis severity and susceptibility via oral microbiota.
Huang, Shi; Li, Rui; Zeng, Xiaowei; He, Tao; Zhao, Helen; Chang, Alice; Bo, Cunpei; Chen, Jie; Yang, Fang; Knight, Rob; Liu, Jiquan; Davis, Catherine; Xu, Jian
2014-09-01
Predictive modeling of human disease based on the microbiota holds great potential yet remains challenging. Here, 50 adults underwent controlled transitions from naturally occurring gingivitis, to healthy gingivae (baseline), and to experimental gingivitis (EG). In diseased plaque microbiota, 27 bacterial genera changed in relative abundance and functional genes including 33 flagellar biosynthesis-related groups were enriched. Plaque microbiota structure exhibited a continuous gradient along the first principal component, reflecting transition from healthy to diseased states, which correlated with Mazza Gingival Index. We identified two host types with distinct gingivitis sensitivity. Our proposed microbial indices of gingivitis classified host types with 74% reliability, and, when tested on another 41-member cohort, distinguished healthy from diseased individuals with 95% accuracy. Furthermore, the state of the microbiota in naturally occurring gingivitis predicted the microbiota state and severity of subsequent EG (but not the state of the microbiota during the healthy baseline period). Because the effect of disease is greater than interpersonal variation in plaque, in contrast to the gut, plaque microbiota may provide advantages in predictive modeling of oral diseases.
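The microbial indices described above are, in effect, classifiers over genus-level relative abundances. A minimal stand-in, plain logistic regression trained by gradient descent on synthetic abundance data, can illustrate the idea; the genera, labels, and all parameters below are invented, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic relative abundances for six genera (columns); rows are plaque
# samples labeled healthy (0) or gingivitis (1).  All data are fabricated.
n, g = 120, 6
X = rng.dirichlet(np.ones(g), size=n)
y = (X[:, 0] - X[:, 1] + rng.normal(0, 0.05, n) > 0).astype(float)

# Plain logistic regression as a stand-in for the paper's microbial index.
w = np.zeros(g)
b = 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted probability of disease
    w -= 2.0 * (X.T @ (p - y) / n)       # gradient step on the log-loss
    b -= 2.0 * float((p - y).mean())

accuracy = float(((p > 0.5) == (y == 1)).mean())
```

In practice one would evaluate on a held-out cohort, as the paper does with its 41-member validation group, rather than on training accuracy.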
Statistical model based gender prediction for targeted NGS clinical panels
Directory of Open Access Journals (Sweden)
Palani Kannan Kandavel
2017-12-01
A reference test dataset is used to test the model. The sensitivity of gender prediction has been increased relative to the current "genotype composition in ChrX" based approach. In addition, the prediction score given by the model can be used to evaluate the quality of a clinical dataset: a higher prediction score toward the respective gender indicates higher-quality sequenced data.
comparative analysis of two mathematical models for prediction
African Journals Online (AJOL)
A mathematical model for prediction of the compressive strength of sandcrete blocks was developed using statistical analysis of the sandcrete block data obtained from experimental work done in this study. The models used are Scheffe's and Osadebe's optimization theories to predict the compressive strength of ...
Comparison of predictive models for the early diagnosis of diabetes
M. Jahani (Meysam); M. Mahdavi (Mahdi)
2016-01-01
Objectives: This study develops neural network models to improve the prediction of diabetes using clinical and lifestyle characteristics. Prediction models were developed using a combination of approaches and concepts. Methods: We used memetic algorithms to update weights and to improve
Testing and analysis of internal hardwood log defect prediction models
R. Edward. Thomas
2011-01-01
The severity and location of internal defects determine the quality and value of lumber sawn from hardwood logs. Models have been developed to predict the size and position of internal defects based on external defect indicator measurements. These models were shown to predict approximately 80% of all internal knots based on external knot indicators. However, the size...
Hidden Markov Model for quantitative prediction of snowfall
Indian Academy of Sciences (India)
A Hidden Markov Model (HMM) has been developed for prediction of quantitative snowfall in the Pir-Panjal and Great Himalayan mountain ranges of the Indian Himalaya. The model predicts snowfall two days in advance using nine daily recorded meteorological variables from the past 20 winters (1992–2012). There are six ...
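The abstract does not give the model's parameters, but the generic two-day-ahead prediction step of an HMM can be sketched: filter the hidden-state distribution with the forward algorithm, then propagate it two steps through the transition matrix. All states, observations, and probabilities below are illustrative placeholders:

```python
import numpy as np

# Toy 2-state HMM: states = (no-snow, snow); observations = a discretized
# meteorological reading taking values 0, 1, 2.  Probabilities are invented.
A = np.array([[0.8, 0.2],        # transition matrix P(state' | state)
              [0.4, 0.6]])
B = np.array([[0.7, 0.2, 0.1],   # emission matrix P(obs | state)
              [0.1, 0.3, 0.6]])
pi = np.array([0.6, 0.4])        # initial state distribution

def forward(obs):
    """Forward algorithm: filtered state distribution given observations so far."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()
    return alpha

def predict_two_days_ahead(obs):
    """Propagate the filtered distribution two steps through the chain."""
    return forward(obs) @ A @ A

p = predict_two_days_ahead([0, 1, 2, 2])   # P(no-snow), P(snow) in two days
```

The paper's model would use nine continuous variables (so continuous emission densities) and six states; the propagation step is the same.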
Bayesian variable order Markov models: Towards Bayesian predictive state representations
Dimitrakakis, C.
2009-01-01
We present a Bayesian variable order Markov model that shares many similarities with predictive state representations. The resulting models are compact and much easier to specify and learn than classical predictive state representations. Moreover, we show that they significantly outperform a more
Refining the Committee Approach and Uncertainty Prediction in Hydrological Modelling
Kayastha, N.
2014-01-01
Due to the complexity of hydrological systems, a single model may be unable to capture the full range of a catchment response and accurately predict the streamflows. The multi-model approach opens up possibilities for handling such difficulties and allows improving the predictive capability of
Wind turbine control and model predictive control for uncertain systems
DEFF Research Database (Denmark)
Thomsen, Sven Creutz
as disturbance models for controller design. The theoretical study deals with Model Predictive Control (MPC). MPC is an optimal control method which is characterized by the use of a receding prediction horizon. MPC has risen in popularity due to its inherent ability to systematically account for time...
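The receding-horizon idea described above can be illustrated with a toy unconstrained MPC for a scalar linear plant (plant, cost weights, and horizon below are all invented for the sketch): at each step a finite-horizon quadratic cost is minimized over the whole input sequence, but only the first input is applied before re-solving:

```python
import numpy as np

a, b, r, N = 0.9, 0.5, 0.1, 5   # toy plant x' = a*x + b*u, input weight, horizon

def mpc_input(x0):
    """Minimize sum(x_k^2) + r*sum(u_k^2) over horizon N; return first input."""
    # Stacked prediction: x = F*x0 + G*u  for u = (u_0 .. u_{N-1}).
    F = np.array([a ** (k + 1) for k in range(N)])
    G = np.zeros((N, N))
    for k in range(N):
        for j in range(k + 1):
            G[k, j] = a ** (k - j) * b
    # Closed-form solution of min ||F*x0 + G*u||^2 + r*||u||^2.
    H = G.T @ G + r * np.eye(N)
    u = np.linalg.solve(H, -G.T @ F * x0)
    return u[0]          # receding horizon: apply only the first move

# Closed loop: re-solve the optimization at every time step.
x = 1.0
for _ in range(20):
    x = a * x + b * mpc_input(x)
```

Real MPC earns its keep when constraints (actuator limits, safety bounds) are added, turning each step into a constrained quadratic program; the receding-horizon structure is unchanged.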
Model predictive control of a 3-DOF helicopter system using ...
African Journals Online (AJOL)
... by simulation, and its performance is compared with that achieved by linear model predictive control (LMPC). Keywords: nonlinear systems, helicopter dynamics, MIMO systems, model predictive control, successive linearization. International Journal of Engineering, Science and Technology, Vol. 2, No. 10, 2010, pp. 9-19 ...
A mathematical model for predicting earthquake occurrence ...
African Journals Online (AJOL)
We consider the continental crust under damage. We use the observed results of microseism in many seismic stations of the world which was established to study the time series of the activities of the continental crust with a view to predicting possible time of occurrence of earthquake. We consider microseism time series ...
Model for predicting the injury severity score.
Hagiwara, Shuichi; Oshima, Kiyohiro; Murata, Masato; Kaneko, Minoru; Aoki, Makoto; Kanbe, Masahiko; Nakamura, Takuro; Ohyama, Yoshio; Tamura, Jun'ichi
2015-07-01
To determine a formula that predicts the injury severity score from parameters obtained in the emergency department at arrival, we reviewed the medical records of trauma patients who were transferred to the emergency department of Gunma University Hospital between January 2010 and December 2010. The injury severity score, age, mean blood pressure, heart rate, Glasgow coma scale, hemoglobin, hematocrit, red blood cell count, platelet count, fibrinogen, international normalized ratio of prothrombin time, activated partial thromboplastin time, and fibrin degradation products were examined in those patients on arrival. To determine the formula that predicts the injury severity score, multiple linear regression analysis was carried out: the injury severity score was set as the dependent variable, and the other parameters were set as candidate objective variables. IBM SPSS Statistics 20 was used for the statistical analysis. Statistical significance was set at P < 0.05. The Durbin-Watson ratio was 2.200. A formula for predicting the injury severity score in trauma patients was developed from ordinary parameters such as fibrin degradation products and mean blood pressure. This formula is useful because the injury severity score can be predicted easily in the emergency department.
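The paper's actual formula is not given in the abstract, but the fitting procedure, multiple linear regression of ISS on admission parameters, is straightforward to sketch on synthetic data. The two predictors, the coefficients, and every value below are fabricated for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60

# Synthetic stand-ins for two of the admission parameters named in the paper
# (fibrin degradation products, mean blood pressure); ranges are invented.
fdp = rng.uniform(2, 120, n)      # fibrin degradation products
mbp = rng.uniform(50, 110, n)     # mean blood pressure
iss = 0.15 * fdp - 0.20 * mbp + 30 + rng.normal(0, 2, n)   # fabricated ISS

# Ordinary least squares with an intercept, as in a standard multiple
# linear regression analysis.
X = np.column_stack([np.ones(n), fdp, mbp])
coef, *_ = np.linalg.lstsq(X, iss, rcond=None)
predicted_iss = X @ coef
```

The study's stepwise candidate-variable selection and the Durbin-Watson check for residual autocorrelation would sit on top of this basic fit.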
Magnuson, Brian
A proof-of-concept software-in-the-loop study is performed to assess the accuracy of predicted net and charge-gaining energy consumption for potential effective use in optimizing powertrain management of hybrid vehicles. With promising results of improving the fuel efficiency of a thermostatic control strategy for a series, plug-in, hybrid-electric vehicle by 8.24%, the route and speed prediction machine learning algorithms are redesigned and implemented for real-world testing in a stand-alone C++ code-base to ingest map data, learn and predict driver habits, and store driver data for fast startup and shutdown of the controller or computer used to execute the compiled algorithm. Speed prediction is performed using a multi-layer, multi-input, multi-output neural network using feed-forward prediction and gradient descent through backpropagation training. Route prediction utilizes a Hidden Markov Model with a recurrent forward algorithm for prediction and multi-dimensional hash maps to store state and state distribution, constraining associations between atomic road segments and end destinations. Predicted energy is calculated using the predicted time-series speed and elevation profile over the predicted route and the road-load equation. Testing of the code-base is performed over a known road network spanning 24x35 blocks on the south hill of Spokane, Washington. A large set of training routes is traversed once to add randomness to the route prediction algorithm, and a subset of the training routes (testing routes) is traversed to assess the accuracy of the net and charge-gaining predicted energy consumption. Each test route is traveled a random number of times with varying speed conditions from traffic and pedestrians to add randomness to speed prediction. Prediction data is stored and analyzed in a post-process Matlab script. The aggregated results and analysis of all traversals of all test routes reflect the performance of the Driver Prediction algorithm.
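The speed-prediction component described above, a feed-forward network trained by gradient descent through backpropagation, can be sketched minimally in NumPy. The inputs, target relation, network size, and learning rate are invented placeholders, not the thesis's actual design:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: inputs are (previous speed, road grade), both normalized to [0, 1];
# target is the next-step speed.  The relation below is fabricated.
X = rng.uniform(0, 1, (200, 2))
y = 0.7 * X[:, :1] + 0.2 * np.sin(3 * X[:, 1:2])

# One hidden layer, tanh activation, linear output.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.3

for _ in range(3000):
    h = np.tanh(X @ W1 + b1)            # feed-forward pass
    out = h @ W2 + b2
    err = out - y
    # Backpropagate the squared-error gradient through both layers.
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)    # derivative of tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float((err ** 2).mean())
```

The thesis's network is multi-input and multi-output (a time series of speeds) and runs in C++, but the forward/backward structure is the same.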
Directory of Open Access Journals (Sweden)
Genoveva Rodríguez-Castañeda
Species distribution modeling (SDM) is an increasingly important tool to predict the geographic distribution of species. Even though many problems associated with this method have been highlighted and solutions have been proposed, little has been done to increase comparability among studies. We reviewed recent publications applying SDMs and found that seventy-nine percent failed to report methods that ensure comparability among studies, such as disclosing the maximum probability range produced by the models and reporting the number of species occurrences used. We modeled six species of Falco from northern Europe and demonstrate that model results are altered by (1) spatial bias in species' occurrence data, (2) differences in the geographic extent of the environmental data, and (3) the effects of transformation of model output to presence/absence data when applying thresholds. Depending on the modeling decisions, forecasts of the future geographic distribution of Falco ranged from range contraction in 80% of the species to no net loss in any species, with the best model predicting no net loss of habitat in Northern Europe. The fact that predictions of range changes in response to climate change in published studies may be influenced by decisions in the modeling process seriously hampers the possibility of making sound management recommendations. Thus, each of the decisions made in generating SDMs should be reported and evaluated to ensure conclusions and policies are based on the biology and ecology of the species being modeled.
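Point (3), the sensitivity of presence/absence maps to the chosen threshold, is easy to demonstrate on a synthetic suitability surface. The surface and the thresholds below are invented; the point is only that predicted range size varies strongly with the cutoff:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for an SDM's continuous habitat-suitability output over a grid.
suitability = rng.beta(2, 5, size=(100, 100))

# Converting continuous output to presence/absence depends strongly on the
# threshold chosen -- one of the comparability issues the paper highlights.
fractions = [float((suitability >= t).mean()) for t in (0.2, 0.4, 0.6)]
# fractions[i] = predicted occupied fraction of the grid at each threshold
```

Two studies of the same species using different thresholds can therefore report very different range sizes, which is why the authors argue the threshold choice must be disclosed.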
Predictive Multiscale Modeling of Nanocellulose Based Materials and Systems
International Nuclear Information System (INIS)
Kovalenko, Andriy
2014-01-01
Cellulose Nanocrystals (CNC) is a renewable, biodegradable biopolymer with outstanding mechanical properties made from a highly abundant natural source, and is therefore very attractive as a reinforcing additive to replace petroleum-based plastics in biocomposite materials, foams, and gels. Large-scale applications of CNC are currently limited due to its low solubility in the non-polar organic solvents used in existing polymerization technologies. The solvation properties of CNC can be improved by chemical modification of its surface. Development of effective surface modifications has been rather slow because extensive chemical modifications destabilize the hydrogen bonding network of cellulose and deteriorate the mechanical properties of CNC. We employ predictive multiscale theory, modeling, and simulation to gain fundamental insight into the effect of CNC surface modifications on hydrogen bonding, CNC crystallinity, solvation thermodynamics, and CNC compatibilization with existing polymerization technologies, so as to rationally design green nanomaterials with improved solubility in non-polar solvents, controlled liquid crystal ordering, and optimized extrusion properties. An essential part of this multiscale modeling approach is the statistical-mechanical 3D-RISM-KH molecular theory of solvation, coupled with quantum mechanics, molecular mechanics, and multistep molecular dynamics simulation. The 3D-RISM-KH theory provides predictive modeling of both polar and non-polar solvents, solvent mixtures, and electrolyte solutions in a wide range of concentrations and thermodynamic states. It properly accounts for effective interactions in solution such as steric effects, hydrophobicity and hydrophilicity, hydrogen bonding, salt bridges, buffer, and co-solvent, and successfully predicts solvation effects and processes in bulk liquids, solvation layers at solid surfaces, and in pockets and other inner spaces of macromolecules and supramolecular assemblies. This methodology
Econometric models for predicting confusion crop ratios
Umberger, D. E.; Proctor, M. H.; Clark, J. E.; Eisgruber, L. M.; Braschler, C. B. (Principal Investigator)
1979-01-01
Results for both the United States and Canada show that econometric models can provide estimates of confusion crop ratios that are more accurate than historical ratios. Whether these models can support the LACIE 90/90 accuracy criterion is uncertain. In the United States, experimenting with additional model formulations could provide improved models in some CRDs, particularly in winter wheat. Improved models may also be possible for the Canadian CDs. The more aggregated province/state models outperformed individual CD/CRD models. This result was expected partly because acreage statistics are based on sampling procedures, and sampling precision declines from the province/state to the CD/CRD level. Declining sampling precision and the need to substitute province/state data for CD/CRD data introduced measurement error into the CD/CRD models.
Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki
2012-01-01
The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events is predicted quite well by fixed slip and fixed recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.
PEEX Modelling Platform for Seamless Environmental Prediction
Baklanov, Alexander; Mahura, Alexander; Arnold, Stephen; Makkonen, Risto; Petäjä, Tuukka; Kerminen, Veli-Matti; Lappalainen, Hanna K.; Ezau, Igor; Nuterman, Roman; Zhang, Wen; Penenko, Alexey; Gordov, Evgeny; Zilitinkevich, Sergej; Kulmala, Markku
2017-04-01
The Pan-Eurasian EXperiment (PEEX) is a multidisciplinary, multi-scale research programme started in 2012 and aimed at resolving the major uncertainties in Earth System Science and global sustainability issues concerning the Arctic and boreal Northern Eurasian regions and China. Such challenges include climate change, air quality, biodiversity loss, chemicalization, food supply, and the use of natural resources by mining, industry, energy production, and transport. The research infrastructure introduces the current state-of-the-art modeling platform and observation systems in the Pan-Eurasian region and presents the future baselines for the coherent and coordinated research infrastructures in the PEEX domain. The PEEX modeling platform is characterized by a complex, seamless, integrated Earth System Modeling (ESM) approach, in combination with specific models of different processes and elements of the system acting on different temporal and spatial scales. The ensemble approach is taken to the integration of modeling results from different models, participants, and countries. PEEX utilizes the full potential of a hierarchy of models: scenario analysis, inverse modeling, and modeling based on measurement needs and processes. The models are validated and constrained by available in-situ and remote sensing data of various spatial and temporal scales using data assimilation and top-down modeling. The analyses of the anticipated large volumes of data produced by available models and sensors will be supported by a dedicated virtual research environment developed for these purposes.
Models Predicting Success of Infertility Treatment: A Systematic Review
Zarinara, Alireza; Zeraati, Hojjat; Kamali, Koorosh; Mohammad, Kazem; Shahnazari, Parisa; Akhondi, Mohammad Mehdi
2016-01-01
Background: Infertile couples are faced with problems that affect their marital life. Infertility treatment is expensive and time consuming, and occasionally is simply not possible. Prediction models for infertility treatment have been proposed, and prediction of treatment success is a new field in infertility treatment. Because prediction of treatment success is a new need for infertile couples, this paper reviewed previous studies to form a general picture of the applicability of the models. Methods: This study was conducted as a systematic review at Avicenna Research Institute in 2015. Six databases were searched based on WHO definitions and MeSH key words. Papers about prediction models in infertility were evaluated. Results: Eighty-one papers were eligible for the study. Papers covered years after 1986, and studies were designed retrospectively and prospectively. IVF prediction models have the largest share among the papers. The most common predictors were age, duration of infertility, and ovarian and tubal problems. Conclusion: Predictio