Globally disruptive events show predictable timing patterns
Gillman, Michael P.; Erenler, Hilary E.
2017-01-01
Globally disruptive events include asteroid/comet impacts, large igneous provinces and glaciations, all of which have been considered as contributors to mass extinctions. Understanding the overall relationship between the timings of the largest extinctions and their potential proximal causes remains one of science's great unsolved mysteries. Cycles of about 60 Myr in both fossil diversity and environmental data suggest external drivers such as the passage of the Solar System through the galactic plane. While cyclic phenomena are recognized statistically, a lack of coherent mechanisms and a failure to link key events has hampered wider acceptance of multi-million year periodicity and its relevance to earth science and evolution. The generation of a robust predictive model of timings, with a clear plausible primary mechanism, would signal a paradigm shift. Here, we present a model of the timings of globally disruptive events and a possible explanation of their ultimate cause. The proposed model is a symmetrical pattern of 63 Myr sequences around a central value, interpreted as the occurrence of events along, and parallel to, the galactic midplane. The symmetry is consistent with multiple dark matter disks, aligned parallel to the midplane. One implication of the precise pattern of timings and the underlying physical model is the ability to predict future events, such as a major extinction in 1-2 Myr.
Daniel H Fine
Improved diagnostics remains a fundamental goal of biomedical research. This study was designed to assess cytokine biomarkers that could predict bone loss (BL) in localized aggressive periodontitis. 2,058 adolescents were screened. Two groups of 50 periodontally healthy adolescents were enrolled in the longitudinal study. One group had Aggregatibacter actinomycetemcomitans (Aa), the putative pathogen, while the matched cohort did not. Cytokine levels were assessed in saliva and gingival crevicular fluid (GCF). Participants were sampled, examined, and radiographed every 6 months for 2-3 years. Disease was defined as radiographic evidence of BL. Saliva and GCF were collected at each visit, frozen, and then tested retrospectively after detection of BL. Sixteen subjects with Aa developed BL. Saliva from Aa-positive and Aa-negative healthy subjects was compared to saliva from subjects who developed BL. GCF was collected from 16 subjects with BL and from another 38 subjects who remained healthy. GCF from BL sites in the 16 subjects was compared to healthy sites in these same subjects and to healthy sites in subjects who remained healthy. Results showed that cytokines in saliva associated with acute inflammation were elevated in subjects who developed BL (i.e., MIP-1α, MIP-1β, IL-1α, IL-1β and IL-8; p<0.01). MIP-1α was elevated 13-fold, 6 months prior to BL. When MIP-1α levels were set at 40 pg/ml, 98% of healthy sites were below that level (specificity), whereas 93% of sites with BL were higher (sensitivity), with comparable predictive values of 98% (p<0.0001; 95% C.I. = 42.5-52.7). MIP-1α consistently showed elevated levels as a biomarker for BL in both saliva and GCF, 6 months prior to BL. MIP-1α continues to demonstrate its strong candidacy as a diagnostic biomarker for both subject and site vulnerability to BL.
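The cutoff logic behind the reported sensitivity and specificity can be sketched in a few lines: a site is flagged when MIP-1α exceeds a threshold (40 pg/ml in the study), and the flags are scored against observed bone loss. The measurement values below are invented for illustration only.

```python
def sensitivity_specificity(levels_bl, levels_healthy, cutoff=40.0):
    """Flag a site as at-risk when MIP-1alpha exceeds `cutoff` (pg/ml).

    levels_bl: levels at sites that later developed bone loss (BL);
    levels_healthy: levels at sites that stayed healthy.
    """
    tp = sum(1 for x in levels_bl if x > cutoff)        # BL sites correctly flagged
    tn = sum(1 for x in levels_healthy if x <= cutoff)  # healthy sites correctly cleared
    return tp / len(levels_bl), tn / len(levels_healthy)

# hypothetical measurements (pg/ml), not data from the study
sens, spec = sensitivity_specificity([55.0, 48.0, 61.0, 38.0],
                                     [12.0, 9.0, 44.0, 20.0])
```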
William S. Currie; Mark E. Harmon; Ingrid C. Burke; Stephen C. Hart; William J. Parton; Whendee L. Silver
2009-01-01
We analyzed results from 10-year long field incubations of foliar and fine root litter from the Long-term Intersite Decomposition Experiment Team (LIDET) study. We tested whether a variety of climate and litter quality variables could be used to develop regression models of decomposition parameters across wide ranges in litter quality and climate and whether these...
Ted W Simon
A stochastic model of nuclear receptor-mediated transcription was developed based on activation of the aryl hydrocarbon receptor (AHR) by 2,3,7,8-tetrachlorodibenzodioxin (TCDD) and subsequent binding of the activated AHR to xenobiotic response elements (XREs) on DNA. The model was based on effects observed in cell lines commonly used as in vitro experimental systems. Following ligand binding, the AHR moves into the cell nucleus and forms a heterodimer with the aryl hydrocarbon receptor nuclear translocator (ARNT). In the model, a requirement for binding to DNA is that a generic coregulatory protein is subsequently bound to the AHR-ARNT dimer. Varying the amount of coregulator available within the nucleus altered both the potency and efficacy of TCDD for inducing transcription of CYP1A1 mRNA, a commonly used marker for activation of the AHR. Lowering the amount of available cofactor slightly increased the EC50 for the transcriptional response without changing the efficacy or maximal response. Further reduction in the amount of cofactor reduced the efficacy and produced non-monotonic dose-response curves (NMDRCs) at higher ligand concentrations. The shapes of these NMDRCs were reminiscent of the phenomenon of squelching. Resource limitations for transcriptional machinery are becoming apparent in eukaryotic cells. Within single cells, nuclear receptor-mediated gene expression appears to be a stochastic process; however, intercellular communication and other aspects of tissue coordination may represent a compensatory process to maintain an organism's ability to respond on a phenotypic level to various stimuli within an inconstant environment.
Levy, R.; Mcginness, H.
1976-01-01
Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
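A minimal sketch of an "interim" simulator in the spirit described above: uncorrelated hourly wind-speed samples drawn from a fitted distribution by inverse-CDF sampling. A Weibull distribution is a common choice for wind speeds, but the shape and scale values below are assumptions for illustration, not the Goldstone fit.

```python
import math
import random

def hourly_wind_speeds(n_hours, shape=2.0, scale=6.0, seed=42):
    """Draw uncorrelated hourly wind speeds (m/s) from a Weibull distribution.

    Inverse-CDF sampling: U ~ Uniform(0,1) -> scale * (-ln(1-U))**(1/shape).
    The shape/scale parameters here are illustrative assumptions.
    """
    rng = random.Random(seed)
    return [scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape)
            for _ in range(n_hours)]

speeds = hourly_wind_speeds(24)   # one day of hourly samples
```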
Cestari, Andrea
2013-01-01
Predictive modeling is emerging as an important knowledge-based technology in healthcare. The interest in predictive modeling reflects advances on several fronts: the availability of health information from increasingly complex databases and electronic health records, a better understanding of causal or statistical predictors of health, disease processes and multifactorial models of ill-health, and developments in nonlinear computer models using artificial intelligence or neural networks. These new computer-based forms of modeling are increasingly able to establish technical credibility in clinical contexts. The current state of knowledge is still quite young in understanding the likely future direction of how this so-called 'machine intelligence' will evolve, and therefore how current, relatively sophisticated predictive models will evolve in response to improvements in technology, which is advancing along a wide front. Predictive models in urology are gaining popularity not only for academic and scientific purposes but also in clinical practice, with the introduction of several nomograms dealing with the main fields of onco-urology.
Melanoma risk prediction models
Nikolić Jelena
2014-01-01
Background/Aim. The lack of effective therapy for advanced stages of melanoma emphasizes the importance of preventive measures and of screening the population at risk. Identifying individuals at high risk should allow targeted screening and follow-up involving those who would benefit most. The aim of this study was to identify the most significant factors for melanoma prediction in our population and to create prognostic models for identification and differentiation of individuals at risk. Methods. This case-control study included 697 participants (341 patients and 356 controls) who underwent an extensive interview and skin examination in order to check risk factors for melanoma. Pairwise univariate statistical comparison was used for the coarse selection of the most significant risk factors. These factors were fed into logistic regression (LR) and alternating decision tree (ADT) prognostic models that were assessed for their usefulness in identification of patients at risk of developing melanoma. Validation of the LR model was done by the Hosmer-Lemeshow test, whereas the ADT was validated by 10-fold cross-validation. The achieved sensitivity, specificity, accuracy and AUC for both models were calculated. A melanoma risk score (MRS) based on the outcome of the LR model is presented. Results. The LR model showed that the following risk factors were associated with melanoma: sunbeds (OR = 4.018; 95% CI 1.724-9.366 for those that sometimes used sunbeds), solar damage of the skin (OR = 8.274; 95% CI 2.661-25.730 for those with severe solar damage), hair color (OR = 3.222; 95% CI 1.984-5.231 for light brown/blond hair), the number of common naevi (over 100 naevi: OR = 3.57; 95% CI 1.427-8.931), the number of dysplastic naevi (1 to 10 dysplastic naevi: OR = 2.672; 95% CI 1.572-4.540; more than 10: OR = 6.487; 95% CI 1.993-21.119), Fitzpatrick's phototype and the presence of congenital naevi. Red hair, phototype I and large congenital naevi were...
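The way a logistic-regression risk score is assembled from odds ratios can be sketched directly: each present factor contributes beta = ln(OR) to the linear predictor. The odds ratios below come from the abstract, but the intercept and the factor names are hypothetical illustration, not the published MRS.

```python
import math

# Odds ratios reported in the abstract; keys are illustrative labels.
ODDS_RATIOS = {
    "sunbeds_sometimes": 4.018,
    "severe_solar_damage": 8.274,
    "light_brown_or_blond_hair": 3.222,
    "over_100_common_naevi": 3.570,
    "1_to_10_dysplastic_naevi": 2.672,
    "over_10_dysplastic_naevi": 6.487,
}

def melanoma_risk(factors, intercept=-3.0):
    """Logistic model sketch: risk = sigmoid(intercept + sum of ln(OR)).

    The intercept is an assumed baseline, not a value from the study.
    """
    lp = intercept + sum(math.log(ODDS_RATIOS[f]) for f in factors)
    return 1.0 / (1.0 + math.exp(-lp))   # logistic link

risk = melanoma_risk(["sunbeds_sometimes", "over_100_common_naevi"])
```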
MODEL PREDICTIVE CONTROL FUNDAMENTALS
2012-07-02
paper, we will present an introduction to the theory and application of MPC with Matlab codes written to ... model predictive control, linear systems, discrete-time systems, ... and then compute very rapidly for this open-loop con-.
Showing that the race model inequality is not violated
Gondan, Matthias; Riehl, Verena; Blurton, Steven Paul
2012-01-01
important being race models and coactivation models. Redundancy gains consistent with the race model have an upper limit, however, which is given by the well-known race model inequality (Miller, 1982). A number of statistical tests have been proposed for testing the race model inequality in single participants and groups of participants. All of these tests use the race model as the null hypothesis, and rejection of the null hypothesis is considered evidence in favor of coactivation. We introduce a statistical test in which the race model prediction is the alternative hypothesis. This test controls the Type I error if a theory predicts that the race model prediction holds in a given experimental condition. © 2011 Psychonomic Society, Inc.
Nominal model predictive control
Grüne, Lars
2013-01-01
5 p., to appear in Encyclopedia of Systems and Control, Tariq Samad, John Baillieul (eds.); International audience; Model Predictive Control is a controller design method which synthesizes a sampled data feedback controller from the iterative solution of open loop optimal control problems. We describe the basic functionality of MPC controllers, their properties regarding feasibility, stability and performance and the assumptions needed in order to rigorously ensure these properties in a nomina...
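The receding-horizon idea can be illustrated on a scalar linear system x[k+1] = a·x[k] + b·u[k]: at each sampling instant an open-loop finite-horizon LQ problem is solved (here via a backward Riccati recursion, the unconstrained special case) and only the first input of the plan is applied. All numbers below are illustrative, not from the encyclopedia entry.

```python
def first_lq_input(x, a, b, q, r, horizon):
    """Solve the open-loop finite-horizon LQ problem; return its first move."""
    p, k_gain = q, 0.0                          # p starts at the terminal weight
    for _ in range(horizon):                    # backward Riccati recursion
        k_gain = a * b * p / (r + b * b * p)    # optimal state-feedback gain
        p = q + a * p * (a - b * k_gain)        # scalar Riccati update
    return -k_gain * x                          # first input of the open-loop plan

def mpc_simulate(x0, a=1.2, b=1.0, q=1.0, r=0.1, horizon=10, steps=20):
    """Receding horizon: re-solve at every step, apply only the first input."""
    x = x0
    for _ in range(steps):
        x = a * x + b * first_lq_input(x, a, b, q, r, horizon)
    return x

final_state = mpc_simulate(5.0)   # open-loop unstable (a = 1.2), regulated toward 0
```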
Candidate Prediction Models and Methods
Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik
2005-01-01
This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines the possibilities w.r.t. different numerical weather predictions actually available to the project.
Predictive Surface Complexation Modeling
Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences
2016-11-29
Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO₂ and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.
ShowFlow: A practical interface for groundwater modeling
Tauxe, J.D.
1990-12-01
ShowFlow was created to provide a user-friendly, intuitive environment for researchers and students who use computer modeling software. What traditionally has been a workplace available only to those familiar with command-line based computer systems is now within reach of almost anyone interested in the subject of modeling. In the case of this edition of ShowFlow, the user can easily experiment with simulations using the steady state gaussian plume groundwater pollutant transport model SSGPLUME, though ShowFlow can be rewritten to provide a similar interface for any computer model. Included in this thesis is all the source code for both the ShowFlow application for Microsoft® Windows™ and the SSGPLUME model, a User's Guide, and a Developer's Guide for converting ShowFlow to run other model programs. 18 refs., 13 figs.
Giovanni Rapacciuolo
Conservation planners often wish to predict how species distributions will change in response to environmental changes. Species distribution models (SDMs) are the primary tool for making such predictions. Many methods are widely used; however, they all make simplifying assumptions, and predictions can therefore be subject to high uncertainty. With global change well underway, field records of observed range shifts are increasingly being used for testing SDM transferability. We used an unprecedented distribution dataset documenting recent range changes of British vascular plants, birds, and butterflies to test whether correlative SDMs based on climate change provide useful approximations of potential distribution shifts. We modelled past species distributions from climate using nine single techniques and a consensus approach, and projected the geographical extent of these models to a more recent time period based on climate change; we then compared model predictions with recent observed distributions in order to estimate the temporal transferability and prediction accuracy of our models. We also evaluated the relative effect of methodological and taxonomic variation on the performance of SDMs. Models showed good transferability in time when assessed using widespread metrics of accuracy. However, models had low accuracy to predict where occupancy status changed between time periods, especially for declining species. Model performance varied greatly among species within major taxa, but there was also considerable variation among modelling frameworks. Past climatic associations of British species distributions retain a high explanatory power when transferred to recent time--due to their accuracy to predict large areas retained by species--but fail to capture relevant predictors of change. We strongly emphasize the need for caution when using SDMs to predict shifts in species distributions: high explanatory power on temporally-independent records
Reciprocal Ontological Models Show Indeterminism Comparable to Quantum Theory
Bandyopadhyay, Somshubhro; Banik, Manik; Bhattacharya, Some Sankar; Ghosh, Sibasish; Kar, Guruprasad; Mukherjee, Amit; Roy, Arup
2016-12-01
We show that within the class of ontological models due to Harrigan and Spekkens, those satisfying preparation-measurement reciprocity must allow indeterminism comparable to that in quantum theory. Our result implies that one can design a quantum random number generator for which it is impossible, even in principle, to construct a reciprocal deterministic model.
Alemu, Setegn W; Calus, Mario P L; Muir, William M; Peeters, Katrijn; Vereijken, Addie; Bijma, Piter
2016-09-13
Mortality due to cannibalism causes both economic and welfare problems in laying hens. To limit mortality due to cannibalism, laying hens are often beak-trimmed, which is undesirable for animal welfare reasons. Genetic selection is an alternative strategy to increase survival, and it is more efficient when heritable variation that originates from social interactions, modelled as so-called indirect genetic effects (IGE), is taken into account. Despite the considerable heritable variation in survival time due to IGE, genetic improvement of survival time in laying hens is still challenging because the detected heritable variation of the trait with IGE is still limited, ranging from 0.06 to 0.26, and individuals that are still alive at the end of the recording period are censored. Furthermore, survival time records are available late in life and only on females. To cope with these challenges, we tested the hypothesis that genomic prediction increases the accuracy of estimated breeding values (EBV) compared to parent average EBV, and increases response to selection for survival time compared to a traditional breeding scheme. We tested this hypothesis in two lines of brown layers with intact beaks, which show cannibalism; we also tested the hypothesis that the rate of inbreeding per year is lower for genomic selection than for the traditional breeding scheme. The standard deviation of genomic prediction EBV for survival time was around 22 days for both lines, indicating good prospects for selection against mortality in laying hens with intact beaks. Genomic prediction increased the accuracy of the EBV by 35 and 32 % compared to the parent average EBV for the two lines. At the current reference population size, predicted response to selection was 91 % higher with genomic selection than with the traditional breeding scheme, as a result of a shorter generation interval in males and greater accuracy of selection in females. The predicted rate of inbreeding per...
Nonlinear chaotic model for predicting storm surges
M. Siek
2010-09-01
This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables. We implemented the univariate and multivariate chaotic models with direct and multi-step prediction techniques and optimized these models using an exhaustive search method. The built models were tested for predicting storm surge dynamics for different stormy conditions in the North Sea, and are compared to neural network models. The results show that the chaotic models can generally provide reliable and accurate short-term storm surge predictions.
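The local-model idea described above can be sketched minimally: reconstruct a phase space by time-delay embedding, find the nearest dynamical neighbour of the current state, and take that neighbour's successor as the forecast. The embedding dimension, delay, and the toy series below are illustrative assumptions, not the paper's surge data or model settings.

```python
import math

def embed(series, dim=3, delay=1):
    """Time-delay embedding: reconstruct state vectors from a scalar series."""
    n = len(series) - (dim - 1) * delay
    return [tuple(series[i + j * delay] for j in range(dim)) for i in range(n)]

def predict_next(series, dim=3, delay=1):
    """Local model: predict via the nearest dynamical neighbour's successor."""
    vectors = embed(series, dim, delay)
    target = vectors[-1]                               # current state
    best_i = min(range(len(vectors) - 1),              # exclude the target itself
                 key=lambda i: math.dist(vectors[i], target))
    # the observation that followed the nearest neighbour is the forecast
    return series[best_i + (dim - 1) * delay + 1]

# toy stand-in for a water-level record: a smooth oscillation
series = [math.sin(0.3 * t) for t in range(100)]
forecast = predict_next(series)                        # one-step-ahead prediction
```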
Paiement, Jean-François; Grandvalet, Yves; Bengio, Samy
2008-01-01
Modeling long-term dependencies in time series has proved very difficult to achieve with traditional machine learning methods. This problem occurs when considering music data. In this paper, we introduce generative models for melodies. We decompose melodic modeling into two subtasks. We first propose a rhythm model based on the distributions of distances between subsequences. Then, we define a generative model for melodies given chords and rhythms based on modeling sequences of Narmour featur...
Numerical weather prediction model tuning via ensemble prediction system
Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.
2011-12-01
This paper discusses a novel approach to tune the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of the NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infrastructure and it seems very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to an improved forecast skill. Second, results with an atmospheric general circulation model based ensemble prediction system show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, a global top-end NWP model tuning exercise with preliminary results is presented.
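The two-step loop (i)-(ii) can be caricatured with a trivial stand-in for the forecast model: draw parameter values from a proposal, score them against "observations", and re-fit the proposal from the best-scoring members. This is a cross-entropy-style simplification of the feedback, not the actual EPPES hierarchical update, and every number below is invented.

```python
import random
import statistics

def toy_forecast_error(param, truth=2.5):
    """Stand-in for forecast verification: noisy distance to a 'true' value."""
    return abs(param - truth) + random.gauss(0.0, 0.1)

def eppes_like_step(mean, sd, n_members=50, keep=10):
    """(i) run an ensemble with sampled parameters, (ii) re-fit the proposal."""
    members = [random.gauss(mean, sd) for _ in range(n_members)]
    members.sort(key=toy_forecast_error)          # rank by forecast skill
    best = members[:keep]                         # feed back the best values
    return statistics.mean(best), max(statistics.stdev(best), 1e-3)

random.seed(1)
mean, sd = 0.0, 2.0                               # initial proposal distribution
for _ in range(15):
    mean, sd = eppes_like_step(mean, sd)          # proposal tightens around ~2.5
```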
Zephyr - the prediction models
Nielsen, Torben Skov; Madsen, Henrik; Nielsen, Henrik Aalborg
2001-01-01
This paper briefly describes new models and methods for predicting the wind power output from wind farms. The system is being developed in a project which has the research organization Risø and the Department of Informatics and Mathematical Modelling (IMM) as the modelling team and all the Dani...
A Solved Model to Show Insufficiency of Quantitative Adiabatic Condition
LIU Long-Jiang; LIU Yu-Zhen; TONG Dian-Min
2009-01-01
The adiabatic theorem is a useful tool for treating slowly evolving quantum systems, but its practical application depends on the quantitative condition expressed by the Hamiltonian's eigenvalues and eigenstates, which is usually taken as a sufficient condition. Recently, the sufficiency of the condition was questioned, and several counterexamples have been reported. Here we present a new solved model to show the insufficiency of the traditional quantitative adiabatic condition.
Confidence scores for prediction models
Gerds, Thomas Alexander; van de Wiel, MA
2011-01-01
modelling strategy is applied to different training sets. For each modelling strategy we estimate a confidence score based on the same repeated bootstraps. A new decomposition of the expected Brier score is obtained, as well as the estimates of population average confidence scores. The latter can be used to distinguish rival prediction models with similar prediction performances. Furthermore, on the subject level a confidence score may provide useful supplementary information for new patients who want to base a medical decision on predicted risk. The ideas are illustrated and discussed using data from cancer...
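The core device of scoring rival models on repeated bootstrap samples can be sketched with the Brier score itself; the synthetic predictions and outcomes below, and the two-model comparison, are illustrative only and are not the paper's decomposition.

```python
import random

def brier(probs, outcomes):
    """Brier score: mean squared difference between predicted probability and outcome."""
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

def bootstrap_brier(probs, outcomes, n_boot=200, seed=0):
    """Brier scores over repeated bootstrap resamples of the evaluation data."""
    rng = random.Random(seed)
    n = len(outcomes)
    scores = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        scores.append(brier([probs[i] for i in idx], [outcomes[i] for i in idx]))
    return scores

# synthetic evaluation data: one well-calibrated model, one uninformative model
outcomes = [1, 0, 1, 1, 0, 0, 1, 0]
good = [0.9, 0.2, 0.8, 0.7, 0.1, 0.3, 0.9, 0.2]
poor = [0.5] * 8
good_scores = bootstrap_brier(good, outcomes)
poor_scores = bootstrap_brier(poor, outcomes)
```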
Modelling, controlling, predicting blackouts
Wang, Chengwei; Baptista, Murilo S
2016-01-01
The electric power system is one of the cornerstones of modern society. One of its most serious malfunctions is the blackout, a catastrophic event that may disrupt a substantial portion of the system, playing havoc with human life and causing great economic losses. Thus, understanding the mechanisms leading to blackouts and creating a reliable and resilient power grid has been a major issue, attracting the attention of scientists, engineers and stakeholders. In this paper, we study the blackout problem in power grids by considering a practical phase-oscillator model. This model allows one to simultaneously consider different types of power sources (e.g., traditional AC power plants and renewable power sources connected by DC/AC inverters) and different types of loads (e.g., consumers connected to distribution networks and consumers directly connected to power plants). We propose two new control strategies based on our model, one for traditional power grids, and another one for smart grids. The control strategie...
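A minimal second-order phase-oscillator ("swing equation") grid of the general kind discussed above can be written in a few lines: each node has a phase and frequency, a power injection, damping, and sinusoidal coupling along lines. The two-node grid and all parameter values below are illustrative assumptions, not the paper's model.

```python
import math

def step(theta, omega, P, K, adj, dt=0.01, damping=0.5):
    """One semi-implicit Euler step of the swing-equation dynamics."""
    new_theta, new_omega = [], []
    for i in range(len(theta)):
        coupling = sum(K * math.sin(theta[j] - theta[i]) for j in adj[i])
        domega = P[i] - damping * omega[i] + coupling   # swing equation
        new_omega.append(omega[i] + dt * domega)
        new_theta.append(theta[i] + dt * new_omega[-1])
    return new_theta, new_omega

# two-node toy grid: one generator (P > 0) feeding one consumer (P < 0)
theta, omega = [0.0, 0.0], [0.0, 0.0]
P, K = [1.0, -1.0], 2.0
adj = {0: [1], 1: [0]}
for _ in range(5000):                   # integrate to the synchronized state
    theta, omega = step(theta, omega, P, K, adj)
```

At synchrony the transmitted power balances the injection, K·sin(θ0 − θ1) = P0, so the phase difference settles at asin(P0/K).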
Melanoma Risk Prediction Models
Developing statistical models that estimate the probability of developing melanoma cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Prediction models in complex terrain
Marti, I.; Nielsen, Torben Skov; Madsen, Henrik
2001-01-01
The objective of the work is to investigate the performance of HIRLAM in complex terrain when used as input to energy production forecasting models, and to develop a statistical model to adapt HIRLAM predictions to the wind farm. The features of the terrain, especially the topography, influence... The predictions are calculated using on-line measurements of power production as well as HIRLAM predictions as input, thus taking advantage of the auto-correlation which is present in the power production for shorter prediction horizons. Statistical models are used to describe the relationship between observed energy production and HIRLAM predictions. The statistical models belong to the class of conditional parametric models. The models are estimated using local polynomial regression, but the estimation method is here extended to be adaptive in order to allow for slow changes in the system, e.g. caused by the annual variations...
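Local polynomial regression, the estimation method named in the abstract, can be sketched as kernel-weighted linear least squares around a fitting point. The data, kernel choice, and bandwidth below are illustrative, and the adaptive (time-varying) weighting the paper adds is omitted.

```python
import math

def local_linear(xs, ys, x0, bandwidth=0.2):
    """Fit a kernel-weighted least-squares line around x0; return fit at x0."""
    w = [math.exp(-0.5 * ((x - x0) / bandwidth) ** 2) for x in xs]  # Gaussian kernel
    sw = sum(w)
    mx = sum(wi * x for wi, x in zip(w, xs)) / sw          # weighted means
    my = sum(wi * y for wi, y in zip(w, ys)) / sw
    sxx = sum(wi * (x - mx) ** 2 for wi, x in zip(w, xs))
    sxy = sum(wi * (x - mx) * (y - my) for wi, x, y in zip(w, xs, ys))
    slope = sxy / sxx                                      # local slope estimate
    return my + slope * (x0 - mx)                          # fitted value at x0

# illustrative smooth relationship sampled on a grid
xs = [i / 10 for i in range(50)]
ys = [math.sin(x) for x in xs]
fit = local_linear(xs, ys, 2.0)   # local fit near x = 2
```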
Corporate prediction models, ratios or regression analysis?
Bijnen, E.J.; Wijn, M.F.C.M.
1994-01-01
The models developed in the literature with respect to the prediction of a company's failure are based on ratios. It has been shown before that these models should be rejected on theoretical grounds. Our study of industrial companies in the Netherlands shows that the ratios which are used in...
Predictive models of forest dynamics.
Purves, Drew; Pacala, Stephen
2008-06-13
Dynamic global vegetation models (DGVMs) have shown that forest dynamics could dramatically alter the response of the global climate system to increased atmospheric carbon dioxide over the next century. But there is little agreement between different DGVMs, making forest dynamics one of the greatest sources of uncertainty in predicting future climate. DGVM predictions could be strengthened by integrating the ecological realities of biodiversity and height-structured competition for light, facilitated by recent advances in the mathematics of forest modeling, ecological understanding of diverse forest communities, and the availability of forest inventory data.
Equivalency and unbiasedness of grey prediction models
Bo Zeng; Chuan Li; Guo Chen; Xianjun Long
2015-01-01
In order to deeply research the structure discrepancy and modeling mechanism among different grey prediction models, the equivalence and unbiasedness of grey prediction models are analyzed and verified. The results show that all the grey prediction models that are strictly derived from x(0)(k) + az(1)(k) = b have the identical model structure and simulation precision. Moreover, unbiased simulation of a homogeneous exponential sequence can be accomplished. However, the models derived from dx(1)/dt + ax(1) = b are only close to those derived from x(0)(k) + az(1)(k) = b provided that |a| satisfies |a| < 0.1; nor can they achieve unbiased simulation of a homogeneous exponential sequence. The above conclusions are proved and verified through some theorems and examples.
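The classic GM(1,1) construction behind the equations above can be sketched directly: accumulate the series, estimate (a, b) in x(0)(k) + a·z(1)(k) = b by least squares on background values z(1)(k), and forecast through the whitening-equation response. The input series below is an illustrative homogeneous exponential (growth ratio 1.05, so |a| ≈ 0.05, within the |a| < 0.1 regime the abstract mentions).

```python
import math

def gm11_forecast(x0, steps=1):
    """Fit GM(1,1) to the sequence x0 and forecast `steps` values ahead."""
    n = len(x0)
    x1 = [x0[0]]
    for v in x0[1:]:
        x1.append(x1[-1] + v)                             # accumulated (1-AGO) series
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]  # background values z(1)(k)
    y, m = x0[1:], n - 1
    szz, sz = sum(v * v for v in z), sum(z)
    szy, sy = sum(v * w for v, w in zip(z, y)), sum(y)
    det = m * szz - sz * sz                               # 2x2 normal equations for [a, b]
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det
    xhat1 = lambda k: (x0[0] - b / a) * math.exp(-a * k) + b / a  # whitening response
    return [xhat1(k) - xhat1(k - 1) for k in range(n, n + steps)]

series = [100.0 * 1.05 ** k for k in range(8)]            # homogeneous exponential input
pred = gm11_forecast(series)[0]                            # next term, close to 100 * 1.05**8
```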
Showing Automatically Generated Students' Conceptual Models to Students and Teachers
Perez-Marin, Diana; Pascual-Nieto, Ismael
2010-01-01
A student conceptual model can be defined as a set of interconnected concepts associated with an estimation value that indicates how well these concepts are used by the students. It can model just one student or a group of students, and can be represented as a concept map, conceptual diagram or one of several other knowledge representation…
Web of Science: showing a bug today that can mislead scientific research output's prediction
Batista, Pablo Diniz; Fauth, Leduc Hermeto de Almeida; Brandão, Marcia de Oliveira Reis
2016-01-01
As has happened in all domains of human activity, economic issues and the growing number of people working in scientific research have altered both the way scientific production is evaluated and the objectives of that evaluation. Introduced in 2005 by J. E. Hirsch as an indicator able to measure individual scientific output not only in terms of quantity but also of quality, the h-index has spread throughout the world. In 2007, Hirsch proposed its adoption also as the best predictor of future scientific achievement and, consequently, as a useful guide for investments in research and for institutions when hiring members of their scientific staff. Since then, several authors have used the Thomson ISI Web of Science database to develop their own proposals for evaluating research output. Here we show that a subtle flaw in Web of Science can inflate the collected results, thereby compromising the accuracy and, consequently, the effectiveness of Hirsch's proposal and its variations
Modelling Chemical Reasoning to Predict Reactions
Segler, Marwin H S
2016-01-01
The ability to reason beyond established knowledge allows organic chemists to solve synthetic problems and to invent novel transformations. Here, we propose a model which mimics chemical reasoning and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outperforms a rule-based expert system in the reaction prediction task for 180,000 randomly selected binary reactions. We show that our data-driven model generalises even beyond known reaction types, and is thus capable of effectively (re-)discovering novel transformations (even including transition-metal catalysed reactions). Our model enables computers to infer hypotheses about reactivity and reactions by only considering the intrinsic local structure of the graph, and because each single reaction prediction is typically ac...
Managing waiting times to predict no-shows and cancelations at a children’s hospital
Miguel Rodríguez-García
2016-12-01
Purpose: Since long waits in hospitals have been found to be related to high rates of no-shows and cancelations, managing waiting times should be considered an important tool that hospitals can use to reduce missed appointments. The aim of this study is to analyze patients' behavior in order to predict no-show and cancelation rates correlated with waiting times. Design/methodology/approach: This study is based on data from a US children's hospital, covering all the appointments registered during one year of observation. We used the call-appointment interval to establish the wait time to get an appointment. Four different types of appointment-keeping behavior and two types of patients were distinguished: arrival, no-show, cancelation with no reschedule, and cancelation with reschedule; and new and established patients. Findings: Results confirmed a strong impact of long waiting times on patients' appointment-keeping behavior, and logarithmic regression was found to be the best-fit function for the correlation between the variables in all cases. The correlation analysis showed that new patients tend to miss appointments more often than established patients as waiting time increases. It was also found that, depending on the patients' appointment distribution, it might become more complicated for hospitals to reduce missed appointments as the waiting time is reduced. Originality/value: The methodology applied in our study, which combines regression analysis with analysis of patients' appointment distribution, can help health care managers understand the implications of long waiting times and address improvements related to patient satisfaction and hospital performance.
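The logarithmic best-fit relationship the study reports can be sketched as a simple regression on the log of the waiting time. The numbers below are made up for illustration (they are not the hospital's data); the closed-form ordinary least squares fit of rate = a + b·ln(wait) is the only technique assumed:

```python
import math

# Sketch with made-up numbers: fit the logarithmic model
# rate = a + b * ln(wait) via ordinary least squares on x = ln(wait).
waits = [1, 3, 7, 14, 30, 60, 90]                    # days call-to-appointment
noshow = [0.05, 0.09, 0.12, 0.15, 0.18, 0.21, 0.23]  # hypothetical no-show rates

xs = [math.log(w) for w in waits]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(noshow) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, noshow)) / \
    sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar

def predict(wait):
    """Predicted no-show rate for a given wait in days."""
    return a + b * math.log(wait)
```

A positive slope b reproduces the paper's qualitative finding: the longer patients wait for an appointment, the more likely they are to miss it.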
Global Solar Dynamo Models: Simulations and Predictions
Mausumi Dikpati; Peter A. Gilman
2008-03-01
Flux-transport type solar dynamos have achieved considerable success in correctly simulating many solar cycle features, and are now being used for prediction of solar cycle timing and amplitude. We first define flux-transport dynamos and demonstrate how they work. The essential added ingredient in this class of models is meridional circulation, which governs the dynamo period and also plays a crucial role in determining the Sun's memory about its past magnetic fields. We show that flux-transport dynamo models can explain many key features of solar cycles. Then we show that a predictive tool can be built from this class of dynamo that can be used to predict mean solar cycle features by assimilating magnetic field data from previous cycles.
PREDICTIVE CAPACITY OF ARCH FAMILY MODELS
Raphael Silveira Amaro
2016-03-01
In recent decades, a remarkable number of models, variants of the Autoregressive Conditional Heteroscedastic family, have been developed and empirically tested, making the process of choosing a particular model extremely complex. This research aims to compare the predictive capacity, using the Model Confidence Set procedure, of five conditional heteroskedasticity models, considering eight different statistical probability distributions. The financial series used refer to the log-return series of the Bovespa index and the Dow Jones Industrial Index in the period between 27 October 2008 and 30 December 2014. The empirical evidence showed that, in general, the competing models have great homogeneity in making predictions, whether for the stock market of a developed country or for that of a developing country. An equivalent result can be inferred for the statistical probability distributions used.
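The workhorse of the ARCH family compared in such studies is the GARCH(1,1) process. The sketch below (an illustrative simulation, not the paper's estimation procedure or data) generates a GARCH(1,1) return series and checks it against the known unconditional variance ω / (1 − α − β):

```python
import random
import math

# Illustrative sketch: simulate r_t = sigma_t * z_t with
# sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2.
# alpha + beta < 1 guarantees a finite unconditional variance.
random.seed(42)
omega, alpha, beta = 0.1, 0.1, 0.8
target_var = omega / (1 - alpha - beta)   # analytic unconditional variance

n = 20000
var = target_var                          # start at the stationary variance
returns = []
for _ in range(n):
    r = math.sqrt(var) * random.gauss(0.0, 1.0)
    returns.append(r)
    var = omega + alpha * r * r + beta * var

sample_var = sum(r * r for r in returns) / n
```

Fitting competing specifications of this family to log-returns and scoring their out-of-sample forecasts is what the Model Confidence Set procedure then compares.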
Predictive QSAR modeling of phosphodiesterase 4 inhibitors.
Kovalishyn, Vasyl; Tanchuk, Vsevolod; Charochkina, Larisa; Semenuta, Ivan; Prokopenko, Volodymyr
2012-02-01
A series of diverse organic compounds, phosphodiesterase type 4 (PDE-4) inhibitors, have been modeled using a QSAR-based approach. 48 QSAR models were compared by following the same procedure with different combinations of descriptors and machine learning methods. QSAR methodologies used random forests and associative neural networks. The predictive ability of the models was tested through leave-one-out cross-validation, giving a Q² = 0.66-0.78 for regression models and total accuracies Ac=0.85-0.91 for classification models. Predictions for the external evaluation sets obtained accuracies in the range of 0.82-0.88 (for active/inactive classifications) and Q² = 0.62-0.76 for regressions. The method showed itself to be a potential tool for estimation of IC₅₀ of new drug-like candidates at early stages of drug development. Copyright © 2011 Elsevier Inc. All rights reserved.
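The leave-one-out cross-validated Q² reported above can be computed as Q² = 1 − PRESS / SS_tot. The sketch below (a toy one-descriptor example, not the paper's QSAR pipeline, random forests, or neural networks) shows the mechanics on synthetic data:

```python
# Sketch: leave-one-out cross-validated Q^2 for a one-descriptor
# linear model, Q^2 = 1 - PRESS / SS_tot.
def loo_q2(xs, ys):
    n = len(xs)
    press = 0.0
    for i in range(n):                      # hold out point i
        xt = [x for j, x in enumerate(xs) if j != i]
        yt = [y for j, y in enumerate(ys) if j != i]
        xbar = sum(xt) / (n - 1)
        ybar = sum(yt) / (n - 1)
        b = sum((x - xbar) * (y - ybar) for x, y in zip(xt, yt)) / \
            sum((x - xbar) ** 2 for x in xt)
        a = ybar - b * xbar
        press += (ys[i] - (a + b * xs[i])) ** 2   # prediction error on i
    ybar_all = sum(ys) / n
    ss_tot = sum((y - ybar_all) ** 2 for y in ys)
    return 1.0 - press / ss_tot

# hypothetical descriptor vs. activity data with a strong linear trend
xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [2.1, 3.9, 6.2, 7.8, 10.1, 12.0, 13.9, 16.2]
q2 = loo_q2(xs, ys)
```

Values of Q² in the 0.6-0.8 range, as the study reports, indicate that predictions on held-out compounds explain most of the activity variance.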
An Assessment and Selection Model of Trade Shows for Creative Industries in the Fashion Sector
Afrin Fauzya Rizana
2017-07-01
The article identifies the criteria for choosing a trade show and develops a basic exhibition-selection model for creative industry players to apply before deciding to participate in a trade show. This is necessary to ensure that the expense in effort, money, and time will be worth the results. Based on a literature review and interviews, six criteria were used: location, booth position, organizer reputation, cost estimation, prestige, and reputation of other participants. After the selection criteria were identified, calculations were performed to measure the criteria weights using the AHP approach. Based on the weight calculations, it was found that booth position had the highest importance weight, followed by trade show location, organizer reputation, cost estimation, prestige, and reputation of other participants. The weight values are then used to calculate a trade show's predicted value. The predicted value generated by the model was then compared to values from past data. The model has an accuracy rate of 89%, with no significant difference between the values generated by the model and the values of the past data.
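The AHP weighting step works by eliciting pairwise importance judgments between criteria and extracting priority weights from the resulting comparison matrix. The sketch below uses hypothetical judgments for three of the six criteria (the matrix values are not from the paper) and the standard row geometric-mean approximation of the principal eigenvector:

```python
# Sketch with hypothetical judgments: derive AHP criteria weights
# from a pairwise comparison matrix via the row geometric-mean
# approximation of the principal eigenvector.
criteria = ["booth position", "location", "organizer reputation"]

# A[i][j] = how much more important criterion i is than criterion j
# (reciprocal matrix: A[j][i] = 1 / A[i][j])
A = [
    [1.0,     2.0,     3.0],
    [1 / 2.0, 1.0,     2.0],
    [1 / 3.0, 1 / 2.0, 1.0],
]

geo = [(row[0] * row[1] * row[2]) ** (1.0 / 3.0) for row in A]
total = sum(geo)
weights = [g / total for g in geo]   # normalized priority weights
```

With these example judgments, booth position receives the largest weight, matching the ordering the study found.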
Caries risk assessment models in caries prediction
Amila Zukanović
2013-11-01
Objective. The aim of this research was to assess the efficiency of different multifactor models in caries prediction. Material and methods. Data from the questionnaire and objective examination of 109 examinees were entered into the Cariogram, Previser and Caries-Risk Assessment Tool (CAT) multifactor risk assessment models. Caries risk was assessed with all three models for each patient, classifying them as low-, medium- or high-risk patients. The development of new caries lesions over a period of three years [Decay Missing Filled Tooth (DMFT) increment = difference between the Decay Missing Filled Tooth Surface (DMFTS) index at baseline and follow-up] allowed examination of the predictive capacity of the different multifactor models. Results. The data gathered showed that the different multifactor risk assessment models give significantly different results (Friedman test: Chi square = 100.073, p = 0.000). The Cariogram was the model that identified the majority of examinees as medium-risk patients (70%). The other two models were more radical in risk assessment, giving more unfavorable risk profiles for patients. In only 12% of the patients did the three multifactor models assess risk in the same way. Previser and CAT gave the same results in 63% of cases; the Wilcoxon test showed no statistically significant difference in caries risk assessment between these two models (Z = -1.805, p = 0.071). Conclusions. Evaluation of the three multifactor caries risk assessment models (Cariogram, PreViser and CAT) showed that only the Cariogram can successfully predict new caries development in 12-year-old Bosnian children.
PREDICT : model for prediction of survival in localized prostate cancer
Kerkmeijer, Linda G W; Monninkhof, Evelyn M.; van Oort, Inge M.; van der Poel, Henk G.; de Meerleer, Gert; van Vulpen, Marco
2016-01-01
Purpose: Current models for prediction of prostate cancer-specific survival do not incorporate all present-day interventions. In the present study, a pre-treatment prediction model for patients with localized prostate cancer was developed.Methods: From 1989 to 2008, 3383 patients were treated with I
Calibrated predictions for multivariate competing risks models.
Gorfine, Malka; Hsu, Li; Zucker, David M; Parmigiani, Giovanni
2014-04-01
Prediction models for time-to-event data play a prominent role in assessing the individual risk of a disease, such as cancer. Accurate disease prediction models provide an efficient tool for identifying individuals at high risk, and provide the groundwork for estimating the population burden and cost of disease and for developing patient care guidelines. We focus on risk prediction of a disease in which family history is an important risk factor that reflects inherited genetic susceptibility, shared environment, and common behavior patterns. In this work family history is accommodated using frailty models, with the main novel feature being allowing for competing risks, such as other diseases or mortality. We show through a simulation study that naively treating competing risks as independent right censoring events results in non-calibrated predictions, with the expected number of events overestimated. Discrimination performance is not affected by ignoring competing risks. Our proposed prediction methodologies correctly account for competing events, are very well calibrated, and easy to implement.
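The overestimation described above can be seen in a closed-form two-risk example. The sketch below (a textbook exponential-hazards illustration, not the paper's frailty model) compares the naive estimate, which treats the competing event as independent censoring, with the correct cumulative incidence function:

```python
import math

# Sketch: cause-specific hazards h1 (disease of interest) and h2
# (competing event), both constant. Treating the competing event as
# independent censoring gives
#   F_naive(t) = 1 - exp(-h1 * t)
# while the correct cumulative incidence of cause 1 is
#   F_cif(t) = h1 / (h1 + h2) * (1 - exp(-(h1 + h2) * t)).
h1, h2, t = 0.05, 0.10, 10.0

f_naive = 1 - math.exp(-h1 * t)
f_cif = h1 / (h1 + h2) * (1 - math.exp(-(h1 + h2) * t))
overestimate = f_naive - f_cif   # always > 0 when h2 > 0
```

The naive probability exceeds the true cumulative incidence whenever the competing hazard is nonzero, which is exactly the miscalibration (expected events overestimated) that the simulation study in the abstract demonstrates.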
Salvatore, Jessica E.; Aliev, Fazil; Edwards, Alexis C.; Evans, David M.; Macleod, John; Hickman, Matthew; Lewis, Glyn; Kendler, Kenneth S.; Loukola, Anu; Korhonen, Tellervo; Latvala, Antti; Rose, Richard J.; Kaprio, Jaakko; Dick, Danielle M.
2014-01-01
Alcohol problems represent a classic example of a complex behavioral outcome that is likely influenced by many genes of small effect. A polygenic approach, which examines aggregate measured genetic effects, can have predictive power in cases where individual genes or genetic variants do not. In the current study, we first tested whether polygenic risk for alcohol problems—derived from genome-wide association estimates of an alcohol problems factor score from the age 18 assessment of the Avon Longitudinal Study of Parents and Children (ALSPAC; n = 4304 individuals of European descent; 57% female)—predicted alcohol problems earlier in development (age 14) in an independent sample (FinnTwin12; n = 1162; 53% female). We then tested whether environmental factors (parental knowledge and peer deviance) moderated polygenic risk to predict alcohol problems in the FinnTwin12 sample. We found evidence for both polygenic association and for additive polygene-environment interaction. Higher polygenic scores predicted a greater number of alcohol problems (range of Pearson partial correlations 0.07–0.08, all p-values ≤ 0.01). Moreover, genetic influences were significantly more pronounced under conditions of low parental knowledge or high peer deviance (unstandardized regression coefficients (b), p-values (p), and percent of variance (R2) accounted for by interaction terms: b = 1.54, p = 0.02, R2 = 0.33%; b = 0.94, p = 0.04, R2 = 0.30%, respectively). Supplementary set-based analyses indicated that the individual top single nucleotide polymorphisms (SNPs) contributing to the polygenic scores were not individually enriched for gene-environment interaction. Although the magnitude of the observed effects are small, this study illustrates the usefulness of polygenic approaches for understanding the pathways by which measured genetic predispositions come together with environmental factors to predict complex behavioral outcomes. PMID:24727307
NONLINEAR MODEL PREDICTIVE CONTROL OF CHEMICAL PROCESSES
R. G. SILVA
1999-03-01
A new algorithm for model predictive control is presented. The algorithm utilizes a simultaneous solution and optimization strategy to solve the model's differential equations. The equations are discretized by equidistant collocation and, along with the algebraic model equations, are included as constraints in a nonlinear programming (NLP) problem. This algorithm is compared with the algorithm that uses orthogonal collocation on finite elements. The equidistant collocation algorithm results in simpler equations, providing a decrease in computation time for the control moves. Simulation results are presented and show satisfactory performance of this algorithm.
ENSO Prediction using Vector Autoregressive Models
Chapman, D. R.; Cane, M. A.; Henderson, N.; Lee, D.; Chen, C.
2013-12-01
A recent comparison (Barnston et al., 2012, BAMS) shows that the ENSO forecasting skill of dynamical models now exceeds that of statistical models, but the best statistical models are comparable to all but the very best dynamical models. In that comparison the leading statistical model is the one based on the Empirical Model Reduction (EMR) method. Here we report on experiments with multilevel Vector Autoregressive models using only sea surface temperatures (SSTs) as predictors. VAR(L) models generalize Linear Inverse Models (LIM), which are a VAR(1) method, as well as multilevel univariate autoregressive models. Optimal forecast skill is achieved using 12 to 14 months of prior state information (i.e., 12-14 levels), which allows SSTs alone to capture the effects of other variables such as heat content as well as seasonality. The use of multiple levels allows a model advancing one month at a time to perform at least as well for a 6-month forecast as a model constructed to explicitly forecast 6 months ahead. We infer that the multilevel model has fully captured the linear dynamics (cf. Penland and Magorian, 1993, J. Climate). Finally, while VAR(L) is equivalent to L-level EMR, we show in a 150-year cross-validated assessment that we can increase forecast skill by improving on the EMR initialization procedure. The greatest benefit of this change is in allowing the prediction to make effective use of information over many more months.
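A VAR(L) model of this kind is fit by least squares on stacked lagged states. The sketch below (a minimal 2-variable, 2-lag synthetic example, not the authors' ENSO code or data) estimates the coefficient matrices and produces a one-step forecast:

```python
import numpy as np

# Minimal sketch: fit x_t = A1 x_{t-1} + A2 x_{t-2} + e_t by least
# squares on a synthetic series, then forecast one step ahead.
rng = np.random.default_rng(0)
L, dim, n = 2, 2, 500

# ground-truth coefficients (chosen so the process is stable)
A1 = np.array([[0.5, 0.1], [0.0, 0.4]])
A2 = np.array([[0.2, 0.0], [0.1, 0.2]])

x = np.zeros((n, dim))
for t in range(2, n):
    x[t] = A1 @ x[t - 1] + A2 @ x[t - 2] + 0.1 * rng.standard_normal(dim)

# stack the L lagged states into the regression design matrix
Y = x[L:]                                                 # (n-L, dim)
X = np.hstack([x[L - k:n - k] for k in range(1, L + 1)])  # (n-L, dim*L)
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)              # (dim*L, dim)

A1_hat = coef[:dim].T     # recovered lag-1 coefficient matrix
A2_hat = coef[dim:].T     # recovered lag-2 coefficient matrix
forecast = A1_hat @ x[-1] + A2_hat @ x[-2]                # one-step forecast
```

Increasing L to 12-14 monthly lags, as in the abstract, lets the SST state vector alone carry the memory of unobserved variables such as upper-ocean heat content.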
Predictive Modeling of Cardiac Ischemia
Anderson, Gary T.
1996-01-01
The goal of the Contextual Alarms Management System (CALMS) project is to develop sophisticated models to predict the onset of clinical cardiac ischemia before it occurs. The system will continuously monitor cardiac patients and set off an alarm when they appear about to suffer an ischemic episode. The models take as inputs information from patient history and combine it with continuously updated information extracted from blood pressure, oxygen saturation and ECG lines. Expert system, statistical, neural network and rough set methodologies are then used to forecast the onset of clinical ischemia before it transpires, thus allowing early intervention aimed at preventing morbid complications from occurring. The models will differ from previous attempts by including combinations of continuous and discrete inputs. A commercial medical instrumentation and software company has invested funds in the project with a goal of commercialization of the technology. The end product will be a system that analyzes physiologic parameters and produces an alarm when myocardial ischemia is present. If proven feasible, a CALMS-based system will be added to existing heart monitoring hardware.
Characterizing Attention with Predictive Network Models.
Rosenberg, M D; Finn, E S; Scheinost, D; Constable, R T; Chun, M M
2017-04-01
Recent work shows that models based on functional connectivity in large-scale brain networks can predict individuals' attentional abilities. While being some of the first generalizable neuromarkers of cognitive function, these models also inform our basic understanding of attention, providing empirical evidence that: (i) attention is a network property of brain computation; (ii) the functional architecture that underlies attention can be measured while people are not engaged in any explicit task; and (iii) this architecture supports a general attentional ability that is common to several laboratory-based tasks and is impaired in attention deficit hyperactivity disorder (ADHD). Looking ahead, connectivity-based predictive models of attention and other cognitive abilities and behaviors may potentially improve the assessment, diagnosis, and treatment of clinical dysfunction. Copyright © 2017 Elsevier Ltd. All rights reserved.
A model comparison approach shows stronger support for economic models of fertility decline.
Shenk, Mary K; Towner, Mary C; Kress, Howard C; Alam, Nurul
2013-05-14
The demographic transition is an ongoing global phenomenon in which high fertility and mortality rates are replaced by low fertility and mortality. Despite intense interest in the causes of the transition, especially with respect to decreasing fertility rates, the underlying mechanisms motivating it are still subject to much debate. The literature is crowded with competing theories, including causal models that emphasize (i) mortality and extrinsic risk, (ii) the economic costs and benefits of investing in self and children, and (iii) the cultural transmission of low-fertility social norms. Distinguishing between models, however, requires more comprehensive, better-controlled studies than have been published to date. We use detailed demographic data from recent fieldwork to determine which models produce the most robust explanation of the rapid, recent demographic transition in rural Bangladesh. To rigorously compare models, we use an evidence-based statistical approach using model selection techniques derived from likelihood theory. This approach allows us to quantify the relative evidence the data give to alternative models, even when model predictions are not mutually exclusive. Results indicate that fertility, measured as either total fertility or surviving children, is best explained by models emphasizing economic factors and related motivations for parental investment. Our results also suggest important synergies between models, implicating multiple causal pathways in the rapidity and degree of recent demographic transitions.
Ana Carla Pereira de Araujo
2014-11-01
Background: Studies have demonstrated the diagnostic accuracy and prognostic value of physical stress echocardiography in coronary artery disease. However, the prediction of mortality and major cardiac events in patients with exercise tests positive for myocardial ischemia is limited. Objective: To evaluate the effectiveness of physical stress echocardiography in the prediction of mortality and major cardiac events in patients with exercise tests positive for myocardial ischemia. Methods: This is a retrospective cohort in which 866 consecutive patients with exercise tests positive for myocardial ischemia, who underwent physical stress echocardiography, were studied. Patients were divided into two groups: physical stress echocardiography negative (G1) or positive (G2) for myocardial ischemia. The endpoints analyzed were all-cause mortality and major cardiac events, defined as cardiac death and non-fatal acute myocardial infarction. Results: G2 comprised 205 patients (23.7%). During the mean 85.6 ± 15.0-month follow-up, there were 26 deaths, of which six were cardiac deaths, and 25 non-fatal myocardial infarction cases. The independent predictors of mortality were age, diabetes mellitus, and positive physical stress echocardiography (hazard ratio: 2.69; 95% confidence interval: 1.20 - 6.01; p = 0.016). The independent predictors of major cardiac events were age, previous coronary artery disease, positive physical stress echocardiography (hazard ratio: 2.75; 95% confidence interval: 1.15 - 6.53; p = 0.022) and absence of a 10% increase in ejection fraction. All-cause mortality and the incidence of major cardiac events were significantly higher in G2 (p < 0.001 and p = 0.001, respectively). Conclusion: Physical stress echocardiography provides additional prognostic information in patients with exercise tests positive for myocardial ischemia.
Seasonal Predictability in a Model Atmosphere.
Lin, Hai
2001-07-01
The predictability of atmospheric mean-seasonal conditions in the absence of externally varying forcing is examined. A perfect-model approach is adopted, in which a global T21 three-level quasigeostrophic atmospheric model is integrated over 21 000 days to obtain a reference atmospheric orbit. The model is driven by a time-independent forcing, so that the only source of time variability is the internal dynamics. The forcing is set to perpetual winter conditions in the Northern Hemisphere (NH) and perpetual summer in the Southern Hemisphere. A significant temporal variability in the NH 90-day mean states is observed. The component of that variability associated with the higher-frequency motions, or climate noise, is estimated using a method developed by Madden. In the polar region, and to a lesser extent in the midlatitudes, the temporal variance of the winter means is significantly greater than the climate noise, suggesting some potential predictability in those regions. Forecast experiments are performed to see whether the presence of variance in the 90-day mean states that is in excess of the climate noise leads to some skill in the prediction of these states. Ensemble forecast experiments with nine members starting from slightly different initial conditions are performed for 200 different 90-day means along the reference atmospheric orbit. The serial correlation between the ensemble means and the reference orbit shows that there is skill in the 90-day mean predictions. The skill is concentrated in those regions of the NH that have the largest variance in excess of the climate noise. An EOF analysis shows that nearly all the predictive skill in the seasonal means is associated with one mode of variability with a strong axisymmetric component.
Nonconvex Model Predictive Control for Commercial Refrigeration
Hovgaard, Tobias Gybel; Larsen, Lars F.S.; Jørgensen, John Bagterp
2013-01-01
is to minimize the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear, and the constraints are convex. The cost...... the iterations, which is more than fast enough to run in real-time. We demonstrate our method on a realistic model, with a full year simulation and 15 minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost...
Return Predictability, Model Uncertainty, and Robust Investment
Lukas, Manuel
Stock return predictability is subject to great uncertainty. In this paper we use the model confidence set approach to quantify uncertainty about expected utility from investment, accounting for potential return predictability. For monthly US data and six representative return prediction models, we...
Prediction models from CAD models of 3D objects
Camps, Octavia I.
1992-11-01
In this paper we present a probabilistic prediction based approach for CAD-based object recognition. Given a CAD model of an object, the PREMIO system combines techniques of analytic graphics and physical models of lights and sensors to predict how features of the object will appear in images. In nearly 4,000 experiments on analytically-generated and real images, we show that in a semi-controlled environment, predicting the detectability of features of the image can successfully guide a search procedure to make informed choices of model and image features in its search for correspondences that can be used to hypothesize the pose of the object. Furthermore, we provide a rigorous experimental protocol that can be used to determine the optimal number of correspondences to seek so that the probability of failing to find a pose and of finding an inaccurate pose are minimized.
Artificial Neural Network Model for Predicting Compressive
Salim T. Yousif
2013-05-01
Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural-network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing the model with unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20%, and 88% of the output results have absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results showed that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
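The back-propagation training loop at the core of such a model can be sketched in a few lines. The example below is a toy one-input network on made-up data (it is not the paper's network, architecture, or data set; the w/c-to-strength relation here is synthetic, following only the usual trend of strength falling as w/c rises):

```python
import numpy as np

# Toy sketch: one hidden tanh layer, linear output, trained by
# full-batch gradient descent on mean squared error.
rng = np.random.default_rng(1)
wc = np.linspace(0.3, 0.7, 20).reshape(-1, 1)               # water/cement ratio
strength = 60.0 - 55.0 * wc + rng.normal(0, 0.5, wc.shape)  # MPa, synthetic

x = (wc - wc.mean()) / wc.std()          # normalize the input
W1 = rng.normal(0, 0.5, (1, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 0.5, (4, 1)); b2 = np.zeros(1)

lr = 0.01
losses = []
for _ in range(2000):
    h = np.tanh(x @ W1 + b1)             # forward pass: hidden activations
    pred = h @ W2 + b2                   # forward pass: network output
    err = pred - strength
    losses.append(float(np.mean(err ** 2)))
    # backward pass: gradients of the mean squared error
    g_pred = 2 * err / len(x)
    W2 -= lr * h.T @ g_pred; b2 -= lr * g_pred.sum(0)
    g_h = (g_pred @ W2.T) * (1 - h ** 2)  # back-propagate through tanh
    W1 -= lr * x.T @ g_h;  b1 -= lr * g_h.sum(0)
```

A real model, as in the study, would take the full mix proportions, MAS, and slump as inputs and be validated on held-out mixes.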
Foundation Settlement Prediction Based on a Novel NGM Model
Peng-Yu Chen
2014-01-01
Prediction of foundation or subgrade settlement is very important during engineering construction. Given that many settlement-time sequences exhibit a nonhomogeneous index trend, a novel grey forecasting model called the NGM (1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM (1,1,k,c) model has the property of white exponential law coincidence and can predict a pure nonhomogeneous index sequence precisely. We used two case studies to verify the predictive effect of the NGM (1,1,k,c) model for settlement prediction. The results show that this model can achieve excellent prediction accuracy; thus, the model is quite suitable for simulation and prediction of approximately nonhomogeneous index sequences and has excellent application value in settlement prediction.
Constructing predictive models of human running.
Maus, Horst-Moritz; Revzen, Shai; Guckenheimer, John; Ludwig, Christian; Reger, Johann; Seyfarth, Andre
2015-02-06
Running is an essential mode of human locomotion, during which ballistic aerial phases alternate with phases when a single foot contacts the ground. The spring-loaded inverted pendulum (SLIP) provides a starting point for modelling running, and generates ground reaction forces that resemble those of the centre of mass (CoM) of a human runner. Here, we show that while SLIP reproduces within-step kinematics of the CoM in three dimensions, it fails to reproduce stability and predict future motions. We construct SLIP control models using data-driven Floquet analysis, and show how these models may be used to obtain predictive models of human running with six additional states comprising the position and velocity of the swing-leg ankle. Our methods are general, and may be applied to any rhythmic physical system. We provide an approach for identifying an event-driven linear controller that approximates an observed stabilization strategy, and for producing a reduced-state model which closely recovers the observed dynamics. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
Predictive Model Assessment for Count Data
2007-09-05
Critiques count regression models for patent data and assesses the predictive performance of Bayesian age-period-cohort models for larynx cancer counts in Germany. A recent suggestion by Baker and … is considered. (Report excerpt; surviving captions: boxplots of various scores for the patent-data count regressions; a table of four predictive models for larynx cancer counts in Germany, 1998–2002.)
Araujo, Ana Carla Pereira de; Santos, Bruno F de Oliveira; Calasans, Flavia Ricci; Pinto, Ibraim M Francisco; Oliveira, Daniel Pio de; Melo, Luiza Dantas; Andrade, Stephanie Macedo; Tavares, Irlaneide da Silva; Sousa, Antonio Carlos Sobral; Oliveira, Joselina Luzia Menezes
2014-11-01
Background: Studies have demonstrated the diagnostic accuracy and prognostic value of physical stress echocardiography in coronary artery disease. However, the prediction of mortality and major cardiac events in patients with an exercise test positive for myocardial ischemia is limited. Objective: To evaluate the effectiveness of physical stress echocardiography in the prediction of mortality and major cardiac events in patients with an exercise test positive for myocardial ischemia. Methods: This is a retrospective cohort study in which 866 consecutive patients with an exercise test positive for myocardial ischemia, who underwent physical stress echocardiography, were studied. Patients were divided into two groups: physical stress echocardiography negative (G1) or positive (G2) for myocardial ischemia. The endpoints analyzed were all-cause mortality and major cardiac events, defined as cardiac death and non-fatal acute myocardial infarction. Results: G2 comprised 205 patients (23.7%). During the mean 85.6 ± 15.0-month follow-up, there were 26 deaths, of which six were cardiac deaths, and 25 cases of non-fatal myocardial infarction. The independent predictors of mortality were: age, diabetes mellitus, and positive physical stress echocardiography (hazard ratio: 2.69; 95% confidence interval: 1.20-6.01; p = 0.016). The independent predictors of major cardiac events were: age, previous coronary artery disease, positive physical stress echocardiography (hazard ratio: 2.75; 95% confidence interval: 1.15-6.53; p = 0.022) and absence of a 10% increase in ejection fraction. All-cause mortality and the incidence of major cardiac events were significantly higher in G2.
Nonlinear chaotic model for predicting storm surges
Siek, M.; Solomatine, D.P.
This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by the adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables.
EFFICIENT PREDICTIVE MODELLING FOR ARCHAEOLOGICAL RESEARCH
Balla, A.; Pavlogeorgatos, G.; Tsiafakis, D.; Pavlidis, G.
2014-01-01
The study presents a general methodology for designing, developing and implementing predictive modelling for identifying areas of archaeological interest. The methodology is based on documented archaeological data and geographical factors, geospatial analysis and predictive modelling, and has been applied to the identification of possible Macedonian tombs' locations in Northern Greece. The model was tested extensively and the results were validated using a commonly used predictive gain measure.
How to Establish Clinical Prediction Models
Yong-ho Lee
2016-03-01
A clinical prediction model can be applied to several challenging clinical scenarios: screening high-risk individuals for asymptomatic disease, predicting future events such as disease or death, and assisting medical decision-making and health education. Despite the impact of clinical prediction models on practice, prediction modeling is a complex process requiring careful statistical analyses and sound clinical judgement. Although there is no definite consensus on the best methodology for model development and validation, a few recommendations and checklists have been proposed. In this review, we summarize five steps for developing and validating a clinical prediction model: preparation for establishing clinical prediction models; dataset selection; handling variables; model generation; and model evaluation and validation. We also review several studies that detail methods for developing clinical prediction models, with comparable examples from real practice. After model development and rigorous validation in relevant settings, possibly with evaluation of utility/usability and fine-tuning, good models can be ready for use in practice. We anticipate that this framework will revitalize the use of predictive or prognostic research in endocrinology, leading to active applications in real clinical practice.
Predicting Protein Secondary Structure with Markov Models
Fischer, Paul; Larsen, Simon; Thomsen, Claus
2004-01-01
The primary structure of a protein is the sequence of its amino acids. The secondary structure describes structural properties of the molecule, such as which parts of it form sheets, helices or coils; spatial and other properties are described by the higher-order structures. The classification task we are considering here is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained …
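The train-then-classify scheme above can be reduced to a toy sketch: one first-order Markov chain per structure class, with a sequence assigned to the class whose chain gives the highest log-likelihood. This is an illustrative simplification, not the authors' trained model; the example sequences are made up.

```python
from collections import defaultdict
import math

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def train_markov(seqs, alphabet=AMINO_ACIDS):
    """First-order Markov chain: transition log-probabilities with
    add-one smoothing, estimated from training sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for s in seqs:
        for a, b in zip(s, s[1:]):
            counts[a][b] += 1
    logp = {}
    for a in alphabet:
        total = sum(counts[a].values()) + len(alphabet)
        logp[a] = {b: math.log((counts[a][b] + 1) / total) for b in alphabet}
    return logp

def score(model, seq):
    """Log-likelihood of the transitions in seq under one chain."""
    return sum(model[a][b] for a, b in zip(seq, seq[1:]))

def classify(models, seq):
    """Assign the class (e.g. helix/sheet/coil) whose chain scores highest."""
    return max(models, key=lambda c: score(models[c], seq))
```

With chains trained on class-specific fragments, classification is a one-line argmax over log-likelihoods.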
Predictive modeling by the cerebellum improves proprioception.
Bhanpuri, Nasir H; Okamura, Allison M; Bastian, Amy J
2013-09-04
Because sensation is delayed, real-time movement control requires not just sensing, but also predicting limb position, a function hypothesized for the cerebellum. Such cerebellar predictions could contribute to perception of limb position (i.e., proprioception), particularly when a person actively moves the limb. Here we show that human cerebellar patients have proprioceptive deficits compared with controls during active movement, but not when the arm is moved passively. Furthermore, when healthy subjects move in a force field with unpredictable dynamics, they have active proprioceptive deficits similar to cerebellar patients. Therefore, muscle activity alone is likely insufficient to enhance proprioception and predictability (i.e., an internal model of the body and environment) is important for active movement to benefit proprioception. We conclude that cerebellar patients have an active proprioceptive deficit consistent with disrupted movement prediction rather than an inability to generally enhance peripheral proprioceptive signals during action and suggest that active proprioceptive deficits should be considered a fundamental cerebellar impairment of clinical importance.
Comparison of Prediction-Error-Modelling Criteria
Jørgensen, John Bagterp; Jørgensen, Sten Bay
2007-01-01
Single and multi-step prediction-error methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state-space model, which is a realization of a continuous-discrete multivariate stochastic transfer function model. The proposed prediction-error methods are demonstrated for a SISO system parameterized by the transfer functions with time delays of a continuous-discrete-time linear stochastic system. The simulations for this case suggest … computational resources. The identification method is suitable for predictive control.
Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.
Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F
2013-04-01
In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference while making as few restrictive parametric assumptions as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models by one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.
Prediction Model of Sewing Technical Condition by Grey Neural Network
DONG Ying; FANG Fang; ZHANG Wei-yuan
2007-01-01
Grey system theory and artificial neural network technology were applied to predict the sewing technical condition. Representative parameters, such as needle and stitch, were selected. A prediction model was established based on the mechanical properties of different fabrics, measured with a KES instrument. Grey relevant degree analysis was applied to choose the input parameters of the neural network. The results showed that the prediction model has good precision: the average relative error was 4.08% for needle and 4.25% for stitch.
Case studies in archaeological predictive modelling
Verhagen, Jacobus Wilhelmus Hermanus Philippus
2007-01-01
In this thesis, a collection of papers is put together dealing with various quantitative aspects of predictive modelling and archaeological prospection. Among the issues covered are the effects of survey bias on the archaeological data used for predictive modelling, and the complexities of testing predictive models.
Childhood asthma prediction models: a systematic review.
Smit, Henriette A; Pinart, Mariona; Antó, Josep M; Keil, Thomas; Bousquet, Jean; Carlsen, Kai H; Moons, Karel G M; Hooft, Lotty; Carlsen, Karin C Lødrup
2015-12-01
Early identification of children at risk of developing asthma at school age is crucial, but the usefulness of childhood asthma prediction models in clinical practice is still unclear. We systematically reviewed all existing prediction models to identify preschool children with asthma-like symptoms at risk of developing asthma at school age. Studies were included if they developed a new prediction model or updated an existing model in children aged 4 years or younger with asthma-like symptoms, with assessment of asthma done between 6 and 12 years of age. 12 prediction models were identified in four types of cohorts of preschool children: those with health-care visits, those with parent-reported symptoms, those at high risk of asthma, or children in the general population. Four basic models included non-invasive, easy-to-obtain predictors only, notably family history, allergic disease comorbidities or precursors of asthma, and severity of early symptoms. Eight extended models included additional clinical tests, mostly specific IgE determination. Some models could better predict asthma development and other models could better rule out asthma development, but the predictive performance of no single model stood out in both aspects simultaneously. This finding suggests that there is a large proportion of preschool children with wheeze for which prediction of asthma development is difficult.
Model predictive control classical, robust and stochastic
Kouvaritakis, Basil
2016-01-01
For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered and the state of the art in computationally tractable methods based on uncertainty tubes presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...
Prediction using patient comparison vs. modeling: a case study for mortality prediction.
Hoogendoorn, Mark; El Hassouni, Ali; Mok, Kwongyen; Ghassemi, Marzyeh; Szolovits, Peter
2016-08-01
Information in Electronic Medical Records (EMRs) can be used to generate accurate predictions for the occurrence of a variety of health states, which can contribute to more pro-active interventions. The very nature of EMRs does make the application of off-the-shelf machine learning techniques difficult. In this paper, we study two approaches to making predictions that have hardly been compared in the past: (1) extracting high-level (temporal) features from EMRs and building a predictive model, and (2) defining a patient similarity metric and predicting based on the outcome observed for similar patients. We analyze and compare both approaches on the MIMIC-II ICU dataset to predict patient mortality and find that the patient similarity approach does not scale well and results in a less accurate model (AUC of 0.68) compared to the modeling approach (0.84). We also show that mortality can be predicted within a median of 72 hours.
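The paper's second approach, predicting from the outcomes observed for similar patients, reduces in its simplest form to k-nearest-neighbour averaging. The sketch below uses a made-up Euclidean similarity over toy features, not the authors' EMR-derived metric.

```python
import numpy as np

def similarity_predict(X_train, y_train, x_new, k=3):
    """Patient-similarity prediction: the outcome rate among the k
    training patients nearest to x_new (Euclidean distance)."""
    d = np.linalg.norm(X_train - x_new, axis=1)
    nearest = np.argsort(d)[:k]
    return y_train[nearest].mean()

# Toy "patients": two features each, binary mortality outcome
X_train = np.array([[0., 0.], [0., 1.], [5., 5.], [5., 6.]])
y_train = np.array([0., 0., 1., 1.])
print(similarity_predict(X_train, y_train, np.array([5., 5.5]), k=2))
```

Note the scaling issue the paper reports: every prediction requires distances to the whole training set, whereas a fitted model is evaluated in constant time.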
Combining logistic regression and neural networks to create predictive models.
Spackman, K. A.
1992-01-01
Neural networks are being used widely in medicine and other areas to create predictive models from data. The statistical method that most closely parallels neural networks is logistic regression. This paper outlines some ways in which neural networks and logistic regression are similar, shows how a small modification of logistic regression can be used in the training of neural network models, and illustrates the use of this modification for variable selection and predictive model building …
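The parallel the paper draws can be made concrete by writing logistic regression as a neural network with no hidden layer: a single sigmoid unit trained by gradient descent on the cross-entropy loss. This is an illustrative sketch on toy data, not the paper's clinical models.

```python
import numpy as np

def train_logistic(X, y, lr=0.5, epochs=3000):
    """Logistic regression viewed as a zero-hidden-layer neural network:
    one sigmoid unit trained by batch gradient descent on cross-entropy."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # forward pass (sigmoid)
        grad = p - y                             # dLoss/dz for cross-entropy
        w -= lr * X.T @ grad / len(y)            # "backprop" for the single layer
        b -= lr * grad.mean()
    return w, b

# Linearly separable toy data
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.], [2., 2.], [2., 3.]])
y = np.array([0., 0., 0., 1., 1., 1.])
w, b = train_logistic(X, y)
print((X @ w + b > 0).astype(int))
```

The gradient `p - y` is identical to the output-layer delta of a sigmoid network with cross-entropy loss, which is the structural similarity the paper exploits.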
Energy based prediction models for building acoustics
Brunskog, Jonas
2012-01-01
In order to reach robust and simplified yet accurate prediction models, energy-based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA), as well as more elaborate principles such as wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy-based prediction models are discussed and critically reviewed. Special attention is placed …
Jane S Paulsen
2014-04-01
There is growing consensus that intervention and treatment of Huntington disease (HD should occur at the earliest stage possible. Various early-intervention methods for this fatal neurodegenerative disease have been identified, but preventive clinical trials for HD are limited by a lack of knowledge of the natural history of the disease and a dearth of appropriate outcome measures. Objectives of the current study are to document the natural history of premanifest HD progression in the largest cohort ever studied and to develop a battery of imaging and clinical markers of premanifest HD progression that can be used as outcome measures in preventive clinical trials. PREDICT-HD is a 32-site, international, observational study of premanifest HD, with annual examination of 1013 participants with premanifest HD and 301 gene-expansion negative controls between 2001 and 2012. Findings document 39 variables representing imaging, motor, cognitive, functional, and psychiatric domains, showing different rates of decline between premanifest Huntington disease and controls. Required sample size and models of premanifest HD are presented to inform future design of clinical and preclinical research. Preventive clinical trials in premanifest HD with participants who have a medium or high probability of motor onset are calculated to be as resource-effective as those conducted in diagnosed HD and could interrupt disease seven to twelve years earlier. Methods and measures for preventive clinical trials in premanifest HD more than a dozen years from motor onset are also feasible. These findings represent the most thorough documentation of a clinical battery for experimental therapeutics in stages of premanifest HD, the time period for which effective intervention may provide the most positive possible outcome for patients and their families affected by this devastating disease.
Massive Predictive Modeling using Oracle R Enterprise
CERN. Geneva
2014-01-01
R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...
Eisenbarth, Hedwig; Osterheider, Michael; Nedopil, Norbert; Stadtland, Cornelis
2012-01-01
A clear and structured approach to evidence-based and gender-specific risk assessment of violence in female offenders is high on political and mental health agendas. However, most data on the factors involved in risk-assessment instruments are based on data of male offenders. The aim of the present study was to validate the use of the Psychopathy Checklist Revised (PCL-R), the HCR-20 and the Violence Risk Appraisal Guide (VRAG) for the prediction of recidivism in German female offenders. This study is part of the Munich Prognosis Project (MPP). It focuses on a subsample of female delinquents (n = 80) who had been referred for forensic-psychiatric evaluation prior to sentencing. The mean time at risk was 8 years (SD = 5 years; range: 1-18 years). During this time, 31% (n = 25) of the female offenders were reconvicted, 5% (n = 4) for violent and 26% (n = 21) for non-violent re-offenses. The predictive validity of the PCL-R for general recidivism was calculated. Analysis with receiver-operating characteristics revealed that the PCL-R total score, the PCL-R antisocial lifestyle factor, the PCL-R lifestyle factor and the PCL-R impulsive and irresponsible behavioral style factor had a moderate predictive validity for general recidivism (area under the curve, AUC = 0.66, p = 0.02). The VRAG has also demonstrated predictive validity (AUC = 0.72, p = 0.02), whereas the HCR-20 showed no predictive validity. These results appear to provide the first evidence that the PCL-R total score and the antisocial lifestyle factor are predictive for general female recidivism, as has been shown consistently for male recidivists. The implications of these findings for crime prevention, prognosis in women, and future research are discussed.
Liver Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing liver cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Colorectal Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Cervical Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing cervical cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Prostate Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing prostate cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Pancreatic Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing pancreatic cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Bladder Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing bladder cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Esophageal Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing esophageal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Lung Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing lung cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Breast Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing breast cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Ovarian Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing ovarian cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Testicular Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing testicular cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Posterior Predictive Model Checking in Bayesian Networks
Crawford, Aaron
2014-01-01
This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex…
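The PPMC logic, simulating replicated datasets from the posterior and comparing a discrepancy measure against its observed value, can be sketched in a conjugate toy model. The Beta-Binomial setup and runs-based discrepancy below are illustrative assumptions; the Bayesian network models in the study are far richer.

```python
import numpy as np

def longest_run(y):
    """Discrepancy measure: longest run of 1s in a binary sequence."""
    best = cur = 0
    for v in y:
        cur = cur + 1 if v else 0
        best = max(best, cur)
    return best

def ppmc_pvalue(y_obs, n_rep=1000, seed=1):
    """Posterior predictive p-value for Bernoulli data under a Beta(1,1)
    prior: P(discrepancy of replicated data >= observed discrepancy)."""
    rng = np.random.default_rng(seed)
    a = 1 + y_obs.sum()
    b = 1 + len(y_obs) - y_obs.sum()
    d_obs = longest_run(y_obs)
    hits = 0
    for _ in range(n_rep):
        theta = rng.beta(a, b)                           # posterior draw
        y_rep = rng.binomial(1, theta, size=len(y_obs))  # replicated dataset
        hits += longest_run(y_rep) >= d_obs
    return hits / n_rep
```

A p-value near 0 or 1 signals that the chosen discrepancy is not reproduced by the model, which is the kind of data-model misfit the simulation study probes with various discrepancy measures.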
A Course in... Model Predictive Control.
Arkun, Yaman; And Others
1988-01-01
Describes a graduate engineering course which specializes in model predictive control. Lists course outline and scope. Discusses some specific topics and teaching methods. Suggests final projects for the students. (MVL)
Predictive Cache Modeling and Analysis
2011-11-01
Extension of the Synthetic Application Generation Framework ........................................................... 35 5.2.2. Investigating Other...Handlers Target Hardware (Processor, Memory, I/O devices) Application Main Program = Object = Process Legend: Middleware/ Framework Virtual memory Partition... metaheuristic /bin-packing algorithm to optimize task placement based on task communication characterization. Our previous work on task allocation showed
Two criteria for evaluating risk prediction models.
Pfeiffer, R M; Gail, M H
2011-09-01
We propose and study two criteria to assess the usefulness of models that predict risk of disease incidence for screening and prevention, or the usefulness of prognostic models for management following disease diagnosis. The first criterion, the proportion of cases followed, PCF(q), is the proportion of individuals who will develop disease who are included in the proportion q of individuals in the population at highest risk. The second criterion, the proportion needed to follow-up, PNF(p), is the proportion of the general population at highest risk that one needs to follow in order that a proportion p of those destined to become cases will be followed. PCF(q) assesses the effectiveness of a program that follows 100q% of the population at highest risk. PNF(p) assesses the feasibility of covering 100p% of cases by indicating how much of the population at highest risk must be followed. We show the relationship of these two criteria to the Lorenz curve and its inverse, and present distribution theory for estimates of PCF and PNF. We develop new methods, based on influence functions, for inference for a single risk model, and also for comparing the PCFs and PNFs of two risk models, both of which were evaluated in the same validation data.
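Both criteria are directly computable from predicted risks and observed case status by ranking the population on risk. The sketch below implements the definitions as stated above on illustrative data; it is not the authors' estimators or their influence-function inference.

```python
import numpy as np

def pcf(risk, case, q):
    """PCF(q): fraction of all cases captured by the top-q fraction
    of the population ranked by predicted risk."""
    order = np.argsort(-risk)
    n_top = int(np.ceil(q * len(risk)))
    return case[order][:n_top].sum() / case.sum()

def pnf(risk, case, p):
    """PNF(p): smallest fraction of the population (highest risk first)
    that must be followed so that a proportion p of cases is included."""
    order = np.argsort(-risk)
    cum = np.cumsum(case[order]) / case.sum()
    return (np.searchsorted(cum, p) + 1) / len(risk)

risk = np.array([0.9, 0.8, 0.7, 0.2, 0.1])   # predicted risks
case = np.array([1, 0, 1, 0, 0])              # who actually became a case
print(pcf(risk, case, 0.4), pnf(risk, case, 1.0))
```

The cumulative-case curve traced by `cum` against population fraction is exactly the Lorenz-type curve the paper relates these criteria to.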
Predictability of extreme values in geophysical models
A. E. Sterk
2012-09-01
Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical models. We study whether finite-time Lyapunov exponents are larger or smaller for initial conditions leading to extremes. General statements on whether extreme values are more or less predictable are not possible: the predictability of extreme values depends on the observable, the attractor of the system, and the prediction lead time.
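A finite-time Lyapunov exponent of the kind used here averages local expansion rates along an orbit over a finite horizon. For a one-dimensional map the computation is a short loop; the fully chaotic logistic map below is a toy stand-in, not one of the paper's geophysical models.

```python
import math

def ftle(x0, steps, f, df):
    """Finite-time Lyapunov exponent along the orbit of x0: the average
    log of the local stretching factor |f'(x)| over `steps` iterates."""
    total, x = 0.0, x0
    for _ in range(steps):
        total += math.log(abs(df(x)))
        x = f(x)
    return total / steps

# Fully chaotic logistic map f(x) = 4x(1-x) and its derivative
logistic = lambda x: 4.0 * x * (1.0 - x)
dlogistic = lambda x: 4.0 - 8.0 * x

print(ftle(0.2, 1000, logistic, dlogistic))  # fluctuates around ln 2 for long horizons
```

Comparing such finite-time exponents between initial conditions that do and do not lead to extremes is the diagnostic the paper applies to its three models.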
Moarefian, Maryam; Pascal, Jennifer A
2016-02-01
Biobarriers imposed by the tumor microenvironment create a challenge to deliver chemotherapeutics effectively. Electric fields can be used to overcome these biobarriers in the form of electrochemotherapy, or by applying an electric field to tissue after chemotherapy has been delivered systemically. A fundamental understanding of the underlying physical phenomena governing tumor response to an applied electrical field is lacking. Building upon the work of Pascal et al. [1], a mathematical model that predicts the fraction of tumor killed due to a direct current (DC) applied electrical field and chemotherapy is developed here for tumor tissue surrounding a single, straight, cylindrical blood vessel. Results show the typical values of various parameters related to properties of the electrical field, tumor tissue and chemotherapy drug that have the most significant influence on the fraction of tumor killed. We show that the applied electrical field enhances tumor death due to chemotherapy and that the direction and magnitude of the applied electrical field have a significant impact on the fraction of tumor killed. Published by Elsevier Inc.
Hybrid modeling and prediction of dynamical systems
Lloyd, Alun L.; Flores, Kevin B.
2017-01-01
Scientific analysis often relies on the ability to make accurate predictions of a system’s dynamics. Mechanistic models, parameterized by a number of unknown parameters, are often used for this purpose. Accurate estimation of the model state and parameters prior to prediction is necessary, but may be complicated by issues such as noisy data and uncertainty in parameters and initial conditions. At the other end of the spectrum exist nonparametric methods, which rely solely on data to build their predictions. While these nonparametric methods do not require a model of the system, their performance is strongly influenced by the amount and noisiness of the data. In this article, we consider a hybrid approach to modeling and prediction which merges recent advancements in nonparametric analysis with standard parametric methods. The general idea is to replace a subset of a mechanistic model’s equations with their corresponding nonparametric representations, resulting in a hybrid modeling and prediction scheme. Overall, we find that this hybrid approach allows for more robust parameter estimation and improved short-term prediction in situations where there is a large uncertainty in model parameters. We demonstrate these advantages in the classical Lorenz-63 chaotic system and in networks of Hindmarsh-Rose neurons before application to experimentally collected structured population data. PMID:28692642
Risk terrain modeling predicts child maltreatment.
Daley, Dyann; Bachmann, Michael; Bachmann, Brittany A; Pedigo, Christian; Bui, Minh-Thuy; Coffman, Jamye
2016-12-01
As indicated by research on the long-term effects of adverse childhood experiences (ACEs), maltreatment has far-reaching consequences for affected children. Effective prevention measures have been elusive, partly due to difficulty in identifying vulnerable children before they are harmed. This study employs Risk Terrain Modeling (RTM), an analysis of the cumulative effect of environmental factors thought to be conducive for child maltreatment, to create a highly accurate prediction model for future substantiated child maltreatment cases in the City of Fort Worth, Texas. The model is superior to commonly used hotspot predictions and more beneficial in aiding prevention efforts in a number of ways: 1) it identifies the highest risk areas for future instances of child maltreatment with improved precision and accuracy; 2) it aids the prioritization of risk-mitigating efforts by informing about the relative importance of the most significant contributing risk factors; 3) since predictions are modeled as a function of easily obtainable data, practitioners do not have to undergo the difficult process of obtaining official child maltreatment data to apply it; 4) the inclusion of a multitude of environmental risk factors creates a more robust model with higher predictive validity; and, 5) the model does not rely on a retrospective examination of past instances of child maltreatment, but adapts predictions to changing environmental conditions. The present study introduces and examines the predictive power of this new tool to aid prevention efforts seeking to improve the safety, health, and wellbeing of vulnerable children.
Property predictions using microstructural modeling
Wang, K.G. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States)]. E-mail: wangk2@rpi.edu; Guo, Z. [Sente Software Ltd., Surrey Technology Centre, 40 Occam Road, Guildford GU2 7YG (United Kingdom); Sha, W. [Metals Research Group, School of Civil Engineering, Architecture and Planning, The Queen's University of Belfast, Belfast BT7 1NN (United Kingdom); Glicksman, M.E. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States); Rajan, K. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States)
2005-07-15
Precipitation hardening in an Fe-12Ni-6Mn maraging steel during overaging is quantified. First, applying our recent kinetic model of coarsening [Phys. Rev. E, 69 (2004) 061507], and incorporating the Ashby-Orowan relationship, we link quantifiable aspects of the microstructures of these steels to their mechanical properties, including especially the hardness. Specifically, hardness measurements allow calculation of the precipitate size as a function of time and temperature through the Ashby-Orowan relationship. Second, calculated precipitate sizes and thermodynamic data determined with Thermo-Calc® are used with our recent kinetic coarsening model to extract diffusion coefficients during overaging from hardness measurements. Finally, employing more accurate diffusion parameters, we determined the hardness of these alloys independently from theory, and found agreement with experimental hardness data. Diffusion coefficients determined during overaging of these steels are notably higher than those found during aging - an observation suggesting that precipitate growth during aging and precipitate coarsening during overaging are not controlled by the same diffusion mechanism.
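The Ashby-Orowan relationship invoked above is commonly quoted in the following form (standard textbook version; the paper may use a variant):

```latex
\Delta\sigma \;=\; \frac{0.538\, G\, b\, f^{1/2}}{X}\,
                   \ln\!\left(\frac{X}{2b}\right)
```

where Δσ is the precipitation-strengthening increment, G the shear modulus of the matrix, b the magnitude of the Burgers vector, f the precipitate volume fraction, and X the mean precipitate diameter. Inverting this relation for X given a measured hardness increment is what allows precipitate size to be tracked from hardness data.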
Spatial Economics Model Predicting Transport Volume
Lu Bo
2016-10-01
Full Text Available Predicting logistics requirements in a scientific and rational way is extremely important. In recent years, however, improvements to prediction methods have been modest: traditional statistical prediction methods suffer from low precision and poor interpretability, so they can neither guarantee the generalization ability of the prediction model theoretically nor explain the models effectively. Therefore, drawing on theories from spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, this study identifies the leading industries that generate large cargo volumes and further predicts the static logistics generation of Zhuanghe and its hinterland. By integrating the various factors that affect regional logistics requirements, the study establishes a logistics requirements potential model based on spatial economic principles, extending logistics requirements prediction from purely statistical methods into a new area of spatial and regional economics.
MULTI MODEL DATA MINING APPROACH FOR HEART FAILURE PREDICTION
Priyanka H U
2016-09-01
Full Text Available Developing predictive modelling solutions for risk estimation is extremely challenging in health-care informatics. Risk estimation involves the integration of heterogeneous clinical sources that have different representations and come from different health-care providers, making the task increasingly complex. Such sources are typically voluminous and diverse, and they change significantly over time. Therefore, distributed and parallel computing tools, collectively termed big data tools, are needed to synthesize this information and assist physicians in making the right clinical decisions. In this work we propose a multi-model predictive architecture, a novel approach for combining the predictive ability of multiple models for better prediction accuracy. We demonstrate the effectiveness and efficiency of the proposed work on data from the Framingham Heart Study. Results show that the proposed multi-model predictive architecture is able to provide better accuracy than the best single-model approach. By modelling the error of the predictive models we are able to choose a subset of models that yields accurate results. More information was modelled into the system by multi-level mining, which resulted in enhanced predictive accuracy.
Model-based uncertainty in species range prediction
Pearson, R. G.; Thuiller, Wilfried; Bastos Araujo, Miguel;
2006-01-01
Aim Many attempts to predict the potential range of species rely on environmental niche (or 'bioclimate envelope') modelling, yet the effects of using different niche-based methodologies require further investigation. Here we investigate the impact that the choice of model can have on predictions … day (using the area under the receiver operating characteristic curve (AUC) and kappa statistics) and by assessing consistency in predictions of range size changes under future climate (using cluster analysis). Results Our analyses show significant differences between predictions from different models, with predicted changes in range size by 2030 differing in both magnitude and direction (e.g. from 92% loss to 322% gain). We explain differences with reference to two characteristics of the modelling techniques: data input requirements (presence/absence vs. presence-only approaches) and assumptions made by each …
A new ensemble model for short term wind power prediction
Madsen, Henrik; Albu, Razvan-Daniel; Felea, Ioan;
2012-01-01
As the objective of this study, a non-linear ensemble system is used to develop a new model for predicting wind speed on a short-term time scale. Short-term wind power prediction has become an extremely important field of research for the energy sector. Regardless of recent advancements in the research of prediction models, it was observed that different models have different capabilities and that no single model is suitable under all situations. The idea behind EPS (ensemble prediction systems) is to take advantage of the unique features of each subsystem to capture the diverse patterns that exist in the dataset. The presented results show that the prediction errors can be decreased while the computation time is reduced.
Modeling and Prediction Using Stochastic Differential Equations
Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp
2016-01-01
Pharmacokinetic/pharmacodynamic (PK/PD) modelling for a single subject is most often performed using nonlinear models based on deterministic ordinary differential equations (ODEs), and the variation between subjects in a population is described using a population (mixed effects) setup … that describes the variation between subjects. The ODE setup implies that the variation for a single subject is described by a single parameter (or vector), namely the variance (covariance) of the residuals. Furthermore, the prediction of the states is given as the solution to the ODEs and hence assumed … deterministic, able to predict the future perfectly. A more realistic approach is to allow for randomness in the model, due to e.g. the model being too simple or errors in input. We describe a modelling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs) …
Precision Plate Plan View Pattern Predictive Model
ZHAO Yang; YANG Quan; HE An-rui; WANG Xiao-chen; ZHANG Yun
2011-01-01
According to the rolling features of a plate mill, a 3D elastic-plastic FEM (finite element model) based on the full restart method of ANSYS/LS-DYNA was established to study the inhomogeneous plastic deformation of multipass plate rolling. By analyzing the simulation results, the difference between the head-end and tail-end predictive models was found and corrected. Based on the numerical simulation results of 120 different conditions, a precision plate plan view pattern predictive model was established. Using these models, the sizing MAS (Mizushima automatic plan view pattern control system) method was designed and used on a 2 800 mm plate mill. Comparing plates rolled with and without the PVPP (plan view pattern predictive) model, the reduced width deviation indicates that the plate plan view pattern predictive model is precise.
NBC Hazard Prediction Model Capability Analysis
1999-09-01
Puff (SCIPUFF) Model Verification and Evaluation Study, Air Resources Laboratory, NOAA, May 1998. Based on the NOAA review, the VLSTRACK developers... TO SUBSTANTIAL DIFFERENCES IN PREDICTIONS. HPAC uses a transport and dispersion (T&D) model called SCIPUFF and an associated mean wind field model... SCIPUFF is a model for atmospheric dispersion that uses the Gaussian puff method, in which an arbitrary time-dependent concentration field is represented
Personalized Predictive Modeling and Risk Factor Identification using Patient Similarity.
Ng, Kenney; Sun, Jimeng; Hu, Jianying; Wang, Fei
2015-01-01
Personalized predictive models are customized for an individual patient and trained using information from similar patients. Compared to global models trained on all patients, they have the potential to produce more accurate risk scores and capture more relevant risk factors for individual patients. This paper presents an approach for building personalized predictive models and generating personalized risk factor profiles. A locally supervised metric learning (LSML) similarity measure is trained for diabetes onset and used to find clinically similar patients. Personalized risk profiles are created by analyzing the parameters of the trained personalized logistic regression models. A 15,000 patient data set, derived from electronic health records, is used to evaluate the approach. The predictive results show that the personalized models can outperform the global model. Cluster analysis of the risk profiles show groups of patients with similar risk factors, differences in the top risk factors for different groups of patients and differences between the individual and global risk factors.
Performance Predictable ServiceBSP Model for Grid Computing
TONG Weiqin; MIAO Weikai
2007-01-01
This paper proposes a performance prediction model for the grid computing model ServiceBSP to support the development of high-quality applications in grid environments. In the ServiceBSP model, agents carrying computing tasks are dispatched to the local domain of the selected computation services. Using an IP (integer programming) approach, the Service Selection Agent selects computation services with globally optimized QoS (quality of service). The performance of a ServiceBSP application can then be predicted according to the performance prediction model, based on the QoS of the selected services. The prediction model helps users analyze their applications and improve them by optimizing the factors that affect performance. Experiments show that the Service Selection Agent can provide ServiceBSP users with satisfactory application QoS.
A COMPARISON BETWEEN THREE PREDICTIVE MODELS OF COMPUTATIONAL INTELLIGENCE
DUMITRU CIOBANU
2013-12-01
Full Text Available Time series prediction is an open problem and many researchers are trying to find new predictive methods and improvements to existing ones. Lately, methods based on neural networks have been used extensively for time series prediction. Support vector machines have also solved some of the problems faced by neural networks and have begun to be widely used for time series prediction. The main drawback of these two methods is that they are global models, and in the case of a chaotic time series it is unlikely that such a model can be found. This paper presents a comparison between three predictive models from the computational intelligence field: one based on neural networks, one based on support vector machines, and another based on chaos theory. We show that the model based on chaos theory is an alternative to the other two methods.
Evaluation of CASP8 model quality predictions
Cozzetto, Domenico
2009-01-01
The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.
Wu, Jie; Ren, Hong-Li; Zuo, Jinqing; Zhao, Chongbo; Chen, Lijuan; Li, Qiaoping
2016-09-01
This study evaluates performance of Madden-Julian oscillation (MJO) prediction in the Beijing Climate Center Atmospheric General Circulation Model (BCC_AGCM2.2). By using the real-time multivariate MJO (RMM) indices, it is shown that the MJO prediction skill of BCC_AGCM2.2 extends to about 16-17 days before the bivariate anomaly correlation coefficient drops to 0.5 and the root-mean-square error increases to the level of the climatological prediction. The prediction skill showed a seasonal dependence, with the highest skill occurring in boreal autumn, and a phase dependence with higher skill for predictions initiated from phases 2-4. The results of the MJO predictability analysis showed that the upper bounds of the prediction skill can be extended to 26 days by using a single-member estimate, and to 42 days by using the ensemble-mean estimate, which also exhibited an initial amplitude and phase dependence. The observed relationship between the MJO and the North Atlantic Oscillation was accurately reproduced by BCC_AGCM2.2 for most initial phases of the MJO, accompanied with the Rossby wave trains in the Northern Hemisphere extratropics driven by MJO convection forcing. Overall, BCC_AGCM2.2 displayed a significant ability to predict the MJO and its teleconnections without interacting with the ocean, which provided a useful tool for fully extracting the predictability source of subseasonal prediction.
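The bivariate anomaly correlation coefficient used to score RMM-index MJO forecasts can be computed as below; a standard definition is used, and the (RMM1, RMM2) values are invented for illustration:

```python
# Bivariate anomaly correlation over a verification window of
# (RMM1, RMM2) pairs: a skill of 1 means a perfect forecast; 0.5 is the
# conventional threshold for useful MJO prediction skill.
import math

def bivariate_acc(obs, fcst):
    num = sum(a1 * b1 + a2 * b2
              for (a1, a2), (b1, b2) in zip(obs, fcst))
    den = (math.sqrt(sum(a1 * a1 + a2 * a2 for a1, a2 in obs)) *
           math.sqrt(sum(b1 * b1 + b2 * b2 for b1, b2 in fcst)))
    return num / den

obs  = [(1.0, 0.2), (0.8, 0.6), (0.1, 1.1)]   # invented observed indices
fcst = [(0.9, 0.3), (0.7, 0.5), (0.2, 1.0)]   # invented forecast indices
print(round(bivariate_acc(obs, fcst), 3))
```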
Genetic models of homosexuality: generating testable predictions
Gavrilets, Sergey; Rice, William R.
2006-01-01
Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality inclu...
Wind farm production prediction - The Zephyr model
Landberg, L. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark); Giebel, G. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark); Madsen, H. [IMM (DTU), Kgs. Lyngby (Denmark); Nielsen, T.S. [IMM (DTU), Kgs. Lyngby (Denmark); Joergensen, J.U. [Danish Meteorologisk Inst., Copenhagen (Denmark); Lauersen, L. [Danish Meteorologisk Inst., Copenhagen (Denmark); Toefting, J. [Elsam, Fredericia (DK); Christensen, H.S. [Eltra, Fredericia (Denmark); Bjerge, C. [SEAS, Haslev (Denmark)
2002-06-01
This report describes a project - funded by the Danish Ministry of Energy and the Environment - which developed a next generation prediction system called Zephyr. The Zephyr system is a merging between two state-of-the-art prediction systems: Prediktor of Risoe National Laboratory and WPPT of IMM at the Danish Technical University. The numerical weather predictions were generated by DMI's HIRLAM model. Due to technical difficulties programming the system, only the computational core and a very simple version of the originally very complex system were developed. The project partners were: Risoe, DMU, DMI, Elsam, Eltra, Elkraft System, SEAS and E2. (au)
Predictive modeling for EBPC in EBDW
Zimmermann, Rainer; Schulz, Martin; Hoppe, Wolfgang; Stock, Hans-Jürgen; Demmerle, Wolfgang; Zepka, Alex; Isoyan, Artak; Bomholt, Lars; Manakli, Serdar; Pain, Laurent
2009-10-01
We demonstrate a flow for e-beam proximity correction (EBPC) to e-beam direct write (EBDW) wafer manufacturing processes, demonstrating a solution that covers all steps from the generation of a test pattern for (experimental or virtual) measurement data creation, over e-beam model fitting, proximity effect correction (PEC), and verification of the results. We base our approach on a predictive, physical e-beam simulation tool, with the possibility to complement this with experimental data, and the goal of preparing the EBPC methods for the advent of high-volume EBDW tools. As an example, we apply and compare dose correction and geometric correction for low and high electron energies on 1D and 2D test patterns. In particular, we show some results of model-based geometric correction as it is typical for the optical case, but enhanced for the particularities of e-beam technology. The results are used to discuss PEC strategies, with respect to short and long range effects.
Predictive model for segmented poly(urea)
Frankl P.
2012-08-01
Full Text Available Segmented poly(urea) has been shown to be of significant benefit in protecting vehicles from blast and impact, and there have been several experimental studies to determine the mechanisms by which this protective function might occur. One suggested route is mechanical activation of the glass transition. In order to enable the design of protective structures using this material, a constitutive model and equation of state are needed for numerical simulation hydrocodes. Determination of such a predictive model may also help elucidate the beneficial mechanisms that occur in polyurea during high-rate loading. The tool deployed to do this has been Group Interaction Modelling (GIM), a mean-field technique that has been shown to predict the mechanical and physical properties of polymers from their structure alone. The structure of polyurea has been used to characterise the parameters in the GIM scheme without recourse to experimental data, and the resulting equation of state and constitutive model predict the response over a wide range of temperatures and strain rates. The shock Hugoniot has been predicted and validated against existing data. The mechanical response in tensile tests has also been predicted and validated.
Assessment of performance of survival prediction models for cancer prognosis
Chen Hung-Chia
2012-07-01
… its discriminatory ability. (2) Evaluation using a high-risk versus low-risk group comparison depends on the selected risk-score threshold; a plot of p-values from all possible thresholds can show the sensitivity of the threshold selection. (3) A randomization test of the significance of Somers' rank correlation can be used for further evaluation of the performance of a prediction model. (4) The cross-validated power of survival prediction models decreases as the training and test sets become less balanced.
A Predictive Model of High Shear Thrombus Growth.
Mehrabadi, Marmar; Casa, Lauren D C; Aidun, Cyrus K; Ku, David N
2016-08-01
The ability to predict the timescale of thrombotic occlusion in stenotic vessels may improve patient risk assessment for thrombotic events. In blood contacting devices, thrombosis predictions can lead to improved designs to minimize thrombotic risks. We have developed and validated a model of high shear thrombosis based on empirical correlations between thrombus growth and shear rate. A mathematical model was developed to predict the growth of thrombus based on the hemodynamic shear rate. The model predicts thrombus deposition based on initial geometric and fluid mechanic conditions, which are updated throughout the simulation to reflect the changing lumen dimensions. The model was validated by comparing predictions against actual thrombus growth in six separate in vitro experiments: stenotic glass capillary tubes (diameter = 345 µm) at three shear rates, the PFA-100® system, two microfluidic channel dimensions (heights = 300 and 82 µm), and a stenotic aortic graft (diameter = 5.5 mm). Comparison of the predicted occlusion times to experimental results shows excellent agreement. The model is also applied to a clinical angiography image to illustrate the time course of thrombosis in a stenotic carotid artery after plaque cap rupture. Our model can accurately predict thrombotic occlusion time over a wide range of hemodynamic conditions.
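The geometry-updating loop described above can be caricatured in a few lines: compute the wall shear rate from the current lumen, grow the thrombus at a shear-dependent rate, shrink the lumen, and repeat until near-occlusion. The growth law and all numbers here are placeholders, not the validated empirical correlations of the paper:

```python
import math

Q = 1e-10                    # volumetric flow rate, m^3/s (illustrative)
r, dt, t = 170e-6, 1.0, 0.0  # initial lumen radius (m), time step (s)
while r > 20e-6:             # stop near occlusion
    gamma = 4.0 * Q / (math.pi * r ** 3)  # Poiseuille wall shear rate, 1/s
    growth = 1e-9 * gamma                 # placeholder shear-dependent law, m/s
    r -= growth * dt                      # lumen narrows as thrombus grows
    t += dt
    if t > 1e5:                           # safety stop for the sketch
        break
print(f"near-occlusion after ~{t:.0f} s, lumen radius {r * 1e6:.1f} um")
```

Because the shear rate scales as 1/r³ at fixed flow, growth accelerates as the lumen narrows, reproducing the rapid final phase of occlusion qualitatively.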
Modelling the predictive performance of credit scoring
Shi-Wei Shen
2013-02-01
Full Text Available Orientation: The article discussed the importance of rigour in credit risk assessment. Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A test of goodness of fit demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI) as well as micro- and macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality of the study is that three models were developed to predict corporate firms' defaults based on different microeconomic and macroeconomic factors, such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit measures and receiver operating characteristics in examining the robustness of the predictive power of these factors.
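As a toy illustration of the logistic-regression scoring approach, the sketch below fits a default-probability model by plain gradient descent on two invented features (standing in for an index-style score and asset growth); real systems use far richer data and out-of-sample validation:

```python
import math

data = [  # (risk_index, asset_growth, defaulted?) - invented observations
    (9, -0.2, 1), (8, -0.1, 1), (7, -0.3, 1), (8, 0.0, 1),
    (3,  0.2, 0), (2,  0.3, 0), (4,  0.1, 0), (1,  0.4, 0),
]
w = [0.0, 0.0, 0.0]  # bias, weight on index, weight on growth

def p_default(x1, x2):
    z = w[0] + w[1] * x1 + w[2] * x2
    return 1.0 / (1.0 + math.exp(-z))

for _ in range(5000):          # stochastic gradient descent on log-loss
    for x1, x2, y in data:
        err = p_default(x1, x2) - y
        for i, xi in enumerate((1.0, x1, x2)):
            w[i] -= 0.05 * err * xi

acc = sum((p_default(x1, x2) > 0.5) == bool(y)
          for x1, x2, y in data) / len(data)
print(acc)
```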
Modelling language evolution: Examples and predictions.
Gong, Tao; Shuai, Lan; Zhang, Menghan
2014-06-01
We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolutions of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines.
Model Predictive Control of Sewer Networks
Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik; Poulsen, Niels K.; Falk, Anne K. V.
2017-01-01
The development of solutions for the management of urban drainage is of vital importance, as the amount of sewer water from urban areas continues to increase due to the growth of the world's population and changing climate conditions. How a sewer network is structured, monitored and controlled has thus become an essential factor for the efficient performance of wastewater treatment plants. This paper examines methods for simplified modelling and control of a sewer network. A practical approach to the problem is used by analysing a simplified design model, which is based on the Barcelona benchmark model. Due to the inherent constraints, the applied approach is based on Model Predictive Control.
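The receding-horizon principle of MPC can be sketched for a single storage element (a much-simplified stand-in for the Barcelona benchmark; all numbers, the overflow penalty, and the discrete control set are invented):

```python
# Receding-horizon MPC sketch: basin level x rises with forecast inflow d
# and falls with pumped outflow u; we search a small discrete control set
# over a short horizon and apply only the first move at each step.
from itertools import product

cap, horizon, controls = 10.0, 4, (0.0, 1.0, 2.0)
inflow = [1.5, 2.0, 2.5, 2.0, 1.0, 0.5, 0.5, 0.5]  # forecast disturbance

def cost(x, seq, d):
    c = 0.0
    for u, dk in zip(seq, d):
        x = max(0.0, x + dk - u)
        c += 10.0 * max(0.0, x - cap) + 0.1 * u  # overflow penalty + effort
    return c

x, applied = 5.0, []
for t in range(len(inflow) - horizon + 1):
    d = inflow[t:t + horizon]
    best = min(product(controls, repeat=horizon), key=lambda s: cost(x, s, d))
    u = best[0]                   # receding horizon: apply first move only
    x = max(0.0, x + inflow[t] - u)
    applied.append(u)
print(applied, round(x, 2))
```

A real sewer MPC would solve a constrained optimization (e.g. a QP) over a network model instead of enumerating discrete controls, but the plan-apply-replan structure is the same.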
A Multistep Chaotic Model for Municipal Solid Waste Generation Prediction.
Song, Jingwei; He, Jiaying
2014-08-01
In this study, a univariate local chaotic model is proposed to make one-step and multistep forecasts for daily municipal solid waste (MSW) generation in Seattle, Washington. For MSW generation prediction with long history data, this forecasting model was created based on a nonlinear dynamic method called phase-space reconstruction. Compared with other nonlinear predictive models, such as artificial neural network (ANN) and partial least square-support vector machine (PLS-SVM), and a commonly used linear seasonal autoregressive integrated moving average (sARIMA) model, this method has demonstrated better prediction accuracy from 1-step ahead prediction to 14-step ahead prediction assessed by both mean absolute percentage error (MAPE) and root mean square error (RMSE). Max error, MAPE, and RMSE show that chaotic models were more reliable than the other three models. As chaotic models do not involve random walk, their performance does not vary while ANN and PLS-SVM make different forecasts in each trial. Moreover, this chaotic model was less time consuming than ANN and PLS-SVM models.
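The phase-space reconstruction forecast can be illustrated with a delay-embedding analog predictor on a synthetic chaotic series (a logistic map stands in for the MSW data; the embedding dimension and delay are illustrative choices):

```python
# Delay-embed a scalar series, find the nearest historical neighbour of the
# current embedded state, and forecast with that neighbour's successor.

# build a chaotic scalar series (logistic map)
x, series = 0.4, []
for _ in range(600):
    x = 3.9 * x * (1.0 - x)
    series.append(x)

m, tau = 3, 1                    # embedding dimension and delay
def embed(s, i):                 # embedded state ending at index i
    return tuple(s[i - k * tau] for k in range(m))

train = series[:500]
preds, truth = [], []
for i in range(500, 599):        # one-step "analog" forecasts
    state = embed(series, i)
    j = min(range(m * tau, len(train) - 1),
            key=lambda k: sum((a - b) ** 2
                              for a, b in zip(embed(train, k), state)))
    preds.append(train[j + 1])
    truth.append(series[i + 1])

mape = sum(abs(p - t) / abs(t) for p, t in zip(preds, truth)) / len(preds)
print(round(mape, 4))
```

Multistep forecasts follow by feeding each prediction back in as the newest coordinate of the embedded state.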
Using Pareto points for model identification in predictive toxicology.
Palczewska, Anna; Neagu, Daniel; Ridley, Mick
2013-03-22
Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design or food protection to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature associated with the best practice of model generation and data integration, but management and automated identification of relevant models from available collections of models is still an open problem. Currently, the decision on which model should be used for a new chemical compound is left to users. This paper intends to initiate the discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show a great potential for automated model identification methods in predictive toxicology.
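Pareto-based model identification can be sketched as computing the non-dominated set over per-model objectives. The models and scores below are invented (e.g. historical accuracy and applicability-domain similarity for the query compound):

```python
# Keep only models not dominated on both objectives (higher is better).

models = {  # name -> (accuracy, applicability); all values invented
    "m1": (0.90, 0.20),
    "m2": (0.75, 0.80),
    "m3": (0.70, 0.60),   # dominated by m2 on both objectives
    "m4": (0.60, 0.95),
}

def dominates(a, b):
    return (all(x >= y for x, y in zip(a, b)) and
            any(x > y for x, y in zip(a, b)))

pareto = {n for n, s in models.items()
          if not any(dominates(t, s) for m, t in models.items() if m != n)}
print(sorted(pareto))
```

A final selection rule (or the user) then picks one model from the Pareto set, rather than from the full collection.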
Probabilistic Modeling and Visualization for Bankruptcy Prediction
Antunes, Francisco; Ribeiro, Bernardete; Pereira, Francisco Camara
2017-01-01
In the accounting and finance domains, bankruptcy prediction is of great utility for all economic stakeholders. The challenge of accurately assessing business failure prediction, especially under scenarios of financial crisis, is known to be complicated. Although there have been many successful … studies on bankruptcy detection, probabilistic approaches have seldom been carried out. In this paper we assume a probabilistic point of view by applying Gaussian Processes (GP) in the context of bankruptcy prediction, comparing them against Support Vector Machines (SVM) and Logistic Regression (LR) … Using real-world bankruptcy data, an in-depth analysis is conducted showing that, in addition to a probabilistic interpretation, the GP can effectively improve bankruptcy prediction performance with high accuracy when compared to the other approaches. We additionally generate a complete graphical …
DKIST Polarization Modeling and Performance Predictions
Harrington, David
2016-05-01
Calibrating the Mueller matrices of large aperture telescopes and associated coude instrumentation requires astronomical sources and several modeling assumptions to predict the behavior of the system polarization with field of view, altitude, azimuth and wavelength. The Daniel K Inouye Solar Telescope (DKIST) polarimetric instrumentation requires very high accuracy calibration of a complex coude path with an off-axis f/2 primary mirror, time-dependent optical configurations and a substantial field of view. Polarization predictions across a diversity of optical configurations, tracking scenarios, slit geometries and vendor coating formulations are critical to both construction and continued operations efforts. Recent daytime sky based polarization calibrations of the 4m AEOS telescope and HiVIS spectropolarimeter on Haleakala have provided system Mueller matrices over full telescope articulation for a 15-reflection coude system. AEOS and HiVIS are a DKIST analog with a many-fold coude optical feed and similar mirror coatings creating 100% polarization cross-talk with altitude, azimuth and wavelength. Polarization modeling predictions using Zemax have successfully matched the altitude-azimuth-wavelength dependence on HiVIS within the few percent amplitude limitations of several instrument artifacts. Polarization predictions for coude beam paths depend greatly on modeling the angle-of-incidence dependences in powered optics and the mirror coating formulations. A 6 month HiVIS daytime sky calibration plan has been analyzed for accuracy under a wide range of sky conditions and data analysis algorithms. Predictions of polarimetric performance for the DKIST first-light instrumentation suite have been created under a range of configurations. These new modeling tools and polarization predictions have substantial impact for the design, fabrication and calibration process in the presence of manufacturing issues, science use-case requirements and ultimate system calibration.
Modelling Chemical Reasoning to Predict Reactions
Segler, Marwin H. S.; Waller, Mark P.
2016-01-01
The ability to reason beyond established knowledge allows Organic Chemists to solve synthetic problems and to invent novel transformations. Here, we propose a model which mimics chemical reasoning and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outpe...
Predictive Modeling of the CDRA 4BMS
Coker, Robert; Knox, James
2016-01-01
Fully predictive models of the Four Bed Molecular Sieve of the Carbon Dioxide Removal Assembly on the International Space Station are being developed. This virtual laboratory will be used to help reduce mass, power, and volume requirements for future missions. In this paper we describe current and planned modeling developments in the area of carbon dioxide removal to support future crewed Mars missions as well as the resolution of anomalies observed in the ISS CDRA.
Raman Model Predicting Hardness of Covalent Crystals
Zhou, Xiang-Feng; Qian, Quang-Rui; Sun, Jian; Tian, Yongjun; Wang, Hui-Tian
2009-01-01
Based on the fact that both hardness and vibrational Raman spectrum depend on the intrinsic property of chemical bonds, we propose a new theoretical model for predicting hardness of a covalent crystal. The quantitative relationship between hardness and vibrational Raman frequencies deduced from the typical zincblende covalent crystals is validated to be also applicable for the complex multicomponent crystals. This model enables us to nondestructively and indirectly characterize the hardness o...
Predictive Modelling of Mycotoxins in Cereals
Fels, van der H.J.; Liu, C.
2015-01-01
This article presents the summaries of the presentations given at the 30th meeting of the Fusarium Working Group. The topics are: Predictive Modelling of Mycotoxins in Cereals; Microbial degradation of DON; Exposure to green leaf volatiles primes wheat against FHB but boosts...
Unreachable Setpoints in Model Predictive Control
Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp
2008-01-01
…steady state is established for terminal constraint model predictive control (MPC). The region of attraction is the steerable set. Existing analysis methods for closed-loop properties of MPC are not applicable to this new formulation, and a new analysis method is developed. It is shown how to extend...
Prediction modelling for population conviction data
Tollenaar, N.
2017-01-01
In this thesis, the possibilities of using prediction models for judicial penal case data are investigated. The development and refinement of a risk taxation scale based on these data is discussed. When false positives are weighted as severely as false negatives, 70% can be classified correctly.
A Predictive Model for MSSW Student Success
Napier, Angela Michele
2011-01-01
This study tested a hypothetical model for predicting both graduate GPA and graduation of University of Louisville Kent School of Social Work Master of Science in Social Work (MSSW) students entering the program during the 2001-2005 school years. The preexisting characteristics of demographics, academic preparedness and culture shock along with…
Predictability of extreme values in geophysical models
Sterk, A.E.; Holland, M.P.; Rabassa, P.; Broer, H.W.; Vitolo, R.
2012-01-01
Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical model
A revised prediction model for natural conception
Bensdorp, A.J.; Steeg, J.W. van der; Steures, P.; Habbema, J.D.; Hompes, P.G.; Bossuyt, P.M.; Veen, F. van der; Mol, B.W.; Eijkemans, M.J.; Kremer, J.A.M.; et al.,
2017-01-01
One of the aims in reproductive medicine is to differentiate between couples that have favourable chances of conceiving naturally and those that do not. Since the development of the prediction model of Hunault, characteristics of the subfertile population have changed. The objective of this analysis
Distributed Model Predictive Control via Dual Decomposition
Biegel, Benjamin; Stoustrup, Jakob; Andersen, Palle
2014-01-01
This chapter presents dual decomposition as a means to coordinate a number of subsystems coupled by state and input constraints. Each subsystem is equipped with a local model predictive controller while a centralized entity manages the subsystems via prices associated with the coupling constraints...
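The price-coordination idea described above can be sketched in a toy form: each local controller minimizes its own cost, while a central entity adjusts a price on the shared capacity constraint by subgradient ascent. The quadratic local costs, step size, and capacity below are illustrative assumptions, not the chapter's actual formulation.

```python
def local_controller(r, lam):
    # each subsystem minimizes its private cost (u - r)^2 + lam * u,
    # which has the closed-form solution u = r - lam/2, clipped at zero
    return max(0.0, r - lam / 2.0)

def dual_decomposition(refs, capacity, steps=200, alpha=0.05):
    lam = 0.0  # price on the coupling constraint sum(u) <= capacity
    for _ in range(steps):
        u = [local_controller(r, lam) for r in refs]
        # central entity: subgradient ascent on the dual variable
        lam = max(0.0, lam + alpha * (sum(u) - capacity))
    return u, lam
```

With references (3, 2) and capacity 4, the price settles where the subsystems jointly satisfy the constraint, splitting the shortfall between them.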
Leptogenesis in minimal predictive seesaw models
Björkeroth, Fredrik; Varzielas, Ivo de Medeiros; King, Stephen F
2015-01-01
We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses with Yukawa couplings to $(\
Model predictive torque control with an extended prediction horizon for electrical drive systems
Wang, Fengxiang; Zhang, Zhenbin; Kennel, Ralph; Rodríguez, José
2015-07-01
This paper presents a model predictive torque control method for electrical drive systems. A two-step prediction horizon is achieved by considering the reduction of the torque ripples. The errors between the predicted electromagnetic torque and stator flux and their references, together with an over-current protection term, are considered in the cost function design. The best voltage vector is selected by minimising the value of the cost function, which aims to achieve a low torque ripple in two intervals. The study is carried out experimentally. The results show that the proposed method achieves good performance in both steady and transient states.
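The selection step described above, minimizing a cost over a finite set of candidate voltage vectors, can be sketched as follows. The toy `predict` response and the weight values are assumptions for illustration only, not the paper's drive model.

```python
def cost(torque_err, flux_err, current, i_max, w_flux=0.5):
    # weighted torque and stator-flux tracking errors plus an
    # over-current penalty, as in a typical FCS-MPC cost function
    penalty = 1e6 if abs(current) > i_max else 0.0
    return abs(torque_err) + w_flux * abs(flux_err) + penalty

def select_voltage_vector(vectors, predict, t_ref, f_ref, i_max):
    # evaluate every candidate vector with the prediction model and
    # apply the one that minimizes the cost
    best, best_cost = None, float("inf")
    for v in vectors:
        t_pred, f_pred, i_pred = predict(v)
        c = cost(t_ref - t_pred, f_ref - f_pred, i_pred, i_max)
        if c < best_cost:
            best, best_cost = v, c
    return best
```

In a real drive the candidate set is the inverter's discrete switching states and `predict` comes from the machine model; the enumeration-and-argmin structure is the same.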
Predictive modeling of coral disease distribution within a reef system.
Gareth J Williams
Full Text Available Diseases often display complex and distinct associations with their environment due to differences in etiology, modes of transmission between hosts, and the shifting balance between pathogen virulence and host resistance. Statistical modeling has been underutilized in coral disease research to explore the spatial patterns that result from this triad of interactions. We tested the hypotheses that: (1) coral diseases show distinct associations with multiple environmental factors, (2) incorporating interactions (synergistic collinearities) among environmental variables is important when predicting coral disease spatial patterns, and (3) modeling overall coral disease prevalence (the prevalence of multiple diseases as a single proportion value) will increase predictive error relative to modeling the same diseases independently. Four coral diseases: Porites growth anomalies (PorGA), Porites tissue loss (PorTL), Porites trematodiasis (PorTrem), and Montipora white syndrome (MWS), and their interactions with 17 predictor variables were modeled using boosted regression trees (BRT) within a reef system in Hawaii. Each disease showed distinct associations with the predictors. Environmental predictors showing the strongest overall associations with the coral diseases were both biotic and abiotic. PorGA was optimally predicted by a negative association with turbidity, PorTL and MWS by declines in butterflyfish and juvenile parrotfish abundance respectively, and PorTrem by a modal relationship with Porites host cover. Incorporating interactions among predictor variables contributed to the predictive power of our models, particularly for PorTrem. Combining diseases (using overall disease prevalence as the model response) led to an average six-fold increase in cross-validation predictive deviance over modeling the diseases individually. We therefore recommend coral diseases to be modeled separately, unless known to have etiologies that respond in a similar manner to...
Specialized Language Models using Dialogue Predictions
Popovici, C; Popovici, Cosmin; Baggia, Paolo
1996-01-01
This paper analyses language modeling in spoken dialogue systems for accessing a database. The use of several language models obtained by exploiting dialogue predictions gives better results than the use of a single model for the whole dialogue interaction. For this reason several models have been created, each one for a specific system question, such as the request or the confirmation of a parameter. The use of dialogue-dependent language models increases the performance both at the recognition and at the understanding level, especially on answers to system requests. Moreover, other methods to increase performance, like automatic clustering of vocabulary words or the use of better acoustic models during recognition, do not affect the improvements given by dialogue-dependent language models. The system used in our experiments is Dialogos, the Italian spoken dialogue system used for accessing railway timetable information over the telephone. The experiments were carried out on a large corpus of dialogues coll...
Modeling, Prediction, and Control of Heating Temperature for Tube Billet
Yachun Mao
2015-01-01
Annular furnaces have multivariate, nonlinear, large time lag, and cross-coupling characteristics. The prediction and control of the exit temperature of a tube billet are important but difficult. We establish a prediction model for the final temperature of a tube billet using the OS-ELM-DRPLS method. We address the complex production characteristics, integrate the advantages of the PLS and ELM algorithms in establishing linear and nonlinear models, and consider model updating and data lag. Based on the proposed model, we design a prediction control algorithm for tube billet temperature. The algorithm is validated using the practical production data of Baosteel Co., Ltd. Results show that the model achieves the precision required in industrial applications. The temperature of the tube billet can be controlled within the required temperature range through a compensation control method.
Experimental study on prediction model for maximum rebound ratio
LEI Wei-dong; TENG Jun; A.HEFNY; ZHAO Jian; GUAN Jiong
2007-01-01
The proposed prediction model for estimating the maximum rebound ratio was applied to a field explosion test, the Mandai test in Singapore. The estimated possible maximum peak particle velocities (PPVs) were compared with the field records. Three of the four available field-recorded PPVs lie exactly below the estimated possible maximum values, as expected, while the fourth available field-recorded PPV lies close to, and slightly higher than, the estimated maximum possible PPV. The comparison results show that the predicted PPVs from the proposed prediction model for the maximum rebound ratio match the field-recorded PPVs better than those from two empirical formulae. The very good agreement between the estimated and field-recorded values validates the proposed prediction model for estimating PPV in a rock mass with a set of joints due to application of a two-dimensional compressional wave at the boundary of a tunnel or a borehole.
Groundwater Level Prediction using M5 Model Trees
Nalarajan, Nitha Ayinippully; Mohandas, C.
2015-01-01
Groundwater is an important resource, readily available and having high economic value and social benefit. Recently, it has been considered a dependable source of uncontaminated water. During the past two decades, increased rates of extraction and other unsustainable human actions have resulted in a groundwater crisis, both qualitative and quantitative. Under the prevailing circumstances, the availability of predicted groundwater levels increases the importance of this valuable resource, as an aid in the planning of groundwater resources. For this purpose, data-driven prediction models are widely used in the present-day world. The M5 model tree (MT) is a popular soft computing approach emerging as a promising method for numeric prediction, producing understandable models. The present study discusses groundwater level predictions using MT, employing only the historical groundwater levels from a groundwater monitoring well. The results showed that MT can be successfully used for forecasting groundwater levels.
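A model tree differs from an ordinary regression tree in that each leaf holds a linear model rather than a constant. A minimal one-split sketch follows; the fixed split point and the data are hypothetical, whereas the real M5 algorithm chooses splits by standard-deviation reduction and then prunes and smooths the tree.

```python
def fit_line(xs, ys):
    # ordinary least squares for a single predictor: y = a + b*x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return my - slope * mx, slope

def model_tree_predict(xs, ys, x_new, split):
    # one-split "model tree": a separate linear model on each side of the split
    left = [(x, y) for x, y in zip(xs, ys) if x <= split]
    right = [(x, y) for x, y in zip(xs, ys) if x > split]
    side = left if x_new <= split else right
    a, b = fit_line([x for x, _ in side], [y for _, y in side])
    return a + b * x_new
```

Because each leaf is a regression rather than a mean, the tree extrapolates a local trend, which is what makes model trees attractive for level forecasting.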
A Fusion Model for CPU Load Prediction in Cloud Computing
Dayu Xu
2013-11-01
Load prediction plays a key role in cost-optimal resource allocation and datacenter energy saving. In this paper, we use real-world traces from a Cloud platform and propose a fusion model to forecast future CPU loads. First, long CPU load time series are divided into short sequences of the same length, drawn from the historical data on the basis of the cloud control cycle. Then we use the kernel fuzzy c-means clustering algorithm to group the subsequences into clusters. For each cluster, given the current load sequence, a genetic-algorithm-optimized wavelet Elman neural network prediction model is exploited to predict the CPU load in the next time interval. Finally, we obtain the optimal cloud computing CPU load prediction from the cluster, and its corresponding predictor, with the minimum forecasting error. Experimental results show that our algorithm performs better than other models reported in previous works.
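The cluster-then-predict flow can be sketched in miniature. Here a plain Euclidean nearest-centroid assignment and a per-cluster mean stand in for the paper's kernel fuzzy c-means and GA-optimized wavelet Elman network; the centroids and load values are made up solely for illustration.

```python
def euclid(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def build_clusters(sequences, centroids):
    # assign each (history, next_load) training pair to its nearest centroid
    clusters = {i: [] for i in range(len(centroids))}
    for hist, nxt in sequences:
        i = min(range(len(centroids)), key=lambda k: euclid(hist, centroids[k]))
        clusters[i].append(nxt)
    return clusters

def predict_load(current, centroids, clusters):
    # pick the cluster whose centroid best matches the current sequence,
    # then use that cluster's predictor (here: a simple mean) for the next value
    i = min(range(len(centroids)), key=lambda k: euclid(current, centroids[k]))
    vals = clusters[i]
    return sum(vals) / len(vals)
```

Swapping the per-cluster mean for a trained neural predictor recovers the structure the abstract describes: one specialized forecaster per load regime.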
Disease prediction models and operational readiness.
Courtney D Corley
The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. We define a disease event to be a biological event with focus on the One Health paradigm. These events are characterized by evidence of infection and/or disease condition. We reviewed models that attempted to predict a disease event, not merely its transmission dynamics, and we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). We searched commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. After removal of duplications and extraneous material, a core collection of 6,524 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. As a result, we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models, classified as one or more of the following: event prediction (4), spatial (26), ecological niche (28), diagnostic or clinical (6), spread or response (9), and reviews (3). The model parameters (e.g., etiology, climatic, spatial, cultural) and data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) were recorded and reviewed. A component of this review is the identification of verification and validation (V&V) methods applied to each model, if any V&V method was reported. All models were classified as either having undergone some verification or validation method, or no verification or validation. We close by outlining an initial set of operational readiness level guidelines for disease prediction models based upon established Technology...
Model Predictive Control based on Finite Impulse Response Models
Prasath, Guru; Jørgensen, John Bagterp
2008-01-01
We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations...
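The FIR model underlying such a controller predicts the output as a finite weighted sum of past inputs, y[k] = Σ_i h[i]·u[k−i]. A minimal sketch of that prediction step (the coefficients are illustrative, and the regularization and constraint handling of the actual controller are omitted):

```python
def fir_predict(h, u):
    # output of a finite impulse response model: y[k] = sum_i h[i] * u[k - i]
    n = len(h)
    return [sum(h[i] * u[k - i] for i in range(n) if k - i >= 0)
            for k in range(len(u))]
```

Because the prediction is linear in the inputs, the controller's optimization over future inputs reduces to a quadratic program, which is what makes FIR-based MPC attractive in practice.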
A predictive fatigue life model for anodized 7050 aluminium alloy
Chaussumier, Michel; Mabru, Catherine; Shahzad, Majid; Chieragatti, Rémy; Rezaï-Aria, Farhad
2013-01-01
The objective of this study is to predict the fatigue life of anodized 7050 aluminum alloy specimens. In the case of the anodized 7050-T7451 alloy, fractographic observations of fatigue-tested specimens showed that pickling pits were the predominant sites for crack nucleation and subsequent failure. It has been shown that fatigue failure was favored by the presence of multiple cracks. From these experimental results, a fatigue life predictive model has been developed including...
Predicting nucleic acid binding interfaces from structural models of proteins.
Dror, Iris; Shazman, Shula; Mukherjee, Srayanta; Zhang, Yang; Glaser, Fabian; Mandel-Gutfreund, Yael
2012-02-01
The function of DNA- and RNA-binding proteins can be inferred from the characterization and accurate prediction of their binding interfaces. However, the main pitfall of various structure-based methods for predicting nucleic acid binding function is that they are all limited to a relatively small number of proteins for which high-resolution three-dimensional structures are available. In this study, we developed a pipeline for extracting functional electrostatic patches from surfaces of protein structural models, obtained using the I-TASSER protein structure predictor. The largest positive patches are extracted from the protein surface using the patchfinder algorithm. We show that functional electrostatic patches extracted from an ensemble of structural models highly overlap the patches extracted from high-resolution structures. Furthermore, by testing our pipeline on a set of 55 known nucleic acid binding proteins for which I-TASSER produces high-quality models, we show that the method accurately identifies the nucleic acids binding interface on structural models of proteins. Employing a combined patch approach we show that patches extracted from an ensemble of models better predicts the real nucleic acid binding interfaces compared with patches extracted from independent models. Overall, these results suggest that combining information from a collection of low-resolution structural models could be a valuable approach for functional annotation. We suggest that our method will be further applicable for predicting other functional surfaces of proteins with unknown structure. Copyright © 2011 Wiley Periodicals, Inc.
Predicting Career Advancement with Structural Equation Modelling
Heimler, Ronald; Rosenberg, Stuart; Morote, Elsa-Sofia
2012-01-01
Purpose: The purpose of this paper is to use the authors' prior findings concerning basic employability skills in order to determine which skills best predict career advancement potential. Design/methodology/approach: Utilizing survey responses of human resource managers, the employability skills showing the largest relationships to career…
Electrostatic ion thrusters - towards predictive modeling
Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)
2014-02-15
The development of electrostatic ion thrusters so far has mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This resulted in considerable effort regarding prototype design, construction and testing and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and spacecraft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules, a new quality in the description of electrostatic thrusters can be reached. These open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)
Gas explosion prediction using CFD models
Niemann-Delius, C.; Okafor, E. [RWTH Aachen Univ. (Germany); Buhrow, C. [TU Bergakademie Freiberg Univ. (Germany)
2006-07-15
A number of CFD models are currently available to model gaseous explosions in complex geometries. Some of these tools allow the representation of complex environments within hydrocarbon production plants. In certain explosion scenarios, a correction is usually made for the presence of buildings and other complexities by using crude approximations to obtain realistic estimates of explosion behaviour as can be found when predicting the strength of blast waves resulting from initial explosions. With the advance of computational technology, and greater availability of computing power, computational fluid dynamics (CFD) tools are becoming increasingly available for solving such a wide range of explosion problems. A CFD-based explosion code - FLACS can, for instance, be confidently used to understand the impact of blast overpressures in a plant environment consisting of obstacles such as buildings, structures, and pipes. With its porosity concept representing geometry details smaller than the grid, FLACS can represent geometry well, even when using coarse grid resolutions. The performance of FLACS has been evaluated using a wide range of field data. In the present paper, the concept of computational fluid dynamics (CFD) and its application to gas explosion prediction is presented. Furthermore, the predictive capabilities of CFD-based gaseous explosion simulators are demonstrated using FLACS. Details about the FLACS-code, some extensions made to FLACS, model validation exercises, application, and some results from blast load prediction within an industrial facility are presented. (orig.)
Genetic models of homosexuality: generating testable predictions.
Gavrilets, Sergey; Rice, William R
2006-12-22
Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality including: (i) chromosomal location, (ii) dominance among segregating alleles and (iii) effect sizes that distinguish between the two major models for their polymorphism: the overdominance and sexual antagonism models. We conclude that the measurement of the genetic characteristics of quantitative trait loci (QTLs) found in genomic screens for genes influencing homosexuality can be highly informative in resolving the form of natural selection maintaining their polymorphism.
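The overdominance model mentioned above makes a concrete, checkable prediction: with heterozygote advantage (fitnesses 1−s, 1, 1−t for genotypes AA, Aa, aa), the allele frequency converges to the interior equilibrium p* = t/(s+t) regardless of where it starts. A quick numerical verification, with arbitrary illustrative selection coefficients:

```python
def next_allele_freq(p, s, t):
    # one generation of viability selection under heterozygote advantage:
    # fitnesses w_AA = 1 - s, w_Aa = 1, w_aa = 1 - t
    q = 1 - p
    w_bar = p * p * (1 - s) + 2 * p * q + q * q * (1 - t)
    return (p * p * (1 - s) + p * q) / w_bar

def equilibrium(p0, s, t, gens=500):
    # iterate the recursion until the frequency settles
    p = p0
    for _ in range(gens):
        p = next_allele_freq(p, s, t)
    return p
```

With s = 0.2 and t = 0.3 the predicted equilibrium is 0.3 / 0.5 = 0.6, reached from any interior starting frequency; it is this kind of stable polymorphism signature that distinguishes the overdominance model from sexual antagonism in genomic data.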
A Study On Distributed Model Predictive Consensus
Keviczky, Tamas
2008-01-01
We investigate convergence properties of a proposed distributed model predictive control (DMPC) scheme, where agents negotiate to compute an optimal consensus point using an incremental subgradient method based on primal decomposition as described in Johansson et al. [2006, 2007]. The objective of the distributed control strategy is to agree upon and achieve an optimal common output value for a group of agents in the presence of constraints on the agent dynamics using local predictive controllers. Stability analysis using a receding horizon implementation of the distributed optimal consensus scheme is performed. Conditions are given under which convergence can be obtained even if the negotiations do not reach full consensus.
Nonconvex model predictive control for commercial refrigeration
Gybel Hovgaard, Tobias; Boyd, Stephen; Larsen, Lars F. S.; Bagterp Jørgensen, John
2013-08-01
We consider the control of a commercial multi-zone refrigeration system, which consists of several cooling units that share a common compressor and is used to cool multiple areas or rooms. In each time period we choose the cooling capacity for each unit and a common evaporation temperature. The goal is to minimise the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear, and the constraints are convex. The cost function, however, is nonconvex due to the temperature dependence of thermodynamic efficiency. To handle this nonconvexity we propose a sequential convex optimisation method, which typically converges in five or so iterations. We employ a fast convex quadratic programming solver to carry out the iterations, which is more than fast enough to run in real time. We demonstrate our method on a realistic model, with a full-year simulation and 15-minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost savings, on the order of 30%, compared to a standard thermostat-based control system. Perhaps more important, we see that the method exhibits sophisticated response to real-time variations in electricity prices. This demand response is critical to help balance real-time uncertainties in generation capacity associated with large penetration of intermittent renewable energy sources in a future smart grid.
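The sequential convex step described above, keeping the convex part of the objective exact while linearizing the nonconvex part around the current iterate, can be shown on a scalar toy problem. The instance g(x) = x² (convex) plus h(x) = sin(x) (nonconvex) is an assumed stand-in for the paper's energy cost, not its actual model.

```python
import math

def sequential_convex_min(g_min_with_linear, h_grad, x0, iters=50):
    # minimize g(x) + h(x): repeatedly linearize h around the current
    # iterate x_k and solve the convex subproblem min_x g(x) + h'(x_k) * x
    x = x0
    for _ in range(iters):
        x = g_min_with_linear(h_grad(x))
    return x

# toy instance: g(x) = x^2, so argmin_x x^2 + slope*x = -slope/2
x_star = sequential_convex_min(
    g_min_with_linear=lambda slope: -slope / 2.0,
    h_grad=math.cos,  # derivative of h(x) = sin(x)
    x0=0.0,
)
```

The iterates converge to a stationary point of x² + sin(x), i.e. where 2x + cos(x) = 0; in the refrigeration problem each subproblem is a convex QP rather than a closed-form scalar minimization, but the loop has the same shape.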
Predicting nucleosome positioning using a duration Hidden Markov Model
Widom Jonathan
2010-06-01
Background: The nucleosome is the fundamental packing unit of DNA in eukaryotic cells. Its detailed positioning on the genome is closely related to chromosome functions. Increasing evidence has shown that genomic DNA sequence itself is highly predictive of nucleosome positioning genome-wide. Therefore a fast software tool for predicting nucleosome positioning can help in understanding how a genome's nucleosome organization may facilitate genome function. Results: We present a duration Hidden Markov model for nucleosome positioning prediction by explicitly modeling the linker DNA length. The nucleosome and linker models trained from yeast data are re-scaled when making predictions for other species to adjust for differences in base composition. A software tool named NuPoP is developed in three formats for free download. Conclusions: Simulation studies show that modeling the linker length distribution and utilizing a base composition re-scaling method both improve the prediction of nucleosome positioning regarding sensitivity and false discovery rate. NuPoP provides a user-friendly software tool for predicting the nucleosome occupancy and the most probable nucleosome positioning map for genomic sequences of any size. When compared with two existing methods, NuPoP shows improved performance in sensitivity.
Brinch, Karoline Sidelmann; Sandberg, Anne; Baudoux, Pierre
2009-01-01
…was maintained (maximal relative efficacy [E(max)], 1.0- to 1.3-log reduction in CFU) even though efficacy was inferior to that of extracellular killing (E(max), >4.5-log CFU reduction). Animal studies included a novel use of the mouse peritonitis model, exploiting extra- and intracellular differentiation assays... concentration. These findings stress the importance of performing studies of extra- and intracellular activity since these features cannot be predicted from traditional MIC and killing kinetic studies. Application of both the THP-1 and the mouse peritonitis models showed that the in vitro results were similar...
Three-model ensemble wind prediction in southern Italy
Torcasio, Rosa Claudia; Federico, Stefano; Calidonna, Claudia Roberta; Avolio, Elenio; Drofa, Oxana; Landi, Tony Christian; Malguzzi, Piero; Buzzi, Andrea; Bonasoni, Paolo
2016-03-01
Quality of wind prediction is of great importance since a good wind forecast allows the prediction of available wind power, improving the penetration of renewable energies into the energy market. Here, a 1-year (1 December 2012 to 30 November 2013) three-model ensemble (TME) experiment for wind prediction is considered. The models employed, run operationally at National Research Council - Institute of Atmospheric Sciences and Climate (CNR-ISAC), are RAMS (Regional Atmospheric Modelling System), BOLAM (BOlogna Limited Area Model), and MOLOCH (MOdello LOCale in H coordinates). The area considered for the study is southern Italy and the measurements used for the forecast verification are those of the GTS (Global Telecommunication System). Comparison with observations is made every 3 h up to 48 h of forecast lead time. Results show that the three-model ensemble outperforms the forecast of each individual model. The RMSE improvement compared to the best model is between 22 and 30 %, depending on the season. It is also shown that the three-model ensemble outperforms the IFS (Integrated Forecasting System) of the ECMWF (European Centre for Medium-Range Weather Forecast) for the surface wind forecasts. Notably, the three-model ensemble forecast performs better than each unbiased model, showing the added value of the ensemble technique. Finally, the sensitivity of the three-model ensemble RMSE to the length of the training period is analysed.
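The bias removal and averaging behind such a multi-model ensemble can be sketched numerically: debias each model over a training window, average the members, and compare RMSEs. The observation and forecast values below are fabricated solely to exercise the functions; they are not data from the study.

```python
def rmse(pred, obs):
    # root-mean-square error of a forecast against observations
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)) ** 0.5

def debias(pred, obs):
    # remove the model's mean error (bias) estimated over the window
    bias = sum(p - o for p, o in zip(pred, obs)) / len(obs)
    return [p - bias for p in pred]

def ensemble_mean(models_pred):
    # equal-weight average of the member forecasts at each time
    return [sum(vals) / len(vals) for vals in zip(*models_pred)]
```

When the members' residual errors are only weakly correlated, the averaged forecast has a lower RMSE than any single debiased member, which is the "added value of the ensemble technique" the abstract reports.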
Improving Saliency Models by Predicting Human Fixation Patches
Dubey, Rachit
2015-04-16
There is growing interest in studying the Human Visual System (HVS) to supplement and improve the performance of computer vision tasks. A major challenge for current visual saliency models is predicting saliency in cluttered scenes (i.e. high false positive rate). In this paper, we propose a fixation patch detector that predicts image patches that contain human fixations with high probability. Our proposed model detects sparse fixation patches with an accuracy of 84% and eliminates non-fixation patches with an accuracy of 84%, demonstrating that low-level image features can indeed be used to short-list and identify human fixation patches. We then show how these detected fixation patches can be used as saliency priors for popular saliency models, thus reducing false positives while maintaining true positives. Extensive experimental results show that our proposed approach allows state-of-the-art saliency methods to achieve better prediction performance on benchmark datasets.
Patrick, Christopher J; Yuan, Lester L
2017-07-01
Flow alteration is widespread in streams, but current understanding of the effects of differences in flow characteristics on stream biological communities is incomplete. We tested hypotheses about the effect of variation in hydrology on stream communities by using generalized additive models to relate watershed information to the values of different flow metrics at gauged sites. Flow models accounted for 54-80% of the spatial variation in flow metric values among gauged sites. We then used these models to predict flow metrics in 842 ungauged stream sites in the mid-Atlantic United States that were sampled for fish, macroinvertebrates, and environmental covariates. Fish and macroinvertebrate assemblages were characterized in terms of a suite of metrics that quantified aspects of community composition, diversity, and functional traits that were expected to be associated with differences in flow characteristics. We related modeled flow metrics to biological metrics in a series of stressor-response models. Our analyses identified both drying and base flow instability as explaining 30-50% of the observed variability in fish and invertebrate community composition. Variations in community composition were related to variations in the prevalence of dispersal traits in invertebrates and trophic guilds in fish. The results demonstrate that we can use statistical models to predict hydrologic conditions at bioassessment sites, which, in turn, we can use to estimate relationships between flow conditions and biological characteristics. This analysis provides an approach to quantify the effects of spatial variation in flow metrics using readily available biomonitoring data. © 2017 by the Ecological Society of America.
Performance model to predict overall defect density
J Venkatesh
2012-08-01
Management by metrics is expected of IT service providers that want to remain differentiated. Given a project and its associated parameters and dynamics, its behaviour and outcome need to be predicted. There is much focus on the end state and on minimizing defect leakage as much as possible. In most cases the actions taken are reactive, and come too late in the life cycle: root cause analysis and corrective actions can be implemented only to the benefit of the next project. The focus has to shift left, towards the execution phase, rather than waiting for lessons to be learnt after implementation. How do we proactively predict defect metrics and put a preventive action plan in place? This paper illustrates a process performance model that predicts overall defect density based on data from projects in an organization.
Neuro-fuzzy modeling in bankruptcy prediction
Vlachos D.
2003-01-01
For the past 30 years the problem of bankruptcy prediction has been thoroughly studied. From the paper of Altman in 1968 to the papers of the 1990s, the progress in prediction accuracy was not satisfactory. This paper investigates an alternative modeling of the system (firm), combining neural networks and fuzzy controllers, i.e. using neuro-fuzzy models. Classical modeling is based on mathematical models that describe the behavior of the firm under consideration. The main idea of fuzzy control, on the other hand, is to build a model of a human control expert who is capable of controlling the process without thinking in terms of a mathematical model. This control expert specifies his control actions in the form of linguistic rules. These control rules are translated into the framework of fuzzy set theory, providing a calculus which can simulate the behavior of the control expert and enhance its performance. The accuracy of the model is studied using datasets from previous research papers.
Outcome Prediction in Mathematical Models of Immune Response to Infection.
Manuel Mai
Clinicians need to predict patient outcomes with high accuracy as early as possible after disease inception. In this manuscript, we show that patient-to-patient variability sets a fundamental limit on outcome prediction accuracy for a general class of mathematical models of the immune response to infection. However, accuracy can be increased at the expense of delayed prognosis. We investigate several systems of ordinary differential equations (ODEs) that model the host immune response to a pathogen load. Advantages of systems of ODEs for investigating the immune response to infection include the ability to collect data on large numbers of 'virtual patients', each with a given set of model parameters, and to obtain many time points during the course of the infection. We implement patient-to-patient variability v in the ODE models by randomly selecting the model parameters from distributions with coefficients of variation v that are centered on physiological values. We use logistic regression with one-versus-all classification to predict the discrete steady-state outcomes of the system. We find that the prediction algorithm achieves near 100% accuracy for v = 0, and that the accuracy decreases with increasing v for all ODE models studied. The fact that multiple steady-state outcomes can be obtained for a given initial condition, i.e. the basins of attraction overlap in the space of initial conditions, limits the prediction accuracy for v > 0. Increasing the elapsed time of the variables used to train and test the classifier increases the prediction accuracy, while adding explicit external noise to the ODE models decreases the prediction accuracy. Our results quantify the competition between early prognosis and high prediction accuracy that is frequently encountered by clinicians.
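The one-versus-all classification step described above can be sketched in a few lines: each discrete steady-state outcome gets its own binary logistic scorer, and the predicted outcome is the one with the highest probability. The outcome labels and coefficients below are hypothetical placeholders, not values fitted in the paper.

```python
import math

def sigmoid(z):
    """Logistic function mapping a linear score to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

def one_vs_all_predict(x, classifiers):
    """Return the outcome label whose binary logistic scorer assigns the
    highest probability to feature vector x.

    classifiers maps label -> (intercept, weight_vector).
    """
    scores = {label: sigmoid(b0 + sum(w * xi for w, xi in zip(ws, x)))
              for label, (b0, ws) in classifiers.items()}
    return max(scores, key=scores.get)

# Hypothetical two-outcome example: a single measured variable, with
# opposite-sign weights for the two steady states.
classifiers = {"clearance": (0.0, [1.0]), "persistence": (0.0, [-1.0])}
```

For instance, `one_vs_all_predict([2.0], classifiers)` picks the label whose scorer responds positively to the measurement.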
Pressure prediction model for compression garment design.
Leung, W Y; Yuen, D W; Ng, Sun Pui; Shi, S Q
2010-01-01
Based on the application of Laplace's law to compression garments, an equation for predicting garment pressure, incorporating the body circumference, the cross-sectional area of the fabric, the applied strain (as a function of reduction factor), and the corresponding Young's modulus, is developed. Design procedures are presented to predict garment pressure using the aforementioned parameters for clinical applications. Compression garments have been widely used in treating burn scars. Fabricating a compression garment with the required pressure is important in the healing process. A systematic and scientific design method can enable the occupational therapist and the compression garment manufacturer to custom-make a compression garment with a specific pressure. The objectives of this study are 1) to develop a pressure prediction model incorporating different design factors to estimate the pressure exerted by compression garments before fabrication; and 2) to propose design procedures for clinical applications. Three kinds of fabrics cut at different bias angles were tested under uniaxial tension, as were samples made in a double-layered structure. Sets of nonlinear force-extension data were obtained for calculating the predicted pressure. Using the value at a 0° bias angle as reference, the Young's modulus can vary by as much as 29% for fabric type P11117, 43% for fabric type PN2170, and even 360% for fabric type AP85120 at a reduction factor of 20%. When comparing the predicted pressures calculated from the single-layered and double-layered fabrics, the double-layered construction provides a larger range of target pressure at a particular strain. The anisotropic and nonlinear behaviors of the fabrics have thus been determined. Compression garments can be methodically designed by the proposed analytical pressure prediction model.
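A minimal sketch of the Laplace's-law relationship the abstract builds on: for a cylindrical limb, pressure is proportional to fabric tension per unit length divided by the radius of curvature, i.e. P = 2πT/C for circumference C. The tension helper assumes a simple linear stress-strain relation; the actual study uses measured nonlinear force-extension data, so these formulas and parameter names are illustrative only.

```python
import math

def fabric_tension(youngs_modulus_pa, strain, cross_section_m2, width_m):
    """Tension per unit garment length (N/m), assuming linear elasticity:
    tensile force = E * strain * A, spread over the fabric width."""
    return youngs_modulus_pa * strain * cross_section_m2 / width_m

def garment_pressure(tension_n_per_m, circumference_m):
    """Laplace's law for a cylinder: P = 2*pi*T / C, in pascals."""
    return 2 * math.pi * tension_n_per_m / circumference_m
```

For example, a tension of 100 N/m around a 0.3 m circumference gives roughly 2.1 kPa (about 16 mmHg), within the usual compression-therapy range.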
A Prediction Model of MF Radiation in Environmental Assessment
HE-SHAN GE; YAN-FENG HONG
2006-01-01
Objective To predict the impact of MF radiation on human health. Methods The vertical distribution of field intensity was estimated by analogy with values measured in a simulation measurement. Results A form of analogy based on a geometric-proportion decay pattern is put forward. It showed that with increasing height the field intensity increased according to a geometric-proportion law. Conclusion This geometric-proportion prediction model can be used to estimate the impact of MF radiation on an inhabited environment, and can act as a reference pattern in predicting the environmental impact level of MF radiation.
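The geometric-proportion extrapolation described above can be sketched as fitting a constant ratio between successive measured values and projecting it forward. This generic helper is an illustration of the idea, not the authors' exact procedure.

```python
def geometric_extrapolate(values, steps):
    """Fit a constant ratio r from consecutive measurements and project
    the series forward by the given number of steps."""
    ratios = [b / a for a, b in zip(values, values[1:])]
    r = sum(ratios) / len(ratios)          # average ratio of the progression
    out, last = [], values[-1]
    for _ in range(steps):
        last *= r
        out.append(last)
    return out
```

With measurements following a clean progression such as `[1.0, 2.0, 4.0]`, the fitted ratio is 2 and the next two projected values are 8 and 16.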
Statistical assessment of predictive modeling uncertainty
Barzaghi, Riccardo; Marotta, Anna Maria
2017-04-01
When the results of geophysical models are compared with data, the uncertainties of the model are typically disregarded. We propose a method for defining the uncertainty of a geophysical model based on a numerical procedure that estimates the empirical auto- and cross-covariances of model-estimated quantities. These empirical values are then fitted by proper covariance functions and used to compute the covariance matrix associated with the model predictions. The method is tested using a geophysical finite element model in the Mediterranean region. Using a novel χ² analysis in which both data and model uncertainties are taken into account, the model's estimated tectonic strain pattern due to the Africa-Eurasia convergence in the area that extends from the Calabrian Arc to the Alpine domain is compared with that estimated from GPS velocities, taking into account the model uncertainty through its covariance structure and the covariance of the GPS estimates. The results indicate that including the estimated model covariance in the testing procedure leads to lower observed χ² values with better statistical significance, and may help sharpen identification of the best-fitting geophysical models.
Stability of theoretical model for catastrophic weather prediction
SHI Wei-hui; WANG Yue-peng
2007-01-01
Stability of the theoretical models for catastrophic weather prediction, which include the non-hydrostatic perfect elastic model and the anelastic model, is discussed and analyzed in detail. It is proved that the non-hydrostatic perfect elastic equation set is stable in the class of infinitely differentiable functions. For the anelastic equation set, however, the continuity equation is changed in form because of the particular hypothesis for the fluid, so "the matching of the viscosity coefficient with the incompressibility assumption" appears; thereby the most important equation set of this class in practical prediction shows the same instability in topological property as the Navier-Stokes equations, which should be avoided first in practical numerical prediction. In light of this, suggestions for amending the applied model are finally presented.
A kinetic model for predicting biodegradation.
Dimitrov, S; Pavlov, T; Nedelcheva, D; Reuschenbach, P; Silvani, M; Bias, R; Comber, M; Low, L; Lee, C; Parkerton, T; Mekenyan, O
2007-01-01
Biodegradation plays a key role in the environmental risk assessment of organic chemicals. The need to assess biodegradability of a chemical for regulatory purposes supports the development of a model for predicting the extent of biodegradation at different time frames, in particular the extent of ultimate biodegradation within a '10 day window' criterion as well as estimating biodegradation half-lives. Conceptually this implies expressing the rate of catabolic transformations as a function of time. An attempt to correlate the kinetics of biodegradation with molecular structure of chemicals is presented. A simplified biodegradation kinetic model was formulated by combining the probabilistic approach of the original formulation of the CATABOL model with the assumption of first order kinetics of catabolic transformations. Nonlinear regression analysis was used to fit the model parameters to OECD 301F biodegradation kinetic data for a set of 208 chemicals. The new model allows the prediction of biodegradation multi-pathways, primary and ultimate half-lives and simulation of related kinetic biodegradation parameters such as biological oxygen demand (BOD), carbon dioxide production, and the nature and amount of metabolites as a function of time. The model may also be used for evaluating the OECD ready biodegradability potential of a chemical within the '10-day window' criterion.
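The first-order kinetics assumption in the model above can be sketched as follows. The lag-phase handling and the 60% pass threshold for the OECD '10-day window' are common conventions assumed here for illustration, not details quoted from the paper.

```python
import math

def extent_degraded(k_per_day, t_days, lag_days=0.0):
    """Fraction ultimately degraded after time t under first-order
    kinetics, with degradation starting after an optional lag phase."""
    t = max(0.0, t_days - lag_days)
    return 1.0 - math.exp(-k_per_day * t)

def half_life(k_per_day):
    """Primary/ultimate half-life implied by a first-order rate constant."""
    return math.log(2) / k_per_day

def passes_10day_window(k_per_day, lag_days=0.0, threshold=0.6):
    """Hedged sketch of the ready-biodegradability check: at least 60%
    degradation within 10 days of the onset of degradation."""
    return extent_degraded(k_per_day, lag_days + 10.0, lag_days) >= threshold
```

A rate constant of about 0.069 per day corresponds to a 10-day half-life, which (50% in 10 days) would fail the 60% window criterion.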
Multi-model ensemble hydrologic prediction and uncertainties analysis
S. Jiang
2014-09-01
Modelling uncertainties (i.e. input errors, parameter uncertainties and model structural errors) inevitably exist in hydrological prediction. Much recent attention has focused on these, of which input-error modelling, parameter optimization and multi-model ensemble strategies are the three most popular methods for demonstrating the impacts of modelling uncertainties. In this paper the Xinanjiang model, the Hybrid rainfall–runoff model and the HYMOD model were applied to the Mishui Basin, south China, for daily streamflow ensemble simulation and uncertainty analysis. The three models were first calibrated by two parameter optimization algorithms, namely the Shuffled Complex Evolution method (SCE-UA) and the Shuffled Complex Evolution Metropolis method (SCEM-UA); next, the input uncertainty was accounted for by introducing a normally distributed error multiplier; then, the simulation sets calculated from the three models were combined by Bayesian model averaging (BMA). The results show that both parameter optimization algorithms generate good streamflow simulations; in particular, SCEM-UA can characterize parameter uncertainty and give the posterior distribution of the parameters. Considering the precipitation input uncertainty does not improve the streamflow simulation precision very much, while the BMA combination not only improves the streamflow prediction precision but also gives quantitative uncertainty bounds for the simulation sets. The SCEM-UA-calculated prediction interval is better than the SCE-UA-calculated one. These results suggest that considering the model parameters' uncertainties and performing multi-model ensemble simulations are very practical for streamflow prediction and flood forecasting, from which more precise predictions and more reliable uncertainty bounds can be generated.
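The BMA combination step can be sketched as a likelihood-weighted average of the member models' simulations (with equal priors, posterior weights are proportional to likelihood). Full BMA also yields predictive variances and uncertainty bounds, which this minimal illustration omits.

```python
def bma_combine(predictions, likelihoods):
    """Combine member-model time series by Bayesian model averaging.

    predictions: list of equal-length simulation series, one per model.
    likelihoods: one (relative) likelihood per model; equal priors assumed.
    Returns the posterior-weighted average series.
    """
    total = sum(likelihoods)
    weights = [lk / total for lk in likelihoods]   # posterior model weights
    n = len(predictions[0])
    return [sum(w * p[i] for w, p in zip(weights, predictions))
            for i in range(n)]
```

Two equally likely models simply average; a model three times as likely as its peer receives weight 0.75.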
Disease Prediction Models and Operational Readiness
Corley, Courtney D.; Pullum, Laura L.; Hartley, David M.; Benedum, Corey M.; Noonan, Christine F.; Rabinowitz, Peter M.; Lancaster, Mary J.
2014-03-19
INTRODUCTION: The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. One of the primary goals of this research was to characterize the viability of biosurveillance models to provide operationally relevant information for decision makers and to identify areas for future research. Two critical characteristics differentiate this work from other infectious disease modeling reviews. First, we reviewed models that attempted to predict the disease event, not merely its transmission dynamics. Second, we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). Methods: We searched dozens of commercial and government databases and harvested Google search results for eligible models, utilizing terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. The publication dates of the search results returned are bounded by the dates of coverage of each database and the date on which the search was performed; however, all searching was completed by December 31, 2010. This returned 13,767 webpages and 12,152 citations. After de-duplication and removal of extraneous material, a core collection of 6,503 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. Next, PNNL's IN-SPIRE visual analytics software was used to cross-correlate these publications with the definition of a biosurveillance model, resulting in the selection of 54 documents that matched the criteria. Ten of these documents, however, dealt purely with disease spread models, inactivation of bacteria, or the modeling of human immune system responses to pathogens rather than predicting disease events. As a result, we systematically reviewed 44 papers and the
Nonlinear model predictive control theory and algorithms
Grüne, Lars
2017-01-01
This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...
Embryo quality predictive models based on cumulus cells gene expression
Devjak R
2016-06-01
Since the introduction of in vitro fertilization (IVF) into the clinical practice of infertility treatment, indicators of high-quality embryos have been investigated. Cumulus cells (CC) have a specific gene expression profile according to the developmental potential of the oocyte they surround, and therefore specific gene expression could be used as a biomarker. The aim of our study was to combine more than one biomarker to observe improvement in the predictive value for embryo development. In this study, 58 CC samples from 17 IVF patients were analyzed. This study was approved by the Republic of Slovenia National Medical Ethics Committee. Gene expression analysis [quantitative real-time polymerase chain reaction (qPCR)] for five genes, analyzed according to embryo quality level, was performed. Two prediction models were tested for embryo quality prediction: a binary logistic model and a decision tree model. As the main outcome, gene expression levels for the five genes were taken and the area under the curve (AUC) for the two prediction models was calculated. Among the tested genes, AMHR2 and LIF showed a significant expression difference between high-quality and low-quality embryos. These two genes were used for the construction of the two prediction models: the binary logistic model yielded an AUC of 0.72 ± 0.08 and the decision tree model yielded an AUC of 0.73 ± 0.03. The two prediction models yielded similar predictive power to differentiate high- and low-quality embryos. In terms of eventual clinical decision making, the decision tree model resulted in easy-to-interpret rules that are highly applicable in clinical practice.
Predicting soil acidification trends at Plynlimon using the SAFE model
B. Reynolds
1997-01-01
The SAFE model has been applied to an acid grassland site located on base-poor stagnopodzol soils derived from Lower Palaeozoic greywackes. The model predicts that acidification of the soil has occurred in response to increased acid deposition following the industrial revolution. Limited recovery is predicted following the decline in sulphur deposition during the mid to late 1970s. Reducing excess sulphur and NOx deposition in 1998 to 40% and 70% of 1980 levels results in further recovery, but soil chemical conditions (base saturation, soil water pH and ANC) do not return to the values predicted for pre-industrial times. The SAFE model predicts that critical loads (expressed in terms of the (Ca+Mg+K):Al critical ratio) for six vegetation species found in acid grassland communities are not exceeded despite the increase in deposited acidity following the industrial revolution. The relative growth response of selected vegetation species characteristic of acid grassland swards has been predicted using a damage function linking growth to the soil solution base cation to aluminium ratio. The results show that very small growth reductions can be expected for 'acid tolerant' plants growing in acid upland soils. For more sensitive species such as Holcus lanatus, SAFE predicts that growth would have been reduced by about 20% between 1951 and 1983, when acid inputs were greatest. Recovery to c. 90% of normal growth (under laboratory conditions) is predicted as acidic inputs decline.
Predictive Modeling in Actinide Chemistry and Catalysis
Yang, Ping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-05-16
These are slides from a presentation on predictive modeling in actinide chemistry and catalysis. The following topics are covered in these slides: Structures, bonding, and reactivity (bonding can be quantified by optical probes and theory, and electronic structures and reaction mechanisms of actinide complexes); Magnetic resonance properties (transition metal catalysts with multi-nuclear centers, and NMR/EPR parameters); Moving to more complex systems (surface chemistry of nanomaterials, and interactions of ligands with nanoparticles); Path forward and conclusions.
A predictive model of music preference using pairwise comparisons
Jensen, Bjørn Sand; Gallego, Javier Saez; Larsen, Jan
2012-01-01
Music recommendation is an important aspect of many streaming services and multimedia systems; however, it is typically based on so-called collaborative filtering methods. In this paper we consider the recommendation task from a personal viewpoint and examine to which degree music preference can be elicited and predicted using simple and robust queries such as pairwise comparisons. We propose to model, and in turn predict, the pairwise music preference using a very flexible model based on Gaussian Process priors, for which we describe the required inference. We further propose a specific covariance function and evaluate the predictive performance on a novel dataset. In a recommendation-style setting we obtain a leave-one-out accuracy of 74% compared to 50% with random predictions, showing potential for further refinement and evaluation.
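At its simplest, a pairwise-comparison preference model reduces to a Bradley-Terry-style probability that one item beats another given latent scores. The Gaussian Process model in the paper places a prior over such score functions; this one-liner illustrates only the pairwise likelihood, not the GP inference.

```python
import math

def pairwise_preference_prob(score_a, score_b):
    """Probability that item A is preferred over item B, given latent
    preference scores (logistic link on the score difference)."""
    return 1.0 / (1.0 + math.exp(-(score_a - score_b)))
```

Equal scores give a coin-flip probability of 0.5; a score gap of 2 pushes the preference probability close to 0.88.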
The ARIC predictive model reliably predicted risk of type II diabetes in Asian populations
Chin Calvin
2012-04-01
Background: Identification of high-risk individuals is crucial for effective implementation of type 2 diabetes mellitus prevention programs. Several studies have shown that multivariable predictive functions perform as well as the 2-hour post-challenge glucose in identifying these high-risk individuals. The performance of these functions in Asian populations, where the rise in prevalence of type 2 diabetes mellitus is expected to be the greatest in the next several decades, is relatively unknown. Methods: Using data from three Asian populations in Singapore, we compared the performance of three multivariate predictive models in terms of their discriminatory power and calibration quality: the San Antonio Health Study model, the Atherosclerosis Risk in Communities (ARIC) model and the Framingham model. Results: The San Antonio Health Study and ARIC models had better discriminative power than using only fasting plasma glucose or the 2-hour post-challenge glucose. However, the Framingham model did not perform significantly better than fasting glucose or the 2-hour post-challenge glucose. All published models suffered from poor calibration. After recalibration, the ARIC model achieved good calibration, the San Antonio Health Study model showed a significant lack of fit in females, and the Framingham model showed a significant lack of fit in both females and males. Conclusions: We conclude that adoption of the ARIC model for Asian populations is feasible and highly recommended when local prospective data are unavailable.
Real-Time Optimization for Economic Model Predictive Control
Sokoler, Leo Emil; Edlund, Kristian; Frison, Gianluca
2012-01-01
In this paper, we develop an efficient homogeneous and self-dual interior-point method for the linear programs arising in economic model predictive control. To exploit structure in the optimization problems, the algorithm employs a highly specialized Riccati iteration procedure. Simulations show...
Probabilistic prediction models for aggregate quarry siting
Robinson, G.R.; Larkins, P.M.
2007-01-01
Weights-of-evidence (WofE) and logistic regression techniques were used in a GIS framework to predict the spatial likelihood (prospectivity) of crushed-stone aggregate quarry development. The joint conditional probability models, based on geology, transportation network, and population density variables, were defined using quarry location and time of development data for the New England States, North Carolina, and South Carolina, USA. The Quarry Operation models describe the distribution of active aggregate quarries, independent of the date of opening. The New Quarry models describe the distribution of aggregate quarries when they open. Because of the small number of new quarries developed in the study areas during the last decade, independent New Quarry models have low parameter estimate reliability. The performance of parameter estimates derived for Quarry Operation models, defined by a larger number of active quarries in the study areas, were tested and evaluated to predict the spatial likelihood of new quarry development. Population density conditions at the time of new quarry development were used to modify the population density variable in the Quarry Operation models to apply to new quarry development sites. The Quarry Operation parameters derived for the New England study area, Carolina study area, and the combined New England and Carolina study areas were all similar in magnitude and relative strength. The Quarry Operation model parameters, using the modified population density variables, were found to be a good predictor of new quarry locations. Both the aggregate industry and the land management community can use the model approach to target areas for more detailed site evaluation for quarry location. The models can be revised easily to reflect actual or anticipated changes in transportation and population features. © International Association for Mathematical Geology 2007.
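The weights-of-evidence calculation behind such prospectivity models can be sketched directly from 2×2 counts of quarry cells versus evidence cells. The unit-cell counts in the example are invented for illustration, not taken from the study areas.

```python
import math

def weights_of_evidence(n_both, n_sites, n_evidence, n_area):
    """Compute W+, W-, and contrast C = W+ - W- from unit-cell counts.

    n_both:     cells with both the evidence pattern and a quarry
    n_sites:    total quarry cells
    n_evidence: total cells showing the evidence pattern
    n_area:     total cells in the study area
    """
    p_b_given_d = n_both / n_sites                               # P(B|D)
    p_b_given_nd = (n_evidence - n_both) / (n_area - n_sites)    # P(B|~D)
    w_plus = math.log(p_b_given_d / p_b_given_nd)
    w_minus = math.log((1 - p_b_given_d) / (1 - p_b_given_nd))
    return w_plus, w_minus, w_plus - w_minus
```

A positive contrast indicates the evidence layer (e.g. proximity to a road network) is spatially associated with quarry locations.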
Predicting Footbridge Response using Stochastic Load Models
Pedersen, Lars; Frier, Christian
2013-01-01
Walking parameters such as step frequency, pedestrian mass, dynamic load factor, etc. are basically stochastic, although it is quite common to adopt deterministic models for these parameters. The present paper considers a stochastic approach to modelling the action of pedestrians, but when doing so … as it pinpoints which decisions to be concerned about when the goal is to predict footbridge response. The studies involve estimating footbridge responses using Monte-Carlo simulations, and focus is on estimating vertical structural response to single-person loading.
Predictive In Vivo Models for Oncology.
Behrens, Diana; Rolff, Jana; Hoffmann, Jens
2016-01-01
Experimental oncology research and preclinical drug development both substantially require specific, clinically relevant in vitro and in vivo tumor models. The increasing knowledge about the heterogeneity of cancer has required a substantial restructuring of the test systems for the different stages of development. To be able to cope with the complexity of the disease, larger panels of patient-derived tumor models have to be implemented and extensively characterized. Together with individual genetically engineered tumor models, and supported by core functions for expression profiling and data analysis, an integrated discovery process has been generated for predictive and personalized drug development. Improved "humanized" mouse models should help to overcome current limitations imposed by the xenogeneic barrier between humans and mice. Establishment of a functional human immune system and a corresponding human microenvironment in laboratory animals will strongly support further research. Drug discovery, systems biology, and translational research are moving closer together to address all the new hallmarks of cancer, increase the success rate of drug development, and increase the predictive value of preclinical models.
Statistical Seasonal Sea Surface based Prediction Model
Suarez, Roberto; Rodriguez-Fonseca, Belen; Diouf, Ibrahima
2014-05-01
The interannual variability of sea surface temperature (SST) plays a key role in the strongly seasonal rainfall regime of the West African region. The predictability of the seasonal cycle of rainfall is widely discussed by the scientific community, with results that fail to be satisfactory due to the difficulty dynamical models have in reproducing the behavior of the Inter-Tropical Convergence Zone (ITCZ). To tackle this problem, a statistical model based on oceanic predictors has been developed at the Universidad Complutense de Madrid (UCM) with the aim of complementing and enhancing the predictability of the West African Monsoon (WAM) as an alternative to the coupled models. The model, called S4CAST (SST-based Statistical Seasonal Forecast), is based on discriminant analysis techniques, specifically Maximum Covariance Analysis (MCA) and Canonical Correlation Analysis (CCA). Beyond the application of the model to the prediction of rainfall in West Africa, its use extends to a range of different oceanic, atmospheric and health-related parameters influenced by sea surface temperature as a defining factor of variability.
Curtis, Gary P.; Lu, Dan; Ye, Ming
2015-01-01
While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. This study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. These reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Limitations of applying MLBMA to the
Predictability of Shanghai Stock Market by Agent-based Mix-game Model
Gou, C
2005-01-01
This paper reports an effort to use an agent-based mix-game model to predict financial time series. It introduces the prediction methodology based on the mix-game model and gives an example of its application to forecasting the Shanghai Index. The results show that this prediction methodology is effective and that the agent-based mix-game model is a potentially good model for predicting time series of financial markets.
Mantis: Predicting System Performance through Program Analysis and Modeling
Chun, Byung-Gon; Lee, Sangmin; Maniatis, Petros; Naik, Mayur
2010-01-01
We present Mantis, a new framework that automatically predicts program performance with high accuracy. Mantis integrates techniques from programming languages and machine learning for performance modeling, and is a radical departure from traditional approaches. Mantis extracts program features, which are information about program execution runs, through program instrumentation. It uses machine learning techniques to select the features relevant to performance and creates prediction models as a function of the selected features. Through program analysis, it then generates compact code slices that compute these feature values for prediction. Our evaluation shows that Mantis can achieve more than 93% accuracy with less than 10% of the data as the training set, which is a significant improvement over models that are oblivious to program features. The system generates code slices that are cheap to execute for computing feature values.
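The final fitting step, regressing measured run time on an extracted feature, can be illustrated with ordinary least squares on a single feature. Mantis itself selects among many candidate features with more sophisticated regression, so this is only a minimal stand-in under that simplifying assumption.

```python
def fit_linear(features, runtimes):
    """Ordinary least squares for one feature: runtime ~ a + b * feature.
    Returns the intercept a and slope b."""
    n = len(features)
    mean_x = sum(features) / n
    mean_y = sum(runtimes) / n
    slope = (sum((x - mean_x) * (y - mean_y)
                 for x, y in zip(features, runtimes))
             / sum((x - mean_x) ** 2 for x in features))
    intercept = mean_y - slope * mean_x
    return intercept, slope
```

Once fitted, predicting a new run's performance costs only evaluating the cheap feature slice and the linear model, which is the point of the Mantis design.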
A prediction model for Clostridium difficile recurrence
Francis D. LaBarbera
2015-02-01
Background: Clostridium difficile infection (CDI) is a growing problem in the community and hospital settings. Its incidence has been on the rise over the past two decades, and it is quickly becoming a major concern for the health care system. A high rate of recurrence is one of the major hurdles in the successful treatment of C. difficile infection. There have been few studies that have looked at patterns of recurrence. The studies currently available have shown a number of risk factors associated with C. difficile recurrence (CDR); however, there is little consensus on the impact of most of the identified risk factors. Methods: Our study was a retrospective chart review of 198 patients diagnosed with CDI via polymerase chain reaction (PCR) from February 2009 to June 2013. In our study, we decided to use a machine learning algorithm called the Random Forest (RF) to analyze all of the factors proposed to be associated with CDR. This model is capable of making predictions based on a large number of variables, and has outperformed numerous other models and statistical methods. Results: We came up with a model that was able to accurately predict CDR with a sensitivity of 83.3%, a specificity of 63.1%, and an area under the curve of 82.6%. Like other similar studies that have used the RF model, we also had very impressive results. Conclusions: We hope that in the future, machine learning algorithms, such as the RF, will see wider application.
Gamma-Ray Pulsars Models and Predictions
Harding, A K
2001-01-01
Pulsed emission from gamma-ray pulsars originates inside the magnetosphere, from radiation by charged particles accelerated near the magnetic poles or in the outer gaps. In polar cap models, the high energy spectrum is cut off by magnetic pair production above an energy that is dependent on the local magnetic field strength. While most young pulsars with surface fields in the range B = 10^{12} - 10^{13} G are expected to have high energy cutoffs around several GeV, the gamma-ray spectra of old pulsars having lower surface fields may extend to 50 GeV. Although the gamma-ray emission of older pulsars is weaker, detecting pulsed emission at high energies from nearby sources would be an important confirmation of polar cap models. Outer gap models predict more gradual high-energy turnovers at around 10 GeV, but also predict an inverse Compton component extending to TeV energies. Detection of pulsed TeV emission, which would not survive attenuation at the polar caps, is thus an important test of outer gap models. N...
Ground Motion Prediction Models for Caucasus Region
Jorjiashvili, Nato; Godoladze, Tea; Tvaradze, Nino; Tumanova, Nino
2016-04-01
Ground motion prediction models (GMPMs) relate ground motion intensity measures to variables describing earthquake source, path, and site effects. Estimation of expected ground motion is a fundamental part of earthquake hazard assessment. The most commonly used parameters for attenuation relations are peak ground acceleration and spectral acceleration, because these parameters give useful information for seismic hazard assessment. Development of the Georgian Digital Seismic Network began in 2003. In this study, new GMP models are obtained from new data recorded by the Georgian seismic network and by networks in neighboring countries. The models are estimated in the classical statistical way, by regression analysis. Site ground conditions are additionally considered, because the same earthquake recorded at the same distance may cause different damage depending on ground conditions. Empirical ground-motion prediction models (GMPMs) require adjustment to make them appropriate for site-specific scenarios; however, the process of making such adjustments remains a challenge. This work presents a holistic framework for the development of a peak ground acceleration (PGA) or spectral acceleration (SA) GMPE that is easily adjustable to different seismological conditions and does not suffer from the practical problems associated with adjustments in the response spectral domain.
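A toy version of the regression step described above, fitting an attenuation relation of the common form ln(PGA) = c0 + c1·M + c2·ln(R) by least squares, might look like this. The functional form and the synthetic, noise-free records are illustrative assumptions, not the paper's actual model or data.

```python
import math

# Toy GMPE regression: ln(PGA) = c0 + c1*M + c2*ln(R), fit by
# forming and solving the 3x3 normal equations.
def fit_gmpe(records):
    rows = [(1.0, M, math.log(R)) for M, R, _ in records]
    y = [math.log(pga) for _, _, pga in records]
    # Normal equations: (A^T A) c = A^T y
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)]
           for i in range(3)]
    aty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(3)]
    # Gaussian elimination (the normal matrix is positive definite here)
    for i in range(3):
        p = ata[i][i]
        for j in range(i + 1, 3):
            f = ata[j][i] / p
            ata[j] = [a - f * b for a, b in zip(ata[j], ata[i])]
            aty[j] -= f * aty[i]
    c = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        c[i] = (aty[i] - sum(ata[i][j] * c[j]
                             for j in range(i + 1, 3))) / ata[i][i]
    return c

# Synthetic records generated from c = (-2.0, 0.9, -1.1) with no
# noise, so the regression should recover the coefficients exactly.
true = (-2.0, 0.9, -1.1)
recs = [(M, R, math.exp(true[0] + true[1] * M + true[2] * math.log(R)))
        for M in (4.5, 5.5, 6.5) for R in (10.0, 30.0, 90.0)]
coef = fit_gmpe(recs)
```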
Modeling and Prediction of Krueger Device Noise
Guo, Yueping; Burley, Casey L.; Thomas, Russell H.
2016-01-01
This paper presents the development of a noise prediction model for aircraft Krueger flap devices that are considered as alternatives to leading-edge slotted slats. The prediction model decomposes the total Krueger noise into four components, generated by the unsteady flows, respectively, in the cove under the pressure-side surface of the Krueger, in the gap between the Krueger trailing edge and the main wing, around the brackets supporting the Krueger device, and around the cavity on the lower side of the main wing. For each noise component, the modeling follows a physics-based approach that aims at capturing the dominant noise-generating features in the flow and developing correlations between the noise and the flow parameters that control the noise generation processes. The far-field noise is modeled using each of the four noise components' respective spectral functions, far-field directivities, Mach number dependencies, component amplitudes, and other parametric trends. Preliminary validations are carried out using small-scale experimental data, and two applications are discussed: one for conventional aircraft and the other for advanced configurations. The former focuses on the parametric trends of Krueger noise with design parameters, while the latter reveals its importance in relation to other airframe noise components.
A generative model for predicting terrorist incidents
Verma, Dinesh C.; Verma, Archit; Felmlee, Diane; Pearson, Gavin; Whitaker, Roger
2017-05-01
A major concern in coalition peace-support operations is the incidence of terrorist activity. In this paper, we propose a generative model for the occurrence of terrorist incidents, and illustrate that an increase in diversity, as measured by the number of different social groups to which an individual belongs, is inversely correlated with the likelihood of a terrorist incident in the society. A generative model is one that can predict the likelihood of events in new contexts, as opposed to statistical models, which predict future incidents from the history of incidents in an existing context. Generative models can be useful in planning for persistent Intelligence, Surveillance and Reconnaissance (ISR), since they allow an estimation of the regions in the theater of operations where terrorist incidents may arise, and thus can be used to better allocate the assignment and deployment of ISR assets. In this paper, we present a taxonomy of terrorist incidents, identify factors related to the occurrence of terrorist incidents, and provide a mathematical analysis calculating the likelihood of occurrence of terrorist incidents in three common real-life scenarios arising in peace-keeping operations.
Goldhaber, Dan; Chaplin, Duncan Dunbar
2015-01-01
In an influential paper, Jesse Rothstein (2010) shows that standard value-added models (VAMs) suggest implausible and large future teacher effects on past student achievement. This is the basis of a falsification test that "appears" to indicate bias in typical VAM estimates of teacher contributions to student learning on standardized…
Econometric models for predicting confusion crop ratios
Umberger, D. E.; Proctor, M. H.; Clark, J. E.; Eisgruber, L. M.; Braschler, C. B. (Principal Investigator)
1979-01-01
Results for both the United States and Canada show that econometric models can provide estimates of confusion crop ratios that are more accurate than historical ratios. Whether these models can support the LACIE 90/90 accuracy criterion is uncertain. In the United States, experimenting with additional model formulations could provide improved models in some CRDs, particularly for winter wheat. Improved models may also be possible for the Canadian CDs. The more aggregated province/state models outperformed the individual CD/CRD models. This result was expected, partly because acreage statistics are based on sampling procedures, and sampling precision declines from the province/state level to the CD/CRD level. Declining sampling precision and the need to substitute province/state data for CD/CRD data introduced measurement error into the CD/CRD models.
Hierarchical Neural Regression Models for Customer Churn Prediction
Golshan Mohammadi
2013-01-01
Full Text Available As customers are the main assets of each industry, customer churn prediction is becoming a major task for companies striving to remain competitive. In the literature, the better applicability and efficiency of hierarchical data mining techniques has been reported. This paper considers three hierarchical models built by combining four different data mining techniques for churn prediction: backpropagation artificial neural networks (ANN), self-organizing maps (SOM), alpha-cut fuzzy c-means (α-FCM), and the Cox proportional hazards regression model. The hierarchical models are ANN + ANN + Cox, SOM + ANN + Cox, and α-FCM + ANN + Cox. The first component of each model clusters the data into churner and nonchurner groups and also filters out unrepresentative data or outliers. The clustered data are then used by the second technique to assign customers to the churner and nonchurner groups. Finally, the correctly classified data are used to create the Cox proportional hazards model. To evaluate the performance of the hierarchical models, an Iranian mobile dataset is considered. The experimental results show that the hierarchical models outperform the single Cox regression baseline model in terms of prediction accuracy, Type I and II errors, RMSE, and MAD metrics. In addition, the α-FCM + ANN + Cox model performs significantly better than the two other hierarchical models.
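The two-stage idea (cluster and filter, then classify) can be sketched with a few stdlib functions. Here a z-score filter stands in for outlier removal, 1-D two-means clustering and nearest-centroid classification stand in for the SOM/ANN components, and the Cox survival stage is omitted; all data and thresholds are invented for illustration.

```python
# Stage 1: filter outliers, cluster into two groups (churner vs
# nonchurner proxy). Stage 2: classify new customers by nearest
# centroid. Toy 1-D "usage" feature; real models use many features.
def zscore_filter(xs, cutoff=2.0):
    m = sum(xs) / len(xs)
    sd = (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5
    return [x for x in xs if abs(x - m) <= cutoff * sd]

def kmeans_1d(xs, iters=20):
    c = [min(xs), max(xs)]                      # two centroids
    for _ in range(iters):
        groups = ([], [])
        for x in xs:
            groups[0 if abs(x - c[0]) <= abs(x - c[1]) else 1].append(x)
        c = [sum(g) / len(g) if g else c[i] for i, g in enumerate(groups)]
    return c

def classify(x, c):
    return 0 if abs(x - c[0]) <= abs(x - c[1]) else 1

usage = [0.9, 1.0, 1.2, 4.8, 5.0, 5.2, 30.0]    # 30.0 is an outlier
clean = zscore_filter(usage)
centroids = kmeans_1d(clean)
label = classify(1.1, centroids)
```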
A Novel Trigger Model for Sales Prediction with Data Mining Techniques
Wenjie Huang
2015-05-01
Full Text Available Previous research on sales prediction has typically used a single prediction model. However, no single model performs best for all kinds of merchandise, and accurate prediction results for just one commodity are of little use to sellers; a general prediction for all commodities is needed. This paper describes a novel trigger system that matches each kind of commodity with a prediction model, giving better prediction results across different kinds of commodities. We identify factors related to commodity type for use in classification, and include several classical prediction models as basic models. Comparing the results of the trigger model with those of single models shows that the trigger model is more accurate. This has implications for business: sellers can use the proposed system to effectively predict the sales of several commodities.
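The trigger mechanism is essentially a dispatcher from commodity class to prediction model. A minimal sketch follows; the class names and the two toy baseline models are our assumptions, not the paper's actual classifier or model set.

```python
# Route each commodity class to the prediction model that suits it.
# Models here are toy baselines for illustration only.

def moving_average(history, window=3):
    return sum(history[-window:]) / min(window, len(history))

def naive_last(history):
    return history[-1]

TRIGGER = {                     # commodity class -> model (assumed)
    "seasonal": moving_average,
    "fast_moving": naive_last,
}

def predict_sales(commodity_class, history):
    model = TRIGGER.get(commodity_class, naive_last)   # default model
    return model(history)

p = predict_sales("seasonal", [10, 12, 14, 16])
```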
Optimal feedback scheduling of model predictive controllers
Pingfang ZHOU; Jianying XIE; Xiaolong DENG
2006-01-01
Model predictive control (MPC) cannot be reliably applied to real-time control systems because its computation time is not well defined. Implemented as an anytime algorithm, an MPC task allows computation time to be traded for control performance, thus making its timing predictable. Optimal feedback scheduling (FS-CBS) of a set of MPC tasks is presented to maximize the global control performance subject to limited processor time. Each MPC task is assigned a constant bandwidth server (CBS), whose reserved processor time is adjusted dynamically. The constraints in the FS-CBS guarantee schedulability of the total task set and stability of each component. The FS-CBS is shown to be robust against variation in the execution times of MPC tasks at runtime. Simulation results illustrate its effectiveness.
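The feedback-scheduling idea, periodically re-dividing a fixed processor budget among MPC tasks according to measured control performance, can be caricatured as follows. The proportional-to-error allocation rule and the reserve floor are illustrative assumptions, not the paper's FS-CBS optimization.

```python
# Re-split a fixed CPU utilization budget among MPC tasks in
# proportion to each task's measured control error, so struggling
# loops get more computation time. Each task keeps a minimum floor
# (its CBS reservation never drops to zero).
def allocate_bandwidth(errors, total_budget, floor=0.05):
    n = len(errors)
    base = floor * total_budget
    spare = total_budget - n * base
    s = sum(errors)
    if s == 0:                       # no error signal: split evenly
        return [total_budget / n] * n
    return [base + spare * e / s for e in errors]

shares = allocate_bandwidth([0.1, 0.3, 0.6], total_budget=0.8)
```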
Objective calibration of numerical weather prediction models
Voudouri, A.; Khain, P.; Carmona, I.; Bellprat, O.; Grazzini, F.; Avgoustoglou, E.; Bettems, J. M.; Kaufmann, P.
2017-07-01
Numerical weather prediction (NWP) and climate models use parameterization schemes for physical processes, which often include free or poorly confined parameters. Model developers normally calibrate the values of these parameters subjectively to improve the agreement of forecasts with available observations, a procedure referred to as expert tuning. A practicable objective multivariate calibration method built on a quadratic meta-model (MM), previously applied to a regional climate model (RCM), has been shown to be at least as good as expert tuning. Based on these results, an approach to applying the methodology to an NWP model is presented in this study. Challenges in transferring the methodology from RCM to NWP are not restricted to the use of higher resolution and different time scales: the sensitivity of NWP model quality to the model parameter space has to be clarified, and the overall procedure optimized in terms of the computing resources required for calibrating an NWP model. Three free model parameters affecting mainly turbulence parameterization schemes were originally selected for their influence on variables associated with daily forecasts, such as daily minimum and maximum 2 m temperature and 24 h accumulated precipitation. Preliminary results indicate that the approach is both affordable in terms of computer resources and meaningful in terms of improved forecast quality. In addition, the proposed methodology has the advantage of being a replicable procedure that can be applied when an updated model version is launched and/or to customize the same model implementation for different climatological areas.
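The meta-model step can be illustrated in one dimension: evaluate the forecast error score at a few values of a free parameter, fit the interpolating quadratic, and take its minimum as the calibrated value. The error data here are synthetic; the actual method is multivariate and fits the quadratic to many model runs.

```python
# Quadratic meta-model calibration in 1-D: interpolate three
# (parameter, error-score) pairs and return the quadratic's minimum.
def calibrate(xs, ys):
    (x0, x1, x2), (y0, y1, y2) = xs, ys
    f01 = (y1 - y0) / (x1 - x0)          # divided differences
    f12 = (y2 - y1) / (x2 - x1)
    a = (f12 - f01) / (x2 - x0)          # curvature; > 0 for a minimum
    return (x0 + x1) / 2 - f01 / (2 * a)

# Synthetic error scores generated from (p - 1.2)^2 + 0.3, so the
# calibrated optimum should be 1.2.
xs = (0.5, 1.0, 1.5)
ys = tuple((x - 1.2) ** 2 + 0.3 for x in xs)
best = calibrate(xs, ys)
```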
Model predictive control of MSMPR crystallizers
Moldoványi, Nóra; Lakatos, Béla G.; Szeifert, Ferenc
2005-02-01
A multi-input multi-output (MIMO) control problem of isothermal continuous crystallizers is addressed in order to create an adequate model-based control system. The moment equation model of mixed suspension, mixed product removal (MSMPR) crystallizers, which forms a dynamical system, is used; its state is represented by a vector of six variables: the first four leading moments of the crystal size, the solute concentration, and the solvent concentration. Hence, the time evolution of the system occurs in a bounded region of the six-dimensional phase space. The controlled variables are the mean grain size and the crystal size distribution; the manipulated variables are the input concentration of the solute and the flow rate. The controllability and observability, as well as the coupling between the inputs and the outputs, were analyzed by simulation using the linearized model. It is shown that the crystallizer is a nonlinear MIMO system with strong coupling between the state variables. Considering the possibilities of model reduction, a third-order model was found quite adequate for model estimation in model predictive control (MPC). The mean crystal size and the variance of the size distribution can be controlled nearly separately by the residence time and the inlet solute concentration, respectively. By seeding, the controllability of the crystallizer increases significantly, and the overshoots and oscillations become smaller. The results of the control study show that linear MPC is an adaptable and feasible controller for continuous crystallizers.
An Anisotropic Hardening Model for Springback Prediction
Zeng, Danielle; Xia, Z. Cedric
2005-08-01
As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closures panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture realistic Bauschinger effect at reverse loading, such as when material passes through die radii or drawbead during sheet metal forming process. This model accounts for material anisotropic yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent Bauschinger effect. The effectiveness of the model is demonstrated by comparison of numerical and experimental springback results for a DP600 straight U-channel test.
Quantitative modelling in cognitive ergonomics: predicting signals passed at danger
Moray, Neville; Groeger, John; Stanton, Neville
2016-01-01
This paper shows how to combine field observations, experimental data, and mathematical modeling to produce quantitative explanations and predictions of complex events in human-machine interaction. As an example we consider a major railway accident. In 1999 a commuter train passed a red signal near Ladbroke Grove, UK, into the path of an express. We use the Public Inquiry Report, "black box" data, and accident and engineering reports, to construct a case history of the accident. We show how t...
2007-01-01
As predicted by Zhou Baoguo, Director of Lead and Zinc Branch of China Non-ferrous Metals Industry Association, in 2007, Chinese lead and zinc production has maintained the increase rate of 15% and 16% respectively, with lead output reaching the amount of 3.15 million tons and zinc output reaching the
Predictive modelling of ferroelectric tunnel junctions
Velev, Julian P.; Burton, John D.; Zhuravlev, Mikhail Ye; Tsymbal, Evgeny Y.
2016-05-01
Ferroelectric tunnel junctions combine the phenomena of quantum-mechanical tunnelling and switchable spontaneous polarisation of a nanometre-thick ferroelectric film into novel device functionality. Switching the ferroelectric barrier polarisation direction produces a sizable change in resistance of the junction—a phenomenon known as the tunnelling electroresistance effect. From a fundamental perspective, ferroelectric tunnel junctions and their version with ferromagnetic electrodes, i.e., multiferroic tunnel junctions, are testbeds for studying the underlying mechanisms of tunnelling electroresistance as well as the interplay between electric and magnetic degrees of freedom and their effect on transport. From a practical perspective, ferroelectric tunnel junctions hold promise for disruptive device applications. In a very short time, they have traversed the path from basic model predictions to prototypes for novel non-volatile ferroelectric random access memories with non-destructive readout. This remarkable progress is to a large extent driven by a productive cycle of predictive modelling and innovative experimental effort. In this review article, we outline the development of the ferroelectric tunnel junction concept and the role of theoretical modelling in guiding experimental work. We discuss a wide range of physical phenomena that control the functional properties of ferroelectric tunnel junctions and summarise the state-of-the-art achievements in the field.
Simple predictions from multifield inflationary models.
Easther, Richard; Frazer, Jonathan; Peiris, Hiranya V; Price, Layne C
2014-04-25
We explore whether multifield inflationary models make unambiguous predictions for fundamental cosmological observables. Focusing on N-quadratic inflation, we numerically evaluate the full perturbation equations for models with 2, 3, and O(100) fields, using several distinct methods for specifying the initial values of the background fields. All scenarios are highly predictive, with the probability distribution functions of the cosmological observables becoming more sharply peaked as N increases. For N=100 fields, 95% of our Monte Carlo samples fall in the ranges ns∈(0.9455,0.9534), α∈(-9.741,-7.047)×10-4, r∈(0.1445,0.1449), and riso∈(0.02137,3.510)×10-3 for the spectral index, running, tensor-to-scalar ratio, and isocurvature-to-adiabatic ratio, respectively. The expected amplitude of isocurvature perturbations grows with N, raising the possibility that many-field models may be sensitive to postinflationary physics and suggesting new avenues for testing these scenarios.
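The quoted ranges are 95% central intervals over Monte Carlo samples; extracting such an interval from a sample set is a simple percentile computation. The samples below are synthetic draws around an ns-like value, not the paper's simulations.

```python
import random

# Central 95% interval from Monte Carlo samples by sorting and
# trimming 2.5% off each tail.
def central_interval(samples, frac=0.95):
    s = sorted(samples)
    k = int(len(s) * (1 - frac) / 2)
    return s[k], s[-k - 1]

# Synthetic "spectral index" samples (illustrative mean and spread).
random.seed(1)
ns_samples = [random.gauss(0.9495, 0.002) for _ in range(10000)]
lo, hi = central_interval(ns_samples)
```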
Predictions of models for environmental radiological assessment
Peres, Sueli da Silva; Lauria, Dejanira da Costa, E-mail: suelip@ird.gov.br, E-mail: dejanira@irg.gov.br [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Servico de Avaliacao de Impacto Ambiental, Rio de Janeiro, RJ (Brazil); Mahler, Claudio Fernando [Coppe. Instituto Alberto Luiz Coimbra de Pos-Graduacao e Pesquisa de Engenharia, Universidade Federal do Rio de Janeiro (UFRJ) - Programa de Engenharia Civil, RJ (Brazil)
2011-07-01
In the field of environmental impact assessment, models are used for estimating source terms, environmental dispersion and transfer of radionuclides, exposure pathways, radiation dose, and the risk to human beings. Although it is recognized that site-specific local data are important for improving the quality of dose assessment results, obtaining such data can be very difficult and expensive. Sources of uncertainty are numerous, among them the subjectivity of modelers, exposure scenarios and pathways, the codes used, and general parameters. The various models available use different mathematical approaches of differing complexity, which can result in different predictions: for the same inputs, different models can produce very different outputs. This paper briefly presents the main advances in the field of environmental radiological assessment that aim to improve the reliability of the models used in assessing environmental radiological impact. A model intercomparison exercise supplied incompatible results for ¹³⁷Cs and ⁶⁰Co, underlining the need to develop reference methodologies for environmental radiological assessment that allow dose estimates to be confronted on a common basis of comparison. The results of the intercomparison exercise are presented briefly. (author)
Consumer Choice Prediction: Artificial Neural Networks versus Logistic Models
Christopher Gan
2005-01-01
Full Text Available Conventional econometric models, such as discriminant analysis and logistic regression, have been used to predict consumer choice. However, in recent years there has been growing interest in applying artificial neural networks (ANN) to analyse consumer behaviour and to model the consumer decision-making process. The purpose of this paper is to empirically compare the predictive power of the probabilistic neural network (PNN), a special class of neural network, and an MLFN with a logistic model on consumers' choices between electronic banking and non-electronic banking. Data for this analysis were obtained through a mail survey sent to 1,960 New Zealand households. The questionnaire gathered information on the factors consumers use to decide between electronic and non-electronic banking, including service quality dimensions, perceived risk factors, user input factors, price factors, service product characteristics, and individual factors. In addition, demographic variables including age, gender, marital status, ethnic background, educational qualification, employment, income, and area of residence are considered in the analysis. Empirical results show that both ANN models (MLFN and PNN) exhibit a higher overall percentage correct on consumer choice predictions than the logistic model. Furthermore, the PNN proves to be the best predictive model, since it has the highest overall percentage correct and a very low percentage error on both Type I and Type II errors.
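A probabilistic neural network is essentially a Parzen-window classifier: each class's score is an average of Gaussian kernels centered on that class's training samples, and the highest-scoring class wins. A minimal sketch follows; the two-feature "banking" data are invented for illustration, not the survey data.

```python
import math

# Minimal PNN (Parzen-window) classifier: average Gaussian kernels
# per class, pick the class with the larger density estimate.
def pnn_classify(x, classes, sigma=0.5):
    def kernel_sum(samples):
        return sum(math.exp(-sum((a - b) ** 2 for a, b in zip(x, s))
                            / (2 * sigma ** 2))
                   for s in samples) / len(samples)
    scores = {label: kernel_sum(samples)
              for label, samples in classes.items()}
    return max(scores, key=scores.get)

# Hypothetical two-feature training samples per choice.
classes = {
    "e-banking":     [(0.9, 0.8), (0.8, 0.9), (0.95, 0.7)],
    "non-e-banking": [(0.1, 0.2), (0.2, 0.1), (0.15, 0.3)],
}
choice = pnn_classify((0.85, 0.75), classes)
```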
Testing the Predictions of the Universal Structured GRB Jet Model
Nakar, E; Guetta, D; Nakar, Ehud; Granot, Jonathan; Guetta, Dafne
2004-01-01
The two leading models for the structure of GRB jets are (1) the uniform jet model, where the energy per solid angle, $\\epsilon$, is roughly constant within some finite half-opening angle, $\\theta_j$, and sharply drops outside of $\\theta_j$, and (2) the universal structured jet (USJ) model, where all GRB jets are intrinsically identical, and $\\epsilon$ drops as the inverse square of the angle from the jet axis. The simplicity of the USJ model gives it a strong predictive power, including a specific prediction for the observed GRB distribution as a function of both the redshift $z$ and the viewing angle $\\theta$ from the jet axis. We show that the current sample of GRBs with known $z$ and estimated $\\theta$ does not agree with the predictions of the USJ model. This can be best seen for a relatively narrow range in $z$, in which the USJ model predicts that most GRBs should be near the upper end of the observed range in $\\theta$, while in the observed sample most GRBs are near the lower end of that range. Since ...
A Modified Model Predictive Control Scheme
Xiao-Bing Hu; Wen-Hua Chen
2005-01-01
In implementations of MPC (Model Predictive Control) schemes, two issues need to be addressed. One is how to enlarge the stability region as much as possible. The other is how to guarantee stability when a computational time limitation exists. In this paper, a modified MPC scheme for constrained linear systems is described. An offline LMI-based iteration process is introduced to expand the stability region. At the same time, a database of feasible control sequences is generated offline so that stability can still be guaranteed in the case of computational time limitations. Simulation results illustrate the effectiveness of this new approach.
Hierarchical Model Predictive Control for Resource Distribution
Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob
2010-01-01
This paper deals with hierarchical model predictive control (MPC) of distributed systems. A three-level hierarchical approach is proposed, consisting of a high-level MPC controller, a second level of so-called aggregators, controlled by an online MPC-like algorithm, and a lower level of autonomous...... facilitates plug-and-play addition of subsystems without redesign of any controllers. The method is supported by a number of simulations featuring a three-level smart-grid power control system for a small isolated power grid....
Explicit model predictive control accuracy analysis
Knyazev, Andrew; Zhu, Peizhen; Di Cairano, Stefano
2015-01-01
Model Predictive Control (MPC) can efficiently control constrained systems in real-time applications. The MPC feedback law for a linear system with linear inequality constraints can be explicitly computed off-line, which results in an off-line partition of the state space into non-overlapping convex regions, with affine control laws associated with each region of the partition. An actual implementation of this explicit MPC in low-cost micro-controllers requires the data to be "quantized", i.e. repre...
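The online part of explicit MPC reduces to point location over the precomputed partition followed by one affine evaluation. A 1-D toy sketch of that lookup (the regions and gains are invented, not from any real controller):

```python
# Explicit MPC online step: find the region {x : Hx <= k} containing
# the state, then apply that region's affine law u = Fx + g.
def explicit_mpc(regions, x):
    for H, k, F, g in regions:
        if all(sum(h * xi for h, xi in zip(row, x)) <= kk
               for row, kk in zip(H, k)):
            return [sum(f * xi for f, xi in zip(row, x)) + gg
                    for row, gg in zip(F, g)]
    raise ValueError("state outside the explicit partition")

# Two 1-D regions (overlap only on the shared boundary x = 0,
# where the first match wins):
#   x <= 0         -> u = -0.5 x
#   0 <= x <= 5    -> u = -2 x + 1
regions = [
    (([1.0],), (0.0,), ([-0.5],), (0.0,)),
    (([-1.0], [1.0]), (0.0, 5.0), ([-2.0],), (1.0,)),
]
u = explicit_mpc(regions, [2.0])
```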
Predicting functional brain ROIs via fiber shape models.
Zhang, Tuo; Guo, Lei; Li, Kaiming; Zhu, Dajing; Cui, Guangbin; Liu, Tianming
2011-01-01
Study of structural and functional connectivities of the human brain has received significant interest and effort recently. A fundamental question arises when attempting to measure the structural and/or functional connectivities of specific brain networks: how to best identify possible Regions of Interests (ROIs)? In this paper, we present a novel ROI prediction framework that localizes ROIs in individual brains based on learned fiber shape models from multimodal task-based fMRI and diffusion tensor imaging (DTI) data. In the training stage, ROIs are identified as activation peaks in task-based fMRI data. Then, shape models of white matter fibers emanating from these functional ROIs are learned. In addition, ROIs' location distribution model is learned to be used as an anatomical constraint. In the prediction stage, functional ROIs are predicted in individual brains based on DTI data. The ROI prediction is formulated and solved as an energy minimization problem, in which the two learned models are used as energy terms. Our experiment results show that the average ROI prediction error is 3.45 mm, in comparison with the benchmark data provided by working memory task-based fMRI. Promising results were also obtained on the ADNI-2 longitudinal DTI dataset.
Charge transport model to predict intrinsic reliability for dielectric materials
Ogden, Sean P. [Howard P. Isermann Department of Chemical and Biological Engineering, Rensselaer Polytechnic Institute, Troy, New York 12180 (United States); GLOBALFOUNDRIES, 400 Stonebreak Rd. Ext., Malta, New York 12020 (United States); Borja, Juan; Plawsky, Joel L., E-mail: plawsky@rpi.edu; Gill, William N. [Howard P. Isermann Department of Chemical and Biological Engineering, Rensselaer Polytechnic Institute, Troy, New York 12180 (United States); Lu, T.-M. [Department of Physics, Rensselaer Polytechnic Institute, Troy, New York 12180 (United States); Yeap, Kong Boon [GLOBALFOUNDRIES, 400 Stonebreak Rd. Ext., Malta, New York 12020 (United States)
2015-09-28
Several lifetime models, mostly empirical in nature, are used to predict reliability for low-k dielectrics used in integrated circuits. There is a dispute over which model provides the most accurate prediction for device lifetime at operating conditions. As a result, there is a need to transition from the use of these largely empirical models to one built entirely on theory. Therefore, a charge transport model was developed to predict the device lifetime of low-k interconnect systems. The model is based on electron transport and donor-type defect formation. Breakdown occurs when a critical defect concentration accumulates, resulting in electron tunneling and the emptying of positively charged traps. The enhanced local electric field lowers the barrier for electron injection into the dielectric, causing a positive feedforward failure. The charge transport model is able to replicate experimental I-V and I-t curves, capturing the current decay at early stress times and the rapid current increase at failure. The model is based on field-driven and current-driven failure mechanisms and uses a minimal number of parameters. All the parameters have some theoretical basis or have been measured experimentally and are not directly used to fit the slope of the time-to-failure versus applied field curve. Despite this simplicity, the model is able to accurately predict device lifetime for three different sources of experimental data. The simulation's predictions at low fields and very long lifetimes show that the use of a single empirical model can lead to inaccuracies in device reliability.
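The qualitative failure picture, defects accumulating at a field-driven rate until a critical concentration triggers breakdown, can be caricatured with a simple rate law. The exponential field dependence and all constants below are illustrative assumptions, not the paper's fitted transport model.

```python
import math

# Caricature of field-driven dielectric breakdown: defects form at a
# rate that grows exponentially with the applied field E, and failure
# occurs when a critical defect concentration n_crit accumulates.
# k0, gamma, and n_crit are made-up illustrative constants.
def time_to_failure(E, k0=1e18, gamma=2.0, n_crit=1e21):
    rate = k0 * math.exp(gamma * E)   # defects per cm^3 per second
    return n_crit / rate              # seconds to reach n_crit

# Higher stress field -> much shorter lifetime.
t_low, t_high = time_to_failure(1.0), time_to_failure(4.0)
```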
In silico modeling to predict drug-induced phospholipidosis
Choi, Sydney S.; Kim, Jae S.; Valerio, Luis G., E-mail: luis.valerio@fda.hhs.gov; Sadrieh, Nakissa
2013-06-01
Drug-induced phospholipidosis (DIPL) is a preclinical finding during pharmaceutical drug development that has implications on the course of drug development and regulatory safety review. A principal characteristic of drugs inducing DIPL is known to be a cationic amphiphilic structure. This provides evidence for a structure-based explanation and an opportunity to analyze properties and structures of drugs with the histopathologic findings for DIPL. In previous work from the FDA, in silico quantitative structure–activity relationship (QSAR) modeling using machine learning approaches has shown promise with a large dataset of drugs, but included unconfirmed data as well. In this study, we report the construction and validation of a battery of complementary in silico QSAR models using the FDA's updated database on phospholipidosis, new algorithms, and predictive technologies; in particular, we address high performance with a high-confidence dataset. The results of our modeling for DIPL include rigorous external validation tests showing 80–81% concordance. Furthermore, the predictive performance characteristics include models with high sensitivity and specificity, in most cases ≥ 80%, leading to the desired high negative and positive predictivity. These models are intended to be utilized for regulatory toxicology applied science needs in screening new drugs for DIPL. - Highlights: • New in silico models for predicting drug-induced phospholipidosis (DIPL) are described. • The training set data in the models is derived from the FDA's phospholipidosis database. • We find excellent predictivity values of the models based on external validation. • The models can support drug screening and regulatory decision-making on DIPL.
Critical conceptualism in environmental modeling and prediction.
Christakos, G
2003-10-15
Many important problems in environmental science and engineering are of a conceptual nature. Research and development, however, often becomes so preoccupied with technical issues, which are themselves fascinating, that it neglects essential methodological elements of conceptual reasoning and theoretical inquiry. This work suggests that valuable insight into environmental modeling can be gained by means of critical conceptualism which focuses on the software of human reason and, in practical terms, leads to a powerful methodological framework of space-time modeling and prediction. A knowledge synthesis system develops the rational means for the epistemic integration of various physical knowledge bases relevant to the natural system of interest in order to obtain a realistic representation of the system, provide a rigorous assessment of the uncertainty sources, generate meaningful predictions of environmental processes in space-time, and produce science-based decisions. No restriction is imposed on the shape of the distribution model or the form of the predictor (non-Gaussian distributions, multiple-point statistics, and nonlinear models are automatically incorporated). The scientific reasoning structure underlying knowledge synthesis involves teleologic criteria and stochastic logic principles which have important advantages over the reasoning method of conventional space-time techniques. Insight is gained in terms of real world applications, including the following: the study of global ozone patterns in the atmosphere using data sets generated by instruments on board the Nimbus 7 satellite and secondary information in terms of total ozone-tropopause pressure models; the mapping of arsenic concentrations in the Bangladesh drinking water by assimilating hard and soft data from an extensive network of monitoring wells; and the dynamic imaging of probability distributions of pollutants across the Kalamazoo river.
Predictive Capability Maturity Model for computational modeling and simulation.
Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.
2007-10-01
The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies specified application requirements.
Paulsen, Jane S.; Long, Jeffrey D.; Johnson, Hans J.; Aylward, Elizabeth H.; Ross, Christopher A.; Williams, Janet K.; Nance, Martha A.; Erwin, Cheryl J.; Westervelt, Holly J.; Harrington, Deborah L.; Bockholt, H. Jeremy; Zhang, Ying; McCusker, Elizabeth A.; Chiu, Edmond M.; Panegyres, Peter K.
2014-01-01
There is growing consensus that intervention and treatment of Huntington disease (HD) should occur at the earliest stage possible. Various early-intervention methods for this fatal neurodegenerative disease have been identified, but preventive clinical trials for HD are limited by a lack of knowledge of the natural history of the disease and a dearth of appropriate outcome measures. Objectives of the current study are to document the natural history of premanifest HD progression in the largest cohort ever studied and to develop a battery of imaging and clinical markers of premanifest HD progression that can be used as outcome measures in preventive clinical trials. Neurobiological predictors of Huntington’s disease is a 32-site, international, observational study of premanifest HD, with annual examination of 1013 participants with premanifest HD and 301 gene-expansion negative controls between 2001 and 2012. Findings document 39 variables representing imaging, motor, cognitive, functional, and psychiatric domains, showing different rates of decline between premanifest HD and controls. Required sample size and models of premanifest HD are presented to inform future design of clinical and preclinical research. Preventive clinical trials in premanifest HD with participants who have a medium or high probability of motor onset are calculated to be as resource-effective as those conducted in diagnosed HD and could interrupt disease 7–12 years earlier. Methods and measures for preventive clinical trials in premanifest HD more than a dozen years from motor onset are also feasible. These findings represent the most thorough documentation of a clinical battery for experimental therapeutics in stages of premanifest HD, the time period for which effective intervention may provide the most positive possible outcome for patients and their families affected by this devastating disease. PMID:24795630
Boolean Network Model Predicts Knockout Mutant Phenotypes of Fission Yeast
Davidich, Maria I.; Bornholdt, Stefan
2013-01-01
Boolean networks (or: networks of switches) are extremely simple mathematical models of biochemical signaling networks. Under certain circumstances, Boolean networks, despite their simplicity, are capable of predicting dynamical activation patterns of gene regulatory networks in living cells. For example, the temporal sequence of cell cycle activation patterns in yeasts S. pombe and S. cerevisiae are faithfully reproduced by Boolean network models. An interesting question is whether this simple model class could also predict a more complex cellular phenomenology as, for example, the cell cycle dynamics under various knockout mutants instead of the wild type dynamics, only. Here we show that a Boolean network model for the cell cycle control network of yeast S. pombe correctly predicts viability of a large number of known mutants. So far this had been left to the more detailed differential equation models of the biochemical kinetics of the yeast cell cycle network and was commonly thought to be out of reach for models as simplistic as Boolean networks. The new results support our vision that Boolean networks may complement other mathematical models in systems biology to a larger extent than expected so far, and may fill a gap where simplicity of the model and a preference for an overall dynamical blueprint of cellular regulation, instead of biochemical details, are in the focus. PMID:24069138
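The synchronous-update scheme behind such models is easy to state in code. The sketch below uses a toy three-node network with invented wiring, not the published S. pombe cell-cycle network; it only illustrates the threshold-update rule and how a trajectory settles into a fixed point.

```python
def step(state, weights, thresholds):
    """Synchronous update: node i switches on iff its weighted input exceeds
    its threshold, off if below, and keeps its value on a tie."""
    n = len(state)
    new = []
    for i in range(n):
        total = sum(weights[j][i] * state[j] for j in range(n))
        if total > thresholds[i]:
            new.append(1)
        elif total < thresholds[i]:
            new.append(0)
        else:
            new.append(state[i])
    return tuple(new)

def trajectory(state, weights, thresholds, max_steps=50):
    """Iterate the update until a previously seen state (a fixed point or
    the entry to a cycle) recurs."""
    seen = []
    while state not in seen and len(seen) < max_steps:
        seen.append(state)
        state = step(state, weights, thresholds)
    return seen + [state]

# Invented 3-node wiring: node 0 activates 1, 1 activates 2, 2 inhibits 0.
W = [[0, 1, 0],
     [0, 0, 1],
     [-1, 0, 0]]
theta = [0, 0, 0]
path = trajectory((1, 0, 0), W, theta)
```

A knockout mutant is simulated by pinning the corresponding node to 0 before every update and re-running the trajectory; the viability prediction then reduces to checking whether the mutant network still reaches the biologically correct fixed point.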
A Predictive Maintenance Model for Railway Tracks
Li, Rui; Wen, Min; Salling, Kim Bang
2015-01-01
For modern railways, maintenance is critical for ensuring safety, train punctuality and overall capacity utilization. The cost of railway maintenance in Europe is high, on average between 30,000 and 100,000 Euro per km per year [1]. Aiming to reduce such maintenance expenditure, this paper presents a mathematical model based on Mixed Integer Programming (MIP) which is designed to optimize the predictive railway tamping activities for ballasted track for a time horizon of up to four years. The objective function is set up to minimize the actual costs for the tamping machine (measured by time…) … recovery of the track quality after the tamping operation and (5) tamping machine operation factors. A Danish railway track between Odense and Fredericia, 57.2 km in length, is used over a time period of two to four years in the proposed maintenance model. The total cost can be reduced by up to 50…
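The trade-off the MIP encodes — pay for tamping now or risk a track-quality violation later — can be illustrated with a brute-force search over a toy horizon. All numbers below (degradation rate, recovery fraction, costs) are invented; a real instance would hand the formulation to a MIP solver rather than enumerate schedules.

```python
from itertools import product

def plan_cost(decisions, q0=1.0, degrade=0.2, recover=0.8,
              q_min=0.3, tamp_cost=1.0, penalty=100.0):
    """Cost of one tamping schedule: pay tamp_cost per tamping, plus a large
    penalty whenever track quality q falls below q_min. All rates invented."""
    q, cost = q0, 0.0
    for tamp in decisions:
        if tamp:
            q = min(1.0, q + recover * (1.0 - q))  # partial quality recovery
            cost += tamp_cost
        q -= degrade                               # per-period degradation
        if q < q_min:
            cost += penalty
    return cost

# Enumerate all 0/1 tamping schedules over a 6-period horizon.
horizon = 6
best = min(product([0, 1], repeat=horizon), key=plan_cost)
```

For these invented rates the optimum tamps twice over the horizon; the MIP in the paper generalizes this with machine travel times and track-section specifics.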
Unascertained measurement classifying model of goaf collapse prediction
DONG Long-jun; PENG Gang-jian; FU Yu-hua; BAI Yun-fei; LIU You-fang
2008-01-01
Based on an optimized forecast method for unascertained classification, an unascertained measurement classification (UMC) model to predict mining-induced goaf collapse was established. The discriminating factors of the model are influential factors including overburden layer type, overburden layer thickness, the complexity of the geologic structure, the inclination angle of the coal bed, the volume rate of the cavity region, the vertical goaf depth from the surface and the spatial superposition of layers in the goaf region. The unascertained measurement (UM) function of each factor was calculated. The classification center and the grade of each sample awaiting forecast were determined from the UM distance between the sample's synthesis index and the index of each classification. The training samples were tested with the established model, and the correct rate is 100%. Furthermore, seven samples awaiting forecast were predicted with the UMC model. The results show that the forecast results are fully consistent with the actual situation.
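Stripped of domain detail, the classification step the abstract describes is: score each factor, combine the scores with factor weights into a synthesis index, and assign the grade whose classification index is nearest. The weights, scores and class indices below are invented for illustration; the paper's actual UM functions are not reproduced here.

```python
def synthesis_index(scores, weights):
    """Weighted combination of single-factor measurement scores."""
    return sum(w * s for w, s in zip(weights, scores))

def classify(scores, weights, class_indices):
    """Assign the grade whose classification index is nearest to the
    sample's synthesis index."""
    idx = synthesis_index(scores, weights)
    return min(class_indices, key=lambda g: abs(class_indices[g] - idx))

# Invented factor weights (overburden type, thickness, structure complexity,
# inclination angle, cavity volume rate) and invented class centers.
weights = [0.3, 0.25, 0.2, 0.15, 0.1]
class_indices = {"stable": 0.2, "moderate": 0.5, "collapse-prone": 0.8}
sample = [0.9, 0.7, 0.8, 0.6, 0.9]   # high-risk scores on every factor
grade = classify(sample, weights, class_indices)
```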
A predictive fitness model for influenza
Łuksza, Marta; Lässig, Michael
2014-03-01
The seasonal human influenza A/H3N2 virus undergoes rapid evolution, which produces significant year-to-year sequence turnover in the population of circulating strains. Adaptive mutations respond to human immune challenge and occur primarily in antigenic epitopes, the antibody-binding domains of the viral surface protein haemagglutinin. Here we develop a fitness model for haemagglutinin that predicts the evolution of the viral population from one year to the next. Two factors are shown to determine the fitness of a strain: adaptive epitope changes and deleterious mutations outside the epitopes. We infer both fitness components for the strains circulating in a given year, using population-genetic data of all previous strains. From fitness and frequency of each strain, we predict the frequency of its descendent strains in the following year. This fitness model maps the adaptive history of influenza A and suggests a principled method for vaccine selection. Our results call for a more comprehensive epidemiology of influenza and other fast-evolving pathogens that integrates antigenic phenotypes with other viral functions coupled by genetic linkage.
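The core propagation rule of such fitness models can be sketched directly: next season's strain frequencies are the current ones reweighted by exponential fitness. The two-component fitness split mirrors the abstract (adaptive epitope gain plus deleterious load outside the epitopes), but the selection coefficients here are invented.

```python
import math

def strain_fitness(epitope_changes, non_epitope_changes, s_epi=0.5, s_del=-0.3):
    """Two-component fitness: adaptive epitope changes minus deleterious
    load outside the epitopes. Coefficients are illustrative assumptions."""
    return s_epi * epitope_changes + s_del * non_epitope_changes

def predict_frequencies(freqs, fitnesses):
    """Propagate strain frequencies one season: x_i(t+1) ∝ x_i(t) * exp(f_i)."""
    weights = [x * math.exp(f) for x, f in zip(freqs, fitnesses)]
    total = sum(weights)
    return [w / total for w in weights]

# A strain three epitope mutations ahead outcompetes a static competitor.
freqs = predict_frequencies([0.5, 0.5], [strain_fitness(0, 0), strain_fitness(3, 1)])
```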
Predictive Model of Radiative Neutrino Masses
Babu, K S
2013-01-01
We present a simple and predictive model of radiative neutrino masses. It is a special case of the Zee model which introduces two Higgs doublets and a charged singlet. We impose a family-dependent Z_4 symmetry acting on the leptons, which reduces the number of parameters describing neutrino oscillations to four. A variety of predictions follow: The hierarchy of neutrino masses must be inverted; the lightest neutrino mass is extremely small and calculable; one of the neutrino mixing angles is determined in terms of the other two; the phase parameters take CP-conserving values with \\delta_{CP} = \\pi; and the effective mass in neutrinoless double beta decay lies in a narrow range, m_{\\beta \\beta} = (17.6 - 18.5) meV. The ratio of vacuum expectation values of the two Higgs doublets, tan\\beta, is determined to be either 1.9 or 0.19 from neutrino oscillation data. Flavor-conserving and flavor-changing couplings of the Higgs doublets are also determined from neutrino data. The non-standard neutral Higgs bosons, if t...
A predictive model for dimensional errors in fused deposition modeling
Stolfi, A.
2015-01-01
This work concerns the effect of deposition angle (a) and layer thickness (L) on the dimensional performance of FDM parts using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole a range from 0° to 177° at 3° steps and two...
Jørgensen, John Bagterp; Jørgensen, Sten Bay
2007-01-01
A prediction-error method tailored for model-based predictive control is presented. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model. The linear discrete-time stochastic state space model is realized from a continuous-discrete-time linear stochastic system specified using transfer functions with time-delays. It is argued that the prediction-error criterion should be selected such that it is compatible with the objective function of the predictive controller in which the model…
Neural Network Based Model for Predicting Housing Market Performance
Ahmed Khalafallah
2008-01-01
The United States real estate market is currently facing its worst hit in two decades due to the slowdown of housing sales. The most affected by this decline are real estate investors and home developers who are currently struggling to break even financially on their investments. For these investors, it is of utmost importance to evaluate the current status of the market and predict its performance over the short-term in order to make appropriate financial decisions. This paper presents the development of artificial neural network based models to support real estate investors and home developers in this critical task. The paper describes the decision variables, design methodology, and the implementation of these models. The models utilize historical market performance data sets to train the artificial neural networks in order to predict unforeseen future performances. An application example is analyzed to demonstrate the model capabilities in analyzing and predicting the market performance. The model testing and validation showed that the error in prediction is in the range between -2% and +2%.
Neural Network Modeling to Predict Shelf Life of Greenhouse Lettuce
Wei-Chin Lin
2009-04-01
Greenhouse-grown butter lettuce (Lactuca sativa L.) can potentially be stored for 21 days at a constant 0°C. When storage temperature was increased to 5°C or 10°C, shelf life was shortened to 14 or 10 days, respectively, in our previous observations. Also, a commercial shelf life of 7 to 10 days is common, due to postharvest temperature fluctuations. The objective of this study was to establish neural network (NN) models to predict the remaining shelf life (RSL) under fluctuating postharvest temperatures. A box of 12-24 lettuce heads constituted a sample unit. The end of the shelf life of each head was determined when it showed initial signs of decay or yellowing. Air temperatures inside a shipping box were recorded. Daily average temperatures in storage and the averaged shelf life of each box were used as inputs, and the RSL was modeled as an output. An R² of 0.57 could be observed when a simple NN structure was employed. Since the "future" (or remaining) storage temperatures were unavailable at the time of making a prediction, a second NN model was introduced to accommodate a range of future temperatures and associated shelf lives. Using such 2-stage NN models, an R² of 0.61 could be achieved for predicting RSL. This study indicated that NN modeling has potential for cold chain quality control and shelf life prediction.
Ensemble ecosystem modeling for predicting ecosystem response to predator reintroduction.
Baker, Christopher M; Gordon, Ascelin; Bode, Michael
2017-04-01
Introducing a new or extirpated species to an ecosystem is risky, and managers need quantitative methods that can predict the consequences for the recipient ecosystem. Proponents of keystone predator reintroductions commonly argue that the presence of the predator will restore ecosystem function, but this has not always been the case, and mathematical modeling has an important role to play in predicting how reintroductions will likely play out. We devised an ensemble modeling method that integrates species interaction networks and dynamic community simulations and used it to describe the range of plausible consequences of 2 keystone-predator reintroductions: wolves (Canis lupus) to Yellowstone National Park and dingoes (Canis dingo) to a national park in Australia. Although previous methods for predicting ecosystem responses to such interventions focused on predicting changes around a given equilibrium, we used Lotka-Volterra equations to predict changing abundances through time. We applied our method to interaction networks for wolves in Yellowstone National Park and for dingoes in Australia. Our model replicated the observed dynamics in Yellowstone National Park and produced a larger range of potential outcomes for the dingo network. However, we also found that changes in small vertebrates or invertebrates gave a good indication about the potential future state of the system. Our method allowed us to predict when the systems were far from equilibrium. Our results showed that the method can also be used to predict which species may increase or decrease following a reintroduction and can identify species that are important to monitor (i.e., species whose changes in abundance give extra insight into broad changes in the system). Ensemble ecosystem modeling can also be applied to assess the ecosystem-wide implications of other types of interventions including assisted migration, biocontrol, and invasive species eradication. © 2016 Society for Conservation Biology.
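A minimal version of the ensemble idea: fix the sign structure of the interaction network, sample interaction strengths repeatedly, and simulate generalized Lotka-Volterra dynamics for each draw. The two-species network, growth rates and sampling ranges below are illustrative stand-ins, not the Yellowstone or dingo networks.

```python
import random

random.seed(1)

def lv_step(x, r, A, dt=0.01):
    """One Euler step of generalized Lotka-Volterra:
    dx_i/dt = x_i * (r_i + sum_j A[i][j] * x_j); abundances clamped at zero."""
    return [max(0.0, xi + dt * xi * (ri + sum(a * xj for a, xj in zip(row, x))))
            for xi, ri, row in zip(x, r, A)]

def simulate(x0, r, A, steps=2000):
    x = list(x0)
    for _ in range(steps):
        x = lv_step(x, r, A)
    return x

def ensemble(x0, signs, r, n_models=50):
    """Sample interaction strengths consistent with a signed network and
    return the final state of each sampled model."""
    outcomes = []
    for _ in range(n_models):
        A = [[s * random.uniform(0.1, 1.0) if s else 0.0 for s in row]
             for row in signs]
        outcomes.append(simulate(x0, r, A))
    return outcomes

# Toy predator-prey network: prey self-limits and is eaten; predator benefits.
signs = [[-1, -1],   # prey row: self-limitation, predation loss
         [+1, 0]]    # predator row: gain from prey, no self-term
outcomes = ensemble([0.5, 0.5], signs, r=[1.0, -0.5], n_models=20)
```

The spread of final states across the ensemble is what gives the "range of plausible consequences" the abstract describes.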
Hybrid multiscale modeling and prediction of cancer cell behavior.
Zangooei, Mohammad Hossein; Habibi, Jafar
2017-01-01
Understanding cancer development across several spatial-temporal scales is of great practical significance for better understanding and treating cancers. It is difficult to tackle this challenge with purely biological means. Hybrid modeling techniques have therefore been proposed that combine the advantages of continuum and discrete methods to model multiscale problems. In light of these problems, we have proposed a new hybrid vascular model to facilitate the multiscale modeling and simulation of cancer development, built on agent-based, cellular automaton and machine learning methods. The purpose of this simulation is to create a dataset that can be used for prediction of cell phenotypes. By using a proposed Q-learning method based on SVR and NSGA-II, the cells have the capability to predict their phenotypes autonomously, that is, to act on their own without external direction in response to situations they encounter. Computational simulations of the model were performed in order to analyze its performance. The most striking feature of our results is that each cell can select its phenotype at each time step according to its condition. We provide evidence that the prediction of cell phenotypes is reliable. Our proposed model, which we term a hybrid multiscale model of cancer cell behavior, has the potential to combine the best features of both continuum and discrete models. The in silico results indicate that the 3D model can represent key features of cancer growth, angiogenesis, and the related micro-environment, and show that the findings are in good agreement with biological tumor behavior. To the best of our knowledge, this paper is the first hybrid vascular multiscale model of cancer cell behavior with the capability to predict cell phenotypes individually from a self-generated dataset.
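The phenotype-selection loop can be illustrated with plain tabular Q-learning in place of the paper's SVR/NSGA-II machinery. The two oxygen states, two phenotypes and reward values are invented; the point is only the update rule by which a "cell" learns a state-dependent phenotype choice.

```python
import random

random.seed(0)
states = ["low_oxygen", "high_oxygen"]
actions = ["quiescent", "proliferative"]
# Invented rewards: proliferating pays off only when oxygen is plentiful.
reward = {("low_oxygen", "proliferative"): -1.0,
          ("low_oxygen", "quiescent"): 0.5,
          ("high_oxygen", "proliferative"): 1.0,
          ("high_oxygen", "quiescent"): 0.1}

Q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration
state = random.choice(states)
for _ in range(2000):
    if random.random() < eps:
        a = random.choice(actions)                      # explore
    else:
        a = max(actions, key=lambda x: Q[(state, x)])   # exploit
    nxt = random.choice(states)   # environment flips oxygen level at random
    target = reward[(state, a)] + gamma * max(Q[(nxt, b)] for b in actions)
    Q[(state, a)] += alpha * (target - Q[(state, a)])
    state = nxt

# The learned policy: the best-valued phenotype in each micro-environment.
policy = {s: max(actions, key=lambda a: Q[(s, a)]) for s in states}
```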
Methods for Handling Missing Variables in Risk Prediction Models
Held, Ulrike; Kessels, Alfons; Aymerich, Judith Garcia; Basagana, Xavier; ter Riet, Gerben; Moons, Karel G. M.; Puhan, Milo A.
2016-01-01
Prediction models should be externally validated before being used in clinical practice. Many published prediction models have never been validated. Uncollected predictor variables in otherwise suitable validation cohorts are the main factor precluding external validation. We used individual patient…
Should we believe model predictions of future climate change? (Invited)
Knutti, R.
2009-12-01
As computers get faster and our understanding of the climate system improves, climate models to predict the future are getting more complex by including more and more processes, and they are run at higher and higher resolution to resolve more of the small scale processes. As a result, some of the simulated features and structures, e.g. ocean eddies or tropical cyclones look surprisingly real. But are these deceptive? A pattern can look perfectly real but be in the wrong place. So can the current global models really provide the kind of information on local scales and on the quantities (e.g. extreme events) that the decision maker would need to know to invest for example in adaptation? A closer look indicates that evaluating skill of climate models and quantifying uncertainties in predictions is very difficult. This presentation shows that while models are improving in simulating the climate features we observe (e.g. the present day mean state, or the El Nino Southern Oscillation), the spread from multiple models in predicting future changes is often not decreasing. The main problem is that (unlike with weather forecasts for example) we cannot evaluate the model on a prediction (for example for the year 2100) and we have to use the present, or past changes as metrics of skills. But there are infinite ways of testing a model, and many metrics used to test models do not clearly relate to the prediction. Therefore there is little agreement in the community on metrics to separate ‘good’ and ‘bad’ models, and there is a concern that model development, evaluation and posterior weighting or ranking of models are all using the same datasets. While models are continuously improving in representing what we believe to be the key processes, many models also share ideas, parameterizations or even pieces of model code. The current models can therefore not be considered independent. Robustness of a model simulated result is often interpreted as increasing the confidence
A COMPACT MODEL FOR PREDICTING ROAD TRAFFIC NOISE
Golmohammadi, R.; Abbaspour, M.; Nassiri, P.; Mahjub, H.
2009-07-01
Noise is one of the most important sources of pollution in metropolitan areas. The recognition of road traffic noise as one of the main sources of environmental pollution has led to models that enable us to predict noise level from fundamental variables. Traffic noise prediction models are required as aids in the design of roads and sometimes in the assessment of existing, or envisaged changes in, traffic noise conditions. The purpose of this study was to design a road traffic noise prediction model from traffic variables and conditions of transportation in Iran. This paper is the result of research conducted in the city of Hamadan with the ultimate objective of setting up a traffic noise model based on the traffic conditions of Iranian cities. Noise levels and other variables were measured in 282 samples to develop a statistical regression model based on the A-weighted equivalent noise level for Iranian road conditions. The results revealed that the average LAeq in all stations was 69.04 ± 4.25 dB(A), the average speed of vehicles was 44.57 ± 11.46 km/h and the average traffic load was 1231.9 ± 910.2 V/h. The developed model has seven explanatory entrance variables in order to achieve a high regression coefficient (R² = 0.901). Comparing means of predicted and measured equivalent sound pressure level (LAeq) showed small differences of less than -0.42 dB(A) and -0.77 dB(A) for the cities of Tehran and Hamadan, respectively. The suggested road traffic noise model can be effectively used as a decision support tool for predicting the equivalent sound pressure level index in the cities of Iran.
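A regression model of this general shape can be fitted by ordinary least squares. The sketch below recovers known coefficients from noise-free synthetic stations for a hypothetical model LAeq = b0 + b1·log10(Q) + b2·V (Q traffic flow, V mean speed); neither the paper's seven-variable functional form nor its fitted coefficients are reproduced.

```python
import math

def fit_ols(X, y):
    """Solve the normal equations (X^T X) b = X^T y by Gaussian elimination
    with partial pivoting."""
    n = len(X[0])
    A = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(n)]
         for i in range(n)]
    b = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(n)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))   # pivot row
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    coef = [0.0] * n
    for i in reversed(range(n)):
        coef[i] = (b[i] - sum(A[i][c] * coef[c] for c in range(i + 1, n))) / A[i][i]
    return coef

# Synthetic stations generated from known coefficients; OLS should recover them.
true = [42.0, 10.0, 0.15]
rows = [(q, v) for q in (200, 600, 1200, 2000) for v in (30, 45, 60)]
X = [[1.0, math.log10(q), float(v)] for q, v in rows]
y = [true[0] + true[1] * x[1] + true[2] * x[2] for x in X]
coef = fit_ols(X, y)
```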
Shin, Woong-Hee; Kang, Xuejiao; Zhang, Jian; Kihara, Daisuke
2017-01-01
Protein tertiary structure prediction methods have matured in recent years. However, some proteins defy accurate prediction due to factors such as inadequate template structures. While existing model quality assessment methods predict global model quality relatively well, there is substantial room for improvement in local quality assessment, i.e. assessment of the error at each residue position in a model. Local quality is a very important information for practical applications of structure models such as interpreting/designing site-directed mutagenesis of proteins. We have developed a novel local quality assessment method for protein tertiary structure models. The method, named Graph-based Model Quality assessment method (GMQ), explicitly considers the predicted quality of spatially neighboring residues using a graph representation of a query protein structure model. GMQ uses conditional random field as its core of the algorithm, and performs a binary prediction of the quality of each residue in a model, indicating if a residue position is likely to be within an error cutoff or not. The accuracy of GMQ was improved by considering larger graphs to include quality information of more surrounding residues. Moreover, we found that using different edge weights in graphs reflecting different secondary structures further improves the accuracy. GMQ showed competitive performance on a benchmark for quality assessment of structure models from the Critical Assessment of Techniques for Protein Structure Prediction (CASP). PMID:28074879
Scaling predictive modeling in drug development with cloud computing.
Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola
2015-01-26
Growing data sets with increased time for analysis is hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compare with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investments makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.
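The workload pattern the study exploits is embarrassingly parallel: independent model-building jobs mapped over workers. The stub below uses local threads to stand in for cloud instances; `train` is a placeholder, not part of any real ligand-modeling API.

```python
from concurrent.futures import ThreadPoolExecutor

def train(chunk):
    """Placeholder for fitting one model on one data chunk (e.g. one end
    point or one cross-validation fold); returns a mock 'score'."""
    return sum(chunk) / len(chunk)

def build_models(chunks, workers=4):
    """Farm independent model-building jobs out to a worker pool; with cloud
    resources the pool would be remote instances instead of local threads."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(train, chunks))

scores = build_models([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
```

Because the jobs are independent, the cost/speed trade-off the abstract mentions reduces to choosing how many workers to rent for how long.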
Xu, Meiyu; Li, Lina; Ohtsu, Hiroshi; Pittenger, Christopher
2015-05-19
Tics, such as are seen in Tourette syndrome (TS), are common and can cause profound morbidity, but they are poorly understood. Tics are potentiated by psychostimulants, stress, and sleep deprivation. Mutations in the gene histidine decarboxylase (Hdc) have been implicated as a rare genetic cause of TS, and Hdc knockout mice have been validated as a genetic model that recapitulates phenomenological and pathophysiological aspects of the disorder. Tic-like stereotypies in this model have not been observed at baseline but emerge after acute challenge with the psychostimulant d-amphetamine. We tested the ability of an acute stressor to stimulate stereotypies in this model, using tone fear conditioning. Hdc knockout mice acquired conditioned fear normally, as manifested by freezing during the presentation of a tone 48h after it had been paired with a shock. During the 30min following tone presentation, knockout mice showed increased grooming. Heterozygotes exhibited normal freezing and intermediate grooming. These data validate a new paradigm for the examination of tic-like stereotypies in animals without pharmacological challenge and enhance the face validity of the Hdc knockout mouse as a pathophysiologically grounded model of tic disorders.
Small GSK-3 Inhibitor Shows Efficacy in a Motor Neuron Disease Murine Model Modulating Autophagy.
de Munck, Estefanía; Palomo, Valle; Muñoz-Sáez, Emma; Perez, Daniel I; Gómez-Miguel, Begoña; Solas, M Teresa; Gil, Carmen; Martínez, Ana; Arahuetes, Rosa M
2016-01-01
Amyotrophic lateral sclerosis (ALS) is a progressive motor neuron degenerative disease that has no effective treatment to date. Drug discovery has been hampered by the lack of knowledge of its molecular etiology together with the limited animal models available for research. Recently, a motor neuron disease animal model has been developed using β-N-methylamino-L-alanine (L-BMAA), a neurotoxic amino acid linked to the appearance of ALS. In the present work, the neuroprotective role of VP2.51, a small heterocyclic GSK-3 inhibitor, is analysed in this novel murine model together with an analysis of autophagy. Daily administration of VP2.51 for two weeks, starting the first day after L-BMAA treatment, leads to total recovery of neurological symptoms and prevents the activation of autophagic processes in rats. These results show that the L-BMAA murine model can be used to test the efficacy of new drugs. In addition, the results confirm the therapeutic potential of GSK-3 inhibitors, and especially VP2.51, for future disease-modifying treatment of motor neuron disorders like ALS.
Selen, Ebru Selin; Bolandnazar, Zeinab; Tonelli, Marco; Bütz, Daniel E; Haviland, Julia A; Porter, Warren P; Assadi-Porter, Fariba M
2015-08-07
Polycystic ovary syndrome (PCOS) is associated with metabolic and endocrine disorders in women of reproductive age. The etiology of PCOS is still unknown. Mice prenatally treated with glucocorticoids exhibit metabolic disturbances that are similar to those seen in women with PCOS. We used an untargeted nuclear magnetic resonance (NMR)-based metabolomics approach to understand the metabolic changes occurring in the plasma and kidney over time in female glucocorticoid-treated (GC-treated) mice. There are significant changes in plasma amino acid levels (valine, tyrosine, and proline) and their intermediates (2-hydroxybutyrate, 4-aminobutyrate, and taurine), whereas in kidneys, TCA cycle metabolism (citrate, fumarate, and succinate) and pentose phosphate (PP) pathway products (inosine and uracil) are significantly altered. The changes in metabolic substrates in the plasma and kidneys of treated mice are associated with altered amino acid metabolism, increased cytoplasmic PP, and increased mitochondrial activity, leading to a more oxidized state. This study identifies biomarkers associated with metabolic dysfunction in kidney mitochondria of a prenatal glucocorticoid-treated mouse model of PCOS that may be used as early predictive biomarkers of oxidative stress in the PCOS metabolic disorder in women.
MTO1-deficient mouse model mirrors the human phenotype showing complex I defect and cardiomyopathy.
Lore Becker
Full Text Available Recently, mutations in the mitochondrial translation optimization factor 1 gene (MTO1) were identified as causative in children with hypertrophic cardiomyopathy, lactic acidosis and respiratory chain defect. Here, we describe an MTO1-deficient mouse model generated by gene trap mutagenesis that mirrors the human phenotype remarkably well. As in patients, the most prominent signs and symptoms were cardiovascular and included bradycardia and cardiomyopathy. In addition, the mutant mice showed a marked worsening of arrhythmias during induction and reversal of anaesthesia. The detailed morphological and biochemical workup of murine hearts indicated that the myocardial damage was due to complex I deficiency and mitochondrial dysfunction. In contrast, neurological examination was largely normal in Mto1-deficient mice. A translational consequence of this mouse model may be to caution against anaesthesia-related cardiac arrhythmias which may be fatal in patients.
Nonlinear turbulence models for predicting strong curvature effects
XU Jing-lei; MA Hui-yang; HUANG Yu-ning
2008-01-01
Prediction of the characteristics of turbulent flows with strong streamline curvature, such as flows in turbomachines, curved channel flows, and flows around airfoils and buildings, is of great importance in engineering applications and poses a very practical challenge for turbulence modeling. In this paper, we analyze qualitatively the curvature effects on the structure of turbulence and conduct numerical simulations of a turbulent U-duct flow with a number of turbulence models in order to assess their overall performance. The models evaluated in this work are some typical linear eddy viscosity turbulence models, nonlinear eddy viscosity turbulence models (NLEVM) (quadratic and cubic), a quadratic explicit algebraic stress model (EASM) and a Reynolds stress model (RSM) developed based on the second-moment closure. Our numerical results show that a cubic NLEVM that performs considerably well in other benchmark turbulent flows, such as the Craft, Launder and Suga model and the Huang and Ma model, is able to capture the major features of the highly curved turbulent U-duct flow, including the damping of turbulence near the convex wall, the enhancement of turbulence near the concave wall, and the subsequent turbulent flow separation. The predictions of the cubic models are quite close to those of the RSM and in relatively good agreement with the experimental data, which suggests that these models may be employed to simulate turbulent curved flows in engineering applications.
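The linear eddy-viscosity closure that the abstract's nonlinear models generalize can be illustrated with a minimal sketch. This is not the authors' implementation: it shows only the standard Boussinesq relation with the conventional k-epsilon constant C_mu = 0.09, and the flow-state numbers are hypothetical.

```python
def boussinesq_shear_stress(k, eps, dudy, c_mu=0.09, rho=1.0):
    """Linear eddy-viscosity (Boussinesq) estimate of the turbulent shear
    stress for a simple shear du/dy:
        nu_t  = C_mu * k**2 / eps   (standard closure constant C_mu = 0.09)
        tau_t = rho * nu_t * du/dy
    """
    nu_t = c_mu * k ** 2 / eps
    return rho * nu_t * dudy

# Hypothetical near-wall state: k = 0.5 m^2/s^2, eps = 10 m^2/s^3, du/dy = 100 1/s
tau = boussinesq_shear_stress(0.5, 10.0, 100.0)  # 0.09 * 0.025 * 100 = 0.225
```

Because nu_t here is a scalar, this form is insensitive to streamline curvature; the quadratic and cubic NLEVMs evaluated in the paper add extra strain-rotation products precisely to recover that sensitivity.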
Estimating the magnitude of prediction uncertainties for the APLE model
Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, we conduct an uncertainty analysis for the Annual P ...
Prediction of Catastrophes: an experimental model
Peters, Randall D; Pomeau, Yves
2012-01-01
Catastrophes of all kinds can be roughly defined as short-duration, large-amplitude events following and followed by long periods of "ripening". Major earthquakes surely belong to the class of catastrophic events. Because of the space-time scales involved, an experimental approach is often difficult, not to say impossible, however desirable it may be. Described in this article is a "laboratory" setup that yields data of a type that is amenable to theoretical methods of prediction. Observations are made of a critical slowing down in the noisy signal of a solder wire creeping under constant stress. This effect is shown to be a fair signal of the forthcoming catastrophe in both of two dynamical models. The first is an "abstract" model in which a time-dependent quantity drifts slowly but makes quick jumps from time to time. The second is a realistic physical model for the collective motion of dislocations (the Ananthakrishna set of equations for creep). Hope thus exists that similar changes in the response to ...
Predictive modeling of low solubility semiconductor alloys
Rodriguez, Garrett V.; Millunchick, Joanna M.
2016-09-01
GaAsBi is of great interest for applications in high efficiency optoelectronic devices due to its highly tunable bandgap. However, the experimental growth of high Bi content films has proven difficult. Here, we model GaAsBi film growth using a kinetic Monte Carlo simulation that explicitly takes cation and anion reactions into account. The unique behavior of Bi droplets is explored, and a sharp decrease in Bi content upon Bi droplet formation is demonstrated. The high mobility of simulated Bi droplets on GaAsBi surfaces is shown to produce phase separated Ga-Bi droplets as well as depressions on the film surface. A phase diagram for a range of growth rates that predicts both Bi content and droplet formation is presented to guide the experimental growth of high Bi content GaAsBi films.
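A kinetic Monte Carlo simulation of the kind described above selects, at each step, one surface event with probability proportional to its rate and advances time stochastically. The sketch below shows only this generic rejection-free (Gillespie/BKL) selection step, not the authors' GaAsBi simulator; the event names and rate values are hypothetical.

```python
import math
import random

def kmc_step(rates, rng):
    """One rejection-free KMC step: choose an event with probability
    proportional to its rate, then advance time by an exponentially
    distributed waiting time with mean 1/total_rate."""
    total = sum(rates.values())
    r = rng.random() * total
    acc = 0.0
    for event, rate in rates.items():
        acc += rate
        if r < acc:
            break
    dt = -math.log(1.0 - rng.random()) / total  # 1 - u in (0, 1] avoids log(0)
    return event, dt

# Hypothetical surface events for a III-V growth model (rates in 1/s)
rates = {"Ga_hop": 50.0, "Bi_incorporate": 5.0, "Bi_desorb": 20.0, "As_incorporate": 25.0}
event, dt = kmc_step(rates, random.Random(42))
```

With these illustrative rates, Bi desorption fires four times as often as Bi incorporation, which is the kind of imbalance that limits Bi content in the film.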
Distributed model predictive control made easy
Negenborn, Rudy
2014-01-01
The rapid evolution of computer science, communication, and information technology has enabled the application of control techniques to systems beyond the possibilities of control theory just a decade ago. Critical infrastructures such as electricity, water, traffic and intermodal transport networks are now in the scope of control engineers. The sheer size of such large-scale systems requires the adoption of advanced distributed control approaches. Distributed model predictive control (MPC) is one of the promising control methodologies for control of such systems. This book provides a state-of-the-art overview of distributed MPC approaches, while at the same time making clear directions of research that deserve more attention. The core and rationale of 35 approaches are carefully explained. Moreover, detailed step-by-step algorithmic descriptions of each approach are provided. These features make the book a comprehensive guide both for those seeking an introduction to distributed MPC as well as for those ...
Leptogenesis in minimal predictive seesaw models
Björkeroth, Fredrik; de Anda, Francisco J.; de Medeiros Varzielas, Ivo; King, Stephen F.
2015-10-01
We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses with Yukawa couplings to (ν_e, ν_μ, ν_τ) proportional to (0, 1, 1) and (1, n, n - 2), respectively, where n is a positive integer. The neutrino Yukawa matrix is therefore characterised by two proportionality constants, with their relative phase providing a leptogenesis-PMNS link, enabling the lightest right-handed neutrino mass to be determined from neutrino data and the observed BAU. We discuss an SU(5) SUSY GUT example, where A_4 vacuum alignment provides the required Yukawa structures with n = 3, while a Z_9 symmetry fixes the relative phase to be a ninth root of unity.
Predictive modelling of contagious deforestation in the Brazilian Amazon.
Isabel M D Rosa
Full Text Available Tropical forests are diminishing in extent due primarily to the rapid expansion of agriculture, but the future magnitude and geographical distribution of tropical deforestation is uncertain. Here, we introduce a dynamic and spatially-explicit model of deforestation that predicts the potential magnitude and spatial pattern of Amazon deforestation. Our model differs from previous models in three ways: (1) it is probabilistic and quantifies uncertainty around predictions and parameters; (2) the overall deforestation rate emerges "bottom up", as the sum of local-scale deforestation driven by local processes; and (3) deforestation is contagious, such that the local deforestation rate increases through time if adjacent locations are deforested. For the scenarios evaluated-pre- and post-PPCDAM ("Plano de Ação para Proteção e Controle do Desmatamento na Amazônia")-the parameter estimates confirmed that forests near roads and already deforested areas are significantly more likely to be deforested in the near future, and less likely in protected areas. Validation tests showed that our model correctly predicted the magnitude and spatial pattern of deforestation that accumulates over time, but that there is very high uncertainty surrounding the exact sequence in which pixels are deforested. The model predicts that under pre-PPCDAM conditions (assuming no change in parameter values due to, for example, changes in government policy), annual deforestation rates would halve by 2050 compared with 2002, although this partly reflects reliance on a static map of the road network. Consistent with other models, under the pre-PPCDAM scenario, states in the south and east of the Brazilian Amazon have a high predicted probability of losing nearly all forest outside of protected areas by 2050. This pattern is less strong in the post-PPCDAM scenario. Contagious spread along roads and through areas lacking formal protection could allow deforestation to reach the core, which is
Predictive modelling of contagious deforestation in the Brazilian Amazon.
Rosa, Isabel M D; Purves, Drew; Souza, Carlos; Ewers, Robert M
2013-01-01
Tropical forests are diminishing in extent due primarily to the rapid expansion of agriculture, but the future magnitude and geographical distribution of tropical deforestation is uncertain. Here, we introduce a dynamic and spatially-explicit model of deforestation that predicts the potential magnitude and spatial pattern of Amazon deforestation. Our model differs from previous models in three ways: (1) it is probabilistic and quantifies uncertainty around predictions and parameters; (2) the overall deforestation rate emerges "bottom up", as the sum of local-scale deforestation driven by local processes; and (3) deforestation is contagious, such that the local deforestation rate increases through time if adjacent locations are deforested. For the scenarios evaluated-pre- and post-PPCDAM ("Plano de Ação para Proteção e Controle do Desmatamento na Amazônia")-the parameter estimates confirmed that forests near roads and already deforested areas are significantly more likely to be deforested in the near future, and less likely in protected areas. Validation tests showed that our model correctly predicted the magnitude and spatial pattern of deforestation that accumulates over time, but that there is very high uncertainty surrounding the exact sequence in which pixels are deforested. The model predicts that under pre-PPCDAM conditions (assuming no change in parameter values due to, for example, changes in government policy), annual deforestation rates would halve by 2050 compared with 2002, although this partly reflects reliance on a static map of the road network. Consistent with other models, under the pre-PPCDAM scenario, states in the south and east of the Brazilian Amazon have a high predicted probability of losing nearly all forest outside of protected areas by 2050. This pattern is less strong in the post-PPCDAM scenario. Contagious spread along roads and through areas lacking formal protection could allow deforestation to reach the core, which is currently
Ac Synchronous Servo Based On The Armature Voltage Prediction Model
Hoshino, Akihiro; Kuromaru, Hiroshi; Kobayashi, Shinichi
1987-10-01
A new control method for the AC synchronous servo-system (brushless DC servo-system) is discussed. The new system is based on the armature voltage prediction model. Without a resolver-to-digital converter or a tachometer-generator, the resolver directly provides three signals to the system: the current command, the induced voltage, and the rotor speed. The new method realizes a simple hardware configuration. Experimental results show a good performance of the system.
Cotter Finbarr E
2009-08-01
Full Text Available Abstract Background Down syndrome (DS), caused by trisomy of human chromosome 21 (HSA21), is the most common genetic birth defect. Congenital heart defects (CHD) are seen in 40% of DS children, and >50% of all atrioventricular canal defects in infancy are caused by trisomy 21, but the causative genes remain unknown. Results Here we show that aberrant adhesion and proliferation of DS cells can be reproduced using a transchromosomic model of DS (mouse fibroblasts bearing supernumerary HSA21). We also demonstrate a decrease in cell migration in transchromosomic cells independent of their adhesion properties. We show that the cell-autonomous proteome response to the presence of Collagen VI in the extracellular matrix is strongly affected by trisomy 21. Conclusion This set of experiments establishes a new model system for genetic dissection of the specific HSA21 gene-overdose contributions to aberrant cell migration, adhesion, proliferation and the specific proteome response to collagen VI, cellular phenotypes linked to the pathogenesis of CHD.
Serena, Elena; Zatti, Susi; Zoso, Alice; Lo Verso, Francesca; Tedesco, F Saverio; Cossu, Giulio; Elvassore, Nicola
2016-12-01
Restoration of the protein dystrophin on the muscle membrane is the goal of many research lines aimed at curing Duchenne muscular dystrophy (DMD). Results of ongoing preclinical and clinical trials suggest that partial restoration of dystrophin might be sufficient to significantly reduce muscle damage. Different myogenic progenitors are candidates for cell therapy of muscular dystrophies, but only satellite cells and pericytes have already entered clinical experimentation. This study aimed to provide in vitro quantitative evidence of the ability of mesoangioblasts to restore dystrophin, in terms of protein accumulation and distribution, within myotubes derived from DMD patients, using a microengineered model. We designed an ad hoc experimental strategy to miniaturize on a chip the standard process of muscle regeneration, independent of variables such as inflammation and fibrosis. It is based on the coculture, at different ratios, of human dystrophin-positive myogenic progenitors and dystrophin-negative myoblasts on a substrate with muscle-like physiological stiffness and cell micropatterns. Results showed that both healthy myoblasts and mesoangioblasts restored dystrophin expression in DMD myotubes. However, mesoangioblasts showed unexpectedly high efficiency relative to myoblasts in dystrophin production, in terms of both the amount of protein produced (40% vs. 15%) and the length of the dystrophin membrane domain (210-240 µm vs. 40-70 µm). These results show that our microscaled in vitro model of human DMD skeletal muscle validated previous in vivo preclinical work and may be used to predict the efficacy of new methods aimed at enhancing dystrophin accumulation and distribution before they are tested in vivo, reducing the time, costs, and variability of clinical experimentation. This study aimed to provide in vitro quantitative evidence of the ability of human mesoangioblasts to restore dystrophin, in terms of protein accumulation and distribution, within myotubes derived from
A prediction model for progressive disease in systemic sclerosis
Meijs, Jessica; Schouffoer, Anne A; Ajmone Marsan, Nina; Stijnen, Theo; Putter, Hein; Ninaber, Maarten K; Huizinga, Tom W J; de Vries-Bouwstra, Jeska K
2015-01-01
Objective To develop a model that assesses the risk for progressive disease in patients with systemic sclerosis (SSc) over the short term, in order to guide clinical management. Methods Baseline characteristics and 1 year follow-up results of 163 patients with SSc referred to a multidisciplinary healthcare programme were evaluated. Progressive disease was defined as: death, ≥10% decrease in forced vital capacity, ≥15% decrease in diffusing capacity for carbon monoxide, ≥10% decrease in body weight, ≥30% decrease in estimated-glomerular filtration rate, ≥30% increase in modified Rodnan Skin Score (with Δ≥5) or ≥0.25 increase in Scleroderma Health Assessment Questionnaire. The number of patients with progressive disease was determined. Univariable and multivariable logistic regression analyses were used to assess the probability of progressive disease for each individual patient. Performance of the prediction model was evaluated using a calibration plot and area under the receiver operating characteristic curve. Results 63 patients had progressive disease, including 8 patients who died ≤18 months after first evaluation. Multivariable analysis showed that friction rubs, proximal muscular weakness and decreased maximum oxygen uptake as % predicted, adjusted for age, gender and use of immunosuppressive therapy at baseline, were significantly associated with progressive disease. Using the prediction model, the predicted chance for progressive disease increased from a pretest chance of 37% to 67–89%. Conclusions Using the prediction model, the chance for progressive disease for individual patients could be doubled. Friction rubs, proximal muscular weakness and maximum oxygen uptake as % predicted were identified as relevant parameters. PMID:26688749
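A multivariable logistic model like the one described above turns a patient's baseline characteristics into a probability of progressive disease. The sketch below shows only the mechanics of such a model; the coefficient values, variable names, and the example patient are hypothetical placeholders, not the fitted values from the study.

```python
import math

def progression_probability(intercept, coefs, predictors):
    """Multivariable logistic model: p = 1 / (1 + exp(-(b0 + sum(bi * xi)))).
    `coefs` and `predictors` are keyed by the same (hypothetical) names."""
    z = intercept + sum(coefs[name] * x for name, x in predictors.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients for the three baseline parameters the study found relevant
coefs = {"friction_rubs": 1.2, "prox_muscle_weakness": 0.9, "vo2max_pct_pred": -0.04}
patient = {"friction_rubs": 1, "prox_muscle_weakness": 0, "vo2max_pct_pred": 60}
p = progression_probability(-0.5, coefs, patient)
```

Note the sign convention: the negative coefficient on maximum oxygen uptake (% predicted) makes better exercise capacity lower the predicted risk, matching the direction of association reported in the abstract.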
Formalization of the model of the enterprise insolvency risk prediction
Elena V. Shirinkina
2015-12-01
Full Text Available Objective: to improve the conceptual apparatus and analytical procedures of insolvency risk identification. Methods: general scientific methods of systemic and comparative analysis; economic-statistical and dynamic analysis of economic processes and phenomena. Results: managing insolvency risk is now relevant for any company regardless of the economic sector. Instability manifests itself through the uncertainty of the directions of external environment changes and their high frequency. Analysis of the economic literature showed that there is currently no single approach to the systematization of methods for insolvency risk prediction, which means there is no objective view on the tools that can be used to monitor insolvency risk. In this respect, the scientific and practical search for representative indicators for the formalization of insolvency prediction models is very important. The study therefore solved the following tasks: defined the nature of insolvency risk and its identification in the process of financial relations in the management system; proved the representativeness of the indicators in insolvency risk prediction; and formed a model of insolvency risk prediction. Scientific novelty: grounding the model of insolvency risk prediction. Practical significance: development of a theoretical framework to address issues arising in the diagnosis of insolvent enterprises, and application of the results in the practice of bankruptcy institution bodies. The presented model allows predicting the insolvency risk of an enterprise through the general development trend and the fluctuation boundaries of bankruptcy risk, determining the significance of each indicator-factor and its quantitative impact, and therefore avoiding the risk of enterprise insolvency.
Estimating carbon and showing impacts of drought using satellite data in regression-tree models
Boyte, Stephen; Wylie, Bruce K.; Howard, Danny; Dahal, Devendra; Gilmanov, Tagir G.
2018-01-01
Integrating spatially explicit biogeophysical and remotely sensed data into regression-tree models enables the spatial extrapolation of training data over large geographic spaces, allowing a better understanding of broad-scale ecosystem processes. The current study presents annual gross primary production (GPP) and annual ecosystem respiration (RE) for 2000–2013 in several short-statured vegetation types using carbon flux data from towers that are located strategically across the conterminous United States (CONUS). We calculate carbon fluxes (annual net ecosystem production [NEP]) for each year in our study period, which includes 2012 when drought and higher-than-normal temperatures influence vegetation productivity in large parts of the study area. We present and analyse carbon flux dynamics in the CONUS to better understand how drought affects GPP, RE, and NEP. Model accuracy metrics show strong correlation coefficients (r ≥ 94%) between training and estimated data for both GPP and RE. Overall, average annual GPP, RE, and NEP are relatively constant throughout the study period except during 2012 when almost 60% less carbon is sequestered than normal. These results allow us to conclude that this modelling method effectively estimates carbon dynamics through time and allows the exploration of impacts of meteorological anomalies and vegetation types on carbon dynamics.
Economic decision making and the application of nonparametric prediction models
Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.
2008-01-01
Sustained increases in energy prices have focused attention on gas resources in low-permeability shale or in coals that were previously considered economically marginal. Daily well deliverability is often relatively small, although the estimates of the total volumes of recoverable resources in these settings are often large. Planning and development decisions for extraction of such resources must be areawide because profitable extraction requires optimization of scale economies to minimize costs and reduce risk. For an individual firm, the decision to enter such plays depends on reconnaissance-level estimates of regional recoverable resources and on cost estimates to develop untested areas. This paper shows how simple nonparametric local regression models, used to predict technically recoverable resources at untested sites, can be combined with economic models to compute regional-scale cost functions. The context of the worked example is the Devonian Antrim-shale gas play in the Michigan basin. One finding relates to selection of the resource prediction model to be used with economic models. Models chosen because they can best predict aggregate volume over larger areas (many hundreds of sites) smooth out granularity in the distribution of predicted volumes at individual sites. This loss of detail affects the representation of economic cost functions and may affect economic decisions. Second, because some analysts consider unconventional resources to be ubiquitous, the selection and order of specific drilling sites may, in practice, be determined arbitrarily by extraneous factors. The analysis shows a 15-20% gain in gas volume when these simple models are applied to order drilling prospects strategically rather than to choose drilling locations randomly. Copyright © 2008 Society of Petroleum Engineers.
Predictive modeling of nanomaterial exposure effects in biological systems
Liu X
2013-09-01
Full Text Available Xiong Liu,1 Kaizhi Tang,1 Stacey Harper,2 Bryan Harper,2 Jeffery A Steevens,3 Roger Xu1 1Intelligent Automation, Inc., Rockville, MD, USA; 2Department of Environmental and Molecular Toxicology, School of Chemical, Biological, and Environmental Engineering, Oregon State University, Corvallis, OR, USA; 3ERDC Environmental Laboratory, Vicksburg, MS, USA Background: Predictive modeling of the biological effects of nanomaterials is critical for industry and policymakers to assess the potential hazards resulting from the application of engineered nanomaterials. Methods: We generated an experimental dataset on the toxic effects experienced by embryonic zebrafish due to exposure to nanomaterials. Several nanomaterials were studied, such as metal nanoparticles, dendrimer, metal oxide, and polymeric materials. The embryonic zebrafish metric (EZ Metric) was used as a screening-level measurement representative of adverse effects. Using the dataset, we developed a data mining approach to model the toxic endpoints and the overall biological impact of nanomaterials. Data mining techniques, such as numerical prediction, can assist analysts in developing risk assessment models for nanomaterials. Results: We found several important attributes that contribute to the 24 hours post-fertilization (hpf) mortality, such as dosage concentration, shell composition, and surface charge. These findings concur with previous studies on nanomaterial toxicity using embryonic zebrafish. We conducted case studies on modeling the overall effect/impact of nanomaterials and the specific toxic endpoints such as mortality, delayed development, and morphological malformations. The results show that we can achieve high prediction accuracy for certain biological effects, such as 24 hpf mortality, 120 hpf mortality, and 120 hpf heart malformation. The results also show that the weighting scheme for individual biological effects has a significant influence on modeling the overall impact of
A Novel Exercise Thermophysiology Comfort Prediction Model with Fuzzy Logic
Nan Jia
2016-01-01
Full Text Available Participation in a regular exercise program can improve health status and contribute to an increase in life expectancy. However, exercise accidents such as dehydration, exertional heatstroke, syncope, and even sudden death do occur. If these accidents can be analyzed or predicted before they happen, much discomfort and disease can be alleviated or avoided. An exercise thermophysiology comfort prediction model is therefore needed. In this paper, coupling the thermal interactions among the human body, clothing, and environment (HCE) as well as the human body's physiological properties, a human thermophysiology regulatory model is designed to enhance the simulation of human thermophysiology in the HCE system. Some important thermal and physiological responses can be simulated. Based on the simulation results, a human exercise thermophysiology comfort prediction method using a fuzzy inference system is proposed. The experimental results show that the predicted thermophysiological comfort follows the same trend as the experimentally measured comfort. Finally, a mobile application platform for human exercise comfort prediction is designed and implemented.
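A fuzzy inference system of the kind the abstract proposes maps crisp physiological inputs through overlapping membership functions and rules to a comfort score. The toy sketch below is not the paper's system: it uses a single hypothetical input (core temperature), two rules, and weighted-average defuzzification, purely to show the mechanics.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def comfort_risk(core_temp_c):
    """Toy Mamdani-style inference: two rules mapping core temperature to a
    discomfort score in [0, 1]. Breakpoints and rule outputs are hypothetical.
      Rule 1: IF temp is 'comfortable' THEN risk is low  (0.1)
      Rule 2: IF temp is 'overheated'  THEN risk is high (0.9)"""
    comfortable = tri(core_temp_c, 36.0, 37.0, 38.0)
    overheated = tri(core_temp_c, 37.5, 39.5, 41.5)
    w = comfortable + overheated
    if w == 0.0:
        return None  # input outside all membership supports
    return (comfortable * 0.1 + overheated * 0.9) / w  # weighted-average defuzzification
```

Inputs in the overlap region (37.5–38 °C here) fire both rules partially, which is what gives fuzzy systems their smooth transitions between comfort states.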
Comparing model predictions for ecosystem-based management
Jacobsen, Nis Sand; Essington, Timothy E.; Andersen, Ken Haste
2016-01-01
Ecosystem modeling is becoming an integral part of fisheries management, but there is a need to identify differences between predictions derived from models employed for scientific and management purposes. Here, we compared two models: a biomass-based food-web model (Ecopath with Ecosim (EwE)) and a size-structured fish community model. The models were compared with respect to predicted ecological consequences of fishing, to identify commonalities and differences in model predictions for the California Current fish community. We compared the models regarding direct and indirect responses to fishing on one or more species. The size-based model predicted a higher fishing mortality needed to reach maximum sustainable yield than EwE for most species. The size-based model also predicted stronger top-down effects of predator removals than EwE. In contrast, EwE predicted stronger bottom-up effects...
Remaining Useful Lifetime (RUL) - Probabilistic Predictive Model
Ephraim Suhir
2011-01-01
Full Text Available Reliability evaluations and assurances cannot be delayed until the device (system) is fabricated and put into operation. Reliability of an electronic product should be conceived at the early stages of its design; implemented during manufacturing; evaluated (considering customer requirements and the existing specifications) by electrical, optical and mechanical measurements and testing; checked (screened) during manufacturing (fabrication); and, if necessary and appropriate, maintained in the field during the product's operation. A simple and physically meaningful probabilistic predictive model is suggested for the evaluation of the remaining useful lifetime (RUL) of an electronic device (system) after an appreciable deviation from its normal operation conditions has been detected, and the increase in the failure rate and the change in the configuration of the wear-out portion of the bathtub curve have been assessed. The general concepts are illustrated by numerical examples. The model can be employed, along with other PHM forecasting and inference tools and means, to evaluate and maintain a high level of reliability (probability of non-failure) of a device (system) at the operation stage of its lifetime.
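One common way to make a probabilistic RUL estimate concrete is conditional reliability under a Weibull wear-out assumption, where a shape parameter beta > 1 models the rising-failure-rate portion of the bathtub curve. This is a generic sketch under that assumption, not the author's model; all parameter values are hypothetical.

```python
import math

def weibull_reliability(t, beta, eta):
    """Weibull survival function R(t) = exp(-(t/eta)**beta);
    beta > 1 corresponds to the wear-out part of the bathtub curve."""
    return math.exp(-((t / eta) ** beta))

def remaining_life(t_now, target_r, beta, eta):
    """Operating time beyond t_now until the conditional reliability
    R(t_now + tau) / R(t_now) drops to target_r."""
    r_now = weibull_reliability(t_now, beta, eta)
    # invert R(t) = target_r * r_now for t, then subtract the age already accrued
    t_target = eta * (-math.log(target_r * r_now)) ** (1.0 / beta)
    return t_target - t_now

# Hypothetical device: characteristic life eta = 10,000 h, shape beta = 2.5,
# already operated 6,000 h; time until its conditional reliability halves:
tau = remaining_life(6000.0, 0.5, 2.5, 10000.0)
```

A detected degradation would enter such a model as re-estimated beta and eta, shrinking the computed tau accordingly.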
A Predictive Model of Geosynchronous Magnetopause Crossings
Dmitriev, A; Chao, J -K
2013-01-01
We have developed a model predicting whether or not the magnetopause crosses geosynchronous orbit at a given location for a given solar wind pressure Psw, Bz component of the interplanetary magnetic field (IMF) and geomagnetic conditions characterized by the 1-min SYM-H index. The model is based on more than 300 geosynchronous magnetopause crossings (GMCs) and about 6000 minutes during which geosynchronous satellites of the GOES and LANL series were located in the magnetosheath (so-called MSh intervals) from 1994 to 2001. Minimizing the Psw required for GMCs and MSh intervals at various locations, Bz and SYM-H allows describing both the effect of magnetopause dawn-dusk asymmetry and the saturation of the Bz influence for very large southward IMF. The asymmetry is strong for large negative Bz and almost disappears when Bz is positive. We found that the larger the amplitude of negative SYM-H, the lower the solar wind pressure required for GMCs. We attribute this effect to a depletion of the dayside magnetic field by a storm-time intensification of t...
Forced versus coupled dynamics in Earth system modelling and prediction
B. Knopf
2005-01-01
Full Text Available We compare coupled nonlinear climate models and their simplified forced counterparts with respect to predictability and phase space topology. Various types of uncertainty plague climate change simulation, which is, in turn, a crucial element of Earth System modelling. Since the currently preferred strategy for simulating the climate system, or the Earth System at large, is the coupling of sub-system modules (representing, e.g., atmosphere, oceans, global vegetation), this paper explicitly addresses the errors and indeterminacies generated by the coupling procedure. The focus is on a comparison of forced dynamics as opposed to fully, i.e. intrinsically, coupled dynamics. The former represents a particular type of simulation, where the time behaviour of one complex system component is prescribed by data or some other external information source. Such a simplifying technique is often employed in Earth System models in order to save computing resources, in particular when massive model inter-comparisons need to be carried out. Our contribution to the debate is based on the investigation of two representative model examples, namely (i) a low-dimensional coupled atmosphere-ocean simulator, and (ii) a replica-like simulator embracing corresponding components. Whereas in general the forced version (ii) is able to mimic its fully coupled counterpart (i), we show in this paper that for a considerable fraction of parameter- and state-space, the two approaches qualitatively differ. Here we take up a phenomenon concerning the predictability of coupled versus forced models that was reported earlier in this journal: the observation that the time series of the forced version display artificial predictive skill. We present an explanation in terms of nonlinear dynamical theory. In particular, we observe an intermittent version of artificial predictive skill, which we call on-off synchronization, and trace it back to the appearance of unstable periodic orbits. We also
Prediction of Farmers’ Income and Selection of Model ARIMA
2010-01-01
Based on scholars' research techniques for predicting farmers' income and the data on per capita annual net income in rural households in the Henan Statistical Yearbook from 1979 to 2009, it is found that the time series of farmers' income follows an I(2) non-stationary process. The order determination and identification of the model are achieved by adopting the correlogram-based analytical method of Box-Jenkins. On the basis of comparing the properties of a group of models with different parameters, the model ARIMA(4,2,2) is built. The testing result shows that the residual error of the selected model is white noise and accords with the normal distribution, so the model can be used to predict farmers' income. The model prediction indicates that income in rural households will continue to increase from 2009 to 2012, reaching 2 282.4, 2 502.9, 2 686.9 and 2 884.5 respectively. The growth speed will decline from fast to slow with weak sustainability.
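The "d = 2" in ARIMA(4, 2, 2) means the income series is differenced twice before the AR and MA terms are fitted, which is exactly what an I(2) process requires. The sketch below illustrates only that differencing step on a hypothetical quadratic-trend series (not the Henan data): second differencing reduces it to a constant, i.e. a stationary sequence.

```python
def diff(series):
    """First difference of a sequence: x[t] - x[t-1]."""
    return [b - a for a, b in zip(series, series[1:])]

# Hypothetical I(2)-like series with a quadratic trend, x[t] = 0.5*t^2 + 3*t + 100
income = [0.5 * t * t + 3.0 * t + 100.0 for t in range(10)]
d1 = diff(income)   # still trending (linear in t) -> not yet stationary
d2 = diff(d1)       # constant -> stationary, ready for ARMA(4, 2) fitting
```

In a library such as statsmodels, the same operation is requested implicitly via the model order, e.g. `ARIMA(endog, order=(4, 2, 2))`.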
Prediction of benzodiazepines solubility using different cosolvency models.
Nokhodchi, A; Shokri, J; Barzegar-Jalali, M; Ghafourian, T
2002-07-01
The solubility of four benzodiazepines (BZPs), diazepam (DIZ), lorazepam (LRZ), clonazepam (CLZ), and chlordiazepoxide (CHZ), in water-cosolvent (ethanol, propylene glycol, and polyethylene glycol 200) binary systems was studied. In general, increasing the volume fraction of cosolvent increased the solubility of the benzodiazepines. The mole fraction solubilities were fitted to various cosolvency models, namely the extended Hildebrand approach (EHA), excess free energy (EFE), combined nearly ideal binary solvent/Redlich-Kister (CNIBS/R-K), general single model (GSM), mixture response surface (MR-S), double log-log (DL-L), and linear double log-log (LDL-L) models. The results showed that the DL-L model was the best at predicting the solubility of all drugs in all the water-cosolvent mixtures (OAE% = 4.71). The minimum and maximum errors, 2.67% and 11.78%, were observed for benzodiazepine solubility in water-propylene glycol and water-ethanol mixtures, respectively. Three models (EFE, CNIBS/R-K and LDL-L) were chosen as general models for describing the solubility of these structurally similar drugs in each of the solvent systems. Among these, the EFE model was the best at predicting the solubility of benzodiazepines in binary solvent mixtures (OAE% = 11.19).
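The exact DL-L and EFE forms are not reproduced in this record; as an illustration of how a cosolvency model maps cosolvent fraction to mixture solubility, here is the classical log-linear (Yalkowsky-Roseman) rule with hypothetical solubility values:

```python
import numpy as np

def log_linear_solubility(x_w, x_c, f_c):
    """Yalkowsky-Roseman log-linear rule: ln x_m = f_c*ln x_c + (1-f_c)*ln x_w,
    where x_w, x_c are solubilities in pure water and pure cosolvent and
    f_c is the cosolvent volume fraction."""
    return np.exp(f_c * np.log(x_c) + (1.0 - f_c) * np.log(x_w))

# Hypothetical mole-fraction solubilities for a diazepam-like solute.
x_water, x_ethanol = 1e-5, 5e-3
for f in np.linspace(0.0, 1.0, 6):
    print(f"f_c={f:.1f}  x_m={log_linear_solubility(x_water, x_ethanol, f):.2e}")
```

The rule reduces to the pure-solvent solubilities at f_c = 0 and f_c = 1 and interpolates exponentially in between, which is the baseline the more elaborate models above refine.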
Modeling and Prediction of Hot Deformation Flow Curves
Mirzadeh, Hamed; Cabrera, Jose Maria; Najafizadeh, Abbas
2012-01-01
The modeling of hot flow stress and prediction of flow curves for unseen deformation conditions are important in metal-forming processes because any feasible mathematical simulation needs accurate flow description. In the current work, in an attempt to summarize, generalize, and introduce efficient methods, the dynamic recrystallization (DRX) flow curves of a 17-4 PH martensitic precipitation hardening stainless steel, a medium carbon microalloyed steel, and a 304 H austenitic stainless steel were modeled and predicted using (1) a hyperbolic sine equation with strain-dependent constants, (2) a developed constitutive equation in a simple normalized stress-normalized strain form and its modified version, and (3) a feed-forward artificial neural network (ANN). These methods were critically discussed, and the ANN technique was found to be the best for modeling the available flow curves; however, the developed constitutive equation showed slightly better performance than the ANN, and significantly better predicted values than the hyperbolic sine equation, in the prediction of flow curves for unseen deformation conditions.
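Method (1), the hyperbolic sine (Sellars-Tegart/Garofalo) form, can be sketched via the Zener-Hollomon parameter; the material constants below are illustrative placeholders, not the fitted values for the steels studied:

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol*K)

def flow_stress(strain_rate, T, A=1e13, alpha=0.01, n=5.0, Q=400e3):
    """Sellars-Tegart form: strain_rate = A*sinh(alpha*sigma)**n * exp(-Q/(R*T)),
    inverted as sigma = (1/alpha)*asinh((Z/A)**(1/n)) with the
    Zener-Hollomon parameter Z = strain_rate*exp(Q/(R*T)).
    A, alpha, n and Q here are placeholder constants for illustration."""
    Z = strain_rate * np.exp(Q / (R * T))
    return np.arcsinh((Z / A) ** (1.0 / n)) / alpha

for rate in (0.01, 0.1, 1.0):                      # 1/s
    print(f"strain rate {rate}: sigma = {flow_stress(rate, T=1373.0):.1f}")
```

With alpha in MPa^-1, the returned stress is in MPa; stress rises with strain rate and falls with temperature, as the Z-parameter formulation requires.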
Survival model construction guided by fit and predictive strength.
Chauvel, Cécile; O'Quigley, John
2016-10-05
Survival model construction can be guided by goodness-of-fit techniques as well as measures of predictive strength. Here, we aim to bring these distinct techniques together within a single framework. The goal is to best characterize and code the effects of the variables, in particular time dependencies, whether taken singly or in combination with other related covariates. Simple graphical techniques can provide an immediate visual indication of goodness-of-fit and, in cases of departure from model assumptions, will point in the direction of a more involved and richer alternative model. These techniques are intuitive, and this intuition is backed up by formal theorems that underlie the process of building richer models from simpler ones. Measures of predictive strength are used in conjunction with these goodness-of-fit techniques, and, again, formal theorems show that these measures can help identify the models closest to the unknown non-proportional hazards mechanism that we may suppose generates the observations. Illustrations from studies in breast cancer show how these tools can help guide the practical problem of efficient model construction for survival data.
Romagnuolo, Joseph; Andrews, Christopher N; Bain, Vincent G; Bonacini, Maurizio; Cotler, Scott J; Ma, Mang; Sherman, Morris
2005-11-01
A recent single-center multivariate analysis of hepatitis C (HCV) patients showed that a combination of simple criteria, including 1) ferritin ≥200 µg/l and 2) spider nevi and/or a low albumin level, predicted liver histology; we aimed to validate this clinical prediction model using an independent multicenter sample. Eighty-one patients with previously untreated active chronic HCV underwent physical examination, laboratory investigation, and liver biopsy. Biopsies were read, in blinded fashion, by a single pathologist, using a modified Hytiroglou (1995) scale. The clinical scoring system was correlated with histology; likelihood ratios (LRs), Fisher's exact p-values, and receiver operating characteristics (ROCs) were calculated. Data recording was complete in 77 and 38 patients regarding fibrotic stage and inflammatory grade, respectively. For fibrosis, 3/3 patients with any three criteria (LR 17, positive predictive value (PPV) 100%), 4/5 patients with any two criteria (LR 5.1), and 15/47 with no criteria (LR 0.6, negative predictive value (NPV) 68%) had stage 2 or greater fibrosis on biopsy (p=0.01). For inflammation, 5/5 patients with both criteria (LR 15, PPV 100%), and 8/19 patients with no criteria (LR 0.5, NPV 58%) had moderate-severe inflammation on liver biopsy (p=0.036). When missing variables were assumed to be normal, the recalculated LRs were almost identical. This independent data set, which also included alanine aminotransferase (ALAT) levels, has validated our published model, which uses simple clinical variables accurately and significantly to predict hepatic fibrosis and inflammation in HCV patients.
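The reported LRs, PPV and NPV follow from standard 2x2-table formulas, which can be sketched as follows (the counts below are hypothetical, not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic-test summaries.
    tp/fp/fn/tn are true/false positives and negatives (illustrative counts)."""
    sens = tp / (tp + fn)                 # sensitivity
    spec = tn / (tn + fp)                 # specificity
    ppv = tp / (tp + fp)                  # positive predictive value
    npv = tn / (tn + fn)                  # negative predictive value
    lr_pos = sens / (1.0 - spec)          # LR of a positive result
    lr_neg = (1.0 - sens) / spec          # LR of a negative result
    return {"sensitivity": sens, "specificity": spec,
            "PPV": ppv, "NPV": npv, "LR+": lr_pos, "LR-": lr_neg}

# Hypothetical counts in the spirit of "4/5 patients with any two criteria":
print(diagnostic_metrics(tp=4, fp=1, fn=11, tn=61))
```

Note that PPV and NPV, unlike the LRs, depend on disease prevalence in the validation sample, which is why LRs travel better between cohorts.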
Rubber particle proteins, HbREF and HbSRPP, show different interactions with model membranes.
Berthelot, Karine; Lecomte, Sophie; Estevez, Yannick; Zhendre, Vanessa; Henry, Sarah; Thévenot, Julie; Dufourc, Erick J; Alves, Isabel D; Peruch, Frédéric
2014-01-01
The biomembrane surrounding rubber particles from hevea latex is well known for its content of numerous allergen proteins. HbREF (Hevb1) and HbSRPP (Hevb3) are major components, bound on rubber particles, and they have been shown to be involved in rubber synthesis or quality (mass regulation), but their exact function is still to be determined. In this study we highlighted the different modes of interaction of both recombinant proteins with various membrane models (lipid monolayers, liposomes or supported bilayers, and multilamellar vesicles) to mimic the latex particle membrane. We combined various biophysical methods (polarization-modulation infrared reflection-absorption spectroscopy (PM-IRRAS)/ellipsometry, attenuated total reflectance Fourier-transform infrared (ATR-FTIR) spectroscopy, solid-state nuclear magnetic resonance (NMR), plasmon waveguide resonance (PWR), and fluorescence spectroscopy) to elucidate their interactions. Small rubber particle protein (SRPP) shows less affinity than rubber elongation factor (REF) for the membranes but displays a kind of "covering" effect on the lipid headgroups without disturbing the membrane integrity. Its structure is conserved in the presence of lipids. In contrast, REF demonstrates higher membrane affinity, with changes in its aggregation properties; the amyloid nature of REF, which we previously reported, is not favored in the presence of lipids. REF binds and inserts into membranes. The membrane integrity is highly perturbed, and we suspect that REF is even able to remove lipids from the membrane, leading to the formation of mixed micelles. These two homologous proteins show affinity for all membrane models tested but differ markedly in their interaction features. This could imply differential roles on the surface of rubber particles.
Antiparasitic mebendazole shows survival benefit in 2 preclinical models of glioblastoma multiforme.
Bai, Ren-Yuan; Staedtke, Verena; Aprhys, Colette M; Gallia, Gary L; Riggins, Gregory J
2011-09-01
Glioblastoma multiforme (GBM) is the most common and aggressive brain cancer, and despite treatment advances, patient prognosis remains poor. During routine animal studies, we serendipitously observed that fenbendazole, a benzimidazole antihelminthic used to treat pinworm infection, inhibited brain tumor engraftment. Subsequent in vitro and in vivo experiments with benzimidazoles identified mebendazole as the more promising drug for GBM therapy. In GBM cell lines, mebendazole displayed cytotoxicity, with half-maximal inhibitory concentrations ranging from 0.1 to 0.3 µM. Mebendazole disrupted microtubule formation in GBM cells, and in vitro activity was correlated with reduced tubulin polymerization. Subsequently, we showed that mebendazole significantly extended mean survival up to 63% in syngeneic and xenograft orthotopic mouse glioma models. Mebendazole has been approved by the US Food and Drug Administration for parasitic infections, has a long track-record of safe human use, and was effective in our animal models with doses documented as safe in humans. Our findings indicate that mebendazole is a possible novel anti-brain tumor therapeutic that could be further tested in clinical trials.
Keeney, Paula M; Dunham, Lisa D; Quigley, Caitlin K; Morton, Stephanie L; Bergquist, Kristen E; Bennett, James P
2009-12-01
Sporadic Parkinson's disease (sPD) is a nervous system-wide disease that presents with a bradykinetic movement disorder and frequently progresses to include depression and cognitive impairment. Cybrid models of sPD are based on expression of sPD platelet mitochondrial DNA (mtDNA) in neural cells and demonstrate some similarities to sPD brains. In sPD and CTL cybrids we characterized aspects of mitochondrial biogenesis, mtDNA genomics, composition of the respirasome, and the relationships between isolated mitochondrial and intact cell respiration. Cybrid mtDNA levels varied and correlated with expression of PGC-1 alpha, a transcriptional co-activator that regulates mitochondrial biogenesis. Levels of mtDNA heteroplasmic mutations were asymmetrically distributed across the mitochondrial genome; numbers of heteroplasmies were more evenly distributed. Neither levels nor numbers of heteroplasmies distinguished sPD from CTL. sPD cybrid mitochondrial ETC subunit protein levels were not altered. Isolated mitochondrial complex I respiration rates showed limited correlation with whole-cell complex I respiration rates in both sPD and CTL cybrids. Intact cell respiration during the normoxic-anoxic transition yielded K(m) values for oxygen that related directly to respiration rates in CTL but not in sPD cell lines. Both sPD and CTL cybrid cells are substantially heterogeneous in mitochondrial genomic and physiologic properties. Our results suggest that mtDNA depletion may occur in sPD neurons and could reflect impairment of mitochondrial biogenesis. Cybrids remain a valuable model for some aspects of sPD, but their heterogeneity militates against a simple designation of sPD phenotype in this cell model.
Model for predicting mountain wave field uncertainties
Damiens, Florentin; Lott, François; Millet, Christophe; Plougonven, Riwal
2017-04-01
Studying the propagation of acoustic waves throughout the troposphere requires knowledge of wind speed and temperature gradients from the ground up to about 10-20 km. Typical planetary boundary layer flows are known to present vertical low-level shears that can interact with mountain waves, thereby triggering small-scale disturbances. Resolving these fluctuations for long-range propagation problems is, however, not feasible because of computer memory/time restrictions, and thus they need to be parameterized. When the disturbances are small enough, these fluctuations can be described by linear equations. Previous works by the co-authors have shown that the critical layer dynamics that occur near the ground produce large horizontal flows and buoyancy disturbances that result in intense downslope winds and gravity wave breaking. While these phenomena manifest almost systematically for high Richardson numbers and when the boundary layer depth is relatively small compared to the mountain height, the process by which static stability affects downslope winds remains unclear. In the present work, new linear mountain gravity wave solutions are tested against numerical predictions obtained with the Weather Research and Forecasting (WRF) model. For Richardson numbers typically larger than unity, the mesoscale model is used to quantify the effect of neglected nonlinear terms on downslope winds and mountain wave patterns. At these regimes, the large downslope winds transport warm air, a so-called "Foehn" effect that can impact sound propagation properties. The sensitivity of small-scale disturbances to the Richardson number is quantified using two-dimensional spectral analysis. It is shown through a pilot study of subgrid-scale fluctuations of boundary layer flows over realistic mountains that the cross-spectrum of the mountain wave field is made up of the same components found in WRF simulations. The impact of each individual component on acoustic wave propagation is discussed in terms of
Smooth-Threshold Multivariate Genetic Prediction with Unbiased Model Selection.
Ueki, Masao; Tamiya, Gen
2016-04-01
We develop a new genetic prediction method, smooth-threshold multivariate genetic prediction, using single nucleotide polymorphism (SNP) data in genome-wide association studies (GWASs). Our method consists of two stages. At the first stage, unlike the usual discontinuous SNP screening used in the gene score method, our method continuously screens SNPs based on the output from standard univariate analysis for marginal association of each SNP. At the second stage, the predictive model is built by a generalized ridge regression simultaneously using the screened SNPs, with SNP weights determined by the strength of marginal association. Continuous SNP screening by the smooth thresholding not only makes prediction stable but also leads to a closed-form expression for the generalized degrees of freedom (GDF). The GDF leads to Stein's unbiased risk estimation (SURE), which enables data-dependent choice of the optimal SNP screening cutoff without using cross-validation. Our method is very rapid because the computationally expensive genome-wide scan is required only once, in contrast to penalized regression methods such as the lasso and elastic net. Simulation studies that mimic real GWAS data with quantitative and binary traits demonstrate that the proposed method outperforms the gene score method and genomic best linear unbiased prediction (GBLUP), and shows comparable or sometimes improved performance relative to the lasso and elastic net, which are known to have good predictive ability but heavy computational cost. Application to whole-genome sequencing (WGS) data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) shows that the proposed method has higher predictive power than the gene score and GBLUP methods.
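A toy numpy sketch of the two-stage idea: continuous (smooth-threshold) screening by marginal association followed by generalized ridge regression on the weighted predictors. The weighting function and simulated genotypes are illustrative choices, not the paper's exact estimator, and the SURE-based cutoff selection is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 50
X = rng.normal(size=(n, p))                 # stand-in for standardized genotypes
beta_true = np.zeros(p)
beta_true[:5] = 1.0                         # five truly associated "SNPs"
y = X @ beta_true + rng.normal(size=n)

# Stage 1: marginal association score of each predictor (z-like statistic).
score = np.abs(X.T @ (y - y.mean())) / np.sqrt(n)

# Smooth thresholding: continuous weights instead of a hard include/exclude
# cut; predictors below the cutoff tau get weight 0, strong ones approach 1.
tau = np.median(score)
w = np.maximum(0.0, 1.0 - (tau / np.maximum(score, 1e-12)) ** 2)

# Stage 2: generalized ridge on weighted predictors, in closed form.
Xw = X * w                                  # down-weights weak predictors
lam = 1.0
beta = np.linalg.solve(Xw.T @ Xw + lam * np.eye(p), Xw.T @ y)
r2 = 1 - np.sum((y - Xw @ beta) ** 2) / np.sum((y - y.mean()) ** 2)
print("in-sample R^2:", round(r2, 3))
```

Because the weights vary smoothly with the marginal scores, small data perturbations do not flip predictors in and out of the model, which is the stability property the abstract emphasizes.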
Lünsmann, Vanessa; Kappelmeyer, Uwe; Taubert, Anja; Nijenhuis, Ivonne; von Bergen, Martin; Heipieper, Hermann J; Müller, Jochen A; Jehmlich, Nico
2016-07-15
Constructed wetlands (CWs) are successfully applied for the treatment of waters contaminated with aromatic compounds. In these systems, plants provide oxygen and root exudates to the rhizosphere and thereby stimulate microbial degradation processes. Root exudation of oxygen and organic compounds depends on photosynthetic activity and thus may show day-night fluctuations. While diurnal changes in CW effluent composition have been observed, information on respective fluctuations of bacterial activity are scarce. We investigated microbial processes in a CW model system treating toluene-contaminated water which showed diurnal oscillations of oxygen concentrations using metaproteomics. Quantitative real-time PCR was applied to assess diurnal expression patterns of genes involved in aerobic and anaerobic toluene degradation. We observed stable aerobic toluene turnover by Burkholderiales during the day and night. Polyhydroxyalkanoate synthesis was upregulated in these bacteria during the day, suggesting that they additionally feed on organic root exudates while reutilizing the stored carbon compounds during the night via the glyoxylate cycle. Although mRNA copies encoding the anaerobic enzyme benzylsuccinate synthase (bssA) were relatively abundant and increased slightly at night, the corresponding protein could not be detected in the CW model system. Our study provides insights into diurnal patterns of microbial processes occurring in the rhizosphere of an aquatic ecosystem. Constructed wetlands are a well-established and cost-efficient option for the bioremediation of contaminated waters. While it is commonly accepted knowledge that the function of CWs is determined by the interplay of plants and microorganisms, the detailed molecular processes are considered a black box. Here, we used a well-characterized CW model system treating toluene-contaminated water to investigate the microbial processes influenced by diurnal plant root exudation. Our results indicated stable
Aerodynamic Noise Prediction Using stochastic Turbulence Modeling
Arash Ahmadzadegan
2008-01-01
Full Text Available Amongst the many approaches to determining the sound propagated from turbulent flows, hybrid methods, in which the turbulent noise source field is computed or modeled separately from the far-field calculation, are frequently used. For basic estimation of sound propagation, less computationally intensive methods can be developed using stochastic models of the turbulent fluctuations (the turbulent noise source field). A simple and easy-to-use stochastic model for generating turbulent velocity fluctuations, called the continuous filter white noise (CFWN) model, was used. This method is based on the classical Langevin equation to model the details of the fluctuating field superimposed on averaged computed quantities. The resulting sound field due to the generated unsteady flow field was evaluated using Lighthill's acoustic analogy. A volume integral method was used for evaluating the acoustic analogy. This formulation presents an advantage, as it confers the possibility to determine separately the contributions of the different integral terms, and also of the integration regions, to the radiated acoustic pressure. Our results were validated by comparing the directivity and the overall sound pressure level (OSPL) magnitudes with the available experimental results. Numerical results showed reasonable agreement with the experiments, both in maximum directivity and magnitude of the OSPL. This method presents a very suitable tool for the noise calculation of different engineering problems in the early stages of the design process, where rough estimates using cheaper methods are needed for different geometries.
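The Langevin idea of superimposing stochastic velocity fluctuations on averaged quantities can be sketched as an Ornstein-Uhlenbeck process integrated with Euler-Maruyama stepping; the time scale and rms amplitude below are illustrative, not values from the paper:

```python
import numpy as np

def ou_fluctuations(n_steps, dt, tau, sigma, seed=0):
    """Langevin (Ornstein-Uhlenbeck) velocity fluctuation:
    du = -(u/tau) dt + noise, with relaxation time tau and target rms sigma
    (both illustrative). The drive amplitude keeps the stationary variance
    approximately sigma**2 for dt << tau."""
    rng = np.random.default_rng(seed)
    u = np.zeros(n_steps)
    drive = sigma * np.sqrt(2.0 * dt / tau)
    for k in range(1, n_steps):
        u[k] = u[k - 1] * (1.0 - dt / tau) + drive * rng.normal()
    return u

u = ou_fluctuations(n_steps=20000, dt=1e-3, tau=0.05, sigma=0.5)
print("sample rms:", round(float(u.std()), 3))   # approaches sigma
```

A fluctuating field like `u`, added to mean-flow quantities, is the kind of unsteady source term that Lighthill's analogy then converts into radiated acoustic pressure.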
Jing Lu
2014-11-01
Full Text Available We propose a weather prediction model in this article based on a neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the "fuzzy rule-based neural network", which simulates sequential relations among fuzzy sets using an artificial neural network; the second part is the "neural fuzzy inference system", which builds on the first part but can learn new fuzzy rules from the previous ones according to the algorithm we propose. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. It is well known that the need for accurate weather prediction is apparent when considering the benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the "accurate" prediction results meaningless, and numerical prediction models are often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we make the predicted precipitation outcomes more accurate and the prediction methods simpler than the complex numerical forecasting models, which occupy large computational resources, are time-consuming, and have a low predictive accuracy rate. Accordingly, we achieve more accurate precipitation predictions than with traditional artificial neural networks, which have low predictive accuracy.
RFI modeling and prediction approach for SATOP applications: RFI prediction models
Nguyen, Tien M.; Tran, Hien T.; Wang, Zhonghai; Coons, Amanda; Nguyen, Charles C.; Lane, Steven A.; Pham, Khanh D.; Chen, Genshe; Wang, Gang
2016-05-01
This paper describes a technical approach for the development of RFI prediction models using carrier synchronization loop when calculating Bit or Carrier SNR degradation due to interferences for (i) detecting narrow-band and wideband RFI signals, and (ii) estimating and predicting the behavior of the RFI signals. The paper presents analytical and simulation models and provides both analytical and simulation results on the performance of USB (Unified S-Band) waveforms in the presence of narrow-band and wideband RFI signals. The models presented in this paper will allow the future USB command systems to detect the RFI presence, estimate the RFI characteristics and predict the RFI behavior in real-time for accurate assessment of the impacts of RFI on the command Bit Error Rate (BER) performance. The command BER degradation model presented in this paper also allows the ground system operator to estimate the optimum transmitted SNR to maintain a required command BER level in the presence of both friendly and un-friendly RFI sources.
A mathematical look at a physical power prediction model
Landberg, L. [Riso National Lab., Roskilde (Denmark)
1997-12-31
This paper takes a mathematical look at a physical model used to predict the power produced from wind farms. The reason is to see whether simple mathematical expressions can replace the original equations, and to give guidelines as to where the simplifications can be made and where they can not. This paper shows that there is a linear dependence between the geostrophic wind and the wind at the surface, but also that great care must be taken in the selection of the models since physical dependencies play a very important role, e.g. through the dependence of the turning of the wind on the wind speed.
Model predictive control for Z-source power converter
Mo, W.; Loh, P.C.; Blaabjerg, Frede
2011-01-01
This paper presents Model Predictive Control (MPC) of impedance-source (commonly known as Z-source) power converter. Output voltage control and current control for Z-source inverter are analyzed and simulated. With MPC's ability of multi- system variables regulation, load current and voltage...... of variable switching frequency as well as robustness of transient response can be obtained at the same time with a formulated Z-source network model. Operating steady state and transient state simulation of MPC are going to be presented, which shows good reference tracking ability of this control method....
A mathematical look at a physical power prediction model
Landberg, L.
1998-01-01
This article takes a mathematical look at a physical model used to predict the power produced from wind farms. The reason is to see whether simple mathematical expressions can replace the original equations and to give guidelines as to where simplifications can be made and where they cannot....... The article shows that there is a linear dependence between the geostrophic wind and the local wind at the surface, but also that great care must be taken in the selection of the simple mathematical models, since physical dependences play a very important role, e.g. through the dependence of the turning...
Introducing Model Predictive Control for Improving Power Plant Portfolio Performance
Edlund, Kristian Skjoldborg; Bendtsen, Jan Dimon; Børresen, Simon
2008-01-01
This paper introduces a model predictive control (MPC) approach for construction of a controller for balancing the power generation against consumption in a power system. The objective of the controller is to coordinate a portfolio consisting of multiple power plant units in the effort to perform...... reference tracking and disturbance rejection in an economically optimal way. The performance function is chosen as a mixture of the `1-norm and a linear weighting to model the economics of the system. Simulations show a significant improvement of the performance of the MPC compared to the current...
Model Predictive Control for the Operation of Building Cooling Systems
Ma, Yudong; Borrelli, Francesco; Hencey, Brandon; Coffey, Brian; Bengea, Sorin; Haves, Philip
2010-06-29
A model-based predictive control (MPC) is designed for optimal thermal energy storage in building cooling systems. We focus on buildings equipped with a water tank used for actively storing cold water produced by a series of chillers. Typically the chillers are operated at night to recharge the storage tank in order to meet the building demands on the following day. In this paper, we build on our previous work, improve the building load model, and present experimental results. The experiments show that MPC can achieve reduction in the central plant electricity cost and improvement of its efficiency.
Predictive models of procedural human supervisory control behavior
Boussemart, Yves
Human supervisory control systems are characterized by the computer-mediated nature of the interactions between one or more operators and a given task. Nuclear power plants, air traffic management and unmanned vehicle operations are examples of such systems. In this context, the role of the operators is typically highly proceduralized due to the time- and mission-critical nature of the tasks. Therefore, the ability to continuously monitor operator behavior so as to detect and predict anomalous situations is a critical safeguard for proper system operation. In particular, such models can help support the decision-making process of a supervisor of a team of operators by providing alerts when likely anomalous behaviors are detected. By exploiting the operator behavioral patterns that are typically reinforced through standard operating procedures, this thesis proposes a methodology that uses statistical learning techniques in order to detect and predict anomalous operator conditions. More specifically, the proposed methodology relies on hidden Markov models (HMMs) and hidden semi-Markov models (HSMMs) to generate predictive models of unmanned vehicle system operators. Through the exploration of the resulting HMMs in two distinct single-operator scenarios, the methodology presented in this thesis is validated and shown to provide models capable of reliably predicting operator behavior. In addition, the use of HSMMs on the same data scenarios provides the temporal component of the predictions missing from the HMMs. The final step of this work is to examine how the proposed methodology scales to more complex scenarios involving teams of operators. Adopting a holistic team modeling approach, both HMMs and HSMMs are learned based on two team-based data sets. The results show that the HSMMs can provide valuable timing information in the single-operator case, whereas HMMs tend to be more robust to increased team complexity. In addition, this thesis discusses the
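The core of an HMM-based monitor is a likelihood score: the forward algorithm evaluates how probable an observed action sequence is under a model of nominal procedure, so a low score flags potentially anomalous operator behavior. A minimal numpy-only sketch, with illustrative model matrices and sequences (not data from the thesis):

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    via the scaled forward algorithm. pi: initial state probabilities,
    A[i, j]: transition probabilities, B[state, symbol]: emission probs."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()                       # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Illustrative 2-state model over 3 coded operator actions.
pi = np.array([0.8, 0.2])
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])

nominal = [0, 0, 1, 0, 0]                      # procedure-like sequence
anomalous = [2, 2, 2, 2, 2]                    # off-procedure sequence
print(forward_loglik(nominal, pi, A, B), forward_loglik(anomalous, pi, A, B))
```

An HSMM adds explicit state-duration distributions on top of this recursion, which is what supplies the timing component the thesis highlights.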
Computational neurorehabilitation: modeling plasticity and learning to predict recovery.
Reinkensmeyer, David J; Burdet, Etienne; Casadio, Maura; Krakauer, John W; Kwakkel, Gert; Lang, Catherine E; Swinnen, Stephan P; Ward, Nick S; Schweighofer, Nicolas
2016-01-01
Despite progress in using computational approaches to inform medicine and neuroscience in the last 30 years, there have been few attempts to model the mechanisms underlying sensorimotor rehabilitation. We argue that a fundamental understanding of neurologic recovery, and as a result accurate predictions at the individual level, will be facilitated by developing computational models of the salient neural processes, including plasticity and learning systems of the brain, and integrating them into a context specific to rehabilitation. Here, we therefore discuss Computational Neurorehabilitation, a newly emerging field aimed at modeling plasticity and motor learning to understand and improve movement recovery of individuals with neurologic impairment. We first explain how the emergence of robotics and wearable sensors for rehabilitation is providing data that make development and testing of such models increasingly feasible. We then review key aspects of plasticity and motor learning that such models will incorporate. We proceed by discussing how computational neurorehabilitation models relate to the current benchmark in rehabilitation modeling - regression-based, prognostic modeling. We then critically discuss the first computational neurorehabilitation models, which have primarily focused on modeling rehabilitation of the upper extremity after stroke, and show how even simple models have produced novel ideas for future investigation. Finally, we conclude with key directions for future research, anticipating that soon we will see the emergence of mechanistic models of motor recovery that are informed by clinical imaging results and driven by the actual movement content of rehabilitation therapy as well as wearable sensor-based records of daily activity.
Prediction models : the right tool for the right problem
Kappen, Teus H.; Peelen, Linda M.
2016-01-01
PURPOSE OF REVIEW: Perioperative prediction models can help to improve personalized patient care by providing individual risk predictions to both patients and providers. However, the scientific literature on prediction model development and validation can be quite technical and challenging to understand.
Learning Predictive Movement Models From Fabric-Mounted Wearable Sensors.
Michael, Brendan; Howard, Matthew
2016-12-01
The measurement and analysis of human movement for applications in clinical diagnostics or rehabilitation is often performed in a laboratory setting using static motion capture devices. A growing interest in analyzing movement in everyday environments (such as the home) has prompted the development of "wearable sensors", with the most current wearable sensors being those embedded into clothing. A major issue however with the use of these fabric-embedded sensors is the undesired effect of fabric motion artefacts corrupting movement signals. In this paper, a nonparametric method is presented for learning body movements, viewing the undesired motion as stochastic perturbations to the sensed motion, and using orthogonal regression techniques to form predictive models of the wearer's motion that eliminate these errors in the learning process. Experiments in this paper show that standard nonparametric learning techniques underperform in this fabric motion context and that improved prediction accuracy can be made by using orthogonal regression techniques. Modelling this motion artefact problem as a stochastic learning problem shows an average 77% decrease in prediction error in a body pose task using fabric-embedded sensors, compared to a kinematic model.
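A minimal sketch of the orthogonal-regression idea referenced above: total least squares via SVD minimizes perpendicular distances, treating noise in both variables symmetrically, which is the property exploited when fabric motion corrupts both the sensed signal and its target. The data below are synthetic:

```python
import numpy as np

def tls_fit(x, y):
    """Total least squares (orthogonal regression) line fit via SVD.
    The right singular vector with the smallest singular value is the
    normal of the best-fit line through the centroid."""
    xm, ym = x.mean(), y.mean()
    M = np.column_stack([x - xm, y - ym])
    _, _, vt = np.linalg.svd(M, full_matrices=False)
    a, b = vt[-1]                     # line: a*(x-xm) + b*(y-ym) = 0
    slope = -a / b
    return slope, ym - slope * xm

rng = np.random.default_rng(2)
t = rng.uniform(0, 10, 300)                       # latent true motion
x = t + rng.normal(0, 0.3, t.size)                # noisy sensed input
y = 2.0 * t + 1.0 + rng.normal(0, 0.3, t.size)    # noisy sensed output
slope, intercept = tls_fit(x, y)
print(slope, intercept)                           # near the true 2.0 and 1.0
```

Ordinary least squares would attenuate the slope when the input is noisy; the orthogonal fit does not, which is the source of the improved prediction accuracy the paper reports for fabric-embedded sensing.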
Reliable Estimation of Prediction Uncertainty for Physicochemical Property Models.
Proppe, Jonny; Reiher, Markus
2017-07-11
One of the major challenges in computational science is to determine the uncertainty of a virtual measurement, that is, the prediction of an observable based on calculations. As highly accurate first-principles calculations are in general unfeasible for most physical systems, one usually resorts to parametric property models of observables, which require calibration by incorporating reference data. The resulting predictions and their uncertainties are sensitive to systematic errors such as inconsistent reference data, parametric model assumptions, or inadequate computational methods. Here, we discuss the calibration of property models in the light of bootstrapping, a sampling method that can be employed for identifying systematic errors and for reliable estimation of the prediction uncertainty. We apply bootstrapping to assess a linear property model linking the (57)Fe Mössbauer isomer shift to the contact electron density at the iron nucleus for a diverse set of 44 molecular iron compounds. The contact electron density is calculated with 12 density functionals across Jacob's ladder (PWLDA, BP86, BLYP, PW91, PBE, M06-L, TPSS, B3LYP, B3PW91, PBE0, M06, TPSSh). We provide systematic-error diagnostics and reliable, locally resolved uncertainties for isomer-shift predictions. Pure and hybrid density functionals yield average prediction uncertainties of 0.06-0.08 mm s(-1) and 0.04-0.05 mm s(-1), respectively, the latter being close to the average experimental uncertainty of 0.02 mm s(-1). Furthermore, we show that both the model parameters and the prediction uncertainty depend significantly on the composition and number of reference data points. Accordingly, we suggest that rankings of density functionals based on performance measures (e.g., the squared coefficient of correlation, r(2), or the root-mean-square error, RMSE) should not be inferred from a single data set. This study presents the first statistically rigorous calibration analysis for theoretical Mössbauer spectroscopy.
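The nonparametric bootstrap for prediction uncertainty can be sketched as follows; the "calibration" data are synthetic stand-ins for the 44 reference compounds, and a simple straight-line property model is assumed:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical calibration set: predictor x (e.g. a computed quantity)
# vs observable y, linearly related with noise.
x = rng.uniform(-2, 2, 44)
y = 0.3 - 0.25 * x + rng.normal(0, 0.05, x.size)

# Nonparametric bootstrap: resample the reference data with replacement,
# refit the linear model, and collect the spread of predictions at a
# query point as its prediction uncertainty.
x_query = 0.5
preds = []
for _ in range(2000):
    idx = rng.integers(0, x.size, x.size)         # resample indices
    slope, intercept = np.polyfit(x[idx], y[idx], 1)
    preds.append(slope * x_query + intercept)
preds = np.array(preds)
print(f"prediction {preds.mean():.3f} +/- {preds.std():.3f}")
```

Because each resample omits or repeats different reference points, the spread of `preds` also reveals how sensitive the calibrated model is to the composition of the reference set, which is the diagnostic role the abstract assigns to bootstrapping.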
Study on Lumped Kinetic Model for FDFCC II. Validation and Prediction of Model
Wu Feiyue; Weng Huixin; Luo Shixian
2008-01-01
On the basis of the 9-lump kinetic model for gasoline catalytic upgrading and the 12-lump kinetic model for heavy oil FCC, this paper develops a combined kinetic model for a typical FDFCC process after analyzing the coupled relationship and combination of these two models. The model is verified against commercial data; the results show that it can predict product yields and quality well, with relative errors between the predicted main-product values and commercial data of less than five percent. Furthermore, the combined model is used to predict and optimize the operating conditions for the gasoline riser and heavy oil riser in FDFCC. This paper thus offers guidance for FDFCC processing and is instructive for model research and development of such multi-reactor and combined processes.
Recent MEG Results and Predictive SO(10) Models
Fukuyama, Takeshi
2011-01-01
Recent MEG results of a search for the lepton flavor violating (LFV) muon decay, $\mu \to e \gamma$, show 3 events as the best value for the number of signals in the maximum likelihood fit. Although this result is still far from evidence or discovery from a statistical point of view, it might be a sign of new physics beyond the Standard Model. As is well known, supersymmetric (SUSY) models can generate the $\mu \to e \gamma$ decay rate within the search reach of the MEG experiment. A certain class of SUSY grand unified theory (GUT) models, such as the minimal SUSY SO(10) model (we call this class of models "predictive SO(10) models"), can unambiguously determine the fermion Yukawa coupling matrices, in particular the neutrino Dirac Yukawa matrix. Based on universal boundary conditions for the soft SUSY breaking parameters at the GUT scale, we calculate the rate of the $\mu \to e \gamma$ process by using the completely determined Dirac Yukawa matrix in two examples of predictive SO(10) models. If we ...
Prediction model based on decision tree analysis for laccase mediators.
Medina, Fabiola; Aguila, Sergio; Baratto, Maria Camilla; Martorana, Andrea; Basosi, Riccardo; Alderete, Joel B; Vazquez-Duhalt, Rafael
2013-01-10
A Structure Activity Relationship (SAR) study for laccase mediator systems was performed in order to correctly classify different natural phenolic mediators. Decision tree (DT) classification models with a set of five quantum-chemical calculated molecular descriptors were used. These descriptors included redox potential (ɛ°), ionization energy (E(i)), pK(a), enthalpy of formation of radical (Δ(f)H), and OH bond dissociation energy (D(O-H)). The rationale for selecting these descriptors is derived from the laccase-mediator mechanism. To validate the DT predictions, the kinetic constants of different compounds as laccase substrates, their ability for pesticide transformation as laccase-mediators, and radical stability were experimentally determined using Coriolopsis gallica laccase and the pesticide dichlorophen. The prediction capability of the DT model based on three proposed descriptors showed a complete agreement with the obtained experimental results.
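A decision-tree classifier over such descriptors reduces to nested threshold tests. The sketch below is illustrative only: the thresholds, the descriptor subset, and the two-class labels are invented for demonstration, not the fitted values from the study:

```python
def classify_mediator(descriptors):
    """Toy decision tree over quantum-chemical descriptors.

    descriptors: dict with keys 'redox' (V), 'bde_oh' (kcal/mol), 'pka'.
    All thresholds below are hypothetical, not the paper's fitted splits.
    """
    if descriptors["redox"] > 1.1:      # too hard for the laccase to oxidize
        return "non-mediator"
    if descriptors["bde_oh"] < 85.0:    # weak O-H bond -> reactive radical
        return "mediator"
    return "mediator" if descriptors["pka"] < 10.0 else "non-mediator"

print(classify_mediator({"redox": 0.8, "bde_oh": 82.0, "pka": 9.5}))  # prints: mediator
```

In practice such splits would be learned from labeled training compounds rather than set by hand, but the evaluation path at prediction time is exactly this cascade of comparisons.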
Interpolation techniques in robust constrained model predictive control
Kheawhom, Soorathep; Bumroongsri, Pornchai
2017-05-01
This work investigates interpolation techniques that can be employed in off-line robust constrained model predictive control for a discrete time-varying system. A sequence of feedback gains is determined by solving off-line a series of optimal control problems. A sequence of nested robustly positive invariant sets, either ellipsoidal or polyhedral, is then constructed. At each sampling time, the smallest invariant set containing the current state is determined. If the current invariant set is the innermost set, the pre-computed gain associated with it is applied. Otherwise, the feedback gain is determined by linear interpolation of the pre-computed gains. The proposed algorithms are illustrated with case studies of a two-tank system. The simulation results show that the proposed interpolation techniques significantly improve the control performance of off-line robust model predictive control without significantly sacrificing on-line computational performance.
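The on-line part, set membership plus gain interpolation, can be sketched as follows. The ellipsoid matrices, gains, and in particular the interpolation weight rule are illustrative assumptions, not the paper's exact off-line design:

```python
def quad(x, P):
    # Quadratic form x^T P x for a 2-state x and symmetric 2x2 P.
    return P[0][0] * x[0] * x[0] + 2 * P[0][1] * x[0] * x[1] + P[1][1] * x[1] * x[1]

def select_gain(x, sets):
    """sets: [(P, K), ...] from innermost to outermost, all computed off-line.
    Inside the innermost ellipsoid {x: x^T P x <= 1} the fixed gain applies;
    otherwise the two adjacent gains are blended (illustrative weight rule)."""
    for i, (P, K) in enumerate(sets):
        if quad(x, P) <= 1.0:
            if i == 0:
                return K
            P_in, K_in = sets[i - 1]
            q_in, q_out = quad(x, P_in), quad(x, P)
            w = 1.0 - (q_in - 1.0) / (q_in - q_out)  # w -> 1 at the inner boundary
            return [w * a + (1.0 - w) * b for a, b in zip(K_in, K)]
    raise ValueError("state outside the outermost invariant set")

# Two nested ellipsoids (radii 0.5 and 1.0) with made-up gains:
sets = [([[4.0, 0.0], [0.0, 4.0]], [-1.0, -2.0]),
        ([[1.0, 0.0], [0.0, 1.0]], [-0.5, -1.0])]
g_inner = select_gain([0.3, 0.0], sets)  # innermost set: fixed gain
g_mid = select_gain([0.8, 0.0], sets)    # between sets: interpolated gain
```

The on-line cost is just a handful of quadratic-form evaluations, which is why this scheme preserves the computational advantage of off-line robust MPC.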
Predictability of the Indian Ocean Dipole in the coupled models
Liu, Huafeng; Tang, Youmin; Chen, Dake; Lian, Tao
2017-03-01
In this study, the Indian Ocean Dipole (IOD) predictability, measured by the Indian Dipole Mode Index (DMI), is comprehensively examined at the seasonal time scale, including its actual prediction skill and potential predictability, using the ENSEMBLES multiple model ensembles and the recently developed information-based theoretical framework of predictability. It was found that all model predictions have useful skill, which is normally defined by an anomaly correlation coefficient larger than 0.5, only at around 2-3 month leads. This is mainly because there are more false alarms in predictions as lead time increases. The DMI predictability has significant seasonal variation, and predictions whose target seasons are boreal summer (JJA) and autumn (SON) are more reliable than those for other seasons. All of the models fail to predict the IOD onset before May and suffer from the winter (DJF) predictability barrier. The potential predictability study indicates that, with model development and improved initialization, the prediction of IOD onset is likely to be improved but the winter barrier cannot be overcome. The IOD predictability also has decadal variation, with a high skill during the 1960s and the early 1990s, and a low skill during the early 1970s and early 1980s, which is very consistent with the potential predictability. The main factors controlling the IOD predictability, including its seasonal and decadal variations, are also analyzed in this study.
The Prediction Model of Dam Uplift Pressure Based on Random Forest
Li, Xing; Su, Huaizhi; Hu, Jiang
2017-09-01
The prediction of dam uplift pressure is of great significance in dam safety monitoring. Based on comprehensive consideration of various factors, 18 parameters are selected as the main factors affecting uplift pressure, and the actual monitoring data of uplift pressure are used as the evaluation factors for the prediction model. Dam uplift pressure prediction models are built based on the random forest (RF) algorithm and the support vector machine (SVM), and the prediction performance of the two models is compared and analyzed. At the same time, based on the established random forest model, the significance of each factor is analyzed, and the importance of each factor is calculated by the importance function. Results showed that: (1) the RF prediction model can quickly and accurately predict the uplift pressure value according to the influencing factors, with an average prediction accuracy above 96%; compared with the SVM model, the random forest model has better robustness, better prediction precision, and faster convergence speed, and it is more robust to missing and unbalanced data. (2) Water level has the largest effect on uplift pressure, while rainfall has the smallest influence compared with the other factors.
Leptogenesis in minimal predictive seesaw models
Björkeroth, Fredrik [School of Physics and Astronomy, University of Southampton,Southampton, SO17 1BJ (United Kingdom); Anda, Francisco J. de [Departamento de Física, CUCEI, Universidad de Guadalajara,Guadalajara (Mexico); Varzielas, Ivo de Medeiros; King, Stephen F. [School of Physics and Astronomy, University of Southampton,Southampton, SO17 1BJ (United Kingdom)
2015-10-15
We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses, with Yukawa couplings to (ν_e, ν_μ, ν_τ) proportional to (0,1,1) and (1,n,n−2), respectively, where n is a positive integer. The neutrino Yukawa matrix is therefore characterised by two proportionality constants, with their relative phase providing a leptogenesis-PMNS link, enabling the lightest right-handed neutrino mass to be determined from neutrino data and the observed BAU. We discuss an SU(5) SUSY GUT example, where A_4 vacuum alignment provides the required Yukawa structures with n=3, while a ℤ_9 symmetry fixes the relative phase to be a ninth root of unity.
QSPR Models for Octane Number Prediction
Jabir H. Al-Fahemi
2014-01-01
Quantitative structure-property relationship (QSPR) modelling is performed as a means to predict the octane number of hydrocarbons by correlating the property with parameters calculated from molecular structure; such parameters are molecular mass (M), hydration energy (EH), boiling point (BP), octanol/water distribution coefficient (logP), molar refractivity (MR), critical pressure (CP), critical volume (CV), and critical temperature (CT). Principal component analysis (PCA) and the multiple linear regression (MLR) technique were performed to examine the relationship between these parameters and the octane number of hydrocarbons; the PCA results explain the interrelationships between the octane number and the different variables. Correlation coefficients were calculated using MS Excel. The data set was split into a training set of 40 hydrocarbons and a validation set of 25 hydrocarbons. The linear relationship between the selected descriptors and the octane number has a coefficient of determination R² = 0.932, statistical significance F = 53.21, and standard error s = 7.7. The obtained QSPR model was applied to the validation set, giving R²_CV = 0.942 and s = 6.328.
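In the single-descriptor case, the MLR step reduces to ordinary least squares with an R² check. A minimal sketch with hypothetical (boiling point, octane number) pairs, not the paper's 40-compound training set:

```python
def linreg_r2(xs, ys):
    # Ordinary least squares y = a + b*x with coefficient of determination.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1.0 - ss_res / ss_tot

# Hypothetical (boiling point in deg C, octane number) pairs:
bp = [36.0, 68.0, 98.0, 125.0, 150.0]
on = [93.0, 75.0, 53.0, 33.0, 15.0]
a, b, r2 = linreg_r2(bp, on)  # octane number falling with boiling point
```

The full QSPR model extends this to eight descriptors, which requires solving the multivariate normal equations, but the R² diagnostic is computed the same way.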
Committee neural network model for rock permeability prediction
Bagheripour, Parisa
2014-05-01
Quantitative formulation between conventional well log data and rock permeability, undoubtedly the most critical parameter of a hydrocarbon reservoir, could be a potent tool for solving problems associated with almost all tasks involved in petroleum engineering. The present study proposes a novel approach in the quest for a high-accuracy method of permeability prediction. At the first stage, overlap among conventional well log data (inputs) was eliminated by means of principal component analysis (PCA). Subsequently, rock permeability was predicted from the extracted PCs using a multi-layer perceptron (MLP), radial basis function (RBF) network, and generalized regression neural network (GRNN). Eventually, a committee neural network (CNN) was constructed by virtue of a genetic algorithm (GA) to enhance the precision of the ultimate permeability prediction. The values of rock permeability derived from the MLP, RBF, and GRNN models were used as inputs of the CNN. The proposed CNN combines the results of the different ANNs to reap the advantages of all models and consequently produce more accurate estimations. The GA, embedded in the structure of the CNN, assigns a weight factor to each ANN which shows the relative involvement of each ANN in the overall prediction of rock permeability from the PCs of conventional well logs. The proposed methodology was applied to the Kangan and Dalan Formations, the major carbonate reservoir rocks of the South Pars Gas Field, Iran. A group of 350 data points was used to establish the CNN model, and a group of 245 data points was employed to assess its reliability. Results showed that the CNN method performed better than the individual intelligent systems performing alone.
Modeling and predicting historical volatility in exchange rate markets
Lahmiri, Salim
2017-04-01
Volatility modeling and forecasting of currency exchange rates is an important task in several business risk management activities, including treasury risk management, derivatives pricing, and portfolio risk evaluation. The purpose of this study is to present a simple and effective approach for predicting the historical volatility of currency exchange rates. The approach is based on a limited set of technical indicators as inputs to artificial neural networks (ANN). To show the effectiveness of the proposed approach, it was applied to forecast US/Canada and US/Euro exchange rate volatilities. The forecasting results show that our simple approach outperformed the conventional GARCH and EGARCH with different distribution assumptions, and also the hybrid GARCH and EGARCH with ANN, in terms of mean absolute error, mean squared error, and Theil's inequality coefficient. Because of its simplicity and effectiveness, the approach is promising for US currency volatility prediction tasks.
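Historical volatility itself, the target such models are trained to predict, is conventionally computed as the annualized standard deviation of log returns. A minimal sketch with made-up exchange-rate quotes:

```python
import math

def annualized_volatility(prices, periods_per_year=252):
    """Historical volatility: annualized sample stdev of log returns."""
    rets = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]
    m = sum(rets) / len(rets)
    var = sum((r - m) ** 2 for r in rets) / (len(rets) - 1)
    return math.sqrt(var * periods_per_year)

# Made-up daily US/Canada quotes (not data from the study):
vol = annualized_volatility([1.30, 1.31, 1.29, 1.32, 1.30])
```

Rolling windows of this quantity form the volatility series that GARCH-type models and the ANN approach above then try to forecast.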
Hybrid Model for Early Onset Prediction of Driver Fatigue with Observable Cues
Mingheng Zhang
2014-01-01
This paper presents a hybrid model for early onset prediction of driver fatigue, which is a major cause of severe traffic accidents. The proposed method divides the prediction problem into three stages: an SVM-based model for predicting the early onset driver fatigue state, a GA-based model for optimizing the parameters of the SVM, and a PCA-based model for reducing the dimensionality of the complex feature datasets. The model and algorithm are illustrated with driving experiment data, and comparison results show that the hybrid method generally provides better performance for driver fatigue state prediction.
A grey NGM(1,1, k) self-memory coupling prediction model for energy consumption prediction.
Guo, Xiaojun; Liu, Sifeng; Wu, Lifeng; Tang, Lingling
2014-01-01
Energy consumption prediction is an important issue for governments, energy sector investors, and other related corporations. Although there are several prediction techniques, selection of the most appropriate technique is of vital importance. As for the approximate nonhomogeneous exponential data sequence often emerging in the energy system, a novel grey NGM(1,1, k) self-memory coupling prediction model is put forward in order to promote the predictive performance. It achieves organic integration of the self-memory principle of dynamic system and grey NGM(1,1, k) model. The traditional grey model's weakness as being sensitive to initial value can be overcome by the self-memory principle. In this study, total energy, coal, and electricity consumption of China is adopted for demonstration by using the proposed coupling prediction technique. The results show the superiority of NGM(1,1, k) self-memory coupling prediction model when compared with the results from the literature. Its excellent prediction performance lies in that the proposed coupling model can take full advantage of the systematic multitime historical data and catch the stochastic fluctuation tendency. This work also makes a significant contribution to the enrichment of grey prediction theory and the extension of its application span.
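The classical GM(1,1) model, which the NGM(1,1,k) self-memory variant extends, can be sketched compactly: accumulate the series, fit the grey differential equation by least squares, and difference the fitted accumulated series to forecast. The data below are a hypothetical 5%-growth sequence, not the Chinese energy figures:

```python
import math

def gm11_forecast(x0, steps=1):
    """Basic GM(1,1) grey forecast (the classical base of the NGM variants).
    x0: raw positive data sequence; returns the next `steps` forecasts."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]              # accumulated series
    z = [0.5 * (x1[i] + x1[i - 1]) for i in range(1, n)]  # background values
    y = x0[1:]
    m = len(z)
    sz, sy = sum(z), sum(y)
    szz = sum(v * v for v in z)
    szy = sum(v * w for v, w in zip(z, y))
    det = m * szz - sz * sz
    a = (sz * sy - m * szy) / det        # development coefficient
    b = (szz * sy - sz * szy) / det      # grey input

    def x1_hat(k):  # fitted accumulated series, k is a 0-based index
        return (x0[0] - b / a) * math.exp(-a * k) + b / a

    return [x1_hat(n + s) - x1_hat(n + s - 1) for s in range(steps)]

# Hypothetical consumption series growing at about 5% per year:
forecast = gm11_forecast([100.0, 105.0, 110.25, 115.7625], steps=1)
```

The self-memory coupling in the paper replaces this single-initial-value solution with one driven by multiple historical points, which is what removes the sensitivity to the initial value.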
Vortexlet models of flapping flexible wings show tuning for force production and control
Mountcastle, A M [Department of Organismic and Evolutionary Biology, Harvard University, Concord Field Station, Bedford, MA 01730 (United States); Daniel, T L, E-mail: mtcastle@u.washington.ed [Department of Biology, University of Washington, Seattle, WA 98195 (United States)
2010-12-15
Insect wings are compliant structures that experience deformations during flight. Such deformations have recently been shown to substantially affect induced flows, with appreciable consequences to flight forces. However, there are open questions related to the aerodynamic mechanisms underlying the performance benefits of wing deformation, as well as the extent to which such deformations are determined by the boundary conditions governing wing actuation together with mechanical properties of the wing itself. Here we explore aerodynamic performance parameters of compliant wings under periodic oscillations, subject to changes in phase between wing elevation and pitch, and magnitude and spatial pattern of wing flexural stiffness. We use a combination of computational structural mechanics models and a 2D computational fluid dynamics approach to ask how aerodynamic force production and control potential are affected by pitch/elevation phase and variations in wing flexural stiffness. Our results show that lift and thrust forces are highly sensitive to flexural stiffness distributions, with performance optima that lie in different phase regions. These results suggest a control strategy for both flying animals and engineering applications of micro-air vehicles.
Xin Chen
2013-07-01
Background: Melanoma is considered one of the most aggressive and deadliest cancers, and current targeted therapies of melanoma often suffer limited efficacy or drug resistance. Discovery of novel multikinase inhibitors as anti-melanoma drug candidates is still needed. Methods: In this investigation, we assessed the in vitro and in vivo anti-melanoma activities of SC-535, a novel small-molecule multikinase inhibitor recently discovered by us. We analyzed the inhibitory effects of SC-535 on various melanoma cell lines and human umbilical vascular endothelial cells (HUVEC) in vitro. Tumor xenografts in athymic mice were used to examine the in vivo activity of SC-535. Results: SC-535 efficiently inhibited vascular endothelial growth factor receptor (VEGFR) 1/2/3, B-RAF, and C-RAF kinases. It showed significant antiangiogenic potency both in vitro and in vivo and considerable anti-proliferative ability against several melanoma cell lines. Oral administration of SC-535 resulted in dose-dependent suppression of tumor growth in WM2664 and C32 xenograft mouse models. Studies of the mechanisms of action indicated that SC-535 suppressed tumor angiogenesis and induced G2/M phase cell cycle arrest in human melanoma cells. SC-535 possesses favorable pharmacokinetic properties. Conclusion: All of these results support SC-535 as a potential candidate for clinical studies in patients with melanoma.
Rastafa I Geddes
We recently showed that progesterone treatment can reduce lesion size and behavioral deficits after moderate-to-severe bilateral injury to the medial prefrontal cortex in immature male rats. Whether there are important sex differences in response to injury and progesterone treatment in very young subjects has not been given sufficient attention. Here we investigated progesterone's effects in the same model of brain injury but with pre-pubescent females. Twenty-eight-day-old female Sprague-Dawley rats received sham (n = 14) or controlled cortical impact (CCI; n = 21) injury, were given progesterone (8 mg/kg body weight) or vehicle injections on post-injury days (PID) 1-7, and underwent behavioral testing from PID 9-27. Brains were evaluated for lesion size at PID 28. Lesion size in vehicle-treated female rats with CCI injury was smaller than that previously reported for similarly treated age-matched male rats. Treatment with progesterone reduced the effect of CCI on the extent of damage and on behavioral deficits. Pre-pubescent female rats with midline CCI injury to the frontal cortex thus have reduced morphological and functional deficits following progesterone treatment. While gender differences in susceptibility to this injury were observed, progesterone treatment produced beneficial effects in young rats of both sexes following CCI.
2016-01-01
Visitors look at plane models of the Commercial Aircraft Corp. of China, developer of the country's first homegrown large passenger jet, the C919, during the Singapore Airshow on February 16. The biennial event is the largest airshow in Asia and one of the most important aviation and defense shows worldwide. A number of Chinese companies took part in the event, during which Okay Airways, the first privately owned airline in China, signed a deal to acquire 12 Boeing 737 jets.
Stochastic Models Predict User Behavior in Social Media
Hogg, Tad; Smith, Laura M
2013-01-01
User response to contributed content in online social media depends on many factors. These include how the site lays out new content, how frequently the user visits the site, how many friends the user follows, how active these friends are, as well as how interesting or useful the content is to the user. We present a stochastic modeling framework that relates a user's behavior to details of the site's user interface and user activity and describe a procedure for estimating model parameters from available data. We apply the model to study discussions of controversial topics on Twitter, specifically, to predict how followers of an advocate for a topic respond to the advocate's posts. We show that a model of user behavior that explicitly accounts for a user transitioning through a series of states before responding to an advocate's post better predicts response than models that fail to take these states into account. We demonstrate other benefits of stochastic models, such as their ability to identify users who a...
Regression Models and Fuzzy Logic Prediction of TBM Penetration Rate
Minh, Vu Trieu; Katushin, Dmitri; Antonov, Maksim; Veinthal, Renno
2017-03-01
This paper presents statistical analyses of rock engineering properties and the measured penetration rate of a tunnel boring machine (TBM) based on data from an actual project. The aim of this study is to analyze the influence of rock engineering properties, including uniaxial compressive strength (UCS), Brazilian tensile strength (BTS), rock brittleness index (BI), the distance between planes of weakness (DPW), and the alpha angle (Alpha) between the tunnel axis and the planes of weakness, on the TBM rate of penetration (ROP). Four statistical regression models (two linear and two nonlinear) are built to predict the ROP of the TBM. Finally, a fuzzy logic model is developed as an alternative method and compared with the four statistical regression models. Results show that the fuzzy logic model provides better estimations and can be applied to predict TBM performance. The R-squared value (R²) of the fuzzy logic model attains the highest value, 0.714, against 0.667 for the runner-up, the multiple-variable nonlinear regression model.
Structure-Based Predictive model for Coal Char Combustion.
Hurt, R.; Colo, J [Brown Univ., Providence, RI (United States). Div. of Engineering; Essenhigh, R.; Hadad, C [Ohio State Univ., Columbus, OH (United States). Dept. of Chemistry; Stanley, E. [Boston Univ., MA (United States). Dept. of Physics
1997-09-24
During the third quarter of this project, progress was made on both major technical tasks. Progress was made in the chemistry department at OSU on the calculation of thermodynamic properties for a number of model organic compounds. Modelling work was carried out at Brown to adapt a thermodynamic model of carbonaceous mesophase formation, originally applied to pitch carbonization, to the prediction of coke texture in coal combustion. This latter work makes use of the FG-DVC model of coal pyrolysis developed by Advanced Fuel Research to specify the pool of aromatic clusters that participate in the order/disorder transition. This modelling approach shows promise for the mechanistic prediction of the rank dependence of char structure and will therefore be pursued further. Crystalline ordering phenomena were also observed in a model char prepared from phenol-formaldehyde carbonized at 900 °C and 1300 °C using high-resolution TEM fringe imaging. Dramatic changes occur in the structure between 900 and 1300 °C, making this char a suitable candidate for upcoming in situ work on the hot-stage TEM. Work also proceeded on molecular dynamics simulations at Boston University and on equipment modification and testing for the combustion experiments with widely varying flame types at Ohio State.
Predictability in models of the atmospheric circulation.
Houtekamer, P.L.
1992-01-01
It will be clear from the above discussions that skill forecasts are still in their infancy. Operational skill predictions do not exist. One is still struggling to prove that skill predictions, at any range, have any quality at all. It is not clear what the statistics of the analysis error are. The
Infiltration under snow cover: Modeling approaches and predictive uncertainty
Meeks, Jessica; Moeck, Christian; Brunner, Philip; Hunkeler, Daniel
2017-03-01
Groundwater recharge from snowmelt represents a temporal redistribution of precipitation. This is extremely important because the rate and timing of snowpack drainage has substantial consequences for aquifer recharge patterns, which in turn affect groundwater availability throughout the rest of the year. The modeling methods developed to estimate drainage from a snowpack, which typically rely on temporally dense point measurements or temporally limited, spatially dispersed calibration data, range in complexity from the simple degree-day method to more complex and physically based energy balance approaches. While this gamut of snowmelt models is routinely used to aid in water resource management, a comparison of the snowmelt models' predictive uncertainties had not previously been done. Therefore, we established a snowmelt model calibration dataset that is both temporally dense and represents the integrated snowmelt infiltration signal for the Vers Chez le Brandt research catchment, which functions as a rather unique natural lysimeter. We then evaluated the uncertainty associated with the degree-day, a modified degree-day and energy balance snowmelt model predictions using the null-space Monte Carlo approach. All three melt models underestimate total snowpack drainage, underestimate the rate of early and midwinter drainage and overestimate spring snowmelt rates. The actual rate of snowpack water loss is more constant over the course of the entire winter season than the snowmelt models would imply, indicating that mid-winter melt can contribute as significantly as springtime snowmelt to groundwater recharge in low alpine settings. Further, actual groundwater recharge could be between 2 and 31% greater than snowmelt models suggest, over the total winter season. This study shows that snowmelt model predictions can have considerable uncertainty, which may be reduced by the inclusion of more data that allows for the use of more complex approaches such as the energy balance
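The simplest of the compared approaches, the degree-day method, estimates melt directly from air temperature. A minimal sketch with an assumed degree-day factor (the factor and temperatures are illustrative, not calibrated values from the catchment):

```python
def degree_day_melt(temps_c, ddf=3.0, t_base=0.0):
    """Daily snowmelt (mm water equivalent) via the degree-day method.
    ddf is an assumed degree-day factor in mm / (degC * day)."""
    return [ddf * max(t - t_base, 0.0) for t in temps_c]

daily = degree_day_melt([-5.0, -1.0, 0.5, 3.0, 6.0])  # one value per day
total = sum(daily)  # drainage estimate for this 5-day window
```

Energy balance approaches replace the single temperature index with explicit radiation, turbulent-flux, and ground-heat terms, which is where the extra calibration data pays off.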
Showing a model's eye movements in examples does not improve learning of problem-solving tasks
van Marlen, Tim; van Wermeskerken, Margot; Jarodzka, Halszka; van Gog, Tamara
2016-01-01
Eye movement modeling examples (EMME) are demonstrations of a computer-based task by a human model (e.g., a teacher), with the model's eye movements superimposed on the task to guide learners' attention. EMME have been shown to enhance learning of perceptual classification tasks; however, it is an
What Can the Bohr-Sommerfeld Model Show Students of Chemistry in the 21st Century?
Niaz, Mansoor; Cardellini, Liberato
2011-01-01
Bohr's model of the atom is considered to be important by general chemistry textbooks. A shortcoming of this model was that it could not explain the spectra of atoms containing more than one electron. To increase the explanatory power of the model, Sommerfeld hypothesized the existence of elliptical orbits. This study aims to elaborate a framework…
Predicting the Probability of Lightning Occurrence with Generalized Additive Models
Fabsic, Peter; Mayr, Georg; Simon, Thorsten; Zeileis, Achim
2017-04-01
This study investigates the predictability of lightning in complex terrain. The main objective is to estimate the probability of lightning occurrence in the Alpine region during summertime afternoons (12-18 UTC) at a spatial resolution of 64 × 64 km2. Lightning observations are obtained from the ALDIS lightning detection network. The probability of lightning occurrence is estimated using generalized additive models (GAM). GAMs provide a flexible modelling framework to estimate the relationship between covariates and the observations. The covariates, besides spatial and temporal effects, include numerous meteorological fields from the ECMWF ensemble system. The optimal model is chosen based on a forward selection procedure with out-of-sample mean squared error as a performance criterion. Our investigation shows that convective precipitation and mid-layer stability are the most influential meteorological predictors. Both exhibit intuitive, non-linear trends: higher values of convective precipitation indicate higher probability of lightning, and large values of the mid-layer stability measure imply low lightning potential. The performance of the model was evaluated against a climatology model containing both spatial and temporal effects. Taking the climatology model as a reference forecast, our model attains a Brier Skill Score of approximately 46%. The model's performance can be further enhanced by incorporating the information about lightning activity from the previous time step, which yields a Brier Skill Score of 48%. These scores show that the method is able to extract valuable information from the ensemble to produce reliable spatial forecasts of the lightning potential in the Alps.
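The Brier Skill Score used for evaluation compares the model's Brier score against the climatology reference. A minimal sketch with made-up forecast probabilities, not the ALDIS verification data:

```python
def brier_score(probs, outcomes):
    # Mean squared difference between forecast probability and the 0/1 outcome.
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def brier_skill_score(probs, ref_probs, outcomes):
    # BSS = 1 - BS / BS_ref; positive values mean skill over the reference.
    return 1.0 - brier_score(probs, outcomes) / brier_score(ref_probs, outcomes)

# Made-up lightning forecasts vs. a flat climatological reference:
bss = brier_skill_score([0.8, 0.2, 0.7, 0.1], [0.5, 0.5, 0.5, 0.5], [1, 0, 1, 0])
```

In the study the reference is a spatio-temporal climatology model rather than a flat probability, but the skill-score arithmetic is the same.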
Models for Prediction, Explanation and Control: Recursive Bayesian Networks
Lorenzo Casini
2011-01-01
The Recursive Bayesian Net (RBN) formalism was originally developed for modelling nested causal relationships. In this paper we argue that the formalism can also be applied to modelling the hierarchical structure of mechanisms. The resulting network contains quantitative information about probabilities, as well as qualitative information about mechanistic structure and causal relations. Since information about probabilities, mechanisms and causal relations is vital for prediction, explanation and control respectively, an RBN can be applied to all these tasks. We show in particular how a simple two-level RBN can be used to model a mechanism in cancer science. The higher level of our model contains variables at the clinical level, while the lower level maps the structure of the cell's mechanism for apoptosis.
Gao, Jie; Wang, Yi; Wargocki, Pawel
2015-01-01
In this paper, a comparative analysis was performed on the human thermal sensation estimated by modified predicted mean vote (PMV) models and modified standard effective temperature (SET) models in naturally ventilated buildings; the data were collected in a field study. These prediction models were developed on the basis of the original PMV/SET models and consider the influence of occupants' expectations and human adaptive functions, including the extended PMV/SET models and the adaptive PMV/SET models. The results showed that when the indoor air velocity ranged from 0 to 0.2 m/s and from 0.2 to 0.8 m/s ... the difference between the measured and predicted values using the modified PMV models exceeded 25%, while the difference between the measured thermal sensation and the predicted thermal sensation using modified SET models was approximately less than 25%. It is concluded that the modified SET models can predict human ...
LU Xiao-li
2012-01-01
I select 32 samples concerning per capita living consumption of rural residents in Sichuan Province during the period 1978-2009. First, using the Markov prediction method, the growth rate of living consumption level in the future is predicted to largely range from 10% to 20%. Then, in order to improve the prediction accuracy, time variable t is added into the traditional ARMA model for modeling and prediction. The prediction results show that the average relative error rate is 1.56%, and the absolute value of relative error during the period 2006-2009 is less than 0.5%. Finally, I compare the prediction results during the period 2010-2012 by the Markov prediction method and the ARMA model, respectively, indicating that the two are consistent in terms of growth rate of living consumption, and the prediction results are reliable. The results show that under similar policies, rural residents' consumer demand in Sichuan Province will continue to grow in the short term, so it is necessary to further expand the consumer market.
Allostasis: a model of predictive regulation.
Sterling, Peter
2012-04-12
The premise of the standard regulatory model, "homeostasis", is flawed: the goal of regulation is not to preserve constancy of the internal milieu. Rather, it is to continually adjust the milieu to promote survival and reproduction. Regulatory mechanisms need to be efficient, but homeostasis (error-correction by feedback) is inherently inefficient. Thus, although feedbacks are certainly ubiquitous, they could not possibly serve as the primary regulatory mechanism. A newer model, "allostasis", proposes that efficient regulation requires anticipating needs and preparing to satisfy them before they arise. The advantages: (i) errors are reduced in magnitude and frequency; (ii) response capacities of different components are matched -- to prevent bottlenecks and reduce safety factors; (iii) resources are shared between systems to minimize reserve capacities; (iv) errors are remembered and used to reduce future errors. This regulatory strategy requires a dedicated organ, the brain. The brain tracks multitudinous variables and integrates their values with prior knowledge to predict needs and set priorities. The brain coordinates effectors to mobilize resources from modest bodily stores and enforces a system of flexible trade-offs: from each organ according to its ability, to each organ according to its need. The brain also helps regulate the internal milieu by governing anticipatory behavior. Thus, an animal conserves energy by moving to a warmer place - before it cools, and it conserves salt and water by moving to a cooler one before it sweats. The behavioral strategy requires continuously updating a set of specific "shopping lists" that document the growing need for each key component (warmth, food, salt, water). These appetites funnel into a common pathway that employs a "stick" to drive the organism toward filling the need, plus a "carrot" to relax the organism when the need is satisfied. The stick corresponds broadly to the sense of anxiety, and the carrot broadly to
An infinitesimal model for quantitative trait genomic value prediction.
Zhiqiu Hu
We developed a marker-based infinitesimal model for quantitative trait analysis. In contrast to the classical infinitesimal model, we now have new information about the segregation of every individual locus of the entire genome. Under this new model, we propose that the genetic effect of an individual locus is a function of the genome location (a continuous quantity). The overall genetic value of an individual is the weighted integral of the genetic effect function along the genome. Numerical integration is performed to find the integral, which requires partitioning the entire genome into a finite number of bins. Each bin may contain many markers. The integral is approximated by the weighted sum of all the bin effects. We thus turn the problem of marker analysis into bin analysis, so that the model dimension decreases from a virtual infinity to a finite number of bins. This new approach can efficiently handle a virtually unlimited number of markers without marker selection. The marker-based infinitesimal model requires high linkage disequilibrium of all markers within a bin. For populations with low or no linkage disequilibrium, we develop an adaptive infinitesimal model. Both the original and the adaptive models are tested using simulated data as well as beef cattle data. The simulated data analysis shows that there is always an optimal number of bins at which the predictability of the bin model is much greater than that of the original marker analysis. Results of the beef cattle data analysis indicate that the bin model can increase the predictability from 10% (multiple marker analysis) to 33% (multiple bin analysis). The marker-based infinitesimal model paves a way towards the solution of genetic mapping and genomic selection using whole-genome sequence data.
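The bin idea described here (aggregating markers so that bin effects, not marker effects, are estimated) can be sketched in a few lines. This is an illustrative simplification with simulated data and a plain ridge estimate of the bin effects, not the authors' exact weighted-integral estimator; all sizes and the penalty value are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# simulated data: 200 individuals, 5000 markers with genotypes in {0, 1, 2}
n, m, n_bins = 200, 5000, 50
X = rng.integers(0, 3, size=(n, m)).astype(float)
true_effects = rng.normal(0.0, 0.05, m)
y = X @ true_effects + rng.normal(0.0, 1.0, n)

# collapse markers into contiguous bins: each bin value is the mean genotype
# of its markers (this presumes within-bin LD, as the model requires)
B = X.reshape(n, n_bins, m // n_bins).mean(axis=2)

# ridge estimate of bin effects: beta = (B'B + lam I)^-1 B'y
lam = 1.0
beta = np.linalg.solve(B.T @ B + lam * np.eye(n_bins), B.T @ y)
y_hat = B @ beta
r2 = np.corrcoef(y, y_hat)[0, 1] ** 2   # predictability on the training data
```

The model dimension drops from 5000 markers to 50 bins, which is the computational point of the bin analysis.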
Bayesian prediction of placebo analgesia in an instrumental learning model
Jung, Won-Mo; Lee, Ye-Seul; Wallraven, Christian; Chae, Younbyoung
2017-01-01
Placebo analgesia can be primarily explained by the Pavlovian conditioning paradigm in which a passively applied cue becomes associated with less pain. In contrast, instrumental conditioning employs an active paradigm that might be more similar to clinical settings. In the present study, an instrumental conditioning paradigm involving a modified trust game in a simulated clinical situation was used to induce placebo analgesia. Additionally, Bayesian modeling was applied to predict the placebo responses of individuals based on their choices. Twenty-four participants engaged in a medical trust game in which decisions to receive treatment from either a doctor (more effective with high cost) or a pharmacy (less effective with low cost) were made after receiving a reference pain stimulus. In the conditioning session, the participants received lower levels of pain following both choices, while high pain stimuli were administered in the test session even after making the decision. The choice-dependent pain in the conditioning session was modulated in terms of both intensity and uncertainty. Participants reported significantly less pain when they chose the doctor or the pharmacy for treatment compared to the control trials. The predicted pain ratings based on Bayesian modeling showed significant correlations with the actual reports from participants for both of the choice categories. The instrumental conditioning paradigm allowed for the active choice of optional cues and was able to induce the placebo analgesia effect. Additionally, Bayesian modeling successfully predicted pain ratings in a simulated clinical situation that fits well with placebo analgesia induced by instrumental conditioning. PMID:28225816
Improving active space telescope wavefront control using predictive thermal modeling
Gersh-Range, Jessica; Perrin, Marshall D.
2015-01-01
Active control algorithms for space telescopes are less mature than those for large ground telescopes due to differences in the wavefront control problems. Active wavefront control for space telescopes at L2, such as the James Webb Space Telescope (JWST), requires weighing control costs against the benefits of correcting wavefront perturbations that are a predictable byproduct of the observing schedule, which is known and determined in advance. To improve the control algorithms for these telescopes, we have developed a model that calculates the temperature and wavefront evolution during a hypothetical mission, assuming the dominant wavefront perturbations are due to changes in the spacecraft attitude with respect to the sun. Using this model, we show that the wavefront can be controlled passively by introducing scheduling constraints that limit the allowable attitudes for an observation based on the observation duration and the mean telescope temperature. We also describe the implementation of a predictive controller designed to prevent the wavefront error (WFE) from exceeding a desired threshold. This controller outperforms simpler algorithms even with substantial model error, achieving a lower WFE without requiring significantly more corrections. Consequently, predictive wavefront control based on known spacecraft attitude plans is a promising approach for JWST and other future active space observatories.
Required Collaborative Work in Online Courses: A Predictive Modeling Approach
Smith, Marlene A.; Kellogg, Deborah L.
2015-01-01
This article describes a predictive model that assesses whether a student will have greater perceived learning in group assignments or in individual work. The model produces correct classifications 87.5% of the time. The research is notable in that it is the first in the education literature to adopt a predictive modeling methodology using data…
A prediction model for assessing residential radon concentration in Switzerland
Hauri, D.D.; Huss, A.; Zimmermann, F.; Kuehni, C.E.; Roosli, M.
2012-01-01
Indoor radon is regularly measured in Switzerland. However, a nationwide model to predict residential radon levels has not been developed. The aim of this study was to develop a prediction model to assess indoor radon concentrations in Switzerland. The model was based on 44,631 measurements from the
Distributional Analysis for Model Predictive Deferrable Load Control
Chen, Niangjun; Gan, Lingwen; Low, Steven H.; Wierman, Adam
2014-01-01
Deferrable load control is essential for handling the uncertainties associated with the increasing penetration of renewable generation. Model predictive control has emerged as an effective approach for deferrable load control, and has received considerable attention. In particular, previous work has analyzed the average-case performance of model predictive deferrable load control. However, to this point, distributional analysis of model predictive deferrable load control has been elusive. In ...
Predictive Models for Photovoltaic Electricity Production in Hot Weather Conditions
Jabar H. Yousif
2017-07-01
The process of finding a correct forecast equation for photovoltaic electricity production from renewable sources is an important matter, since knowing the factors affecting the increase in the proportion of renewable energy production and reducing the cost of the product has economic and scientific benefits. This paper proposes a mathematical model for forecasting energy production in photovoltaic (PV) panels based on a self-organizing feature map (SOFM) model. The proposed model is compared with other models, including the multi-layer perceptron (MLP) and support vector machine (SVM) models. Moreover, a mathematical model based on a polynomial function for fitting the desired output is proposed. Different practical measurement methods are used to validate the findings of the proposed neural and mathematical models, such as mean square error (MSE), mean absolute error (MAE), correlation (R), and coefficient of determination (R²). The proposed SOFM model achieved a final MSE of 0.0007 in the training phase and 0.0005 in the cross-validation phase. In contrast, the SVM model resulted in a small MSE value equal to 0.0058, while the MLP model achieved a final MSE of 0.026 with a correlation coefficient of 0.9989, which indicates a strong relationship between input and output variables. The proposed SOFM model closely fits the desired results based on the R² value, which is equal to 0.9555. Finally, the comparison of MAE for the three models shows that the SOFM model achieved the best result of 0.36156, whereas the SVM and MLP models yielded 4.53761 and 3.63927, respectively. A small MAE value indicates that the output of the SOFM model closely fits the actual results and predicts the desired output.
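The four validation measures used in this comparison (MSE, MAE, R, and R²) are straightforward to reproduce. A minimal sketch with invented toy values standing in for measured and predicted PV output:

```python
import numpy as np

def metrics(y_true, y_pred):
    """Return MSE, MAE, correlation R, and coefficient of determination R^2."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    mse = np.mean((y_true - y_pred) ** 2)
    mae = np.mean(np.abs(y_true - y_pred))
    r = np.corrcoef(y_true, y_pred)[0, 1]
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return mse, mae, r, r2

# toy measured vs. forecast PV output (arbitrary units)
y = [3.0, 4.5, 5.0, 6.5, 7.0]
p = [2.8, 4.6, 5.3, 6.4, 7.4]
mse, mae, r, r2 = metrics(y, p)
```

Note that R² as defined here (one minus the residual sum of squares over the total sum of squares) need not equal R squared when the forecast is biased.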
Predicting the ungauged basin: model validation and realism assessment
van Emmerik, Tim; Mulder, Gert; Eilander, Dirk; Piet, Marijn; Savenije, Hubert
2016-04-01
The hydrological decade on Predictions in Ungauged Basins (PUB) [1] led to many new insights in model development, calibration strategies, data acquisition and uncertainty analysis. Due to a limited amount of published studies on genuinely ungauged basins, model validation and realism assessment of model outcome has not been discussed to a great extent. With this study [2] we aim to contribute to the discussion on how one can determine the value and validity of a hydrological model developed for an ungauged basin. As in many cases no local, or even regional, data are available, alternative methods should be applied. Using a PUB case study in a genuinely ungauged basin in southern Cambodia, we give several examples of how one can use different types of soft data to improve model design, calibrate and validate the model, and assess the realism of the model output. A rainfall-runoff model was coupled to an irrigation reservoir, allowing the use of additional and unconventional data. The model was mainly forced with remote sensing data, and local knowledge was used to constrain the parameters. Model realism assessment was done using data from surveys. This resulted in a successful reconstruction of the reservoir dynamics, and revealed the different hydrological characteristics of the two topographical classes. We do not present a generic approach that can be transferred to other ungauged catchments, but we aim to show how clever model design and alternative data acquisition can result in a valuable hydrological model for ungauged catchments. [1] Sivapalan, M., Takeuchi, K., Franks, S., Gupta, V., Karambiri, H., Lakshmi, V., et al. (2003). IAHS decade on predictions in ungauged basins (PUB), 2003-2012: shaping an exciting future for the hydrological sciences. Hydrol. Sci. J. 48, 857-880. doi: 10.1623/hysj.48.6.857.51421 [2] van Emmerik, T., Mulder, G., Eilander, D., Piet, M. and Savenije, H. (2015). Predicting the ungauged basin: model validation and realism assessment
Regional differences in prediction models of lung function in Germany
Schäper Christoph
2010-04-01
Background: Little is known about the influence of specific characteristics on lung function in different populations. The aim of this analysis was to determine whether lung function determinants differ between subpopulations within Germany and whether prediction equations developed for one subpopulation are also adequate for another. Methods: Within three studies (KORA C, SHIP-I, ECRHS-I) in different areas of Germany, 4059 adults performed lung function tests. The available data consisted of forced expiratory volume in one second (FEV1), forced vital capacity (FVC) and peak expiratory flow rate (PEF). For each study, multivariate regression models were developed to predict lung function, and Bland-Altman plots were established to evaluate the agreement between predicted and measured values. Results: The final regression equations for FEV1 and FVC showed adjusted R² values between 0.65 and 0.75, and for PEF between 0.46 and 0.61. In all studies, gender, age, height and pack-years were significant determinants, each with a similar effect size. Regarding other predictors there were some, although not statistically significant, differences between the studies. Bland-Altman plots indicated that the regression models for each individual study adequately predict medium (i.e. normal, but not extremely high or low) lung function values in the whole study population. Conclusions: Simple models with gender, age and height explain a substantial part of lung function variance, whereas further determinants add less than 5% to the total explained R², at least for FEV1 and FVC. Thus, for different adult subpopulations of Germany, one simple model for each lung function measure is still sufficient.
Prediction for Major Adverse Outcomes in Cardiac Surgery: Comparison of Three Prediction Models
Cheng-Hung Hsieh
2007-09-01
Conclusion: The Parsonnet score performed as well as the logistic regression models in predicting major adverse outcomes. The Parsonnet score appears to be a very suitable model for clinicians to use in risk stratification of cardiac surgery.
On hydrological model complexity, its geometrical interpretations and prediction uncertainty
Arkesteijn, E.C.M.M.; Pande, S.
2013-01-01
Knowledge of hydrological model complexity can aid selection of an optimal prediction model out of a set of available models. Optimal model selection is formalized as selection of the least complex model out of a subset of models that have lower empirical risk. This may be considered equivalent to
Using Topic Modeling and Text Embeddings to Predict Deleted Tweets
Potash, Peter J.; Bell, Eric B.; Harrison, Joshua J.
2016-02-29
Predictive models for tweet deletion have been a relatively unexplored area of Twitter-related computational research. We first approach the deletion of tweets as a spam detection problem, applying a small set of handcrafted features to improve upon the current state-of-the-art in predicting deleted tweets. Next, we apply our approach to a dataset of deleted tweets that better reflects the current deletion rate. Since tweets are deleted for reasons beyond just the presence of spam, we apply topic modeling and text embeddings in order to capture the semantic content of tweets that can lead to tweet deletion. Our goal is to create an effective model that has a low-dimensional feature space and is also language-independent. A lean model would be computationally advantageous when processing high volumes of Twitter data, which can reach 9,885 tweets per second. Our results show that a small set of spam-related features combined with word topics and character-level text embeddings provides the best F1 score when trained with a random forest model. The highest precision on the deleted tweet class is achieved by a modification of paragraph2vec to capture author identity.
Simple predictive electron transport models applied to sawtoothing plasmas
Kim, D.; Merle, A.; Sauter, O.; Goodman, T. P.
2016-05-01
In this work, we introduce two simple transport models to evaluate the time evolution of electron temperature and density profiles during sawtooth cycles (i.e. over a sawtooth period time-scale). Since the aim of these simulations is to estimate reliable profiles within a short calculation time, two simplified ad-hoc models have been developed. The goal for these models is to rely on a few easy-to-check free parameters, such as the confinement time scaling factor and the profiles' averaged scale-lengths. Due to the simplicity and short calculation time of the models, it is expected that these models can also be applied to real-time transport simulations. We show that it works well for Ohmic and EC heated L- and H-mode plasmas. The differences between these models are discussed and we show that their predictive capabilities are similar. Thus only one model is used to reproduce with simulations the results of sawtooth control experiments on the TCV tokamak. For the sawtooth pacing, the calculated time delays between the EC power off and sawtooth crash time agree well with the experimental results. The map of possible locking range is also well reproduced by the simulation.
Predictive modeling of dental pain using neural network.
Kim, Eun Yeob; Lim, Kun Ok; Rhee, Hyun Sill
2009-01-01
The mouth, as the gateway for ingesting food, is one of the most basic and important parts of the body. In this study, dental pain was predicted using a neural network model. The resulting predictive model of dental pain factors achieved a fit of 80.0%. For people identified by the neural network model as likely to experience dental pain, preventive measures including proper eating habits, education on oral hygiene, and stress release should precede any dental treatment.
Mixing height computation from a numerical weather prediction model
Jericevic, A. [Croatian Meteorological and Hydrological Service, Zagreb (Croatia); Grisogono, B. [Univ. of Zagreb, Zagreb (Croatia). Andrija Mohorovicic Geophysical Inst., Faculty of Science
2004-07-01
Dispersion models require hourly values of the mixing height, H, which indicates the extent of turbulent mixing. The aim of this study was to investigate the model's ability and characteristics in predicting H. The ALADIN limited-area numerical weather prediction (NWP) model for short-range 48-hour forecasts was used. The bulk Richardson number (R_iB) method was applied to determine the height of the atmospheric boundary layer at the grid point nearest to Zagreb, Croatia. This location was selected because radio soundings were available there, allowing verification of the model. A critical bulk Richardson number of R_iBc = 0.3 was used. The modelled and measured values of H for 219 days at 12 UTC were compared, yielding a correlation coefficient of 0.62. This indicates that ALADIN can be used to calculate H in the convective boundary layer. For the stable boundary layer (SBL), the model systematically underestimated H. Results showed that R_iBc evidently increases with increasing stability. Decoupling from the surface was detected in the very stable boundary layer, a consequence of weak flow that causes R_iB to become very large. Verification of the practical use of the R_iB method for calculating H from an NWP model was performed, and the necessity of including other stability parameters (e.g., surface roughness length) was demonstrated. Since the ALADIN model is in operational use in many European countries, this study should help others in pre-processing NWP data as input to dispersion models.
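The bulk Richardson number method diagnoses H as the lowest level at which R_iB reaches its critical value. A simplified sketch under idealized assumptions (the profile below is invented; the study worked from ALADIN model levels with radiosonde verification):

```python
import numpy as np

G = 9.81          # gravitational acceleration, m s^-2
RI_B_CRIT = 0.3   # critical bulk Richardson number used in the study

def mixing_height(z, theta_v, u, v):
    """Lowest level where the bulk Richardson number reaches its critical value.
    z: heights AGL (m); theta_v: virtual potential temperature (K);
    u, v: wind components (m s^-1); index 0 is the surface level."""
    wind2 = u**2 + v**2
    wind2 = np.where(wind2 < 1e-6, 1e-6, wind2)   # guard against calm layers
    ri_b = G * (theta_v - theta_v[0]) * z / (theta_v[0] * wind2)
    above = np.nonzero(ri_b >= RI_B_CRIT)[0]
    return z[above[0]] if above.size else z[-1]

# idealized convective profile: well-mixed up to ~1000 m, capped by an inversion
z = np.arange(0.0, 3000.0, 100.0)
theta_v = np.where(z <= 1000.0, 300.0, 300.0 + 0.005 * (z - 1000.0))
u = np.full_like(z, 5.0)
v = np.zeros_like(z)
print(mixing_height(z, theta_v, u, v))   # diagnosed mixing height in metres
```

With this profile the first level exceeding R_iBc sits just above the inversion base, which is the intended behaviour for a convective boundary layer.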
Jorgensen, Stephen R.; Sonstegard, Janet S.
1984-01-01
Presents a test of the Fishbein model of behavior prediction applied to predict the pregnancy risk-taking behavior of adolescent females (N=244). Analyses of data showed that the Fishbein model of attitude-behavior consistency seems to be applicable to the fertility-related behavior of adolescent females. (LLL)
Mihaela Simionescu
2014-12-01
There are many types of econometric models used in predicting the inflation rate, but in this study we used a Bayesian shrinkage combination approach. This methodology is used in order to improve prediction accuracy by including information that is not captured by the econometric models. Therefore, experts' forecasts are utilized as prior information; for Romania these predictions are provided by the Institute for Economic Forecasting (Dobrescu macromodel), the National Commission for Prognosis and the European Commission. The empirical results for Romanian inflation show the superiority of a fixed effects model compared to other types of econometric models such as VAR, Bayesian VAR, simultaneous equations models, dynamic models and log-linear models. The Bayesian combinations that used experts' predictions as priors, when the shrinkage parameter tends to infinity, improved the accuracy of all forecasts based on individual models, also outperforming zero-weight and equal-weight predictions and naïve forecasts.
Prediction of peptide bonding affinity: kernel methods for nonlinear modeling
Bergeron, Charles; Sundling, C Matthew; Krein, Michael; Katt, Bill; Sukumar, Nagamani; Breneman, Curt M; Bennett, Kristin P
2011-01-01
This paper presents regression models obtained from a process of blind prediction of peptide binding affinity from provided descriptors for several distinct datasets as part of the 2006 Comparative Evaluation of Prediction Algorithms (COEPRA) contest. This paper finds that kernel partial least squares, a nonlinear partial least squares (PLS) algorithm, outperforms PLS, and that the incorporation of transferable atom equivalent features improves predictive capability.
Huili eYuan
2016-04-01
The biomass composition represented in constraint-based metabolic models is a key component for predicting cellular metabolism using flux balance analysis (FBA). Despite major advances in analytical technologies, it is often challenging to obtain a detailed composition of all major biomass components experimentally. Studies examining the influence of the biomass composition on the predictions of metabolic models have so far mostly been done on models of microorganisms. Little is known about the impact of varying biomass composition on flux prediction in FBA models of plants, whose metabolism is very versatile and complex because of the presence of multiple subcellular compartments. Also, the published metabolic models of plants differ in size and complexity. In this study, we examined the sensitivity of the predicted fluxes of plant metabolic models to biomass composition and model structure. These questions were addressed by evaluating the sensitivity of predictions of growth rates and central carbon metabolic fluxes to varying biomass compositions in three different genome-/large-scale metabolic models of Arabidopsis thaliana. Our results showed that fluxes through the central carbon metabolism were robust to changes in biomass composition. Nevertheless, comparisons between the predictions from the three models using identical modelling constraints and objective functions showed that model predictions were sensitive to the structure of the models, highlighting large discrepancies between the published models.
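FBA itself reduces to a linear program: maximize the biomass flux subject to the steady-state mass balance S·v = 0 and flux bounds, with the biomass composition entering through the stoichiometry of the biomass reaction. A toy sketch with an invented three-metabolite network (not one of the Arabidopsis models discussed):

```python
import numpy as np
from scipy.optimize import linprog

# toy network: A_ext -> A -> B -> biomass, with a byproduct branch A -> C -> export
# columns: R1 uptake, R2 A->B, R3 B->biomass, R4 A->C, R5 C export
S = np.array([
    [ 1, -1,  0, -1,  0],   # metabolite A
    [ 0,  1, -1,  0,  0],   # metabolite B
    [ 0,  0,  0,  1, -1],   # metabolite C
])
bounds = [(0, 10), (0, None), (0, None), (0, None), (0, None)]  # uptake capped at 10

# maximize the biomass flux R3 subject to steady state S v = 0
c = np.zeros(5)
c[2] = -1.0                 # linprog minimizes, so negate the objective
res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs")
print(res.x[2])             # optimal biomass flux
```

In a genome-scale plant model S has thousands of columns and multiple compartments, but the optimization problem has exactly this shape.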
Inverter Modeling For Accurate Energy Predictions Of Tracking HCPV Installations
Bowman, J.; Jensen, S.; McDonald, Mark
2010-10-01
High efficiency high concentration photovoltaic (HCPV) solar plants of megawatt scale are now operational, and opportunities for expanded adoption are plentiful. However, effective bidding for sites requires reliable prediction of energy production. HCPV module nameplate power is rated for specific test conditions; however, instantaneous HCPV power varies due to site specific irradiance and operating temperature, and is degraded by soiling, protective stowing, shading, and electrical connectivity. These factors interact with the selection of equipment typically supplied by third parties, e.g., wire gauge and inverters. We describe a time sequence model accurately accounting for these effects that predicts annual energy production, with specific reference to the impact of the inverter on energy output and interactions between system-level design decisions and the inverter. We will also show two examples, based on an actual field design, of inverter efficiency calculations and the interaction between string arrangements and inverter selection.
Predicting and understanding forest dynamics using a simple tractable model.
Purves, Drew W; Lichstein, Jeremy W; Strigul, Nikolay; Pacala, Stephen W
2008-11-04
The perfect-plasticity approximation (PPA) is an analytically tractable model of forest dynamics, defined in terms of parameters for individual trees, including allometry, growth, and mortality. We estimated these parameters for the eight most common species on each of four soil types in the US Lake states (Michigan, Wisconsin, and Minnesota) by using short-term tree-level measurements, and compared model predictions to chronosequences of stand development. Predictions for the timing and magnitude of basal area dynamics and ecological succession on each soil were accurate, and predictions for the diameter distribution of 100-year-old stands were correct in form and slope. For a given species, the PPA provides analytical metrics for early-successional performance (H(20), height of a 20-year-old open-grown tree) and late-successional performance (Z*, equilibrium canopy height in monoculture). These metrics predicted which species were early or late successional on each soil type. Decomposing Z* showed that (i) succession is driven both by superior understory performance and superior canopy performance of late-successional species, and (ii) performance differences primarily reflect differences in mortality rather than growth. The predicted late-successional dominants matched chronosequences on xeromesic (Quercus rubra) and mesic (codominance by Acer rubrum and Acer saccharum) soils. On hydromesic and hydric soils, the literature reports that the current dominant species in old stands (Thuja occidentalis) is now failing to regenerate. Consistent with this, the PPA predicted that, on these soils, stands are now succeeding to dominance by other late-successional species (e.g., Fraxinus nigra, A. rubrum).
Comparisons of Faulting-Based Pavement Performance Prediction Models
Weina Wang
2017-01-01
Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are constantly faced with the problem of low prediction accuracy, which leads to costly maintenance. Although many researchers have developed performance prediction models, prediction accuracy has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Three models, including a multivariate nonlinear regression (MNLR) model, an artificial neural network (ANN) model, and a Markov Chain (MC) model, are then tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and not explicitly related to quantitative physical parameters. This paper then suggests that the direction for further development of performance prediction models is to combine the advantages and disadvantages of the different models to obtain better accuracy.
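The Markov Chain approach mentioned here propagates a distribution over discrete condition states through a transition matrix. A minimal sketch with a hypothetical transition matrix (a real application would estimate these probabilities from inspection records):

```python
import numpy as np

# hypothetical one-cycle transition probabilities between faulting condition
# states (good, fair, poor); rows must sum to 1
P = np.array([
    [0.85, 0.13, 0.02],
    [0.00, 0.80, 0.20],
    [0.00, 0.00, 1.00],   # "poor" is absorbing until maintenance resets it
])

state = np.array([1.0, 0.0, 0.0])   # a newly built section starts in "good"
for _ in range(5):                  # project five inspection cycles ahead
    state = state @ P
print(np.round(state, 3))           # probability of each condition state
```

Because deterioration only moves rightward through the states, the "good" probability decays geometrically while mass accumulates in "poor", which matches the visual-inspection character of the MC model noted in the abstract.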
Using CV-GLUE procedure in analysis of wetland model predictive uncertainty.
Huang, Chun-Wei; Lin, Yu-Pin; Chiang, Li-Chi; Wang, Yung-Chieh
2014-07-01
This study develops a procedure that is related to Generalized Likelihood Uncertainty Estimation (GLUE), called the CV-GLUE procedure, for assessing the predictive uncertainty that is associated with different model structures with varying degrees of complexity. The proposed procedure comprises model calibration, validation, and predictive uncertainty estimation in terms of a characteristic coefficient of variation (characteristic CV). The procedure first performed two-stage Monte-Carlo simulations to ensure predictive accuracy by obtaining behavior parameter sets, and then the estimation of CV-values of the model outcomes, which represent the predictive uncertainties for a model structure of interest with its associated behavior parameter sets. Three commonly used wetland models (the first-order K-C model, the plug flow with dispersion model, and the Wetland Water Quality Model; WWQM) were compared based on data that were collected from a free water surface constructed wetland with paddy cultivation in Taipei, Taiwan. The results show that the first-order K-C model, which is simpler than the other two models, has greater predictive uncertainty. This finding shows that predictive uncertainty does not necessarily increase with the complexity of the model structure because in this case, the more simplistic representation (first-order K-C model) of reality results in a higher uncertainty in the prediction made by the model. The CV-GLUE procedure is suggested to be a useful tool not only for designing constructed wetlands but also for other aspects of environmental management.
Fuzzy predictive filtering in nonlinear economic model predictive control for demand response
Santos, Rui Mirra; Zong, Yi; Sousa, Joao M. C.;
2016-01-01
The performance of a model predictive controller (MPC) is highly correlated with the model's accuracy. This paper introduces an economic model predictive control (EMPC) scheme based on a nonlinear model, which uses a branch-and-bound tree search for solving the inherent non-convex optimization problem. Moreover, to reduce the computation time and improve the controller's performance, a fuzzy predictive filter is introduced. With the purpose of testing the developed EMPC, a simulation controlling the temperature levels of an intelligent office building (PowerFlexHouse), with and without fuzzy...
Predictive modeling and reducing cyclic variability in autoignition engines
Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob
2016-08-30
Methods and systems are provided for controlling a vehicle engine to reduce cycle-to-cycle combustion variation. A predictive model is applied to predict cycle-to-cycle combustion behavior of an engine based on observed engine performance variables. Conditions are identified, based on the predicted cycle-to-cycle combustion behavior, that indicate high cycle-to-cycle combustion variation. Corrective measures are then applied to prevent the predicted high cycle-to-cycle combustion variation.
Mutual information model for link prediction in heterogeneous complex networks
Shakibian, Hadi; Moghadam Charkari, Nasrollah
2017-01-01
Recently, a number of meta-path based similarity indices like PathSim, HeteSim, and random walk have been proposed for link prediction in heterogeneous complex networks. However, these indices suffer from two major drawbacks. Firstly, they depend primarily on the connectivity degrees of node pairs without considering the further information provided by the given meta-path. Secondly, most of them require a single and usually symmetric meta-path in advance, so employing a set of different meta-paths is not straightforward. To tackle these problems, we propose a mutual information model for link prediction in heterogeneous complex networks. The proposed model, called the Meta-path based Mutual Information Index (MMI), introduces meta-path based link entropy to estimate the link likelihood and can be applied over a set of available meta-paths. This estimation measures the amount of information through the paths instead of measuring the amount of connectivity between the node pairs. The experimental results on a bibliography network show that the MMI obtains high prediction accuracy compared with other popular similarity indices. PMID:28344326
钟伟民; 何国龙; 皮道映; 孙优贤
2005-01-01
A nonlinear one-step-ahead model predictive controller based on a support vector machine (SVM) with a quadratic polynomial kernel function is presented. The SVM-based predictive model is established with a black-box identification method. By solving a cubic equation in the feature space, an explicit predictive control law is obtained through the predictive control mechanism. The effect of the controller is demonstrated on a recognized benchmark problem and on the control of a continuous stirred-tank reactor (CSTR). Simulation results show that the SVM-based predictive controller with a quadratic polynomial kernel can be applied well to nonlinear systems, with good performance in following a reference trajectory as well as in disturbance rejection.
Naumis, Gerardo G
2012-06-01
When a liquid melt is cooled, a glass or phase transition can be obtained depending on the cooling rate. Yet, this behavior has not been clearly captured in energy-landscape models. Here, a model is provided in which two key ingredients are considered in the landscape, metastable states and their multiplicity. Metastable states are considered as in two level system models. However, their multiplicity and topology allows a phase transition in the thermodynamic limit for slow cooling, while a transition to the glass is obtained for fast cooling. By solving the corresponding master equation, the minimal speed of cooling required to produce the glass is obtained as a function of the distribution of metastable states.
Reexamination of the State of the Art Cloud Modeling Shows Real Improvements
Muehlbauer, Andreas D.; Grabowski, Wojciech W.; Malinowski, S. P.; Ackerman, Thomas P.; Bryan, George; Lebo, Zachary; Milbrandt, Jason; Morrison, H.; Ovchinnikov, Mikhail; Tessendorf, Sarah; Theriault, Julie M.; Thompson, Gregory
2013-05-25
Following up on an almost thirty-year-long history of International Cloud Modeling Workshops, which started with a meeting in Irsee, Germany in 1985, the 8th International Cloud Modeling Workshop was held in July 2012 in Warsaw, Poland. The workshop, hosted by the Institute of Geophysics at the University of Warsaw, was organized by Szymon Malinowski and his local team of students and co-chaired by Wojciech Grabowski (NCAR/MMM) and Andreas Muhlbauer (University of Washington). International Cloud Modeling Workshops have traditionally been held every four years, typically during the week before the International Conference on Clouds and Precipitation (ICCP). Rooted in the World Meteorological Organization’s (WMO) weather modification program, the core objectives of the Cloud Modeling Workshop have centered on the numerical modeling of clouds, cloud microphysics, and the interactions between cloud microphysics and cloud dynamics. In particular, the goal of the workshop is to provide insight into the pertinent problems of today’s state of the art in cloud modeling and to identify key deficiencies in the microphysical representation of clouds in numerical models and cloud parameterizations. In recent years, the workshop has increasingly shifted its focus toward modeling the interactions between aerosols and clouds and provided case studies to investigate both the effects of aerosols on clouds and precipitation as well as the impact of cloud and precipitation processes on aerosols. This time, about 60 (?) scientists from about 10 (?) different countries participated in the workshop and contributed with discussions, oral and poster presentations to the workshop’s plenary and breakout sessions. Several case leaders contributed to the workshop by setting up five observationally-based case studies covering a wide range of cloud types, namely, marine stratocumulus, mid-latitude squall lines, mid-latitude cirrus clouds, Arctic stratus and winter-time orographic
A Predictive Model of Cell Traction Forces Based on Cell Geometry
Lemmon, Christopher A.; Romer, Lewis H
2010-01-01
Recent work has indicated that the shape and size of a cell can influence how a cell spreads, develops focal adhesions, and exerts forces on the substrate. However, it is unclear how cell shape regulates these events. Here we present a computational model that uses cell shape to predict the magnitude and direction of forces generated by cells. The predicted results are compared to experimentally measured traction forces, and show that the model can predict traction force direction, relative m...
Intelligent predictive model of ventilating capacity of imperial smelt furnace
唐朝晖; 胡燕瑜; 桂卫华; 吴敏
2003-01-01
In order to determine the ventilating capacity of an imperial smelting furnace (ISF) and increase the output of lead, an intelligent modeling method based on gray theory and artificial neural networks (ANN) is proposed, in which the weight values in the integrated model can be adjusted automatically. An intelligent predictive model of the ventilating capacity of the ISF is established and analyzed by the method. The simulation results and industrial applications demonstrate that the predictive model is close to the real plant, with a relative predictive error of 0.72%, which is 50% less than that of the single model, leading to a notable increase in the output of lead.
A Prediction Model of the Capillary Pressure J-Function
Xu, W. S.; Luo, P. Y.; Sun, L.; Lin, N.
2016-01-01
The capillary pressure J-function is a dimensionless measure of the capillary pressure of a fluid in a porous medium. The function was derived based on a capillary bundle model. However, the dependence of the J-function on the saturation Sw is not well understood. A prediction model for it is presented based on a capillary pressure model; the J-function prediction model is a power function instead of an exponential or polynomial function. Relative permeability is calculated with the J-function prediction model, resulting in an easier calculation and results that are more representative. PMID:27603701
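For reference, the dimensionless Leverett J-function underlying this abstract can be written and evaluated directly. In the sketch below the rock and fluid properties (pc, sigma, theta, k, phi) are hypothetical sample values, and the power-function coefficients a and b are illustrative stand-ins, not the paper's fitted constants.

```python
import math

# Leverett J-function: J = Pc * sqrt(k / phi) / (sigma * cos(theta)),
# where Pc is capillary pressure [Pa], sigma interfacial tension [N/m],
# theta contact angle [rad], k permeability [m^2], phi porosity [-].
def leverett_j(pc, sigma, theta, k, phi):
    return pc * math.sqrt(k / phi) / (sigma * math.cos(theta))

# The abstract's prediction model is a power function of saturation,
# J(Sw) = a * Sw**b; a and b here are illustrative fitting constants.
def j_power_model(sw, a=0.4, b=-1.5):
    return a * sw ** b

# Hypothetical sandstone-like values: Pc = 5 kPa, sigma = 0.03 N/m,
# theta = 0, k = 1e-13 m^2 (about 100 mD), phi = 0.2.
j = leverett_j(pc=5e3, sigma=0.03, theta=0.0, k=1e-13, phi=0.2)
print(round(j, 4))
```

With b negative, the power model reproduces the expected behavior that J rises steeply as the wetting-phase saturation Sw decreases.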
Adaptation of Predictive Models to PDA Hand-Held Devices
Lin, Edward J
2008-01-01
Full Text Available Prediction models using multiple logistic regression are appearing with increasing frequency in the medical literature. Problems associated with these models include the complexity of computations when applied in their pure form, and lack of availability at the bedside. Personal digital assistant (PDA) hand-held devices equipped with spreadsheet software offer the clinician a readily available and easily applied means of applying predictive models at the bedside. The purposes of this article are to briefly review regression as a means of creating predictive models and to describe a method of choosing and adapting logistic regression models to emergency department (ED) clinical practice.
A model to predict the power output from wind farms
Landberg, L. [Risø National Lab., Roskilde (Denmark)]
1997-12-31
This paper will describe a model that can predict the power output from wind farms. To give examples of input, the model is applied to a wind farm in Texas. The predictions are generated from forecasts from the NGM model of NCEP. These predictions are made valid at individual sites (wind farms) by applying a matrix calculated by the sub-models of WAsP (Wind Atlas Analysis and Application Program). The actual wind farm production is calculated using the Risø PARK model. Because of the preliminary nature of the results, they will not be given. However, similar results from Europe will be given.
Modelling microbial interactions and food structure in predictive microbiology
Malakar, P.K.
2002-01-01
Keywords: modelling, dynamic models, microbial interactions, diffusion, microgradients, colony growth, predictive microbiology.
Growth response of microorganisms in foods is a complex process. Innovations in food production and preservation techniques have resulted in adoption of new technologies
Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?
Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander
2016-01-01
Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
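The distinction between the two criteria can be sketched numerically. In the sketch below a linear "crop model", its parameter distribution, and the synthetic observations are all illustrative assumptions, not the authors' models or data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic observations from a "true" process y = 2x + noise.
x = rng.uniform(0, 10, 200)
y = 2.0 * x + rng.normal(0, 1.0, 200)

# MSEP_fixed: mean squared error of prediction for ONE model with fixed
# structure, parameters and inputs (here a calibrated slope of 1.9).
msep_fixed = np.mean((y - 1.9 * x) ** 2)

# MSEP_uncertain(X): squared error averaged over the DISTRIBUTION of
# model parameters (here an assumed slope distribution a ~ N(2.0, 0.1)).
a_draws = rng.normal(2.0, 0.1, 500)
preds = np.outer(a_draws, x)             # shape (500 draws, 200 points)
msep_uncertain = np.mean((y - preds) ** 2)

print(round(float(msep_fixed), 2), round(float(msep_uncertain), 2))
```

The fixed criterion can be read off from one model-vs-data comparison, while the uncertain criterion requires averaging a simulation experiment over the parameter (and, in general, structure and input) distributions, exactly as the abstract describes.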
Nørskov, Natalja; Hedemann, Mette Skou; Theil, Peter Kappel
2013-01-01
The concentration and absorption of the nine phenolic acids of wheat were measured in a model experiment with catheterized pigs fed whole grain wheat and wheat aleurone diets. Six pigs in a repeated crossover design were fitted with catheters in the portal vein and mesenteric artery to study the ...
Machine learning models in breast cancer survival prediction.
Montazeri, Mitra; Montazeri, Mohadeseh; Montazeri, Mahdieh; Beigzadeh, Amin
2016-01-01
Breast cancer is one of the most common cancers, with a high mortality rate among women. With early diagnosis, breast cancer survival will increase from 56% to more than 86%. Therefore, an accurate and reliable system is necessary for the early diagnosis of this cancer. The proposed model is the combination of rules and different machine learning techniques. Machine learning models can help physicians to reduce the number of false decisions. They try to exploit patterns and relationships among a large number of cases and predict the outcome of a disease using historical cases stored in datasets. The objective of this study is to propose a rule-based classification method with machine learning techniques for the prediction of different types of breast cancer survival. We use a dataset with eight attributes that includes the records of 900 patients, of whom 876 (97.3%) were female and 24 (2.7%) were male. Naive Bayes (NB), Trees Random Forest (TRF), 1-Nearest Neighbor (1NN), AdaBoost (AD), Support Vector Machine (SVM), RBF Network (RBFN), and Multilayer Perceptron (MLP) machine learning techniques with a 10-fold cross-validation technique were used with the proposed model for the prediction of breast cancer survival. The performance of the machine learning techniques was evaluated with accuracy, precision, sensitivity, specificity, and area under the ROC curve. Out of 900 patients, 803 patients were alive and 97 were dead. In this study, the Trees Random Forest (TRF) technique showed better results in comparison to the other techniques (NB, 1NN, AD, SVM, RBFN, MLP). The accuracy, sensitivity and area under the ROC curve of TRF are 96%, 96%, and 93%, respectively. However, the 1NN machine learning technique provided poor performance (accuracy 91%, sensitivity 91% and area under ROC curve 78%). This study demonstrates that the Trees Random Forest model (TRF), which is a rule-based classification model, was the best model with the highest level of
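The 10-fold cross-validation protocol described in this abstract can be sketched as follows. A trivial nearest-centroid classifier and synthetic two-class data stand in for the paper's Trees Random Forest and patient records; the protocol, not the classifier, is the point.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic two-class data (stand-in for patient records): 90 cases per
# class, 4 features, class means separated by 2 standard deviations.
X = np.vstack([rng.normal(0, 1, (90, 4)), rng.normal(2, 1, (90, 4))])
y = np.array([0] * 90 + [1] * 90)
idx = rng.permutation(len(y))
X, y = X[idx], y[idx]

def nearest_centroid_predict(Xtr, ytr, Xte):
    """Assign each test point to the class with the nearer training centroid."""
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    d0 = ((Xte - c0) ** 2).sum(1)
    d1 = ((Xte - c1) ** 2).sum(1)
    return (d1 < d0).astype(int)

# 10-fold cross-validation: hold out each fold once, train on the rest.
folds = np.array_split(np.arange(len(y)), 10)
accs = []
for f in folds:
    mask = np.ones(len(y), bool)
    mask[f] = False
    pred = nearest_centroid_predict(X[mask], y[mask], X[~mask])
    accs.append((pred == y[~mask]).mean())

print(round(float(np.mean(accs)), 3))
```

Sensitivity, specificity, and AUC would be computed per fold in the same loop, from the predicted and true labels of the held-out cases.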
ZHANG Hua; WANG Yun-jia; LI Yong-feng
2009-01-01
A new mathematical model to estimate the parameters of the probability-integral method for mining subsidence prediction is proposed. Based on least squares support vector machine (LS-SVM) theory, it is capable of improving the precision and reliability of mining subsidence prediction. Many of the geological and mining factors involved are related in a nonlinear way. The new model is based on statistical learning theory (SLT) and the empirical risk minimization (ERM) principle. Typical data collected from observation stations were used for the learning and training samples. The calculated results from the LS-SVM model were compared with the prediction results of a back propagation neural network (BPNN) model. The results show that the parameters were more precisely predicted by the LS-SVM model than by the BPNN model. The LS-SVM model was faster in computation and had better generalized performance. It provides a highly effective method for calculating the predicting parameters of the probability-integral method.
Modeling and prediction of surgical procedure times
P.S. Stepaniak (Pieter); C. Heij (Christiaan); G. de Vries (Guus)
2009-01-01
textabstractAccurate prediction of medical operation times is of crucial importance for cost efficient operation room planning in hospitals. This paper investigates the possible dependence of procedure times on surgeon factors like age, experience, gender, and team composition. The effect of these f
Active diagnosis of hybrid systems - A model predictive approach
2009-01-01
A method for active diagnosis of hybrid systems is proposed. The main idea is to predict the future output of both the normal and faulty models of the system; then at each time step an optimization problem is solved with the objective of maximizing the difference between the predicted normal and faulty outputs, constrained by tolerable performance requirements. As in standard model predictive control, the first element of the optimal input is applied to the system and the whole procedure is repeate...
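The receding-horizon principle in the last sentence can be sketched as a loop: optimize over a horizon, apply only the first input, repeat. The scalar integrator plant, quadratic tracking cost, and brute-force input grid below are toy stand-ins for the paper's hybrid-system diagnosis objective.

```python
import numpy as np

def solve_horizon(x, target, horizon=5):
    """Pick the best constant input over the horizon by grid search."""
    candidates = np.linspace(-1, 1, 21)
    best_u, best_cost = None, np.inf
    for u0 in candidates:
        xs, cost = x, 0.0
        for _ in range(horizon):
            xs = xs + u0              # toy integrator dynamics x' = x + u
            cost += (xs - target) ** 2  # quadratic tracking cost
        if cost < best_cost:
            best_u, best_cost = u0, cost
    return best_u

# Receding-horizon loop: re-solve at every step, apply only the first input.
x, target = 0.0, 3.0
for _ in range(10):
    u = solve_horizon(x, target)
    x = x + u
print(round(float(x), 2))
```

In the paper's setting the cost would instead reward separation between the predicted normal and faulty outputs, subject to performance constraints, but the apply-first-input-and-repeat structure is the same.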
Evaluation of Fast-Time Wake Vortex Prediction Models
Proctor, Fred H.; Hamilton, David W.
2009-01-01
Current fast-time wake models are reviewed and three basic types are defined. Predictions from several of the fast-time models are compared. Previous statistical evaluations of the APA-Sarpkaya and D2P fast-time models are discussed. Root Mean Square errors between fast-time model predictions and Lidar wake measurements are examined for a 24 hr period at Denver International Airport. Shortcomings in current methodology for evaluating wake errors are also discussed.
Comparison of Simple Versus Performance-Based Fall Prediction Models
Shekhar K. Gadkaree BS
2015-05-01
Full Text Available Objective: To compare the predictive ability of standard falls prediction models based on physical performance assessments with more parsimonious prediction models based on self-reported data. Design: We developed a series of fall prediction models progressing in complexity and compared area under the receiver operating characteristic curve (AUC) across models. Setting: National Health and Aging Trends Study (NHATS), which surveyed a nationally representative sample of Medicare enrollees (age ≥65) at baseline (Round 1: 2011-2012) and 1-year follow-up (Round 2: 2012-2013). Participants: In all, 6,056 community-dwelling individuals participated in Rounds 1 and 2 of NHATS. Measurements: Primary outcomes were 1-year incidence of “any fall” and “recurrent falls.” Prediction models were compared and validated in development and validation sets, respectively. Results: A prediction model that included demographic information, self-reported problems with balance and coordination, and previous fall history was the most parsimonious model that optimized AUC for both any fall (AUC = 0.69, 95% confidence interval [CI] = [0.67, 0.71]) and recurrent falls (AUC = 0.77, 95% CI = [0.74, 0.79]) in the development set. Physical performance testing provided a marginal additional predictive value. Conclusion: A simple clinical prediction model that does not include physical performance testing could facilitate routine, widespread falls risk screening in the ambulatory care setting.
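AUC, the comparison metric used throughout this record, can be computed directly as the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. The labels and scores below are illustrative, not NHATS data.

```python
import numpy as np

def auc(labels, scores):
    """Rank-based AUC: P(score of random positive > score of random negative)."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos, neg = scores[labels == 1], scores[labels == 0]
    # Count pairwise wins; ties count half.
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

labels = [0, 0, 1, 1, 0, 1]          # 1 = fell during follow-up
scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7]  # model-predicted fall risk
print(round(float(auc(labels, scores)), 3))
```

An AUC of 0.5 is chance-level discrimination and 1.0 is perfect, which is why values like 0.69 and 0.77 represent modest and good discrimination, respectively.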
Testing and analysis of internal hardwood log defect prediction models
R. Edward. Thomas
2011-01-01
The severity and location of internal defects determine the quality and value of lumber sawn from hardwood logs. Models have been developed to predict the size and position of internal defects based on external defect indicator measurements. These models were shown to predict approximately 80% of all internal knots based on external knot indicators. However, the size...
Refining the Committee Approach and Uncertainty Prediction in Hydrological Modelling
Kayastha, N.
2014-01-01
Due to the complexity of hydrological systems a single model may be unable to capture the full range of a catchment response and accurately predict the streamflows. The multi-modelling approach opens up possibilities for handling such difficulties and allows improving the predictive capability of models
Adding propensity scores to pure prediction models fails to improve predictive performance
Amy S. Nowacki
2013-08-01
Full Text Available Background. Propensity score usage seems to be growing in popularity, leading researchers to question the possible role of propensity scores in prediction modeling, despite the lack of a theoretical rationale. It is suspected that such requests are due to the lack of differentiation regarding the goals of predictive modeling versus causal inference modeling. Therefore, the purpose of this study is to formally examine the effect of propensity scores on predictive performance. Our hypothesis is that a multivariable regression model that adjusts for all covariates will perform as well as or better than those models utilizing propensity scores with respect to model discrimination and calibration. Methods. The most commonly encountered statistical scenarios for medical prediction (logistic and proportional hazards regression) were used to investigate this research question. Random cross-validation was performed 500 times to correct for optimism. The multivariable regression models adjusting for all covariates were compared with models that included adjustment for or weighting with the propensity scores. The methods were compared based on three predictive performance measures: (1) concordance indices; (2) Brier scores; and (3) calibration curves. Results. Multivariable models adjusting for all covariates had the highest average concordance index, the lowest average Brier score, and the best calibration. Propensity score adjustment and inverse probability weighting models without adjustment for all covariates performed worse than full models and failed to improve predictive performance with full covariate adjustment. Conclusion. Propensity score techniques did not improve prediction performance measures beyond multivariable adjustment. Propensity scores are not recommended if the analytical goal is pure prediction modeling.
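The Brier score, one of the three performance measures listed, is simply the mean squared difference between predicted probabilities and binary outcomes (lower is better). The predictions and outcomes below are illustrative values, not the study's data.

```python
import numpy as np

def brier(y_true, p_pred):
    """Brier score: mean squared difference between predicted probability
    and the 0/1 outcome. 0 is perfect; 0.25 matches always predicting 0.5."""
    y_true = np.asarray(y_true, float)
    p_pred = np.asarray(p_pred, float)
    return float(np.mean((p_pred - y_true) ** 2))

y = [1, 0, 1, 1, 0]                     # observed binary outcomes
p_good = [0.9, 0.1, 0.8, 0.7, 0.2]      # a well-calibrated model
p_poor = [0.6, 0.5, 0.5, 0.4, 0.5]      # a near-uninformative model
print(brier(y, p_good), brier(y, p_poor))
```

Averaging this score over the 500 random cross-validation splits, as the abstract describes, is what corrects the comparison for optimism.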
A new Cumulative Damage Model for Fatigue Life Prediction under Shot Peening Treatment
Abdul-Jabar H. Ali
2015-07-01
Full Text Available In this paper, fatigue damage accumulation was studied using several methods, i.e., Corton-Dalon (CD), Corton-Dalon-Marsh (CDM), a new non-linear model, and an experimental method. The predictions of fatigue lifetimes based on the two classical methods, Corton-Dalon (CD) and Corton-Dalon-Marsh (CDM), are uneconomic and non-conservative, respectively. However, satisfactory predictions were obtained by applying the proposed non-linear model (present model) for medium carbon steel, compared with experimental work. Many shortcomings of the two classical methods are related to their inability to take into account surface treatment effects such as shot peening. It is clear that the new model gives a much better and more conservative prediction of fatigue life in comparison with the CD and CDM methods. The predictions of the present model fell slightly below the experimental data, while CDM overestimated and CD strongly underestimated the life of the specimens.
Predicting recycling behaviour: Comparison of a linear regression model and a fuzzy logic model.
Vesely, Stepan; Klöckner, Christian A; Dohnal, Mirko
2016-03-01
In this paper we demonstrate that fuzzy logic can provide a better tool for predicting recycling behaviour than the customarily used linear regression. To show this, we take a set of empirical data on recycling behaviour (N=664), which we randomly divide into two halves. The first half is used to estimate a linear regression model of recycling behaviour, and to develop a fuzzy logic model of recycling behaviour. As the first comparison, the fit of both models to the data included in estimation of the models (N=332) is evaluated. As the second comparison, predictive accuracy of both models for "new" cases (hold-out data not included in building the models, N=332) is assessed. In both cases, the fuzzy logic model significantly outperforms the regression model in terms of fit. To conclude, when accurate predictions of recycling and possibly other environmental behaviours are needed, fuzzy logic modelling seems to be a promising technique.
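The split-half validation design described above can be sketched as follows. The data are synthetic, only the linear-regression baseline is shown, and the fuzzy-logic side is omitted; the point is the fit-on-one-half, assess-on-the-hold-out-half protocol.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for the N=664 survey data: intercept plus three
# predictors, with a known linear signal and noise.
n = 664
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
beta_true = np.array([1.0, 0.5, -0.3, 0.2])
y = X @ beta_true + rng.normal(0, 0.5, n)

# Random split into an estimation half and a hold-out half.
idx = rng.permutation(n)
train, hold = idx[:n // 2], idx[n // 2:]

# Fit ordinary least squares on the first half only.
beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)

# Predictive accuracy on the "new" hold-out cases.
resid = y[hold] - X[hold] @ beta
rmse = float(np.sqrt(np.mean(resid ** 2)))
print(round(rmse, 3))
```

The paper's comparison would run this same protocol twice, once per model class, and contrast the hold-out fit of the fuzzy model against this regression baseline.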
Impact of modellers' decisions on hydrological a priori predictions
Holländer, H. M.; Bormann, H.; Blume, T.; Buytaert, W.; Chirico, G. B.; Exbrayat, J.-F.; Gustafsson, D.; Hölzel, H.; Krauße, T.; Kraft, P.; Stoll, S.; Blöschl, G.; Flühler, H.
2014-06-01
In practice, the catchment hydrologist is often confronted with the task of predicting discharge without having the records needed for calibration. Here, we report the discharge predictions of 10 modellers - using the model of their choice - for the man-made Chicken Creek catchment (6 ha, northeast Germany, Gerwin et al., 2009b) and we analyse how well they improved their predictions over three steps, with information added prior to each following step. The modellers predicted the catchment's hydrological response in its initial phase without having access to the observed records. They used conceptually different physically based models and their modelling experience differed largely. Hence, they encountered two problems: (i) simulating discharge for an ungauged catchment and (ii) using models that were developed for catchments which are not in a state of landscape transformation. The prediction exercise was organized in three steps: (1) for the first prediction the modellers received a basic data set describing the catchment to a degree somewhat more complete than usually available for a priori predictions of ungauged catchments; they did not obtain information on stream flow, soil moisture, or groundwater response and therefore had to guess the initial conditions; (2) before the second prediction they inspected the catchment on-site and discussed their first prediction attempt; (3) for their third prediction they were offered additional data by charging them pro forma with the costs for obtaining this additional information. Holländer et al. (2009) discussed the range of predictions obtained in step (1). Here, we detail the modellers' assumptions and decisions in accounting for the various processes. We document the prediction progress as well as the learning process resulting from the availability of added information. For the second and third steps, the progress in prediction quality is evaluated in relation to individual modelling experience and costs of
Animal Models for Muscular Dystrophy Show Different Patterns of Sarcolemmal Disruption
1997-01-01
Genetic defects in a number of components of the dystrophin–glycoprotein complex (DGC) lead to distinct forms of muscular dystrophy. However, little is known about how alterations in the DGC are manifested in the pathophysiology present in dystrophic muscle tissue. One hypothesis is that the DGC protects the sarcolemma from contraction-induced damage. Using tracer molecules, we compared sarcolemmal integrity in animal models for muscular dystrophy and in muscular dystrophy patient samples. Ev...
A Model Lesson: Finland Shows Us What Equal Opportunity Looks Like
Sahlberg, Pasi
2012-01-01
International indicators show that Finland has one of the most educated citizenries in the world, provides educational opportunities in an egalitarian manner, and makes efficient use of resources. But at the beginning of the 1990s, education in Finland was nothing special in international terms. The performance of Finnish students on international…
Nash, Evelyn E.; Peters, Brian M.; Lilly, Elizabeth A.; Noverr, Mairi C.; Fidel, Paul L.
2016-01-01
Candida glabrata is the second most common organism isolated from women with vulvovaginal candidiasis (VVC), particularly in women with uncontrolled diabetes mellitus. However, mechanisms involved in the pathogenesis of C. glabrata-associated VVC are unknown and have not been studied at any depth in animal models. The objective of this study was to evaluate host responses to infection following efforts to optimize a murine model of C. glabrata VVC. For this, various designs were evaluated for consistent experimental vaginal colonization (i.e., type 1 and type 2 diabetic mice, exogenous estrogen, varying inocula, and co-infection with C. albicans). Upon model optimization, vaginal fungal burden and polymorphonuclear neutrophil (PMN) recruitment were assessed longitudinally over 21 days post-inoculation, together with vaginal concentrations of IL-1β, S100A8 alarmin, lactate dehydrogenase (LDH), and in vivo biofilm formation. Consistent and sustained vaginal colonization with C. glabrata was achieved in estrogenized streptozotocin-induced type 1 diabetic mice. Vaginal PMN infiltration was consistently low, with IL-1β, S100A8, and LDH concentrations similar to uninoculated mice. Biofilm formation was not detected in vivo, and co-infection with C. albicans did not induce synergistic immunopathogenic effects. These data suggest that experimental vaginal colonization of C. glabrata is not associated with an inflammatory immunopathogenic response or biofilm formation. PMID:26807975
Hussey, Peter S; Ridgely, M Susan; Rosenthal, Meredith B
2011-11-01
Fee-for-service payment is blamed for many of the problems observed in the US health care system. One of the leading alternative payment models proposed in the Affordable Care Act of 2010 is bundled payment, which provides payment for all of the care a patient needs over the course of a defined clinical episode, instead of paying for each discrete service. We evaluated the initial "road test" of PROMETHEUS Payment, one of several bundled payment pilot projects. The project has faced substantial implementation challenges, and none of the three pilot sites had executed contracts or made bundled payments as of May 2011. The pilots have taken longer to set up than expected, primarily because of the complexity of the payment model and the fact that it builds on the existing fee-for-service payment system and other complexities of health care. Participants continue to see promise and value in the bundled payment model, but the pilot results suggest that the desired benefits of this and other payment reforms may take time and considerable effort to materialize.
Evelyn E Nash
Candida glabrata is the second most common organism isolated from women with vulvovaginal candidiasis (VVC), particularly in women with uncontrolled diabetes mellitus. However, mechanisms involved in the pathogenesis of C. glabrata-associated VVC are unknown and have not been studied at any depth in animal models. The objective of this study was to evaluate host responses to infection following efforts to optimize a murine model of C. glabrata VVC. For this, various designs were evaluated for consistent experimental vaginal colonization (i.e., type 1 and type 2 diabetic mice, exogenous estrogen, varying inocula, and co-infection with C. albicans). Upon model optimization, vaginal fungal burden and polymorphonuclear neutrophil (PMN) recruitment were assessed longitudinally over 21 days post-inoculation, together with vaginal concentrations of IL-1β, S100A8 alarmin, lactate dehydrogenase (LDH), and in vivo biofilm formation. Consistent and sustained vaginal colonization with C. glabrata was achieved in estrogenized streptozotocin-induced type 1 diabetic mice. Vaginal PMN infiltration was consistently low, with IL-1β, S100A8, and LDH concentrations similar to uninoculated mice. Biofilm formation was not detected in vivo, and co-infection with C. albicans did not induce synergistic immunopathogenic effects. These data suggest that experimental vaginal colonization of C. glabrata is not associated with an inflammatory immunopathogenic response or biofilm formation.
PEEX Modelling Platform for Seamless Environmental Prediction
Baklanov, Alexander; Mahura, Alexander; Arnold, Stephen; Makkonen, Risto; Petäjä, Tuukka; Kerminen, Veli-Matti; Lappalainen, Hanna K.; Ezau, Igor; Nuterman, Roman; Zhang, Wen; Penenko, Alexey; Gordov, Evgeny; Zilitinkevich, Sergej; Kulmala, Markku
2017-04-01
The Pan-Eurasian EXperiment (PEEX) is a multidisciplinary, multi-scale research programme started in 2012 and aimed at resolving the major uncertainties in Earth System Science and global sustainability issues concerning the Arctic and boreal Northern Eurasian regions and China. Such challenges include climate change, air quality, biodiversity loss, chemicalization, food supply, and the use of natural resources by mining, industry, energy production and transport. The research infrastructure introduces the current state-of-the-art modelling platform and observation systems in the Pan-Eurasian region and presents the future baselines for the coherent and coordinated research infrastructures in the PEEX domain. The PEEX Modelling Platform is characterized by a complex seamless integrated Earth System Modelling (ESM) approach, in combination with specific models of different processes and elements of the system, acting on different temporal and spatial scales. An ensemble approach is taken to the integration of modelling results from different models, participants and countries. PEEX utilizes the full potential of a hierarchy of models: scenario analysis, inverse modelling, and modelling based on measurement needs and processes. The models are validated and constrained by available in-situ and remote sensing data of various spatial and temporal scales using data assimilation and top-down modelling. The analyses of the anticipated large volumes of data produced by available models and sensors will be supported by a dedicated virtual research environment developed for these purposes.
Integrated hydro-bacterial modelling for predicting bathing water quality
Huang, Guoxian; Falconer, Roger A.; Lin, Binliang
2017-03-01
In recent years, health risks associated with the non-compliance of bathing water quality have received increasing worldwide attention. However, it is particularly challenging to establish the source of any non-compliance, due to the complex nature of the sources of faecal indicator organisms, the fate and delivery processes, and the scarcity of field-measured data in many catchments and estuaries. In the current study an integrated hydro-bacterial model, linking catchment, 1-D river and 2-D estuarine/coastal models, was developed to simulate the adsorption-desorption processes of faecal bacteria to and from sediment particles in river, estuarine and coastal waters, respectively. The model was then validated using hydrodynamic, sediment and faecal bacteria concentration data, measured in 2012, in the Ribble river and estuary and along the Fylde coast, UK. Particular emphasis has been placed on the mechanism of faecal bacteria transport and decay through the deposition and resuspension of suspended sediments. The results showed that coupling the E. coli concentration with the sediment transport processes improved the accuracy of the predicted E. coli levels. A series of scenario runs were then carried out to investigate the impacts of different management scenarios on the E. coli concentration levels at the coastal bathing water sites around Liverpool Bay, UK. The model results show that the level of compliance with the new EU bathing water standards can be improved significantly by extending outfalls and/or reducing urban sources by typically 50%.
Model Predictive Controller Design for the Dynamic Positioning System of a Semi-submersible Platform
Hongli Chen; Lei Wan; Fang Wang; Guocheng Zhang
2012-01-01
This paper researches how to apply the advanced control technology of model predictive control (MPC) to the design of the dynamic positioning system (DPS) of a semi-submersible platform. First, a linear low-frequency motion model with three degrees of freedom was established in the context of a semi-submersible platform. Second, a model predictive controller was designed based on a model which took the constraints of the system into account. Third, simulation was carried out to demonstrate the feasibility of the controller. The results show that the model predictive controller has good performance and deals well with the constraints of the system.
Zhu Hongqiu; Yang Chunhua; Gui Weihua
2009-01-01
To effectively predict the permeability index of the smelting process in an imperial smelting furnace, an intelligent prediction model is proposed. It integrates case-based reasoning (CBR) with adaptive particle swarm optimization (PSO). The number of nearest neighbors and the weighted feature vector are optimized online using the adaptive PSO to improve the prediction accuracy of CBR. An adaptive inertia weight and a mutation operation are used to overcome the premature convergence of the PSO. The proposed method is validated and compared with the basic weighted CBR. The results show that the proposed model has higher prediction accuracy and better performance than the basic CBR model.
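The CBR core described above retrieves similar past process states and combines their outcomes. As an illustrative sketch (not the authors' implementation), the retrieval-and-reuse step can be written as a feature-weighted nearest-neighbour predictor; the feature weights `weights` and neighbour count `k` here are exactly the quantities the adaptive PSO would tune online:

```python
import math

def weighted_distance(a, b, w):
    """Euclidean distance with per-feature weights."""
    return math.sqrt(sum(wi * (ai - bi) ** 2 for ai, bi, wi in zip(a, b, w)))

def cbr_predict(cases, query, weights, k):
    """Predict the target for `query` as the inverse-distance-weighted mean
    of the k nearest stored cases. `cases` is a list of (features, target)."""
    ranked = sorted(cases, key=lambda c: weighted_distance(c[0], query, weights))
    nearest = ranked[:k]
    eps = 1e-9  # avoids division by zero for an exact match
    num = sum(t / (weighted_distance(f, query, weights) + eps) for f, t in nearest)
    den = sum(1.0 / (weighted_distance(f, query, weights) + eps) for f, t in nearest)
    return num / den
```

In the paper, a PSO loop would evaluate candidate `(weights, k)` settings against held-out cases and keep the best-scoring combination.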
Experimental and Modeling Studies on the Prediction of Gas Hydrate Formation
Jian-Yi Liu
2015-01-01
On the basis of kinetic model analysis and kinetic observation of the hydrate formation process, a new prediction model of gas hydrate formation is proposed. Analysis of the present model shows that gas hydrate formation depends not only on gas composition and free water content but also on temperature and pressure. In comparison experiments, the predictions of the new gas hydrate crystallization kinetics method were close to the measured results, indicating that the method reflects hydrate crystallization accurately.
Thorndahl, Søren; Nielsen, Jesper Ellerbæk; Jensen, David Getreuer
2016-12-01
Flooding produced by high-intensity local rainfall and drainage system capacity exceedance can have severe impacts in cities. In order to prepare cities for these types of flood events - especially in the future climate - it is valuable to be able to simulate these events numerically, both historically and in real-time. There is a rather untested potential in real-time prediction of urban floods. In this paper, radar data observations with different spatial and temporal resolution, radar nowcasts of 0-2 h leadtime, and numerical weather models with leadtimes up to 24 h are used as inputs to an integrated flood and drainage systems model in order to investigate the relative difference between different inputs in predicting future floods. The system is tested on the small town of Lystrup in Denmark, which was flooded in 2012 and 2014. Results show that it is possible to generate detailed flood maps in real-time with high-resolution radar rainfall data, but forecast performance is rather limited for leadtimes of more than half an hour.
Nonlinear model identification and adaptive model predictive control using neural networks.
Akpan, Vincent A; Hassapis, George D
2011-04-01
This paper presents two new adaptive model predictive control algorithms, both consisting of an on-line process identification part and a predictive control part. Both parts are executed at each sampling instant. The predictive control part of the first algorithm is the Nonlinear Model Predictive Control strategy and the control part of the second algorithm is the Generalized Predictive Control strategy. In the identification parts of both algorithms the process model is approximated by a series-parallel neural network structure which is trained by an adaptive recursive least squares (ARLS) method. The two control algorithms have been applied to: 1) the temperature control of a fluidized bed furnace reactor (FBFR) of a pilot plant and 2) the auto-pilot control of an F-16 aircraft. The training and validation data of the neural network are obtained from the open-loop simulation of the FBFR and the nonlinear F-16 aircraft models. The identification and control simulation results show that the first algorithm outperforms the second one at the expense of extra computation time.
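The identification part trains the network with recursive least squares at each sampling instant. A minimal self-contained sketch of the standard RLS recursion (here for a two-parameter linear-in-parameters model, not the paper's full neural network) is:

```python
def rls_identify(data, lam=0.99):
    """Recursive least squares for y = a*u1 + b*u2.
    `data` is a list of ((u1, u2), y) samples; `lam` is the forgetting
    factor. Returns the (a, b) estimates after processing all samples."""
    a, b = 0.0, 0.0
    # Covariance matrix P, initialised large (weak prior on parameters).
    p11, p12, p21, p22 = 1e6, 0.0, 0.0, 1e6
    for (u1, u2), y in data:
        # Gain K = P*phi / (lam + phi' * P * phi), with phi = (u1, u2)
        pu1 = p11 * u1 + p12 * u2
        pu2 = p21 * u1 + p22 * u2
        denom = lam + u1 * pu1 + u2 * pu2
        k1, k2 = pu1 / denom, pu2 / denom
        # Parameter update driven by the prediction error
        err = y - (a * u1 + b * u2)
        a += k1 * err
        b += k2 * err
        # Covariance update: P = (P - K * phi' * P) / lam
        p11, p12 = (p11 - k1 * pu1) / lam, (p12 - k1 * pu2) / lam
        p21, p22 = (p21 - k2 * pu1) / lam, (p22 - k2 * pu2) / lam
    return a, b
```

The same recursion, applied to the output-layer weights of a series-parallel network, gives the on-line identification step the abstract describes.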
Zhang, Yongqiang; Vaze, Jai; Chiew, Francis H. S.; Teng, Jin; Li, Ming
2014-09-01
Understanding a catchment's behaviours in terms of its underlying hydrological signatures is a fundamental task in surface water hydrology. It can help in water resource management, catchment classification, and prediction of runoff time series. This study investigated three approaches for predicting six hydrological signatures in southeastern Australia. These approaches were (1) spatial interpolation with three weighting schemes, (2) index model that estimates hydrological signatures using catchment characteristics, and (3) classical rainfall-runoff modelling. The six hydrological signatures fell into two categories: (1) long-term aggregated signatures - annual runoff coefficient, mean of log-transformed daily runoff, and zero flow ratio, and (2) signatures obtained from daily flow metrics - concavity index, seasonality ratio of runoff, and standard deviation of log-transformed daily flow. A total of 228 unregulated catchments were selected, with half the catchments randomly selected as gauged (or donors) for model building and the rest considered as ungauged (or receivers) to evaluate performance of the three approaches. The results showed that for two long-term aggregated signatures - the log-transformed daily runoff and runoff coefficient, the index model and rainfall-runoff modelling performed similarly, and were better than the spatial interpolation methods. For the zero flow ratio, the index model was best and the rainfall-runoff modelling performed worst. The other three signatures, derived from daily flow metrics and considered to be salient flow characteristics, were best predicted by the spatial interpolation methods of inverse distance weighting (IDW) and kriging. Comparison of flow duration curves predicted by the three approaches showed that the IDW method was best. The results found here provide guidelines for choosing the most appropriate approach for predicting hydrological behaviours at large scales.
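Of the three approaches, inverse distance weighting is simple enough to sketch in a few lines. This is a generic IDW estimator for transferring a hydrological signature from gauged (donor) to ungauged (receiver) catchments; the exact weighting schemes used in the study are not reproduced here:

```python
def idw_predict(gauged, target_xy, power=2.0):
    """Inverse distance weighting: estimate a signature at an ungauged
    catchment located at `target_xy` from values at gauged donors.
    `gauged` is a list of ((x, y), value) pairs."""
    num = den = 0.0
    for (x, y), value in gauged:
        d2 = (x - target_xy[0]) ** 2 + (y - target_xy[1]) ** 2
        if d2 == 0.0:
            return value  # target coincides with a donor catchment
        w = 1.0 / d2 ** (power / 2.0)  # weight = 1 / distance**power
        num += w * value
        den += w
    return num / den
```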
Models Predicting Success of Infertility Treatment: A Systematic Review
Zarinara, Alireza; Zeraati, Hojjat; Kamali, Koorosh; Mohammad, Kazem; Shahnazari, Parisa; Akhondi, Mohammad Mehdi
2016-01-01
Background: Infertile couples face problems that affect their marital life. Infertility treatment is expensive, time consuming and sometimes simply not possible. Prediction models for infertility treatment have been proposed, and prediction of treatment success is a new field in infertility treatment. Because prediction of treatment success is a new need for infertile couples, this paper reviewed previous studies to develop a general picture of the applicability of these models. Methods: This study was conducted as a systematic review at Avicenna Research Institute in 2015. Six databases were searched based on WHO definitions and MeSH keywords. Papers about prediction models in infertility were evaluated. Results: Eighty-one papers were eligible for the study. The papers covered the years after 1986, and the studies were designed both retrospectively and prospectively. IVF prediction models accounted for the largest share of the papers. The most common predictors were age, duration of infertility, and ovarian and tubal problems. Conclusion: A prediction model can be clinically applied if it can be statistically evaluated and has good validation for treatment success. To achieve better results, physicians and couples need estimates of treatment success rates based on history, examination and clinical tests. Models must be checked for theoretical soundness and appropriate validation. The advantages of applying prediction models are reductions in cost and time, avoidance of painful treatment for patients, assessment of the treatment approach for physicians, and support for decision making by health managers. Careful selection of the approach for designing and using these models is essential. PMID:27141461
The regional prediction model of PM10 concentrations for Turkey
Güler, Nevin; Güneri İşçi, Öznur
2016-11-01
This study aims to develop a regional model for predicting weekly PM10 concentrations measured at air pollution monitoring stations in Turkey. There are seven geographical regions in Turkey and numerous monitoring stations in each region. Building a model conventionally for each monitoring station requires much labor and time, and prediction quality may degrade when the number of measurements obtained from a monitoring station is small. Besides, prediction models obtained this way only reflect the air pollutant behavior of a small area. This study uses the Fuzzy C-Auto Regressive Model (FCARM) to find a prediction model that reflects the regional behavior of weekly PM10 concentrations. The advantage of FCARM is its ability to consider simultaneously the PM10 concentrations measured at the monitoring stations in the specified region. It also works even if the number of measurements obtained from the monitoring stations is different or small. To evaluate the performance of FCARM, it was executed for all regions in Turkey and the prediction results were compared to statistical autoregressive (AR) models fitted for each station separately. According to the Mean Absolute Percentage Error (MAPE) criterion, FCARM provides better predictions with fewer models.
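FCARM itself is not reproduced here, but the per-station AR baseline and the MAPE criterion it is compared against can be sketched generically; this fits a first-order autoregressive model by least squares:

```python
def fit_ar1(series):
    """Least-squares fit of x[t] = c + phi * x[t-1], the kind of
    per-station AR baseline the study compares against."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    phi = cov / var
    c = my - phi * mx
    return c, phi

def mape(actual, predicted):
    """Mean absolute percentage error, the comparison criterion used."""
    return 100.0 / len(actual) * sum(
        abs((a - p) / a) for a, p in zip(actual, predicted))
```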
Turn-Taking Model in the Chinese Recruitment Reality show-BelongtoYou
AI Fan-qing
2014-01-01
Based on the theories of conversational analysis proposed by Sacks et al., this paper chooses excerpts of candidates' interviews from the Chinese recruitment reality TV show BelongtoYou on Tianjin TV. Through analyzing the excerpts, it demonstrates how the rules of turn-taking are applied in this program, and summarizes the features of the turn-taking strategies used by the host, candidates and bosses.
Global thermal niche models of two European grasses show high invasion risks in Antarctica.
Pertierra, Luis R; Aragón, Pedro; Shaw, Justine D; Bergstrom, Dana M; Terauds, Aleks; Olalla-Tárraga, Miguel Ángel
2016-12-14
The two non-native grasses that have established long-term populations in Antarctica (Poa pratensis and Poa annua) were studied from a global multidimensional thermal niche perspective to address the biological invasion risk to Antarctica. These two species exhibit contrasting introduction histories and reproductive strategies and represent two referential case studies of biological invasion processes. We used a multistep process with a range of species distribution modelling techniques (ecological niche factor analysis, multidimensional envelopes, distance/entropy algorithms) together with a suite of thermoclimatic variables, to characterize the potential ranges of these species. Their native bioclimatic thermal envelopes in Eurasia, together with the different naturalized populations across continents, were compared next. The potential niche of P. pratensis was wider at the cold extremes; however, P. annua life history attributes enable it to be a more successful colonizer. We observe that particularly cold summers are a key aspect of the unique Antarctic environment. In consequence, ruderals such as P. annua can quickly expand under such harsh conditions, whereas the more stress-tolerant P. pratensis endures and persists through steady growth. Compiled data on human pressure at the Antarctic Peninsula allowed us to provide site-specific biosecurity risk indicators. We conclude that several areas across the region are vulnerable to invasions from these and other similar species. This can only be visualized in species distribution models (SDMs) when accounting for founder populations that reveal nonanalogous conditions. Results reinforce the need for strict management practices to minimize introductions. Furthermore, our novel set of temperature-based bioclimatic GIS layers for ice-free terrestrial Antarctica provide a mechanism for regional and global species distribution models to be built for other potentially invasive species.
Gaussian mixture models as flux prediction method for central receivers
Grobler, Annemarie; Gauché, Paul; Smit, Willie
2016-05-01
Flux prediction methods are crucial to the design and operation of central receiver systems. Current methods such as the circular and elliptical (bivariate) Gaussian prediction methods are often used in field layout design and aiming strategies. For experimental or small central receiver systems, the flux profile of a single heliostat often deviates significantly from the circular and elliptical Gaussian models. Therefore a novel method of flux prediction was developed by incorporating the fitting of Gaussian mixture models onto flux profiles produced by flux measurement or ray tracing. A method was also developed to predict the Gaussian mixture model parameters of a single heliostat for a given time using image processing. Recording the predicted parameters in a database ensures that more accurate predictions are made in a shorter time frame.
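A Gaussian mixture flux model of the kind described evaluates the flux at a receiver point as a weighted sum of bivariate Gaussians. A minimal sketch, with hypothetical component parameters (the fitting itself, by measurement or ray tracing, is not shown), is:

```python
import math

def mixture_flux(x, y, components):
    """Flux at receiver point (x, y) as a weighted sum of axis-aligned
    bivariate Gaussians; each component is (weight, mux, muy, sigx, sigy)."""
    total = 0.0
    for w, mux, muy, sx, sy in components:
        norm = 1.0 / (2.0 * math.pi * sx * sy)  # normalising constant
        expo = -0.5 * (((x - mux) / sx) ** 2 + ((y - muy) / sy) ** 2)
        total += w * norm * math.exp(expo)
    return total
```

A circular or elliptical Gaussian prediction is the single-component special case; the mixture adds components until the measured profile of a heliostat is matched.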
Warren, Jessica; Owen, A Rhys; Glanvill, Amy; Francis, Asher; Maboni, Grazieli; Nova, Rodrigo J; Wapenaar, Wendela; Rees, Catherine; Tötemeyer, Sabine
2015-08-31
Listerial keratoconjunctivitis ('silage eye') is a widespread problem in ruminants, causing economic losses to farmers and impacting negatively on animal welfare. It results from direct entry of Listeria monocytogenes into the eye, often following consumption of contaminated silage. An isolation protocol for bovine conjunctival swabbing was developed and used to sample both infected and healthy bovine eyes (n=46). L. monocytogenes was isolated from only one healthy eye sample, suggesting that this organism can be present without causing disease. To initiate a study of this disease, an infection model was developed using isolated conjunctiva explants obtained from cattle eyes post slaughter. Conjunctivae were cultured and infected for 20 h with a range of L. monocytogenes isolates (n=11), including the healthy bovine eye isolate and also strains isolated from other bovine sources, such as milk or clinical infections. Two L. monocytogenes isolates (one from a healthy eye and one from a cattle abortion) were markedly less able to invade conjunctiva explants, but one of those was able to efficiently infect Caco-2 cells, indicating that it was fully virulent. These two isolates were also significantly more sensitive to lysozyme compared to most other isolates tested, suggesting that lysozyme resistance is an important factor when infecting bovine conjunctiva. In conclusion, we present the first bovine conjunctiva explant model for infection studies and demonstrate that clinical L. monocytogenes isolates from cases of bovine keratoconjunctivitis are able to infect these tissues.
Rastafa I Geddes
PURPOSE: Controlled cortical impact (CCI) models in adult and aged Sprague-Dawley (SD) rats have been used extensively to study medial prefrontal cortex (mPFC) injury and the effects of post-injury progesterone treatment, but the hormone's effects after traumatic brain injury (TBI) in juvenile animals have not been determined. In the present proof-of-concept study we investigated whether progesterone had neuroprotective effects in a pediatric model of moderate to severe bilateral brain injury. METHODS: Twenty-eight-day-old (PND 28) male Sprague-Dawley rats received sham (n = 24) or CCI (n = 47) injury and were given progesterone (4, 8, or 16 mg/kg per 100 g body weight) or vehicle injections on post-injury days (PID) 1-7, subjected to behavioral testing from PID 9-27, and analyzed for lesion size at PID 28. RESULTS: The 8 and 16 mg/kg doses of progesterone were observed to be most beneficial in reducing the effect of CCI on lesion size and behavior in PND 28 male SD rats. CONCLUSION: Our findings suggest that a midline CCI injury to the frontal cortex will reliably produce a moderate TBI comparable to what is seen in the adult male rat and that progesterone can ameliorate the injury-induced deficits.
REALIGNED MODEL PREDICTIVE CONTROL OF A PROPYLENE DISTILLATION COLUMN
A. I. Hinojosa
Abstract: In the process industry, advanced controllers usually aim at an economic objective, which usually requires closed-loop stability and constraint satisfaction. In this paper, the application of MPC in the optimization structure of an industrial propylene/propane (PP) splitter is tested with a controller based on a state-space model, which is suitable for heavily disturbed environments. The simulation platform is based on the integration of the commercial dynamic simulator Dynsim® and the rigorous steady-state optimizer ROMeo® with the real-time facilities of Matlab. The predictive controller is the Infinite Horizon Model Predictive Control (IHMPC), based on a state-space model that does not require a state observer because the non-minimal state is built from past inputs and outputs. The controller considers zone control of the outputs and optimizing targets for the inputs. We verify that the controller is efficient in controlling the propylene distillation system in a disturbed scenario when compared with a conventional controller based on a state observer. The simulation results show good performance in terms of stability of the controller and rejection of large disturbances in the composition of the feed of the propylene distillation column.
A model SN2 reaction ‘on water’ does not show rate enhancement
Nelson, Katherine V.; Benjamin, Ilan
2011-05-01
Molecular dynamics calculations of the benchmark nucleophilic substitution reaction (SN2) Cl- + CH3Cl are carried out at the water liquid/vapor interface. The reaction free energy profile and the activation free energy are determined as a function of the reactants' location normal to the surface. The activation free energy remains almost constant relative to that in bulk water, despite the fact that the barrier is expected to significantly decrease as the reaction is carried out near the vapor phase. We show that this is due to the combined effects of a clustering of water molecules around the nucleophile and a relatively weak hydration of the transition state.
Bourgeault, Adeline, E-mail: bourgeault@ensil.unilim.fr [Cemagref, Unité de Recherche Hydrosystèmes et Bioprocédés, 1 rue Pierre-Gilles de Gennes, 92761 Antony (France); FIRE, FR-3020, 4 place Jussieu, 75005 Paris (France); Gourlay-Francé, Catherine, E-mail: catherine.gourlay@cemagref.fr [Cemagref, Unité de Recherche Hydrosystèmes et Bioprocédés, 1 rue Pierre-Gilles de Gennes, 92761 Antony (France); FIRE, FR-3020, 4 place Jussieu, 75005 Paris (France); Priadi, Cindy, E-mail: cindy.priadi@eng.ui.ac.id [LSCE/IPSL CEA-CNRS-UVSQ, Avenue de la Terrasse, 91198 Gif-sur-Yvette (France); Ayrault, Sophie, E-mail: Sophie.Ayrault@lsce.ipsl.fr [LSCE/IPSL CEA-CNRS-UVSQ, Avenue de la Terrasse, 91198 Gif-sur-Yvette (France); Tusseau-Vuillemin, Marie-Hélène, E-mail: Marie-helene.tusseau@ifremer.fr [IFREMER Technopolis 40, 155 rue Jean-Jacques Rousseau, 92138 Issy-les-Moulineaux (France)
2011-12-15
This study investigates the ability of the biodynamic model to predict the trophic bioaccumulation of cadmium (Cd), chromium (Cr), copper (Cu), nickel (Ni) and zinc (Zn) in a freshwater bivalve. Zebra mussels were transplanted to three sites along the Seine River (France) and collected monthly for 11 months. Measurements of the metal body burdens in mussels were compared with the predictions from the biodynamic model. The exchangeable fraction of metal particles did not account for the bioavailability of particulate metals, since it did not capture the differences between sites. The assimilation efficiency (AE) parameter is necessary to take into account biotic factors influencing particulate metal bioavailability. The biodynamic model, applied with AEs from the literature, overestimated the measured concentrations in zebra mussels, the extent of overestimation being site-specific. Therefore, an original methodology was proposed for in situ AE measurements for each site and metal. - Highlights: > Exchangeable fraction of metal particles did not account for the bioavailability of particulate metals. > Need for site-specific biodynamic parameters. > Field-determined AE provide a good fit between the biodynamic model predictions and bioaccumulation measurements. - The interpretation of metal bioaccumulation in transplanted zebra mussels with biodynamic modelling highlights the need for site-specific assimilation efficiencies of particulate metals.
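The biodynamic model referred to is commonly written as a first-order uptake-efflux balance, dC/dt = ku·Cw + AE·IR·Cf − ke·C (dissolved uptake plus dietary uptake minus efflux). A sketch under that assumption (parameter values hypothetical, not the authors' code or field data) is:

```python
def biodynamic_rate(c, ku, cw, ae, ir, cf, ke):
    """Net accumulation rate: dissolved uptake (ku*Cw) + dietary uptake
    (AE*IR*Cf) - first-order efflux (ke*C)."""
    return ku * cw + ae * ir * cf - ke * c

def simulate(days, dt, ku, cw, ae, ir, cf, ke):
    """Forward-Euler integration of tissue concentration from a clean start."""
    c = 0.0
    for _ in range(int(days / dt)):
        c += dt * biodynamic_rate(c, ku, cw, ae, ir, cf, ke)
    return c
```

At steady state the concentration tends to (ku·Cw + AE·IR·Cf)/ke, which is why a site-specific assimilation efficiency (AE) shifts the predicted body burden directly.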
Nonlinear model predictive control of a packed distillation column
Patwardhan, A.A.; Edgar, T.F. (Univ. of Texas, Austin, TX (United States). Dept. of Chemical Engineering)
1993-10-01
A rigorous dynamic model based on fundamental chemical engineering principles was formulated for a packed distillation column separating a mixture of cyclohexane and n-heptane. This model was simplified to a form suitable for use in on-line model predictive control calculations. A packed distillation column was operated at several operating conditions to estimate two unknown model parameters in the rigorous and simplified models. The actual column response to step changes in the feed rate, distillate rate, and reboiler duty agreed well with dynamic model predictions. One unusual characteristic observed was that the packed column exhibited gain-sign changes, which are very difficult to treat using conventional linear feedback control. Nonlinear model predictive control was used to control the distillation column at an operating condition where the process gain changed sign. An on-line, nonlinear model-based scheme was used to estimate unknown/time-varying model parameters.
Model predictive control for Z-source power converter
Mo, W.; Loh, P.C.; Blaabjerg, Frede
2011-01-01
This paper presents Model Predictive Control (MPC) of an impedance-source (commonly known as Z-source) power converter. Output voltage control and current control for the Z-source inverter are analyzed and simulated. Exploiting MPC's ability to regulate multiple system variables, load current and voltage regulation, impedance-network inductor current, capacitor voltage, switching frequency fixation, transient reservation and null-state penalization are all handled as constraints of this control method. The quality of the output waveform, stability of the impedance network, constrained switching frequency and robust transient response can be obtained at the same time with a formulated Z-source network model. Steady-state and transient-state simulations of MPC are presented, which show the good reference-tracking ability of this control method.
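For power converters, MPC is often implemented in finite-control-set form: enumerate the candidate switch states, predict one step ahead with a discrete model, and apply the state minimizing a cost. A generic single-step sketch with a simple RL-load prediction model (an illustration, not the paper's Z-source network model) is:

```python
def fcs_mpc_step(i_now, i_ref, v_candidates, r, l, e, ts):
    """One step of finite-control-set MPC: for each candidate output
    voltage (one per switch state), predict the next inductor current
    with a forward-Euler RL-load model and return the voltage whose
    prediction is closest to the reference current."""
    best_v, best_cost = None, float("inf")
    for v in v_candidates:
        # i[k+1] = i[k] + Ts/L * (v - R*i[k] - e), e = back-EMF / grid voltage
        i_pred = i_now + ts / l * (v - r * i_now - e)
        cost = abs(i_ref - i_pred)  # extra cost terms would enforce the
        if cost < best_cost:        # paper's other constraints
            best_v, best_cost = v, cost
    return best_v
```

The additional regulated quantities in the abstract (capacitor voltage, switching frequency, null-state penalization) would enter as weighted terms in the same cost function.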
Prediction and setup of phytoplankton statistical model of Qiandaohu Lake
严力蛟; 全为民; 赵晓慧
2004-01-01
This research considers the mathematical relationship between the concentration of Chla and seven environmental factors: lake water temperature (T), Secchi depth (SD), pH, DO, CODMn, total nitrogen (TN) and total phosphorus (TP). Stepwise linear regression of 1997 to 1999 monitoring data at each sampling point of Qiandaohu Lake yielded the multivariate regression models presented in this paper. The Chla concentration simulated for the year 2000 by the regression model was similar to the observed values. The suggested mathematical relationship could be used to predict changes in the lake water environment at any point in time. The results showed that SD, TP and pH were the most significant factors affecting Chla concentration.
In vitro and in vivo models of Huntington's disease show alterations in the endocannabinoid system.
Bari, Monica; Battista, Natalia; Valenza, Marta; Mastrangelo, Nicolina; Malaponti, Marinella; Catanzaro, Giuseppina; Centonze, Diego; Finazzi-Agrò, Alessandro; Cattaneo, Elena; Maccarrone, Mauro
2013-07-01
In this study, we analyzed the components of the endocannabinoid system (ECS) in R6/2 mice, a widely used model of Huntington's disease (HD). We measured the endogenous content of N-arachidonoylethanolamine and 2-arachidonoylglycerol and the activity of their biosynthetic enzymes (N-acyl-phosphatidylethanolamine-hydrolyzing phospholipase D and diacylglycerol lipase, respectively) and hydrolytic enzymes [fatty acid amide hydrolase (FAAH) and monoacylglycerol lipase, respectively] and of their target receptors (type 1 cannabinoid receptor, type 2 cannabinoid receptor, and transient receptor potential vanilloid-1) in the brains of wild-type and R6/2 mice of different ages, as well as in the striatum and cortex of 12-week-old animals. In addition, we measured FAAH activity in lymphocytes of R6/2 mice. In the whole brains of 12-week-old R6/2 mice, we found reductions in N-acyl-phosphatidylethanolamine-hydrolyzing phospholipase D activity, diacylglycerol lipase activity and cannabinoid receptor binding, mostly associated with changes in the striatum but not in the cortex, as well as an increase in 2-arachidonoylglycerol content as compared with wild-type littermates, without any other change in ECS elements. Then, our analysis was extended to HD43 cells, an inducible cellular model of HD derived from rat ST14A cells. In both induced and noninduced conditions, we demonstrated a fully functional ECS. Overall, our data suggest that the ECS is differently affected in mouse and human HD, and that HD43 cells are suitable for high-throughput screening of FAAH-oriented drugs affecting HD progression.
Predictive functional control based on fuzzy T-S model for HVAC systems temperature control
Hongli L(U); Lei JIA; Shulan KONG; Zhaosheng ZHANG
2007-01-01
In heating, ventilating and air-conditioning (HVAC) systems, there exist severe nonlinearity, time-varying behavior, disturbances and uncertainties. A new predictive functional control based on a Takagi-Sugeno (T-S) fuzzy model was proposed to control HVAC systems. The T-S fuzzy model of the stabilized controlled process was obtained using the least squares method; then, on the basis of the global linear predictive model derived from the T-S fuzzy model, the process was controlled by the predictive functional controller. In particular, a feedback regulation part was developed to compensate for uncertainties in the fuzzy predictive model. Simulation results in HVAC system control applications showed that the proposed fuzzy model predictive functional control improves tracking performance and robustness. Compared with a conventional PID controller, this control strategy has the advantages of less overshoot and shorter settling time.
Application of Nonlinear Predictive Control Based on RBF Network Predictive Model in MCFC Plant
CHEN Yue-hua; CAO Guang-yi; ZHU Xin-jian
2007-01-01
This paper describes a nonlinear model predictive controller for regulating a molten carbonate fuel cell (MCFC). A detailed mechanistic model of the output voltage of an MCFC is presented first. However, this model is too complicated to be used directly in a control system. Consequently, an offline radial basis function (RBF) network was introduced to build a nonlinear predictive model. The optimal control sequences were then obtained by applying the golden mean method. The models and controller were implemented in the MATLAB environment. Simulation results indicate that the proposed algorithm achieves satisfactory control performance even when the current densities vary widely.
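The "golden mean method" named above is commonly understood as golden-section search over a scalar decision variable. The sketch below illustrates that idea under this assumption; the quadratic objective is an illustrative stand-in, not the MCFC voltage model.

```python
# Golden-section search: minimize a unimodal function on an interval by
# shrinking the bracket with the golden ratio, so only one new function
# evaluation is needed per iteration.
GOLDEN = (5 ** 0.5 - 1) / 2  # ~0.618

def golden_section_min(f, lo, hi, tol=1e-6):
    """Minimize a unimodal function f on [lo, hi]."""
    a, b = lo, hi
    c = b - GOLDEN * (b - a)
    d = a + GOLDEN * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c                 # minimum lies in [a, d]
            c = b - GOLDEN * (b - a)
        else:
            a, c = c, d                 # minimum lies in [c, b]
            d = a + GOLDEN * (b - a)
    return (a + b) / 2

# Illustrative objective: squared tracking error of a scalar control input.
best_u = golden_section_min(lambda u: (u - 0.7) ** 2, 0.0, 1.0)
```

In a receding-horizon setting, such a one-dimensional search would be repeated at each sampling instant over the candidate control input.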
A burnout prediction model based around char morphology
T. Wu; E. Lester; M. Cloke [University of Nottingham, Nottingham (United Kingdom). Nottingham Energy and Fuel Centre]
2005-07-01
Poor burnout in a coal-fired power plant carries marked penalties in the form of reduced energy efficiency and elevated waste material that cannot be utilized. The prediction of coal combustion behaviour in a furnace is of great significance, providing valuable information not only for process optimization but also for coal buyers in the international market. Coal combustion models have been developed that can make predictions about burnout behaviour and burnout potential. Most of these kinetic models require standard parameters such as volatile content, particle size and assumed char porosity in order to make a burnout prediction. This paper presents a new model, the Char Burnout Model (ChB), that also uses detailed information about char morphology in its prediction. The model can take data input from one of two sources, both derived from image analysis techniques: the first from individual analysis and characterization of real char types using an automated program, the second from predicted char types based on data collected during the automated image analysis of coal particles. Modelling results were compared with a different carbon burnout kinetic model and with burnout data from re-firing the chars in a drop tube furnace operating at 1300°C and 5% oxygen across several residence times. The improved agreement between the ChB model and the DTF experimental data shows that including char morphology in combustion models can improve model predictions. 27 refs., 4 figs., 4 tabs.
Predicting growth conditions from internal metabolic fluxes in an in-silico model of E. coli.
Viswanadham Sridhara
Full Text Available A widely studied problem in systems biology is to predict bacterial phenotype from growth conditions, using mechanistic models such as flux balance analysis (FBA). However, the inverse prediction of growth conditions from phenotype is rarely considered. Here we develop a computational framework to carry out this inverse prediction on a computational model of bacterial metabolism. We use FBA to calculate bacterial phenotypes from growth conditions in E. coli, and then we assess how accurately we can predict the original growth conditions from the phenotypes. Prediction is carried out via regularized multinomial regression. Our analysis provides several important physiological and statistical insights. First, we show that by analyzing metabolic end products we can consistently predict growth conditions. Second, prediction is reliable even in the presence of small amounts of impurities. Third, flux through a relatively small number of reactions per growth source (∼10) is sufficient for accurate prediction. Fourth, combining the predictions from two separate models, one trained only on carbon sources and one only on nitrogen sources, performs better than models trained to perform joint prediction. Finally, the fact that separate predictions perform better than a more sophisticated joint prediction scheme suggests that carbon and nitrogen utilization pathways, despite jointly affecting cellular growth, may be fairly decoupled in terms of their dependence on specific assortments of molecular precursors.
A zebrafish model of glucocorticoid resistance shows serotonergic modulation of the stress response
Brian Griffiths
2012-10-01
Full Text Available One function of glucocorticoids is to restore homeostasis after an acute stress response by providing negative feedback to stress circuits in the brain. Loss of this negative feedback leads to elevated physiological stress and may contribute to depression, anxiety and post-traumatic stress disorder. We investigated the early, developmental effects of glucocorticoid signaling deficits on stress physiology and related behaviors using a mutant zebrafish, grs357, with non-functional glucocorticoid receptors. These mutants are morphologically inconspicuous and adult-viable. A previous study of adult grs357 mutants showed loss of glucocorticoid-mediated negative feedback and elevated physiological and behavioral stress markers. Already at five days post-fertilization, mutant larvae had elevated whole-body cortisol, increased expression of pro-opiomelanocortin (POMC), the precursor of adrenocorticotropic hormone (ACTH), and failed to show normal suppression of stress markers after dexamethasone treatment. Mutant larvae had larger auditory-evoked startle responses compared to wildtype sibling controls (grwt), despite having lower spontaneous activity levels. Fluoxetine (Prozac) treatment in mutants decreased startle responding and increased spontaneous activity, making them behaviorally similar to wildtype. This result mirrors known effects of selective serotonin reuptake inhibitors (SSRIs) in modifying glucocorticoid signaling and alleviating stress disorders in human patients. Our results suggest that larval grs357 zebrafish can be used to study behavioral, physiological and molecular aspects of stress disorders. Most importantly, interactions between glucocorticoid and serotonin signaling appear to be highly conserved among vertebrates, suggesting deep homologies at the neural circuit level and opening up new avenues for research into psychiatric conditions.
The atherogenic Scarb1 null mouse model shows a high bone mass phenotype.
Martineau, Corine; Martin-Falstrault, Louise; Brissette, Louise; Moreau, Robert
2014-01-01
Scavenger receptor class B, type I (SR-BI), the Scarb1 gene product, is a receptor associated with cholesteryl ester uptake from high-density lipoproteins (HDL), which drives cholesterol movement from peripheral tissues toward the liver for excretion; consequently, Scarb1 null mice are prone to atherosclerosis. Because studies have linked atherosclerosis incidence with osteoporosis, we characterized bone metabolism in these mice. Bone morphometry was assessed through microcomputed tomography and histology. Marrow stromal cells (MSCs) were used to characterize the influence of endogenous SR-BI on cell functions. Total and HDL-associated cholesterol in null mice were increased by 32-60%, consistent with the receptor's role in lipoprotein metabolism. Distal metaphyses from 2- and 4-mo-old null mice showed correspondingly 46 and 37% higher bone volume fraction associated with a higher number of trabeculae. Histomorphometric analyses in 2-mo-old null male mice revealed 1.42-fold greater osteoblast surface, 1.37-fold higher percent mineralizing surface, and 1.69-fold enhanced bone formation rate. In vitro assays of MSCs from null mice revealed a 37% higher proliferation rate, 48% more alkaline phosphatase activity, 70% greater mineralization potential and 2-fold higher osterix (Sp7) expression, yet a 0.5-fold decrease in caveolin-1 (Cav1) expression. Selective uptake levels of HDL-associated cholesteryl oleate and estradiol were similar between MSCs from wild-type and Scarb1 null mice, suggesting that selective uptake is not SR-BI's main role in these cells. However, Scarb1 knockout blunted the HDL-dependent regulation of Cav1 gene expression. Scarb1 null mice are not prone to osteoporosis but show higher bone mass associated with enhanced bone formation.
Improving Environmental Model Calibration and Prediction
2011-01-18
Design and Implementation of a Novel File Access Prediction Model in Linux
LIU Xie; LIU Xin-song; YANG Feng; BAI Ying-jie
2004-01-01
So far, file access prediction models have mainly been based on either file access frequency or the historical record of the most recent accesses. In this paper, a new file access prediction model called frequency- and recency-based successor (FRS) is presented, which combines the advantages of file frequency with the historical record. The FRS model can respond rapidly to workload changes and can predict future events with greater accuracy than most other prediction models. To evaluate the performance of the FRS model, the Linux kernel was modified to predict and prefetch upcoming accesses. The experiment shows that FRS can accurately predict approximately 80% of all file access events, while maintaining an immediate successor queue (ISQ) per file that requires only regular dynamic updates.
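As a rough illustration of how frequency and recency evidence might be blended to rank successor candidates, here is a hypothetical sketch; the class name, weighting scheme, and data structures are assumptions for illustration, not the paper's actual FRS algorithm.

```python
# Hypothetical frequency- and recency-based successor predictor: for each
# file, candidate successors are scored by a weighted blend of how often
# and how recently they followed that file in the access stream.
from collections import defaultdict

class FRSPredictor:
    def __init__(self, alpha=0.5):
        self.alpha = alpha                                   # frequency vs. recency weight
        self.counts = defaultdict(lambda: defaultdict(int))  # successor counts per file
        self.last_seen = defaultdict(dict)                   # last time each successor followed
        self.clock = 0
        self.prev = None

    def access(self, path):
        """Record one file access, updating successor statistics."""
        self.clock += 1
        if self.prev is not None:
            self.counts[self.prev][path] += 1
            self.last_seen[self.prev][path] = self.clock
        self.prev = path

    def predict(self, path):
        """Return the highest-scoring successor of path, or None."""
        succ = self.counts.get(path)
        if not succ:
            return None
        total = sum(succ.values())
        def score(s):
            freq = succ[s] / total                    # normalized frequency
            rec = self.last_seen[path][s] / self.clock  # recency in [0, 1]
            return self.alpha * freq + (1 - self.alpha) * rec
        return max(succ, key=score)

p = FRSPredictor()
for f in ["a", "b", "a", "b", "a", "b"]:
    p.access(f)
```

A kernel prefetcher built on such a predictor would call `predict` on each access and issue read-ahead for the returned path.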
Model Predictive Control for Smart Energy Systems
Halvgaard, Rasmus
…load shifting capabilities of the units that adapt to the given price predictions. We furthermore evaluated control performance in terms of economic savings for different control strategies and forecasts. Chapter 5 describes and compares the proposed large-scale Aggregator control strategies. Aggregators are assumed to play an important role in the future Smart Grid and to coordinate a large portfolio of units. The developed economic MPC controllers interface each unit directly to an Aggregator. We developed several MPC-based aggregation strategies that coordinate the global behavior of a portfolio…
To predict the niche, model colonization and extinction
Yackulic, Charles B.; Nichols, James D.; Reid, Janice; Der, Ricky
2015-01-01
Ecologists frequently try to predict the future geographic distributions of species. Most studies assume that the current distribution of a species reflects its environmental requirements (i.e., the species' niche). However, the current distributions of many species are unlikely to be at equilibrium with the current distribution of environmental conditions, both because of ongoing invasions and because the distribution of suitable environmental conditions is always changing. This mismatch between the equilibrium assumptions inherent in many analyses and the disequilibrium conditions in the real world leads to inaccurate predictions of species' geographic distributions and suggests the need for theory and analytical tools that avoid equilibrium assumptions. Here, we develop a general theory of environmental associations during periods of transient dynamics. We show that time-invariant relationships between environmental conditions and rates of local colonization and extinction can produce substantial temporal variation in occupancy–environment relationships. We then estimate occupancy–environment relationships during three avian invasions. Changes in occupancy–environment relationships over time differ among species but are predicted by dynamic occupancy models. Since estimates of the occupancy–environment relationships themselves are frequently poor predictors of future occupancy patterns, research should increasingly focus on characterizing how rates of local colonization and extinction vary with environmental conditions.
Application of Model Predictive Control to BESS for Microgrid Control
Thai-Thanh Nguyen
2015-08-01
Full Text Available Battery energy storage systems (BESSs) have been widely used for microgrid control. Generally, BESS control systems are based on proportional-integral (PI) control techniques, with the outer and inner control loops based on PI regulators. Recently, model predictive control (MPC) has attracted attention for application to future energy processing and control systems because it can easily deal with multivariable cases, system constraints, and nonlinearities. This study considers the application of MPC-based BESSs to microgrid control. Two types of MPC are presented in this study: MPC based on predictive power control (PPC), and MPC based on PI control in the outer and predictive current control (PCC) in the inner control loops. In particular, the effective application of MPC for microgrids with multiple BESSs should be considered because of the differences in their control performance. In this study, microgrids with two BESSs based on two MPC techniques are considered as an example. The control performance of the MPC used for the control microgrid is compared to that of the PI control. The proposed control strategy is investigated through simulations using MATLAB/Simulink software. The simulation results show that the response time, power and voltage ripples, and frequency spectrum could be improved significantly by using MPC.
Quantitative modelling in cognitive ergonomics: predicting signals passed at danger.
Moray, Neville; Groeger, John; Stanton, Neville
2017-02-01
This paper shows how to combine field observations, experimental data and mathematical modelling to produce quantitative explanations and predictions of complex events in human-machine interaction. As an example, we consider a major railway accident. In 1999, a commuter train passed a red signal near Ladbroke Grove, UK, into the path of an express. We use the Public Inquiry Report, 'black box' data, and accident and engineering reports to construct a case history of the accident. We show how to combine field data with mathematical modelling to estimate the probability that the driver observed and identified the state of the signals, and checked their status. Our methodology can explain the SPAD ('Signal Passed At Danger'), generate recommendations about signal design and placement, and provide quantitative guidance for the design of safer railway speed limits and signal locations. Practitioner Summary: Detailed ergonomic analysis of railway signals and rail infrastructure reveals problems of signal identification at this location. A record of driver eye movements measures attention, from which a quantitative model for signal placement and permitted speeds can be derived. The paper is an example of how to combine field data, basic research and mathematical modelling to solve ergonomic design problems.
A thermodynamic model to predict wax formation in petroleum fluids
Coutinho, J.A.P. [Universidade de Aveiro (Portugal). Dept. de Quimica. Centro de Investigacao em Quimica]. E-mail: jcoutinho@dq.ua.pt; Pauly, J.; Daridon, J.L. [Universite de Pau et des Pays de l' Adour, Pau (France). Lab. des Fluides Complexes
2001-12-01
Some years ago the authors proposed a model for the non-ideality of the solid phase based on the Predictive Local Composition concept. This was first applied to the Wilson equation and later extended to the NRTL and UNIQUAC models. Predictive UNIQUAC proved extraordinarily successful in predicting the behaviour of both model and real hydrocarbon fluids at low temperatures. This work illustrates the ability of Predictive UNIQUAC to describe the low-temperature behaviour of petroleum fluids. It is shown that using Predictive UNIQUAC to describe the solid-phase non-ideality yields a complete prediction of the low-temperature behaviour of synthetic paraffin solutions, fuels and crude oils. The composition of both liquid and solid phases, the amount of crystals formed and the cloud points are predicted within the accuracy of the experimental data. The extension of Predictive UNIQUAC to high pressures, by coupling it with an EOS/G^E model based on the SRK EOS used with the LCVM mixing rule, is proposed, and predictions of phase envelopes for live oils are compared with experimental data. (author)
Metabolic remodeling agents show beneficial effects in the dystrophin-deficient mdx mouse model
Jahnke, Vanessa E.
2012-08-01
Full Text Available Abstract Background Duchenne muscular dystrophy is a genetic disease involving severe muscle wasting that is characterized by cycles of muscle degeneration/regeneration and culminates in early death in affected boys. Mitochondria are presumed to be involved in the regulation of myoblast proliferation/differentiation; enhancing mitochondrial activity with exercise mimetics (AMPK and PPAR-delta agonists) increases muscle function and inhibits muscle wasting in healthy mice. We therefore asked whether metabolic remodeling agents that increase mitochondrial activity would improve muscle function in mdx mice. Methods Twelve-week-old mdx mice were treated with two different metabolic remodeling agents (GW501516 and AICAR), separately or in combination, for 4 weeks. Extensive systematic behavioral, functional, histological, biochemical, and molecular tests were conducted to assess the drugs' effects. Results We found a gain in body and muscle weight in all treated mice. Histologic examination showed a decrease in muscle inflammation and in the number of fibers with central nuclei, and an increase in fibers with peripheral nuclei, with significantly fewer activated satellite cells and regenerating fibers. Together with an inhibition of FoxO1 signaling, these results indicated that the treatments reduced ongoing muscle damage. Conclusions The three treatments produced significant improvements in disease phenotype, including an increase in overall behavioral activity and significant gains in forelimb and hind limb strength. Our findings suggest that triggering mitochondrial activity with exercise mimetics improves muscle function in dystrophin-deficient mdx mice.
Male Wistar rats show individual differences in an animal model of conformity.
Jolles, Jolle W; de Visser, Leonie; van den Bos, Ruud
2011-09-01
Conformity refers to the act of changing one's behaviour to match that of others. Recent studies in humans have shown that individual differences exist in conformity and that these differences are related to differences in neuronal activity. To understand the neuronal mechanisms in more detail, animal tests to assess conformity are needed. Here, we used a test of conformity in rats that has previously been evaluated in female, but not male, rats and assessed the nature of individual differences in conformity. Male Wistar rats were given the opportunity to learn that two diets differed in palatability. They were subsequently exposed to a demonstrator that had consumed the less palatable food. Thereafter, they were exposed to the same diets again. Just like female rats, male rats decreased their preference for the more palatable food after interaction with demonstrator rats that had eaten the less palatable food. Individual differences existed for this shift, which were only weakly related to an interaction between their own initial preference and the amount consumed by the demonstrator rat. The data show that this conformity test in rats is a promising tool to study the neurobiology of conformity.
A systematic review of predictive modeling for bronchiolitis.
Luo, Gang; Nkoy, Flory L; Gesteland, Per H; Glasgow, Tiffany S; Stone, Bryan L
2014-10-01
Bronchiolitis is the most common cause of illness leading to hospitalization in young children. At present, many bronchiolitis management decisions are made subjectively, leading to significant practice variation among hospitals and physicians caring for children with bronchiolitis. To standardize care for bronchiolitis, researchers have proposed various models to predict the disease course to help determine a proper management plan. This paper reviews the existing state of the art of predictive modeling for bronchiolitis. Predictive modeling for respiratory syncytial virus (RSV) infection is covered whenever appropriate, as RSV accounts for about 70% of bronchiolitis cases. A systematic review was conducted through a PubMed search up to April 25, 2014. The literature on predictive modeling for bronchiolitis was retrieved using a comprehensive search query, which was developed through an iterative process. Search results were limited to human subjects, the English language, and children (birth to 18 years). The literature search returned 2312 references in total. After manual review, 168 of these references were determined to be relevant and are discussed in this paper. We identify several limitations and open problems in predictive modeling for bronchiolitis, and provide some preliminary thoughts on how to address them, in the hope of stimulating future research in this domain. Many problems remain open in predictive modeling for bronchiolitis. Future studies will need to address them to achieve optimal predictive models. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Osman, Marisol; Vera, C. S.
2016-11-01
This work presents an assessment of the predictability and skill of climate anomalies over South America. The study considered a multi-model ensemble of seasonal forecasts for surface air temperature, precipitation and regional circulation from coupled global circulation models included in the Climate Historical Forecast Project. Predictability was evaluated through the estimation of the signal-to-total variance ratio, while prediction skill was assessed by computing anomaly correlation coefficients. Over the continent, both indicators present higher values in the tropics than in the extratropics for both surface air temperature and precipitation. Moreover, predictability and prediction skill for temperature are slightly higher in DJF than in JJA, while for precipitation they exhibit similar levels in both seasons. The largest values of predictability and skill for both variables and seasons are found over northwestern South America, while modest but still significant values appear for extratropical precipitation at southeastern South America and the extratropical Andes. The predictability levels of both variables in ENSO years are slightly higher, although with the same spatial distribution, than those obtained considering all years. Nevertheless, predictability in the tropics for both variables and seasons diminishes in both warm and cold ENSO years with respect to that in all years. The latter can be attributed to changes in signal rather than in noise. Predictability and prediction skill for low-level winds and upper-level zonal winds over South America were also assessed. Maximum levels of predictability for low-level winds were found where maximum mean values are observed, i.e. the regions associated with the equatorial trade winds, the midlatitude westerlies and the South American Low-Level Jet. Predictability maxima for upper-level zonal winds locate where the subtropical jet peaks. Seasonal changes in wind predictability are observed that seem to be related to
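The two diagnostics named in this abstract can be sketched in a few lines; the pure-Python implementation and toy data below are illustrative assumptions, not the study's code.

```python
# Signal-to-total variance ratio across an ensemble of forecasts, and the
# anomaly correlation coefficient (ACC) between forecast and observed
# anomalies (Pearson correlation of the anomaly series).
def signal_to_total_ratio(ensembles):
    """ensembles: list of years, each a list of ensemble-member anomalies."""
    n = sum(len(e) for e in ensembles)
    grand = sum(x for e in ensembles for x in e) / n
    means = [sum(e) / len(e) for e in ensembles]
    # Signal: variance of the ensemble means; total: variance of all members.
    signal = sum(len(e) * (m - grand) ** 2 for e, m in zip(ensembles, means)) / n
    total = sum((x - grand) ** 2 for e in ensembles for x in e) / n
    return signal / total

def acc(forecast, observed):
    """Anomaly correlation coefficient between two anomaly series."""
    fm = sum(forecast) / len(forecast)
    om = sum(observed) / len(observed)
    fa = [f - fm for f in forecast]
    oa = [o - om for o in observed]
    num = sum(f * o for f, o in zip(fa, oa))
    den = (sum(f * f for f in fa) * sum(o * o for o in oa)) ** 0.5
    return num / den
```

A ratio near 1 means the ensemble spread is dominated by the predictable signal; an ACC near 1 means forecast anomalies track observed anomalies closely.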
Predicting and Modelling of Survival Data when Cox's Regression Model does not hold
Scheike, Thomas H.; Zhang, Mei-Jie
2002-01-01
Aalen model; additive risk model; counting processes; competing risk; Cox regression; flexible modeling; goodness of fit; prediction of survival; survival analysis; time-varying effects
Genomic Prediction of Genotype × Environment Interaction Kernel Regression Models.
Cuevas, Jaime; Crossa, José; Soberanis, Víctor; Pérez-Elizalde, Sergio; Pérez-Rodríguez, Paulino; Campos, Gustavo de Los; Montesinos-López, O A; Burgueño, Juan
2016-11-01
In genomic selection (GS), genotype × environment interaction (G × E) can be modeled by a marker × environment interaction (M × E). The G × E may be modeled through a linear kernel or a nonlinear (Gaussian) kernel. In this study, we propose using two nonlinear Gaussian kernels: the reproducing kernel Hilbert space with kernel averaging (RKHS KA) and the Gaussian kernel with the bandwidth estimated through an empirical Bayesian method (RKHS EB). We performed single-environment analyses and extended them to account for G × E interaction (GBLUP-G × E, RKHS KA-G × E and RKHS EB-G × E) in wheat and maize data sets. For single-environment analyses of the wheat and maize data sets, RKHS EB and RKHS KA had higher prediction accuracy than GBLUP for all environments. For the wheat data, the RKHS KA-G × E and RKHS EB-G × E models showed up to 60-68% superiority over the corresponding single-environment models for pairs of environments with positive correlations. For the wheat data set, the models with Gaussian kernels had accuracies up to 17% higher than that of GBLUP-G × E. For the maize data set, the prediction accuracy of RKHS EB-G × E and RKHS KA-G × E was, on average, 5 to 6% higher than that of GBLUP-G × E. The superiority of the Gaussian kernel models over the linear kernel is due to more flexible kernels that account for small, complex marker main effects and marker-specific interaction effects.
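The Gaussian kernel underlying such RKHS models has the familiar form K(x_i, x_j) = exp(-||x_i - x_j||^2 / h), where the bandwidth h is either averaged over (KA) or estimated empirically (EB). A minimal sketch, with an illustrative toy marker matrix and bandwidth (both assumptions, not the study's data):

```python
# Pairwise Gaussian kernel matrix over rows of a marker matrix X.
import math

def gaussian_kernel_matrix(X, h):
    """K[i][j] = exp(-squared Euclidean distance between rows i and j / h)."""
    def sqdist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return [[math.exp(-sqdist(a, b) / h) for b in X] for a in X]

# Toy marker codes for three lines; h chosen arbitrarily for illustration.
X = [[0, 1, 1], [0, 1, 0], [2, 0, 1]]
K = gaussian_kernel_matrix(X, h=2.0)
```

In a GBLUP-style model the linear kernel XX' would take the place of K; the Gaussian kernel's extra flexibility is what the abstract credits for the accuracy gains.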
Predictive error analysis for a water resource management model
Gallagher, Mark; Doherty, John
2007-02-01
In calibrating a model, a set of parameters is assigned to the model which will be employed for the making of all future predictions. If these parameters are estimated through solution of an inverse problem, formulated to be properly posed through either pre-calibration or mathematical regularisation, then solution of this inverse problem will, of necessity, lead to a simplified parameter set that omits the details of reality, while still fitting historical data acceptably well. Furthermore, estimates of parameters so obtained will be contaminated by measurement noise. Both of these phenomena will lead to errors in predictions made by the model, with the potential for error increasing with the hydraulic property detail on which the prediction depends. Integrity of model usage demands that model predictions be accompanied by some estimate of the possible errors associated with them. The present paper applies theory developed in a previous work to the analysis of predictive error associated with a real world, water resource management model. The analysis offers many challenges, including the fact that the model is a complex one that was partly calibrated by hand. Nevertheless, it is typical of models which are commonly employed as the basis for the making of important decisions, and for which such an analysis must be made. The potential errors associated with point-based and averaged water level and creek inflow predictions are examined, together with the dependence of these errors on the amount of averaging involved. Error variances associated with predictions made by the existing model are compared with "optimized error variances" that could have been obtained had calibration been undertaken in such a way as to minimize predictive error variance. The contributions by different parameter types to the overall error variance of selected predictions are also examined.
Models for short term malaria prediction in Sri Lanka
Galappaththy, Gawrie NL
2008-05-01
Full Text Available Abstract Background Malaria in Sri Lanka is unstable and fluctuates in intensity both spatially and temporally. Although case counts are dwindling at present, given the past history of resurgence of outbreaks despite effective control measures, the control programmes have to stay prepared. The availability of long time series of monitored/diagnosed malaria cases allows for the study of forecasting models, with an aim to developing a forecasting system which could assist in the efficient allocation of resources for malaria control. Methods Exponentially weighted moving average models, autoregressive integrated moving average (ARIMA) models with seasonal components, and seasonal multiplicative autoregressive integrated moving average (SARIMA) models were compared on monthly time series of district malaria cases for their ability to predict the number of malaria cases one to four months ahead. The addition of covariates such as the number of malaria cases in neighbouring districts or rainfall was assessed for its ability to improve prediction of selected (seasonal) ARIMA models. Results The best model for forecasting and the forecasting error varied strongly among the districts. The addition of rainfall as a covariate improved prediction of selected (seasonal) ARIMA models modestly in some districts but worsened prediction in others. Improvement by adding rainfall was more frequent at larger forecasting horizons. Conclusion The heterogeneity of malaria patterns in Sri Lanka requires regionally specific prediction models. Prediction error was large, at a minimum of 22% (for one of the districts) for one-month-ahead predictions. The modest improvement made in short-term prediction by adding rainfall as a covariate to these prediction models may not be sufficient to merit investing in a forecasting system for which rainfall data are routinely processed.
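Of the model families compared in this abstract, the exponentially weighted moving average is the simplest to sketch. The smoothing factor and case counts below are illustrative assumptions, not the study's data.

```python
# One-step-ahead EWMA forecast of a monthly case-count series: the forecast
# is a smoothed level that gives geometrically decaying weight to the past.
def ewma_forecast(series, alpha=0.3):
    """Return the one-step-ahead EWMA forecast for a series of counts."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level  # blend new obs into level
    return level

cases = [120, 95, 110, 130, 90, 105]  # illustrative monthly malaria counts
next_month = ewma_forecast(cases)
```

(S)ARIMA models generalize this by adding autoregressive, differencing and seasonal terms; the abstract's finding is that the best choice among these families varies by district.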
Aggregate driver model to enable predictable behaviour
Chowdhury, A.; Chakravarty, T.; Banerjee, T.; Balamuralidhar, P.
2015-09-01
The categorization of driving styles, particularly in terms of aggressiveness and skill, is an emerging area of interest under the broader theme of intelligent transportation. There are two possible discriminatory techniques that can be applied for such categorization: a micro-scale (event-based) model and a macro-scale (aggregate) model. It is believed that an aggregate model will reveal many interesting aspects of human-machine interaction; for example, we may be able to understand the propensity of individuals to carry out a given task over longer periods of time. A useful driver model may include the adaptive capability of the human driver, aggregated as the individual propensity to control speed/acceleration. Towards that objective, we carried out experiments by deploying a smartphone-based application for data collection by a group of drivers. Data were primarily collected from GPS measurements, including position and speed on a second-by-second basis, for a number of trips over a two-month period. Analysing the data set, aggregate models for individual drivers were created and their natural aggressiveness was deduced. In this paper, we present initial results for 12 drivers. It is shown that the higher-order moments of the acceleration profile are important parameters and identifiers of journey quality. It is also observed that the kurtosis of the acceleration profile carries major information about driving style. These observations lead to two different ranking systems based on acceleration data. Such driver behaviour models can be integrated with vehicle and road models to generate behavioural models for real traffic scenarios.
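The kurtosis-based aggregate feature highlighted above can be sketched directly; the acceleration series below are invented for illustration, not the study's data.

```python
# Excess kurtosis of a trip's acceleration profile: heavy-tailed profiles
# (rare but harsh acceleration/braking events) score high, smooth driving
# scores low. Computed in pure Python from a second-by-second series.
def excess_kurtosis(xs):
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n   # second central moment
    m4 = sum((x - mean) ** 4 for x in xs) / n   # fourth central moment
    return m4 / (m2 ** 2) - 3.0                 # 0 for a Gaussian profile

smooth = [0.1, -0.1, 0.05, -0.05, 0.1, -0.1]        # gentle driving (m/s^2)
harsh = [0.0, 0.0, 0.0, 0.0, 3.0, -3.0, 0.0, 0.0]   # rare hard events
```

Ranking drivers by this statistic is one plausible reading of the acceleration-based ranking systems the abstract mentions.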
Validating predictions from climate envelope models
Watling, J.; Bucklin, D.; Speroterra, C.; Brandt, L.; Cabal, C.; Romañach, Stephanie S.; Mazzotti, Frank J.
2013-01-01
Climate envelope models are a potentially important conservation tool, but their ability to accurately forecast species’ distributional shifts using independent survey data has not been fully evaluated. We created climate envelope models for 12 species of North American breeding birds previously shown to have experienced poleward range shifts. For each species, we evaluated three different approaches to climate envelope modeling that differed in the way they treated climate-induced range expansion and contraction, using random forests and maximum entropy modeling algorithms. All models were calibrated using occurrence data from 1967–1971 (t1) and evaluated using occurrence data from 1998–2002 (t2). Model sensitivity (the ability to correctly classify species presences) was greater using the maximum entropy algorithm than the random forest algorithm. Although sensitivity did not differ significantly among approaches, for many species, sensitivity was maximized using a hybrid approach that assumed range expansion, but not contraction, in t2. Species for which the hybrid approach resulted in the greatest improvement in sensitivity have been reported from more land cover types than species for which there was little difference in sensitivity between hybrid and dynamic approaches, suggesting that habitat generalists may be buffered somewhat against climate-induced range contractions. Specificity (the ability to correctly classify species absences) was maximized using the random forest algorithm and was lowest using the hybrid approach. Overall, our results suggest cautious optimism for the use of climate envelope models to forecast range shifts, but also underscore the importance of considering non-climate drivers of species range limits. The use of alternative climate envelope models that make different assumptions about range expansion and contraction is a new and potentially useful way to help inform our understanding of climate change effects on species.
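Sensitivity and specificity, as used above to evaluate the models, reduce to simple counts over the t2 presence/absence records. A minimal illustration (the predicted and observed occurrence vectors are invented):

```python
def sens_spec(predicted, actual):
    """Sensitivity = fraction of true presences correctly classified;
    specificity = fraction of true absences correctly classified."""
    tp = sum(1 for p, a in zip(predicted, actual) if p and a)
    tn = sum(1 for p, a in zip(predicted, actual) if not p and not a)
    fn = sum(1 for p, a in zip(predicted, actual) if not p and a)
    fp = sum(1 for p, a in zip(predicted, actual) if p and not a)
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical model output vs. observed occurrences at 8 survey sites
predicted = [1, 1, 0, 1, 0, 0, 1, 0]
actual    = [1, 0, 0, 1, 0, 1, 1, 0]
sensitivity, specificity = sens_spec(predicted, actual)
```

A hybrid approach that assumes expansion but not contraction tends to predict more presences, which raises sensitivity at the cost of specificity, exactly the trade-off the abstract reports.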
Noncausal spatial prediction filtering based on an ARMA model
Liu Zhipeng; Chen Xiaohong; Li Jingye
2009-01-01
Conventional f-x prediction filtering methods are based on an autoregressive model. The error section is first computed as source noise but is removed as additive noise to obtain the signal, which results in an assumption inconsistency before and after filtering. In this paper, an autoregressive moving-average (ARMA) model is employed to avoid this model inconsistency. Based on the ARMA model, a noncausal prediction filter is computed and a self-deconvolved projection filter is used for estimating additive noise in order to suppress random noise. The 1-D ARMA model is also extended to the 2-D spatial domain, which is the basis for noncausal spatial prediction filtering for random noise attenuation on 3-D seismic data. Synthetic and field data processing indicate that this method can suppress random noise more effectively while preserving the signal, and performs much better than other conventional prediction filtering methods.
Two Predictions of a Compound Cue Model of Priming
Walenski, Matthew
2003-01-01
This paper examines two predictions of the compound cue model of priming (Ratcliff and McKoon, 1988). While this model has been used to provide an account of a wide range of priming effects, it may not actually predict priming in these or other circumstances. In order to predict priming effects, the compound cue model relies on an assumption that all items have the same number of associates. This assumption may be true in only a restricted number of cases. This paper demonstrates that when th...
Evolutionary modeling and prediction of non-coding RNAs in Drosophila.
Robert K Bradley
Full Text Available We performed benchmarks of phylogenetic grammar-based ncRNA gene prediction, experimenting with eight different models of structural evolution and two different programs for genome alignment. We evaluated our models using alignments of twelve Drosophila genomes. We find that ncRNA prediction performance can vary greatly between different gene predictors and subfamilies of ncRNA gene. Our estimates for false positive rates are based on simulations which preserve local islands of conservation; using these simulations, we predict a higher rate of false positives than previous computational ncRNA screens have reported. Using one of the tested prediction grammars, we provide an updated set of ncRNA predictions for D. melanogaster and compare them to previously-published predictions and experimental data. Many of our predictions show correlations with protein-coding genes. We found significant depletion of intergenic predictions near the 3' end of coding regions and furthermore depletion of predictions in the first intron of protein-coding genes. Some of our predictions are colocated with larger putative unannotated genes: for example, 17 of our predictions showing homology to the RFAM family snoR28 appear in a tandem array on the X chromosome; the 4.5 Kbp spanned by the predicted tandem array is contained within a FlyBase-annotated cDNA.
The application of modeling and prediction with MRA wavelet network
LU Shu-ping; YANG Xue-jing; ZHAO Xi-ren
2004-01-01
As there are many non-linear systems in real engineering, further research on the modelling and prediction of non-linear systems is important. Based on the multi-resolution analysis (MRA) of wavelet theory, this paper combined wavelet theory with neural networks and established an MRA wavelet network with the scaling function and wavelet function as its neurons. Analysis in the frequency domain indicated that the MRA wavelet network was better than other wavelet networks in its ability to approximate signals. An essential study was carried out on modelling and prediction with the MRA wavelet network for non-linear systems. Using longitudinal sway data from a ship-model experiment, an offline prediction model was established and applied to the short-time prediction of ship motion. The simulation results indicated that the forecasting model improved the prediction precision effectively, lengthened the forecasting time and gave better predictions than the AR linear model. The research indicates that it is feasible to use the MRA wavelet network for the short-time prediction of ship motion.
Electric vehicle charge planning using Economic Model Predictive Control
Halvgaard, Rasmus; Poulsen, Niels K.; Madsen, Henrik
2012-01-01
Economic Model Predictive Control (MPC) is very well suited for controlling smart energy systems, since electricity price and demand forecasts are easily integrated in the controller. Electric vehicles (EVs) are expected to play a large role in the future Smart Grid. They are expected to provide grid services, both for peak reduction and for ancillary services, by absorbing short term variations in the electricity production. In this paper the Economic MPC minimizes the cost of electricity consumption for a single EV. Simulations show savings of 50–60% of the electricity costs compared... Electricity should be consumed as soon as it is produced to avoid the need for energy storage, as this is expensive, limited and introduces efficiency losses. The Economic MPC for EVs described in this paper may contribute to facilitating the transition to a fossil free energy system.
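The economics behind such a controller can be illustrated without the full MPC machinery: given a price forecast, a charge plan that fills the cheapest hours first already captures most of the cost saving. A toy sketch (the prices, energy demand and charger limit are invented; the paper's controller additionally handles a receding horizon and system constraints):

```python
def charge_plan(prices, energy_needed, max_per_hour):
    """Allocate charging energy to the cheapest forecast hours first,
    respecting the per-hour charger limit."""
    plan = [0.0] * len(prices)
    for h in sorted(range(len(prices)), key=lambda i: prices[i]):
        if energy_needed <= 0:
            break
        e = min(max_per_hour, energy_needed)
        plan[h] = e
        energy_needed -= e
    return plan

prices = [0.30, 0.12, 0.10, 0.25, 0.40, 0.15]   # assumed forecast, EUR/kWh
plan = charge_plan(prices, energy_needed=7.0, max_per_hour=3.0)
cost = sum(p * e for p, e in zip(prices, plan))  # total charging cost
```

Charging the same 7 kWh naively in the first available hours would cost noticeably more here, which is the effect the reported 50–60% savings quantify against real price data.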
Predicting chick body mass by artificial intelligence-based models
Patricia Ferreira Ponciano Ferraz
2014-07-01
Full Text Available The objective of this work was to develop, validate, and compare 190 artificial intelligence-based models for predicting the body mass of chicks from 2 to 21 days of age subjected to different durations and intensities of thermal challenge. The experiment was conducted inside four climate-controlled wind tunnels using 210 chicks. A database containing 840 datasets from 2- to 21-day-old chicks, with the variables dry-bulb air temperature, duration of thermal stress (days), chick age (days), and daily body mass, was used for the training, validation, and testing of models based on artificial neural networks (ANNs) and neuro-fuzzy networks (NFNs). The ANNs were the most accurate in predicting the body mass of chicks from 2 to 21 days of age given the input variables, showing an R² of 0.9993 and a standard error of 4.62 g. The ANNs enable the simulation of different scenarios, which can assist in managerial decision-making, and they can be embedded in heating control systems.
Zhao Haijun; Ma Yan; Huang Xiaohong; Su Yujie
2008-01-01
Predicting heartbeat message arrival time is crucial for the quality of failure detection services over the internet. However, the dynamic characteristics of the internet make it very difficult to understand message behavior and accurately predict heartbeat arrival times. To solve this problem, a novel black-box model is proposed to predict the next heartbeat arrival time. The heartbeat arrival time is modeled as an auto-regressive process, the heartbeat sending time is modeled as an exogenous variable, the model's coefficients are estimated over a sliding window of observations, and this result is used to predict the next heartbeat arrival time. Simulation shows that this adaptive auto-regressive exogenous (ARX) model can accurately capture heartbeat arrival dynamics and minimize prediction error in different network environments.
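The ARX idea can be sketched in its simplest form: one autoregressive lag plus the sending time as the exogenous input, fitted by ordinary least squares over a window of past heartbeats. The synthetic data below assume a constant network delay, purely to make the example checkable; real arrival series are far noisier:

```python
def fit_arx1_predict(arrivals, sendings, next_send):
    """Fit y_t = a*y_{t-1} + b*u_t by least squares (2x2 normal equations)
    over the observation window, then predict the next arrival time."""
    rows = [(arrivals[t - 1], sendings[t], arrivals[t])
            for t in range(1, len(arrivals))]
    s11 = sum(r[0] * r[0] for r in rows)   # sum y_{t-1}^2
    s12 = sum(r[0] * r[1] for r in rows)   # sum y_{t-1}*u_t
    s22 = sum(r[1] * r[1] for r in rows)   # sum u_t^2
    c1 = sum(r[0] * r[2] for r in rows)    # sum y_{t-1}*y_t
    c2 = sum(r[1] * r[2] for r in rows)    # sum u_t*y_t
    det = s11 * s22 - s12 * s12
    a = (c1 * s22 - c2 * s12) / det
    b = (s11 * c2 - s12 * c1) / det
    return a * arrivals[-1] + b * next_send

sends = list(range(20))                     # heartbeats sent once per second
arrs = [t + 0.05 for t in range(20)]        # assumed constant 50 ms delay
predicted = fit_arx1_predict(arrs, sends, next_send=20)
```

Re-fitting over a sliding window, as the abstract describes, lets the coefficients track slowly changing network conditions.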
Reducing Uncertainty in Chemistry Climate Model Predictions of Stratospheric Ozone
Douglass, A. R.; Strahan, S. E.; Oman, L. D.; Stolarski, R. S.
2014-01-01
Chemistry climate models (CCMs) are used to predict the future evolution of stratospheric ozone as ozone-depleting substances decrease and greenhouse gases increase, cooling the stratosphere. CCM predictions exhibit many common features, but also a broad range of values for quantities such as the year of ozone return to 1980 levels and the global ozone level at the end of the 21st century. Multiple linear regression is applied to each of 14 CCMs to separate the ozone response to chlorine change from that due to climate change. We show that the sensitivity of lower atmosphere ozone to chlorine change, ΔO3/ΔCly, is a near-linear function of the partitioning of total inorganic chlorine (Cly) into its reservoirs; both Cly and its partitioning are controlled by lower atmospheric transport. CCMs with realistic transport agree with observations for chlorine reservoirs and produce similar ozone responses to chlorine change. After 2035, differences in the response to chlorine contribute little to the spread in CCM results, as the anthropogenic contribution to Cly becomes unimportant. Differences among upper stratospheric ozone increases due to temperature decreases are explained by differences in the sensitivity of ozone to temperature change, ΔO3/ΔT, arising from different contributions from various ozone loss processes, each with its own temperature dependence. In the lower atmosphere, tropical ozone decreases caused by a predicted speed-up of the Brewer-Dobson circulation may or may not be balanced by middle- and high-latitude increases, contributing most to the spread in late 21st century predictions.
A hybrid model for predicting carbon monoxide from vehicular exhausts in urban environments
Gokhale, Sharad; Khare, Mukesh
Several deterministic air quality models evaluate and predict the frequently occurring pollutant concentrations well but, in general, are incapable of predicting the 'extreme' concentrations. In contrast, statistical distribution models overcome this limitation of the deterministic models and predict the 'extreme' concentrations. However, environmental damage is caused both by the extremes and by the sustained average concentration of pollutants. Hence, a model should predict not only the 'extreme' ranges but also the 'middle' ranges of pollutant concentrations, i.e. the entire range. Hybrid modelling is a technique that estimates/predicts the 'entire range' of the distribution of pollutant concentrations by combining deterministic models with suitable statistical distribution models (Jakeman et al., 1988). In the present paper, a hybrid model has been developed to predict carbon monoxide (CO) concentration distributions at one of the traffic intersections, the Income Tax Office (ITO), in Delhi, where the traffic is heterogeneous in nature, consisting of light vehicles, heavy vehicles, three-wheelers (auto rickshaws) and two-wheelers, and the meteorology is 'tropical'. The model combines the general finite line source model (GFLSM) as its deterministic component and the log-logistic distribution (LLD) model as its statistical component. The hybrid (GFLSM-LLD) model is then applied at the ITO intersection. The results show that the hybrid model predictions match the observed CO concentration data within the 5-99 percentile range. The model is further validated at a different street location, the Sirifort roadway. The validation results show that the model predicts CO concentrations fairly well (d=0.91) in the 10-95 percentile range. A regulatory compliance analysis is also developed to estimate the probability of hourly CO concentrations exceeding the National Ambient Air Quality Standards (NAAQS) of India.
Predicting Market Impact Costs Using Nonparametric Machine Learning Models.
Park, Saerom; Lee, Jaewook; Son, Youngdoo
2016-01-01
Full Text Available Market impact cost is the most significant portion of implicit transaction costs and can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models (neural networks, Bayesian neural networks, Gaussian processes, and support vector regression) to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data from the US stock market via the Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, on four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance.
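The flavour of the nonparametric approach is easy to convey with the simplest member of the family, Nadaraya-Watson kernel regression, which estimates impact cost as a kernel-weighted average of observed costs near the query point. A sketch with invented trade-size/cost pairs (the study itself used richer models and three input variables):

```python
import math

def nw_predict(x_train, y_train, x, bandwidth=1.0):
    """Nadaraya-Watson kernel regression: predict y at x as a Gaussian
    kernel-weighted average of the observed responses."""
    weights = [math.exp(-((x - xi) / bandwidth) ** 2 / 2) for xi in x_train]
    return sum(w * y for w, y in zip(weights, y_train)) / sum(weights)

# hypothetical normalized trade sizes and measured impact costs (bps)
sizes = [1.0, 2.0, 3.0, 4.0, 5.0]
costs = [2.0, 4.0, 6.0, 8.0, 10.0]
estimate = nw_predict(sizes, costs, 3.0)
```

No parametric form for the size-to-cost relationship is assumed, which is exactly the versatility the abstract attributes to these models relative to the parametric I-star benchmark.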
New Approaches for Channel Prediction Based on Sinusoidal Modeling
Ekman Torbjörn
2007-01-01
Full Text Available Long-range channel prediction is considered to be one of the most important enabling technologies for future wireless communication systems. The prediction of Rayleigh fading channels is studied within the framework of sinusoidal modeling in this paper. A stochastic sinusoidal model to represent a Rayleigh fading channel is proposed, along with three different predictors based on the statistical sinusoidal model. These methods outperform the standard linear predictor (LP) in Monte Carlo simulations, but underperform on real measurement data, probably due to nonstationary model parameters. To mitigate these modeling errors, a joint moving-average and sinusoidal (JMAS) prediction model and the associated joint least-squares (LS) predictor are proposed. The JMAS model combines the sinusoidal model with an LP to handle unmodeled dynamics in the signal. The joint LS predictor outperforms all the other sinusoidal LMMSE predictors in suburban environments, but still performs slightly worse than the standard LP in urban environments.
Prediction model for spring dust weather frequency in North China
LANG XianMei
2008-01-01
Making reliable predictions of dust weather frequency (DWF) in North China is of great social and scientific importance, and also very difficult. In this paper, the correlations between spring DWF at the Beijing and Tianjin observation stations, taken as examples for North China, and seasonally averaged surface air temperature, precipitation, the Arctic Oscillation, the Antarctic Oscillation, the Southern Oscillation, near-surface meridional wind and the Eurasian westerly index are calculated in order to construct a prediction model for spring DWF in North China from these climatic factors. Two prediction models, model-I and model-II, are then set up, based respectively on observed climate data and on the 32-year (1970-2001) extra-seasonal hindcast data reproduced by the nine-level Atmospheric General Circulation Model developed at the Institute of Atmospheric Physics (IAP9L-AGCM). The correlation coefficient between the observed and predicted DWF reaches 0.933 for model-I, indicating high prediction skill one season ahead. The corresponding value is as high as 0.948 for model-II, which uses synchronous spring climate data reproduced by the IAP9L-AGCM in place of the observations used in model-I. Model-II not only makes more precise predictions but also extends the lead time of real-time prediction from model-I's one season to half a year. Finally, the real-time predictability of the two models is evaluated. Both models display high prediction skill for the interannual variation and the linear trend of spring DWF in North China, and each has different advantages. The prediction skill of model-II is much higher than that of the original approach using the IAP9L-AGCM alone. The prediction approach put forward here could therefore be extended to other regions of China where dust weather occurs frequently.
Model Predictive Control of Sewer Networks
Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik
2016-01-01
The developments in solutions for the management of urban drainage are of vital importance, as the amount of sewer water from urban areas continues to increase due to the growth of the world's population and changing climate conditions. How a sewer network is structured, monitored and controlled has thus become an essential factor for the efficient performance of waste water treatment plants. This paper examines methods for simplified modelling and control of a sewer network. A practical approach to the problem is taken by analysing a simplified design model, which is based on the Barcelona...
A burnout prediction model based around char morphology
Tao Wu; Edward Lester; Michael Cloke [University of Nottingham, Nottingham (United Kingdom). School of Chemical, Environmental and Mining Engineering]
2006-05-15
Several combustion models have been developed that can make predictions about coal burnout and burnout potential. Most of these kinetic models require standard parameters such as volatile content and particle size to make a burnout prediction. This article presents a new model called the char burnout (ChB) model, which also uses detailed information about char morphology in its prediction. The input data to the model is based on information derived from two different image analysis techniques. One technique generates characterization data from real char samples, and the other predicts char types based on characterization data from image analysis of coal particles. The pyrolyzed chars in this study were created in a drop tube furnace operating at 1300°C, 200 ms, and 1% oxygen. Modeling results were compared with a different carbon burnout kinetic model as well as the actual burnout data from refiring the same chars in a drop tube furnace operating at 1300°C, 5% oxygen, and residence times of 200, 400, and 600 ms. The good agreement between the ChB model and experimental data indicates that the inclusion of char morphology in combustion models could well improve model predictions. 38 refs., 5 figs., 6 tabs.
An evaluation of mathematical models for predicting skin permeability.
Lian, Guoping; Chen, Longjian; Han, Lujia
2008-01-01
A number of mathematical models have been proposed for predicting skin permeability; most are empirical and very few are deterministic. Early empirical models use simple lipophilicity parameters. The recent trend is to use more complicated molecular structure descriptors. There has been much debate on which models best predict skin permeability. This article evaluates various mathematical models using a comprehensive experimental dataset of skin permeability for 124 chemical compounds compiled from various sources. Of the seven models compared, the deterministic model of Mitragotri gives the best prediction. The simple quantitative structure-permeability relationship (QSPR) model of Potts and Guy gives the second best prediction. The two models have many features in common. Both assume the lipid matrix to be the pathway of transdermal permeation. Both use the octanol-water partition coefficient and molecular size. Even the mathematical formulae are similar. All other empirical QSPR models that use more complicated molecular structure descriptors fail to provide satisfactory prediction. The molecular structure descriptors in the more complicated QSPR models are empirically related to skin permeation, and the mechanism by which these descriptors affect transdermal permeation is not clear. Mathematically, it is an ill-defined approach to use many collinearly related parameters, rather than fewer independent parameters, in multiple linear regression.
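The Potts and Guy QSPR mentioned above is compact enough to state in full. As commonly quoted, it predicts the log permeability coefficient (kp in cm/s) from just the octanol-water partition coefficient and the molecular weight; the coefficient values below are the commonly cited ones, and the example compound is hypothetical:

```python
def log_kp_potts_guy(log_kow, mol_weight):
    """Potts-Guy QSPR (coefficients as commonly quoted, kp in cm/s):
    log kp = -6.3 + 0.71*log Kow - 0.0061*MW."""
    return -6.3 + 0.71 * log_kow - 0.0061 * mol_weight

# hypothetical compound: log Kow = 2.0, molecular weight = 150 g/mol
log_kp = log_kp_potts_guy(2.0, 150.0)
```

The two-parameter form makes the physical reading direct: lipophilicity (log Kow) increases permeability, while molecular size decreases it, consistent with lipid-matrix permeation.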
Bayesian Age-Period-Cohort Modeling and Prediction - BAMP
Volker J. Schmid
2007-10-01
Full Text Available The software package BAMP provides a method for analyzing incidence or mortality data on the Lexis diagram, using a Bayesian version of an age-period-cohort model. A hierarchical model is assumed, with a binomial model in the first stage. As smoothing priors for the age, period and cohort parameters, random walks of first and second order are available, with and without an additional unstructured component. Unstructured heterogeneity can also be included in the model. In order to evaluate the model fit, the posterior deviance, DIC and predictive deviances are computed. By projecting the random walk prior into the future, future death rates can be predicted.
A simple physical model predicts small exon length variations.
2006-04-01
Full Text Available One of the most common splice variations is the small exon length variation caused by the use of alternative donor or acceptor splice sites that are in very close proximity on the pre-mRNA. Among these, three-nucleotide variations at so-called NAGNAG tandem acceptor sites have recently attracted considerable attention, and it has been suggested that these variations are regulated and serve to fine-tune protein forms by the addition or removal of a single amino acid. In this paper we first show that in-frame exon length variations are generally overrepresented and that this overrepresentation can be quantitatively explained by the effect of nonsense-mediated decay. Our analysis allows us to estimate that about 50% of frame-shifted coding transcripts are targeted by nonsense-mediated decay. Second, we show that a simple physical model, which assumes that the splicing machinery stochastically binds to nearby splice sites in proportion to the affinities of the sites, correctly predicts the relative abundances of different small length variations at both boundaries. Finally, using the same simple physical model, we show that for NAGNAG sites, the difference in affinities of the neighboring sites for the splicing machinery accurately predicts whether splicing will occur only at the first site, only at the second site, or whether three-nucleotide splice variants are likely to occur. Our analysis thus suggests that small exon length variations are the result of stochastic binding of the spliceosome at neighboring splice sites, arising when nearby alternative splice sites have similar affinity for the splicing machinery.
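The stochastic-binding model has an especially simple quantitative core: if the spliceosome binds each of two nearby acceptor sites in proportion to its affinity, the expected fractions of the two splice variants follow directly (the affinity values below are illustrative, not measured):

```python
def splice_fractions(affinity_first, affinity_second):
    """Stochastic-binding model: the spliceosome selects each nearby
    acceptor site with probability proportional to its affinity."""
    total = affinity_first + affinity_second
    return affinity_first / total, affinity_second / total

# strongly asymmetric affinities: splicing effectively only at the first site
dominant = splice_fractions(9.0, 1.0)
# comparable affinities: both three-nucleotide variants should be observed
mixed = splice_fractions(1.0, 1.0)
```

This is why the difference in site affinities alone can predict whether one variant dominates or both appear, as the abstract reports for NAGNAG sites.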
Validation of a tuber blight (Phytophthora infestans) prediction model
Potato tuber blight caused by Phytophthora infestans accounts for significant losses in storage. There is limited published quantitative data on predicting tuber blight. We validated a tuber blight prediction model developed in New York with cultivars Allegany, NY 101, and Katahdin using independent...
Prediction of bypass transition with differential Reynolds stress models
Westin, K.J.A.; Henkes, R.A.W.M.
1998-01-01
Boundary layer transition induced by high levels of free-stream turbulence (FST), so-called bypass transition, cannot be predicted with conventional stability calculations (e.g. the e^N method). The use of turbulence models for transition prediction has shown some success for this type of flow, and
Prediction Models of Free-Field Vibrations from Railway Traffic
Malmborg, Jens; Persson, Kent; Persson, Peter
2017-01-01
and railways close to where people work and live. Annoyance from traffic-induced vibrations and noise is expected to be a growing issue. To predict the level of vibration and noise in buildings caused by railway and road traffic, calculation models are needed. In the present paper, a simplified prediction...
A new ensemble model for short term wind power prediction
Madsen, Henrik; Albu, Razvan-Daniel; Felea, Ioan
2012-01-01
In this study, a non-linear ensemble system is used to develop a new model for predicting wind speed on a short-term time scale. Short-term wind power prediction has become an extremely important field of research for the energy sector. Regardless of the recent advancements in the research...
Space Weather: Measurements, Models and Predictions
2014-03-21
...and record high levels of cosmic ray flux. There were broad-ranging terrestrial responses to this inactivity of the Sun. BC was involved in the... techniques for converting from one coordinate system (e.g., the invariant coordinate system used for the model) to another (e.g., the latitude-radius
Monotone models for prediction in data mining
Velikova, M.V.
2006-01-01
This dissertation studies the incorporation of monotonicity constraints as a type of domain knowledge into a data mining process. Monotonicity constraints are enforced at two stages: data preparation and data modeling. The main contributions of the research are a novel procedure to test the degree of
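A natural way to quantify the degree of monotonicity in a dataset is the fraction of ordered input pairs whose labels are also ordered. The sketch below illustrates that general idea (it is not the dissertation's specific procedure, and the data are invented):

```python
def monotonicity_degree(xs, ys):
    """Fraction of strictly ordered input pairs (x_i < x_j) whose labels
    respect the ordering (y_i <= y_j); 1.0 means fully monotone data."""
    pairs = [(i, j)
             for i in range(len(xs))
             for j in range(len(xs))
             if xs[i] < xs[j]]
    if not pairs:
        return 1.0
    ok = sum(1 for i, j in pairs if ys[i] <= ys[j])
    return ok / len(pairs)

# fully monotone labels vs. labels with one violating pair
fully = monotonicity_degree([1, 2, 3, 4], [1, 2, 2, 3])
partial = monotonicity_degree([1, 2, 3, 4], [1, 3, 2, 4])
```

A degree near 1.0 suggests that enforcing monotonicity constraints during modeling will cost little accuracy while improving interpretability.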
Predicting Magazine Audiences with a Loglinear Model.
1987-07-01
An important use of e.d. estimates is in media selection (Aaker 1975; Lee 1962, 1963; Little and Lodish 1969). All advertising campaigns have a budget. It... References: Aaker, D.A. (1975), "ADMOD: An Advertising Decision Model," Journal of Marketing Research, February, 37-45.
Stand diameter distribution modelling and prediction based on Richards function.
Ai-guo Duan
Full Text Available The objective of this study was to introduce application of the Richards equation on modelling and prediction of stand diameter distribution. The long-term repeated measurement data sets, consisted of 309 diameter frequency distributions from Chinese fir (Cunninghamia lanceolata plantations in the southern China, were used. Also, 150 stands were used as fitting data, the other 159 stands were used for testing. Nonlinear regression method (NRM or maximum likelihood estimates method (MLEM were applied to estimate the parameters of models, and the parameter prediction method (PPM and parameter recovery method (PRM were used to predict the diameter distributions of unknown stands. Four main conclusions were obtained: (1 R distribution presented a more accurate simulation than three-parametric Weibull function; (2 the parameters p, q and r of R distribution proved to be its scale, location and shape parameters, and have a deep relationship with stand characteristics, which means the parameters of R distribution have good theoretical interpretation; (3 the ordinate of inflection point of R distribution has significant relativity with its skewness and kurtosis, and the fitted main distribution range for the cumulative diameter distribution of Chinese fir plantations was 0.4∼0.6; (4 the goodness-of-fit test showed diameter distributions of unknown stands can be well estimated by applying R distribution based on PRM or the combination of PPM and PRM under the condition that only quadratic mean DBH or plus stand age are known, and the non-rejection rates were near 80%, which are higher than the 72.33% non-rejection rate of three-parametric Weibull function based on the combination of PPM and PRM.
Stand diameter distribution modelling and prediction based on Richards function.
Duan, Ai-guo; Zhang, Jian-guo; Zhang, Xiong-qing; He, Cai-yun
2013-01-01
The objective of this study was to introduce the application of the Richards equation to modelling and prediction of stand diameter distributions. The long-term repeated measurement data sets, consisting of 309 diameter frequency distributions from Chinese fir (Cunninghamia lanceolata) plantations in southern China, were used: 150 stands served as fitting data, and the other 159 stands were used for testing. The nonlinear regression method (NRM) or the maximum likelihood estimation method (MLEM) was applied to estimate the parameters of the models, and the parameter prediction method (PPM) and parameter recovery method (PRM) were used to predict the diameter distributions of unknown stands. Four main conclusions were obtained: (1) the R distribution presented a more accurate simulation than the three-parameter Weibull function; (2) the parameters p, q and r of the R distribution proved to be its scale, location and shape parameters, and are closely related to stand characteristics, which means the parameters of the R distribution have a good theoretical interpretation; (3) the ordinate of the inflection point of the R distribution is significantly related to its skewness and kurtosis, and the fitted main distribution range for the cumulative diameter distribution of Chinese fir plantations was 0.4-0.6; (4) the goodness-of-fit test showed that diameter distributions of unknown stands can be well estimated by applying the R distribution based on PRM, or the combination of PPM and PRM, when only the quadratic mean DBH, or additionally the stand age, is known; the non-rejection rates were near 80%, higher than the 72.33% non-rejection rate of the three-parameter Weibull function based on the combination of PPM and PRM.
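The Richards distribution used above is not in standard statistical libraries, but the baseline it is compared against, the Weibull function, can be fitted by maximum likelihood in a few lines. A minimal sketch in pure Python, assuming a two-parameter Weibull (location fixed at zero) and synthetic DBH-like data; the sample size and parameter values are illustrative only:

```python
import math
import random

def fit_weibull_mle(data):
    """Fit a two-parameter Weibull(shape k, scale lam) by maximum likelihood.

    The MLE shape solves g(k) = sum(x^k ln x)/sum(x^k) - 1/k - mean(ln x) = 0,
    which is monotone in k, so plain bisection suffices.
    """
    logs = [math.log(x) for x in data]
    mean_log = sum(logs) / len(logs)

    def g(k):
        xk = [x ** k for x in data]
        return sum(v * lx for v, lx in zip(xk, logs)) / sum(xk) - 1.0 / k - mean_log

    lo, hi = 1e-2, 50.0
    for _ in range(80):          # bisection on the monotone shape equation
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    k = 0.5 * (lo + hi)
    lam = (sum(x ** k for x in data) / len(data)) ** (1.0 / k)
    return k, lam

if __name__ == "__main__":
    random.seed(42)
    # simulate DBH-like data from Weibull(k=2.2, lam=18 cm) by inversion
    sample = [18.0 * (-math.log(1.0 - random.random())) ** (1.0 / 2.2)
              for _ in range(3000)]
    k_hat, lam_hat = fit_weibull_mle(sample)
    print(round(k_hat, 2), round(lam_hat, 2))
```

The recovered shape and scale should sit close to the simulated values; a goodness-of-fit test of the kind the abstract mentions would then compare the fitted CDF against the empirical diameter distribution.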
[Developing a predictive model for the caregiver strain index].
Álvarez-Tello, Margarita; Casado-Mejía, Rosa; Praena-Fernández, Juan Manuel; Ortega-Calvo, Manuel
Home care of patients with multiple morbidities is increasingly common. The caregiver strain index is a questionnaire designed to measure the perceived burden of those who care for family members. The aim of this study was to construct a diagnostic nomogram of informal caregiver burden using data from a predictive model. The model was built using binary logistic regression with the questionnaire items as dichotomous factors. The dependent variable was the final questionnaire score, categorised in accordance with the literature: scores between 0 and 6 were labelled "no" (no caregiver stress) and scores of 7 or greater as "yes". R statistical software, version 3.1.1, was used. Confidence intervals for the ROC curve were constructed from 2000 bootstrap replicates. A sample of 67 caregivers was obtained. A diagnostic nomogram was constructed together with its calibration graph (scaled Brier score = 0.686, Nagelkerke R(2) = 0.791) and the corresponding ROC curve (area under the curve = 0.962). The predictive model generated by binary logistic regression and the nomogram contain four items (1, 4, 5 and 9) of the questionnaire. R plotting functions provide a very good solution for validating a model like this. The area under the ROC curve (0.96; 95% CI: 0.941-0.994) achieves a high discriminative value. Calibration also shows high goodness-of-fit values, suggesting that the model may be clinically useful in community nursing and geriatric establishments. Copyright © 2015 SEGG. Publicado por Elsevier España, S.L.U. All rights reserved.
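The discrimination figure reported above (AUC = 0.96) can be computed without any plotting: the area under the ROC curve equals the Mann-Whitney probability that a randomly chosen strained caregiver receives a higher predicted risk than a randomly chosen unstrained one. A small sketch with made-up predicted probabilities (the real model's items and coefficients are not reproduced here):

```python
def roc_auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the probability that a random positive outscores a random negative
    (ties count one half)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# hypothetical predicted probabilities from a fitted logistic model
strained   = [0.91, 0.85, 0.78, 0.60, 0.55]   # caregivers with index >= 7
unstrained = [0.70, 0.40, 0.35, 0.30, 0.15, 0.10]
print(roc_auc(strained, unstrained))
```

An AUC near 1 means the model ranks almost every strained caregiver above every unstrained one, which is what the reported 0.96 conveys.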
Yan, Haofang; Shi, Haibin; Hiroki, Oue; Zhang, Chuan; Xue, Zhu; Cai, Bin; Wang, Guoqing
2015-06-01
This study presents models for predicting hourly canopy resistance (r_c) and evapotranspiration (ETc) based on the Penman-Monteith approach. The micrometeorological data and ETc were observed during maize and buckwheat growing seasons in 2006 and 2009 in China and Japan, respectively. The proposed models of r_c were developed from a climatic resistance (r*) that depends on climatic variables, and non-linear relationships between r_c and r* were applied. ETc measured with the Bowen ratio energy balance method was used for model validation. Statistical analysis showed no significant differences between the ETc predicted by the proposed models and the measured ETc for either maize or buckwheat. The model performed better for the maize field than for the buckwheat field, with coefficients of determination of 0.92 and 0.84, respectively. The study provides a simple way to apply the Penman-Monteith equation using only generally available meteorological data.
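The Penman-Monteith combination equation that such r_c models feed into can be sketched directly. The constants and example inputs below (slope of the saturation curve, psychrometric constant, resistances) are illustrative round numbers, not values from the study:

```python
def penman_monteith_le(rn_minus_g, vpd, r_a, r_c,
                       delta=189.0, gamma=66.0,
                       rho_air=1.2, c_p=1013.0):
    """Latent heat flux (W m-2) from the Penman-Monteith equation.

    rn_minus_g : available energy Rn - G (W m-2)
    vpd        : vapour pressure deficit (Pa)
    r_a, r_c   : aerodynamic and canopy resistances (s m-1)
    delta      : slope of the saturation vapour pressure curve (Pa K-1, ~25 C)
    gamma      : psychrometric constant (Pa K-1)
    rho_air    : air density (kg m-3); c_p: specific heat (J kg-1 K-1)
    """
    num = delta * rn_minus_g + rho_air * c_p * vpd / r_a
    den = delta + gamma * (1.0 + r_c / r_a)
    return num / den

# midday example: well-watered canopy vs. one with stomata partly closed
print(penman_monteith_le(400.0, 1500.0, 50.0, 70.0))
print(penman_monteith_le(400.0, 1500.0, 50.0, 250.0))
```

Raising r_c lowers the predicted latent heat flux, which is why a good hourly canopy resistance model is the crux of applying this equation.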
Planetary Suit Hip Bearing Model for Predicting Design vs. Performance
Cowley, Matthew S.; Margerum, Sarah; Harvil, Lauren; Rajulu, Sudhakar
2011-01-01
The suited performance trends were comparable between the model and the suited subjects. With the three off-nominal bearing configurations compared to the nominal bearing configuration, human subjects showed decreases in hip flexion of 64%, 6%, and 13% and in hip abduction of 59%, 2%, and 20%. Likewise, the solid model showed decreases in hip flexion of 58%, 1%, and 25% and in hip abduction of 56%, 0%, and 30% under the same condition changes from the nominal configuration. Differences between the model predictions and the human subject performance data could be attributed to the model lacking dynamic elements and performing kinematic analysis only, the level of fit of the subjects within the suit, and the subjects' levels of suit experience.
Theoretical model for forming limit diagram predictions without initial inhomogeneity
Gologanu, Mihai; Comsa, Dan Sorin; Banabic, Dorel
2013-05-01
In the plane-strain case the limit-analysis model predicts almost instantaneous necking, but in the next step the virtual band hardens enough to deactivate the localization condition. In this case we apply a supplementary condition for incipient necking similar to the one used in Hill's model for the second quadrant. We show that this condition is precisely the condition for incipient bifurcation inside the virtual (and weaker) band. Finally, we discuss some limitations, extensions and possible applications of the new necking model based on limit analysis.
Scanpath Based N-Gram Models for Predicting Reading Behavior
Mishra, Abhijit; Bhattacharyya, Pushpak; Carl, Michael
2013-01-01
Predicting reading behavior is a difficult task. Reading behavior depends on various linguistic factors (e.g. sentence length, structural complexity) and other factors (e.g. an individual's reading style, age). Ideally, a reading model should be similar to a language model where the model i...
Better predictions when models are wrong or underspecified
Ommen, Matthijs van
2015-01-01
Many statistical methods rely on models of reality in order to learn from data and to make predictions about future data. By necessity, these models usually do not match reality exactly, but are either wrong (none of the hypotheses in the model provides an accurate description of reality) or undersp
Prediction of selected Indian stock using a partitioning–interpolation based ARIMA–GARCH model
C. Narendra Babu
2015-07-01
Full Text Available Accurate long-term prediction of time series data (TSD) is a very useful research challenge in diversified fields. As financial TSD are highly volatile, multi-step prediction of financial TSD is a major research problem in TSD mining. The two challenges encountered are maintaining high prediction accuracy and preserving the data trend across the forecast horizon. Linear traditional models such as the autoregressive integrated moving average (ARIMA) and generalized autoregressive conditional heteroscedastic (GARCH) models preserve the data trend to some extent, at the cost of prediction accuracy. Non-linear models like ANN maintain prediction accuracy by sacrificing the data trend. In this paper, a linear hybrid model, which maintains prediction accuracy while preserving the data trend, is proposed. A quantitative reasoning analysis justifying the accuracy of the proposed model is also presented. A moving-average (MA) filter based pre-processing step and a partitioning and interpolation (PI) technique are incorporated by the proposed model. Some existing models and the proposed model are applied to selected NSE India stock market data. Performance results show that for multi-step-ahead prediction, the proposed model outperforms the others in terms of both prediction accuracy and preserving the data trend.
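Two of the building blocks named above, a moving-average pre-filter and a linear autoregressive predictor iterated over the forecast horizon, can be sketched in a few lines. This is a generic AR(1) illustration on synthetic data, not the paper's ARIMA-GARCH hybrid or its partitioning-interpolation step:

```python
import random

def moving_average(series, window):
    """Trailing moving-average filter; the first window-1 points are dropped."""
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

def fit_ar1(series):
    """Least-squares AR(1) coefficient on the mean-removed series."""
    mu = sum(series) / len(series)
    y = [v - mu for v in series]
    num = sum(y[t - 1] * y[t] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    return num / den, mu

def forecast(series, phi, mu, steps):
    """Iterated multi-step AR(1) forecast from the last observation."""
    preds, last = [], series[-1] - mu
    for _ in range(steps):
        last = phi * last
        preds.append(last + mu)
    return preds

if __name__ == "__main__":
    random.seed(7)
    x = [0.0]
    for _ in range(999):                 # synthetic AR(1) series, phi = 0.8
        x.append(0.8 * x[-1] + random.gauss(0.0, 1.0))
    smooth = moving_average(x, 5)        # MA pre-filter as a trend smoother
    phi, mu = fit_ar1(x)                 # AR coefficient from the raw series
    print(round(phi, 3), [round(p, 2) for p in forecast(x, phi, mu, 3)])
```

Note that iterated forecasts decay toward the series mean, which is the trend-flattening behavior the proposed hybrid model is designed to counteract.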
Hybrid Corporate Performance Prediction Model Considering Technical Capability
Joonhyuck Lee
2016-07-01
Full Text Available Many studies have tried to predict corporate performance and stock prices to enhance investment profitability using qualitative approaches such as the Delphi method. However, developments in data processing technology and machine-learning algorithms have led to efforts to develop quantitative prediction models in various managerial subject areas. We propose a quantitative corporate performance prediction model that applies the support vector regression (SVR) algorithm, which mitigates the overfitting of training data and can be applied to regression problems. The proposed model optimizes the SVR training parameters based on the training data, using a genetic algorithm, to achieve sustainable predictability in changeable markets and managerial environments. Technology-intensive companies represent an increasing share of the total economy. The performance and stock prices of these companies are affected by their financial standing and their technological capabilities. Therefore, we apply both financial and technical indicators to establish the proposed prediction model. Here, we use time series data, including financial, patent, and corporate performance information, for 44 electronics and IT companies. Then, we predict the performance of these companies as an empirical verification of the prediction performance of the proposed model.
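The coupling described above, a genetic algorithm searching training parameters to minimize validation error, can be sketched generically. The objective below is a stand-in quadratic surface (an assumption made purely for a runnable example); a real application would instead train an SVR with C = 10**log_c and gamma = 10**log_gamma and return its cross-validation loss:

```python
import random

def cv_error(log_c, log_gamma):
    """Stand-in for an SVR cross-validation error surface (assumed here,
    with a known minimum at log_c = 1, log_gamma = -2)."""
    return (log_c - 1.0) ** 2 + (log_gamma + 2.0) ** 2

def genetic_search(objective, bounds, pop_size=30, generations=80,
                   sigma=0.3, elite=2, seed=0):
    """Minimal real-coded GA: elitism, blend crossover, Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: objective(*ind))
        nxt = pop[:elite]                                  # keep the best
        while len(nxt) < pop_size:
            a, b = rng.sample(pop[:pop_size // 2], 2)      # parents from top half
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]
            child = [min(max(v + rng.gauss(0, sigma), lo), hi)  # mutate + clip
                     for v, (lo, hi) in zip(child, bounds)]
            nxt.append(child)
        pop = nxt
    return min(pop, key=lambda ind: objective(*ind))

best = genetic_search(cv_error, bounds=[(-3.0, 3.0), (-5.0, 1.0)])
print([round(v, 2) for v in best])
```

The GA should settle near the known optimum of the stand-in surface; swapping in a true SVR validation loss changes only the `objective` argument.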
On the Predictiveness of Single-Field Inflationary Models
Burgess, C.P.; Trott, Michael
2014-01-01
We re-examine the predictiveness of single-field inflationary models and discuss how an unknown UV completion can complicate determining inflationary model parameters from observations, even from precision measurements. Besides the usual naturalness issues associated with having a shallow inflationary potential, we describe another issue for inflation, namely, unknown UV physics modifies the running of Standard Model (SM) parameters and thereby introduces uncertainty into the potential inflationary predictions. We illustrate this point using the minimal Higgs Inflationary scenario, which is arguably the most predictive single-field model on the market, because its predictions for $A_s$, $r$ and $n_s$ are made using only one new free parameter beyond those measured in particle physics experiments, and run up to the inflationary regime. We find that this issue can already have observable effects. At the same time, this UV-parameter dependence in the Renormalization Group allows Higgs Inflation to occur (in prin...
A Composite Model Predictive Control Strategy for Furnaces
Hao Zang; Hongguang Li; Jingwen Huang; Jia Wang
2014-01-01
Tube furnaces are essential and primary energy-intensive facilities in petrochemical plants. Operational optimization of furnaces can not only help to improve product quality but also reduce energy consumption and exhaust emissions. Inspired by this idea, this paper presents a composite model predictive control (CMPC) strategy, which, taking advantage of distributed model predictive control architectures, combines tracking nonlinear model predictive control and economic nonlinear model predictive control metrics to keep the process running smoothly and to optimize operational conditions. The controllers, connected by two kinds of communication networks, are easy to organize and maintain, and robust to process interferences. A fast solution algorithm combining interior point solvers and Newton's method is accommodated in the CMPC realization, with reasonable CPU computing time, making it suitable for online applications. Simulation of an industrial case demonstrates that the proposed approach can ensure stable operation of furnaces, improve heat efficiency, and reduce emissions effectively.
Submission Form for Peer-Reviewed Cancer Risk Prediction Models
If you have information about a peer-reviewed cancer risk prediction model that you would like to be considered for inclusion on this list, submit as much information as possible through the form on this page.
ACCIDENT PREDICTION MODELS FOR UNSIGNALISED URBAN JUNCTIONS IN GHANA
Mohammed SALIFU, MSc., PhD, MIHT, MGhIE
2004-01-01
The accident prediction models developed have a potentially wide area of application and their systematic use is likely to improve considerably the quality and delivery of the engineering aspects of accident mitigation and prevention in Ghana.
Using a Prediction Model to Manage Cyber Security Threats
Jaganathan, Venkatesh; Cherurveettil, Priyesh; Muthu Sivashanmugam, Premapriya
2015-01-01
.... The cost impact due to worms, viruses, or other malicious software is significant. This paper proposes a mathematical model to predict the impact of an attack based on significant factors that influence cyber security...
Development of a multi-year climate prediction model | Alexander ...
Development of a multi-year climate prediction model. ... The available water resources in Southern Africa are rapidly approaching the limits of economic exploitation. ... that could be attributed to climate change arising from human activities.
Compensatory versus noncompensatory models for predicting consumer preferences
Anja Dieckmann
2009-04-01
Full Text Available Standard preference models in consumer research assume that people weigh and add all attributes of the available options to derive a decision, while there is growing evidence for the use of simplifying heuristics. Recently, a greedoid algorithm has been developed (Yee, Dahan, Hauser and Orlin, 2007; Kohli and Jedidi, 2007) to model lexicographic heuristics from preference data. We compare predictive accuracies of the greedoid approach and standard conjoint analysis in an online study with a rating and a ranking task. The lexicographic model derived from the greedoid algorithm was better at predicting ranking than rating data, but overall it achieved lower predictive accuracy for hold-out data than the compensatory model estimated by conjoint analysis. However, a considerable minority of participants was better predicted by lexicographic strategies. We conclude that the new algorithm will not replace standard tools for analyzing preferences, but can boost the study of situational and individual differences in preferential choice processes.
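The two model families being compared can each be stated in a few lines: a compensatory (weighted-additive) rule versus a lexicographic heuristic that decides on the first discriminating attribute. The camera attributes and weights below are invented for illustration:

```python
def compensatory_choice(option_a, option_b, weights):
    """Weighted-additive rule: pick the option with the higher weighted sum."""
    def score(opt):
        return sum(w * v for w, v in zip(weights, opt))
    return "A" if score(option_a) >= score(option_b) else "B"

def lexicographic_choice(option_a, option_b, attribute_order):
    """Compare attributes one at a time in order of importance;
    the first attribute that differs decides (later attributes are ignored)."""
    for i in attribute_order:
        if option_a[i] != option_b[i]:
            return "A" if option_a[i] > option_b[i] else "B"
    return "tie"

# hypothetical camera profiles: (image quality, battery life, price rating)
cam_a = (5, 2, 2)
cam_b = (4, 5, 5)
print(compensatory_choice(cam_a, cam_b, weights=(0.5, 0.25, 0.25)))
print(lexicographic_choice(cam_a, cam_b, attribute_order=(0, 1, 2)))
```

On this toy pair the two rules disagree: the weighted sum favors the balanced option, while the lexicographic rule picks the one that wins on the single most important attribute; such disagreements are exactly what allow the two model families to be distinguished from choice data.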
Genomic-Enabled Prediction in Maize Using Kernel Models with Genotype × Environment Interaction
Bandeira e Sousa, Massaine; Cuevas, Jaime; de Oliveira Couto, Evellyn Giselly; Pérez-Rodríguez, Paulino; Jarquín, Diego; Fritsche-Neto, Roberto; Burgueño, Juan; Crossa, Jose
2017-01-01
Multi-environment trials are routinely conducted in plant breeding to select candidates for the next selection cycle. In this study, we compare the prediction accuracy of four developed genomic-enabled prediction models: (1) single-environment, main genotypic effect model (SM); (2) multi-environment, main genotypic effects model (MM); (3) multi-environment, single variance G×E deviation model (MDs); and (4) multi-environment, environment-specific variance G×E deviation model (MDe). Each of these four models were fitted using two kernel methods: a linear kernel Genomic Best Linear Unbiased Predictor, GBLUP (GB), and a nonlinear kernel Gaussian kernel (GK). The eight model-method combinations were applied to two extensive Brazilian maize data sets (HEL and USP data sets), having different numbers of maize hybrids evaluated in different environments for grain yield (GY), plant height (PH), and ear height (EH). Results show that the MDe and the MDs models fitted with the Gaussian kernel (MDe-GK, and MDs-GK) had the highest prediction accuracy. For GY in the HEL data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 9 to 32%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 9 to 49%. For GY in the USP data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 0 to 7%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 34 to 70%. For traits PH and EH, gains in prediction accuracy of models with GK compared to models with GB were smaller than those achieved in GY. Also, these gains in prediction accuracy decreased when a more difficult prediction problem was studied. PMID:28455415
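The two kernel methods compared above differ only in how the n x n similarity matrix is built from the marker matrix. A sketch of both, using a toy centred marker matrix and a median-scaled Gaussian kernel (the median scaling and bandwidth convention are common practice but assumed here; the papers' exact scaling may differ):

```python
import math

def linear_kernel(X):
    """GBLUP-style genomic relationship kernel: K = X X' / p
    for an n x p matrix of centred marker codes."""
    p = len(X[0])
    return [[sum(xi * yj for xi, yj in zip(a, b)) / p for b in X] for a in X]

def gaussian_kernel(X, bandwidth=1.0):
    """Gaussian kernel K_ij = exp(-h * d_ij^2 / median(d^2)), scaling the
    squared Euclidean distances by their median."""
    def d2(a, b):
        return sum((xi - yi) ** 2 for xi, yi in zip(a, b))
    dists = [d2(a, b) for i, a in enumerate(X) for b in X[i + 1:]]
    med = sorted(dists)[len(dists) // 2]
    return [[math.exp(-bandwidth * d2(a, b) / med) for b in X] for a in X]

# toy marker matrix: 4 hybrids x 5 centred markers
X = [[1, -1, 0, 1, -1],
     [1, -1, 1, 0, -1],
     [-1, 1, 0, -1, 1],
     [0, 1, -1, -1, 1]]
GB = linear_kernel(X)
GK = gaussian_kernel(X)
print(GB[0][1], round(GK[0][1], 3))
```

Either matrix can then serve as the covariance of genotypic effects in a mixed model; the nonlinear GK can capture similarity patterns (and, per the abstract, G×E structure) that the linear GB misses.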
Haskell financial data modeling and predictive analytics
Ryzhov, Pavel
2013-01-01
This book is a hands-on guide that teaches readers how to use Haskell's tools and libraries to analyze data from real-world sources in an easy-to-understand manner.This book is great for developers who are new to financial data modeling using Haskell. A basic knowledge of functional programming is not required but will be useful. An interest in high frequency finance is essential.
Mesoscale Wind Predictions for Wave Model Evaluation
2016-06-07
N0001400WX20041(B) http://www.nrlmry.navy.mil LONG TERM GOALS: The long-term goal is to demonstrate the significance and importance of high...ocean waves by an appropriate wave model. OBJECTIVES: The main objectives of this project are to: 1. Build the infrastructure to generate the...temperature for all COAMPS grids at the resolution of each of these grids. These analyses are important for the proper specification of the lower
Nicolas, B.; Gilbert, M. E.; Paw U, K. T.
2015-12-01
Soil-Vegetation-Atmosphere Transfer (SVAT) models are based upon well-understood steady-state photosynthetic physiology - the Farquhar-von Caemmerer-Berry model (FvCB). However, representations of physiological stress and damage have not been successfully integrated into SVAT models. Generally, it has been assumed that plants will strive to conserve water at higher temperatures by reducing stomatal conductance or adjusting osmotic balance, until potentially damaging temperatures and the need for evaporative cooling become more important than water conservation. A key point is that damage is the result of combined stresses: drought leads to stomatal closure, less evaporative cooling, high leaf temperature, and less photosynthetic dissipation of absorbed energy, all coupled with high light (photosynthetic photon flux density; PPFD). This leads to excess energy absorbed by Photosystem II (PSII) and results in photoinhibition and damage, neither of which is included in SVAT models. Current representations of photoinhibition treat it as a function of PPFD, not as a function of photosynthesis constrained by heat or water stress. Thus, it seems unlikely that current models can predict responses of vegetation to climate variability and change. We propose a dynamic model of damage to Rubisco and RuBP-regeneration that accounts, mechanistically, for the interactions between high temperature, light, and constrained photosynthesis under drought. Further, these predictions are illustrated by key experiments allowing model validation. We also integrated this new framework within the Advanced Canopy-Atmosphere-Soil Algorithm (ACASA). Preliminary results show that our approach can be used to predict reasonable photosynthetic dynamics. For instance, a leaf undergoing one day of drought stress will quickly decrease its maximum quantum yield of PSII (Fv/Fm), but it won't recover to unstressed levels for several days. Consequently, the cumulative effect of photoinhibition on photosynthesis can cause
Identifying Spatially Variable Sensitivity of Model Predictions and Calibrations
McKenna, S. A.; Hart, D. B.
2005-12-01
Stochastic inverse modeling provides an ensemble of stochastic property fields, each calibrated to measured steady-state and transient head data. These calibrated fields are used as input for predictions of other processes (e.g., contaminant transport, advective travel time). Use of the entire ensemble of fields transfers spatial uncertainty in hydraulic properties to uncertainty in the predicted performance measures. A sampling-based sensitivity coefficient is proposed to determine the sensitivity of the performance measures to the uncertain values of hydraulic properties at every cell in the model domain. The basis of this sensitivity coefficient is the Spearman rank correlation coefficient. Sampling-based sensitivity coefficients are demonstrated using a recent set of transmissivity (T) fields created through a stochastic inverse calibration process for the Culebra dolomite in the vicinity of the WIPP site in southeastern New Mexico. The stochastic inverse models were created using a unique approach to condition a geologically-based conceptual model of T to measured T values via a multiGaussian residual field. This field is calibrated to both steady-state and transient head data collected over an 11 year period. Maps of these sensitivity coefficients provide a means of identifying the locations in the study area to which both the value of the model calibration objective function and the predicted travel times to a regulatory boundary are most sensitive to the T and head values. These locations can be targeted for deployment of additional long-term monitoring resources. Comparison of areas where the calibration objective function and the travel time have high sensitivity shows that these are not necessarily coincident with regions of high uncertainty. The sampling-based sensitivity coefficients are compared to analytically derived sensitivity coefficients at the 99 pilot point locations. Results of the sensitivity mapping exercise are being used in combination
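The proposed sampling-based sensitivity coefficient is the Spearman rank correlation between the sampled property value at a cell and the resulting performance measure, taken across the ensemble of calibrated fields. A sketch with invented ensemble values, chosen to be perfectly monotone so the coefficient comes out at its extreme:

```python
def ranks(values):
    """Ranks 1..n (no tie handling; adequate for continuous samples)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# per-realization log-transmissivity at one cell vs. predicted travel time
log_T       = [-5.1, -4.2, -6.0, -4.8, -5.5, -3.9, -4.5]
travel_time = [410.0, 180.0, 930.0, 330.0, 560.0, 120.0, 240.0]
print(round(spearman(log_T, travel_time), 3))
```

Repeating this computation at every cell yields the sensitivity maps described above; cells where the magnitude of the rank correlation is large are candidates for additional monitoring.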
Modeling Seizure Self-Prediction: An E-Diary Study
Haut, Sheryl R.; Hall, Charles B.; Borkowski, Thomas; Tennen, Howard; Lipton, Richard B.
2013-01-01
Purpose: A subset of patients with epilepsy successfully self-predicted seizures in a paper diary study. We conducted an e-diary study to ensure that prediction precedes seizures, and to characterize the prodromal features and time windows that underlie self-prediction. Methods: Subjects 18 or older with LRE and ≥3 seizures/month maintained an e-diary, reporting AM/PM data daily, including mood, premonitory symptoms, and all seizures. Self-prediction was rated by asking, "How likely are you to experience a seizure [time frame]?" Five choices ranged from almost certain (>95% chance) to very unlikely. Relative odds of seizure (OR) within time frames were examined using Poisson models with log-normal random effects to adjust for multiple observations. Key Findings: Nineteen subjects reported 244 eligible seizures. The OR for prediction choices within 6 hrs was as high as 9.31 (1.92, 45.23) for "almost certain". Prediction was most robust within 6 hrs of diary entry, and remained significant up to 12 hrs. For the 9 best predictors, average sensitivity was 50%. Older age contributed to successful self-prediction, and self-prediction appeared to be driven by mood and premonitory symptoms. In multivariate modeling of seizure occurrence, self-prediction (2.84; 1.68, 4.81), favorable change in mood (0.82; 0.67, 0.99) and number of premonitory symptoms (1.11; 1.00, 1.24) were significant. Significance: Some persons with epilepsy can self-predict seizures. In these individuals, the odds of a seizure following a positive prediction are high. Predictions were robust, not attributable to recall bias, and were related to self-awareness of mood and premonitory features. The 6-hour prediction window is suitable for the development of pre-emptive therapy. PMID:24111898
Electromagnetic Model Reliably Predicts Radar Scattering Characteristics of Airborne Organisms
Mirkovic, Djordje; Stepanian, Phillip M.; Kelly, Jeffrey F.; Chilson, Phillip B.
2016-10-01
The radar scattering characteristics of aerial animals are typically obtained from controlled laboratory measurements of a freshly harvested specimen. These measurements are tedious to perform, difficult to replicate, and typically yield only a small subset of the full azimuthal, elevational, and polarimetric radio scattering data. As an alternative, biological applications of radar often assume that the radar cross sections of flying animals are isotropic, since sophisticated computer models are required to estimate the 3D scattering properties of objects having complex shapes. Using the method of moments implemented in the WIPL-D software package, we show for the first time that such electromagnetic modeling techniques (typically applied to man-made objects) can accurately predict organismal radio scattering characteristics from an anatomical model: here the Brazilian free-tailed bat (Tadarida brasiliensis). The simulated scattering properties of the bat agree with controlled measurements and radar observations made during a field study of bats in flight. This numerical technique can produce the full angular set of quantitative polarimetric scattering characteristics, while eliminating many practical difficulties associated with physical measurements. Such a modeling framework can be applied for bird, bat, and insect species, and will help drive a shift in radar biology from a largely qualitative and phenomenological science toward quantitative estimation of animal densities and taxonomic identification.
Solubility Prediction of Active Pharmaceutical Compounds with the UNIFAC Model
Nouar, Abderrahim; Benmessaoud, Ibtissem; Koutchoukali, Ouahiba; Koutchoukali, Mohamed Salah
2016-03-01
The crystallization from solution of an active pharmaceutical ingredient requires knowledge of its solubility over the entire temperature range investigated during the process. However, during the development of a new active ingredient, these data are missing. Their experimental determination is possible, but tedious. The UNIFAC group contribution method of Fredenslund et al. (Vapor-liquid equilibria using UNIFAC: a group contribution method, 1977; AIChE J 21:1086, 1975) can be used to predict this physical property. Several modifications of this model have been proposed since its development in 1977: modified UNIFAC of Dortmund, Weidlich et al. (Ind Eng Chem Res 26:1372, 1987), Gmehling et al. (Ind Eng Chem Res 32:178, 1993); Pharma-modified UNIFAC, Diedrichs et al. (Evaluation und Erweiterung thermodynamischer Modelle zur Vorhersage von Wirkstofflöslichkeiten, PhD Thesis, 2010); KT-UNIFAC, Kang et al. (Ind Eng Chem Res 41:3260, 2002); and others. In this study, we used the UNIFAC model by considering the linear temperature dependence of interaction parameters, as in Pharma-modified UNIFAC, and structural groups as defined by the KT-UNIFAC first-order model. More than 100 binary datasets were involved in the estimation of interaction parameters. These new parameters were then used to calculate activity coefficients and the solubility of some molecules in various solvents at different temperatures. The model gives better results than the original UNIFAC and shows good agreement between the experimental solubility and the calculated one.
Predicting chromatin architecture from models of polymer physics.
Bianco, Simona; Chiariello, Andrea M; Annunziatella, Carlo; Esposito, Andrea; Nicodemi, Mario
2017-01-09
We review the picture of chromatin large-scale 3D organization emerging from the analysis of Hi-C data and polymer modeling. In higher mammals, Hi-C contact maps reveal a complex higher-order organization, extending from the sub-Mb to chromosomal scales, hierarchically folded in a structure of domains-within-domains (metaTADs). The domain folding hierarchy is partially conserved throughout differentiation, and deeply correlated to epigenomic features. Rearrangements in the metaTAD topology relate to gene expression modifications: in particular, in neuronal differentiation models, topologically associated domains (TADs) tend to have coherent expression changes within architecturally conserved metaTAD niches. To identify the nature of architectural domains and their molecular determinants within a principled approach, we discuss models based on polymer physics. We show that basic concepts of interacting polymer physics explain chromatin spatial organization across chromosomal scales and cell types. The 3D structure of genomic loci can be derived with high accuracy and its molecular determinants identified by crossing information with epigenomic databases. In particular, we illustrate the case of the Sox9 locus, linked to human congenital disorders. The model in-silico predictions on the effects of genomic rearrangements are confirmed by available 5C data. That can help establishing new diagnostic tools for diseases linked to chromatin mis-folding, such as congenital disorders and cancer.
Model predictive control of P-time event graphs
Hamri, H.; Kara, R.; Amari, S.
2016-12-01
This paper deals with model predictive control of discrete event systems modelled by P-time event graphs. First, the model is obtained by using the dater evolution model written in the standard algebra. For the control law, we used finite-horizon model predictive control, and for the closed-loop control, infinite-horizon model predictive control (IH-MPC). The latter is an approach that calculates static feedback gains which ensure the stability of the closed-loop system while respecting the constraints on the control vector. The IH-MPC problem is formulated as a convex linear program subject to linear matrix inequality constraints. Finally, the proposed methodology is applied to a transportation system.
Prediction of cloud droplet number in a general circulation model
Ghan, S.J.; Leung, L.R. [Pacific Northwest National Lab., Richland, WA (United States)
1996-04-01
We have applied the Colorado State University Regional Atmospheric Modeling System (RAMS) bulk cloud microphysics parameterization to the treatment of stratiform clouds in the National Center for Atmospheric Research Community Climate Model (CCM2). The RAMS predicts mass concentrations of cloud water, cloud ice, rain and snow, and the number concentration of ice. We have introduced the droplet number conservation equation to predict droplet number and its dependence on aerosols.
Using connectome-based predictive modeling to predict individual behavior from brain connectivity.
Shen, Xilin; Finn, Emily S; Scheinost, Dustin; Rosenberg, Monica D; Chun, Marvin M; Papademetris, Xenophon; Constable, R Todd
2017-03-01
Neuroimaging is a fast-developing research area in which anatomical and functional images of human brains are collected using techniques such as functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), and electroencephalography (EEG). Technical advances and large-scale data sets have allowed for the development of models capable of predicting individual differences in traits and behavior using brain connectivity measures derived from neuroimaging data. Here, we present connectome-based predictive modeling (CPM), a data-driven protocol for developing predictive models of brain-behavior relationships from connectivity data using cross-validation. This protocol includes the following steps: (i) feature selection, (ii) feature summarization, (iii) model building, and (iv) assessment of prediction significance. We also include suggestions for visualizing the most predictive features (i.e., brain connections). The final result should be a generalizable model that takes brain connectivity data as input and generates predictions of behavioral measures in novel subjects, accounting for a considerable amount of the variance in these measures. It has been demonstrated that the CPM protocol performs as well as or better than many of the existing approaches in brain-behavior prediction. As CPM focuses on linear modeling and a purely data-driven approach, neuroscientists with limited or no experience in machine learning or optimization will find it easy to implement these protocols. Depending on the volume of data to be processed, the protocol can take 10-100 min for model building, 1-48 h for permutation testing, and 10-20 min for visualization of results.
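The four CPM steps listed above map directly onto code. A leave-one-out sketch on synthetic data, using only the positive-correlation feature set and a single-predictor regression line (the protocol also describes negative-feature and combined variants, omitted here):

```python
import random

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def cpm_loo(edges, behavior, r_thresh=0.3):
    """Leave-one-out connectome-based predictive modeling (positive-feature
    variant): select edges correlated with behavior in the training fold,
    summarize each subject as the sum of selected edges, fit a line, predict."""
    n = len(behavior)
    preds = []
    for held in range(n):
        train = [i for i in range(n) if i != held]
        # (i) feature selection on training subjects only
        sel = [e for e in range(len(edges[0]))
               if pearson([edges[i][e] for i in train],
                          [behavior[i] for i in train]) > r_thresh]
        # (ii) feature summarization: network strength = sum of selected edges
        strength = [sum(edges[i][e] for e in sel) for i in train]
        b_train = [behavior[i] for i in train]
        # (iii) model building: one-variable least-squares line
        ms, mb = sum(strength) / len(train), sum(b_train) / len(train)
        slope = (sum((s - ms) * (b - mb) for s, b in zip(strength, b_train))
                 / sum((s - ms) ** 2 for s in strength))
        intercept = mb - slope * ms
        # (iv) prediction for the held-out subject
        preds.append(intercept + slope * sum(edges[held][e] for e in sel))
    return preds

random.seed(1)
n_subj, n_edge = 40, 30
edges = [[random.gauss(0, 1) for _ in range(n_edge)] for _ in range(n_subj)]
# synthetic behavior driven by the first two edges plus noise
behavior = [e[0] + e[1] + random.gauss(0, 0.3) for e in edges]
preds = cpm_loo(edges, behavior)
print(round(pearson(preds, behavior), 2))
```

Step (iv's) prediction significance would, per the protocol, be assessed by permutation testing of the predicted-versus-observed correlation; that loop is omitted here for brevity.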
Mixed models for predictive modeling in actuarial science
Antonio, K.; Zhang, Y.
2012-01-01
We start with a general discussion of mixed (also called multilevel) models and continue with illustrating specific (actuarial) applications of this type of models. Technical details on (linear, generalized, non-linear) mixed models follow: model assumptions, specifications, estimation techniques
Tollenaar, N.; Van der Heijden, P.G.M.
2013-01-01
Using criminal population criminal conviction history information, prediction models are developed that predict three types of criminal recidivism: general recidivism, violent recidivism and sexual recidivism. The research question is whether prediction techniques from modern statistics, data mining
Catalytic cracking models developed for predictive control purposes
Dag Ljungqvist
1993-04-01
The paper deals with state-space modeling issues in the context of model-predictive control, with application to catalytic cracking. Emphasis is placed on model establishment, verification and online adjustment. Both the Fluid Catalytic Cracking (FCC) and the Residual Catalytic Cracking (RCC) units are discussed. Catalytic cracking units involve complex interactive processes which are difficult to operate and control in an economically optimal way. The strong nonlinearities of the FCC process mean that the control calculation should be based on a nonlinear model with the relevant constraints included. However, the model can be simple compared to the complexity of the catalytic cracking plant. Model validity is ensured by a robust online model adjustment strategy. Model-predictive control schemes based on linear convolution models have been successfully applied to the supervisory dynamic control of catalytic cracking units, and the control can be further improved by the SSPC scheme.
Multi input single output model predictive control of non-linear bio-polymerization process
Arumugasamy, Senthil Kumar; Ahmad, Z. [School of Chemical Engineering, Universiti Sains Malaysia, Engineering Campus, Seri Ampangan, 14300 Nibong Tebal, Seberang Perai Selatan, Pulau Pinang (Malaysia)]
2015-05-15
This paper focuses on Multi Input Single Output (MISO) Model Predictive Control of a bio-polymerization process, in which a mechanistic model is developed and linked with a feedforward neural network model to obtain a hybrid model (Mechanistic-FANN) of lipase-catalyzed ring-opening polymerization of ε-caprolactone (ε-CL) for Poly(ε-caprolactone) production. In this research, a state-space model was used, in which the inputs to the model were the reactor temperatures and reactor impeller speeds, and the outputs were the molecular weight of the polymer (M{sub n}) and the polymer polydispersity index. The state-space model for MISO was created using the System Identification Toolbox of Matlab™ and is used in the MISO MPC. Model predictive control (MPC) has been applied to predict the molecular weight of the biopolymer and consequently control it. The results show that the MPC is able to track the reference trajectory and give optimum movement of the manipulated variables.
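The receding-horizon idea behind MPC can be sketched for a scalar linear state-space model. This is an unconstrained toy version, not the hybrid Mechanistic-FANN model or Matlab toolchain of the paper; `horizon` and `r_weight` are arbitrary settings:

```python
import numpy as np

def mpc_step(A, B, x, ref, horizon=10, r_weight=0.01):
    """One receding-horizon move for the scalar linear model x+ = A*x + B*u.

    Stacks the predictions X = F*x + G*U over the horizon and minimizes
    ||X - ref||^2 + r_weight * ||U||^2 in closed form.
    """
    # Free response: F[i] = A^(i+1), so X_free = F * x
    F = np.array([A ** k for k in range(1, horizon + 1)])
    # Forced response: lower-triangular convolution matrix G
    G = np.zeros((horizon, horizon))
    for i in range(horizon):
        for j in range(i + 1):
            G[i, j] = A ** (i - j) * B
    # Least-squares solution of the quadratic tracking cost
    U = np.linalg.solve(G.T @ G + r_weight * np.eye(horizon),
                        G.T @ (ref - F * x))
    return U[0]  # apply only the first input, then re-solve at the next step
```

Applying only the first element of the optimal input sequence and re-solving at every step is the standard receding-horizon pattern referred to across these MPC abstracts.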
A PSO-SVM Model for Short-Term Travel Time Prediction Based on Bluetooth Technology
Qun Wang; Zhuyun Liu; Zhongren Peng
2015-01-01
The accurate prediction of travel time along roadways provides valuable traffic information for travelers and traffic managers. Aiming at short-term travel time forecasting on urban arterials, a prediction model (PSO-SVM) combining support vector machine (SVM) and particle swarm optimization (PSO) is developed. Travel time data collected with Bluetooth devices are used to calibrate the proposed model. Field experiments show that the PSO-SVM model's error indicators are lower than those of the single SVM model and the BP neural network (BPNN) model. In particular, the mean absolute percentage error (MAPE) of PSO-SVM is only 9.4534%, which is less than that of the single SVM model (12.2302%) and the BPNN model (15.3147%). The results indicate that the proposed PSO-SVM model is feasible and more effective than other models for short-term travel time prediction on urban arterials.
Prediction limits of mobile phone activity modelling
Kondor, Dániel; Grauwin, Sebastian; Kallus, Zsófia; Gódor, István; Sobolevsky, Stanislav; Ratti, Carlo
2017-02-01
Thanks to their widespread usage, mobile devices have become one of the main sensors of human behaviour, and the digital traces left behind can be used as a proxy to study urban environments. Exploring the nature of the spatio-temporal patterns of mobile phone activity could thus be a crucial step towards understanding the full spectrum of human activities. Using 10 months of mobile phone records from Greater London resolved in both space and time, we investigate the regularity of human telecommunication activity on urban scales. We evaluate several options for decomposing activity timelines into typical and residual patterns, accounting for the strong periodic and seasonal components. We carry out our analysis on various spatial scales, showing that regularity increases as we look at aggregated activity in larger spatial units with more activity in them. We examine the statistical properties of the residuals and show that they can be explained by noise and specific outliers. We also look at sources of deviations from the general trends, which we find to be explainable based on knowledge of the city structure and places of attraction. We show examples of how some of the outliers can be related to external factors such as specific social events.
Lippa, Richard A
2013-02-01
Do self-identified bisexual men and women actually show bisexual patterns of sexual attraction and interest? To answer this question, I studied bisexual men's and women's sexual attraction to photographed male and female "swimsuit models" that varied in attractiveness. Participants (663 college students and gay pride attendees, including 14 self-identified bisexual men and 17 self-identified bisexual women) rated their degree of sexual attraction to 34 male and 34 female swimsuit models. Participants' viewing times to models were unobtrusively assessed. Results showed that bisexual men and women showed bisexual patterns of attraction and viewing times to photo models, which strongly distinguished them from same-sex heterosexual and homosexual participants. In contrast to other groups, which showed evidence of greater male than female category specificity, bisexual men and women did not differ in category specificity. Results suggest that there are subsets of men and women who display truly bisexual patterns of sexual attraction and interest.
Technical note: A linear model for predicting δ¹³Cprotein.
Pestle, William J; Hubbe, Mark; Smith, Erin K; Stevenson, Joseph M
2015-08-01
Development of a model for the prediction of δ¹³Cprotein from δ¹³Ccollagen and Δ¹³Cap-co. Model-generated values could, in turn, serve as "consumer" inputs for multisource mixture modeling of paleodiet. Linear regression analysis of previously published controlled diet data facilitated the development of a mathematical model for predicting δ¹³Cprotein (and an experimentally generated error term) from isotopic data routinely generated during the analysis of osseous remains (δ¹³Cco and Δ¹³Cap-co). Regression analysis resulted in a two-term linear model, δ¹³Cprotein (‰) = (0.78 × δ¹³Cco) − (0.58 × Δ¹³Cap-co) − 4.7, with a correlation coefficient of 0.93 (r² = 0.86, P < 0.01) and an experimentally generated error term of ±1.9‰ for any predicted individual value of δ¹³Cprotein. This model was tested using isotopic data from Formative Period individuals from northern Chile's Atacama Desert. The model presented here appears to hold significant potential for the prediction of the carbon isotope signature of dietary protein using only such data as are routinely generated in the course of stable isotope analysis of human osseous remains. These predicted values are ideal for use in multisource mixture modeling of dietary protein source contribution. © 2015 Wiley Periodicals, Inc.
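The reported two-term regression can be applied directly. A minimal helper (values in per mil; the published ±1.9‰ error term is quoted, not propagated, and the function name is our own):

```python
def predict_d13c_protein(d13c_collagen, delta13c_ap_co):
    """Predict dietary-protein d13C (per mil) from bone-collagen d13C and
    the apatite-collagen spacing, using the published two-term regression.
    The reported prediction error is roughly +/- 1.9 per mil."""
    return 0.78 * d13c_collagen - 0.58 * delta13c_ap_co - 4.7
```

For example, a collagen value of −19.0‰ with a 4.0‰ apatite-collagen spacing predicts a dietary-protein value of about −21.8‰.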
Traffic Prediction Scheme based on Chaotic Models in Wireless Networks
Xiangrong Feng
2013-09-01
Based on the local support vector algorithm for chaotic time series analysis, the Hannan-Quinn information criterion and SAX symbolization are introduced, and a novel prediction algorithm is proposed and successfully applied to the prediction of wireless network traffic. For the problem of correctly predicting short-term flows with smaller data set sizes, the weakness of the algorithms during model construction is analyzed by comparison with the LDK prediction algorithm. It is verified that the Hannan-Quinn information criterion can be used to calculate the number of neighbor points, replacing the previous empirical method and thereby acquiring a more accurate prediction model. Finally, actual flow data are used to confirm the accuracy of the proposed algorithm, LSDHQ. Our experiments show that it also adapts better than the LDK algorithm.
Snopkowski, Kristin; Moya, Cristina; Sear, Rebecca
2014-08-07
Menopause remains an evolutionary puzzle, as humans are unique among primates in having a long post-fertile lifespan. One model proposes that intergenerational conflict in patrilocal populations favours female reproductive cessation. This model predicts that women should experience menopause earlier in groups with an evolutionary history of patrilocality compared with matrilocal groups. Using data from the Indonesia Family Life Survey, we test this model at multiple timescales: deep historical time, comparing age at menopause in ancestrally patrilocal Chinese Indonesians with ancestrally matrilocal Austronesian Indonesians; more recent historical time, comparing age at menopause in ethnic groups with differing postmarital residence within Indonesia and finally, analysing age at menopause at an individual-level, assuming a woman facultatively adjusts her age at menopause based on her postmarital residence. We find a significant effect only at the intermediate timescale where, contrary to predictions, ethnic groups with a history of multilocal postnuptial residence (where couples choose where to live) have the slowest progression to menopause, whereas matrilocal and patrilocal ethnic groups have similar progression rates. Multilocal residence may reduce intergenerational conflicts between women, thus influencing reproductive behaviour, but our results provide no support for the female-dispersal model of intergenerational conflict as an explanation of menopause.
Toward a predictive model for elastomer seals
Molinari, Nicola; Khawaja, Musab; Sutton, Adrian; Mostofi, Arash
Nitrile butadiene rubber (NBR) and hydrogenated-NBR (HNBR) are widely used elastomers, especially as seals in oil and gas applications. During exposure to well-hole conditions, ingress of gases causes degradation of performance, including mechanical failure. We use computer simulations to investigate this problem at two different length and time-scales. First, we study the solubility of gases in the elastomer using a chemically-inspired description of HNBR based on the OPLS all-atom force-field. Starting with a model of NBR, C=C double bonds are saturated with either hydrogen or intramolecular cross-links, mimicking the hydrogenation of NBR to form HNBR. We validate against trends for the mass density and glass transition temperature for HNBR as a function of cross-link density, and for NBR as a function of the fraction of acrylonitrile in the copolymer. Second, we study mechanical behaviour using a coarse-grained model that overcomes some of the length and time-scale limitations of an all-atom approach. Nanoparticle fillers added to the elastomer matrix to enhance mechanical response are also included. Our initial focus is on understanding the mechanical properties at the elevated temperatures and pressures experienced in well-hole conditions.
Ding, Jinliang; Chai, Tianyou; Wang, Hong
2011-03-01
This paper presents a novel offline modeling approach for product quality prediction in mineral processing, which consists of a number of unit processes in series. The prediction of the product quality of the whole mineral process (i.e., the mixed concentrate grade) plays an important role, and the establishment of its predictive model is a key issue for plantwide optimization. For this purpose, a hybrid modeling approach for mixed concentrate grade prediction is proposed, which consists of a linear model and a nonlinear model. The least-squares support vector machine is adopted to establish the nonlinear model. The inputs of the predictive model are the performance indices of each unit process, while the output is the mixed concentrate grade. In this paper, the model parameter selection is transformed into the shape control of the probability density function (PDF) of the modeling error. In this context, both PDF-control-based and minimum-entropy-based model parameter selection approaches are proposed. Indeed, this is the first time that the PDF shape control idea is used to deal with system modeling, where the key idea is to tune model parameters so that either the modeling error PDF is controlled to follow a target PDF or the modeling error entropy is minimized. The experimental results using real plant data and the comparison of the two approaches are discussed. The results show the effectiveness of the proposed approaches.
Using a Prediction Model to Manage Cyber Security Threats
Venkatesh Jaganathan
2015-01-01
Cyber-attacks are an important issue faced by all organizations. Securing information systems is critical. Organizations should be able to understand the ecosystem and predict attacks. Predicting attacks quantitatively should be part of risk management. The cost impact due to worms, viruses, or other malicious software is significant. This paper proposes a mathematical model to predict the impact of an attack based on significant factors that influence cyber security. This model also considers the environmental information required. It is generalized and can be customized to the needs of the individual organization.
Active diagnosis of hybrid systems - A model predictive approach
Tabatabaeipour, Seyed Mojtaba; Ravn, Anders P.; Izadi-Zamanabadi, Roozbeh;
2009-01-01
A method for active diagnosis of hybrid systems is proposed. The main idea is to predict the future output of both normal and faulty model of the system; then at each time step an optimization problem is solved with the objective of maximizing the difference between the predicted normal and faulty...... outputs constrained by tolerable performance requirements. As in standard model predictive control, the first element of the optimal input is applied to the system and the whole procedure is repeated until the fault is detected by a passive diagnoser. It is demonstrated how the generated excitation signal...
Aero-acoustic noise of wind turbines. Noise prediction models
Maribo Pedersen, B. [ed.]
1997-12-31
Semi-empirical and CAA (Computational AeroAcoustics) noise prediction techniques are the subject of this expert meeting, which presents and discusses models and methods. The meeting may provide answers to the following questions: What noise sources are the most important? How are the sources best modeled? What needs to be done to achieve better predictions? Does it boil down to correct prediction of the unsteady aerodynamics around the rotor? Or is the difficult part converting the aerodynamics into acoustics? (LN)
Model Predictive Control of a Wave Energy Converter
Andersen, Palle; Pedersen, Tom Søndergård; Nielsen, Kirsten Mølgaard;
2015-01-01
In this paper reactive control and Model Predictive Control (MPC) for a Wave Energy Converter (WEC) are compared. The analysis is based on a WEC from Wave Star A/S designed as a point absorber. The model predictive controller uses wave models based on the dominating sea states combined with a model...... connecting undisturbed wave sequences to sequences of torque. Losses in the conversion from mechanical to electrical power are taken into account in two ways. Conventional reactive controllers are tuned for each sea state with the assumption that the converter has the same efficiency back and forth. MPC...
Modelling and prediction of non-stationary optical turbulence behaviour
Doelman, Niek; Osborn, James
2016-07-01
There is a strong need to model the temporal fluctuations in turbulence parameters, for instance for scheduling, simulation and prediction purposes. This paper aims at modelling the dynamic behaviour of the turbulence coherence length r0, utilising measurement data from the Stereo-SCIDAR instrument installed at the Isaac Newton Telescope at La Palma. Based on an estimate of the power spectral density function, a low order stochastic model to capture the temporal variability of r0 is proposed. The impact of this type of stochastic model on the prediction of the coherence length behaviour is shown.
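A low-order stochastic model of this kind can be illustrated with a first-order autoregressive process for r0 fluctuations. This is an illustrative assumption; the paper fits its model to the Stereo-SCIDAR power spectral density, and the coefficients below are invented:

```python
import numpy as np

def simulate_r0(n_steps, mean_r0=0.10, phi=0.95, sigma=0.005, seed=0):
    """Generate an AR(1) time series of coherence-length values (meters):
    r0[t] = mean + phi * (r0[t-1] - mean) + Gaussian noise."""
    rng = np.random.default_rng(seed)
    r0 = np.empty(n_steps)
    r0[0] = mean_r0
    for t in range(1, n_steps):
        r0[t] = mean_r0 + phi * (r0[t - 1] - mean_r0) + rng.normal(0, sigma)
    return r0

def predict_next(r0_series, mean_r0=0.10, phi=0.95):
    """One-step-ahead prediction under the same AR(1) model."""
    return mean_r0 + phi * (r0_series[-1] - mean_r0)
```

The persistence parameter `phi` controls how far ahead such a model retains predictive skill over the climatological mean, which is exactly the scheduling question the abstract raises.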
Research on Drag Torque Prediction Model for the Wet Clutches
[Anonymous]
2008-01-01
Considering the surface tension effect and the centrifugal effect, a mathematical model based on the Reynolds equation for predicting the drag torque of disengaged wet clutches is presented. The model indicates that the equivalent radius is a function of clutch speed and flow rate. The drag torque reaches its peak at a critical speed; above this speed, drag torque drops due to the shrinking of the oil film. The model also shows how viscosity and flow rate affect drag torque. Experimental results indicate that the model is reasonable and performs well for predicting the drag torque peak.
Model output statistics applied to wind power prediction
Joensen, A.; Giebel, G.; Landberg, L. [Risoe National Lab., Roskilde (Denmark); Madsen, H.; Nielsen, H.A. [The Technical Univ. of Denmark, Dept. of Mathematical Modelling, Lyngby (Denmark)
1999-03-01
Being able to predict the output of a wind farm online for a day or two in advance has significant advantages for utilities, such as a better possibility to schedule fossil-fuelled power plants and a better position on electricity spot markets. In this paper prediction methods based on Numerical Weather Prediction (NWP) models are considered. The spatial resolution used in NWP models implies that these predictions are not valid locally at a specific wind farm. Furthermore, due to the non-stationary nature and complexity of the processes in the atmosphere, and occasional changes of NWP models, the deviation between the predicted and the measured wind will be time dependent. If observational data are available, and if the deviation between the predictions and the observations exhibits systematic behavior, this should be corrected for; if statistical methods are used, this approach is usually referred to as MOS (Model Output Statistics). The influence of atmospheric turbulence intensity, topography, prediction horizon length and auto-correlation of wind speed and power is considered, and to take the time variations into account, adaptive estimation methods are applied. Three estimation techniques are considered and compared: extended Kalman filtering, recursive least squares, and a new modified recursive least squares algorithm. (au) EU-JOULE-3. 11 refs.
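The adaptive-estimation idea behind MOS correction can be sketched with an exponentially forgetting recursive least squares estimator. This is a generic sketch, not the paper's modified algorithm; the forgetting factor and initial covariance are assumed settings:

```python
import numpy as np

class RecursiveLeastSquares:
    """Exponentially forgetting recursive least squares (RLS) for adapting
    a linear correction of NWP forecasts as observations arrive."""

    def __init__(self, n_params, forgetting=0.99):
        self.theta = np.zeros(n_params)   # regression coefficients
        self.P = np.eye(n_params) * 1e3   # scaled inverse information matrix
        self.lam = forgetting             # forgetting factor (lambda)

    def update(self, x, y):
        """Fold in one (features, observation) pair; return coefficients."""
        x = np.asarray(x, dtype=float)
        Px = self.P @ x
        gain = Px / (self.lam + x @ Px)
        self.theta = self.theta + gain * (y - x @ self.theta)
        self.P = (self.P - np.outer(gain, Px)) / self.lam
        return self.theta
```

With `forgetting` just below 1, older forecast errors are discounted, which is how such a scheme tracks the time-dependent deviations the abstract describes.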
Development and application of chronic disease risk prediction models.
Oh, Sun Min; Stefani, Katherine M; Kim, Hyeon Chang
2014-07-01
Currently, non-communicable chronic diseases are a major cause of morbidity and mortality worldwide, and a large proportion of chronic diseases are preventable through risk factor management. However, the prevention efficacy at the individual level is not yet satisfactory. Chronic disease prediction models have been developed to assist physicians and individuals in clinical decision-making. A chronic disease prediction model assesses multiple risk factors together and estimates an absolute disease risk for the individual. Accurate prediction of an individual's future risk for a certain disease enables the comparison of benefits and risks of treatment, the costs of alternative prevention strategies, and selection of the most efficient strategy for the individual. A large number of chronic disease prediction models, especially targeting cardiovascular diseases and cancers, have been suggested, and some of them have been adopted in the clinical practice guidelines and recommendations of many countries. Although few chronic disease prediction tools have been suggested in the Korean population, their clinical utility is not as high as expected. This article reviews methodologies that are commonly used for developing and evaluating a chronic disease prediction model and discusses the current status of chronic disease prediction in Korea.
Modeling of Cancer Stem Cell State Transitions Predicts Therapeutic Response.
Mary E Sehl
Cancer stem cells (CSCs) possess the capacity both to self-renew and to generate all cells within a tumor, and are thought to drive tumor recurrence. Targeting the stem cell niche to eradicate CSCs represents an important area of therapeutic development. The complex nature of the many interacting elements of the stem cell niche, including both intracellular signals and microenvironmental growth factors and cytokines, creates a challenge in choosing which elements to target, alone or in combination. Stochastic simulation techniques allow for the careful study of complex systems in biology and medicine and are ideal for the investigation of strategies aimed at CSC eradication. We present a mathematical model of the breast cancer stem cell (BCSC) niche to predict population dynamics during carcinogenesis and in response to treatment. Using data from cell line and mouse xenograft experiments, we estimate rates of interconversion between mesenchymal and epithelial states in BCSCs and find that EMT/MET transitions occur frequently. We examine bulk tumor growth dynamics in response to alterations in the rate of symmetric self-renewal of BCSCs and find that small changes in BCSC behavior can give rise to the Gompertzian growth pattern observed in breast tumors. Finally, we examine stochastic reaction kinetic simulations in which elements of the breast cancer stem cell niche are inhibited individually and in combination. We find that slowing self-renewal and disrupting the positive feedback loop between IL-6, Stat3 activation, and NF-κB signaling by simultaneous inhibition of IL-6 and HER2 is the most effective combination to eliminate both mesenchymal and epithelial populations of BCSCs. Predictions from our model and simulations show excellent agreement with experimental data showing the efficacy of combined HER2 and IL-6 blockade in reducing BCSC populations. Our findings will be directly examined in a planned clinical trial of combined HER2 and IL-6 targeted
Evaluation of burst pressure prediction models for line pipes
Zhu, Xian-Kui, E-mail: zhux@battelle.org [Battelle Memorial Institute, 505 King Avenue, Columbus, OH 43201 (United States); Leis, Brian N. [Battelle Memorial Institute, 505 King Avenue, Columbus, OH 43201 (United States)
2012-01-15
Accurate prediction of burst pressure plays a central role in engineering design and integrity assessment of oil and gas pipelines. Theoretical and empirical solutions for such prediction are evaluated in this paper relative to a burst pressure database comprising more than 100 tests covering a variety of pipeline steel grades and pipe sizes. Solutions considered include three based on plasticity theory for the end-capped, thin-walled, defect-free line pipe subjected to internal pressure in terms of the Tresca, von Mises, and ZL (or Zhu-Leis) criteria, one based on a cylindrical instability stress (CIS) concept, and a large group of analytical and empirical models previously evaluated by Law and Bowie (International Journal of Pressure Vessels and Piping, 84, 2007: 487-492). It is found that these models can be categorized into either a Tresca family or a von Mises family of solutions, except for the Margetson and Zhu-Leis models. The viability of predictions is measured via statistical analyses in terms of a mean error and its standard deviation. Consistent with an independent parallel evaluation using another large database, the Zhu-Leis solution is found best for predicting burst pressure, including consideration of strain hardening effects, while the Tresca strength solutions, including Barlow, maximum shear stress, Turner, and the ASME boiler code, provide reasonably good predictions for the class of line-pipe steels with intermediate strain hardening response. Highlights: This paper evaluates different burst pressure prediction models for line pipes. The existing models are categorized into two major groups of Tresca and von Mises solutions. Prediction quality of each model is assessed statistically using a large full-scale burst test database. The Zhu-Leis solution is identified as the best predictive model.
Development of Interpretable Predictive Models for BPH and Prostate Cancer
Bermejo, Pablo; Vivo, Alicia; Tárraga, Pedro J; Rodríguez-Montes, JA
2015-01-01
BACKGROUND Traditional methods for deciding whether to recommend a patient for a prostate biopsy are based on cut-off levels of stand-alone markers such as prostate-specific antigen (PSA) or any of its derivatives. However, in the last decade we have seen the increasing use of predictive models that combine several predictors in a non-linear manner and are better able to predict prostate cancer (PC), but these fail to help the clinician distinguish between PC and benign prostate hyperplasia (BPH) patients. We construct two new models that are capable of predicting both PC and BPH. METHODS An observational study was performed on 150 patients with PSA ≥3 ng/mL and age >50 years. We built a decision tree and a logistic regression model, validated with the leave-one-out methodology, in order to predict PC or BPH, or reject both. RESULTS Statistical dependence with PC and BPH was found for prostate volume (P-value < 0.001), PSA (P-value < 0.001), international prostate symptom score (IPSS; P-value < 0.001), digital rectal examination (DRE; P-value < 0.001), age (P-value < 0.002), antecedents (P-value < 0.006), and meat consumption (P-value < 0.08). The two predictive models that were constructed selected a subset of these, namely volume, PSA, DRE, and IPSS, obtaining an area under the ROC curve (AUC) between 72% and 80% for both PC and BPH prediction. CONCLUSION PSA and volume together help to build predictive models that accurately distinguish among PC, BPH, and patients without either of these pathologies. Our decision tree and logistic regression models outperform the AUC obtained in the compared studies. Used as decision support, these models might significantly reduce the number of unnecessary biopsies. PMID:25780348
Association Rule-based Predictive Model for Machine Failure in Industrial Internet of Things
Kwon, Jung-Hyok; Lee, Sol-Bee; Park, Jaehoon; Kim, Eui-Jik
2017-09-01
This paper proposes an association rule-based predictive model for machine failure in the industrial Internet of things (IIoT), which can accurately predict machine failure in a real manufacturing environment by investigating the relationship between the cause and type of machine failure. To develop the predictive model, we consider three major steps: 1) binarization, 2) rule creation, and 3) visualization. The binarization step translates item values in a dataset into ones and zeros; the rule creation step then creates association rules as IF-THEN structures using the Lattice model and the Apriori algorithm. Finally, the created rules are visualized in various ways for users' understanding. An experimental implementation was conducted using R Studio version 3.3.2. The results show that the proposed predictive model realistically predicts machine failure based on association rules.
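The rule-creation step can be illustrated with a compact Apriori implementation over binarized failure records. This is a generic sketch; the Lattice-model details and R-based tooling of the paper are not reproduced, and the support/confidence thresholds and item names in the example are invented:

```python
from itertools import combinations

def apriori_rules(transactions, min_support=0.5, min_confidence=0.8):
    """Mine frequent itemsets with Apriori, then emit IF-THEN rules as
    (antecedent, consequent, confidence) triples."""
    transactions = [set(t) for t in transactions]
    n = len(transactions)
    support = {}
    k = 1
    current = [frozenset([i]) for i in {i for t in transactions for i in t}]
    while current:
        # count each candidate and keep those meeting minimum support
        frequent = {}
        for cand in current:
            s = sum(cand <= t for t in transactions) / n
            if s >= min_support:
                frequent[cand] = s
        support.update(frequent)
        k += 1
        # candidate generation: unions of frequent itemsets that have size k
        current = list({a | b for a in frequent for b in frequent
                        if len(a | b) == k})
    rules = []
    for itemset, s in support.items():
        for r in range(1, len(itemset)):
            for antecedent in map(frozenset, combinations(itemset, r)):
                if antecedent in support:
                    conf = s / support[antecedent]
                    if conf >= min_confidence:
                        rules.append((set(antecedent),
                                      set(itemset - antecedent), conf))
    return rules
```

For example, binarized records such as `{"vibration", "overheat", "failure"}` yield rules like IF vibration THEN failure with their confidence values.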
Prediction and assimilation of surf-zone processes using a Bayesian network: Part I: Forward models
Plant, Nathaniel G.; Holland, K. Todd
2011-01-01
Prediction of coastal processes, including waves, currents, and sediment transport, can be obtained from a variety of detailed geophysical-process models with many simulations showing significant skill. This capability supports a wide range of research and applied efforts that can benefit from accurate numerical predictions. However, the predictions are only as accurate as the data used to drive the models and, given the large temporal and spatial variability of the surf zone, inaccuracies in data are unavoidable such that useful predictions require corresponding estimates of uncertainty. We demonstrate how a Bayesian-network model can be used to provide accurate predictions of wave-height evolution in the surf zone given very sparse and/or inaccurate boundary-condition data. The approach is based on a formal treatment of a data-assimilation problem that takes advantage of significant reduction of the dimensionality of the model system. We demonstrate that predictions of a detailed geophysical model of the wave evolution are reproduced accurately using a Bayesian approach. In this surf-zone application, forward prediction skill was 83%, and uncertainties in the model inputs were accurately transferred to uncertainty in output variables. We also demonstrate that if modeling uncertainties were not conveyed to the Bayesian network (i.e., perfect data or model were assumed), then overly optimistic prediction uncertainties were computed. More consistent predictions and uncertainties were obtained by including model-parameter errors as a source of input uncertainty. Improved predictions (skill of 90%) were achieved because the Bayesian network simultaneously estimated optimal parameters while predicting wave heights.
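At its core, propagating uncertain boundary-condition data to a posterior over predicted quantities reduces, for a single discretized variable, to a Bayes update. The following is a toy illustration with invented numbers, far simpler than the paper's multi-node network:

```python
import numpy as np

def posterior(prior, likelihood):
    """Discrete Bayes update: elementwise product of prior and likelihood,
    renormalized to a probability vector."""
    unnorm = np.asarray(prior) * np.asarray(likelihood)
    return unnorm / unnorm.sum()

# Three wave-height bins (low / medium / high); all numbers hypothetical.
climatological_prior = np.array([0.5, 0.3, 0.2])
observation_likelihood = np.array([0.1, 0.6, 0.3])  # noisy offshore gauge
updated = posterior(climatological_prior, observation_likelihood)
```

A sparse or inaccurate observation simply enters as a flatter likelihood, so the posterior stays closer to the prior, which is how uncertainty in the inputs is carried through to the predictions.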