Operational results from a physical power prediction model
Energy Technology Data Exchange (ETDEWEB)
Landberg, L [Risoe National Lab., Meteorology and Wind Energy Dept., Roskilde (Denmark)]
1999-03-01
This paper describes a system which predicts the expected power output of a number of wind farms. The system is automatic and operates on-line. The paper quantifies the accuracy of the predictions and gives examples of the performance for specific storm events. An actual implementation of the system is described and its robustness demonstrated. (au) 11 refs.
Directory of Open Access Journals (Sweden)
Ruixian Fang
2016-09-01
This work uses the adjoint sensitivity model of the counter-flow cooling tower derived in the accompanying Part I to obtain the expressions and relative numerical rankings of the sensitivities, to all model parameters, of the following model responses: (i) outlet air temperature; (ii) outlet water temperature; (iii) outlet water mass flow rate; and (iv) air outlet relative humidity. These sensitivities are subsequently used within the "predictive modeling for coupled multi-physics systems" (PM_CMPS) methodology to obtain explicit formulas for the predicted optimal nominal values for the model responses and parameters, along with reduced predicted standard deviations for the predicted model parameters and responses. These explicit formulas embody the assimilation of experimental data and the "calibration" of the model's parameters. The results presented in this work demonstrate that the PM_CMPS methodology reduces the predicted standard deviations to values that are smaller than either the computed or the experimentally measured ones, even for responses (e.g., the outlet water flow rate) for which no measurements are available. These improvements stem from the global characteristics of the PM_CMPS methodology, which combines all of the available information simultaneously in phase-space, as opposed to combining it sequentially, as in current data assimilation procedures.
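The variance-reduction claim above has a familiar scalar analogue: when two independent estimates of the same response are combined with inverse-variance weights, the combined standard deviation is smaller than either input. A minimal sketch, with all numbers illustrative rather than taken from the paper:

```python
import math

def combine(mean_a, sd_a, mean_b, sd_b):
    """Inverse-variance (best linear unbiased) combination of two
    independent estimates of the same quantity."""
    w_a = 1.0 / sd_a**2
    w_b = 1.0 / sd_b**2
    mean = (w_a * mean_a + w_b * mean_b) / (w_a + w_b)
    sd = math.sqrt(1.0 / (w_a + w_b))
    return mean, sd

# computed outlet air temperature vs. a measurement (illustrative numbers)
m, s = combine(25.0, 0.8, 24.4, 0.6)
# the combined sd is smaller than either the computed or the measured one
```

The full PM_CMPS methodology does this jointly for all parameters and responses in phase-space, but the scalar case shows why the predicted standard deviations shrink below both the computed and measured values.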
Directory of Open Access Journals (Sweden)
Indra Djati Sidi
2017-12-01
The model error N has been introduced to denote the discrepancy between the measured and predicted capacity of a pile foundation. This model error is recognized as epistemic uncertainty in pile capacity prediction. The statistics of N have been evaluated based on data gathered from various sites and may be considered only as a general-error trend in capacity prediction, providing crude estimates of the model error in the absence of more specific data from the site. The results of even a single load test to failure should provide direct evidence of the pile capacity at a given site. Bayes' theorem has been used as a rational basis for combining new data with previous data to revise assessments of uncertainty and reliability. This study is devoted to the development of procedures for updating the model error N, and subsequently the predicted pile capacity, with the results of a single failure test.
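The Bayes'-theorem updating step can be illustrated with a conjugate normal sketch. The prior statistics of N, the observed capacity ratio, and the measurement scatter below are all invented for illustration and are not the paper's values:

```python
def update_model_error(prior_mean, prior_sd, obs, obs_sd):
    """Conjugate normal Bayesian update of the model error N
    (ratio of measured to predicted pile capacity) with a single
    load-test observation of known measurement scatter."""
    prior_prec = 1.0 / prior_sd**2
    obs_prec = 1.0 / obs_sd**2
    post_prec = prior_prec + obs_prec
    post_mean = (prior_prec * prior_mean + obs_prec * obs) / post_prec
    return post_mean, (1.0 / post_prec) ** 0.5

# general-error prior N ~ Normal(1.0, 0.3); one failure test gives
# measured/predicted = 1.2 with assumed scatter 0.2 (hypothetical numbers)
mean, sd = update_model_error(1.0, 0.3, 1.2, 0.2)
```

The posterior mean moves toward the site-specific test result and the posterior standard deviation shrinks below both inputs, which is exactly the mechanism by which a single failure test sharpens the predicted pile capacity.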
New Results on Robust Model Predictive Control for Time-Delay Systems with Input Constraints
Directory of Open Access Journals (Sweden)
Qing Lu
2014-01-01
This paper investigates the problem of model predictive control for a class of nonlinear systems subject to state delays and input constraints. The time-varying delay is considered with both upper and lower bounds, and the uncertainty is of polytopic type. A new model is proposed to approximate the delay. For the state-feedback MPC design objective, we formulate an optimization problem. Under model transformation, a new model predictive controller is designed such that the robust asymptotic stability of the closed-loop system can be guaranteed. Finally, the applicability of the presented results is demonstrated by a practical example.
A Risk Prediction Model for Sporadic CRC Based on Routine Lab Results.
Boursi, Ben; Mamtani, Ronac; Hwang, Wei-Ting; Haynes, Kevin; Yang, Yu-Xiao
2016-07-01
Current risk scores for colorectal cancer (CRC) are based on demographic and behavioral factors and have limited predictive values. To develop a novel risk prediction model for sporadic CRC using clinical and laboratory data in electronic medical records, we conducted a nested case-control study in a UK primary care database. Cases included those with a diagnostic code of CRC, aged 50-85. Each case was matched with four controls using incidence density sampling. CRC predictors were examined using univariate conditional logistic regression, and variables meeting the p-value threshold were carried forward. A prediction model which included age, sex, height, obesity, ever smoking, alcohol dependence, and previous screening colonoscopy had an AUC of 0.58 (0.57-0.59) with poor goodness of fit. A laboratory-based model including hematocrit, MCV, lymphocytes, and neutrophil-lymphocyte ratio (NLR) had an AUC of 0.76 (0.76-0.77) and a McFadden's R2 of 0.21 with an NRI of 47.6%. A combined model including sex, hemoglobin, MCV, white blood cells, platelets, NLR, and oral hypoglycemic use had an AUC of 0.80 (0.79-0.81) with a McFadden's R2 of 0.27 and an NRI of 60.7%. Similar results were shown in an internal validation set. A laboratory-based risk model had good predictive power for sporadic CRC risk.
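The AUC values reported above are rank statistics: the probability that a randomly chosen case receives a higher risk score than a randomly chosen control. A small self-contained sketch with made-up scores:

```python
def auc(pos_scores, neg_scores):
    """Empirical AUC: probability that a random case scores above a
    random control (ties count one half)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# toy risk scores from a hypothetical laboratory-based model
cases    = [0.9, 0.8, 0.7, 0.4]
controls = [0.6, 0.5, 0.3, 0.2]
print(auc(cases, controls))  # → 0.875
```

An AUC of 0.80, as for the combined model above, means four out of five randomly drawn case-control pairs are ranked correctly.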
Field studies of submerged-diffuser thermal plumes with comparisons to predictive model results
International Nuclear Information System (INIS)
Frigo, A.A.; Paddock, R.A.; Ditmars, J.D.
1976-01-01
Thermal plumes from submerged discharges of cooling water from two power plants on Lake Michigan were studied. The system for the acquisition of water temperatures and ambient conditions permitted the three-dimensional structure of the plumes to be determined. The Zion Nuclear Power Station has two submerged discharge structures separated by only 94 m. Under conditions of flow from both structures, interaction between the two plumes resulted in larger thermal fields than would be predicted by the superposition of single non-interacting plumes. Maximum temperatures in the near-field region of the plume compared favorably with mathematical model predictions. A comparison of physical-model predictions for the plume at the D. C. Cook Nuclear Plant with prototype measurements indicated good agreement in the near-field region, but differences occurred in the far-field, where similitude was not preserved.
Luebeck, E Georg; Moolgavkar, Suresh H; Liu, Amy Y; Boynton, Alanna; Ulrich, Cornelia M
2008-06-01
Folate is essential for nucleotide synthesis, DNA replication, and methyl group supply. Low-folate status has been associated with increased risks of several cancer types, suggesting a chemopreventive role of folate. However, recent findings on giving folic acid to patients with a history of colorectal polyps raise concerns about the efficacy and safety of folate supplementation and the long-term health effects of folate fortification. Results suggest that undetected precursor lesions may progress under folic acid supplementation, consistent with the role of folate in nucleotide synthesis and cell proliferation. To better understand the possible trade-offs between the protective effects of folic acid due to decreased mutation rates and possibly concomitant detrimental effects due to increased cell proliferation, we used a biologically based mathematical model of colorectal carcinogenesis. We predict changes in cancer risk based on timing of treatment start and the potential effect of folic acid on cell proliferation and mutation rates. Changes in colorectal cancer risk in response to folic acid supplementation are likely a complex function of treatment start, duration, and effect on cell proliferation and mutation rates. Predicted colorectal cancer incidence rates under supplementation are mostly higher than rates without folic acid supplementation unless supplementation is initiated early in life (before age 20 years). To the extent that this model predicts reality, it indicates that the effect on cancer risk when starting folic acid supplementation late in life is small, yet mostly detrimental. Experimental studies are needed to provide direct evidence for this dual role of folate in colorectal cancer and to validate and improve the model predictions.
CSIR Research Space (South Africa)
Scott, DF
1997-04-01
Mathematical models to predict runoff reductions due to afforestation are presented. The models are intended to aid decision-makers and planners who need to evaluate the water requirements of competing land uses at a district or regional scale. Five...
DEFF Research Database (Denmark)
Ahmad, Amais; Zachariasen, Camilla; Christiansen, Lasse Engbo
2016-01-01
Background: Combination treatment is increasingly used to fight infections caused by bacteria resistant to two or more antimicrobials. While multiple studies have evaluated treatment strategies to minimize the emergence of resistant strains for single antimicrobial treatment, fewer studies have ... Data were generated by a mathematical model of the competitive growth of multiple strains of Escherichia coli. Results: Simulation studies showed that sequential use of tetracycline and ampicillin reduced the level of double resistance when compared to the combination treatment. The cycling frequency did not play a role in suppressing the growth of resistant strains, but the specific order of the two antimicrobials did. Predictions made from the study could be used to redesign multidrug treatment strategies not only for intramuscular treatment in pigs, but also for other dosing routes.
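A toy version of such a competitive-growth simulation, comparing sequential versus combination dosing for a susceptible and a single-drug-resistant strain. All rates and loads below are invented for illustration; the sketch does not reproduce the paper's E. coli model or its conclusions:

```python
import math

def simulate(schedule, hours_per_step=1.0):
    """Euler-style integration of exponential growth/kill for a susceptible
    strain (S) and a strain resistant to drug A (R_A) under a dosing schedule.
    schedule: list of sets of drugs present in each interval, e.g. [{"A"}, {"B"}].
    All rates and initial loads are hypothetical."""
    growth = 0.5                       # net growth rate without drug (1/h)
    kill = {"A": 1.2, "B": 1.2}        # kill rate when the strain is susceptible
    resistant_to = {"S": set(), "R_A": {"A"}}
    pop = {"S": 1e6, "R_A": 1e2}       # initial bacterial loads (CFU)
    for drugs in schedule:
        for strain in pop:
            rate = growth - sum(kill[d] for d in drugs
                                if d not in resistant_to[strain])
            pop[strain] *= math.exp(rate * hours_per_step)
    return pop

seq = simulate([{"A"}, {"B"}] * 12)    # alternate A and B hourly for 24 h
comb = simulate([{"A", "B"}] * 24)     # both drugs together for 24 h
```

The paper's actual model additionally tracks multiple strains, double resistance, and pharmacokinetics of intramuscular dosing in pigs; this sketch only shows the basic machinery of strain-wise growth rates under a drug schedule.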
Christianen, Miranda E M C; Schilstra, Cornelis; Beetz, Ivo; Muijs, Christina T; Chouvalova, Olga; Burlage, Fred R; Doornaert, Patricia; Koken, Phil W; Leemans, C René; Rinkel, Rico N P M; de Bruijn, Marieke J; de Bock, G H; Roodenburg, Jan L N; van der Laan, Bernard F A M; Slotman, Ben J; Verdonck-de Leeuw, Irma M; Bijl, Hendrik P; Langendijk, Johannes A
2012-10-01
The purpose of this large multicentre prospective cohort study was to identify which dose-volume histogram parameters and pre-treatment factors are most important to predict physician-rated and patient-rated radiation-induced swallowing dysfunction (RISD), in order to develop predictive models for RISD after curative (chemo)radiotherapy ((CH)RT). The study population consisted of 354 consecutive head and neck cancer patients treated with (CH)RT. The primary endpoint was grade 2 or higher swallowing dysfunction according to the RTOG/EORTC late radiation morbidity scoring criteria at 6 months after (CH)RT. The secondary endpoints were patient-rated swallowing complaints as assessed with the EORTC QLQ-H&N35 questionnaire. To select the most predictive variables, a multivariate logistic regression analysis with bootstrapping was used. At 6 months after (CH)RT the bootstrapping procedure revealed that a model based on the mean dose to the superior pharyngeal constrictor muscle (PCM) and the mean dose to the supraglottic larynx was most predictive. For the secondary endpoints different predictive models were found: for problems with swallowing liquids the most predictive factors were the mean dose to the supraglottic larynx and radiation technique (3D-CRT versus IMRT). For problems with swallowing soft food the mean dose to the middle PCM, age (18-65 versus >65 years), tumour site (naso/oropharynx versus other sites) and radiation technique (3D-CRT versus IMRT) were the most predictive factors. For problems with swallowing solid food the most predictive factors were the mean dose to the superior PCM, the mean dose to the supraglottic larynx and age (18-65 versus >65 years). For choking when swallowing, the V60 of the oesophageal inlet muscle and the mean dose to the supraglottic larynx were the most predictive factors. Physician-rated and patient-rated RISD in head and neck cancer patients treated with (CH)RT cannot be predicted with univariate relationships between the …
Bonne, F.; Alamir, M.; Bonnay, P.
2017-02-01
This paper deals with multivariable constrained model predictive control for Warm Compression Stations (WCS). WCSs are subject to numerous constraints (limits on pressures, actuators) that need to be satisfied using appropriate algorithms. The strategy is to replace all the PID loops controlling the WCS with an optimally designed model-based multivariable loop. This new strategy leads to high stability and fast rejection of disturbances such as those induced by a turbine or a compressor stop, a key aspect in the case of large-scale cryogenic refrigeration. The proposed control scheme can be used to achieve precise control of pressures in normal operation, or to avoid reaching stopping criteria (such as excessive pressures) under high disturbances (such as the pulsed heat loads expected in the cryogenic cooling systems of future fusion reactors like the International Thermonuclear Experimental Reactor (ITER) or the Japan Torus-60 Super Advanced fusion experiment (JT-60SA)). The paper details the simulator used to validate this new control scheme and the associated simulation results on the SBTs WCS. This work is partially supported through the French National Research Agency (ANR), task agreement ANR-13-SEED-0005.
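A receding-horizon controller with actuator limits can be sketched in a few lines for a scalar toy plant. The plant, cost weights, and projected-gradient solver below are illustrative stand-ins, not the WCS model or the paper's algorithm:

```python
def mpc_step(x0, a=1.1, b=0.5, horizon=10, u_max=1.0,
             iters=400, lr=0.05, r=0.01):
    """One step of a toy constrained MPC for x[k+1] = a*x[k] + b*u[k]:
    minimise sum(x^2 + r*u^2) over the horizon by projected gradient descent,
    clipping each input to the actuator limit |u| <= u_max.
    (A sketch only; a real WCS controller solves a multivariable QP.)"""
    u = [0.0] * horizon

    def cost(u):
        x, j = x0, 0.0
        for uk in u:
            x = a * x + b * uk
            j += x * x + r * uk * uk
        return j

    eps = 1e-6
    for _ in range(iters):
        base = cost(u)
        # finite-difference gradient of the horizon cost w.r.t. each input
        grad = []
        for i in range(horizon):
            up = list(u)
            up[i] += eps
            grad.append((cost(up) - base) / eps)
        # gradient step, then projection onto the actuator constraint box
        u = [min(u_max, max(-u_max, ui - lr * gi)) for ui, gi in zip(u, grad)]
    return u[0]   # apply only the first input (receding horizon)

u0 = mpc_step(2.0)  # large positive state -> input saturates at the limit
```

At each control period the optimization is re-solved from the new measured state, which is how the scheme respects pressure and actuator limits while rejecting disturbances.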
Application of Physics Model in prediction of the Hellas National election results
Directory of Open Access Journals (Sweden)
L. Magafas
2009-01-01
In this paper we use a new scientific field called "DemoscopoPhysics", based on chaos theory, to predict the Hellenic National election results in the form of time series for the Hellenic political parties New Democracy (ND), Panhellenic Socialistic Movement (PASOK), Hellenic Communistic Party (KKE), Coalition of the Radical Left (SYRIZA) and Popular Orthodox Rally (LAOS). Using the results of our previous article we reconstruct the corresponding strange attractor for each political party, achieving up to a 30-time-step out-of-sample prediction.
Wijk, van M.T.
2007-01-01
Ecosystem functioning is the result of processes working at a hierarchy of scales. The representation of these processes in a model that is mathematically tractable and ecologically meaningful is a big challenge. In this paper I describe an individual-based model, PLACO (PLAnt COmpetition), that …
Application of Physics Model in prediction of the Hellas Euro election results
Directory of Open Access Journals (Sweden)
L. Magafas
2009-01-01
In this paper we use chaos theory to predict the Hellenic Euro election results in the form of time series for the Hellenic political parties New Democracy (ND), Panhellenic Socialistic Movement (PASOK), Hellenic Communistic Party (KKE), Coalition of the Radical Left (SYRIZA) and Popular Orthodox Rally (LAOS), using the properties of the reconstructed strange attractor of the corresponding nonlinear system, creating a new scientific field called "DemoscopoPhysics". For this purpose we found the optimal delay time and the correlation and embedding dimensions with the method of Grassberger and Procaccia. With the help of topological properties of the corresponding strange attractor we achieved up to a 60-time-step out-of-sample prediction of the public survey.
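The attractor reconstruction step is a time-delay embedding: each scalar sample is mapped to a vector of delayed copies of itself. A minimal sketch, where the sine series merely stands in for a poll time series and the delay and dimension are fixed by hand rather than estimated via the Grassberger-Procaccia analysis:

```python
import math

def delay_embed(series, dim, tau):
    """Reconstruct attractor points from a scalar time series by
    time-delay embedding with embedding dimension `dim` and delay `tau`."""
    n = len(series) - (dim - 1) * tau
    return [[series[i + j * tau] for j in range(dim)] for i in range(n)]

# toy scalar series standing in for a party's poll time series
series = [math.sin(0.3 * t) for t in range(100)]
vectors = delay_embed(series, dim=3, tau=5)
```

In the paper's workflow, the optimal delay and embedding dimension are determined first, and the out-of-sample prediction is then made from the geometry of neighbouring points on the reconstructed attractor.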
Söderström, Karin; Nilsson, Per; Laurell, Göran; Zackrisson, Björn; Jäghagen, Eva Levring
2017-02-01
To establish predictive models for late objective aspiration and late patient-reported choking based on dose-volume parameters and baseline patient and treatment characteristics, for patients with head and neck cancer undergoing definitive radiotherapy (RT). The impact of electively treated volume on late aspiration was also investigated. This prospective cohort is a subsample of 124 survivors from the ARTSCAN study. Late aspiration was identified with videofluoroscopy, at a minimum of 25 months after the start of RT. Patient-reported choking was analysed at 12 and 60 months post RT using the EORTC Quality of Life Module for Head and Neck Cancer 35. Univariable and multivariable analyses were performed to describe the association between clinical factors and dose-volume descriptors for organs at risk (OARs) and late dysphagia. Aspiration was found in 47% of the eligible patients. Mean dose to the middle pharyngeal constrictor (MPC), neck dissection post RT and age at randomisation in ARTSCAN were associated with late aspiration. Mean dose to the superior pharyngeal constrictor (SPC) and swallowing complaints at baseline were associated with patient-reported choking at both time-points. Three separate risk groups for late aspiration, and two risk groups for late patient-reported choking, were identified based on the number of risk factors. The size of the electively treated volume could be used as a surrogate for individual OARs in predicting late aspiration. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
International Nuclear Information System (INIS)
Failor, R.A.; Souers, P.C.; Magnotta, F.
1992-01-01
Ozone, predicted to be an important intermediate species in T2 oxidation, was monitored in situ by UV absorption spectroscopy for 0.01-1.0 mol% T2 in O2 (1 atm, 298 K). These are the first measurements of a tritium oxidation reaction intermediate. The experimental results were compared with the predictions of the authors' comprehensive model of tritium oxidation. The experimentally determined temporal variation in ozone concentration is qualitatively reproduced by the model. As predicted, the measured initial rate of ozone production varied linearly with the initial T2 concentration ([T2]0), but with a value one-third of that predicted. The steady-state ozone concentration ([O3]ss) was a factor of 4 larger than predicted for a 1.0% T2-O2 mixture. Addition of H2 to the T2-O2 mixture, to differentiate between the radiolytic and chemical behavior of the tritium, produced a decrease in [O3]ss which was larger than predicted. Changing the reaction cell surface-to-volume ratio showed indications of minor surface removal of ozone. No reasonable variation in model input parameters brought both the predicted initial ozone production rates and steady-state concentrations of ozone into agreement with the experimental results. Though qualitative agreement was achieved, further studies, with emphasis on surface effects, are necessary to explain the quantitative differences and gain a greater understanding of the oxidation mechanism. 27 refs., 11 figs., 4 tabs.
Energy Technology Data Exchange (ETDEWEB)
Hwang, Won Tae; Jeong, Hyo Jun; Kim, Eun Han; Han, Moon Hee [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2009-03-15
METRO-K is a model for radiological dose assessment due to radioactive contamination in the Korean urban environment. The model has taken part in the Urban Remediation Working Group within the IAEA's (International Atomic Energy Agency) EMRAS (Environmental Modeling for Radiation Safety) program. The Working Group designed an intercomparison of the radioactive contamination resulting from the explosion of a radiological dispersal device in a hypothetical city. This paper deals intensively with one part of the many predictive results produced in the EMRAS program. The predictive results of three different models (METRO-K, RESRAD-RDD, CPHR) were submitted to the Working Group. The gaps between the predictive results were due to differences in mathematical modeling approaches, parameter values, and the assessors' understanding. Even when the final results (for example, dose rates from contaminated surfaces which might affect a receptor) are similar, the understanding of the contribution of each contaminated surface showed great differences. In the authors' judgment, this is due to the lack of understanding of, and information on, radioactive terror events, as well as the social and cultural differences in the assessors' experience. It follows that the experience of the assessors and their subjective judgements can be important factors in obtaining reliable results. With the acquisition of a little additional information, METRO-K could become a useful tool for decision support against contamination resulting from radioactive terror events by improving the existing model.
Parameter uncertainty and model predictions: a review of Monte Carlo results
International Nuclear Information System (INIS)
Gardner, R.H.; O'Neill, R.V.
1979-01-01
Studies of parameter variability by Monte Carlo analysis are reviewed, in which the model is repeatedly simulated with randomly selected parameter values. At the beginning of each simulation, parameter values are chosen from specified frequency distributions. This process is continued for a number of iterations sufficient to converge on an estimate of the frequency distribution of the output variables. The purpose was to explore the general properties of error propagation in models. Testing the implicit assumptions of analytical methods and pointing out counter-intuitive results produced by the Monte Carlo approach are additional points covered.
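The procedure described above can be sketched directly: sample parameters from their distributions, run the model, and collect the output distribution. The toy decay model and parameter distributions below are illustrative only:

```python
import math
import random

def monte_carlo(model, param_dists, n=10000, seed=1):
    """Propagate parameter uncertainty: draw each parameter from its
    frequency distribution, run the model, and collect the outputs."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n):
        params = {name: draw(rng) for name, draw in param_dists.items()}
        outputs.append(model(**params))
    return outputs

# toy model: mass remaining after 1 time unit of exponential decay,
# with uncertain decay rate k and initial mass m0 (hypothetical distributions)
model = lambda k, m0: m0 * math.exp(-k * 1.0)
dists = {"k": lambda r: r.gauss(0.5, 0.05), "m0": lambda r: r.uniform(90, 110)}
out = monte_carlo(model, dists)
mean = sum(out) / len(out)
```

With enough iterations the empirical distribution of `out` converges, giving not just a nominal prediction but the full frequency distribution of the output variable.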
Vertical Instability in EAST: Comparison of Model Predictions with Experimental Results
International Nuclear Information System (INIS)
Qian Jinping; Wan Baonian; Shen Biao; Xiao Bingjia; Sun Youwen; Shi Yuejiang; Lin Shiyao; Li Jiangang; Gong Xianzu
2008-01-01
Growth rates of the axisymmetric mode in elongated plasmas in the Experimental Advanced Superconducting Tokamak (EAST) are measured with zero feedback gains and then compared with numerically calculated growth rates for the reconstructed shapes. The comparison is made after loss of vertical position control. The open-loop growth rates were scanned against the number of vessel eigenmodes; up to 20 eigenmodes were enough for the growth rates to settle. The agreement between the growth rates measured experimentally and those determined numerically is good. The results show that a linear RZIP model is essentially good enough for vertical position feedback control.
DEFF Research Database (Denmark)
Victor, L.; Troch, P.; Kofoed, Jens Peter
2009-01-01
For overtopping wave energy converters (WECs) a more efficient energy conversion can be achieved when the volumes of water that enter their reservoir, wave by wave, are known and can be predicted. A numerical tool is being developed using a commercial CFD solver to study and optimize ... nearshore 2D structure. First numerical model results are given for a specific test with regular waves and are compared with the corresponding experimental results in this paper.
Ali, Mehreen; Khan, Suleiman A; Wennerberg, Krister; Aittokallio, Tero
2018-04-15
Proteomics profiling is increasingly being used for molecular stratification of cancer patients and cell-line panels. However, systematic assessment of the predictive power of large-scale proteomic technologies across various drug classes and cancer types is currently lacking. To that end, we carried out the first pan-cancer, multi-omics comparative analysis of the relative performance of two proteomic technologies, targeted reverse phase protein array (RPPA) and global mass spectrometry (MS), in terms of their accuracy for predicting the sensitivity of cancer cells to both cytotoxic chemotherapeutics and molecularly targeted anticancer compounds. Our results in two cell-line panels demonstrate how MS profiling improves drug response predictions beyond that of the RPPA or the other omics profiles when used alone. However, frequent missing MS data values complicate its use in predictive modeling and required additional filtering, such as focusing on completely measured or known oncoproteins, to obtain maximal predictive performance. Rather strikingly, the two proteomics profiles provided complementary predictive signal both for the cytotoxic and targeted compounds. Further, information about the cellular-abundance of primary target proteins was found critical for predicting the response of targeted compounds, although the non-target features also contributed significantly to the predictive power. The clinical relevance of the selected protein markers was confirmed in cancer patient data. These results provide novel insights into the relative performance and optimal use of the widely applied proteomic technologies, MS and RPPA, which should prove useful in translational applications, such as defining the best combination of omics technologies and marker panels for understanding and predicting drug sensitivities in cancer patients. Processed datasets, R as well as Matlab implementations of the methods are available at https://github.com/mehr-een/bemkl-rbps. mehreen
Maljaars, E.; Felici, F.; Blanken, T. C.; Galperti, C.; Sauter, O.; de Baar, M. R.; Carpanese, F.; Goodman, T. P.; Kim, D.; Kim, S. H.; Kong, M.; Mavkov, B.; Merle, A.; Moret, J. M.; Nouailletas, R.; Scheffer, M.; Teplukhina, A. A.; Vu, N. M. T.; The EUROfusion MST1-team; The TCV-team
2017-12-01
The successful performance of a model predictive profile controller is demonstrated in simulations and experiments on the TCV tokamak, employing a profile controller test environment. Stable high-performance tokamak operation in hybrid and advanced plasma scenarios requires control over the safety factor profile (q-profile) and kinetic plasma parameters such as the plasma beta. This demands reliable profile control routines in presently operational tokamaks. We present a model predictive profile controller that controls the q-profile and plasma beta using power requests to two clusters of gyrotrons and the plasma current request. The performance of the controller is analyzed in both simulations and TCV L-mode discharges, where successful tracking of the estimated inverse q-profile as well as plasma beta is demonstrated under uncertain plasma conditions and in the presence of disturbances. The controller exploits knowledge of the time-varying actuator limits in the actuator input calculation itself, such that fast transitions between targets are achieved without overshoot. A software environment is employed to prepare and test this and three other profile controllers in parallel in simulations and experiments on TCV. This set of tools includes the rapid plasma transport simulator RAPTOR and various algorithms to reconstruct the plasma equilibrium and plasma profiles by merging the available measurements with model-based predictions. In this work the estimated q-profile is based merely on RAPTOR model predictions due to the absence of internal current density measurements in TCV. These results encourage the further exploitation of model predictive profile control in experiments on TCV and other (future) tokamaks.
DEFF Research Database (Denmark)
Ahmad, Amais; Zachariasen, Camilla; Christiansen, Lasse Engbo
2016-01-01
Background: Combination treatment is increasingly used to fight infections caused by bacteria resistant to two or more antimicrobials. While multiple studies have evaluated treatment strategies to minimize the emergence of resistant strains for single antimicrobial treatment, fewer studies have ... the sensitive fraction of the commensal flora. Growth parameters for competing bacterial strains were estimated from the combined in vitro pharmacodynamic effect of two antimicrobials using the relationship between concentration and net bacterial growth rate. Predictions of in vivo bacterial growth were ... The effect of the cycling frequency (how frequently antibiotics are alternated in a sequential treatment) of the two drugs was dependent upon the order in which the two drugs were used. Conclusion: Sequential treatment was more effective in preventing the growth of resistant strains when compared to the combination treatment.
Directory of Open Access Journals (Sweden)
Alessandro OGGIONI
2006-02-01
This study reports the first preliminary results of the DYRESM-CAEDYM model application to a mid-size sub-alpine lake (Lake Pusiano, Northern Italy). The in-lake modelling is part of a more general project called the Pusiano Integrated Lake/Catchment project (PILE), whose final goal is to understand the hydrological and trophic relationships between lake and catchment, supporting the restoration plan of the lake through field data analysis and numerical models. DYRESM is a 1D-3D hydrodynamics model for predicting the vertical profile of temperature, salinity and density. CAEDYM is a multi-component ecological model, used here as a phytoplankton-zooplankton process-based model, which includes algorithms to simulate the nutrient cycles within the water column as well as the air-water gas exchanges and the water-sediment fluxes. The first results of the hydrodynamics simulations underline the capability of the model to accurately simulate the surface temperature seasonal trend and the thermal gradient, whereas, during summer stratification, the model underestimates the bottom temperature by around 2 °C. The ecological model describes the epilimnetic reactive phosphorus (PO4) depletion (due to phytoplankton uptake) and the increase in PO4 concentrations in the deepest layers of the lake (due to mineralization processes and sediment release). In terms of phytoplankton dynamics the model accounts for the Planktothrix rubescens dominance during the whole season, whereas it seems to underestimate the peak in primary production related to both simulated algal groups (P. rubescens and the other species aggregated into a single class). The future aims of the project are to complete the model parameterization and to connect the in-lake and catchment modelling in order to gain an integrated view of the lake-catchment ecosystem, as well as to develop a three-dimensional model of the lake.
Luebeck, EG; Moolgavkar, SH; Liu, AY; Boynton, A; Ulrich, CM
2008-01-01
Folate is essential for nucleotide synthesis, DNA-replication and methyl-group supply. Low-folate status has been associated with increased risks of several cancer types, suggesting a chemopreventive role of folate. However, recent findings on giving folic acid (FA) to patients with a history of colorectal polyps raise concerns about the efficacy and safety of folate supplementation and the long-term health effects of folate fortification. Results suggest that undetected precursor lesions may...
International Nuclear Information System (INIS)
Cassardo, C.; Loglisci, N.
2005-01-01
In recent years, there has been significant growth in the recognition of the importance of soil moisture in large-scale hydrology and climate modelling. Soil moisture is a lower boundary condition, which rules the partitioning of energy into sensible and latent heat flux. Wrong estimations of soil moisture lead to wrong simulations of the surface layer evolution, and hence precipitation and cloud cover forecasts could be adversely affected. This is true for large-scale medium-range weather forecasts as well as for local-scale short-range weather forecasts, particularly in those situations in which local convection is well developed. Unfortunately, despite the importance of this physical parameter, there are only a few soil moisture data sets, sparse in time and space, around the world. Owing to this scarcity of soil moisture observations, we developed an alternative method to provide soil moisture datasets in order to verify numerical weather prediction models. This paper presents the preliminary results of an attempt to verify soil moisture fields predicted by a mesoscale model. The data for the comparison were provided by simulations of the diagnostic land surface scheme LSPM (Land Surface Process Model), widely used at the Piedmont Regional Weather Service for agro-meteorological purposes. To this end, LSPM was initialized and driven by SYNOP observations, while the surface (vegetation and soil) parameter values were initialized from the ECOCLIMAP global dataset at 1 km² resolution
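The verification the authors describe ultimately reduces to comparing predicted and reference soil moisture point by point. A minimal sketch of such a comparison, using hypothetical station values rather than the paper's gridded LSPM output:

```python
import math

def bias_rmse(predicted, reference):
    """Mean bias and RMSE between predicted and reference soil moisture."""
    n = len(predicted)
    bias = sum(p - r for p, r in zip(predicted, reference)) / n
    rmse = math.sqrt(sum((p - r) ** 2 for p, r in zip(predicted, reference)) / n)
    return bias, rmse

# Hypothetical volumetric soil moisture (m^3/m^3) at a few stations
model_sm = [0.24, 0.31, 0.18, 0.27]
lspm_sm = [0.22, 0.33, 0.20, 0.25]
bias, rmse = bias_rmse(model_sm, lspm_sm)
```

Bias and RMSE of this kind are the usual first-order skill measures before any spatial or temporal analysis of the error fields.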
Directory of Open Access Journals (Sweden)
Leo Oey
2013-01-01
Full Text Available A data-assimilated Taiwan Ocean Prediction (ATOP) system is being developed at the National Central University, Taiwan. The model simulates sea-surface height, three-dimensional currents, temperature and salinity, and turbulent mixing. The model has options for tracer and particle-tracking algorithms, as well as for wave-induced Stokes drift and wave-enhanced mixing and bottom drag. Two different forecast domains have been tested: a large-grid domain that encompasses the entire North Pacific Ocean at 0.1° × 0.1° horizontal resolution and 41 vertical sigma levels, and a smaller western North Pacific domain which at present has the same horizontal resolution. In both domains, 25-year spin-up runs from 1988-2011 were first conducted, forced by six-hourly Cross-Calibrated Multi-Platform (CCMP) and NCEP reanalysis Global Forecast System (GFS) winds. The results are then used as initial conditions to conduct ocean analyses from January 2012 through February 2012, when updated hindcasts and real-time forecasts begin using the GFS winds. This paper describes the ATOP system and compares the forecast results against satellite altimetry data for assessing model skill. The model results are also shown to compare well with observations of (i) the Kuroshio intrusion in the northern South China Sea, and (ii) the subtropical counter-current. A review of, and comparison with, other models in the literature of (i) are also given.
Inverse and Predictive Modeling
Energy Technology Data Exchange (ETDEWEB)
Syracuse, Ellen Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-09-27
The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple – one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions – to the complex – multidimensional models that are constrained by several types of data and result in more accurate predictions. While team members typically build models of geophysical characteristics of the Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.
International Nuclear Information System (INIS)
Fenner, Kathrin; Scheringer, Martin; Hungerbuehler, Konrad
2004-01-01
The hazard indicators persistence (P) and long-range transport potential (LRTP) are used in chemicals assessment to characterize chemicals with regard to the temporal and spatial extent of their environmental exposure. They are often calculated from the results of multimedia fate models. The environmental and substance-specific input parameters of such models are subject to a range of methodological uncertainties and are also influenced by natural variability. We employed probabilistic uncertainty analysis to quantify the variance in P and LRTP predictions for chemicals with different partitioning and transport behavior. The variance found in the results is so large that it prevents a clear distinction between chemicals. Additionally, only small improvements are observed when evaluating the results relative to a benchmark chemical. This can be explained by the dominance of the substance-specific parameters and the only small direct influence of the environmental parameters on P and LRTP as model outcomes. The findings underline the importance of learning how environmental conditions cause variability in substance behavior for improved substance ranking and classification. - Environmental conditions cause variability in substance behavior which needs to be considered in chemical ranking schemes
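Probabilistic uncertainty analysis of the kind described above is, in practice, a Monte Carlo exercise: sample the uncertain inputs from their distributions and examine the spread of the hazard indicator. A toy sketch with an invented two-compartment persistence measure (the function and all parameter values are illustrative, not the authors' multimedia fate model):

```python
import math
import random
import statistics

random.seed(42)

def overall_persistence(k_air, k_water, f_air=0.3):
    """Toy overall persistence (days): mass-fraction-weighted residence times."""
    return f_air / k_air + (1 - f_air) / k_water

# Degradation rate constants (1/day) sampled lognormally to reflect uncertainty
samples = [
    overall_persistence(
        k_air=random.lognormvariate(math.log(0.1), 0.5),
        k_water=random.lognormvariate(math.log(0.02), 0.5),
    )
    for _ in range(5000)
]

mean_p = statistics.mean(samples)
cv = statistics.stdev(samples) / mean_p  # relative spread of the P prediction
```

Even this modest (roughly factor-of-1.6) input uncertainty yields a coefficient of variation around 0.5, which mirrors the paper's finding that output variance can prevent a clear distinction between chemicals.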
D.R. Weise; E. Koo; X. Zhou; S. Mahalingam
2011-01-01
Observed fire spread rates from 240 laboratory fires in horizontally-oriented single-species live fuel beds were compared to predictions from various implementations and modifications of the Rothermel rate of spread model and a physical fire spread model developed by Pagni and Koo. Packing ratio of the laboratory fuel beds was generally greater than that observed in...
Mocellin, Simone; Thompson, John F; Pasquali, Sandro; Montesco, Maria C; Pilati, Pierluigi; Nitti, Donato; Saw, Robyn P; Scolyer, Richard A; Stretch, Jonathan R; Rossi, Carlo R
2009-12-01
To improve selection for sentinel node (SN) biopsy (SNB) in patients with cutaneous melanoma using statistical models predicting SN status. About 80% of patients currently undergoing SNB are node negative. In the absence of conclusive evidence of an SNB-associated survival benefit, these patients may be over-treated. Here, we tested the efficiency of 4 different models in predicting SN status. The clinicopathologic data (age, gender, tumor thickness, Clark level, regression, ulceration, histologic subtype, and mitotic index) of 1132 melanoma patients who had undergone SNB at institutions in Italy and Australia were analyzed. Logistic regression, classification tree, random forest, and support vector machine models were fitted to the data. The predictive models were built with the aim of maximizing the negative predictive value (NPV) and reducing the rate of SNB procedures while minimizing the error rate. After cross-validation, the logistic regression, classification tree, random forest, and support vector machine predictive models obtained clinically relevant NPVs (93.6%, 94.0%, 97.1%, and 93.0%, respectively), SNB reductions (27.5%, 29.8%, 18.2%, and 30.1%, respectively), and error rates (1.8%, 1.8%, 0.5%, and 2.1%, respectively). Using commonly available clinicopathologic variables, predictive models can preoperatively identify a proportion of patients (approximately 25%) who might be spared SNB, with an acceptable (1%-2%) error. If validated in large prospective series, these models might be implemented in the clinical setting for improved patient selection, which ultimately would lead to better quality of life for patients and optimization of resource allocation for the health care system.
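The two figures of merit in this abstract, NPV and SNB reduction, follow directly from the confusion counts of any of the fitted models. A sketch with invented counts (not the paper's data), shaped roughly like the random forest column above:

```python
def npv_and_reduction(tn, fn, total):
    """Negative predictive value and the fraction of biopsies avoided
    if every predicted-negative patient were spared the procedure."""
    negatives = tn + fn            # patients the model calls node-negative
    npv = tn / negatives           # how often that call is correct
    reduction = negatives / total  # biopsies avoided
    return npv, reduction

# Hypothetical counts: high NPV, ~18% biopsy reduction
npv, reduction = npv_and_reduction(tn=200, fn=6, total=1132)
```

The trade-off reported in the abstract is visible here: pushing the error rate (the false negatives) down necessarily shrinks the set of patients who can safely be spared the procedure.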
The effect of ocean tides on the earth's rotation as predicted by the results of an ocean tide model
Gross, Richard S.
1993-01-01
The published ocean tidal angular momentum results of Seiler (1991) are used to predict the effects of the most important semidiurnal, diurnal, and long period ocean tides on the earth's rotation. The separate, as well as combined, effects of ocean tidal currents and sea level height changes on the length-of-day, UT1, and polar motion are computed. The predicted polar motion results reported here account for the presence of the free core nutation and are given in terms of the motion of the celestial ephemeris pole so that they can be compared directly to the results of observations. Outside the retrograde diurnal tidal band, the summed effect of the semidiurnal and diurnal ocean tides studied here predict peak-to-peak polar motion amplitudes as large as 2 mas. Within the retrograde diurnal tidal band, the resonant enhancement caused by the free core nutation leads to predicted polar motion amplitudes as large as 9 mas.
Maljaars, E.; Felici, F.; Blanken, T.C.; Galperti, C.; Sauter, O.; de Baar, M.R.; Carpanese, F.; Goodman, T.P.; Kim, D.; Kim, S.H.; Kong, M.G.; Mavkov, B.; Merle, A.; Moret, J.M.; Nouailletas, R.; Scheffer, M.; Teplukhina, A.A.; Vu, N.M.T.
2017-01-01
The successful performance of a model predictive profile controller is demonstrated in simulations and experiments on the TCV tokamak, employing a profile controller test environment. Stable high-performance tokamak operation in hybrid and advanced plasma scenarios requires control over the safety
Shen, B.; Tao, W.; Atlas, R.
2008-12-01
Very Severe Cyclonic Storm Nargis, the deadliest named tropical cyclone (TC) in the North Indian Ocean Basin, devastated Burma (Myanmar) in May 2008, causing tremendous damage and numerous fatalities. An increased lead time in the prediction of TC Nargis would have increased the warning time and may therefore have saved lives and reduced economic damage. Recent advances in high-resolution global models and supercomputers have shown the potential for improving TC track and intensity forecasts, presumably by improving multi-scale simulations. The key but challenging questions to be answered include: (1) whether, and how realistically in terms of timing, location, and general TC structure, the global mesoscale model (GMM) can simulate TC genesis, and (2) under what conditions the model can extend the lead time of TC genesis forecasts. In this study, we focus on genesis prediction for TCs in the Indian Ocean with the GMM. Preliminary real-data simulations show that the initial formation and intensity variations of TC Nargis can be realistically predicted at a lead time of up to 5 days. These simulations also suggest that accurate representations of a westerly wind burst (WWB) and an equatorial trough, associated with monsoon circulations and/or a Madden-Julian Oscillation (MJO), are important for predicting the formation of this kind of TC. In addition to the WWB and equatorial trough, other favorable environmental conditions will be examined, including an enhanced monsoonal circulation, upper-level outflow, low- and middle-level moistening, and surface fluxes.
Predicting results of daily-practice cystoscopies.
García-Velandria, F; Sánchez-García, J F; Rodríguez-Toves, L A; Alvarez-Buitrago, L; Conde-Redondo, C; Rodríguez-Tesedo, V; Amón-Sesmero, J H; Cepeda-Delgado, M; Cobos-Carbó, A; Alonso-Fernández, D; Martínez-Sagarra, J M
2014-10-01
Our objective was to elaborate a predictive model of bladder cancer in an unselected clinical population submitted to cystoscopy. We recruited consecutive patients who underwent cystoscopy due to suspicion of bladder cancer or surveillance of a previously diagnosed bladder cancer. Urine cytology and a BTA-stat® (BTA) test were carried out for all patients. To avoid an assessment bias, the BTA tests, cytologies and cystoscopies were conducted in a blinded fashion. We used logistic regression to predict cystoscopy results from cytology, the BTA test and clinical variables. From August 2011 to July 2012, we recruited 244 patients, of whom 237 were valid for analysis. Newly diagnosed and surveillance cases were 13% and 87% respectively. Cytology and BTA-test sensitivities were 57.9% (95% CI: 42.2-72.1) and 63.2% (95% CI: 47.3-76.6), with specificities of 84.4% (95% CI: 78.7-88.8) and 82.9% (95% CI: 77.1-87.5). The predictive model included the BTA test, cytology, time since previous tumour, and treatment with mitomycin or BCG during the last three months. The model's predictive accuracy (AUC) was 0.85 (0.78-0.92), and dropped to 0.79 when excluding the BTA test (P=.026). For the surveillance of bladder cancer, a 10% threshold on the model's predicted probabilities resulted in an overall negative predictive value of 95.7%, and 95.0% in low-grade tumours. In a cost containment environment, our prediction model could be used to space out cystoscopies in patients with previous low-grade tumours, resulting in a more efficient use of resources in the healthcare system. Copyright © 2013 AEU. Published by Elsevier España. All rights reserved.
Predictive modeling of complications.
Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P
2016-09-01
Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.
Archaeological predictive model set.
2015-03-01
This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...
Predictive Surface Complexation Modeling
Energy Technology Data Exchange (ETDEWEB)
Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences
2016-11-29
Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO_{2} and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.
Levy, R.; Mcginness, H.
1976-01-01
Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
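The stochastic simulation in the last sentence can be illustrated with a first-order autoregressive (AR(1)) process, the simplest way to produce speed samples that reproduce both a target mean/spread and hour-to-hour correlation (all numbers below are illustrative, not the Goldstone statistics):

```python
import random

random.seed(1)

def ar1_speeds(n_hours, mean=7.0, sd=3.0, rho=0.8):
    """Hourly wind speeds (m/s) from an AR(1) process: each sample keeps
    a fraction rho of the previous anomaly, so successive hours correlate."""
    innovation_sd = sd * (1.0 - rho ** 2) ** 0.5
    anomaly, speeds = 0.0, []
    for _ in range(n_hours):
        anomaly = rho * anomaly + random.gauss(0.0, innovation_sd)
        speeds.append(max(0.0, mean + anomaly))  # speeds cannot be negative
    return speeds

speeds = ar1_speeds(24 * 30)  # one month of hourly samples
```

Unlike the interim model's uncorrelated draws, consecutive samples here share a fraction of their anomaly, which is exactly what the stochastic model described above adds over the interim one.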
International Nuclear Information System (INIS)
Sandborg, M.; McVey, G.; Dance, D.R.; Carlsson, G.A.
2000-01-01
The ability to predict image quality from known physical and technical parameters is a prerequisite for successful dose optimisation. In this study, imaging systems have been simulated using a Monte Carlo model. The model includes a voxelised human anatomy and quantifies image quality in terms of contrast and signal-to-noise ratio for 5-6 anatomical details included in the anatomy. The imaging systems used in clinical trials were simulated, and the rankings of the systems by the model and by radiologists were compared. Both the model and the results of the trial for the chest PA projection show that using a high maximum optical density is significantly better than using a low one. The model predicts that a good system is characterised by a large dynamic range and a high contrast of the blood vessels in the retrocardiac area. The rankings by the radiologists and by the model agreed for the lumbar spine AP projection. (author)
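The two image-quality measures named above are simple pixel statistics once the simulated image exists. A sketch of their usual definitions, with hypothetical pixel values (the exact quantities used in the paper's Monte Carlo model may differ):

```python
def contrast_and_sdnr(detail_mean, background_mean, background_sd):
    """Relative contrast and signal-difference-to-noise ratio of a
    detail against its local background in a simulated radiograph."""
    diff = detail_mean - background_mean
    return diff / background_mean, diff / background_sd

# Hypothetical pixel statistics for one anatomical detail
contrast, sdnr = contrast_and_sdnr(120.0, 100.0, 5.0)
```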
Modelling bankruptcy prediction models in Slovak companies
Directory of Open Access Journals (Sweden)
Kovacova Maria
2017-01-01
Full Text Available Intensive research has been conducted by academics and practitioners on models for bankruptcy prediction and credit risk management. In spite of numerous studies focusing on forecasting bankruptcy using traditional statistical techniques (e.g. discriminant analysis and logistic regression) and early artificial intelligence models (e.g. artificial neural networks), there is a trend towards machine learning models (support vector machines, bagging, boosting, and random forests) to predict bankruptcy one year prior to the event. Comparing the performance of this unconventional approach with results obtained by discriminant analysis, logistic regression, and neural network applications, it has been found that bagging, boosting, and random forest models outperform the other techniques, and that prediction accuracy in the testing sample improves when additional variables are included. On the other side, the prediction accuracy of the old and well-known bankruptcy prediction models is quite high. Therefore, we aim to analyse these older models on a dataset of Slovak companies to validate their prediction ability in specific conditions. Furthermore, these models will be modified in line with new trends by calculating the influence of eliminating selected variables on the overall prediction ability of these models.
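One of the "old and well-known" models typically revisited in such validation studies is the Altman Z-score; a sketch of the original 1968 variant for public manufacturing firms (the input ratios below are invented; whether this particular model is among those the authors test is an assumption):

```python
def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    """Original Altman (1968) Z-score from five financial ratios:
    working capital/TA, retained earnings/TA, EBIT/TA,
    market value of equity/total liabilities, and sales/TA."""
    return (1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta
            + 0.6 * mve_tl + 1.0 * sales_ta)

def zone(z):
    """Classical cut-offs: distress below 1.81, safe above 2.99."""
    return "distress" if z < 1.81 else "safe" if z > 2.99 else "grey"

# Invented ratios for one firm
z = altman_z(wc_ta=0.1, re_ta=0.2, ebit_ta=0.15, mve_tl=0.9, sales_ta=1.1)
```

Validating such a model on Slovak data, as the authors propose, amounts to checking how often these zones agree with observed bankruptcies one year later.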
Predictive Modeling in Race Walking
Directory of Open Access Journals (Sweden)
Krzysztof Wiktorowicz
2015-01-01
Full Text Available This paper presents the use of linear and nonlinear multivariable models as tools to support the training process of race walkers. These models are calculated using data collected from race walkers' training events and are used to predict the result over a 3 km race based on training loads. The material consists of 122 training plans for 21 athletes. In order to choose the best model, the leave-one-out cross-validation method is used. The main contribution of the paper is to propose nonlinear modifications of linear models in order to achieve a smaller prediction error. It is shown that the best model is a modified LASSO regression with quadratic terms in the nonlinear part. This model has the smallest prediction error and a simplified structure obtained by eliminating some of the predictors.
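The leave-one-out procedure used above can be sketched for the simplest case, a one-predictor linear model: fit on all observations but one, predict the held-out one, and average the squared errors (the training-load and race-time numbers below are invented, not the paper's data):

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def loocv_mse(xs, ys):
    """Leave-one-out cross-validated mean squared prediction error."""
    errs = []
    for i in range(len(xs)):
        a, b = fit_line(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        errs.append((ys[i] - (a + b * xs[i])) ** 2)
    return sum(errs) / len(errs)

# Hypothetical: weekly training load (km) vs. 3 km race result (s)
load = [40.0, 45.0, 50.0, 55.0, 60.0, 65.0]
time3k = [780.0, 772.0, 766.0, 761.0, 757.0, 755.0]
mse = loocv_mse(load, time3k)
```

Model selection then reduces to picking the candidate (linear, LASSO, LASSO with quadratic terms, ...) with the smallest cross-validated error.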
Hijazi, Heba H; Harvey, Heather L; Alyahya, Mohammad S; Alshraideh, Hussam A; Al Abdi, Rabah M; Parahoo, Sanjai K
2018-01-01
Targeting the patient's needs and preferences has become an important contributor to improving care delivery, enhancing patient satisfaction, and achieving better clinical outcomes. This study aimed to examine the impact of applying quality management practices on patient centeredness within the context of health care accreditation and to explore the differences in the views of various health care workers regarding the attributes affecting patient-centered care. Our study followed a cross-sectional survey design wherein 4 Jordanian public hospitals were investigated several months after accreditation was obtained. A total of 829 clinical/nonclinical hospital staff members consented to study participation. This sample was divided into 3 main occupational categories to represent the administrators, nurses, as well as doctors and other health professionals. Using structural equation modeling, our results indicated that the predictors of patient-centered care for both administrators and those providing clinical care were participation in the accreditation process, leadership commitment to quality improvement, and measurement of quality improvement outcomes. In particular, perceiving the importance of the hospital's engagement in the accreditation process was shown to be relevant to the administrators (gamma = 0.96), nurses (gamma = 0.80), as well as to doctors and other health professionals (gamma = 0.71). However, the administrative staff (gamma = 0.31) was less likely to perceive the influence of measuring quality improvement outcomes on the delivery of patient-centered care than the nurses (gamma = 0.59) as well as the doctors and other health care providers (gamma = 0.55). From the nurses' perspective only, patient centeredness was found to be driven by building an institutional framework that supports quality assurance in hospital settings (gamma = 0.36). In conclusion, accreditation is a leading factor for delivering patient-centered care and should be on a hospital
Kabeshova, Anastasiia; Launay, Cyrille P; Gromov, Vasilii A; Fantino, Bruno; Levinoff, Elise J; Allali, Gilles; Beauchet, Olivier
2016-01-01
To compare the performance criteria (i.e., sensitivity, specificity, positive predictive value, negative predictive value, area under the receiver operating characteristic curve, and accuracy) of linear and non-linear statistical models for fall risk in older community-dwellers. Participants were recruited in two large population-based studies, "Prévention des Chutes, Réseau 4" (PCR4, n=1760, cross-sectional design, retrospective collection of falls) and "Prévention des Chutes Personnes Agées" (PCPA, n=1765, cohort design, prospective collection of falls). Six linear statistical models (i.e., logistic regression, discriminant analysis, Bayes network algorithm, decision tree, random forest, boosted trees), three non-linear statistical models corresponding to artificial neural networks (multilayer perceptron, genetic algorithm and neuroevolution of augmenting topologies [NEAT]) and the adaptive neuro-fuzzy inference system (ANFIS) were used. Falls ≥1 characterizing fallers and falls ≥2 characterizing recurrent fallers were used as outcomes. Data of the studies were analyzed separately and together. NEAT and ANFIS had better performance criteria compared to the other models. The highest performance criteria were reported with NEAT when using the PCR4 database and falls ≥1, and with both NEAT and ANFIS when pooling the data together and using falls ≥2. However, sensitivity and specificity were unbalanced. Sensitivity was higher than specificity when identifying fallers, whereas the converse was found when predicting recurrent fallers. Our results showed that NEAT and ANFIS were the non-linear statistical models with the best performance criteria for the prediction of falls, but their sensitivity and specificity were unbalanced, underscoring that the models should be used respectively for the screening of fallers and the diagnosis of recurrent fallers. Copyright © 2015 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
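The performance criteria compared in this study are all simple functions of the confusion matrix. A sketch with invented counts that shows the sensitivity/specificity imbalance the authors warn about:

```python
def performance(tp, fp, tn, fn):
    """Binary-classification performance criteria from confusion counts."""
    total = tp + fp + tn + fn
    return {
        "sensitivity": tp / (tp + fn),  # fallers correctly flagged
        "specificity": tn / (tn + fp),  # non-fallers correctly cleared
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / total,
    }

# Invented counts: high sensitivity but mediocre specificity,
# the profile suited to *screening* rather than diagnosis
m = performance(tp=80, fp=40, tn=60, fn=20)
```

A screening tool tolerates false positives (here, specificity 0.6) to miss few true cases, whereas a diagnostic tool needs the reverse profile, which is exactly the division of labour the abstract proposes for the two model families.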
International Nuclear Information System (INIS)
Bachet, Martin; Schneider, Hélène; Jauberty, Loïc; Windt, Laurent De; Dieuleveult, Caroline de; Tevissen, Etienne
2014-01-01
Experiments performed under chemical and flow conditions representative of pressurized water reactors (PWR) primary fluid purification by ion exchange resins (Amberlite IRN9882) are modeled with the OPTIPUR code, considering 1D reactive transport in the mixed-bed column with convective/dispersive transport between beads and electro-diffusive transport within the boundary film around the beads. The effectiveness of the purification in these dilute conditions is highly related to film mass transfer restrictions, which are accounted for by adjustment of a common mass transfer coefficient (MTC) on the experimental initial leakage or modeling of species diffusion through the bead film by the Nernst-Planck equation. A detailed analysis of the modeling against experimental data shows that the Nernst-Planck approach with no adjustable parameters performs as well as, or better, than the MTC approach, particularly to simulate the chromatographic elution of silver by nickel and the subsequent enrichment of the solution in the former metal. (author)
Predictions of High Energy Experimental Results
Directory of Open Access Journals (Sweden)
Comay E.
2010-10-01
Full Text Available Eight predictions of high energy experimental results are presented. The predictions include the $\Sigma^+$ charge radius and the results of two kinds of experiments using energetic pionic beams. In addition, predictions of the failure to find the following objects are presented: glueballs, pentaquarks, Strange Quark Matter, magnetic monopoles searched for by their direct interaction with charges, and the Higgs boson. The first seven predictions rely on the Regular Charge-Monopole Theory and the last one relies on mathematical inconsistencies of the Higgs Lagrangian density.
Cultural Resource Predictive Modeling
2017-10-01
CR – cultural resource; CRM – cultural resource management; CRPM – Cultural Resource Predictive Modeling; DoD – Department of Defense; ESTCP – Environmental... To meet their cultural resource management (CRM) legal obligations under NEPA and the NHPA, military installations need to demonstrate that CRM decisions are based on objective... the maxim “one size does not fit all,” and demonstrate that DoD installations have many different CRM needs that can and should be met through a variety...
Candidate Prediction Models and Methods
DEFF Research Database (Denmark)
Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik
2005-01-01
This document lists candidate prediction models for Work Package 3 (WP3) of the PSO project ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines the possibilities with respect to the different numerical weather predictions actually available to the project.
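The core transformation mentioned above, from a numerical weather prediction of wind speed to a prediction of power production, is in its simplest form a power-curve lookup. A sketch with an idealized curve (all turbine parameters are invented, and real systems correct for direction, density, and farm effects):

```python
def power_curve(v, cut_in=3.5, rated=13.0, cut_out=25.0, p_rated=2000.0):
    """Idealized turbine power curve (kW): zero below cut-in and above
    cut-out, a cubic ramp between cut-in and rated speed, flat at rated."""
    if v < cut_in or v >= cut_out:
        return 0.0
    if v >= rated:
        return p_rated
    return p_rated * (v ** 3 - cut_in ** 3) / (rated ** 3 - cut_in ** 3)

forecast_speeds = [2.0, 6.0, 10.0, 14.0, 26.0]  # NWP wind speeds (m/s)
farm_power = [power_curve(v) for v in forecast_speeds]
```

The cubic ramp reflects that available wind power scales with the cube of the speed, which is why small wind-speed forecast errors translate into large power-prediction errors.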
Norman, L.M.; Guertin, D.P.; Feller, M.
2008-01-01
The development of new approaches for understanding processes of urban development and their environmental effects, as well as strategies for sustainable management, is essential in expanding metropolitan areas. This study illustrates the potential of linking urban growth and watershed models to identify problem areas and support long-term watershed planning. Sediment is a primary source of nonpoint-source pollution in surface waters. In urban areas, sediment is intermingled with other surface debris in transport. In an effort to forecast the effects of development on surface-water quality, changes predicted in urban areas by the SLEUTH urban growth model were applied in the context of erosion-sedimentation models (Universal Soil Loss Equation and Spatially Explicit Delivery Models). The models are used to simulate the effect of excluding hot-spot areas of erosion and sedimentation from future urban growth and to predict the impacts of alternative erosion-control scenarios. Ambos Nogales, meaning 'both Nogaleses,' is a name commonly used for the twin border cities of Nogales, Arizona and Nogales, Sonora, Mexico. The Ambos Nogales watershed has experienced a decrease in water quality as a result of urban development in the twin-city area. Population growth rates in Ambos Nogales are high and the resources set in place to accommodate the rapid population influx will soon become overburdened. Because of its remote location and binational governance, monitoring and planning across the border is compromised. One scenario described in this research portrays an improvement in water quality through the identification of high-risk areas using models that simulate their protection from development and replanting with native grasses, while permitting the predicted and inevitable growth elsewhere. This is meant to add to the body of knowledge about forecasting the impact potential of urbanization on sediment delivery to streams for sustainable development, which can be
Atmospheric Deposition Modeling Results
U.S. Environmental Protection Agency — This asset provides data on model results for dry and total deposition of sulfur, nitrogen and base cation species. Components include deposition velocities, dry...
Thakar, Sumit; Sivaraju, Laxminadh; Jacob, Kuruthukulangara S; Arun, Aditya Atal; Aryan, Saritha; Mohan, Dilip; Sai Kiran, Narayanam Anantha; Hegde, Alangar S
2018-01-01
OBJECTIVE Although various predictors of postoperative outcome have been previously identified in patients with Chiari malformation Type I (CMI) with syringomyelia, there is no known algorithm for predicting a multifactorial outcome measure in this widely studied disorder. Using one of the largest preoperative variable arrays used so far in CMI research, the authors attempted to generate a formula for predicting postoperative outcome. METHODS Data from the clinical records of 82 symptomatic adult patients with CMI and altered hindbrain CSF flow who were managed with foramen magnum decompression, C-1 laminectomy, and duraplasty over an 8-year period were collected and analyzed. Various preoperative clinical and radiological variables in the 57 patients who formed the study cohort were assessed in a bivariate analysis to determine their ability to predict clinical outcome (as measured on the Chicago Chiari Outcome Scale [CCOS]) and the resolution of syrinx at the last follow-up. The variables that were significant in the bivariate analysis were further analyzed in a multiple linear regression analysis. Different regression models were tested, and the model with the best prediction of CCOS was identified and internally validated in a subcohort of 25 patients. RESULTS There was no correlation between CCOS score and syrinx resolution (p = 0.24) at a mean ± SD follow-up of 40.29 ± 10.36 months. Multiple linear regression analysis revealed that the presence of gait instability, obex position, and the M-line-fourth ventricle vertex (FVV) distance correlated with CCOS score, while the presence of motor deficits was associated with poor syrinx resolution (p ≤ 0.05). The algorithm generated from the regression model demonstrated good diagnostic accuracy (area under curve 0.81), with a score of more than 128 points demonstrating 100% specificity for clinical improvement (CCOS score of 11 or greater). The model had excellent reliability (κ = 0.85) and was validated with
MODEL PREDICTIVE CONTROL FUNDAMENTALS
African Journals Online (AJOL)
2012-07-02
Jul 2, 2012 ... signal based on a process model, coping with constraints on inputs and ... paper, we will present an introduction to the theory and application of MPC with Matlab codes ... section 5 presents the simulation results and section 6.
Luo, Gang
2016-01-01
Predictive modeling is a key component of solutions to many healthcare problems. Among all predictive modeling approaches, machine learning methods often achieve the highest prediction accuracy, but suffer from a long-standing open problem precluding their widespread use in healthcare. Most machine learning models give no explanation for their prediction results, whereas interpretability is essential for a predictive model to be adopted in typical healthcare settings. This paper presents the first complete method for automatically explaining results for any machine learning predictive model without degrading accuracy. We implemented the method in software. Using the electronic medical record data set from the Practice Fusion diabetes classification competition, containing patient records from all 50 states in the United States, we demonstrated the method on predicting type 2 diabetes diagnosis within the next year. For the champion machine learning model of the competition, our method explained prediction results for 87.4% of patients who were correctly predicted by the model to have a type 2 diabetes diagnosis within the next year. Our demonstration showed the feasibility of automatically explaining results for any machine learning predictive model without degrading accuracy.
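As a concrete illustration of one simple, widely used explanation style (not necessarily the paper's model-agnostic method), the sketch below ranks per-feature contributions of a linear risk model for one patient. All feature names, weights, and values are hypothetical.

```python
def explain_prediction(weights, patient, top_k=3):
    """Rank the features contributing most to a linear risk score
    (contribution = weight * feature value, sorted by magnitude)."""
    contributions = {f: w * patient.get(f, 0.0) for f, w in weights.items()}
    return sorted(contributions.items(), key=lambda kv: -abs(kv[1]))[:top_k]

# Hypothetical model weights and standardized patient features, for illustration only.
weights = {"bmi": 0.8, "age": 0.3, "hba1c": 1.2, "exercise": -0.5}
patient = {"bmi": 1.5, "age": 0.4, "hba1c": 2.0, "exercise": 0.2}
top = explain_prediction(weights, patient)  # hba1c dominates this prediction
```

The returned list reads directly as an explanation: "predicted high-risk mainly because of hba1c, then bmi".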
Keye, Stefan; Togiti, Vamish; Eisfeld, Bernhard; Brodersen, Olaf P.; Rivers, Melissa B.
2013-01-01
The accurate calculation of aerodynamic forces and moments is of significant importance during the design phase of an aircraft. Reynolds-averaged Navier-Stokes (RANS) based Computational Fluid Dynamics (CFD) has developed strongly over the last two decades with regard to robustness, efficiency, and capabilities for aerodynamically complex configurations. Incremental aerodynamic coefficients of different designs can be calculated with acceptable reliability at the cruise design point of transonic aircraft for non-separated flows. But with regard to absolute values, as well as increments at off-design conditions, significant challenges still exist in computing aerodynamic data and the underlying flow physics with the required accuracy. In addition to drag, pitching moments are difficult to predict because small deviations in the pressure distributions, e.g. due to neglecting wing bending and twisting caused by the aerodynamic loads, can result in large discrepancies compared to experimental data. Flow separations that start to develop at off-design conditions, e.g. in corner flows, at trailing edges, or shock-induced, can also have a strong impact on the predictions of aerodynamic coefficients. Based on these challenges faced by the CFD community, a working group of the AIAA Applied Aerodynamics Technical Committee initiated the CFD Drag Prediction Workshop (DPW) series in 2001, resulting in five international workshops. The results of the participants and the committee are summarized in more than 120 papers. The latest, fifth workshop took place in June 2012 in conjunction with the 30th AIAA Applied Aerodynamics Conference. This paper evaluates the influence of static aeroelastic wing deformations on pressure distributions and overall aerodynamic coefficients, based on the NASA finite element structural model and the common grids.
Predicting Collision Damage and Resulting Consequences
DEFF Research Database (Denmark)
Ravn, Erik Sonne; Friis-Hansen, Peter
2004-01-01
This paper presents an Artificial Neural Network (ANN) that is trained to predict the structural damage in the ship side resulting from ship-ship collisions. The input to the ANN is the absorbed energy, the length of the involved ships, the draught of the struck ship, and the angle of collision. T...
Jones, Brett D.; Sahbaz, Sumeyra; Schram, Asta B.; Chittum, Jessica R.
2017-05-01
We investigated students' perceptions related to psychological constructs in their science classes and the influence of these perceptions on their science identification and science career goals. Participants included 575 middle school students from two countries (334 students in the U.S. and 241 students in Iceland). Students completed a self-report questionnaire that included items from several measures. We conducted correlational analyses, confirmatory factor analyses, and structural equation modelling to test our hypotheses. Students' class perceptions (i.e. empowerment, usefulness, success, interest, and caring) were significantly correlated with their science identification, which was correlated positively with their science career goals. Combining students' science class perceptions, science identification, and career goals into one model, we documented that the U.S. and Icelandic samples fit the data reasonably well. However, not all of the hypothesised paths were statistically significant. For example, only students' perceptions of usefulness (for the U.S. and Icelandic students) and success (for the U.S. students only) significantly predicted students' career goals in the full model. Theoretically, our findings are consistent with results from samples of university engineering students, yet different in some ways. Our results provide evidence for the theoretical relationships between students' perceptions of science classes and their career goals.
Confidence scores for prediction models
DEFF Research Database (Denmark)
Gerds, Thomas Alexander; van de Wiel, MA
2011-01-01
In medical statistics, many alternative strategies are available for building a prediction model based on training data. Prediction models are routinely compared by means of their prediction performance in independent validation data. If only one data set is available for training and validation,...
PREDICTED PERCENTAGE DISSATISFIED (PPD) MODEL ...
African Journals Online (AJOL)
HOD
their low power requirements, are relatively cheap and are environment friendly. ... PREDICTED PERCENTAGE DISSATISFIED MODEL EVALUATION OF EVAPORATIVE COOLING ... The performance of direct evaporative coolers is a.
EFFICIENT PREDICTIVE MODELLING FOR ARCHAEOLOGICAL RESEARCH
Balla, A.; Pavlogeorgatos, G.; Tsiafakis, D.; Pavlidis, G.
2014-01-01
The study presents a general methodology for designing, developing and implementing predictive modelling for identifying areas of archaeological interest. The methodology is based on documented archaeological data and geographical factors, geospatial analysis and predictive modelling, and has been applied to the identification of possible Macedonian tombs’ locations in Northern Greece. The model was tested extensively and the results were validated using a commonly used predictive gain, which...
Nonlinear chaotic model for predicting storm surges
Directory of Open Access Journals (Sweden)
M. Siek
2010-09-01
Full Text Available This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables. We implemented univariate and multivariate chaotic models with direct and multi-step prediction techniques and optimized these models using an exhaustive search method. The models were tested on predicting storm surge dynamics for different stormy conditions in the North Sea and compared to neural network models. The results show that the chaotic models can generally provide reliable and accurate short-term storm surge predictions.
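The phase-space reconstruction and dynamical-neighbour idea described above can be sketched in a few lines. This is a generic zeroth-order local model under assumed embedding parameters, not the authors' optimized implementation:

```python
import math

def embed(series, dim, tau):
    """Delay-embed a scalar time series into dim-dimensional phase-space vectors."""
    n = len(series) - (dim - 1) * tau
    return [tuple(series[i + j * tau] for j in range(dim)) for i in range(n)]

def local_predict(series, dim=3, tau=1, k=3):
    """Predict the next value by averaging the successors of the k nearest
    dynamical neighbours of the current phase-space point."""
    vectors = embed(series, dim, tau)
    target = vectors[-1]
    # neighbours must have a known successor, so exclude the last vector
    candidates = []
    for i, v in enumerate(vectors[:-1]):
        dist = math.dist(v, target)
        successor = series[i + (dim - 1) * tau + 1]
        candidates.append((dist, successor))
    candidates.sort(key=lambda c: c[0])
    nearest = candidates[:k]
    return sum(s for _, s in nearest) / len(nearest)
```

For a smooth periodic series the nearest neighbours share the current phase, so averaging their successors approximates the next value.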
Bootstrap prediction and Bayesian prediction under misspecified models
Fushiki, Tadayoshi
2005-01-01
We consider a statistical prediction problem under misspecified models. In a sense, Bayesian prediction is an optimal prediction method when an assumed model is true. Bootstrap prediction is obtained by applying Breiman's `bagging' method to a plug-in prediction. Bootstrap prediction can be considered to be an approximation to the Bayesian prediction under the assumption that the model is true. However, in applications, there are frequently deviations from the assumed model. In this paper, bo...
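The bagging construction mentioned above (applying Breiman's method to a plug-in prediction) can be sketched as follows; the sample mean stands in as a minimal plug-in predictor, while the paper's setting is more general:

```python
import random

def plug_in_predict(sample):
    """Plug-in prediction: the sample mean as the predictor of the next draw."""
    return sum(sample) / len(sample)

def bootstrap_predict(sample, n_boot=2000, seed=0):
    """Bagged (bootstrap) prediction: average the plug-in predictor
    over bootstrap resamples drawn with replacement."""
    rng = random.Random(seed)
    n = len(sample)
    preds = []
    for _ in range(n_boot):
        resample = [sample[rng.randrange(n)] for _ in range(n)]
        preds.append(plug_in_predict(resample))
    return sum(preds) / n_boot
```

For the mean, bagging changes little; its value shows for unstable plug-in predictors, where averaging over resamples smooths the prediction.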
Porto, P.; Cogliandro, V.; Callegari, G.
2018-01-01
In this paper, long-term sediment yield data collected in a small (1.38 ha) Calabrian catchment (W2), reafforested with eucalyptus trees (Eucalyptus occidentalis Engl.), are used to validate the performance of the SEdiment Delivery Distributed Model (SEDD) in areas with high erosion rates. As a first step, the SEDD model was calibrated using field data collected in previous field campaigns undertaken during the period 1978-1994. This first phase allowed the model calibration parameter β to be calculated using direct measurements of rainfall, runoff, and sediment output. The model was then validated in its calibrated form for an independent period (2006-2016) for which new measurements of rainfall, runoff, and sediment output are also available. The analysis, carried out at the event and annual scales, showed good agreement between measured and predicted values of sediment yield and suggested that the SEDD model can be seen as an appropriate means of evaluating the erosion risk associated with man-made plantations in marginal areas. Further work is, however, required to test the performance of the SEDD model as a prediction tool in different geomorphic contexts.
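In the SEDD formulation, each morphological unit's sediment delivery ratio decays exponentially with travel time, SDR_i = exp(-β t_i), and calibration chooses β so that the modelled catchment yield matches the measured one. A minimal sketch of that calibration step, with made-up erosion and travel-time values:

```python
import math

def sedd_yield(beta, erosion, travel_time):
    """Catchment sediment yield: gross erosion of each unit times its
    delivery ratio exp(-beta * travel_time) (SEDD-style formulation)."""
    return sum(e * math.exp(-beta * t) for e, t in zip(erosion, travel_time))

def calibrate_beta(measured_yield, erosion, travel_time, lo=0.0, hi=10.0):
    """Find beta by bisection so the modelled yield matches the measured one.
    Yield decreases monotonically in beta, so bisection converges."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if sedd_yield(mid, erosion, travel_time) > measured_yield:
            lo = mid  # modelled yield too high: delivery must decay faster
        else:
            hi = mid
    return (lo + hi) / 2
```

In practice the measured yield comes from the sediment-output record and the unit erosion terms from a USLE-type estimate; the numbers here are illustrative only.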
Melanoma Risk Prediction Models
Developing statistical models that estimate the probability of developing melanoma over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling on behavioral changes to decrease risk.
Predictive models of moth development
Degree-day models link ambient temperature to insect life-stages, making such models valuable tools in integrated pest management. These models increase management efficacy by predicting pest phenology. In Wisconsin, the top insect pest of cranberry production is the cranberry fruitworm, Acrobasis v...
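A degree-day model like the one described is easy to state concretely: accumulate daily heat above a base temperature and map the running total to life stages. The base temperature and stage thresholds below are placeholders, not the Wisconsin cranberry fruitworm parameters:

```python
def degree_days(daily_min, daily_max, base=50.0):
    """Accumulated degree-days with the simple averaging method:
    max(0, (Tmin + Tmax)/2 - base) summed over days."""
    total = 0.0
    for tmin, tmax in zip(daily_min, daily_max):
        total += max(0.0, (tmin + tmax) / 2.0 - base)
    return total

def predict_stage(dd, thresholds):
    """Return the latest life stage whose degree-day threshold has been
    reached; thresholds is a list of (stage, degree_days) in ascending order.
    Stage names and numbers here are hypothetical."""
    stage = "egg"
    for name, needed in thresholds:
        if dd >= needed:
            stage = name
    return stage
```

Running the accumulator on a forecast rather than on observed temperatures is what turns phenology tracking into phenology prediction.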
Predictive Models and Computational Embryology
EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...
Mennini, Francesco Saverio; Marcellusi, Andrea; Andreoni, Massimo; Gasbarrini, Antonio; Salomone, Salvatore; Craxì, Antonio
2014-01-01
treated with anti-HCV drugs. A reduction of health care costs is associated with a prevalence decrease. Indeed, once the spending peak is reached during this decade (about €527 million), the model predicts a cost reduction in the following 18 years. In 2030, based on the more effective treatments currently available, the direct health care cost associated with the management of HCV patients may reach €346 million (-34.3% compared to 2012). The first scenario (new treatment in 2015 with SVR =90% and same number of treated patients) was associated with a significant reduction in HCV-induced clinical consequences (prevalence =-3%) and a decrease in direct health care expenses, corresponding to €11.1 million. The second scenario (increase in treated patients to 12,790) produced an incremental cost reduction of €7.3 million, reaching a net decrease equal to €18.4 million. In the third scenario (treated patients =16,770), a higher net direct health care cost decrease versus the base-case (€44.0 million) was estimated. Our model showed that the introduction of new treatments that are more effective could result in a quasi-eradication of HCV, with a very strong reduction in prevalence.
Results of the naive quark model
International Nuclear Information System (INIS)
Gignoux, C.
1987-10-01
The hypotheses and limits of the naive quark model are recalled, and results on nucleon-nucleon scattering and possible multiquark states are presented. The results show that Roper resonances do not emerge from this model. For hadron-hadron interactions, the model predicts Van der Waals forces that the resonance group method does not allow. Known many-body forces are not found in the model. The lack of mesons shows up in the absence of a far-reaching force. However, the model does have strengths: it is free from center-of-mass spuriousness, allows a democratic handling of flavor, has few parameters, and its predictions are very good [fr
Directory of Open Access Journals (Sweden)
Julies Hariani Sugiaman
2011-03-01
Full Text Available A study model is one of the standard orthodontic components important for diagnosis and treatment planning, but in patients with a strong gag reflex it can be difficult to obtain. A new device able to show the condition of the patient's mouth in three planes (axial, sagittal, and coronal) is expected to be an alternative when a study model is difficult to obtain. The purpose of this study is to find out whether there are differences in the mesiodistal sizes of the canine and the first and second premolars obtained from CBCT imaging compared with Moyers analysis on study models. The research method is comparative descriptive. Measurements were made on 10 CBCT images and 10 study models. The mesiodistal sizes from the CBCT images were measured with the available computer program, the mesiodistal sizes on the study models were measured with a sliding caliper, and the sizes of the canine and first and second premolar teeth from CBCT imaging were then compared with the results of the Moyers method analysis on the study models. The t-test was used to determine whether tooth-size values differ between the CBCT images and the study models. Significance was determined based on the p-value and the t table.
Evaluating Predictive Uncertainty of Hyporheic Exchange Modelling
Chow, R.; Bennett, J.; Dugge, J.; Wöhling, T.; Nowak, W.
2017-12-01
Hyporheic exchange is the interaction of water between rivers and groundwater, and is difficult to predict. One of the largest contributions to predictive uncertainty for hyporheic fluxes has been attributed to the representation of heterogeneous subsurface properties. This research aims to evaluate which aspect of the subsurface representation - the spatial distribution of hydrofacies or the model for local-scale (within-facies) heterogeneity - most influences the predictive uncertainty. We also seek to identify the data types that best help reduce this uncertainty. For this investigation, we conduct a modelling study of the Steinlach River meander in Southwest Germany, an experimental site established in 2010 to monitor hyporheic exchange at the meander scale. We use HydroGeoSphere, a fully integrated surface water-groundwater model, to model hyporheic exchange and to assess the predictive uncertainty of hyporheic exchange transit times (HETT). A highly parameterized complex model is built and treated as a `virtual reality', which is in turn modelled with simpler subsurface parameterization schemes (Figure). We then conduct Monte Carlo simulations with these models to estimate the predictive uncertainty. Results indicate that: uncertainty in HETT is relatively small for early times and increases with transit time; uncertainty from local-scale heterogeneity is negligible compared to uncertainty in the hydrofacies distribution; introducing more data to a poor model structure may reduce predictive variance, but does not reduce predictive bias; and hydraulic head observations alone cannot constrain the uncertainty of HETT, whereas an estimate of hyporheic exchange flux proves more effective at reducing this uncertainty. Figure: Approach for evaluating predictive model uncertainty. A conceptual model is first developed from the field investigations. A complex model (`virtual reality') is then developed based on that conceptual model
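The Monte Carlo step of this kind of study (propagating uncertain subsurface properties into transit-time uncertainty) can be sketched generically. The Darcy-style transit-time expression and the lognormal conductivity distribution below are illustrative assumptions, far simpler than the HydroGeoSphere setup:

```python
import math
import random
import statistics

def transit_time(conductivity, gradient=0.01, porosity=0.3, length=50.0):
    """Advective transit time along a flow path: t = L * n / (K * i)."""
    return length * porosity / (conductivity * gradient)

def monte_carlo_uncertainty(n=5000, seed=1):
    """Propagate lognormal uncertainty in conductivity K into the
    predictive mean and standard deviation of the transit time."""
    rng = random.Random(seed)
    times = [transit_time(rng.lognormvariate(math.log(1e-3), 0.5))
             for _ in range(n)]
    return statistics.mean(times), statistics.stdev(times)
```

The predictive standard deviation relative to the mean gives a first feel for how much parameter uncertainty inflates the transit-time forecast.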
FINITE ELEMENT MODEL FOR PREDICTING RESIDUAL ...
African Journals Online (AJOL)
FINITE ELEMENT MODEL FOR PREDICTING RESIDUAL STRESSES IN ... the transverse residual stress in the x-direction (σx) had a maximum value of 375MPa ... the finite element method are in fair agreement with the experimental results.
Evaluation of CASP8 model quality predictions
Cozzetto, Domenico; Kryshtafovych, Andriy; Tramontano, Anna
2009-01-01
established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic
Jara-Aguirre, Jose C; Smeets, Steven W; Wockenfus, Amy M; Karon, Brad S
2018-05-01
Evaluate the effects of blood gas sample contamination with total parenteral nutrition (TPN)/lipid emulsion and dextrose 50% (D50) solutions on blood gas and electrolyte measurement; and determine whether glucose concentration can predict blood gas sample contamination with TPN/lipid emulsion or D50. Residual lithium heparin arterial blood gas samples were spiked with TPN/lipid emulsion (0 to 15%) and D50 solutions (0 to 2.5%). Blood gas (pH, pCO2, pO2), electrolytes (Na+, K+, ionized calcium), and hemoglobin were measured with a Radiometer ABL90. Glucose concentration was measured in separated plasma by Roche Cobas c501. Chart review of neonatal blood gas results with glucose >300 mg/dL (>16.65 mmol/L) over a seven-month period was performed to determine whether repeat (within 4 h) blood gas results suggested pre-analytical errors in blood gas results. Results were used to determine whether a glucose threshold could predict contamination resulting in blood gas and electrolyte results with greater than laboratory-defined allowable error. Samples spiked with 5% or more TPN/lipid emulsion solution or 1% D50 showed glucose concentrations >500 mg/dL (>27.75 mmol/L) and produced blood gas (pH, pO2, pCO2) results with greater than laboratory-defined allowable error. TPN/lipid emulsion, but not D50, produced greater than allowable error in electrolyte (Na+, K+, Ca++, Hb) results at these concentrations. Based on chart review of 144 neonatal blood gas results with glucose >250 mg/dL received over seven months, four of ten neonatal intensive care unit (NICU) patients with glucose results >500 mg/dL and repeat blood gas results within 4 h had results highly suggestive of pre-analytical error. Only 3 of 36 NICU patients with glucose results 300-500 mg/dL and repeat blood gas results within 4 h had clear pre-analytical errors in blood gas results. Glucose concentration can be used as an indicator of significant blood sample contamination with either TPN
Directory of Open Access Journals (Sweden)
Mennini FS
2014-06-01
randomized clinical trial (RCT) relating to boceprevir and telaprevir. For genotype 2/3 patients it was assumed that treatment efficacy with dual therapy was equal to an SVR rate from the literature. According to the aim of this study, only direct health care costs (hospital admissions, drugs, treatment, and care of patients) incurred by the Italian NHS have been included in the model. Costs have been extrapolated using the published scientific literature available in Italy and actualized with the 2012 ISTAT (Istituto Nazionale di Statistica) Price Index system for monetary revaluation. Three different scenarios were assumed in order to evaluate the impact of future anti-HCV treatments on the burden of disease. Results: Overall, in Italy, 1.2 million infected subjects were estimated in 2012. Of these, about 211,000 patients were diagnosed, while only about 11,800 subjects were actually being treated with anti-HCV drugs. A reduction of health care costs is associated with a prevalence decrease. Indeed, once the spending peak is reached during this decade (about €527 million), the model predicts a cost reduction in the following 18 years. In 2030, based on the more effective treatments currently available, the direct health care cost associated with the management of HCV patients may reach €346 million (−34.3% compared to 2012). The first scenario (new treatment in 2015 with SVR = 90% and the same number of treated patients) was associated with a significant reduction in HCV-induced clinical consequences (prevalence = −3%) and a decrease in direct health care expenses, corresponding to €11.1 million. The second scenario (increase in treated patients to 12,790) produced an incremental cost reduction of €7.3 million, reaching a net decrease equal to €18.4 million. In the third scenario (treated patients = 16,770), a higher net direct health care cost decrease versus the base-case (€44.0 million) was estimated. Conclusion: Our model showed that the introduction of new treatments
Bayesian Predictive Models for Rayleigh Wind Speed
DEFF Research Database (Denmark)
Shahirinia, Amir; Hajizadeh, Amin; Yu, David C
2017-01-01
predictive model of the wind speed aggregates the non-homogeneous distributions into a single continuous distribution. The result is therefore able to capture the variation among the probability distributions of the wind speeds at the turbines' locations in a wind farm. More specifically, instead of using...... a wind speed distribution whose parameters are known or estimated, the parameters are considered random, with variations according to probability distributions. A Bayesian predictive model for the Rayleigh distribution, which has only a single scale parameter, has been proposed. Closed-form posterior...... and predictive inferences under different reasonable choices of prior distribution in sensitivity analysis have also been presented....
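The idea of a predictive distribution that mixes over parameter uncertainty can be sketched by simulation: draw the Rayleigh scale from a prior, then draw a speed. The gamma prior and its parameters below are illustrative choices, not the paper's closed-form results:

```python
import math
import random

def sample_predictive_wind_speeds(n=10000, seed=2):
    """Bayesian predictive sampling for a Rayleigh wind-speed model:
    instead of fixing the scale parameter, draw it from a prior (here
    an illustrative gamma with mean 5.0) and then draw the speed, so
    the predictive distribution mixes over parameter uncertainty."""
    rng = random.Random(seed)
    speeds = []
    for _ in range(n):
        sigma = rng.gammavariate(20.0, 0.25)   # prior on the Rayleigh scale
        u = 1.0 - rng.random()                 # uniform on (0, 1]
        # Rayleigh draw via inverse CDF: x = sigma * sqrt(-2 ln U)
        speeds.append(sigma * math.sqrt(-2.0 * math.log(u)))
    return speeds
```

Since a Rayleigh variable has mean sigma * sqrt(pi/2), a prior scale mean of 5.0 gives a predictive mean near 6.27; the mixture is wider than any single fixed-sigma Rayleigh.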
Energy Technology Data Exchange (ETDEWEB)
Jeong, Hyojoon; Kim, Eunhan; Jeong, Haesun; Hwang, Wontae; Han, Moonhee [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2014-10-15
There remains disagreement regarding the application of a Gaussian plume model in PAVAN, as it relates to the complicated geographical features of a coastal area. Therefore, this study was performed to determine the characteristics of the PAVAN program, which was developed based on the equations of the Gaussian plume model, against actual measured concentrations of materials released to the air, and to evaluate the appropriateness of using a Gaussian plume model for assessing the environmental impact of radiation from a nuclear power plant. In this study, to analyze the dispersion characteristics of radioactive materials released into the air from the Wolsong nuclear power plant, SF6 gas was released from the site at night for one hour under stable atmospheric conditions unfavourable to dilution of a tracer gas. The measured concentrations were compared with theoretical estimates derived from meteorological data observed during the experiment period to evaluate the prediction capabilities of the Gaussian plume model. This study conducted a tracer dispersion experiment at the Wolsong Nuclear Power Plant site in Korea to analyze the atmospheric dispersion characteristics of radioactive materials. It compared the experimental values with the values calculated using the Gaussian plume model as suggested in Reg. 1.145, based on the meteorological data observed in the experiment time period, and evaluated how conservative the calculated values are. In areas where the calculated value is relatively high, it tends to be higher than the experimental value, which confirms the conservatism of estimates obtained using the Gaussian plume model. The short-term radiation exposure of a human body caused by a nuclear accident would be higher in areas where the atmospheric concentration of radiation is high. Therefore, it is sufficiently conservative to use the
International Nuclear Information System (INIS)
Jeong, Hyojoon; Kim, Eunhan; Jeong, Haesun; Hwang, Wontae; Han, Moonhee
2014-01-01
There remains disagreement regarding the application of a Gaussian plume model in PAVAN, as it relates to the complicated geographical features of a coastal area. Therefore, this study was performed to determine the characteristics of the PAVAN program, which was developed based on the equations of the Gaussian plume model, against actual measured concentrations of materials released to the air, and to evaluate the appropriateness of using a Gaussian plume model for assessing the environmental impact of radiation from a nuclear power plant. In this study, to analyze the dispersion characteristics of radioactive materials released into the air from the Wolsong nuclear power plant, SF6 gas was released from the site at night for one hour under stable atmospheric conditions unfavourable to dilution of a tracer gas. The measured concentrations were compared with theoretical estimates derived from meteorological data observed during the experiment period to evaluate the prediction capabilities of the Gaussian plume model. This study conducted a tracer dispersion experiment at the Wolsong Nuclear Power Plant site in Korea to analyze the atmospheric dispersion characteristics of radioactive materials. It compared the experimental values with the values calculated using the Gaussian plume model as suggested in Reg. 1.145, based on the meteorological data observed in the experiment time period, and evaluated how conservative the calculated values are. In areas where the calculated value is relatively high, it tends to be higher than the experimental value, which confirms the conservatism of estimates obtained using the Gaussian plume model. The short-term radiation exposure of a human body caused by a nuclear accident would be higher in areas where the atmospheric concentration of radiation is high. Therefore, it is sufficiently conservative to use the Gaussian
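For reference, the ground-reflected Gaussian plume relation that such assessments rest on can be written down directly. The sketch below implements the standard textbook form with illustrative inputs; in practice the dispersion parameters sigma_y and sigma_z come from stability-class correlations rather than being fixed:

```python
import math

def gaussian_plume(q, u, y, z, sigma_y, sigma_z, h):
    """Ground-reflected Gaussian plume concentration at crosswind distance y
    and height z, for emission rate q, wind speed u, dispersion parameters
    sigma_y and sigma_z, and effective release height h (consistent units)."""
    lateral = math.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2.0 * sigma_z**2)) +
                math.exp(-(z + h)**2 / (2.0 * sigma_z**2)))
    return q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical
```

The second vertical term is the "image source" that models total reflection at the ground; concentrations fall off as a Gaussian away from the plume centerline.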
Massive Predictive Modeling using Oracle R Enterprise
CERN. Geneva
2014-01-01
R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...
Multi-model analysis in hydrological prediction
Lanthier, M.; Arsenault, R.; Brissette, F.
2017-12-01
Hydrologic modelling is, by nature, a simplification of the real-world hydrologic system. Ensemble hydrological predictions thus obtained do not present the full range of possible streamflow outcomes, producing ensembles that exhibit errors in variance such as under-dispersion. Past studies show that lumped models used in prediction mode can return satisfactory results, especially when there is not enough information available on the watershed to run a distributed model. But all lumped models greatly simplify the complex processes of the hydrologic cycle. To generate more spread in the hydrologic ensemble predictions, multi-model ensembles have been considered. In this study, the aim is to propose and analyse a method that gives an ensemble streamflow prediction that properly represents the forecast probabilities with reduced ensemble bias. To achieve this, three simple lumped models are used to generate an ensemble. These are also combined using multi-model averaging techniques, which generally produce a more accurate hydrograph than the best of the individual models in simulation mode. This new combined predictive hydrograph is added to the ensemble, creating a larger ensemble which may improve the variability while also improving the ensemble mean bias. The quality of the predictions is then assessed over different periods: 2 weeks, 1 month, 3 months, and 6 months, using a PIT histogram of the percentiles of the observed volumes with respect to the volumes of the ensemble members. Initially, the models were run using historical weather data to generate synthetic flows. This worked for the individual models, but not for the multi-model and for the large ensemble. Consequently, by performing data assimilation at each prediction period and thus adjusting the initial states of the models, the PIT histogram could be constructed using the observed flows while allowing the use of the multi-model predictions. The under-dispersion has been
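The PIT histogram used to assess these ensembles has a simple construction: for each forecast, record where the observation falls within the sorted ensemble; a flat histogram indicates a well-calibrated ensemble, while a U-shape signals under-dispersion. A minimal sketch:

```python
def pit_value(observation, ensemble):
    """Probability integral transform: the fraction of ensemble members
    not exceeding the observation."""
    return sum(1 for m in ensemble if m <= observation) / len(ensemble)

def pit_histogram(observations, ensembles, bins=5):
    """Bin the PIT values of many forecast/observation pairs; a uniform
    count across bins indicates a well-calibrated ensemble."""
    counts = [0] * bins
    for obs, ens in zip(observations, ensembles):
        p = pit_value(obs, ens)
        counts[min(int(p * bins), bins - 1)] += 1
    return counts
```

Observations that repeatedly land in the outer bins (PIT near 0 or 1) mean the ensemble is too narrow, exactly the under-dispersion symptom discussed above.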
Accuracy assessment of landslide prediction models
International Nuclear Information System (INIS)
Othman, A N; Mohd, W M N W; Noraini, S
2014-01-01
The increasing population and expansion of settlements over hilly areas has greatly increased the impact of natural disasters such as landslides. It is therefore important to develop models which can accurately predict landslide hazard zones. Over the years, various techniques and models have been developed to predict landslide hazard zones. The aim of this paper is to assess the accuracy of landslide prediction models developed by the authors. The methodology involved the selection of the study area, data acquisition, data processing, model development, and data analysis. The development of these models is based on nine different landslide-inducing parameters, i.e. slope, land use, lithology, soil properties, geomorphology, flow accumulation, aspect, proximity to rivers, and proximity to roads. Rank sum, rating, pairwise comparison, and AHP techniques are used to determine the weights for each of the parameters. Four different models, which consider different parameter combinations, are developed by the authors. Results obtained are compared to landslide history, and the accuracies for Model 1, Model 2, Model 3, and Model 4 are 66.7%, 66.7%, 60%, and 22.9%, respectively. From the results, rank sum, rating, and pairwise comparison can be useful techniques to predict landslide hazard zones
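Of the weighting techniques listed, the pairwise-comparison/AHP step can be sketched with the common geometric-mean approximation of the priority vector. The consistent 3x3 matrix in the usage below is a toy example, not the paper's landslide parameters:

```python
def ahp_weights(matrix):
    """Approximate AHP priority weights from a pairwise comparison matrix
    using the geometric-mean (row) method, normalised to sum to 1."""
    n = len(matrix)
    geo = []
    for row in matrix:
        prod = 1.0
        for v in row:
            prod *= v
        geo.append(prod ** (1.0 / n))  # geometric mean of the row
    total = sum(geo)
    return [g / total for g in geo]
```

For a perfectly consistent matrix with entries a_ij = w_i / w_j, the method recovers the weights exactly; for real judgment matrices it approximates the principal eigenvector.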
Jarrett, Angela M.; Hormuth, David A.; Barnes, Stephanie L.; Feng, Xinzeng; Huang, Wei; Yankeelov, Thomas E.
2018-05-01
Clinical methods for assessing tumor response to therapy are largely rudimentary, monitoring only temporal changes in tumor size. Our goal is to predict the response of breast tumors to therapy using a mathematical model that utilizes magnetic resonance imaging (MRI) data obtained non-invasively from individual patients. We extended a previously established, mechanically coupled, reaction-diffusion model for predicting tumor response initialized with patient-specific diffusion weighted MRI (DW-MRI) data by including the effects of chemotherapy drug delivery, which is estimated using dynamic contrast-enhanced (DCE-) MRI data. The extended, drug incorporated, model is initialized using patient-specific DW-MRI and DCE-MRI data. Data sets from five breast cancer patients were used—obtained before, after one cycle, and at mid-point of neoadjuvant chemotherapy. The DCE-MRI data was used to estimate spatiotemporal variations in tumor perfusion with the extended Kety–Tofts model. The physiological parameters derived from DCE-MRI were used to model changes in delivery of therapy drugs within the tumor for incorporation in the extended model. We simulated the original model and the extended model in both 2D and 3D and compare the results for this five-patient cohort. Preliminary results show reductions in the error of model predicted tumor cellularity and size compared to the experimentally-measured results for the third MRI scan when therapy was incorporated. Comparing the two models for agreement between the predicted total cellularity and the calculated total cellularity (from the DW-MRI data) reveals an increased concordance correlation coefficient from 0.81 to 0.98 for the 2D analysis and 0.85 to 0.99 for the 3D analysis (p < 0.01 for each) when the extended model was used in place of the original model. This study demonstrates the plausibility of using DCE-MRI data as a means to estimate drug delivery on a patient-specific basis in predictive models and
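The reaction-diffusion family of models referenced above combines cell diffusion with logistic proliferation. Below is a dimensionless 1-D explicit finite-difference sketch of one time step; it is not the patient-specific 3-D model, which additionally couples tissue mechanics and the drug-delivery term:

```python
def reaction_diffusion_step(cells, d=0.1, k=0.05, carrying=1.0):
    """One explicit step of a 1-D reaction-diffusion model: diffusion of
    tumour cell density plus logistic growth toward the carrying capacity
    (dimensionless sketch with dx = dt = 1 and no-flux boundaries)."""
    n = len(cells)
    new = [0.0] * n
    for i in range(n):
        left = cells[i - 1] if i > 0 else cells[i]
        right = cells[i + 1] if i < n - 1 else cells[i]
        diffusion = d * (left - 2.0 * cells[i] + right)
        growth = k * cells[i] * (1.0 - cells[i] / carrying)
        new[i] = cells[i] + diffusion + growth
    return new
```

In the imaging-driven workflow, DW-MRI supplies the initial cell-density field and DCE-MRI modulates the growth/kill terms; here both are reduced to fixed constants.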
Clinical Predictive Modeling Development and Deployment through FHIR Web Services.
Khalilia, Mohammed; Choi, Myung; Henderson, Amelia; Iyengar, Sneha; Braunstein, Mark; Sun, Jimeng
2015-01-01
Clinical predictive modeling involves two challenging tasks: model development and model deployment. In this paper we demonstrate a software architecture for developing and deploying clinical predictive models using web services via the Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standard. The services enable model development using electronic health records (EHRs) stored in OMOP CDM databases and model deployment for scoring individual patients through FHIR resources. The MIMIC2 ICU dataset and a synthetic outpatient dataset were transformed into OMOP CDM databases for predictive model development. The resulting predictive models are deployed as FHIR resources, which receive requests of patient information, perform prediction against the deployed predictive model and respond with prediction scores. To assess the practicality of this approach we evaluated the response and prediction time of the FHIR modeling web services. We found the system to be reasonably fast with one second total response time per patient prediction.
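The deployment half of such an architecture ultimately reduces to a scoring function that accepts patient data and returns a prediction. A minimal sketch of that step is below; the feature names, coefficients, and record layout are hypothetical illustrations, not the paper's actual model or FHIR resource schema.

```python
import math

# Hypothetical logistic scoring function of the kind such a web service
# would wrap: it reads observation values from a simplified patient record
# and returns a risk probability. Coefficients and feature names are
# illustrative only, not taken from the paper.
COEFFS = {"age": 0.04, "heart_rate": 0.02, "creatinine": 0.8}
INTERCEPT = -6.0

def score_patient(record):
    """record: dict mapping observation name -> numeric value."""
    z = INTERCEPT + sum(COEFFS[k] * record.get(k, 0.0) for k in COEFFS)
    return 1.0 / (1.0 + math.exp(-z))  # logistic link -> probability in (0, 1)

patient = {"age": 70, "heart_rate": 95, "creatinine": 1.4}
risk = score_patient(patient)
```

In the paper's architecture, the equivalent of `score_patient` sits behind a FHIR endpoint that parses incoming patient resources and responds with the prediction score.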
Evaluation of CASP8 model quality predictions
Cozzetto, Domenico
2009-01-01
The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.
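The assessment criterion described above, correlation between predicted and observed correctness, can be sketched in a few lines. The scores below are invented placeholders standing in for per-model quality estimates and observed accuracies.

```python
def pearson(xs, ys):
    """Pearson correlation between predicted and observed quality scores,
    the basis of the CASP8 quality-assessment evaluation described above."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

predicted = [0.9, 0.7, 0.8, 0.4, 0.2]    # hypothetical predicted model quality
observed  = [0.85, 0.75, 0.7, 0.5, 0.1]  # hypothetical observed correctness
r = pearson(predicted, observed)
```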
Finding Furfural Hydrogenation Catalysts via Predictive Modelling.
Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi
2010-09-10
We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes were synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (k(H):k(D)=1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R(2)=0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model's predictions, demonstrating the validity and value of predictive modelling in catalyst optimization.
Results of steel containment vessel model test
International Nuclear Information System (INIS)
Luk, V.K.; Ludwigsen, J.S.; Hessheimer, M.F.; Komine, Kuniaki; Matsumoto, Tomoyuki; Costello, J.F.
1998-05-01
A series of static overpressurization tests of scale models of nuclear containment structures is being conducted by Sandia National Laboratories for the Nuclear Power Engineering Corporation of Japan and the US Nuclear Regulatory Commission. Two tests are being conducted: (1) a test of a model of a steel containment vessel (SCV) and (2) a test of a model of a prestressed concrete containment vessel (PCCV). This paper summarizes the conduct of the high pressure pneumatic test of the SCV model and the results of that test. Results of this test are summarized and are compared with pretest predictions performed by the sponsoring organizations and others who participated in a blind pretest prediction effort. Questions raised by this comparison are identified and plans for posttest analysis are discussed
Model predictive control using fuzzy decision functions
Kaymak, U.; Costa Sousa, da J.M.
2001-01-01
Fuzzy predictive control integrates conventional model predictive control with techniques from fuzzy multicriteria decision making, translating the goals and the constraints to predictive control in a transparent way. The information regarding the (fuzzy) goals and the (fuzzy) constraints of the
Engineering model cryocooler test results
International Nuclear Information System (INIS)
Skimko, M.A.; Stacy, W.D.; McCormick, J.A.
1992-01-01
This paper reports that recent testing of diaphragm-defined, Stirling-cycle machines and components has demonstrated cooling performance potential, validated the design code, and confirmed several critical operating characteristics. A breadboard cryocooler was rebuilt and tested from cryogenic to near-ambient cold end temperatures. There was a significant increase in capacity at cryogenic temperatures, and the performance results compared well with code predictions at all temperatures. Further testing on a breadboard diaphragm compressor validated the calculated requirement for a minimum axial clearance between diaphragms and mating heads.
Prediction of pipeline corrosion rate based on grey Markov models
International Nuclear Information System (INIS)
Chen Yonghong; Zhang Dafa; Peng Guichu; Wang Yuemin
2009-01-01
Based on the model that combined by grey model and Markov model, the prediction of corrosion rate of nuclear power pipeline was studied. Works were done to improve the grey model, and the optimization unbiased grey model was obtained. This new model was used to predict the tendency of corrosion rate, and the Markov model was used to predict the residual errors. In order to improve the prediction precision, rolling operation method was used in these prediction processes. The results indicate that the improvement to the grey model is effective and the prediction precision of the new model combined by the optimization unbiased grey model and Markov model is better, and the use of rolling operation method may improve the prediction precision further. (authors)
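A minimal sketch of the basic GM(1,1) grey forecast underlying such combined models is given below; the Markov residual correction, the unbiased optimization, and the rolling operation of the paper are omitted, and the sample corrosion-rate series is illustrative.

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Basic GM(1,1) grey forecast: accumulate the series, fit the grey
    differential equation by least squares, then de-accumulate the fitted
    exponential. Residual Markov correction (as in the paper) is omitted."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                         # accumulated generating series
    z1 = 0.5 * (x1[1:] + x1[:-1])              # background (mean) values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # grey parameters
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # fitted accumulated series
    return np.diff(x1_hat, prepend=0.0)        # de-accumulate; [0] equals x0[0]

rates = [0.30, 0.33, 0.37, 0.41, 0.45]  # illustrative corrosion-rate sequence
fit = gm11_forecast(rates, steps=1)     # fitted values plus one forecast
```

In the paper's scheme, the residuals of such a fit would then be classified into states and predicted with a Markov chain, and the whole procedure re-run on a rolling window as new data arrive.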
PREDICTIVE CAPACITY OF ARCH FAMILY MODELS
Directory of Open Access Journals (Sweden)
Raphael Silveira Amaro
2016-03-01
Full Text Available In the last decades, a remarkable number of models, variants from the Autoregressive Conditional Heteroscedastic family, have been developed and empirically tested, making the process of choosing a particular model extremely complex. This research aims to compare the predictive capacity, using the Model Confidence Set procedure, of five conditional heteroskedasticity models, considering eight different statistical probability distributions. The financial series used refer to the log-return series of the Bovespa index and the Dow Jones Industrial Index in the period between 27 October 2008 and 30 December 2014. The empirical evidence showed that, in general, competing models have a great homogeneity to make predictions, whether for a stock market of a developed country or for a stock market of a developing country. An equivalent result can be inferred for the statistical probability distributions that were used.
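At the core of every ARCH-family variant is a conditional-variance recursion. A minimal GARCH(1,1) sketch is shown below; the parameter values are illustrative rather than estimated, and the simulated series merely stands in for index log-returns.

```python
import numpy as np

def garch11_variance(returns, omega=1e-6, alpha=0.08, beta=0.9):
    """Conditional-variance recursion of a GARCH(1,1) model, one member of
    the ARCH family compared in the paper. Parameters are illustrative,
    not estimated from the Bovespa or Dow Jones series."""
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()                  # initialize at the sample variance
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

rng = np.random.default_rng(0)
log_returns = rng.normal(0.0, 0.01, size=500)  # stand-in for log-returns
sig2 = garch11_variance(log_returns)
```

Competing specifications (EGARCH, GJR, etc.) change this recursion and the assumed innovation distribution; the Model Confidence Set procedure then compares their out-of-sample forecast losses.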
Alcator C-Mod predictive modeling
International Nuclear Information System (INIS)
Pankin, Alexei; Bateman, Glenn; Kritz, Arnold; Greenwald, Martin; Snipes, Joseph; Fredian, Thomas
2001-01-01
Predictive simulations for the Alcator C-mod tokamak [I. Hutchinson et al., Phys. Plasmas 1, 1511 (1994)] are carried out using the BALDUR integrated modeling code [C. E. Singer et al., Comput. Phys. Commun. 49, 275 (1988)]. The results are obtained for temperature and density profiles using the Multi-Mode transport model [G. Bateman et al., Phys. Plasmas 5, 1793 (1998)] as well as the mixed-Bohm/gyro-Bohm transport model [M. Erba et al., Plasma Phys. Controlled Fusion 39, 261 (1997)]. The simulated discharges are characterized by very high plasma density in both low and high modes of confinement. The predicted profiles for each of the transport models match the experimental data about equally well in spite of the fact that the two models have different dimensionless scalings. Average relative rms deviations are less than 8% for the electron density profiles and 16% for the electron and ion temperature profiles
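The profile-agreement metric quoted above can be sketched as follows; the normalization convention (relative to the experimental profile) and the sample profile values are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def relative_rms(sim, exp):
    """Relative rms deviation between simulated and experimental profiles,
    normalized by the experimental values (one common convention; the
    paper's exact normalization may differ)."""
    sim, exp = np.asarray(sim, float), np.asarray(exp, float)
    return np.sqrt(np.mean(((sim - exp) / exp) ** 2))

ne_sim = [1.0, 2.1, 2.9, 2.0, 1.1]   # illustrative simulated density profile
ne_exp = [1.05, 2.0, 3.0, 2.1, 1.0]  # illustrative measured density profile
dev = relative_rms(ne_sim, ne_exp)   # e.g. 0.06 ~ "6% relative rms deviation"
```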
Raikova, Rositsa; Aladjov, Hristo
2003-06-01
A critical point in models of the human limbs, when the aim is to investigate motor control, is the muscle model. Most often, the mechanical output of a muscle is considered as one musculotendon force that is a design variable in optimization tasks solved predominantly by static optimization. For dynamic conditions, the relationship between the developed force, the length and the contraction velocity of a muscle becomes important, and rheological muscle models can be incorporated in the optimization tasks. Here the muscle activation can be a design variable as well. Recently a new muscle model was proposed. A muscle is considered as a mixture of motor units (MUs) with different peculiarities, and the muscle force is calculated as a sum of the MU twitches. The aim of the paper is to compare these three ways of presenting the muscle force. Fast elbow flexion is investigated using a planar model with five muscles. It is concluded that the rheological models are suitable for calculation of the current maximal muscle forces that can be used as weight factors in the objective functions. The model based on MUs has many advantages for precise investigations of motor control. Such muscle representation can explain muscle co-contraction and the role of the fast and the slow MUs. The relationship between the MU activation and the mechanical output is clearer and closer to reality.
Interpreting Results from the Multinomial Logit Model
DEFF Research Database (Denmark)
Wulff, Jesper
2015-01-01
This article provides guidelines and illustrates practical steps necessary for an analysis of results from the multinomial logit model (MLM). The MLM is a popular model in the strategy literature because it allows researchers to examine strategic choices with multiple outcomes. However, there seem to be systematic issues with regard to how researchers interpret their results when using the MLM. In this study, I present a set of guidelines critical to analyzing and interpreting results from the MLM. The procedure involves intuitive graphical representations of predicted probabilities and marginal effects suitable for both interpretation and communication of results. The practical steps are illustrated through an application of the MLM to the choice of foreign market entry mode.
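The predicted probabilities and marginal effects that such an analysis revolves around can be computed directly from the MLM's softmax form. The sketch below uses a single covariate and hypothetical coefficients (the paper's entry-mode application would have several covariates and estimated coefficients).

```python
import math

def mnl_probabilities(x, betas):
    """Predicted probabilities in a multinomial logit model with one
    covariate x. `betas` maps each non-base outcome to (intercept, slope);
    the base outcome's utility is normalized to 0. Coefficients here are
    hypothetical, for illustration only."""
    utils = {"base": 0.0}
    utils.update({k: b0 + b1 * x for k, (b0, b1) in betas.items()})
    denom = sum(math.exp(u) for u in utils.values())
    return {k: math.exp(u) / denom for k, u in utils.items()}

# e.g. entry-mode choice, with the base outcome standing for exporting
betas = {"joint_venture": (-0.5, 0.8), "wholly_owned": (-1.2, 1.5)}
probs = mnl_probabilities(x=1.0, betas=betas)

# finite-difference marginal effect of x on P(wholly_owned)
h = 1e-5
me = (mnl_probabilities(1.0 + h, betas)["wholly_owned"]
      - probs["wholly_owned"]) / h
```

Plotting `probs` and `me` over a grid of x values yields exactly the kind of graphical representation the article recommends over raw coefficient tables.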
NONLINEAR MODEL PREDICTIVE CONTROL OF CHEMICAL PROCESSES
Directory of Open Access Journals (Sweden)
SILVA R. G.
1999-01-01
Full Text Available A new algorithm for model predictive control is presented. The algorithm utilizes a simultaneous solution and optimization strategy to solve the model's differential equations. The equations are discretized by equidistant collocation, and along with the algebraic model equations are included as constraints in a nonlinear programming (NLP problem. This algorithm is compared with the algorithm that uses orthogonal collocation on finite elements. The equidistant collocation algorithm results in simpler equations, providing a decrease in computation time for the control moves. Simulation results are presented and show a satisfactory performance of this algorithm.
Comparison of two ordinal prediction models
DEFF Research Database (Denmark)
Kattan, Michael W; Gerds, Thomas A
2015-01-01
system (i.e. old or new), such as the level of evidence for one or more factors included in the system or the general opinions of expert clinicians. However, given the major objective of estimating prognosis on an ordinal scale, we argue that the rival staging system candidates should be compared on their ability to predict outcome. We sought to outline an algorithm that would compare two rival ordinal systems on their predictive ability. RESULTS: We devised an algorithm based largely on the concordance index, which is appropriate for comparing two models in their ability to rank observations. We demonstrate our algorithm with a prostate cancer staging system example. CONCLUSION: We have provided an algorithm for selecting the preferred staging system based on prognostic accuracy. It appears to be useful for the purpose of selecting between two ordinal prediction models.
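The concordance index at the heart of this comparison can be sketched directly. The toy data below (stage assignments from two rival systems against an ordinal outcome) are invented for illustration; the paper's algorithm adds refinements beyond this bare pairwise count.

```python
from itertools import combinations

def concordance_index(stages, outcomes):
    """Concordance index of an ordinal staging system: among patient pairs
    with different outcomes, the fraction whose stage ordering agrees with
    the outcome ordering (pairs tied on stage count one half)."""
    usable = concordant = 0.0
    for (s1, o1), (s2, o2) in combinations(zip(stages, outcomes), 2):
        if o1 == o2:
            continue  # pairs with equal outcomes are uninformative
        usable += 1
        if (s1 - s2) * (o1 - o2) > 0:
            concordant += 1
        elif s1 == s2:
            concordant += 0.5
    return concordant / usable

# hypothetical data: ordinal outcome severity (higher = worse) and the
# stages two rival systems assign (higher stage = worse prognosis)
outcome  = [1, 1, 2, 3, 3]
system_a = [1, 1, 2, 3, 3]
system_b = [1, 2, 2, 2, 3]
c_a = concordance_index(system_a, outcome)
c_b = concordance_index(system_b, outcome)  # lower: system_a ranks better
```

Under the algorithm's logic, the system with the higher concordance index (here `system_a`) would be the preferred staging system.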
Directory of Open Access Journals (Sweden)
Muminović Saša
2012-06-01
Full Text Available Although they can be a subject of criticism and have been challenged from their beginnings, bankruptcy prediction models have often been used in practice for more than four decades. The following models for predicting bankruptcy are applied in this paper: Altman's Z'-Score, the Zmijewski model, Taffler's model and the Sandin and Porporato model. Out of the creditworthiness assessment models (solvency analysis), the following have been applied: the Z''-Score (Altman, Hartzell and Peck) and the BEX model. A significant shortcoming of the observed models is their failure to take investment into account. Besides, some models have inadequately assessed the transition of operations to lohn production, while others have not.
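As an example of the class of models listed above, Altman's Z'-Score (the revised variant for private firms) is a fixed linear combination of five financial ratios. The coefficients below are the commonly cited ones; the balance-sheet figures are invented, and the paper's exact variant should be checked against its sources.

```python
def altman_z_prime(wc, re, ebit, bve, sales, total_assets, total_liabilities):
    """Altman's Z'-Score (revised model for private firms), using the
    commonly cited coefficients. Inputs: working capital, retained
    earnings, EBIT, book value of equity, sales, total assets,
    total liabilities."""
    x1 = wc / total_assets
    x2 = re / total_assets
    x3 = ebit / total_assets
    x4 = bve / total_liabilities
    x5 = sales / total_assets
    return 0.717 * x1 + 0.847 * x2 + 3.107 * x3 + 0.420 * x4 + 0.998 * x5

# illustrative balance-sheet figures (same monetary units throughout)
z = altman_z_prime(wc=150, re=300, ebit=120, bve=500,
                   sales=900, total_assets=1000, total_liabilities=500)
# conventional zones: z > 2.90 "safe", z < 1.23 "distress", between = grey
```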
A stepwise model to predict monthly streamflow
Mahmood Al-Juboori, Anas; Guven, Aytac
2016-12-01
In this study, a stepwise model empowered with genetic programming is developed to predict the monthly flows of the Hurman River in Turkey and the Diyalah and Lesser Zab Rivers in Iraq. The model divides the monthly flow data into twelve intervals representing the number of months in a year. The flow of month t is considered a function of the antecedent month's flow (t - 1) and is predicted by multiplying the antecedent monthly flow by a constant value called K. The optimum value of K is obtained by a stepwise procedure which employs Gene Expression Programming (GEP) and Nonlinear Generalized Reduced Gradient Optimization (NGRGO) as an alternative to the traditional nonlinear regression technique. The coefficient of determination and root mean squared error are used to evaluate the performance of the proposed models. The results of the proposed model are compared with the conventional Markovian and Auto Regressive Integrated Moving Average (ARIMA) models based on observed monthly flow data. The comparison results, based on five different statistical measures, show that the proposed stepwise model performed better than the Markovian and ARIMA models. The R2 values of the proposed model range between 0.81 and 0.92 for the three rivers in this study.
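The structural core of the model, one multiplicative constant K per calendar month in Q_t = K · Q_{t-1}, can be sketched with a plain least-squares calibration. This stands in for the paper's GEP/NGRGO search, and the flow values below are invented.

```python
import numpy as np

def calibrate_k(prev_flows, flows):
    """Least-squares K in the model Q_t = K * Q_{t-1} for one calendar
    month across several years (a simple regression stand-in for the
    GEP/NGRGO optimization used in the paper)."""
    prev_flows = np.asarray(prev_flows, float)
    flows = np.asarray(flows, float)
    return float(np.sum(prev_flows * flows) / np.sum(prev_flows ** 2))

# illustrative data: March flows predicted from February flows, four years
feb = [42.0, 55.0, 38.0, 61.0]
mar = [50.0, 66.0, 45.0, 74.0]
k = calibrate_k(feb, mar)
pred_mar = [k * q for q in feb]   # model predictions for March
```

Repeating this for each of the twelve month-to-month transitions yields the full stepwise model, whose predictions are then scored with R2 and RMSE as in the paper.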
Poisson Mixture Regression Models for Heart Disease Prediction.
Mufudza, Chipo; Erol, Hamza
2016-01-01
Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a Zero Inflated Poisson Mixture Regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease component-wise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks component-wise using Poisson mixture regression models.
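The clustering idea behind these models can be illustrated with the simplest member of the family: a two-component Poisson mixture without covariates, fitted by EM. This is a stripped-down stand-in for the paper's mixture *regression* models (which add covariates per component), and the count data below are simulated, not clinical.

```python
import math
import random

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def sample_poisson(lam, rnd):
    # Knuth's algorithm for Poisson sampling
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rnd.random()
        if p <= limit:
            return k
        k += 1

def fit_poisson_mixture(counts, lam1=1.0, lam2=5.0, pi=0.5, iters=200):
    """EM for a two-component Poisson mixture: clusters counts into a
    low-rate and a high-rate component, a simplified analogue of the
    component-wise rate estimation described in the paper."""
    for _ in range(iters):
        # E-step: responsibility of component 1 for each count
        r1 = []
        for k in counts:
            p1 = pi * poisson_pmf(k, lam1)
            p2 = (1 - pi) * poisson_pmf(k, lam2)
            r1.append(p1 / (p1 + p2))
        # M-step: weighted means update the rates; mean responsibility, the weight
        w1 = sum(r1)
        lam1 = sum(r * k for r, k in zip(r1, counts)) / w1
        lam2 = sum((1 - r) * k for r, k in zip(r1, counts)) / (len(counts) - w1)
        pi = w1 / len(counts)
    return lam1, lam2, pi

rnd = random.Random(7)
# simulated counts: 60 low-risk (rate 2) and 40 high-risk (rate 10) individuals
counts = [sample_poisson(2.0, rnd) for _ in range(60)] + \
         [sample_poisson(10.0, rnd) for _ in range(40)]
lam_lo, lam_hi, weight = fit_poisson_mixture(counts)
```

The full concomitant-variable model replaces the constant rates with per-component Poisson regressions and lets covariates drive the mixing weight, but the E/M alternation is the same.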
A predictive model for dimensional errors in fused deposition modeling
DEFF Research Database (Denmark)
Stolfi, A.
2015-01-01
This work concerns the effect of deposition angle (a) and layer thickness (L) on the dimensional performance of FDM parts using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole a range from 0° to 177° at 3° steps and two values of L (0.254 mm, 0.330 mm) was produced by comparing predicted values with external face-to-face measurements. After removing outliers, the results show that the developed two-parameter model can serve as a tool for modeling the FDM dimensional behavior in a wide range of deposition angles.
International Nuclear Information System (INIS)
Shatskiy, A. A.; Kovalev, Yu. Yu.; Novikov, I. D.
2015-01-01
The characteristic and distinctive features of the visibility amplitude of interferometric observations for compact objects like stars in the immediate vicinity of the central black hole in our Galaxy are considered. These features are associated with the specifics of strong gravitational scattering of point sources by black holes, wormholes, or black-white holes. The revealed features will help to determine the most important topological characteristics of the central object in our Galaxy: whether this object possesses the properties of only a black hole or also has characteristics unique to wormholes or black-white holes. These studies can be used to interpret the results of optical, infrared, and radio interferometric observations
Model Prediction Control For Water Management Using Adaptive Prediction Accuracy
Tian, X.; Negenborn, R.R.; Van Overloop, P.J.A.T.M.; Mostert, E.
2014-01-01
In the field of operational water management, Model Predictive Control (MPC) has gained popularity owing to its versatility and flexibility. The MPC controller, which takes predictions, time delay and uncertainties into account, can be designed for multi-objective management problems and for
Electrostatic ion thrusters - towards predictive modeling
Energy Technology Data Exchange (ETDEWEB)
Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)
2014-02-15
The development of electrostatic ion thrusters so far has mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This resulted in considerable effort regarding prototype design, construction and testing, and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and spacecraft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules, a new quality in the description of electrostatic thrusters can be reached. These open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)
Iowa calibration of MEPDG performance prediction models.
2013-06-01
This study aims to improve the accuracy of AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG) pavement : performance predictions for Iowa pavement systems through local calibration of MEPDG prediction models. A total of 130 : representative p...
Model complexity control for hydrologic prediction
Schoups, G.; Van de Giesen, N.C.; Savenije, H.H.G.
2008-01-01
A common concern in hydrologic modeling is overparameterization of complex models given limited and noisy data. This leads to problems of parameter nonuniqueness and equifinality, which may negatively affect prediction uncertainties. A systematic way of controlling model complexity is therefore
A statistical model for predicting muscle performance
Byerly, Diane Leslie De Caix
The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th order Autoregressive (AR) model and statistical regression analysis. It was determined that an AR-derived parameter, the mean average magnitude of AR poles, significantly correlated with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-space flight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned in a way that enhances astronaut performance and safety. Potential ground-based, medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing safety guidelines in the industry for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing
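The AR-derived parameter described above, the mean magnitude of the AR model's poles, can be sketched with a Yule-Walker fit. The surrogate signal below is filtered noise standing in for an SEMG segment; the study's preprocessing and regression steps are not reproduced.

```python
import numpy as np

def ar_pole_magnitudes(signal, order=5):
    """Fit an AR(order) model by the Yule-Walker equations and return the
    magnitudes of its poles (roots of the AR characteristic polynomial),
    the quantity whose mean the study correlates with Rmax."""
    x = np.asarray(signal, float) - np.mean(signal)
    n = len(x)
    # biased autocovariance estimates r[0..order] (guarantees a stable fit)
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])                    # AR coefficients
    poles = np.roots(np.concatenate(([1.0], -a)))    # z^p - a1 z^(p-1) - ...
    return np.abs(poles)

rng = np.random.default_rng(0)
# surrogate SEMG segment: smoothed white noise
semg = np.convolve(rng.normal(size=2000), np.ones(5) / 5, mode="valid")
mags = ar_pole_magnitudes(semg, order=5)
mean_mag = float(mags.mean())   # the predictor tracked repetition by repetition
```

In the study's setting, this value would be recomputed for the SEMG of each repetition and regressed against repetitions-to-failure.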
Real-time eSports Match Result Prediction
Yang, Yifan; Qin, Tian; Lei, Yu-Heng
2016-01-01
In this paper, we try to predict the winning team of a match in the multiplayer eSports game Dota 2. To address the weaknesses of previous work, we consider more aspects of prior (pre-match) features from individual players' match history, as well as real-time (during-match) features at each minute as the match progresses. We use logistic regression, the proposed Attribute Sequence Model, and their combinations as the prediction models. In a dataset of 78362 matches where 20631 matches contai...
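The logistic-regression baseline used in the paper can be sketched from scratch. The two features below (team-level prior win rates) are hypothetical stand-ins for the paper's player match-history features, and the tiny dataset is invented.

```python
import math

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Plain stochastic-gradient-descent logistic regression, the baseline
    model the paper combines with its Attribute Sequence Model. Features
    here are hypothetical pre-match statistics, not the paper's."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            g = p - yi                       # gradient of the log-loss
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, xi):
    return 1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))

# toy data: [radiant prior win rate, dire prior win rate] -> radiant won?
X = [[0.70, 0.40], [0.60, 0.50], [0.40, 0.60],
     [0.30, 0.70], [0.55, 0.45], [0.45, 0.62]]
y = [1, 1, 0, 0, 1, 0]
w, b = train_logistic(X, y)
acc = sum((predict(w, b, xi) > 0.5) == bool(yi)
          for xi, yi in zip(X, y)) / len(y)
```

The paper's real-time extension re-scores each match every minute as during-match features accumulate, which the same prediction function supports once those features are appended to the vector.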
DEFF Research Database (Denmark)
Christensen, Nikolaj Kruse; Christensen, Steen; Ferre, Ty
A major purpose of groundwater modeling is to help decision-makers in efforts to manage the natural environment. Increasingly, it is recognized that both the predictions of interest and their associated uncertainties should be quantified to support robust decision making. In particular, decision… the integration of geophysical data in the construction of a groundwater model increases the prediction performance. We suggest that modelers should perform a hydrogeophysical “test-bench” analysis of the likely value of geophysics data for improving groundwater model prediction performance before actually… and the resulting predictions can be compared with predictions from the ‘true’ model. By performing this analysis we expect to give the modeler insight into how the uncertainty of model-based prediction can be reduced.
Nonlinear model predictive control theory and algorithms
Grüne, Lars
2017-01-01
This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...
Directory of Open Access Journals (Sweden)
Jing Lu
2014-11-01
Full Text Available We propose a weather prediction model in this article based on a neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the “fuzzy rule-based neural network”, which simulates sequential relations among fuzzy sets using an artificial neural network; and the second part is the “neural fuzzy inference system”, which is based on the first part but can learn new fuzzy rules from the previous ones according to the algorithm we proposed. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. It is well known that the need for accurate weather prediction is apparent when considering the benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the “accurate” prediction results meaningless, and numerical prediction models are often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we make the predicted outcomes of precipitation more accurate and the prediction methods simpler than those of complex numerical forecasting models, which occupy large computation resources, are time-consuming, and have a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than by using traditional artificial neural networks that have low predictive accuracy.
Staying Power of Churn Prediction Models
Risselada, Hans; Verhoef, Peter C.; Bijmolt, Tammo H. A.
In this paper, we study the staying power of various churn prediction models. Staying power is defined as the predictive performance of a model in a number of periods after the estimation period. We examine two methods, logit models and classification trees, both with and without applying a bagging
Predictive user modeling with actionable attributes
Zliobaite, I.; Pechenizkiy, M.
2013-01-01
Different machine learning techniques have been proposed and used for modeling individual and group user needs, interests and preferences. In the traditional predictive modeling instances are described by observable variables, called attributes. The goal is to learn a model for predicting the target
Foundation Settlement Prediction Based on a Novel NGM Model
Directory of Open Access Journals (Sweden)
Peng-Yu Chen
2014-01-01
Full Text Available Prediction of foundation or subgrade settlement is very important during engineering construction. According to the fact that there are lots of settlement-time sequences with a nonhomogeneous index trend, a novel grey forecasting model called the NGM(1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM(1,1,k,c) model has the property of white exponential law coincidence and can predict a pure nonhomogeneous index sequence precisely. We used two case studies to verify the predictive effect of the NGM(1,1,k,c) model for settlement prediction. The results show that this model can achieve excellent prediction accuracy; thus, the model is quite suitable for simulation and prediction of approximately nonhomogeneous index sequences and has excellent application value in settlement prediction.
Plant control using embedded predictive models
International Nuclear Information System (INIS)
Godbole, S.S.; Gabler, W.E.; Eschbach, S.L.
1990-01-01
B and W recently undertook the design of an advanced light water reactor control system. A concept new to nuclear steam system (NSS) control was developed. The concept, which is called the Predictor-Corrector, uses mathematical models of portions of the controlled NSS to calculate, at various levels within the system, demand and control element position signals necessary to satisfy electrical demand. The models give the control system the ability to reduce overcooling and undercooling of the reactor coolant system during transients and upsets. Two types of mathematical models were developed for use in designing and testing the control system. One model was a conventional, comprehensive NSS model that responds to control system outputs and calculates the resultant changes in plant variables that are then used as inputs to the control system. Two other models, embedded in the control system, were less conventional, inverse models. These models accept as inputs plant variables, equipment states, and demand signals and predict plant operating conditions and control element states that will satisfy the demands. This paper reports preliminary results of closed-loop Reactor Coolant (RC) pump trip and normal load reduction testing of the advanced concept. Results of additional transient testing, and of open and closed loop stability analyses will be reported as they are available
Modelling rainfall erosion resulting from climate change
Kinnell, Peter
2016-04-01
It is well known that soil erosion leads to agricultural productivity decline and contributes to water quality decline. The current widely used models for determining soil erosion for management purposes in agriculture focus on long term (~20 years) average annual soil loss and are not well suited to determining variations that occur over short timespans and as a result of climate change. Soil loss resulting from rainfall erosion is directly dependent on the product of runoff and sediment concentration both of which are likely to be influenced by climate change. This presentation demonstrates the capacity of models like the USLE, USLE-M and WEPP to predict variations in runoff and erosion associated with rainfall events eroding bare fallow plots in the USA with a view to modelling rainfall erosion in areas subject to climate change.
Robust predictions of the interacting boson model
International Nuclear Information System (INIS)
Casten, R.F.; Koeln Univ.
1994-01-01
While most recognized for its symmetries and algebraic structure, the IBA model has other less-well-known but equally intrinsic properties which give unavoidable, parameter-free predictions. These predictions concern central aspects of low-energy nuclear collective structure. This paper outlines these "robust" predictions and compares them with the data.
Comparison of Prediction-Error-Modelling Criteria
DEFF Research Database (Denmark)
Jørgensen, John Bagterp; Jørgensen, Sten Bay
2007-01-01
Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...
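The abstract is truncated, but the mechanism behind such prediction-error criteria can be sketched: a Kalman filter for a (here scalar) state-space model produces one-step-ahead output predictions, and the criterion is built from the resulting prediction errors. The system parameters and measurement sequence below are illustrative assumptions, not taken from the paper.

```python
def kalman_predictions(ys, a, c, q, r, x0=0.0, p0=1.0):
    """One-step-ahead output predictions for the scalar model
    x(k+1) = a*x(k) + w,  y(k) = c*x(k) + v,  Var(w) = q, Var(v) = r."""
    x, p = x0, p0
    preds = []
    for y in ys:
        preds.append(c * x)            # predicted measurement before seeing y
        s = c * p * c + r              # innovation variance
        k = p * c / s                  # Kalman gain
        x = x + k * (y - c * x)        # measurement update
        p = (1.0 - k * c) * p
        x = a * x                      # time update (prediction)
        p = a * p * a + q
    return preds

# Prediction-error sum of squares, the quantity such criteria are built on:
ys = [1.0] * 20                        # illustrative constant measurements
preds = kalman_predictions(ys, a=1.0, c=1.0, q=0.0, r=1.0)
sse = sum((y - yp) ** 2 for y, yp in zip(ys, preds))
```

Minimizing such a sum of squared one-step prediction errors over the model parameters is the least-squares criterion the abstract compares against maximum likelihood.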
Validated predictive modelling of the environmental resistome.
Amos, Gregory C A; Gozzard, Emma; Carter, Charlotte E; Mead, Andrew; Bowes, Mike J; Hawkey, Peter M; Zhang, Lihong; Singer, Andrew C; Gaze, William H; Wellington, Elizabeth M H
2015-06-01
Multi-drug-resistant bacteria pose a significant threat to public health. The role of the environment in the overall rise in antibiotic-resistant infections and risk to humans is largely unknown. This study aimed to evaluate drivers of antibiotic-resistance levels across the River Thames catchment, model key biotic, spatial and chemical variables and produce predictive models for future risk assessment. Sediment samples from 13 sites across the River Thames basin were taken at four time points across 2011 and 2012. Samples were analysed for class 1 integron prevalence and enumeration of third-generation cephalosporin-resistant bacteria. Class 1 integron prevalence was validated as a molecular marker of antibiotic resistance; levels of resistance showed significant geospatial and temporal variation. The main explanatory variables of resistance levels at each sample site were the number, proximity, size and type of surrounding wastewater-treatment plants. Model 1 revealed treatment plants accounted for 49.5% of the variance in resistance levels. Other contributing factors were extent of different surrounding land cover types (for example, Neutral Grassland), temporal patterns and prior rainfall; when modelling all variables the resulting model (Model 2) could explain 82.9% of variations in resistance levels in the whole catchment. Chemical analyses correlated with key indicators of treatment plant effluent and a model (Model 3) was generated based on water quality parameters (contaminant and macro- and micro-nutrient levels). Model 2 was beta tested on independent sites and explained over 78% of the variation in integron prevalence showing a significant predictive ability. We believe all models in this study are highly useful tools for informing and prioritising mitigation strategies to reduce the environmental resistome.
Data Driven Economic Model Predictive Control
Directory of Open Access Journals (Sweden)
Masoud Kheradmandi
2018-04-01
Full Text Available This manuscript addresses the problem of data driven model based economic model predictive control (MPC design. To this end, first, a data-driven Lyapunov-based MPC is designed, and shown to be capable of stabilizing a system at an unstable equilibrium point. The data driven Lyapunov-based MPC utilizes a linear time invariant (LTI model cognizant of the fact that the training data, owing to the unstable nature of the equilibrium point, has to be obtained from closed-loop operation or experiments. Simulation results are first presented demonstrating closed-loop stability under the proposed data-driven Lyapunov-based MPC. The underlying data-driven model is then utilized as the basis to design an economic MPC. The economic improvements yielded by the proposed method are illustrated through simulations on a nonlinear chemical process system example.
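The paper's data-driven, Lyapunov-based design is far more elaborate than can be shown here, but the generic receding-horizon mechanism underlying any MPC can be sketched in a few lines. The scalar plant, cost weights and input grid below are invented for illustration; a real economic MPC would optimize an economic objective subject to stability constraints.

```python
from itertools import product

def mpc_step(x, a, b, horizon=3, u_grid=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    """Pick the first input of the lowest-cost input sequence over the horizon."""
    best_cost, best_u0 = float("inf"), 0.0
    for seq in product(u_grid, repeat=horizon):
        xk, cost = x, 0.0
        for u in seq:
            xk = a * xk + b * u
            cost += xk * xk + 0.1 * u * u   # quadratic stage cost
        if cost < best_cost:
            best_cost, best_u0 = cost, seq[0]
    return best_u0

# Closed loop: the plant x(k+1) = 1.2 x(k) + u(k) is open-loop unstable,
# yet the receding-horizon controller keeps it near the origin.
x = 2.0
for _ in range(20):
    x = 1.2 * x + mpc_step(x, a=1.2, b=1.0)
```

At every step only the first input of the optimized sequence is applied and the optimization is repeated from the new state, which is the defining feature of MPC.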
Predictions of models for environmental radiological assessment
International Nuclear Information System (INIS)
Peres, Sueli da Silva; Lauria, Dejanira da Costa; Mahler, Claudio Fernando
2011-01-01
In the field of environmental impact assessment, models are used for estimating the source term, environmental dispersion and transfer of radionuclides, exposure pathways, radiation doses and the risk to human beings. Although it is recognized that specific local data are important to improve the quality of dose assessment results, in practice obtaining them can be very difficult and expensive. Sources of uncertainty are numerous, among which we can cite: the subjectivity of modelers, exposure scenarios and pathways, the codes used and general parameters. The various models available use different mathematical approaches of different complexity, which can result in different predictions. Thus, for the same inputs, different models can produce very different outputs. This paper briefly presents the main advances in the field of environmental radiological assessment that aim to improve the reliability of the models used in the assessment of environmental radiological impact. A model intercomparison exercise yielded incompatible results for 137Cs and 60Co, underlining the need for reference methodologies for environmental radiological assessment that allow dose estimates to be confronted on a common basis of comparison. The results of the intercomparison exercise are presented briefly. (author)
Modelling microbial interactions and food structure in predictive microbiology
Malakar, P.K.
2002-01-01
Keywords: modelling, dynamic models, microbial interactions, diffusion, microgradients, colony growth, predictive microbiology.
Growth response of microorganisms in foods is a complex process. Innovations in food production and preservation techniques have resulted in adoption of
Artificial Neural Network Model for Predicting Compressive
Directory of Open Access Journals (Sweden)
Salim T. Yousif
2013-05-01
Full Text Available Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing the model with unused data within the range of the input parameters shows a maximum absolute error of about 20%, with 88% of the outputs having absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results showed that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
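The back-propagation mechanics behind such a model fit in a short pure-Python sketch. The network below is not the paper's model: the inputs (standing in for a normalized w/c ratio and slump), target function, layer size and learning rate are all invented for illustration; the paper trains on real mix-proportion data.

```python
import math, random

def train_nn(data, hidden=6, lr=0.05, epochs=400, seed=1):
    """Train a one-hidden-layer net (tanh hidden, linear output) by backprop."""
    rng = random.Random(seed)
    n_in = len(data[0][0])
    W1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    W2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0
    for _ in range(epochs):
        for x, y in data:
            h = [math.tanh(sum(w * xi for w, xi in zip(W1[j], x)) + b1[j])
                 for j in range(hidden)]
            out = sum(w * hj for w, hj in zip(W2, h)) + b2
            err = out - y                              # d(0.5*err^2)/d(out)
            for j in range(hidden):
                g = err * W2[j] * (1.0 - h[j] ** 2)    # gradient at hidden pre-activation
                W2[j] -= lr * err * h[j]
                for i in range(n_in):
                    W1[j][i] -= lr * g * x[i]
                b1[j] -= lr * g
            b2 -= lr * err
    def predict(x):
        h = [math.tanh(sum(w * xi for w, xi in zip(W1[j], x)) + b1[j])
             for j in range(hidden)]
        return sum(w * hj for w, hj in zip(W2, h)) + b2
    return predict

# Hypothetical normalized inputs (say, w/c ratio and slump) and strength-like target.
data = [([w / 4.0, s / 4.0], 1.0 - 0.8 * (w / 4.0) + 0.1 * (s / 4.0))
        for w in range(5) for s in range(5)]
model = train_nn(data)
```

After training, the network reproduces the toy strength surface to within a few percent, mirroring the kind of in-range accuracy the abstract reports.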
Extracting falsifiable predictions from sloppy models.
Gutenkunst, Ryan N; Casey, Fergal P; Waterfall, Joshua J; Myers, Christopher R; Sethna, James P
2007-12-01
Successful predictions are among the most compelling validations of any model. Extracting falsifiable predictions from nonlinear multiparameter models is complicated by the fact that such models are commonly sloppy, possessing sensitivities to different parameter combinations that range over many decades. Here we discuss how sloppiness affects the sorts of data that best constrain model predictions, makes linear uncertainty approximations dangerous, and introduces computational difficulties in Monte-Carlo uncertainty analysis. We also present a useful test problem and suggest refinements to the standards by which models are communicated.
The prediction of epidemics through mathematical modeling.
Schaus, Catherine
2014-01-01
Mathematical models may be resorted to in an endeavor to predict the development of epidemics. The SIR model is one of the applications. Still too approximate, the use of statistics awaits more data in order to come closer to reality.
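The SIR model named in the abstract is concrete enough to sketch: three compartments (susceptible, infectious, recovered) coupled by two rates. The transmission and recovery rates below are illustrative assumptions, not fitted to any epidemic.

```python
def sir(s0, i0, r0, beta, gamma, dt=0.1, steps=1000):
    """Forward-Euler integration of the SIR equations (population fractions)."""
    s, i, r = s0, i0, r0
    peak_i = i
    for _ in range(steps):
        new_inf = beta * s * i * dt   # S -> I transfer
        new_rec = gamma * i * dt      # I -> R transfer
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        peak_i = max(peak_i, i)
    return s, i, r, peak_i

# Illustrative rates only: basic reproduction number R0 = beta/gamma = 5.
s, i, r, peak = sir(0.99, 0.01, 0.0, beta=0.5, gamma=0.1)
```

The simulation conserves the population exactly and produces the characteristic epidemic curve: a large peak when R0 exceeds 1, then burn-out, which is the behaviour the SIR framework lets one predict once its rates are estimated from data.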
Calibration of PMIS pavement performance prediction models.
2012-02-01
Improve the accuracy of TxDOT's existing pavement performance prediction models by calibrating these models using actual field data obtained from the Pavement Management Information System (PMIS). Ensure logical performance superiority patte...
Tectonic predictions with mantle convection models
Coltice, Nicolas; Shephard, Grace E.
2018-04-01
Over the past 15 yr, numerical models of convection in Earth's mantle have made a leap forward: they can now produce self-consistent plate-like behaviour at the surface together with deep mantle circulation. These digital tools provide a new window into the intimate connections between plate tectonics and mantle dynamics, and can therefore be used for tectonic predictions, in principle. This contribution explores this assumption. First, initial conditions at 30, 20, 10 and 0 Ma are generated by driving a convective flow with imposed plate velocities at the surface. We then compute instantaneous mantle flows in response to the guessed temperature fields without imposing any boundary conditions. Plate boundaries self-consistently emerge at correct locations with respect to reconstructions, except for small plates close to subduction zones. As already observed for other types of instantaneous flow calculations, the structure of the top boundary layer and upper-mantle slab is the dominant character that leads to accurate predictions of surface velocities. Perturbations of the rheological parameters have little impact on the resulting surface velocities. We then compute fully dynamic model evolution from 30 and 10 to 0 Ma, without imposing plate boundaries or plate velocities. Contrary to instantaneous calculations, errors in kinematic predictions are substantial, although the plate layout and kinematics in several areas remain consistent with the expectations for the Earth. For these calculations, varying the rheological parameters makes a difference for plate boundary evolution. Also, identified errors in initial conditions contribute to first-order kinematic errors. This experiment shows that the tectonic predictions of dynamic models over 10 My are highly sensitive to uncertainties of rheological parameters and initial temperature field in comparison to instantaneous flow calculations. Indeed, the initial conditions and the rheological parameters can be good enough
Case studies in archaeological predictive modelling
Verhagen, Jacobus Wilhelmus Hermanus Philippus
2007-01-01
In this thesis, a collection of papers is put together dealing with various quantitative aspects of predictive modelling and archaeological prospection. Among the issues covered are the effects of survey bias on the archaeological data used for predictive modelling, and the complexities of testing
Wessler, Benjamin S; Lai Yh, Lana; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S; Kent, David M
2015-07-01
Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease, there are numerous CPMs available although the extent of this literature is not well described. We conducted a systematic review for articles containing CPMs for cardiovascular disease published between January 1990 and May 2012. Cardiovascular disease includes coronary heart disease, heart failure, arrhythmias, stroke, venous thromboembolism, and peripheral vascular disease. We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year is increasing steadily over time. Seven hundred seventeen (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions, including 215 CPMs for patients with coronary artery disease, 168 CPMs for population samples, and 79 models for patients with heart failure. There are 77 distinct index/outcome pairings. Of the de novo models in this database, 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. There is an abundance of CPMs available for a wide assortment of cardiovascular disease conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models and the actual and potential clinical impact of this body of literature is poorly understood. © 2015 American Heart Association, Inc.
A model to predict the power output from wind farms
Energy Technology Data Exchange (ETDEWEB)
Landberg, L. [Riso National Lab., Roskilde (Denmark)
1997-12-31
This paper will describe a model that can predict the power output from wind farms. To give examples of input, the model is applied to a wind farm in Texas. The predictions are generated from forecasts from the NGM model of NCEP. These predictions are made valid at individual sites (wind farms) by applying a matrix calculated by the sub-models of WAsP (Wind Atlas Analysis and Application Program). The actual wind farm production is calculated using the Riso PARK model. Because of the preliminary nature of the results, they will not be given. However, similar results from Europe will be given.
Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.
Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F
2013-04-01
In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference making as few assumptions about restrictive parametric models as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models by one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.
An Anisotropic Hardening Model for Springback Prediction
Zeng, Danielle; Xia, Z. Cedric
2005-08-01
As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closures panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture realistic Bauschinger effect at reverse loading, such as when material passes through die radii or drawbead during sheet metal forming process. This model accounts for material anisotropic yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent Bauschinger effect. The effectiveness of the model is demonstrated by comparison of numerical and experimental springback results for a DP600 straight U-channel test.
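The modified Mroz model is not reproduced in the abstract, but the Bauschinger effect it is built to capture can be illustrated with the simplest return-mapping algorithm: 1-D plasticity with linear kinematic hardening. The elastic modulus, hardening modulus and yield stress below are generic steel-like values, not DP600 data.

```python
def kinematic_step(eps_inc, state, E=200.0e3, H=10.0e3, Y=300.0):
    """One strain increment of 1-D plasticity with linear kinematic hardening
    (return mapping).  state = (stress, backstress, plastic_strain); MPa units."""
    sig, alpha, ep = state
    sig_trial = sig + E * eps_inc            # elastic predictor
    f = abs(sig_trial - alpha) - Y           # yield function w.r.t. backstress
    if f <= 0.0:
        return (sig_trial, alpha, ep)        # purely elastic step
    dgamma = f / (E + H)                     # plastic corrector
    sign = 1.0 if sig_trial - alpha > 0.0 else -1.0
    return (sig_trial - E * dgamma * sign,
            alpha + H * dgamma * sign,
            ep + dgamma * sign)

# Monotonic tension to 0.4% strain: the backstress moves with plastic flow,
# so on reversal yielding restarts at stress_max - 2*Y (the Bauschinger effect).
state = (0.0, 0.0, 0.0)
for _ in range(40):
    state = kinematic_step(1.0e-4, state)
```

Because the yield surface translates (backstress alpha) rather than expands, reverse loading re-yields well before the stress reaches -Y, which is exactly the earlier reverse yielding that springback prediction must capture.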
An Anisotropic Hardening Model for Springback Prediction
International Nuclear Information System (INIS)
Zeng, Danielle; Xia, Z. Cedric
2005-01-01
As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closures panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture realistic Bauschinger effect at reverse loading, such as when material passes through die radii or drawbead during sheet metal forming process. This model accounts for material anisotropic yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent Bauschinger effect. The effectiveness of the model is demonstrated by comparison of numerical and experimental springback results for a DP600 straight U-channel test
Incorporating uncertainty in predictive species distribution modelling.
Beale, Colin M; Lennon, Jack J
2012-01-19
Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.
PREDICTION MODELS OF GRAIN YIELD AND CHARACTERIZATION
Directory of Open Access Journals (Sweden)
Narciso Ysac Avila Serrano
2009-06-01
Full Text Available With the objective to characterize the grain yield of five cowpea cultivars and to find linear regression models to predict it, a study was developed in La Paz, Baja California Sur, Mexico. A complete randomized blocks design was used. Simple and multivariate analyses of variance were carried out using the canonical variables to characterize the cultivars. The variables clusters per plant, pods per plant, pods per cluster, seed weight per plant, seed hectoliter weight, 100-seed weight, seed length, seed width, seed thickness, pod length, pod width, pod weight, seeds per pod, and seed weight per pod showed significant differences (P ≤ 0.05) among cultivars. The Paceño and IT90K-277-2 cultivars showed the highest seed weight per plant. The linear regression models showed correlation coefficients ≥ 0.92. In these models, seed weight per plant, pods per cluster, pods per plant, clusters per plant and pod length showed significant correlations (P ≤ 0.05). In conclusion, the results showed that grain yield differs among cultivars and, for its estimation, the prediction models showed highly dependable determination coefficients.
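The linear regression machinery behind such yield models is a one-function sketch: ordinary least squares plus the correlation coefficient the abstract reports. The trait values below are invented for illustration, not the study's data.

```python
def linreg(xs, ys):
    """Ordinary least squares fit y = m*x + c, plus Pearson correlation r."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    m = sxy / sxx
    c = my - m * mx
    r = sxy / (sxx * syy) ** 0.5
    return m, c, r

# Hypothetical numbers: pods per plant vs. grain yield per plant (g).
pods = [8.0, 10.0, 12.0, 14.0, 16.0]
yield_g = [14.5, 19.0, 23.5, 28.0, 32.5]   # lies exactly on 2.25*x - 3.5
m, c, r = linreg(pods, yield_g)
```

With several such predictors one would move to multiple regression, but the single-predictor case already shows where the reported r ≥ 0.92 and the determination coefficient r² come from.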
Model Predictive Control for Smart Energy Systems
DEFF Research Database (Denmark)
Halvgaard, Rasmus
pumps, heat tanks, electrical vehicle battery charging/discharging, wind farms, power plants). 2.Embed forecasting methodologies for the weather (e.g. temperature, solar radiation), the electricity consumption, and the electricity price in a predictive control system. 3.Develop optimization algorithms....... Chapter 3 introduces Model Predictive Control (MPC) including state estimation, filtering and prediction for linear models. Chapter 4 simulates the models from Chapter 2 with the certainty equivalent MPC from Chapter 3. An economic MPC minimizes the costs of consumption based on real electricity prices...... that determined the flexibility of the units. A predictive control system easily handles constraints, e.g. limitations in power consumption, and predicts the future behavior of a unit by integrating predictions of electricity prices, consumption, and weather variables. The simulations demonstrate the expected...
Evaluating the Predictive Value of Growth Prediction Models
Murphy, Daniel L.; Gaertner, Matthew N.
2014-01-01
This study evaluates four growth prediction models--projection, student growth percentile, trajectory, and transition table--commonly used to forecast (and give schools credit for) middle school students' future proficiency. Analyses focused on vertically scaled summative mathematics assessments, and two performance standards conditions (high…
Predicting Football Matches Results using Bayesian Networks for English Premier League (EPL)
Razali, Nazim; Mustapha, Aida; Yatim, Faiz Ahmad; Aziz, Ruhaya Ab
2017-08-01
The modeling of association football prediction has become increasingly popular in the last few years, and many different prediction approaches have been proposed to evaluate the attributes that lead a football team to lose, draw or win a match. Three types of approaches have been considered for predicting football match results: statistical approaches, machine learning approaches and Bayesian approaches. Lately, many studies on football prediction models have been produced using Bayesian approaches. This paper proposes Bayesian Networks (BNs) to predict the results of football matches in terms of home win (H), away win (A) and draw (D). The English Premier League (EPL) for the three seasons of 2010-2011, 2011-2012 and 2012-2013 has been selected and reviewed. K-fold cross validation has been used for testing the accuracy of the prediction model. The required information about the football data is sourced from a legitimate site at http://www.football-data.co.uk. The BNs achieved a predictive accuracy of 75.09% on average across the three seasons. It is hoped that the results could be used as a benchmark output for future research in predicting football match results.
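A full Bayesian network is beyond a short sketch, but the simplest Bayesian baseline for the H/D/A outcome space is a Dirichlet-smoothed frequency estimate over past results. The match record below is invented; a BN as in the paper would additionally condition on team attributes.

```python
def outcome_probs(results, alpha=1.0):
    """P(H), P(D), P(A) from past results, with Dirichlet (Laplace) smoothing."""
    counts = {"H": alpha, "D": alpha, "A": alpha}
    for res in results:
        counts[res] += 1.0
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

# Invented record: 6 home wins, 2 draws, 2 away wins in past meetings.
probs = outcome_probs("HHHHHHDDAA")
```

The smoothing pseudo-counts (alpha) keep every outcome possible even when it has never been observed, which is the Bayesian ingredient that plain frequency counting lacks.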
Accurate and dynamic predictive model for better prediction in medicine and healthcare.
Alanazi, H O; Abdullah, A H; Qureshi, K N; Ismail, A S
2018-05-01
Information and communication technologies (ICTs) have changed the trend into new integrated operations and methods in all fields of life. The health sector has also adopted new technologies to improve its systems and provide better services to customers. Predictive models in health care have likewise been influenced by new technologies for predicting disease outcomes. However, existing predictive models still suffer from limitations in predictive performance. To improve predictive model performance, this paper proposes a predictive model that classifies disease predictions into different categories, using traumatic brain injury (TBI) datasets. TBI is one of the most serious diseases worldwide and needs more attention due to its severe impact on human life. The proposed model improves the predictive performance for TBI. The TBI data set was developed and approved by neurologists to set its features. The experimental results show that the proposed model has achieved significant results in accuracy, sensitivity, and specificity.
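The three figures of merit the abstract cites (accuracy, sensitivity, specificity) have standard definitions from the binary confusion matrix, sketched below; the example labels are invented purely for illustration.

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, sensitivity (true-positive rate) and specificity (true-negative
    rate) for binary labels coded 1 = positive, 0 = negative."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return accuracy, sensitivity, specificity

# Invented labels purely for illustration:
acc, sens, spec = classification_metrics([1, 1, 1, 0, 0], [1, 1, 0, 0, 1])
```

Reporting sensitivity and specificity separately, as the paper does, matters in medicine because the costs of missing a TBI case and of a false alarm are very different.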
Model predictive control classical, robust and stochastic
Kouvaritakis, Basil
2016-01-01
For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered and the state of the art in computationally tractable methods based on uncertainty tubes presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...
Modeling, robust and distributed model predictive control for freeway networks
Liu, S.
2016-01-01
In Model Predictive Control (MPC) for traffic networks, traffic models are crucial since they are used as prediction models for determining the optimal control actions. In order to reduce the computational complexity of MPC for traffic networks, macroscopic traffic models are often used instead of
Deep Predictive Models in Interactive Music
Martin, Charles P.; Ellefsen, Kai Olav; Torresen, Jim
2018-01-01
Automatic music generation is a compelling task where much recent progress has been made with deep learning models. In this paper, we ask how these models can be integrated into interactive music systems; how can they encourage or enhance the music making of human users? Musical performance requires prediction to operate instruments, and perform in groups. We argue that predictive models could help interactive systems to understand their temporal context, and ensemble behaviour. Deep learning...
Web tools for predictive toxicology model building.
Jeliazkova, Nina
2012-07-01
The development and use of web tools in chemistry has accumulated more than 15 years of history already. Powered by advances in Internet technologies, the current generation of web systems is starting to expand into areas traditionally reserved for desktop applications. The web platforms integrate data storage, cheminformatics and data analysis tools. The ease of use and the collaborative potential of the web are compelling, despite the challenges. The topic of this review is a set of recently published web tools that facilitate predictive toxicology model building. The focus is on software platforms offering web access to chemical structure-based methods, although some of the frameworks could also provide bioinformatics or hybrid data analysis functionalities. A number of historical and current developments are cited. In order to provide a comparable assessment, the following characteristics are considered: support for workflows, descriptor calculations, visualization, modeling algorithms, data management and data sharing capabilities, availability of GUI or programmatic access, and implementation details. The success of the Web is largely due to its highly decentralized, yet sufficiently interoperable model for information access. The expected future convergence between cheminformatics and bioinformatics databases provides new challenges for the management and analysis of large data sets. The web tools in predictive toxicology will likely continue to evolve toward the right mix of flexibility, performance, scalability, interoperability, sets of unique features offered, friendly user interfaces, programmatic access for advanced users, platform independence, results reproducibility, curation and crowdsourcing utilities, collaborative sharing and secure access.
Unreachable Setpoints in Model Predictive Control
DEFF Research Database (Denmark)
Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp
2008-01-01
In this work, a new model predictive controller is developed that handles unreachable setpoints better than traditional model predictive control methods. The new controller induces an interesting fast/slow asymmetry in the tracking response of the system. Nominal asymptotic stability of the optimal...... steady state is established for terminal constraint model predictive control (MPC). The region of attraction is the steerable set. Existing analysis methods for closed-loop properties of MPC are not applicable to this new formulation, and a new analysis method is developed. It is shown how to extend...
Predictive Modelling and Time: An Experiment in Temporal Archaeological Predictive Models
David Ebert
2006-01-01
One of the most common criticisms of archaeological predictive modelling is that it fails to account for temporal or functional differences in sites. However, a practical solution to temporal or functional predictive modelling has proven to be elusive. This article discusses temporal predictive modelling, focusing on the difficulties of employing temporal variables, then introduces and tests a simple methodology for the implementation of temporal modelling. The temporal models thus created ar...
Fingerprint verification prediction model in hand dermatitis.
Lee, Chew K; Chang, Choong C; Johor, Asmah; Othman, Puwira; Baba, Roshidah
2015-07-01
Hand dermatitis associated fingerprint changes is a significant problem and affects fingerprint verification processes. This study was done to develop a clinically useful prediction model for fingerprint verification in patients with hand dermatitis. A case-control study involving 100 patients with hand dermatitis. All patients verified their thumbprints against their identity card. Registered fingerprints were randomized into a model derivation and model validation group. Predictive model was derived using multiple logistic regression. Validation was done using the goodness-of-fit test. The fingerprint verification prediction model consists of a major criterion (fingerprint dystrophy area of ≥ 25%) and two minor criteria (long horizontal lines and long vertical lines). The presence of the major criterion predicts it will almost always fail verification, while presence of both minor criteria and presence of one minor criterion predict high and low risk of fingerprint verification failure, respectively. When none of the criteria are met, the fingerprint almost always passes the verification. The area under the receiver operating characteristic curve was 0.937, and the goodness-of-fit test showed agreement between the observed and expected number (P = 0.26). The derived fingerprint verification failure prediction model is validated and highly discriminatory in predicting risk of fingerprint verification in patients with hand dermatitis. © 2014 The International Society of Dermatology.
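The decision rule stated in the abstract is explicit enough to transcribe directly: one major criterion (dystrophy area ≥ 25%) and two minor criteria (long horizontal lines, long vertical lines). The risk labels below paraphrase the abstract's wording; the underlying logistic-regression coefficients are not reproduced.

```python
def fingerprint_risk(dystrophy_area_pct, long_horizontal, long_vertical):
    """Apply the derived criteria: major = dystrophy area >= 25% of the print;
    minor = long horizontal lines, long vertical lines."""
    if dystrophy_area_pct >= 25.0:
        return "almost always fails"
    minors = int(bool(long_horizontal)) + int(bool(long_vertical))
    if minors == 2:
        return "high risk"
    if minors == 1:
        return "low risk"
    return "almost always passes"
```

Collapsing a fitted regression into a small criterion table like this is what makes such models clinically usable at the point of care.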
Prostate Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing prostate cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Colorectal Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Esophageal Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing esophageal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Bladder Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing bladder cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Lung Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing lung cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Breast Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing breast cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Pancreatic Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing pancreatic cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Ovarian Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing ovarian cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Liver Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing liver cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Testicular Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing testicular cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Cervical Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing cervical cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Modeling and Prediction Using Stochastic Differential Equations
DEFF Research Database (Denmark)
Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp
2016-01-01
Pharmacokinetic/pharmacodynamic (PK/PD) modeling for a single subject is most often performed using nonlinear models based on deterministic ordinary differential equations (ODEs), and the variation between subjects in a population of subjects is described using a population (mixed effects) setup...... deterministic and can predict the future perfectly. A more realistic approach would be to allow for randomness in the model due to, e.g., the model being too simple or errors in input. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs...
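The SDE idea in this abstract can be illustrated with a minimal Euler-Maruyama simulation: the same drift as a deterministic ODE, plus a diffusion term that injects system noise. The model dx = -a·x dt + sigma dW and all parameter values below are assumptions for the sketch, not the authors' PK/PD model.

```python
import math, random

# Euler-Maruyama sketch: replacing the deterministic ODE dx = -a*x dt
# with the SDE dx = -a*x dt + sigma dW adds system noise, so forecasts
# carry uncertainty. Parameters are illustrative.

def euler_maruyama(x0, a, sigma, dt, n_steps, rng):
    x = x0
    path = [x]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))   # Wiener increment
        x = x + (-a * x) * dt + sigma * dw   # drift + diffusion
        path.append(x)
    return path

rng = random.Random(0)
path = euler_maruyama(x0=1.0, a=0.5, sigma=0.1, dt=0.01, n_steps=100, rng=rng)
print(len(path), round(path[-1], 3))
```

With sigma = 0 the scheme reduces to explicit Euler on the deterministic ODE, which is the degenerate "predicts the future perfectly" case the abstract criticizes.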
Predictive Model of Systemic Toxicity (SOT)
In an effort to ensure chemical safety in light of regulatory advances away from reliance on animal testing, USEPA and L’Oréal have collaborated to develop a quantitative systemic toxicity prediction model. Prediction of human systemic toxicity has proved difficult and remains a ...
Spent fuel: prediction model development
International Nuclear Information System (INIS)
Almassy, M.Y.; Bosi, D.M.; Cantley, D.A.
1979-07-01
The need for spent fuel disposal performance modeling stems from a requirement to assess the risks involved with deep geologic disposal of spent fuel, and to support licensing and public acceptance of spent fuel repositories. Through the balanced program of analysis, diagnostic testing, and disposal demonstration tests, highlighted in this presentation, the goal of defining risks and of quantifying fuel performance during long-term disposal can be attained.
Navy Recruit Attrition Prediction Modeling
2014-09-01
Variables with high correlation with attrition include age, job characteristics, command climate, marital status, and behavior issues prior to recruitment. The additive model was fitted in R as glm(formula = Outcome ~ Age + Gender + Marital + AFQTCat + Pay + Ed + Dep, family = binomial, data = ltraining), with a null deviance of 105441 on 85221 degrees of freedom (dispersion parameter for the binomial family taken to be 1).
Predicting and Modeling RNA Architecture
Westhof, Eric; Masquida, Benoît; Jossinet, Fabrice
2011-01-01
SUMMARY A general approach for modeling the architecture of large and structured RNA molecules is described. The method exploits the modularity and the hierarchical folding of RNA architecture that is viewed as the assembly of preformed double-stranded helices defined by Watson-Crick base pairs and RNA modules maintained by non-Watson-Crick base pairs. Despite the extensive molecular neutrality observed in RNA structures, specificity in RNA folding is achieved through global constraints like lengths of helices, coaxiality of helical stacks, and structures adopted at the junctions of helices. The Assemble integrated suite of computer tools allows for sequence and structure analysis as well as interactive modeling by homology or ab initio assembly with possibilities for fitting within electronic density maps. The local key role of non-Watson-Crick pairs guides RNA architecture formation and offers metrics for assessing the accuracy of three-dimensional models in a more useful way than usual root mean square deviation (RMSD) values. PMID:20504963
Predictive Models and Computational Toxicology (II IBAMTOX)
EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...
Finding furfural hydrogenation catalysts via predictive modelling
Strassberger, Z.; Mooijman, M.; Ruijter, E.; Alberts, A.H.; Maldonado, A.G.; Orru, R.V.A.; Rothenberg, G.
2010-01-01
We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes were synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol...
Mental models accurately predict emotion transitions.
Thornton, Mark A; Tamir, Diana I
2017-06-06
Successful social interactions depend on people's ability to predict others' future actions and emotions. People possess many mechanisms for perceiving others' current emotional states, but how might they use this information to predict others' future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others' emotional dynamics. People could then use these mental models of emotion transitions to predict others' future emotions from currently observable emotions. To test this hypothesis, studies 1-3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants' ratings of emotion transitions predicted others' experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation-valence, social impact, rationality, and human mind-inform participants' mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants' accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.
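The core computation behind studies 1-3 (estimating transition rates between emotions from experience-sampling sequences) can be sketched as follows; the emotion labels and the sequence are invented for illustration.

```python
from collections import Counter

# Estimate first-order emotion transition probabilities from a sequence
# of self-reported emotional states, as in experience-sampling data.
# The labels and the sequence are synthetic stand-ins.

def transition_probs(sequence):
    pair_counts = Counter(zip(sequence, sequence[1:]))
    from_counts = Counter(sequence[:-1])
    return {(a, b): c / from_counts[a] for (a, b), c in pair_counts.items()}

reports = ["calm", "happy", "happy", "calm", "sad", "calm", "happy"]
probs = transition_probs(reports)
print(round(probs[("calm", "happy")], 2))  # 0.67
```

Participants' rated transition likelihoods can then be correlated against such empirical rates, which is the accuracy comparison the studies report.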
Mental models accurately predict emotion transitions
Thornton, Mark A.; Tamir, Diana I.
2017-01-01
Successful social interactions depend on people’s ability to predict others’ future actions and emotions. People possess many mechanisms for perceiving others’ current emotional states, but how might they use this information to predict others’ future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others’ emotional dynamics. People could then use these mental models of emotion transitions to predict others’ future emotions from currently observable emotions. To test this hypothesis, studies 1–3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants’ ratings of emotion transitions predicted others’ experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation—valence, social impact, rationality, and human mind—inform participants’ mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants’ accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone. PMID:28533373
Dynamic Simulation of Human Gait Model With Predictive Capability.
Sun, Jinming; Wu, Shaoli; Voglewede, Philip A
2018-03-01
In this paper, it is proposed that the central nervous system (CNS) controls human gait using a predictive control approach in conjunction with classical feedback control instead of exclusive classical feedback control theory that controls based on past error. To validate this proposition, a dynamic model of human gait is developed using a novel predictive approach to investigate the principles of the CNS. The model developed includes two parts: a plant model that represents the dynamics of human gait and a controller that represents the CNS. The plant model is a seven-segment, six-joint model that has nine degrees-of-freedom (DOF). The plant model is validated using data collected from able-bodied human subjects. The proposed controller utilizes model predictive control (MPC). MPC uses an internal model to predict the output in advance, compare the predicted output to the reference, and optimize the control input so that the predicted error is minimal. To decrease the complexity of the model, two joints are controlled using a proportional-derivative (PD) controller. The developed predictive human gait model is validated by simulating able-bodied human gait. The simulation results show that the developed model is able to simulate the kinematic output close to experimental data.
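The predict-optimize-apply loop that MPC performs can be illustrated on a toy scalar plant; this is not the paper's seven-segment gait model, and the horizon, weights, and candidate grid below are assumptions for the sketch.

```python
# Receding-horizon (MPC) loop for a scalar linear plant x[k+1] = x[k] + u[k]:
# predict over the horizon, score each candidate input, apply only the
# first control, then repeat. All numbers are illustrative.

def mpc_step(x, ref, horizon=5, r=0.1, candidates=None):
    """Pick the best constant input over the horizon (brute-force search)."""
    if candidates is None:
        candidates = [i / 100.0 for i in range(-100, 101)]  # u in [-1, 1]
    best_u, best_cost = 0.0, float("inf")
    for u in candidates:
        xp, cost = x, 0.0
        for _ in range(horizon):       # predict the output in advance
            xp = xp + u
            cost += (xp - ref) ** 2 + r * u * u
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

x, ref = 0.0, 1.0
for _ in range(10):                    # closed loop: apply first input only
    x = x + mpc_step(x, ref)
print(round(x, 2))
```

Real MPC solves the horizon optimization with a quadratic program over a full input sequence rather than a grid search, but the closed-loop structure is the same.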
Return Predictability, Model Uncertainty, and Robust Investment
DEFF Research Database (Denmark)
Lukas, Manuel
Stock return predictability is subject to great uncertainty. In this paper we use the model confidence set approach to quantify uncertainty about expected utility from investment, accounting for potential return predictability. For monthly US data and six representative return prediction models, we...... find that confidence sets are very wide, change significantly with the predictor variables, and frequently include expected utilities for which the investor prefers not to invest. The latter motivates a robust investment strategy maximizing the minimal element of the confidence set. The robust investor...... allocates a much lower share of wealth to stocks compared to a standard investor....
Model predictive Controller for Mobile Robot
Alireza Rezaee
2017-01-01
This paper proposes a Model Predictive Controller (MPC) for control of a P2AT mobile robot. MPC refers to a group of controllers that employ an explicit model of the process to predict its future behavior over an extended prediction horizon. The design of an MPC is formulated as an optimal control problem. This problem is then cast as a linear quadratic regulator (LQR) problem and solved by making use of the Riccati equation. To show the effectiveness of the proposed method this controller is...
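The LQR-via-Riccati step the abstract describes can be sketched for a scalar discrete-time system; the system and cost parameters below are illustrative, not the P2AT robot model.

```python
# Discrete-time LQR for the scalar system x[k+1] = a*x[k] + b*u[k] with
# cost sum(q*x^2 + r*u^2): iterate the discrete Riccati equation backward
# to a fixed point, then form the optimal feedback gain. Numbers are
# illustrative.

def lqr_gain(a, b, q, r, iters=200):
    p = q
    for _ in range(iters):  # P = q + a^2 P - (a b P)^2 / (r + b^2 P)
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)   # optimal feedback u = -k * x

k = lqr_gain(a=1.0, b=1.0, q=1.0, r=1.0)
x = 1.0
for _ in range(20):
    x = x - k * x        # closed loop: x[k+1] = (a - b*k) * x[k]
print(round(abs(x), 4))  # prints 0.0 (state regulated to zero)
```

For a = b = q = r = 1 the Riccati fixed point is the golden ratio, giving k = p/(1 + p) ≈ 0.618, a handy sanity check.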
Spatial Economics Model Predicting Transport Volume
Directory of Open Access Journals (Sweden)
Lu Bo
2016-10-01
Full Text Available It is extremely important to predict logistics requirements in a scientific and rational way. However, in recent years the improvement in prediction methods has not been significant, and traditional statistical prediction methods suffer from low precision and poor interpretability: they can neither guarantee the generalization ability of the prediction model theoretically nor explain the models effectively. Therefore, in combination with the theories of spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, the study identifies the leading industry that can produce a large number of cargoes, and further predicts the static logistics generation of Zhuanghe and its hinterlands. By integrating various factors that can affect regional logistics requirements, this study established a logistics requirements potential model based on spatial economic principles, and expanded logistics requirements prediction from single statistical principles to the new area of spatial and regional economics.
New encouraging developments in contact prediction: Assessment of the CASP11 results
Monastyrskyy, Bohdan
2015-10-01
© 2015 Wiley Periodicals, Inc. This article provides a report on the state-of-the-art in the prediction of intra-molecular residue-residue contacts in proteins based on the assessment of the predictions submitted to the CASP11 experiment. The assessment emphasis is placed on the accuracy in predicting long-range contacts. Twenty-nine groups participated in contact prediction in CASP11. At least eight of them used the recently developed evolutionary coupling techniques, with the top group (CONSIP2) reaching precision of 27% on target proteins that could not be modeled by homology. This result indicates a breakthrough in the development of methods based on the correlated mutation approach. Successful prediction of contacts was shown to be practically helpful in modeling three-dimensional structures; in particular target T0806 was modeled exceedingly well with accuracy not yet seen for ab initio targets of this size (>250 residues).
Complex versus simple models: ion-channel cardiac toxicity prediction.
Mistry, Hitesh B
2018-01-01
There is growing interest in applying detailed mathematical models of the heart for ion-channel related cardiac toxicity prediction. However, a debate exists as to whether such complex models are required. Here an assessment of the predictive performance of two established large-scale biophysical cardiac models and a simple linear model, Bnet, was conducted. Three ion-channel data-sets were extracted from the literature. Each compound was assigned a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data-set for each classification scheme was assessed via a leave-one-out cross validation. Overall the Bnet model performed equally as well as the leading cardiac models on two of the data-sets and outperformed both cardiac models on the third. These results highlight the importance of benchmarking complex versus simple models but also encourage the development of simple models.
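The leave-one-out cross-validation scheme used here is generic and easy to sketch; the 1-nearest-neighbour classifier below is a stand-in for illustration, not the Bnet model or the biophysical cardiac models.

```python
# Leave-one-out cross-validation: hold out each sample in turn, fit on
# the rest, and score the held-out prediction. The 1-NN "model" and the
# data are toy stand-ins for the sketch.

def loocv_accuracy(xs, ys, fit_predict):
    correct = 0
    for i in range(len(xs)):           # hold out one compound at a time
        train_x = xs[:i] + xs[i + 1:]
        train_y = ys[:i] + ys[i + 1:]
        if fit_predict(train_x, train_y, xs[i]) == ys[i]:
            correct += 1
    return correct / len(xs)

def one_nn(train_x, train_y, query):   # toy classifier for the sketch
    j = min(range(len(train_x)), key=lambda k: abs(train_x[k] - query))
    return train_y[j]

xs = [0.1, 0.2, 0.3, 2.1, 2.2, 2.3]
ys = ["low", "low", "low", "high", "high", "high"]
print(loocv_accuracy(xs, ys, one_nn))  # 1.0 on this separable toy data
```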
Complex versus simple models: ion-channel cardiac toxicity prediction
Directory of Open Access Journals (Sweden)
Hitesh B. Mistry
2018-02-01
Full Text Available There is growing interest in applying detailed mathematical models of the heart for ion-channel related cardiac toxicity prediction. However, a debate exists as to whether such complex models are required. Here an assessment of the predictive performance of two established large-scale biophysical cardiac models and a simple linear model, Bnet, was conducted. Three ion-channel data-sets were extracted from the literature. Each compound was assigned a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data-set for each classification scheme was assessed via a leave-one-out cross validation. Overall the Bnet model performed equally as well as the leading cardiac models on two of the data-sets and outperformed both cardiac models on the third. These results highlight the importance of benchmarking complex versus simple models but also encourage the development of simple models.
Comparison of Simple Versus Performance-Based Fall Prediction Models
Directory of Open Access Journals (Sweden)
Shekhar K. Gadkaree BS
2015-05-01
Full Text Available Objective: To compare the predictive ability of standard falls prediction models based on physical performance assessments with more parsimonious prediction models based on self-reported data. Design: We developed a series of fall prediction models progressing in complexity and compared area under the receiver operating characteristic curve (AUC) across models. Setting: National Health and Aging Trends Study (NHATS), which surveyed a nationally representative sample of Medicare enrollees (age ≥65) at baseline (Round 1: 2011-2012) and 1-year follow-up (Round 2: 2012-2013). Participants: In all, 6,056 community-dwelling individuals participated in Rounds 1 and 2 of NHATS. Measurements: Primary outcomes were 1-year incidence of “any fall” and “recurrent falls.” Prediction models were compared and validated in development and validation sets, respectively. Results: A prediction model that included demographic information, self-reported problems with balance and coordination, and previous fall history was the most parsimonious model that optimized AUC for both any fall (AUC = 0.69, 95% confidence interval [CI] = [0.67, 0.71]) and recurrent falls (AUC = 0.77, 95% CI = [0.74, 0.79]) in the development set. Physical performance testing provided marginal additional predictive value. Conclusion: A simple clinical prediction model that does not include physical performance testing could facilitate routine, widespread falls risk screening in the ambulatory care setting.
Predictive validation of an influenza spread model.
Directory of Open Access Journals (Sweden)
Ayaz Hyder
Full Text Available BACKGROUND: Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. METHODS AND FINDINGS: We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998-1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks in advance with reasonable reliability, depending on the method of forecasting (static or dynamic). CONCLUSIONS: Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve
Predictive Validation of an Influenza Spread Model
Hyder, Ayaz; Buckeridge, David L.; Leung, Brian
2013-01-01
Background Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. Methods and Findings We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998–1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks in advance with reasonable reliability, depending on the method of forecasting (static or dynamic). Conclusions Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive
Prediction of hourly solar radiation with multi-model framework
International Nuclear Information System (INIS)
Wu, Ji; Chan, Chee Keong
2013-01-01
Highlights:
• A novel approach to predict solar radiation through the use of clustering paradigms.
• Development of prediction models based on the intrinsic pattern observed in each cluster.
• Prediction based on proper clustering and selection of a model for the current time provides better results than other methods.
• Experiments were conducted on actual solar radiation data obtained from a weather station in Singapore.
- Abstract: In this paper, a novel multi-model prediction framework for the prediction of solar radiation is proposed. The framework starts with the assumption that there are several patterns embedded in the solar radiation series. To extract the underlying patterns, the solar radiation series is first segmented into smaller subsequences, and the subsequences are further grouped into different clusters. For each cluster, an appropriate prediction model is trained. A procedure for pattern identification is then developed to identify the pattern that fits the current period. Based on this pattern, the corresponding prediction model is applied to obtain the prediction value. The prediction results of the proposed framework are then compared to other techniques. It is shown that the proposed framework provides superior performance as compared to the others.
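The segment-cluster-train pipeline of the framework can be caricatured in a few lines; clustering windows by their mean level against a single threshold is a deliberate simplification of the paper's clustering step, and the data is synthetic.

```python
# Toy version of the multi-model framework: segment the series into
# windows, group windows into clusters (here just by mean level, a crude
# stand-in for real clustering), and train one predictor per cluster
# (here the cluster mean of the next value).

def make_windows(series, w):
    # (input window, next value) pairs
    return [(series[i:i + w], series[i + w]) for i in range(len(series) - w)]

def cluster_of(window, threshold):
    return "high" if sum(window) / len(window) >= threshold else "low"

def train(series, w):
    pairs = make_windows(series, w)
    threshold = sum(series) / len(series)
    sums, counts = {}, {}
    for window, nxt in pairs:          # per-cluster mean of the next value
        c = cluster_of(window, threshold)
        sums[c] = sums.get(c, 0.0) + nxt
        counts[c] = counts.get(c, 0) + 1
    models = {c: sums[c] / counts[c] for c in sums}
    return models, threshold

series = [1, 2, 1, 2, 1, 2, 9, 8, 9, 8, 9, 8]
models, threshold = train(series, w=2)
predict = lambda window: models[cluster_of(window, threshold)]
print(round(predict([9, 8]), 2), round(predict([1, 2]), 2))  # prints 8.4 3.0
```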
Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki
2012-01-01
The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events is predicted quite well by fixed slip and fixed recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.
Analyzing online sentiment to predict telephone poll results.
Fu, King-wa; Chan, Chee-hon
2013-09-01
The telephone survey is a common social science research method for capturing public opinion, for example, an individual's values or attitudes, or the government's approval rating. However, declining domestic landline usage, rising nonresponse rates, and response bias in interviewees' self-reported data pose methodological challenges to such an approach. Because of the labor cost of administration, a phone survey is often conducted on a biweekly or monthly basis, and therefore a daily reflection of public opinion is usually not available. Recently, online sentiment analysis of user-generated content has been deployed to predict public opinion and human behavior. However, its overall effectiveness remains uncertain. This study seeks to examine the temporal association between online sentiment reflected in social media content and phone survey poll results in Hong Kong. Specifically, it aims to find the extent to which online sentiment can predict phone survey results. Using autoregressive integrated moving average time-series analysis, this study suggested that online sentiment scores can lead phone survey results by about 8-15 days, and their correlation coefficients were about 0.16. The finding is significant for the study of social media in social science research, because it supports the conclusion that daily sentiment observed in social media content can serve as a leading predictor of phone survey results, as much as 2 weeks ahead of the monthly announcement of opinion polls. We also discuss the practical and theoretical implications of this study.
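The lead-lag question (at what lag does online sentiment best correlate with poll results?) reduces to computing a correlation at each candidate lag; the study's ARIMA analysis is more sophisticated than this sketch, and the series below are synthetic.

```python
# Pure-Python Pearson correlation at each candidate lag, to find how far
# a daily sentiment series leads a poll series. Data is synthetic: the
# poll is the sentiment delayed by 3 days plus a constant.

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def best_lead(sentiment, poll, max_lag):
    # correlate sentiment[t] with poll[t + lag] for lag = 1..max_lag
    scores = {}
    for lag in range(1, max_lag + 1):
        scores[lag] = pearson(sentiment[:-lag], poll[lag:])
    return max(scores, key=scores.get), scores

sentiment = [5, 7, 6, 9, 4, 8, 7, 10, 6, 9, 5, 8, 7, 6, 9, 8]
poll = [50] * 3 + [40 + s for s in sentiment[:-3]]
lag, scores = best_lead(sentiment, poll, max_lag=5)
print(lag)  # 3: sentiment leads the poll by three days here
```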
Corporate prediction models, ratios or regression analysis?
Bijnen, E.J.; Wijn, M.F.C.M.
1994-01-01
The models developed in the literature with respect to the prediction of a company's failure are based on ratios. It has been shown before that these models should be rejected on theoretical grounds. Our study of industrial companies in the Netherlands shows that the ratios which are used in
Predicting Protein Secondary Structure with Markov Models
DEFF Research Database (Denmark)
Fischer, Paul; Larsen, Simon; Thomsen, Claus
2004-01-01
we are considering here, is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained...... in the Markov model for this task. Classifications that are purely based on statistical models might not always be biologically meaningful. We present combinatorial methods to incorporate biological background knowledge to enhance the prediction performance....
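A per-class first-order Markov model of the kind described can be sketched as follows: train one transition model per class, then classify a sequence by which class assigns it the higher likelihood. The two-letter alphabet and training strings are synthetic stand-ins, not protein data.

```python
import math
from collections import defaultdict

# Toy per-class first-order Markov classifier: the idea of labelling a
# window of residues as helix/sheet/coil by which class's transition
# probabilities give it the highest likelihood. Training strings are
# synthetic stand-ins, not real protein data.

def train_markov(seqs, alphabet, alpha=1.0):
    counts = defaultdict(float)
    totals = defaultdict(float)
    for s in seqs:
        for x, y in zip(s, s[1:]):
            counts[(x, y)] += 1
            totals[x] += 1
    # Laplace-smoothed transition log-probabilities
    return {(x, y): math.log((counts[(x, y)] + alpha) /
                             (totals[x] + alpha * len(alphabet)))
            for x in alphabet for y in alphabet}

def log_likelihood(s, logp):
    return sum(logp[(x, y)] for x, y in zip(s, s[1:]))

alphabet = "AB"
models = {
    "helix": train_markov(["ABABABAB", "ABABAB"], alphabet),
    "coil":  train_markov(["AABBAABB", "AABB"], alphabet),
}
query = "ABABA"
best = max(models, key=lambda c: log_likelihood(query, models[c]))
print(best)  # helix: alternation matches the helix training strings
```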
Energy based prediction models for building acoustics
DEFF Research Database (Denmark)
Brunskog, Jonas
2012-01-01
In order to reach robust and simplified yet accurate prediction models, energy based principle are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA) as well as more elaborated...... principles as, e.g., wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy based prediction models are discussed and critically reviewed. Special attention is placed...... on underlying basic assumptions, such as diffuse fields, high modal overlap, resonant field being dominant, etc., and the consequences of these in terms of limitations in the theory and in the practical use of the models....
Comparative Study of Bankruptcy Prediction Models
Directory of Open Access Journals (Sweden)
Isye Arieshanti
2013-09-01
Full Text Available Early indication of bankruptcy is important for a company. If a company is aware of its potential bankruptcy, it can take preventive action in anticipation. To detect that potential, a company can utilize a bankruptcy prediction model, which can be built using machine learning methods. However, the choice of machine learning method should be made carefully, because the suitability of a model depends on the specific problem. Therefore, in this paper we perform a comparative study of several machine learning methods for bankruptcy prediction. Comparing the performance of several models based on machine learning methods (k-NN, fuzzy k-NN, SVM, Bagging Nearest Neighbour SVM, Multilayer Perceptron (MLP), and a hybrid of MLP + Multiple Linear Regression), the study shows that the fuzzy k-NN method achieves the best performance, with an accuracy of 77.5%
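As a rough illustration of the best-performing method, here is a minimal fuzzy k-NN classifier in the style of Keller et al., applied to made-up two-ratio firm data. The features, training points, and weighting constant are illustrative assumptions, not the paper's dataset or code.

```python
# Sketch of fuzzy k-NN for binary bankruptcy prediction: class membership
# degrees are inverse-distance-weighted votes of the k nearest neighbours.
import math

def fuzzy_knn(train, query, k=3, m=2):
    """train: list of (features, label) with labels 0 (healthy) / 1 (bankrupt).
    Returns membership degrees for the query, summing to 1."""
    nearest = sorted((math.dist(x, query), y) for x, y in train)[:k]
    memberships = {0: 0.0, 1: 0.0}
    for d, y in nearest:
        # fuzzifier m controls how strongly distance discounts a vote
        memberships[y] += 1.0 / (d ** (2 / (m - 1)) + 1e-9)
    total = sum(memberships.values())
    return {c: v / total for c, v in memberships.items()}

# Toy firms described by two (hypothetical) financial ratios:
train = [((0.9, 0.1), 0), ((0.8, 0.2), 0), ((0.2, 0.9), 1), ((0.1, 0.8), 1)]
print(fuzzy_knn(train, (0.15, 0.85)))  # memberships strongly favour class 1
```

Unlike crisp k-NN, the membership degrees give a graded risk signal rather than a hard label.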
Prediction Models for Dynamic Demand Response
Energy Technology Data Exchange (ETDEWEB)
Aman, Saima; Frincu, Marc; Chelmis, Charalampos; Noor, Muhammad; Simmhan, Yogesh; Prasanna, Viktor K.
2015-11-02
As Smart Grids move closer to dynamic curtailment programs, Demand Response (DR) events will become necessary not only on fixed time intervals and weekdays predetermined by static policies, but also during changing decision periods and weekends to react to real-time demand signals. Unique challenges arise in this context vis-a-vis demand prediction and curtailment estimation and the transformation of such tasks into an automated, efficient dynamic demand response (D^{2}R) process. While existing work has concentrated on increasing the accuracy of prediction models for DR, there is a lack of studies of prediction models for D^{2}R, which we address in this paper. Our first contribution is the formal definition of D^{2}R and the description of its challenges and requirements. Our second contribution is a feasibility analysis of very-short-term prediction of electricity consumption for D^{2}R over a diverse, large-scale dataset that includes both small residential customers and large buildings. Our third and major contribution is a set of insights into the predictability of electricity consumption in the context of D^{2}R. Specifically, we focus on prediction models that can operate at a very small data granularity (here 15-min intervals), for both weekdays and weekends - all conditions that characterize scenarios for D^{2}R. We find that short-term time series and simple averaging models used by Independent Service Operators and utilities achieve superior prediction accuracy. We also observe that workdays are more predictable than weekends and holidays, and that smaller customers have larger variation in consumption and are less predictable than larger buildings. Key implications of our findings are that better models are required for small customers and for non-workdays, both of which are critical for D^{2}R. Also, prediction models require just a few days' worth of data, indicating that small amounts of
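The simple averaging baseline that the abstract finds competitive can be sketched as below: each 15-min slot of the next day is predicted as the mean of the same slot over past days. The data layout and readings are illustrative assumptions, not the study's dataset.

```python
# Sketch of a same-slot averaging forecaster at 15-min granularity
# (96 slots per day); history is a flat list covering whole past days.
def averaging_forecast(history, slots_per_day=96):
    """Return a one-day-ahead forecast, one value per slot."""
    days = len(history) // slots_per_day
    forecast = []
    for slot in range(slots_per_day):
        vals = [history[d * slots_per_day + slot] for d in range(days)]
        forecast.append(sum(vals) / len(vals))
    return forecast

# Two toy "days" with 4 slots each (slots_per_day=4 just for brevity):
hist = [1.0, 2.0, 3.0, 4.0,
        3.0, 2.0, 1.0, 4.0]
print(averaging_forecast(hist, slots_per_day=4))  # → [2.0, 2.0, 2.0, 4.0]
```

In practice one would average over matching day types (workday vs. non-workday), since the paper finds those regimes behave differently.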
Poisson Mixture Regression Models for Heart Disease Prediction
Erol, Hamza
2016-01-01
Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant-variable mixture regression models. Results show that a two-component concomitant-variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model, due to its low Bayesian Information Criterion (BIC) value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction over all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease component-wise for the available clusters. It is deduced that heart disease prediction can be done effectively by identifying the major risks component-wise using a Poisson mixture regression model. PMID:27999611
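To make the model class concrete, here is a minimal EM fit of a plain two-component Poisson mixture. This sketches only the standard variant (no regression covariates, no concomitant variables, no zero inflation), and the event counts are made up for illustration.

```python
# Minimal EM for a two-component Poisson mixture: alternately compute
# component responsibilities (E-step) and re-estimate weights/rates (M-step).
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def fit_mixture(data, iters=100):
    lam = [min(data) + 0.5, max(data) + 0.5]  # crude initialisation
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each count
        resp = []
        for k in data:
            w = [pi[j] * poisson_pmf(k, lam[j]) for j in (0, 1)]
            s = sum(w)
            resp.append([wj / s for wj in w])
        # M-step: update mixing weights and Poisson rates
        for j in (0, 1):
            nj = sum(r[j] for r in resp)
            pi[j] = nj / len(data)
            lam[j] = sum(r[j] * k for r, k in zip(resp, data)) / nj
    return pi, lam

data = [0, 1, 0, 2, 1, 9, 10, 8, 11, 9]   # toy low-risk vs high-risk counts
pi, lam = fit_mixture(data)
print(sorted(lam))  # rates near the two group means, ~0.8 and ~9.4
```

The full models in the paper additionally make each rate a regression on risk factors, which is what enables component-wise risk identification.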
Impact of modellers' decisions on hydrological a priori predictions
Holländer, H. M.; Bormann, H.; Blume, T.; Buytaert, W.; Chirico, G. B.; Exbrayat, J.-F.; Gustafsson, D.; Hölzel, H.; Krauße, T.; Kraft, P.; Stoll, S.; Blöschl, G.; Flühler, H.
2014-06-01
In practice, the catchment hydrologist is often confronted with the task of predicting discharge without having the needed records for calibration. Here, we report the discharge predictions of 10 modellers - using the model of their choice - for the man-made Chicken Creek catchment (6 ha, northeast Germany, Gerwin et al., 2009b) and we analyse how their predictions improved over three steps, as information was added prior to each following step. The modellers predicted the catchment's hydrological response in its initial phase without having access to the observed records. They used conceptually different physically based models and their modelling experience differed largely. Hence, they encountered two problems: (i) simulating discharge for an ungauged catchment and (ii) using models that were developed for catchments which are not in a state of landscape transformation. The prediction exercise was organized in three steps: (1) for the first prediction the modellers received a basic data set describing the catchment to a degree somewhat more complete than usually available for a priori predictions of ungauged catchments; they did not obtain information on stream flow, soil moisture, or groundwater response and therefore had to guess the initial conditions; (2) before the second prediction they inspected the catchment on-site and discussed their first prediction attempt; (3) for their third prediction they were offered additional data by charging them pro forma with the costs for obtaining this additional information. Holländer et al. (2009) discussed the range of predictions obtained in step (1). Here, we detail the modellers' assumptions and decisions in accounting for the various processes. We document the prediction progress as well as the learning process resulting from the availability of added information. For the second and third steps, the progress in prediction quality is evaluated in relation to individual modelling experience and costs of
Predicting birth weight with conditionally linear transformation models.
Möst, Lisa; Schmid, Matthias; Faschingbauer, Florian; Hothorn, Torsten
2016-12-01
Low and high birth weight (BW) are important risk factors for neonatal morbidity and mortality. Gynecologists must therefore accurately predict BW before delivery. Most prediction formulas for BW are based on prenatal ultrasound measurements carried out within one week prior to birth. Although successfully used in clinical practice, these formulas focus on point predictions of BW but do not systematically quantify uncertainty of the predictions, i.e. they result in estimates of the conditional mean of BW but do not deliver prediction intervals. To overcome this problem, we introduce conditionally linear transformation models (CLTMs) to predict BW. Instead of focusing only on the conditional mean, CLTMs model the whole conditional distribution function of BW given prenatal ultrasound parameters. Consequently, the CLTM approach delivers both point predictions of BW and fetus-specific prediction intervals. Prediction intervals constitute an easy-to-interpret measure of prediction accuracy and allow identification of fetuses subject to high prediction uncertainty. Using a data set of 8712 deliveries at the Perinatal Centre at the University Clinic Erlangen (Germany), we analyzed variants of CLTMs and compared them to standard linear regression estimation techniques used in the past and to quantile regression approaches. The best-performing CLTM variant was competitive with quantile regression and linear regression approaches in terms of conditional coverage and average length of the prediction intervals. We propose that CLTMs be used because they are able to account for possible heteroscedasticity, kurtosis, and skewness of the distribution of BWs. © The Author(s) 2014.
Finite Unification: Theory, Models and Predictions
Heinemeyer, S; Zoupanos, G
2011-01-01
All-loop Finite Unified Theories (FUTs) are very interesting N=1 supersymmetric Grand Unified Theories (GUTs) realising an old field theory dream, and moreover have a remarkable predictive power due to the required reduction of couplings. The reduction of the dimensionless couplings in N=1 GUTs is achieved by searching for renormalization group invariant (RGI) relations among them holding beyond the unification scale. Finiteness results from the fact that there exist RGI relations among dimensional couplings that guarantee the vanishing of all beta-functions in certain N=1 GUTs even to all orders. Furthermore, developments in the soft supersymmetry breaking sector of N=1 GUTs and FUTs lead to exact RGI relations, i.e. reduction of couplings, in this dimensionful sector of the theory, too. Based on the above theoretical framework, phenomenologically consistent FUTs have been constructed. Here we review FUT models based on the SU(5) and SU(3)^3 gauge groups and their predictions. Of particular interest is the Hig...
Adding propensity scores to pure prediction models fails to improve predictive performance
Directory of Open Access Journals (Sweden)
Amy S. Nowacki
2013-08-01
Full Text Available Background. Propensity score usage seems to be growing in popularity, leading researchers to question the possible role of propensity scores in prediction modeling, despite the lack of a theoretical rationale. It is suspected that such requests are due to the lack of differentiation regarding the goals of predictive modeling versus causal inference modeling. Therefore, the purpose of this study is to formally examine the effect of propensity scores on predictive performance. Our hypothesis is that a multivariable regression model that adjusts for all covariates will perform as well as or better than those models utilizing propensity scores with respect to model discrimination and calibration. Methods. The most commonly encountered statistical scenarios for medical prediction (logistic and proportional hazards regression) were used to investigate this research question. Random cross-validation was performed 500 times to correct for optimism. The multivariable regression models adjusting for all covariates were compared with models that included adjustment for or weighting with the propensity scores. The methods were compared based on three predictive performance measures: (1) concordance indices; (2) Brier scores; and (3) calibration curves. Results. Multivariable models adjusting for all covariates had the highest average concordance index, the lowest average Brier score, and the best calibration. Propensity score adjustment and inverse probability weighting models without adjustment for all covariates performed worse than full models and failed to improve predictive performance with full covariate adjustment. Conclusion. Propensity score techniques did not improve prediction performance measures beyond multivariable adjustment. Propensity scores are not recommended if the analytical goal is pure prediction modeling.
Wind farm production prediction - The Zephyr model
Energy Technology Data Exchange (ETDEWEB)
Landberg, L. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark); Giebel, G. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark); Madsen, H. [IMM (DTU), Kgs. Lyngby (Denmark); Nielsen, T.S. [IMM (DTU), Kgs. Lyngby (Denmark); Joergensen, J.U. [Danish Meteorologisk Inst., Copenhagen (Denmark); Lauersen, L. [Danish Meteorologisk Inst., Copenhagen (Denmark); Toefting, J. [Elsam, Fredericia (DK); Christensen, H.S. [Eltra, Fredericia (Denmark); Bjerge, C. [SEAS, Haslev (Denmark)
2002-06-01
This report describes a project - funded by the Danish Ministry of Energy and the Environment - which developed a next-generation prediction system called Zephyr. The Zephyr system is a merger of two state-of-the-art prediction systems: Prediktor of Risoe National Laboratory and WPPT of IMM at the Danish Technical University. The numerical weather predictions were generated by DMI's HIRLAM model. Due to technical difficulties programming the system, only the computational core and a very simple version of the originally very complex system were developed. The project partners were: Risoe, DMU, DMI, Elsam, Eltra, Elkraft System, SEAS and E2. (au)
Model predictive controller design of hydrocracker reactors
GÖKÇE, Dila
2011-01-01
This study summarizes the design of a Model Predictive Controller (MPC) in Tüpraş, İzmit Refinery Hydrocracker Unit Reactors. Hydrocracking process, in which heavy vacuum gasoil is converted into lighter and valuable products at high temperature and pressure is described briefly. Controller design description, identification and modeling studies are examined and the model variables are presented. WABT (Weighted Average Bed Temperature) equalization and conversion increase are simulate...
Image-Based Visual Servoing for Manipulation Via Predictive Control – A Survey of Some Results
Directory of Open Access Journals (Sweden)
Corneliu Lazăr
2016-09-01
Full Text Available In this paper, a review of predictive control algorithms developed by the authors for visual servoing of robots in manipulation applications is presented. Using these algorithms, a predictive control framework was created for image-based visual servoing (IBVS) systems. Firstly, considering point features, in 2008 we introduced an internal model predictor based on the interaction matrix. Secondly, distinct from the set-point trajectory, we introduced in 2011 the reference trajectory using the concept from predictive control. Finally, by minimizing a sum of squares of predicted errors, the optimal input trajectory was obtained. The new concept of predictive control for IBVS systems was employed to develop a cascade structure for motion control of robot arms. Simulation results obtained with a simulator for predictive IBVS systems are also presented.
A burnout prediction model based around char morphology
Energy Technology Data Exchange (ETDEWEB)
T. Wu; E. Lester; M. Cloke [University of Nottingham, Nottingham (United Kingdom). Nottingham Energy and Fuel Centre
2005-07-01
Poor burnout in a coal-fired power plant has marked penalties in the form of reduced energy efficiency and elevated waste material that cannot be utilized. The prediction of coal combustion behaviour in a furnace is of great significance in providing valuable information not only for process optimization but also for coal buyers in the international market. Coal combustion models have been developed that can make predictions about burnout behaviour and burnout potential. Most of these kinetic models require standard parameters such as volatile content, particle size and assumed char porosity in order to make a burnout prediction. This paper presents a new model called the Char Burnout Model (ChB) that also uses detailed information about char morphology in its prediction. The model can use data input from one of two sources, both derived from image analysis techniques: the first from individual analysis and characterization of real char types using an automated program, the second from predicted char types based on data collected during the automated image analysis of coal particles. Modelling results were compared with a different carbon burnout kinetic model and with burnout data from re-firing the chars in a drop tube furnace operating at 1300°C and 5% oxygen across several residence times. The improved agreement between the ChB model and the DTF experimental data showed that the inclusion of char morphology in combustion models can improve model predictions. 27 refs., 4 figs., 4 tabs.
Multi-Model Ensemble Wake Vortex Prediction
Koerner, Stephan; Holzaepfel, Frank; Ahmad, Nash'at N.
2015-01-01
Several multi-model ensemble methods are investigated for predicting wake vortex transport and decay. This study is a joint effort between National Aeronautics and Space Administration and Deutsches Zentrum fuer Luft- und Raumfahrt to develop a multi-model ensemble capability using their wake models. An overview of different multi-model ensemble methods and their feasibility for wake applications is presented. The methods include Reliability Ensemble Averaging, Bayesian Model Averaging, and Monte Carlo Simulations. The methodologies are evaluated using data from wake vortex field experiments.
Risk terrain modeling predicts child maltreatment.
Daley, Dyann; Bachmann, Michael; Bachmann, Brittany A; Pedigo, Christian; Bui, Minh-Thuy; Coffman, Jamye
2016-12-01
As indicated by research on the long-term effects of adverse childhood experiences (ACEs), maltreatment has far-reaching consequences for affected children. Effective prevention measures have been elusive, partly due to difficulty in identifying vulnerable children before they are harmed. This study employs Risk Terrain Modeling (RTM), an analysis of the cumulative effect of environmental factors thought to be conducive to child maltreatment, to create a highly accurate prediction model for future substantiated child maltreatment cases in the City of Fort Worth, Texas. The model is superior to commonly used hotspot predictions and more beneficial in aiding prevention efforts in a number of ways: 1) it identifies the highest-risk areas for future instances of child maltreatment with improved precision and accuracy; 2) it aids the prioritization of risk-mitigating efforts by informing about the relative importance of the most significant contributing risk factors; 3) since predictions are modeled as a function of easily obtainable data, practitioners do not have to undergo the difficult process of obtaining official child maltreatment data to apply it; 4) the inclusion of a multitude of environmental risk factors creates a more robust model with higher predictive validity; and, 5) the model does not rely on a retrospective examination of past instances of child maltreatment, but adapts predictions to changing environmental conditions. The present study introduces and examines the predictive power of this new tool to aid prevention efforts seeking to improve the safety, health, and wellbeing of vulnerable children. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Evaluation of wave runup predictions from numerical and parametric models
Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.
2014-01-01
Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
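The weighted-average assimilation mentioned in the abstract, under the simplifying assumption of independent, unbiased predictors, reduces to inverse-variance weighting, which is the weighting that minimises the variance of the blend. The runup values and error variances below are illustrative, not from the study.

```python
# Sketch of inverse-variance blending of two runup predictors: the blended
# estimate's variance is always below the smaller input variance.
def assimilate(x1, var1, x2, var2):
    w1, w2 = 1.0 / var1, 1.0 / var2
    blend = (w1 * x1 + w2 * x2) / (w1 + w2)
    blend_var = 1.0 / (w1 + w2)
    return blend, blend_var

# Parameterized model vs. numerical simulation (metres, error variances):
runup, var = assimilate(2.0, 0.09, 2.6, 0.36)
print(round(runup, 2), round(var, 3))   # → 2.12 0.072
```

Note that 0.072 is below both 0.09 and 0.36, matching the abstract's observed reduction in prediction error variance.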
Preoperative prediction model of outcome after cholecystectomy for symptomatic gallstones
DEFF Research Database (Denmark)
Borly, L; Anderson, I B; Bardram, L
1999-01-01
and sonography evaluated gallbladder motility, gallstones, and gallbladder volume. Preoperative variables in patients with or without postcholecystectomy pain were compared statistically, and significant variables were combined in a logistic regression model to predict the postoperative outcome. RESULTS: Eighty...... and by the absence of 'agonizing' pain and of symptoms coinciding with pain (P model 15 of 18 predicted patients had postoperative pain (PVpos = 0.83). Of 62 patients predicted as having no pain postoperatively, 56 were pain-free (PVneg = 0.90). Overall accuracy...... was 89%. CONCLUSION: From this prospective study a model based on preoperative symptoms was developed to predict postcholecystectomy pain. Since intrastudy reclassification may give too optimistic results, the model should be validated in future studies....
Modeling for prediction of restrained shrinkage effect in concrete repair
International Nuclear Information System (INIS)
Yuan Yingshu; Li Guo; Cai Yue
2003-01-01
A general model of autogenous shrinkage caused by chemical reaction (chemical shrinkage) is developed by means of Arrhenius' law and a degree of chemical reaction. Models of tensile creep and relaxation modulus are built based on a viscoelastic, three-element model. Tests of free shrinkage and tensile creep were carried out to determine some coefficients in the models. Two-dimensional FEM analysis based on the models and other constitutive relations can predict the development of tensile strength and cracking. Three groups of patch-repaired beams were designed for analysis and testing. The prediction from the analysis agrees with the test results. The cracking mechanism after repair is discussed
Stage-specific predictive models for breast cancer survivability.
Kate, Rohit J; Nadig, Ramya
2017-01-01
Survivability rates vary widely among the various stages of breast cancer. Although machine learning models built in the past to predict breast cancer survivability were given stage as one of the features, they were not trained or evaluated separately for each stage. To investigate whether there are differences in the performance of machine learning models trained and evaluated across different stages, we used three different machine learning methods to build models that predict breast cancer survivability separately for each stage and compared them with the traditional joint models built for all the stages. We also evaluated the models separately for each stage and together for all the stages. Our results show that the most suitable model to predict survivability for a specific stage is the model trained for that particular stage. In our experiments, using additional examples of other stages during training did not help; in fact, it made performance worse in some cases. The most important features for predicting survivability were also found to be different for different stages. By evaluating the models separately on different stages we found that performance varied widely across them. We also demonstrate that evaluating predictive models for survivability on all the stages together, as was done in the past, is misleading because it overestimates performance. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Spiliopoulou, Athina; Nagy, Reka; Bermingham, Mairead L.; Huffman, Jennifer E.; Hayward, Caroline; Vitart, Veronique; Rudan, Igor; Campbell, Harry; Wright, Alan F.; Wilson, James F.; Pong-Wong, Ricardo; Agakov, Felix; Navarro, Pau; Haley, Chris S.
2015-01-01
We explore the prediction of individuals' phenotypes for complex traits using genomic data. We compare several widely used prediction models, including Ridge Regression, LASSO and Elastic Nets estimated from cohort data, and polygenic risk scores constructed using published summary statistics from genome-wide association meta-analyses (GWAMA). We evaluate the interplay between relatedness, trait architecture and optimal marker density, by predicting height, body mass index (BMI) and high-density lipoprotein level (HDL) in two data cohorts, originating from Croatia and Scotland. We empirically demonstrate that dense models are better when all genetic effects are small (height and BMI) and target individuals are related to the training samples, while sparse models predict better in unrelated individuals and when some effects have moderate size (HDL). For HDL sparse models achieved good across-cohort prediction, performing similarly to the GWAMA risk score and to models trained within the same cohort, which indicates that, for predicting traits with moderately sized effects, large sample sizes and familial structure become less important, though still potentially useful. Finally, we propose a novel ensemble of whole-genome predictors with GWAMA risk scores and demonstrate that the resulting meta-model achieves higher prediction accuracy than either model on its own. We conclude that although current genomic predictors are not accurate enough for diagnostic purposes, performance can be improved without requiring access to large-scale individual-level data. Our methodologically simple meta-model is a means of performing predictive meta-analysis for optimizing genomic predictions and can be easily extended to incorporate multiple population-level summary statistics or other domain knowledge. PMID:25918167
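A minimal numpy-only sketch of a dense (ridge) whole-genome predictor combined with an external score in a simple meta-model, echoing the ensemble idea above. The genotypes, effect sizes, the stand-in "risk score", and the 50/50 weighting are all illustrative assumptions, not the paper's data or method.

```python
# Simulated genotypes with many small effects (the regime where the paper
# finds dense models work best), a closed-form ridge fit, and a naive
# averaging meta-model with an external score.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 500
X = rng.integers(0, 3, size=(n, p)).astype(float)   # 0/1/2 allele counts
beta = rng.normal(0, 0.05, size=p)                  # many small effects
y = X @ beta + rng.normal(0, 1.0, size=n)

def ridge_fit(X, y, lam=10.0):
    """Closed-form ridge solution (X'X + lam*I)^-1 X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

b_hat = ridge_fit(X[:150], y[:150])                 # train on 150 samples
genomic_pred = X[150:] @ b_hat                      # predict the held-out 50
risk_score = y[150:] + rng.normal(0, 1.5, size=50)  # stand-in external score

meta_pred = 0.5 * genomic_pred + 0.5 * risk_score   # naive 50/50 meta-model
corr = float(np.corrcoef(meta_pred, y[150:])[0, 1])
print(round(corr, 2))
```

The paper's actual meta-model learns the combination weights rather than fixing them, and uses published GWAMA summary statistics rather than a simulated score.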
A prediction method based on wavelet transform and multiple models fusion for chaotic time series
International Nuclear Information System (INIS)
Zhongda, Tian; Shujiang, Li; Yanhong, Wang; Yi, Sha
2017-01-01
In order to improve the prediction accuracy of chaotic time series, a prediction method based on wavelet transform and multiple-model fusion is proposed. The chaotic time series is decomposed and reconstructed by wavelet transform, yielding approximation components and detail components. According to the different characteristics of each component, a least squares support vector machine (LSSVM) is used as the predictive model for the approximation components, with an improved free search algorithm utilized to optimize the predictive model parameters. An auto-regressive integrated moving average (ARIMA) model is used as the predictive model for the detail components. The predictions of the individual models are fused by the Gauss–Markov algorithm; the error variance of the fused result is smaller than that of any single model, so the prediction accuracy is improved. The method is evaluated on two typical chaotic time series, the Lorenz and Mackey–Glass time series. The simulation results show that the prediction method in this paper achieves better prediction accuracy.
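The decompose-model-fuse pipeline can be sketched with a one-level Haar transform. The per-component predictors below are trivial stand-ins for the paper's LSSVM and ARIMA models, and the series is made up; only the structure of the pipeline is being illustrated.

```python
# Sketch: split a series into approximation/detail via a one-level Haar
# transform, forecast each component separately, then reconstruct.
def haar_decompose(x):
    approx = [(a + b) / 2 for a, b in zip(x[::2], x[1::2])]
    detail = [(a - b) / 2 for a, b in zip(x[::2], x[1::2])]
    return approx, detail

def haar_reconstruct(approx, detail):
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out

def trend_forecast(series):        # stand-in for LSSVM on the approximation
    return series[-1] + (series[-1] - series[-2])

def mean_forecast(series):         # stand-in for ARIMA on the detail
    return sum(series) / len(series)

x = [1.0, 1.2, 2.0, 2.2, 3.0, 3.2, 4.0, 4.2]
approx, detail = haar_decompose(x)
# the transform is lossless: reconstruction recovers the input
assert all(abs(r - v) < 1e-9 for r, v in zip(haar_reconstruct(approx, detail), x))

# forecast each component, then map back to the signal domain
next_pair = haar_reconstruct([trend_forecast(approx)], [mean_forecast(detail)])
print([round(v, 2) for v in next_pair])  # → [5.0, 5.2]
```

The Gauss–Markov fusion step in the paper additionally weights component predictions by their error statistics rather than simply summing them.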
Global vegetation change predicted by the modified Budyko model
Energy Technology Data Exchange (ETDEWEB)
Monserud, R.A.; Tchebakova, N.M.; Leemans, R. (US Department of Agriculture, Moscow, ID (United States). Intermountain Research Station, Forest Service)
1993-09-01
A modified Budyko global vegetation model is used to predict changes in global vegetation patterns resulting from climate change (CO2 doubling). Vegetation patterns are predicted using a model based on a dryness index and potential evaporation determined by solving radiation balance equations. Climate change scenarios are derived from predictions from four General Circulation Models (GCMs) of the atmosphere (GFDL, GISS, OSU, and UKMO). All four GCM scenarios show similar trends in vegetation shifts and in areas that remain stable, although the UKMO scenario predicts greater warming than the others. Climate change maps produced by all four GCM scenarios show good agreement with the current climate vegetation map for the globe as a whole, although over half of the vegetation classes show only poor to fair agreement. The most stable areas are Desert and Ice/Polar Desert. Because most of the predicted warming is concentrated in the Boreal and Temperate zones, vegetation there is predicted to undergo the greatest change. Most vegetation classes in the Subtropics and Tropics are predicted to expand. Any shift in the Tropics favouring either Forest over Savanna, or vice versa, will be determined by the magnitude of the increased precipitation accompanying global warming. Although the model predicts equilibrium conditions to which many plant species cannot adjust (through migration or microevolution) in the 50-100 y needed for CO2 doubling, it is not clear if projected global warming will result in drastic or benign vegetation change. 72 refs., 3 figs., 3 tabs.
Modelling the predictive performance of credit scoring
Directory of Open Access Journals (Sweden)
Shi-Wei Shen
2013-07-01
Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A goodness-of-fit test demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI) and micro- and macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality of the study is that three models were developed to predict corporate firms' defaults based on different microeconomic and macroeconomic factors such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit measures and receiver operating characteristics during the examination of the robustness of the predictive power of these factors.
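A toy logistic-regression default model of the kind examined can be sketched as follows. The two features stand in for a credit index and a macroeconomic variable, and the firms, labels, and hyperparameters are all made up; the real study fits on 10 349 observations with richer covariates.

```python
# Sketch of a logistic default model fitted by plain stochastic gradient
# descent on made-up (credit index, GDP growth) -> default data.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logit(X, y, lr=0.1, epochs=2000):
    w = [0.0] * (len(X[0]) + 1)           # bias followed by feature weights
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            err = sigmoid(z) - yi          # gradient of the log-loss
            w[0] -= lr * err
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * err * xj
    return w

X = [(9, 3.0), (8, 2.5), (7, 2.0), (3, 0.5), (2, -1.0), (1, -0.5)]
y = [0, 0, 0, 1, 1, 1]                     # 1 = default
w = fit_logit(X, y)

def pd_score(x):
    """Predicted probability of default for a firm's feature vector."""
    return sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], x)))

print(pd_score((2, 0.0)) > 0.5, pd_score((9, 3.0)) < 0.5)  # → True True
```

The goodness-of-fit and ROC analyses in the paper would then be run on the `pd_score` outputs over a held-out sample.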
A neighborhood statistics model for predicting stream pathogen indicator levels.
Pandey, Pramod K; Pasternack, Gregory B; Majumder, Mahbubul; Soupir, Michelle L; Kaiser, Mark S
2015-03-01
Because elevated levels of water-borne Escherichia coli in streams are a leading cause of water quality impairments in the U.S., water-quality managers need tools for predicting aqueous E. coli levels. Presently, E. coli levels may be predicted using complex mechanistic models that have a high degree of unchecked uncertainty or simpler statistical models. To assess spatio-temporal patterns of instream E. coli levels, herein we measured E. coli, a pathogen indicator, at 16 sites (at four different times) within the Squaw Creek watershed, Iowa, and subsequently, the Markov Random Field model was exploited to develop a neighborhood statistics model for predicting instream E. coli levels. Two observed covariates, local water temperature (degrees Celsius) and mean cross-sectional depth (meters), were used as inputs to the model. Predictions of E. coli levels in the water column were compared with independent observational data collected from 16 in-stream locations. The results revealed that spatio-temporal averages of predicted and observed E. coli levels were extremely close. Approximately 66 % of individual predicted E. coli concentrations were within a factor of 2 of the observed values. In only one event, the difference between prediction and observation was beyond one order of magnitude. The mean of all predicted values at 16 locations was approximately 1 % higher than the mean of the observed values. The approach presented here will be useful while assessing instream contaminations such as pathogen/pathogen indicator levels at the watershed scale.
Predicting Market Impact Costs Using Nonparametric Machine Learning Models.
Directory of Open Access Journals (Sweden)
Saerom Park
Full Text Available Market impact cost is the most significant portion of implicit transaction costs, and reducing it can lower the overall transaction cost; it cannot, however, be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models (neural networks, Bayesian neural networks, Gaussian processes, and support vector regression) to predict market impact cost accurately and to provide a predictive model that is versatile in the number of input variables. We collected a large amount of real single-transaction data for the US stock market from the Bloomberg Terminal and generated three independent input variables. As a result, most of the nonparametric machine learning models outperformed a state-of-the-art parametric benchmark, the I-star model, on four error measures. Although these models have difficulty separating the permanent and temporary cost components directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance.
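As a self-contained illustration of why a nonparametric regressor can beat a misspecified parametric one on impact-like data, here is a toy comparison. The square-root impact curve, sample sizes, and noise level are assumptions, and k-nearest-neighbour regression stands in for the heavier models named above.

```python
import math
import random

random.seed(1)

def knn_predict(train, x, k=5):
    """Nonparametric: average the impacts of the k closest participation rates."""
    nearest = sorted(train, key=lambda t: abs(t[0] - x))[:k]
    return sum(y for _, y in nearest) / k

def fit_linear(train):
    """Parametric stand-in: ordinary least-squares line impact = a*rate + b."""
    n = len(train)
    mx = sum(x for x, _ in train) / n
    my = sum(y for _, y in train) / n
    a = (sum((x - mx) * (y - my) for x, y in train)
         / sum((x - mx) ** 2 for x, _ in train))
    return a, my - a * mx

def rmse(errs):
    return math.sqrt(sum(e * e for e in errs) / len(errs))

# synthetic impact: concave (square-root-like) in participation rate, plus noise
def sample(n):
    pts = []
    for _ in range(n):
        rate = random.random()
        pts.append((rate, 0.1 * math.sqrt(rate) + random.gauss(0, 0.005)))
    return pts

train, test = sample(300), sample(200)
a, b = fit_linear(train)
rmse_lin = rmse([a * x + b - y for x, y in test])
rmse_knn = rmse([knn_predict(train, x) - y for x, y in test])
```

Because the true curve is concave, the straight-line model carries systematic bias that the local averaging does not, so the nonparametric error comes out lower, which is the qualitative result the abstract reports.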
Predicting Market Impact Costs Using Nonparametric Machine Learning Models.
Park, Saerom; Lee, Jaewook; Son, Youngdoo
2016-01-01
Market impact cost is the most significant portion of implicit transaction costs, and reducing it can lower the overall transaction cost; it cannot, however, be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models (neural networks, Bayesian neural networks, Gaussian processes, and support vector regression) to predict market impact cost accurately and to provide a predictive model that is versatile in the number of input variables. We collected a large amount of real single-transaction data for the US stock market from the Bloomberg Terminal and generated three independent input variables. As a result, most of the nonparametric machine learning models outperformed a state-of-the-art parametric benchmark, the I-star model, on four error measures. Although these models have difficulty separating the permanent and temporary cost components directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance.
Predictive analytics can support the ACO model.
Bradley, Paul
2012-04-01
Predictive analytics can be used to rapidly spot hard-to-identify opportunities to better manage care--a key tool in accountable care. When considering analytics models, healthcare providers should: Make value-based care a priority and act on information from analytics models. Create a road map that includes achievable steps, rather than major endeavors. Set long-term expectations and recognize that the effectiveness of an analytics program takes time, unlike revenue cycle initiatives that may show a quick return.
Predictive performance models and multiple task performance
Wickens, Christopher D.; Larish, Inge; Contorer, Aaron
1989-01-01
Five models that predict how performance of multiple tasks will interact in complex task scenarios are discussed. The models are characterized by the assumptions they make about human operator divided attention. The different assumptions about attention are then empirically validated in a multitask helicopter flight simulation. It is concluded from this simulation that the most important assumption relates to the coding of the demand level of the different component tasks.
Model Predictive Control of Sewer Networks
DEFF Research Database (Denmark)
Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik
2016-01-01
The developments in solutions for management of urban drainage are of vital importance, as the amount of sewer water from urban areas continues to increase due to the increase of the world’s population and the change in the climate conditions. How a sewer network is structured, monitored and cont...... benchmark model. Due to the inherent constraints the applied approach is based on Model Predictive Control....
Distributed Model Predictive Control via Dual Decomposition
DEFF Research Database (Denmark)
Biegel, Benjamin; Stoustrup, Jakob; Andersen, Palle
2014-01-01
This chapter presents dual decomposition as a means to coordinate a number of subsystems coupled by state and input constraints. Each subsystem is equipped with a local model predictive controller while a centralized entity manages the subsystems via prices associated with the coupling constraints...
The effect of bathymetric filtering on nearshore process model results
Plant, N.G.; Edwards, K.L.; Kaihatu, J.M.; Veeramony, J.; Hsu, L.; Holland, K.T.
2009-01-01
Nearshore wave and flow model results are shown to exhibit a strong sensitivity to the resolution of the input bathymetry. In this analysis, bathymetric resolution was varied by applying smoothing filters to high-resolution survey data to produce a number of bathymetric grid surfaces. We demonstrate that the sensitivities of model-predicted wave height and flow to variations in bathymetric resolution had different characteristics. Wave height predictions were most sensitive to the resolution of cross-shore variability associated with the structure of nearshore sandbars. Flow predictions were most sensitive to the resolution of intermediate-scale alongshore variability associated with the prominent sandbar rhythmicity. Flow sensitivity increased in cases where a sandbar was closer to shore and shallower. Perhaps the most surprising implication of these results is that the interpolation and smoothing of bathymetric data could be optimized differently for the wave and flow models. We show that errors between observed and modeled flow and wave heights are well predicted by comparing model simulation results using progressively filtered bathymetry to results from the highest-resolution simulation. The damage done by oversmoothing or inadequate sampling can therefore be estimated using model simulations. We conclude that the ability to quantify prediction errors will be useful for supporting future data assimilation efforts that require this information.
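The core effect above, progressive smoothing attenuating bar-scale bathymetric variability and increasing the departure from the full-resolution surface, can be illustrated with a one-dimensional toy profile. The profile shape and filter widths below are assumptions, not the study's survey data.

```python
import math

def smooth(z, width):
    """Moving-average filter with a symmetric window, shrinking at the edges."""
    half = width // 2
    out = []
    for i in range(len(z)):
        lo, hi = max(0, i - half), min(len(z), i + half + 1)
        out.append(sum(z[lo:hi]) / (hi - lo))
    return out

def rms_diff(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

# toy cross-shore depth profile (m): planar beach minus a Gaussian sandbar
x = [i * 2.0 for i in range(300)]   # cross-shore distance, m
depth = [0.01 * xi + 2.0 - 1.2 * math.exp(-((xi - 300.0) / 40.0) ** 2)
         for xi in x]

# progressively coarser "bathymetric resolution" via wider smoothing windows
errors = [rms_diff(smooth(depth, w), depth) for w in (5, 15, 31)]
```

The RMS departure from the unfiltered profile grows monotonically with filter width, which is the mechanism behind using progressively filtered runs to bound the damage done by oversmoothing.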
A burnout prediction model based around char morphology
Energy Technology Data Exchange (ETDEWEB)
Tao Wu; Edward Lester; Michael Cloke [University of Nottingham, Nottingham (United Kingdom). School of Chemical, Environmental and Mining Engineering
2006-05-15
Several combustion models have been developed that can make predictions about coal burnout and burnout potential. Most of these kinetic models require standard parameters such as volatile content and particle size to make a burnout prediction. This article presents a new model called the char burnout (ChB) model, which also uses detailed information about char morphology in its prediction. The input data to the model is based on information derived from two different image analysis techniques. One technique generates characterization data from real char samples, and the other predicts char types based on characterization data from image analysis of coal particles. The pyrolyzed chars in this study were created in a drop tube furnace operating at 1300°C, 200 ms, and 1% oxygen. Modeling results were compared with a different carbon burnout kinetic model as well as the actual burnout data from refiring the same chars in a drop tube furnace operating at 1300°C, 5% oxygen, and residence times of 200, 400, and 600 ms. The good agreement between the ChB model and the experimental data indicates that the inclusion of char morphology in combustion models could well improve model predictions. 38 refs., 5 figs., 6 tabs.
Uncertainties in model-based outcome predictions for treatment planning
International Nuclear Information System (INIS)
Deasy, Joseph O.; Chao, K.S. Clifford; Markman, Jerry
2001-01-01
Purpose: Model-based treatment-plan-specific outcome predictions (such as normal tissue complication probability [NTCP] or the relative reduction in salivary function) are typically presented without reference to underlying uncertainties. We provide a method to assess the reliability of treatment-plan-specific dose-volume outcome model predictions. Methods and Materials: A practical method is proposed for evaluating model prediction based on the original input data together with bootstrap-based estimates of parameter uncertainties. The general framework is applicable to continuous variable predictions (e.g., prediction of long-term salivary function) and dichotomous variable predictions (e.g., tumor control probability [TCP] or NTCP). Using bootstrap resampling, a histogram of the likelihood of alternative parameter values is generated. For a given patient and treatment plan we generate a histogram of alternative model results by computing the model predicted outcome for each parameter set in the bootstrap list. Residual uncertainty ('noise') is accounted for by adding a random component to the computed outcome values. The residual noise distribution is estimated from the original fit between model predictions and patient data. Results: The method is demonstrated using a continuous-endpoint model to predict long-term salivary function for head-and-neck cancer patients. Histograms represent the probabilities for the level of posttreatment salivary function based on the input clinical data, the salivary function model, and the three-dimensional dose distribution. For some patients there is significant uncertainty in the prediction of xerostomia, whereas for other patients the predictions are expected to be more reliable. In contrast, TCP and NTCP endpoints are dichotomous, and parameter uncertainties should be folded directly into the estimated probabilities, thereby improving the accuracy of the estimates. Using bootstrap parameter estimates, competing treatment
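The bootstrap procedure described above can be sketched for a simple continuous endpoint. The linear dose-response form, cohort size, and noise level below are illustrative assumptions, not the authors' salivary-function model.

```python
import random
import statistics

random.seed(2)

def fit_line(pairs):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    a = (sum((x - mx) * (y - my) for x, y in pairs)
         / sum((x - mx) ** 2 for x, _ in pairs))
    return a, my - a * mx

# synthetic cohort: mean dose (Gy) vs relative post-treatment function
doses = [random.uniform(10, 60) for _ in range(80)]
observed = [1.0 - 0.012 * d + random.gauss(0, 0.08) for d in doses]
pairs = list(zip(doses, observed))

a, b = fit_line(pairs)
residual_sd = statistics.pstdev([y - (a * x + b) for x, y in pairs])

# bootstrap: refit on resampled cohorts to get alternative parameter sets
boot_params = [fit_line([random.choice(pairs) for _ in pairs])
               for _ in range(500)]

# histogram of plan-specific predictions, with residual noise folded in
dose_new = 35.0
predictions = [ai * dose_new + bi + random.gauss(0, residual_sd)
               for ai, bi in boot_params]
mean_pred = statistics.mean(predictions)
spread = statistics.pstdev(predictions)
```

The resulting `predictions` list plays the role of the histogram of alternative model results: its mean recovers the plan-specific estimate, while its spread combines parameter uncertainty with the residual noise, as the method prescribes.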
FPGA implementation of predictive degradation model for engine oil lifetime
Idros, M. F. M.; Razak, A. H. A.; Junid, S. A. M. Al; Suliman, S. I.; Halim, A. K.
2018-03-01
This paper presents the implementation of a linear regression model for degradation prediction on Register Transfer Logic (RTL) using Quartus II. A stationary model was identified in the degradation trend of the engine oil of a vehicle using a time series method. For the RTL implementation, the degradation model is written in Verilog HDL and the input data are sampled at fixed times. A clock divider was designed to support the timing sequence of the input data. For every five data points, a regression analysis is applied to determine the slope variation and compute the prediction. Only negative slope values are considered for prediction purposes, which reduces the number of logic gates required. The least squares method is used to obtain the best linear model based on the mean values of the time series data. The coded algorithm has been implemented on an FPGA for validation purposes. The result shows the predicted time to change the engine oil.
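The five-sample least-squares step can be sketched in software before committing it to Verilog. The oil-quality readings and the end-of-life threshold below are made-up values for illustration only.

```python
def window_slope(ys):
    """Least-squares slope of equally spaced samples (x = 0, 1, ..., n-1)."""
    n = len(ys)
    mx = (n - 1) / 2.0
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in enumerate(ys))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den

# simulated oil-quality index sampled at a fixed interval: stable, then degrading
readings = [100.0, 100.2, 99.9, 100.1, 100.0, 99.0, 97.8, 96.9, 95.7, 94.8]

# slope of every window of five consecutive samples
slopes = [window_slope(readings[i:i + 5]) for i in range(len(readings) - 4)]

# as in the RTL design, only a negative slope triggers the prediction path
latest = slopes[-1]
threshold = 90.0  # hypothetical end-of-life quality level
if latest < 0:
    samples_remaining = (threshold - readings[-1]) / latest
```

Restricting the prediction path to negative slopes is what lets the hardware drop the logic that would otherwise handle the non-degrading case, mirroring the gate-count saving noted in the abstract.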
Numerical Modelling and Prediction of Erosion Induced by Hydrodynamic Cavitation
Peters, A.; Lantermann, U.; el Moctar, O.
2015-12-01
The present work aims to predict cavitation erosion using a numerical flow solver together with a newly developed erosion model. The erosion model is based on the hypothesis that collapses of single cavitation bubbles near solid boundaries form high-velocity microjets, which cause sonic impacts with high pressure amplitudes that damage the surface. The erosion model uses information from a numerical Euler-Euler flow simulation to predict erosion-sensitive areas and to assess the erosion aggressiveness of the flow. The obtained numerical results were compared to experimental results from tests of an axisymmetric nozzle.
An Intelligent Model for Stock Market Prediction
Directory of Open Access Journals (Sweden)
Ibrahim M. Hamed
2012-08-01
Full Text Available This paper presents an intelligent model for stock market signal prediction using Multi-Layer Perceptron (MLP) Artificial Neural Networks (ANN). Blind source separation, a technique from signal processing, is integrated with the learning phase of the constructed baseline MLP ANN to overcome the problems of prediction accuracy and lack of generalization. Kullback-Leibler Divergence (KLD) is used as the learning algorithm because it converges quickly and provides generalization in the learning mechanism. Both the accuracy and the efficiency of the proposed model were confirmed on Microsoft stock, from the Wall Street market, and on various data sets from different sectors of the Egyptian stock market. In addition, sensitivity analysis was conducted on the various parameters of the model to ensure coverage of the generalization issue. Finally, statistical significance was examined using an ANOVA test.
Predictive Models, How good are they?
DEFF Research Database (Denmark)
Kasch, Helge
The WAD grading system has been used for more than 20 years by now. It has shown long-term viability, but with strengths and limitations. New bio-psychosocial assessment of the acute whiplash injured subject may provide better prediction of long-term disability and pain. Furthermore, the emerging......-up. It is important to obtain prospective identification of the relevant risk underreported disability could, if we were able to expose these hidden “risk-factors” during our consultations, provide us with better predictive models. New data from large clinical studies will present exciting new genetic risk markers...
Consensus models to predict endocrine disruption for all ...
Humans are potentially exposed to tens of thousands of man-made chemicals in the environment. It is well known that some environmental chemicals mimic natural hormones and thus have the potential to be endocrine disruptors. Most of these environmental chemicals have never been tested for their ability to disrupt the endocrine system, in particular, their ability to interact with the estrogen receptor. EPA needs tools to prioritize thousands of chemicals, for instance in the Endocrine Disruptor Screening Program (EDSP). The Collaborative Estrogen Receptor Activity Prediction Project (CERAPP) was intended to be a demonstration of the use of predictive computational models on HTS data, including ToxCast and Tox21 assays, to prioritize a large chemical universe of 32464 unique structures for one specific molecular target: the estrogen receptor. CERAPP combined multiple computational models for prediction of estrogen receptor activity, and used the predicted results to build a unique consensus model. Models were developed in collaboration between 17 groups in the U.S. and Europe and applied to predict the common set of chemicals. Structure-based techniques such as docking and several QSAR modeling approaches were employed, mostly using a common training set of 1677 compounds provided by U.S. EPA, to build a total of 42 classification models and 8 regression models for binding, agonist and antagonist activity. All predictions were evaluated on ToxCast data and on an exte
Predictive ability of broiler production models | Ogundu | Animal ...
African Journals Online (AJOL)
The weekly body weight measurements of a growing strain of Ross broilers were used to compare the ability of three mathematical models (multiple linear, quadratic and exponential) to predict 8-week body weight from early body measurements at weeks I, II, III, IV, V, VI and VII. The results suggest that the three models ...
Validation of a multi-objective, predictive urban traffic model
Wilmink, I.R.; Haak, P. van den; Woldeab, Z.; Vreeswijk, J.
2013-01-01
This paper describes the results of the verification and validation of the ecoStrategic Model, which was developed, implemented and tested in the eCoMove project. The model uses real-time and historical traffic information to determine the current, predicted and desired state of traffic in a
An analysis of seasonal predictability in coupled model forecasts
Energy Technology Data Exchange (ETDEWEB)
Peng, P.; Wang, W. [NOAA, Climate Prediction Center, Washington, DC (United States); Kumar, A. [NOAA, Climate Prediction Center, Washington, DC (United States); NCEP/NWS/NOAA, Climate Prediction Center, Camp Springs, MD (United States)
2011-02-15
In the recent decade, operational seasonal prediction systems based on initialized coupled models have been developed. An analysis of how the predictability of seasonal means in the initialized coupled predictions evolves with lead-time is presented. Because of the short lead-time, such an analysis of the temporal behavior of seasonal predictability involves a mix of predictability of both the first and the second kind. The analysis focuses on the lead-time dependence of the ensemble mean variance and the forecast spread. Further, the analysis is for a fixed target season of December-January-February, and covers sea surface temperature, rainfall, and 200-mb height. It is based on a large set of hindcasts from an initialized coupled seasonal prediction system. Various aspects of predictability of the first and the second kind are highlighted for variables with slow (for example, SST) and fast (for example, atmospheric) adjustment time scales. An additional focus of the analysis is how the predictability in the initialized coupled seasonal predictions compares with estimates based on the AMIP simulations. The results indicate that differences in the set-up of AMIP simulations and coupled predictions, for example in the representation of air-sea interactions and in the evolution of forecast spread from initial conditions, do not change the fundamental conclusions about seasonal predictability. A discussion of the analysis presented herein, and its implications for the use of AMIP simulations for climate attribution and for time-slice experiments to provide regional information, is also included. (orig.)
Using Pareto points for model identification in predictive toxicology
2013-01-01
Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design or food protection to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature associated with the best practice of model generation and data integration but management and automated identification of relevant models from available collections of models is still an open problem. Currently, the decision on which model should be used for a new chemical compound is left to users. This paper intends to initiate the discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show a great potential for automated model identification methods in predictive toxicology. PMID:23517649
Hybrid Corporate Performance Prediction Model Considering Technical Capability
Directory of Open Access Journals (Sweden)
Joonhyuck Lee
2016-07-01
Full Text Available Many studies have tried to predict corporate performance and stock prices to enhance investment profitability using qualitative approaches such as the Delphi method. However, developments in data processing technology and machine-learning algorithms have led to efforts to develop quantitative prediction models in various managerial subject areas. We propose a quantitative corporate performance prediction model that applies the support vector regression (SVR) algorithm, which mitigates the overfitting of training data and can be applied to regression problems. The proposed model optimizes the SVR training parameters based on the training data, using a genetic algorithm, to achieve sustainable predictability in changeable markets and managerial environments. Technology-intensive companies represent an increasing share of the total economy. The performance and stock prices of these companies are affected by both their financial standing and their technological capabilities. Therefore, we apply both financial indicators and technical indicators to establish the proposed prediction model. Here, we use time series data, including financial, patent, and corporate performance information, of 44 electronics and IT companies. Then, we predict the performance of these companies as an empirical verification of the prediction performance of the proposed model.
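A compact sketch of the "tune a regressor's hyperparameter with a genetic algorithm" idea follows. To keep the example short, Nadaraya-Watson kernel regression stands in for SVR, and the fitness function, mutation scheme, and synthetic series are all assumptions rather than the authors' setup.

```python
import math
import random

random.seed(3)

def kernel_predict(train, x, h):
    """Nadaraya-Watson regression with a Gaussian kernel of bandwidth h."""
    weights = [(math.exp(-0.5 * ((x - xi) / h) ** 2), yi) for xi, yi in train]
    total = sum(w for w, _ in weights)
    return sum(w * yi for w, yi in weights) / total if total > 0 else 0.0

def val_error(train, val, h):
    """Mean squared validation error used as the (negated) GA fitness."""
    return sum((kernel_predict(train, x, h) - y) ** 2 for x, y in val) / len(val)

# synthetic "performance" series: smooth signal plus noise
points = [(x, math.sin(3 * x) + random.gauss(0, 0.2))
          for x in (random.random() for _ in range(120))]
train, val = points[:80], points[80:]

# genetic algorithm over the bandwidth hyperparameter, with elitism
population = [random.uniform(0.01, 1.0) for _ in range(12)]
gen0_best = min(val_error(train, val, h) for h in population)
for _ in range(15):
    scored = sorted(population, key=lambda h: val_error(train, val, h))
    elite = scored[:4]                       # elitism: keep the best candidates
    children = [min(1.0, max(0.01, h + random.gauss(0, 0.05)))
                for h in elite for _ in range(2)]
    population = elite + children
best_h = min(population, key=lambda h: val_error(train, val, h))
best_err = val_error(train, val, best_h)
```

Because the elite survive every generation, the best validation error can only improve over the initial random population, which is the property the assertion below checks.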
Prediction models : the right tool for the right problem
Kappen, Teus H.; Peelen, Linda M.
2016-01-01
PURPOSE OF REVIEW: Perioperative prediction models can help to improve personalized patient care by providing individual risk predictions to both patients and providers. However, the scientific literature on prediction model development and validation can be quite technical and challenging to
Neuro-fuzzy modeling in bankruptcy prediction
Directory of Open Access Journals (Sweden)
Vlachos D.
2003-01-01
Full Text Available For the past 30 years the problem of bankruptcy prediction has been thoroughly studied. From the paper of Altman in 1968 to the recent papers of the '90s, the progress in prediction accuracy was not satisfactory. This paper investigates an alternative modeling of the system (the firm), combining neural networks and fuzzy controllers, i.e. using neuro-fuzzy models. Classical modeling is based on mathematical models that describe the behavior of the firm under consideration. The main idea of fuzzy control, on the other hand, is to build a model of a human control expert who is capable of controlling the process without thinking in terms of a mathematical model. This control expert specifies his control actions in the form of linguistic rules. These control rules are translated into the framework of fuzzy set theory, providing a calculus which can simulate the behavior of the control expert and enhance its performance. The accuracy of the model is studied using datasets from previous research papers.
Predictive Models for Carcinogenicity and Mutagenicity ...
Mutagenicity and carcinogenicity are endpoints of major environmental and regulatory concern. These endpoints are also important targets for development of alternative methods for screening and prediction due to the large number of chemicals of potential concern and the tremendous cost (in time, money, animals) of rodent carcinogenicity bioassays. Both mutagenicity and carcinogenicity involve complex, cellular processes that are only partially understood. Advances in technologies and generation of new data will permit a much deeper understanding. In silico methods for predicting mutagenicity and rodent carcinogenicity based on chemical structural features, along with current mutagenicity and carcinogenicity data sets, have performed well for local prediction (i.e., within specific chemical classes), but are less successful for global prediction (i.e., for a broad range of chemicals). The predictivity of in silico methods can be improved by improving the quality of the data base and endpoints used for modelling. In particular, in vitro assays for clastogenicity need to be improved to reduce false positives (relative to rodent carcinogenicity) and to detect compounds that do not interact directly with DNA or have epigenetic activities. New assays emerging to complement or replace some of the standard assays include VitotoxTM, GreenScreenGC, and RadarScreen. The needs of industry and regulators to assess thousands of compounds necessitate the development of high-t
Sahle, Berhe W; Owen, Alice J; Chin, Ken Lee; Reid, Christopher M
2017-09-01
Numerous models predicting the risk of incident heart failure (HF) have been developed; however, evidence of their methodological rigor and reporting remains unclear. This study critically appraises the methods underpinning incident HF risk prediction models. EMBASE and PubMed were searched for articles published between 1990 and June 2016 that reported at least 1 multivariable model for prediction of HF. Model development information, including study design, variable coding, missing data, and predictor selection, was extracted. Nineteen studies reporting 40 risk prediction models were included. Existing models have acceptable discriminative ability (C-statistics > 0.70), although only 6 models were externally validated. Candidate variable selection was based on statistical significance from a univariate screening in 11 models, whereas it was unclear in 12 models. Continuous predictors were retained in 16 models, whereas it was unclear how continuous variables were handled in 16 models. Missing values were excluded in 19 of 23 models that reported missing data, and the number of events per variable was models. Only 2 models presented recommended regression equations. There was significant heterogeneity in discriminative ability of models with respect to age (P prediction models that had sufficient discriminative ability, although few are externally validated. Methods not recommended for the conduct and reporting of risk prediction modeling were frequently used, and resulting algorithms should be applied with caution. Copyright © 2017 Elsevier Inc. All rights reserved.
State-space prediction model for chaotic time series
Alparslan, A. K.; Sayar, M.; Atilgan, A. R.
1998-08-01
A simple method for predicting the continuation of scalar chaotic time series ahead in time is proposed. The false nearest neighbors technique in connection with the time-delayed embedding is employed so as to reconstruct the state space. A local forecasting model based upon the time evolution of the topological neighboring in the reconstructed phase space is suggested. A moving root-mean-square error is utilized in order to monitor the error along the prediction horizon. The model is tested for the convection amplitude of the Lorenz model. The results indicate that for approximately 100 cycles of the training data, the prediction follows the actual continuation very closely for about six cycles. The proposed model, like other state-space forecasting models, captures the long-term behavior of the system due to the use of spatial neighbors in the state space.
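The two ingredients above, time-delay embedding plus prediction from topological neighbours, can be sketched with a scalar chaotic series. Here the logistic map stands in for the Lorenz convection amplitude, and the embedding dimension, delay, and neighbour count are assumptions.

```python
def embed(series, dim, tau):
    """Time-delay embedding: vectors (s[t], s[t-tau], ..., s[t-(dim-1)*tau])."""
    start = (dim - 1) * tau
    return [tuple(series[t - k * tau] for k in range(dim))
            for t in range(start, len(series))]

def forecast_next(series, dim=3, tau=1, k=4):
    """Average the successors of the k embedded states nearest the current one."""
    vectors = embed(series, dim, tau)
    current = vectors[-1]
    start = (dim - 1) * tau
    candidates = []
    for i, v in enumerate(vectors[:-1]):     # every state with a known successor
        t = start + i                        # index of v's last sample in series
        dist = sum((a - b) ** 2 for a, b in zip(v, current))
        candidates.append((dist, series[t + 1]))
    candidates.sort()
    return sum(nxt for _, nxt in candidates[:k]) / k

# logistic map in its chaotic regime as a stand-in for a scalar chaotic signal
xs = [0.3]
for _ in range(499):
    xs.append(3.9 * xs[-1] * (1.0 - xs[-1]))

prediction = forecast_next(xs[:-1])   # one-step-ahead forecast
actual = xs[-1]
error = abs(prediction - actual)
```

With a few hundred points on the attractor, the nearest embedded states have successors close to the true continuation, so the one-step error stays small relative to the signal's unit range.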
An intermittency model for predicting roughness induced transition
Ge, Xuan; Durbin, Paul
2014-11-01
An extended model for roughness-induced transition is proposed based on an intermittency transport equation for RANS modeling formulated in local variables. To predict roughness effects in the fully turbulent boundary layer, published boundary conditions for k and ω are used, which depend on the equivalent sand grain roughness height, and account for the effective displacement of wall distance origin. Similarly in our approach, wall distance in the transition model for smooth surfaces is modified by an effective origin, which depends on roughness. Flat plate test cases are computed to show that the proposed model is able to predict the transition onset in agreement with a data correlation of transition location versus roughness height, Reynolds number, and inlet turbulence intensity. Experimental data for a turbine cascade are compared with the predicted results to validate the applicability of the proposed model. Supported by NSF Award Number 1228195.
Driver's mental workload prediction model based on physiological indices.
Yan, Shengyuan; Tran, Cong Chi; Wei, Yingying; Habiyaremye, Jean Luc
2017-09-15
Developing an early warning model to predict the driver's mental workload (MWL) is critical and helpful, especially for new or less experienced drivers. The present study aims to investigate the correlation between new drivers' MWL and their work performance, in terms of the number of errors. Additionally, the group method of data handling is used to establish the driver's MWL predictive model based on a subjective rating (NASA task load index [NASA-TLX]) and six physiological indices. The results indicate that the NASA-TLX and the number of errors are positively correlated, and the predictive model shows the validity of the proposed approach with an R² value of 0.745. The proposed model is expected to provide new drivers with a reference value for their MWL from the physiological indices, and driving lesson plans can be proposed to sustain an appropriate MWL as well as improve the driver's work performance.
Modeling, Prediction, and Control of Heating Temperature for Tube Billet
Directory of Open Access Journals (Sweden)
Yachun Mao
2015-01-01
Full Text Available Annular furnaces have multivariate, nonlinear, large-time-lag, and cross-coupling characteristics. The prediction and control of the exit temperature of a tube billet are important but difficult. We establish a prediction model for the final temperature of a tube billet through the OS-ELM-DRPLS method. We address the complex production characteristics, integrate the advantages of the PLS and ELM algorithms in establishing linear and nonlinear models, and account for model updating and data lag. Based on the proposed model, we design a prediction control algorithm for tube billet temperature. The algorithm is validated using practical production data from Baosteel Co., Ltd. Results show that the model achieves the precision required in industrial applications. The temperature of the tube billet can be controlled within the required temperature range through a compensation control method.
Baryogenesis model predicting antimatter in the Universe
International Nuclear Information System (INIS)
Kirilova, D.
2003-01-01
Cosmic ray and gamma-ray data do not rule out antimatter domains in the Universe, separated at distances bigger than 10 Mpc from us. Hence, it is interesting to analyze the possible generation of vast antimatter structures during the early Universe evolution. We discuss a SUSY-condensate baryogenesis model, predicting large separated regions of matter and antimatter. The model provides generation of the small locally observed baryon asymmetry for natural initial conditions; it predicts vast antimatter domains, separated from the matter ones by baryonically empty voids. The characteristic scale of the antimatter regions and their distance from the matter ones are in accordance with observational constraints from cosmic ray, gamma-ray and cosmic microwave background anisotropy data
Acoustic results of the Boeing model 360 whirl tower test
Watts, Michael E.; Jordan, David
1990-09-01
An evaluation is presented for whirl tower test results of the Model 360 helicopter's advanced, high-performance four-bladed composite rotor system intended to facilitate over-200-knot flight. During these performance measurements, acoustic data were acquired by seven microphones. A comparison of whirl-tower tests with theory indicates that theoretical prediction accuracy varies with both microphone position and the inclusion of ground reflection. Prediction errors varied from 0 to 40 percent of the measured signal-to-peak amplitude.
Finding Furfural Hydrogenation Catalysts via Predictive Modelling
Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi
2010-01-01
Abstract We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes were synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre t...
Predictive Modeling in Actinide Chemistry and Catalysis
Energy Technology Data Exchange (ETDEWEB)
Yang, Ping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-05-16
These are slides from a presentation on predictive modeling in actinide chemistry and catalysis. The following topics are covered in these slides: Structures, bonding, and reactivity (bonding can be quantified by optical probes and theory, and electronic structures and reaction mechanisms of actinide complexes); Magnetic resonance properties (transition metal catalysts with multi-nuclear centers, and NMR/EPR parameters); Moving to more complex systems (surface chemistry of nanomaterials, and interactions of ligands with nanoparticles); Path forward and conclusions.
Predicting soil acidification trends at Plynlimon using the SAFE model
Directory of Open Access Journals (Sweden)
B. Reynolds
1997-01-01
Full Text Available The SAFE model has been applied to an acid grassland site, located on base-poor stagnopodzol soils derived from Lower Palaeozoic greywackes. The model predicts that acidification of the soil has occurred in response to increased acid deposition following the industrial revolution. Limited recovery is predicted following the decline in sulphur deposition during the mid to late 1970s. Reducing excess sulphur and NOx deposition in 1998 to 40% and 70% of 1980 levels results in further recovery, but soil chemical conditions (base saturation, soil water pH and ANC) do not return to the values predicted for pre-industrial times. The SAFE model predicts that critical loads (expressed in terms of the (Ca+Mg+K):Al_crit ratio) for six vegetation species found in acid grassland communities are not exceeded despite the increase in deposited acidity following the industrial revolution. The relative growth response of selected vegetation species characteristic of acid grassland swards has been predicted using a damage function linking growth to the soil solution base cation to aluminium ratio. The results show that very small growth reductions can be expected for 'acid tolerant' plants growing in acid upland soils. For more sensitive species such as Holcus lanatus, SAFE predicts that growth would have been reduced by about 20% between 1951 and 1983, when acid inputs were greatest. Recovery to c. 90% of normal growth (under laboratory conditions) is predicted as acidic inputs decline.
Breast cancer risks and risk prediction models.
Engel, Christoph; Fischer, Christine
2015-02-01
BRCA1/2 mutation carriers have a considerably increased risk of developing breast and ovarian cancer. The personalized clinical management of carriers and other at-risk individuals depends on precise knowledge of the cancer risks. In this report, we give an overview of the present literature on empirical cancer risks, and we describe risk prediction models that are currently used for individual risk assessment in clinical practice. Cancer risks show large variability between studies. Breast cancer risks are 40-87% for BRCA1 mutation carriers and 18-88% for BRCA2 mutation carriers. For ovarian cancer, the risk estimates are in the range of 22-65% for BRCA1 and 10-35% for BRCA2. The contralateral breast cancer risk is high (10-year risk after the first cancer: 27% for BRCA1 and 19% for BRCA2). Risk prediction models have been proposed to provide more individualized risk prediction, using additional knowledge on family history, mode of inheritance of major genes, and other genetic and non-genetic risk factors. User-friendly software tools have been developed that serve as a basis for decision-making in family counseling units. In conclusion, further assessment of cancer risks and model validation is needed, ideally based on prospective cohort studies. To obtain such data, clinical management of carriers and other at-risk individuals should always be accompanied by standardized scientific documentation.
A mathematical model for predicting earthquake occurrence ...
African Journals Online (AJOL)
We consider the continental crust under damage. We use the observed results of microseisms from many seismic stations around the world, established to study the time series of the activities of the continental crust, with a view to predicting the possible time of occurrence of an earthquake. We consider microseism time series ...
Two stage neural network modelling for robust model predictive control.
Patan, Krzysztof
2018-01-01
The paper proposes a novel robust model predictive control scheme realized by means of artificial neural networks. The neural networks are used twofold: to design the so-called fundamental model of a plant and to capture the uncertainty associated with the plant model. In order to simplify the optimization process carried out within the framework of predictive control, an instantaneous linearization is applied, which makes it possible to define the optimization problem in the form of constrained quadratic programming. Stability of the proposed control system is also investigated by showing that the cost function is monotonically decreasing with respect to time. The derived robust model predictive control is tested and validated on the example of a pneumatic servomechanism working at different operating regimes. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Predicting extinction rates in stochastic epidemic models
International Nuclear Information System (INIS)
Schwartz, Ira B; Billings, Lora; Dykman, Mark; Landsman, Alexandra
2009-01-01
We investigate the stochastic extinction processes in a class of epidemic models. Motivated by the process of natural disease extinction in epidemics, we examine the rate of extinction as a function of disease spread. We show that the effective entropic barrier for extinction in a susceptible–infected–susceptible epidemic model displays scaling with the distance to the bifurcation point, with an unusual critical exponent. We make a direct comparison between predictions and numerical simulations. We also consider the effect of non-Gaussian vaccine schedules, and show numerically how the extinction process may be enhanced when the vaccine schedules are Poisson distributed.
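The extinction process described above can be illustrated with a toy stochastic susceptible–infected–susceptible (SIS) simulation. This is a minimal Gillespie-style sketch, not the authors' model; the population size, rates, and seed are illustrative assumptions only:

```python
import random

def sis_extinction_time(N=100, beta=1.05, gamma=1.0, i0=10, seed=1, t_max=1e4):
    """Gillespie simulation of a stochastic SIS epidemic; returns the time
    at which the infected population first hits zero (extinction)."""
    random.seed(seed)
    t, i = 0.0, i0
    while i > 0 and t < t_max:
        rate_inf = beta * i * (N - i) / N   # infection event: S -> I
        rate_rec = gamma * i                # recovery event:  I -> S
        total = rate_inf + rate_rec
        t += random.expovariate(total)      # exponential waiting time
        i += 1 if random.random() < rate_inf / total else -1
    return t

# Near the bifurcation point (R0 = beta/gamma -> 1) the entropic barrier is
# small and extinction happens quickly; far above it, extinction becomes
# exponentially slow in the population size.
t_near = sis_extinction_time(beta=1.05)
```

Sweeping `beta` toward `gamma` and averaging extinction times over seeds would reproduce the qualitative scaling with distance to the bifurcation point discussed in the abstract.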
Predictive Modeling of the CDRA 4BMS
Coker, Robert F.; Knox, James C.
2016-01-01
As part of NASA's Advanced Exploration Systems (AES) program and the Life Support Systems Project (LSSP), fully predictive models of the Four Bed Molecular Sieve (4BMS) of the Carbon Dioxide Removal Assembly (CDRA) on the International Space Station (ISS) are being developed. This virtual laboratory will be used to help reduce mass, power, and volume requirements for future missions. In this paper we describe current and planned modeling developments in the area of carbon dioxide removal to support future crewed Mars missions as well as the resolution of anomalies observed in the ISS CDRA.
Time dependent patient no-show predictive modelling development.
Huang, Yu-Li; Hanauer, David A
2016-05-09
Purpose - The purpose of this paper is to develop evidence-based predictive no-show models that consider each of a patient's past appointment statuses, a time-dependent component, as an independent predictor to improve predictability. Design/methodology/approach - A ten-year retrospective data set was extracted from a pediatric clinic. It consisted of 7,291 distinct patients who had at least two visits, along with their appointment characteristics, patient demographics, and insurance information. Logistic regression was adopted to develop no-show models using two-thirds of the data for training and the remaining data for validation. The no-show threshold was then determined by minimizing the misclassification of show/no-show assignments. A total of 26 predictive models were developed based on the number of available past appointments. Simulation was employed to test the effectiveness of each model on the costs of patient wait time, physician idle time, and overtime. Findings - The results demonstrated that the misclassification rate and the area under the receiver operating characteristic curve gradually improved as more appointment history was included, until around the 20th predictive model. The overbooking method with no-show predictive models suggested incorporating up to the 16th model and outperformed other overbooking methods by as much as 9.4 per cent in the cost per patient while allowing two additional patients in a clinic day. Research limitations/implications - The challenge now is to actually implement the no-show predictive model systematically to further demonstrate its robustness and simplicity in various scheduling systems. Originality/value - This paper provides examples of how to build no-show predictive models with time-dependent components to improve the overbooking policy. Accurately identifying scheduled patients' show/no-show status allows clinics to proactively schedule patients to reduce the negative impact of patient no-shows.
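The threshold step described above, choosing the no-show cutoff that minimizes misclassified show/no-show assignments on the validation set, can be sketched as follows. The predicted probabilities and labels are hypothetical, not the paper's data:

```python
def best_threshold(probs, labels, grid=None):
    """Pick the cutoff on predicted no-show probability that minimizes
    misclassified show/no-show assignments (labels: 1 = no-show)."""
    grid = grid or [i / 100 for i in range(1, 100)]
    def errors(t):
        return sum((p >= t) != bool(y) for p, y in zip(probs, labels))
    return min(grid, key=errors)

# Hypothetical validation-set probabilities from a fitted logistic model
probs  = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   0,   1,   0,   0,   0]   # 1 = patient no-showed
t = best_threshold(probs, labels)
```

In practice the chosen cutoff would then drive the overbooking decision (how many extra patients to schedule per clinic day).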
Mathematical models for prediction of safety factors for a simply ...
African Journals Online (AJOL)
From the results obtained, mathematical prediction models were developed using a least square regression analysis for bending, shear and deflection modes of failure considered in the study. The results showed that the safety factors for material, dead and live load are not unique, but they are influenced by safety index ...
Outcome Prediction in Mathematical Models of Immune Response to Infection.
Directory of Open Access Journals (Sweden)
Manuel Mai
Full Text Available Clinicians need to predict patient outcomes with high accuracy as early as possible after disease inception. In this manuscript, we show that patient-to-patient variability sets a fundamental limit on outcome prediction accuracy for a general class of mathematical models of the immune response to infection. However, accuracy can be increased at the expense of delayed prognosis. We investigate several systems of ordinary differential equations (ODEs) that model the host immune response to a pathogen load. Advantages of systems of ODEs for investigating the immune response to infection include the ability to collect data on large numbers of 'virtual patients', each with a given set of model parameters, and to obtain many time points during the course of the infection. We implement patient-to-patient variability v in the ODE models by randomly selecting the model parameters from distributions with coefficients of variation v that are centered on physiological values. We use logistic regression with one-versus-all classification to predict the discrete steady-state outcomes of the system. We find that the prediction algorithm achieves near 100% accuracy for v = 0, and the accuracy decreases with increasing v for all ODE models studied. The fact that multiple steady-state outcomes can be obtained for a given initial condition, i.e. the basins of attraction overlap in the space of initial conditions, limits the prediction accuracy for v > 0. Increasing the elapsed time of the variables used to train and test the classifier increases the prediction accuracy, while adding explicit external noise to the ODE models decreases the prediction accuracy. Our results quantify the competition between early prognosis and high prediction accuracy that is frequently encountered by clinicians.
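The notion of discrete steady-state outcomes determined by basins of attraction can be illustrated with a one-variable pathogen-load ODE. This is not one of the authors' models; the Allee-type form and its parameters are illustrative assumptions, integrated with a simple Euler scheme:

```python
def steady_state(p0, a=0.3, r=1.0, dt=0.01, steps=30000):
    """Euler integration of a toy pathogen-load ODE with a threshold:
    dp/dt = r*p*(p/a - 1)*(1 - p). Initial loads below the unstable fixed
    point a are cleared (p -> 0); loads above it persist (p -> 1)."""
    p = p0
    for _ in range(steps):
        p += dt * r * p * (p / a - 1) * (1 - p)
    return p

cleared = steady_state(0.2)   # below threshold -> clearance outcome
chronic = steady_state(0.5)   # above threshold -> persistent-infection outcome
```

A classifier trained on early time points of such trajectories would separate the two outcomes easily for fixed parameters, and lose accuracy as parameter variability blurs the basin boundary, which is the effect the abstract quantifies.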
Comparison of Linear Prediction Models for Audio Signals
Directory of Open Access Journals (Sweden)
2009-03-01
Full Text Available While linear prediction (LP has become immensely popular in speech modeling, it does not seem to provide a good approach for modeling audio signals. This is somewhat surprising, since a tonal signal consisting of a number of sinusoids can be perfectly predicted based on an (all-pole) LP model with a model order that is twice the number of sinusoids. We provide an explanation why this result cannot simply be extrapolated to LP of audio signals. If noise is taken into account in the tonal signal model, a low-order all-pole model appears to be only appropriate when the tonal components are uniformly distributed in the Nyquist interval. Based on this observation, different alternatives to the conventional LP model can be suggested. Either the model should be changed to a pole-zero, a high-order all-pole, or a pitch prediction model, or the conventional LP model should be preceded by an appropriate frequency transform, such as a frequency warping or downsampling. By comparing these alternative LP models to the conventional LP model in terms of frequency estimation accuracy, residual spectral flatness, and perceptual frequency resolution, we obtain several new and promising approaches to LP-based audio modeling.
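The claim that a tonal signal is perfectly predicted by an all-pole LP model of twice the number of sinusoids can be checked directly for one sinusoid: cos(ωn) exactly satisfies the order-2 recursion x[n] = 2·cos(ω)·x[n−1] − x[n−2]. A minimal noise-free check (the frequency is arbitrary):

```python
import math

w = 2 * math.pi * 0.13                       # normalized tone frequency
x = [math.cos(w * n) for n in range(200)]    # single noiseless sinusoid

# Order-2 all-pole predictor coefficients for a pure tone
a1, a2 = 2 * math.cos(w), -1.0
residual = max(abs(x[n] - (a1 * x[n - 1] + a2 * x[n - 2]))
               for n in range(2, 200))       # prediction error per sample
```

The residual is zero up to floating-point rounding; adding even a small amount of noise to `x` breaks this exactness, which is the abstract's point about why low-order LP fails for real audio.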
Generalised Chou-Yang model and recent results
International Nuclear Information System (INIS)
Fazal-e-Aleem; Rashid, H.
1995-09-01
It is shown that most recent results of the E710 and UA4/2 collaborations for the total cross section and ρ together with earlier measurements give good agreement with measurements for the differential cross section at 546 and 1800 GeV within the framework of the Generalised Chou-Yang model. These results are also compared with the predictions of other models. (author). 16 refs, 2 figs
Generalised Chou-Yang model and recent results
Energy Technology Data Exchange (ETDEWEB)
Fazal-e-Aleem [International Centre for Theoretical Physics, Trieste (Italy); Rashid, H. [Punjab Univ., Lahore (Pakistan). Centre for High Energy Physics
1996-12-31
It is shown that most recent results of E710 and UA4/2 collaboration for the total cross section and {rho} together with earlier measurements give good agreement with measurements for the differential cross section at 546 and 1800 GeV within the framework of Generalised Chou-Yang model. These results are also compared with the predictions of other models. (author) 16 refs.
Generalised Chou-Yang model and recent results
International Nuclear Information System (INIS)
Fazal-e-Aleem; Rashid, H.
1996-01-01
It is shown that most recent results of E710 and UA4/2 collaboration for the total cross section and ρ together with earlier measurements give good agreement with measurements for the differential cross section at 546 and 1800 GeV within the framework of Generalised Chou-Yang model. These results are also compared with the predictions of other models. (author)
Predictive modeling of coupled multi-physics systems: I. Theory
International Nuclear Information System (INIS)
Cacuci, Dan Gabriel
2014-01-01
Highlights: • We developed “predictive modeling of coupled multi-physics systems (PMCMPS)”. • PMCMPS reduces uncertainties in predicted model responses and parameters. • PMCMPS treats very large coupled systems efficiently. - Abstract: This work presents an innovative mathematical methodology for “predictive modeling of coupled multi-physics systems (PMCMPS).” This methodology takes into account fully the coupling terms between the systems but requires only the computational resources that would be needed to perform predictive modeling on each system separately. The PMCMPS methodology uses the maximum entropy principle to construct an optimal approximation of the unknown a priori distribution based on a priori known mean values and uncertainties characterizing the parameters and responses for both multi-physics models. This “maximum entropy”-approximate a priori distribution is combined, using Bayes’ theorem, with the “likelihood” provided by the multi-physics simulation models. Subsequently, the posterior distribution thus obtained is evaluated using the saddle-point method to obtain analytical expressions for the optimally predicted values for the multi-physics models’ parameters and responses, along with correspondingly reduced uncertainties. Noteworthy, the predictive modeling methodology for the coupled systems is constructed such that the systems can be considered sequentially rather than simultaneously, while preserving exactly the same results as if the systems were treated simultaneously. Consequently, very large coupled systems, which could perhaps exceed available computational resources if treated simultaneously, can be treated with the PMCMPS methodology presented in this work sequentially and without any loss of generality or information, requiring just the resources that would be needed if the systems were treated sequentially.
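The core variance-reduction effect, combining computed and measured information so that the posterior spread is smaller than either input, can be illustrated in the simplest scalar Gaussian case. This is a sketch of the general Bayesian principle, not the PMCMPS formulas themselves; the numbers are illustrative:

```python
def combine(mean_a, var_a, mean_b, var_b):
    """Precision-weighted combination of two independent Gaussian estimates
    of the same quantity (e.g., a computed and a measured response).
    Returns the posterior mean and variance."""
    w = var_b / (var_a + var_b)              # weight on the more precise input
    mean = w * mean_a + (1 - w) * mean_b
    var = var_a * var_b / (var_a + var_b)    # always below min(var_a, var_b)
    return mean, var

# Computed response (mean 25.0, variance 4.0) vs. measurement (24.0, 1.0)
m, v = combine(25.0, 4.0, 24.0, 1.0)
```

The posterior variance `v` is smaller than both input variances, mirroring the paper's result that predicted standard deviations fall below both the computed and the experimentally measured ones.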
Embryo quality predictive models based on cumulus cells gene expression
Directory of Open Access Journals (Sweden)
Devjak R
2016-06-01
Full Text Available Since the introduction of in vitro fertilization (IVF) in the clinical practice of infertility treatment, indicators for high quality embryos have been investigated. Cumulus cells (CC) have a specific gene expression profile according to the developmental potential of the oocyte they are surrounding, and therefore, specific gene expression could be used as a biomarker. The aim of our study was to combine more than one biomarker to observe improvement in the prediction value of embryo development. In this study, 58 CC samples from 17 IVF patients were analyzed. This study was approved by the Republic of Slovenia National Medical Ethics Committee. Gene expression analysis [quantitative real-time polymerase chain reaction (qPCR)] for five genes, analyzed according to embryo quality level, was performed. Two prediction models were tested for embryo quality prediction: a binary logistic and a decision tree model. As the main outcome, gene expression levels for five genes were taken and the area under the curve (AUC) for the two prediction models was calculated. Among tested genes, AMHR2 and LIF showed significant expression difference between high quality and low quality embryos. These two genes were used for the construction of two prediction models: the binary logistic model yielded an AUC of 0.72 ± 0.08 and the decision tree model yielded an AUC of 0.73 ± 0.03. Two different prediction models yielded similar predictive power to differentiate high and low quality embryos. In terms of eventual clinical decision making, the decision tree model resulted in easy-to-interpret rules that are highly applicable in clinical practice.
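The AUC reported above is equivalent to the Mann-Whitney statistic: the probability that a randomly chosen high-quality embryo scores above a randomly chosen low-quality one. It can be computed directly from the two groups' scores; the expression levels below are hypothetical, not the study's data:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    fraction of (positive, negative) pairs ranked correctly, ties count half."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical relative expression levels (e.g., a gene such as AMHR2)
high_q = [2.1, 1.8, 2.5, 1.9]   # high-quality embryo group
low_q  = [1.2, 1.7, 1.0, 2.0]   # low-quality embryo group
a = auc(high_q, low_q)
```

An AUC of 0.5 means no discrimination; values around 0.72, as in the study, indicate moderate but clinically useful separation.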
Bayesian Genomic Prediction with Genotype × Environment Interaction Kernel Models
Cuevas, Jaime; Crossa, José; Montesinos-López, Osval A.; Burgueño, Juan; Pérez-Rodríguez, Paulino; de los Campos, Gustavo
2016-01-01
The phenomenon of genotype × environment (G × E) interaction in plant breeding decreases selection accuracy, thereby negatively affecting genetic gains. Several genomic prediction models incorporating G × E have been recently developed and used in genomic selection of plant breeding programs. Genomic prediction models for assessing multi-environment G × E interaction are extensions of a single-environment model, and have advantages and limitations. In this study, we propose two multi-environment Bayesian genomic models: the first model considers genetic effects (u) that can be assessed by the Kronecker product of variance–covariance matrices of genetic correlations between environments and genomic kernels through markers under two linear kernel methods, linear (genomic best linear unbiased predictors, GBLUP) and Gaussian (Gaussian kernel, GK). The other model has the same genetic component as the first model (u) plus an extra component, f, that captures random effects between environments that were not captured by the random effects u. We used five CIMMYT data sets (one maize and four wheat) that were previously used in different studies. Results show that models with G × E always have superior prediction ability relative to single-environment models, and the higher prediction ability of multi-environment models with u and f over the multi-environment model with only u occurred 85% of the time with GBLUP and 45% of the time with GK across the five data sets. The latter result indicated that including the random effect f is still beneficial for increasing prediction ability after adjusting by the random effect u. PMID:27793970
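The Kronecker-product construction of the multi-environment genetic covariance can be sketched with a tiny example. The matrices below are illustrative (a 2×2 environment correlation matrix and a 2×2 genomic kernel for two lines), not the CIMMYT data:

```python
def kron(A, B):
    """Kronecker product of two matrices given as lists of lists:
    block (i, j) of the result is A[i][j] * B."""
    return [[a * b for a in row_a for b in row_b]
            for row_a in A for row_b in B]

E = [[1.0, 0.5],   # genetic correlation between two environments
     [0.5, 1.0]]
K = [[1.0, 0.2],   # genomic relationship kernel for two lines (e.g., GBLUP)
     [0.2, 1.0]]

# Covariance of the stacked genetic effects u across (environment, line) pairs
G = kron(E, K)
```

In the paper's models this product gives the prior covariance of the genetic effects u, with K built either linearly (GBLUP) or via a Gaussian kernel (GK).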
Bayesian Genomic Prediction with Genotype × Environment Interaction Kernel Models
Directory of Open Access Journals (Sweden)
Jaime Cuevas
2017-01-01
Full Text Available The phenomenon of genotype × environment (G × E) interaction in plant breeding decreases selection accuracy, thereby negatively affecting genetic gains. Several genomic prediction models incorporating G × E have been recently developed and used in genomic selection of plant breeding programs. Genomic prediction models for assessing multi-environment G × E interaction are extensions of a single-environment model, and have advantages and limitations. In this study, we propose two multi-environment Bayesian genomic models: the first model considers genetic effects (u) that can be assessed by the Kronecker product of variance–covariance matrices of genetic correlations between environments and genomic kernels through markers under two linear kernel methods, linear (genomic best linear unbiased predictors, GBLUP) and Gaussian (Gaussian kernel, GK). The other model has the same genetic component as the first model (u) plus an extra component, f, that captures random effects between environments that were not captured by the random effects u. We used five CIMMYT data sets (one maize and four wheat) that were previously used in different studies. Results show that models with G × E always have superior prediction ability relative to single-environment models, and the higher prediction ability of multi-environment models with u and f over the multi-environment model with only u occurred 85% of the time with GBLUP and 45% of the time with GK across the five data sets. The latter result indicated that including the random effect f is still beneficial for increasing prediction ability after adjusting by the random effect u.
Bayesian Genomic Prediction with Genotype × Environment Interaction Kernel Models.
Cuevas, Jaime; Crossa, José; Montesinos-López, Osval A; Burgueño, Juan; Pérez-Rodríguez, Paulino; de Los Campos, Gustavo
2017-01-05
The phenomenon of genotype × environment (G × E) interaction in plant breeding decreases selection accuracy, thereby negatively affecting genetic gains. Several genomic prediction models incorporating G × E have been recently developed and used in genomic selection of plant breeding programs. Genomic prediction models for assessing multi-environment G × E interaction are extensions of a single-environment model, and have advantages and limitations. In this study, we propose two multi-environment Bayesian genomic models: the first model considers genetic effects (u) that can be assessed by the Kronecker product of variance-covariance matrices of genetic correlations between environments and genomic kernels through markers under two linear kernel methods, linear (genomic best linear unbiased predictors, GBLUP) and Gaussian (Gaussian kernel, GK). The other model has the same genetic component as the first model (u) plus an extra component, f, that captures random effects between environments that were not captured by the random effects u. We used five CIMMYT data sets (one maize and four wheat) that were previously used in different studies. Results show that models with G × E always have superior prediction ability relative to single-environment models, and the higher prediction ability of multi-environment models with u and f over the multi-environment model with only u occurred 85% of the time with GBLUP and 45% of the time with GK across the five data sets. The latter result indicated that including the random effect f is still beneficial for increasing prediction ability after adjusting by the random effect u. Copyright © 2017 Cuevas et al.
Stochastic models for predicting pitting corrosion damage of HLRW containers
International Nuclear Information System (INIS)
Henshall, G.A.
1991-10-01
Stochastic models for predicting aqueous pitting corrosion damage of high-level radioactive-waste containers are described. These models could be used to predict the time required for the first pit to penetrate a container and the increase in the number of breaches at later times, both of which would be useful in the repository system performance analysis. Monte Carlo implementations of the stochastic models are described, and predictions of induction time, survival probability and pit depth distributions are presented. These results suggest that the pit nucleation probability decreases with exposure time and that pit growth may be a stochastic process. The advantages and disadvantages of the stochastic approach, methods for modeling the effects of environment, and plans for future work are discussed.
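The stochastic picture described above, a pit nucleation probability that decreases with exposure time plus stochastic pit growth, can be sketched with a minimal Monte Carlo model. All parameters (site count, wall thickness, hazards) are illustrative assumptions, not the report's calibrated values:

```python
import random

def first_penetration_time(n_sites=1000, wall=10, p_nucleate=0.01,
                           p_grow=0.2, t_max=5000, seed=7):
    """Monte Carlo sketch of stochastic pitting: each surface site may
    nucleate a pit with a hazard that decays as 1/t, and each existing pit
    deepens by one unit with probability p_grow per time step. Returns the
    first time any pit depth reaches the wall thickness (induction time)."""
    random.seed(seed)
    depths = [0] * n_sites
    nucleated = [False] * n_sites
    for t in range(1, t_max + 1):
        for i in range(n_sites):
            if not nucleated[i]:
                if random.random() < p_nucleate / t:   # decaying nucleation
                    nucleated[i] = True
            elif random.random() < p_grow:             # stochastic growth
                depths[i] += 1
                if depths[i] >= wall:
                    return t
    return None

t_first = first_penetration_time()
```

Repeating the simulation over many seeds would yield the induction-time and pit-depth distributions of the kind the report presents.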
Prediction of the flooding process at the Ronneburg site - results of an integrated approach
International Nuclear Information System (INIS)
Paul, M.; Saenger, H.-J.; Snagowski, S.; Maerten, H.; Eckart, M.
1998-01-01
The flooding process of the Ronneburg uranium mine (WISMUT) was initiated at the turn of the year 1997 to 1998. In order to prepare the flooding process and to derive and optimize technological measures, an integrated modelling approach was chosen which includes several coupled modules. The most important issues to be answered are: (1) prediction of the flooding time; (2) prediction of the groundwater level at the post-flooding stage, and assessment of the amount, location and quality of flooding waters entering the receiving streams at the final stage; (3) water quality prediction within the mine during the flooding process; and (4) definition of technological measures and assessment of their efficiency. A box model which includes the three-dimensional distribution of the cavity volume in the mine represents the model core. The model considers the various types of dewatered cavity volumes for each mine level / mining field and the degree of vertical and horizontal connection between the mining fields. Different types of open mine space as well as the dewatered geological pore and joint volume are considered, taking into account the contour of the depression cone prior to flooding and the characteristics of the different rock types. Based on the mine water balance and the flooding technology, the model predicts the rise of the water table over time during the flooding process for each mine field separately. In order to predict the mine water quality and the efficiency of in-situ water treatment, the box model was linked to a geochemical model (PHREEQC). A three-dimensional flow model is used to evaluate the post-flooding situation at the Ronneburg site. This model is coupled to the box model. The modelling results of various flooding scenarios show that a prediction of the post-flooding geohydraulic situation is possible despite the uncertainties concerning the input parameters which still exist. The post-flooding water table in the central part of the Ronneburg mine will be 270 m
Econometric models for predicting confusion crop ratios
Umberger, D. E.; Proctor, M. H.; Clark, J. E.; Eisgruber, L. M.; Braschler, C. B. (Principal Investigator)
1979-01-01
Results for both the United States and Canada show that econometric models can provide estimates of confusion crop ratios that are more accurate than historical ratios. Whether these models can support the LACIE 90/90 accuracy criterion is uncertain. In the United States, experimenting with additional model formulations could provide improved models in some CRDs, particularly in winter wheat. Improved models may also be possible for the Canadian CDs. The more aggregative province/state models outperformed individual CD/CRD models. This result was expected, partly because acreage statistics are based on sampling procedures, and the sampling precision declines from the province/state to the CD/CRD level. Declining sampling precision and the need to substitute province/state data for the CD/CRD data introduced measurement error into the CD/CRD models.
Ground Motion Prediction Models for Caucasus Region
Jorjiashvili, Nato; Godoladze, Tea; Tvaradze, Nino; Tumanova, Nino
2016-04-01
Ground motion prediction models (GMPMs) relate ground motion intensity measures to variables describing earthquake source, path, and site effects. Estimation of expected ground motion is a fundamental part of earthquake hazard assessment. The most commonly used parameter for an attenuation relation is peak ground acceleration or spectral acceleration, because this parameter gives useful information for seismic hazard assessment. Development of the Georgian Digital Seismic Network began in 2003. In this study, new GMP models are obtained based on new data from the Georgian seismic network and also from neighboring countries. The models are estimated in the classical statistical way, by regression analysis. Site ground conditions are additionally considered, because the same earthquake recorded at the same distance may cause different damage depending on ground conditions. Empirical ground-motion prediction models (GMPMs) require adjustment to make them appropriate for site-specific scenarios; however, the process of making such adjustments remains a challenge. This work presents a holistic framework for the development of a peak ground acceleration (PGA) or spectral acceleration (SA) GMPE that is easily adjustable to different seismological conditions and does not suffer from the practical problems associated with adjustments in the response spectral domain.
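The regression step can be sketched for a deliberately simplified attenuation form, ln(PGA) = a + b·M + c·ln(R), fitted by least squares via the normal equations. The functional form and the synthetic noise-free records below are illustrative assumptions, not the study's data or model:

```python
import math

def fit_gmpe(records):
    """Least-squares fit of ln(PGA) = a + b*M + c*ln(R) from
    records = [(magnitude, distance_km, pga_g), ...] via 3x3 normal equations."""
    X = [[1.0, M, math.log(R)] for M, R, _ in records]
    y = [math.log(p) for _, _, p in records]
    # Normal equations: (X^T X) beta = X^T y
    A = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(3)]
    # Gaussian elimination with partial pivoting
    for k in range(3):
        piv = max(range(k, 3), key=lambda r: abs(A[r][k]))
        A[k], A[piv] = A[piv], A[k]
        b[k], b[piv] = b[piv], b[k]
        for r in range(k + 1, 3):
            f = A[r][k] / A[k][k]
            for col in range(k, 3):
                A[r][col] -= f * A[k][col]
            b[r] -= f * b[k]
    beta = [0.0, 0.0, 0.0]
    for k in (2, 1, 0):   # back substitution
        beta[k] = (b[k] - sum(A[k][j] * beta[j] for j in range(k + 1, 3))) / A[k][k]
    return beta  # [a, b, c]

# Synthetic records generated from a = -2.0, b = 0.9, c = -1.1 (no noise),
# so the regression should recover the coefficients exactly
recs = [(M, R, math.exp(-2.0 + 0.9 * M - 1.1 * math.log(R)))
        for M in (4.0, 5.0, 6.0, 7.0) for R in (10.0, 30.0, 100.0)]
a_, b_, c_ = fit_gmpe(recs)
```

Real GMPM regressions add site-condition terms and treat inter- and intra-event residuals separately, but the core estimation is of this form.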
Modeling and Prediction of Krueger Device Noise
Guo, Yueping; Burley, Casey L.; Thomas, Russell H.
2016-01-01
This paper presents the development of a noise prediction model for aircraft Krueger flap devices that are considered as alternatives to leading edge slotted slats. The prediction model decomposes the total Krueger noise into four components, generated by the unsteady flows, respectively, in the cove under the pressure side surface of the Krueger, in the gap between the Krueger trailing edge and the main wing, around the brackets supporting the Krueger device, and around the cavity on the lower side of the main wing. For each noise component, the modeling follows a physics-based approach that aims at capturing the dominant noise-generating features in the flow and developing correlations between the noise and the flow parameters that control the noise generation processes. The far field noise is modeled using each of the four noise component's respective spectral functions, far field directivities, Mach number dependencies, component amplitudes, and other parametric trends. Preliminary validations are carried out by using small scale experimental data, and two applications are discussed; one for conventional aircraft and the other for advanced configurations. The former focuses on the parametric trends of Krueger noise on design parameters, while the latter reveals its importance in relation to other airframe noise components.
Prediction of Chemical Function: Model Development and ...
The United States Environmental Protection Agency’s Exposure Forecaster (ExpoCast) project is developing both statistical and mechanism-based computational models for predicting exposures to thousands of chemicals, including those in consumer products. The high-throughput (HT) screening-level exposures developed under ExpoCast can be combined with HT screening (HTS) bioactivity data for the risk-based prioritization of chemicals for further evaluation. The functional role (e.g. solvent, plasticizer, fragrance) that a chemical performs can drive both the types of products in which it is found and the concentration in which it is present, and therefore impacts exposure potential. However, critical chemical use information (including functional role) is lacking for the majority of commercial chemicals for which exposure estimates are needed. A suite of machine-learning based models for classifying chemicals in terms of their likely functional roles in products, based on structure, was developed. This effort required collection, curation, and harmonization of publicly available data sources of chemical functional use information from government and industry bodies. Physicochemical and structure descriptor data were generated for chemicals with function data. Machine-learning classifier models for function were then built in a cross-validated manner from the descriptor/function data using the method of random forests. The models were applied to: 1) predict chemi
Evaluating Predictive Models of Software Quality
Ciaschini, V.; Canaparo, M.; Ronchieri, E.; Salomoni, D.
2014-06-01
Applications from the High Energy Physics scientific community are constantly growing and are implemented by a large number of developers. This implies a strong churn on the code and an associated risk of faults, which is unavoidable as long as the software undergoes active evolution. However, the necessities of production systems run counter to this. Stability and predictability are of paramount importance; in addition, a short turn-around time for the defect discovery-correction-deployment cycle is required. A way to reconcile these opposite foci is to use a software quality model to obtain an approximation of the risk before releasing a program, so as to deliver only software with a risk lower than an agreed threshold. In this article we evaluated two quality predictive models to identify the operational risk and the quality of some software products. We applied these models to the development history of several EMI packages with the intent to discover the risk factor of each product and compare it with its real history. We attempted to determine whether the models reasonably map reality for the applications under evaluation, and finally we concluded by suggesting directions for further studies.
Predicting FLDs Using a Multiscale Modeling Scheme
Wu, Z.; Loy, C.; Wang, E.; Hegadekatte, V.
2017-09-01
The measurement of a single forming limit diagram (FLD) requires significant resources and is time consuming. We have developed a multiscale modeling scheme to predict FLDs using a combination of limited laboratory testing, crystal plasticity (VPSC) modeling, and dual sequential-stage finite element (ABAQUS/Explicit) modeling with the Marciniak-Kuczynski (M-K) criterion to determine the limit strain. We have established a means to work around existing limitations in ABAQUS/Explicit by using an anisotropic yield locus (e.g., BBC2008) in combination with the M-K criterion. We further apply a VPSC model to reduce the number of laboratory tests required to characterize the anisotropic yield locus. In the present work, we show that the predicted FLD is in excellent agreement with the measured FLD for AA5182 in the O temper. Instead of 13 different tests as for a traditional FLD determination within Novelis, our technique uses just four measurements: tensile properties in three orientations; plane strain tension; biaxial bulge; and the sheet crystallographic texture. The turnaround time is consequently far less than for the traditional laboratory measurement of the FLD.
Evaluating predictive models of software quality
International Nuclear Information System (INIS)
Ciaschini, V; Canaparo, M; Ronchieri, E; Salomoni, D
2014-01-01
Applications from the High Energy Physics scientific community are constantly growing and are implemented by a large number of developers. This implies substantial churn in the code and an associated risk of faults, which is unavoidable as long as the software undergoes active evolution. However, the necessities of production systems run counter to this: stability and predictability are of paramount importance, and a short turn-around time for the defect discovery-correction-deployment cycle is required. A way to reconcile these opposing goals is to use a software quality model to estimate the risk before releasing a program, so that only software with a risk lower than an agreed threshold is delivered. In this article we evaluate two predictive quality models to identify the operational risk and the quality of some software products. We applied these models to the development history of several EMI packages with the intent of discovering the risk factor of each product and comparing it with its real history. We attempted to determine whether the models reasonably map reality for the applications under evaluation, and we conclude by suggesting directions for further studies.
Gamma-Ray Pulsars Models and Predictions
Harding, A K
2001-01-01
Pulsed emission from gamma-ray pulsars originates inside the magnetosphere, from radiation by charged particles accelerated near the magnetic poles or in the outer gaps. In polar cap models, the high energy spectrum is cut off by magnetic pair production above an energy that is dependent on the local magnetic field strength. While most young pulsars with surface fields in the range B = 10^{12} - 10^{13} G are expected to have high energy cutoffs around several GeV, the gamma-ray spectra of old pulsars having lower surface fields may extend to 50 GeV. Although the gamma-ray emission of older pulsars is weaker, detecting pulsed emission at high energies from nearby sources would be an important confirmation of polar cap models. Outer gap models predict more gradual high-energy turnovers at around 10 GeV, but also predict an inverse Compton component extending to TeV energies. Detection of pulsed TeV emission, which would not survive attenuation at the polar caps, is thus an important test of outer gap models. N...
Engineering Glass Passivation Layers - Model Results
Energy Technology Data Exchange (ETDEWEB)
Skorski, Daniel C.; Ryan, Joseph V.; Strachan, Denis M.; Lepry, William C.
2011-08-08
The immobilization of radioactive waste into glass waste forms is a baseline process of nuclear waste management not only in the United States, but worldwide. The rate of radionuclide release from these glasses is a critical measure of the quality of the waste form. Over long-term tests and using extrapolations of ancient analogues, it has been shown that well designed glasses exhibit a dissolution rate that quickly decreases to a slow residual rate for the lifetime of the glass. The mechanistic cause of this decreased corrosion rate is a subject of debate, with one of the major theories suggesting that the decrease is caused by the formation of corrosion products in such a manner as to present a diffusion barrier on the surface of the glass. Although there is much evidence of this type of mechanism, there has been no attempt to engineer the effect to maximize the passivating qualities of the corrosion products. This study represents the first attempt to engineer the creation of passivating phases on the surface of glasses. Our approach utilizes interactions between the dissolving glass and elements from the disposal environment to create impermeable capping layers. By drawing from other corrosion studies in areas where passivation layers have been successfully engineered to protect the bulk material, we present here a report on mineral phases that are likely to have a morphological tendency to encrust the surface of the glass. Our modeling has focused on using the AFCI glass system in a carbonate, sulfate, and phosphate rich environment. We evaluate the minerals predicted to form to determine the likelihood of the formation of a protective layer on the surface of the glass. We have also modeled individual ions in solution versus pH and versus the addition of aluminum and silicon. These results allow us to understand the pH and ion concentration dependence of mineral formation. We have determined that iron minerals are likely to form a complete incrustation layer and we plan
A model for predicting lung cancer response to therapy
International Nuclear Information System (INIS)
Seibert, Rebecca M.; Ramsey, Chester R.; Hines, J. Wesley; Kupelian, Patrick A.; Langen, Katja M.; Meeks, Sanford L.; Scaperoth, Daniel D.
2007-01-01
Purpose: Volumetric computed tomography (CT) images acquired by image-guided radiation therapy (IGRT) systems can be used to measure tumor response over the course of treatment. Predictive adaptive therapy is a novel treatment technique that uses volumetric IGRT data to actively predict the future tumor response to therapy during the first few weeks of IGRT treatment. The goal of this study was to develop and test a model for predicting lung tumor response during IGRT treatment using serial megavoltage CT (MVCT). Methods and Materials: Tumor responses were measured for 20 lung cancer lesions in 17 patients who were imaged and treated with helical tomotherapy with doses ranging from 2.0 to 2.5 Gy per fraction. Five patients were treated with concurrent chemotherapy, and 1 patient was treated with neoadjuvant chemotherapy. Tumor response to treatment was retrospectively measured by contouring 480 serial MVCT images acquired before treatment. A nonparametric, memory-based locally weighted regression (LWR) model was developed for predicting tumor response using the retrospective tumor response data. This model predicts future tumor volumes and the associated confidence intervals based on limited observations during the first 2 weeks of treatment. The predictive accuracy of the model was tested using a leave-one-out cross-validation technique with the measured tumor responses. Results: The predictive algorithm was used to compare predicted versus measured tumor volume response for all 20 lesions. The average error for the predictions of the final tumor volume was 12%, with the true volumes always bounded by the 95% confidence interval. The greatest model uncertainty occurred near the middle of the course of treatment, in which the tumor response relationships were more complex, the model has less information, and the predictors were more varied. The optimal days for measuring the tumor response on the MVCT images were on elapsed Days 1, 2, 5, 9, 11, 12, 17, and 18 during
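A minimal sketch of the memory-based locally weighted regression idea: a query patient's early observations are compared against a stored "memory" of past cases, and the outcome is predicted as a kernel-weighted average of the stored outcomes. The exponential-decay response curves, bandwidth, and day choices below are hypothetical stand-ins, not the paper's calibrated model:

```python
import numpy as np

def lwr_predict(query, bank_X, bank_y, bandwidth=0.05):
    """Memory-based locally weighted regression: predict an outcome as the
    Gaussian-kernel-weighted average of outcomes stored for past cases."""
    d2 = ((bank_X - query) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    return float(w @ bank_y / w.sum())

# Hypothetical "memory": relative tumor volumes at days 1, 5 and 9 for past
# patients whose responses followed v(t) = exp(-k t) with various rates k.
days = np.array([1.0, 5.0, 9.0])
ks = np.linspace(0.01, 0.10, 50)
bank_X = np.exp(-np.outer(ks, days))   # early observations per past case
bank_y = np.exp(-ks * 35.0)            # "final" volume at day 35 per case

k_new = 0.055                          # an unseen patient's true decay rate
pred = lwr_predict(np.exp(-k_new * days), bank_X, bank_y)
true = np.exp(-k_new * 35.0)
print(round(pred, 3))
```

The confidence intervals the paper reports would come from the local spread of the weighted neighbors, which this sketch omits.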
Results from the IAEA benchmark of spallation models
International Nuclear Information System (INIS)
Leray, S.; David, J.C.; Khandaker, M.; Mank, G.; Mengoni, A.; Otsuka, N.; Filges, D.; Gallmeier, F.; Konobeyev, A.; Michel, R.
2011-01-01
Spallation reactions play an important role in a wide domain of applications. In the simulation codes used in this field, the nuclear interaction cross-sections and characteristics are computed by spallation models. The International Atomic Energy Agency (IAEA) has recently organised a benchmark of the spallation models that are used, or could be used in the future, in high-energy transport codes. The objectives were, first, to assess the prediction capabilities of the different spallation models for the different mass and energy regions and the different exit channels and, second, to understand the reasons for the success or deficiency of the models. Results of the benchmark concerning both the analysis of the prediction capabilities of the models and the first conclusions on the physics of spallation models are presented. (authors)
Plant water potential improves prediction of empirical stomatal models.
Directory of Open Access Journals (Sweden)
William R L Anderegg
Full Text Available Climate change is expected to lead to increases in drought frequency and severity, with deleterious effects on many ecosystems. Stomatal responses to changing environmental conditions form the backbone of all ecosystem models, but are based on empirical relationships and are not well-tested during drought conditions. Here, we use a dataset of 34 woody plant species spanning global forest biomes to examine the effect of leaf water potential on stomatal conductance and test the predictive accuracy of three major stomatal models and a recently proposed model. We find that current leaf-level empirical models have consistent biases of over-prediction of stomatal conductance during dry conditions, particularly at low soil water potentials. Furthermore, the recently proposed stomatal conductance model yields increases in predictive capability compared to current models, with particular improvement during drought conditions. Our results reveal that including stomatal sensitivity to declining water potential and consequent impairment of plant water transport will improve predictions during drought conditions and show that many biomes contain a diversity of plant stomatal strategies that range from risky to conservative stomatal regulation during water stress. Such improvements in stomatal simulation are greatly needed to help unravel and predict the response of ecosystems to future climate extremes.
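To illustrate the kind of extension the study argues for, the sketch below takes an empirical stomatal model of the Medlyn (2011) form and multiplies it by a water-potential downregulation factor. The form of that factor and all parameter values are illustrative assumptions of ours, not the model tested in the paper:

```python
import math

def stomatal_conductance(A, Ca, D, psi, g0=0.01, g1=4.0,
                         psi_ref=-2.0, b=2.0):
    """Empirical stomatal conductance (Medlyn-form base model) scaled by a
    hypothetical water-potential downregulation factor f(psi).
    A: net assimilation, Ca: ambient CO2, D: vapor pressure deficit,
    psi: leaf water potential (MPa, negative). Parameters are illustrative."""
    base = g0 + 1.6 * (1.0 + g1 / math.sqrt(D)) * A / Ca
    # f(psi) -> 1 for well-watered leaves (psi near 0), shrinks under drought.
    f_psi = math.exp(-((psi / psi_ref) ** b))
    return base * f_psi

g_wet = stomatal_conductance(A=12.0, Ca=400.0, D=1.2, psi=-0.5)
g_dry = stomatal_conductance(A=12.0, Ca=400.0, D=1.2, psi=-2.5)
print(g_dry < g_wet)   # drought downregulates conductance by construction
```

Without the `f_psi` term the two calls would return identical conductances, which mirrors the over-prediction bias the study reports for water-potential-blind empirical models under drought.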
An analytical model for climatic predictions
International Nuclear Information System (INIS)
Njau, E.C.
1990-12-01
A climatic model based upon analytical expressions is presented. This model is capable of making long-range predictions of heat energy variations on regional or global scales. These variations can then be transformed into corresponding variations of some other key climatic parameters, since weather and climatic changes are basically driven by differential heating and cooling around the earth. On the basis of the mathematical expressions upon which the model is based, it is shown that the global heat energy structure (and hence the associated climatic system) is characterized by zonally as well as latitudinally propagating fluctuations at frequencies below 0.5 day⁻¹. We have calculated the propagation speeds for those particular frequencies that are well documented in the literature. The calculated speeds are in excellent agreement with the measured speeds. (author). 13 refs
Key Questions in Building Defect Prediction Models in Practice
Ramler, Rudolf; Wolfmaier, Klaus; Stauder, Erwin; Kossak, Felix; Natschläger, Thomas
The information about which modules of a future version of a software system are defect-prone is a valuable planning aid for quality managers and testers. Defect prediction promises to indicate these defect-prone modules. However, constructing effective defect prediction models in an industrial setting involves a number of key questions. In this paper we discuss ten key questions identified in the context of establishing defect prediction in a large software development project. Seven consecutive versions of the software system have been used to construct and validate defect prediction models for system test planning. Furthermore, the paper presents initial empirical results from the studied project and, by this means, contributes answers to the identified questions.
A deep auto-encoder model for gene expression prediction.
Xie, Rui; Wen, Jia; Quitadamo, Andrew; Cheng, Jianlin; Shi, Xinghua
2017-11-17
Gene expression is a key intermediate level through which genotypes lead to a particular trait. Gene expression is affected by various factors, including the genotypes of genetic variants. With the aim of delineating the genetic impact on gene expression, we build a deep auto-encoder model to assess how genetic variants contribute to gene expression changes. This new deep learning model is a regression-based predictive model based on the MultiLayer Perceptron and Stacked Denoising Auto-encoder (MLP-SAE). The model is trained using a stacked denoising auto-encoder for feature selection and a multilayer perceptron framework for backpropagation. We further improve the model by introducing dropout to prevent overfitting and improve performance. To demonstrate the usage of this model, we apply MLP-SAE to a real genomic dataset with genotypes and gene expression profiles measured in yeast. Our results show that the MLP-SAE model with dropout outperforms other models, including Lasso, Random Forests, and the MLP-SAE model without dropout. Using the MLP-SAE model with dropout, we show that gene expression quantifications predicted by the model solely from genotypes align well with true gene expression patterns. We provide a deep auto-encoder model for predicting gene expression from SNP genotypes. This study demonstrates that deep learning is appropriate for tackling another genomic problem, namely building predictive models to understand genotypes' contribution to gene expression. With the emerging availability of richer genomic data, we anticipate that deep learning models will play a bigger role in modeling and interpreting genomics data.
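The dropout mechanism the authors add to MLP-SAE can be illustrated in isolation. This inverted-dropout sketch (our own minimal implementation, not the paper's code) shows why the expected activation of a layer is unchanged during training:

```python
import numpy as np

def dropout(x, p, rng, train=True):
    """Inverted dropout: zero each unit with probability p during training
    and rescale the survivors by 1/(1-p), so E[output] equals the input.
    At inference time (train=False) the layer is an identity."""
    if not train or p == 0.0:
        return x
    mask = (rng.random(x.shape) >= p) / (1.0 - p)
    return x * mask

rng = np.random.default_rng(42)
h = np.ones(10000)                      # a layer of unit activations
h_drop = dropout(h, p=0.5, rng=rng)     # roughly half zeroed, rest doubled
print(round(float(h_drop.mean()), 2))   # close to 1.0 in expectation
```

Because survivors are rescaled during training, no weight rescaling is needed at prediction time, which keeps the regularized and unregularized forward passes interchangeable.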
Predicting water main failures using Bayesian model averaging and survival modelling approach
International Nuclear Information System (INIS)
Kabir, Golam; Tesfamariam, Solomon; Sadiq, Rehan
2015-01-01
To develop an effective preventive or proactive repair and replacement action plan, water utilities often rely on water main failure prediction models. However, in predicting the failure of water mains, uncertainty is inherent regardless of the quality and quantity of data used in the model. To improve the understanding of water main failure, a Bayesian framework is developed for predicting the failure of water mains considering uncertainties. In this study, the Bayesian model averaging method (BMA) is presented to identify the influential pipe-dependent and time-dependent covariates considering model uncertainties, whereas the Bayesian Weibull Proportional Hazard Model (BWPHM) is applied to develop the survival curves and to predict the failure rates of water mains. To accredit the proposed framework, it is implemented to predict the failure of cast iron (CI) and ductile iron (DI) pipes of the water distribution network of the City of Calgary, Alberta, Canada. Results indicate that the predicted 95% uncertainty bounds of the proposed BWPHMs effectively capture the observed breaks for both CI and DI water mains. Moreover, the performance of the proposed BWPHMs is better than that of the Cox Proportional Hazard Model (Cox-PHM), owing to the use of a Weibull distribution for the baseline hazard function and the consideration of model uncertainties. - Highlights: • Prioritize rehabilitation and replacements (R/R) strategies of water mains. • Consider the uncertainties for the failure prediction. • Improve the prediction capability of the water mains failure models. • Identify the influential and appropriate covariates for different models. • Determine the effects of the covariates on failure
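A sketch of the Weibull proportional-hazards survival curve that a BWPHM builds on: S(t|x) = exp(-(t/scale)^shape · exp(x·β)). The shape/scale parameters, covariates, and coefficients below are hypothetical, chosen only so that the cast-iron curve lies below the ductile-iron curve; the paper's posterior estimates and uncertainty bounds are not reproduced here:

```python
import numpy as np

def weibull_ph_survival(t, shape, scale, coeffs, x):
    """Survival function S(t|x) = exp(-(t/scale)**shape * exp(x . coeffs))
    for a Weibull baseline hazard under the proportional-hazards assumption."""
    lp = np.dot(x, coeffs)                       # linear predictor x . beta
    return np.exp(-((t / scale) ** shape) * np.exp(lp))

t = np.linspace(0.0, 100.0, 101)                 # pipe age in years
coeffs = np.array([0.8, -0.02])                  # hypothetical effects
x_ci = np.array([1.0, 150.0])                    # [is_cast_iron, diameter_mm]
x_di = np.array([0.0, 150.0])
S_ci = weibull_ph_survival(t, 2.0, 120.0, coeffs, x_ci)
S_di = weibull_ph_survival(t, 2.0, 120.0, coeffs, x_di)
print(S_ci[50] < S_di[50])   # → True: CI pipes fail earlier by construction
```

A Bayesian treatment would place priors on `shape`, `scale`, and `coeffs` and propagate their posteriors into uncertainty bounds around these curves.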
PVT characterization and viscosity modeling and prediction of crude oils
DEFF Research Database (Denmark)
Cisneros, Eduardo Salvador P.; Dalberg, Anders; Stenby, Erling Halfdan
2004-01-01
In previous works, the general one-parameter friction theory (f-theory) models have been applied to the accurate viscosity modeling of reservoir fluids. As a base, the f-theory approach requires a compositional characterization procedure for the application of an equation of state (EOS), in most...... pressure, is also presented. The combination of the mass characterization scheme presented in this work and the f-theory can also deliver accurate viscosity modeling results. Additionally, depending on how extensive the compositional characterization is, the approach presented in this work may also...... deliver accurate viscosity predictions. The modeling approach presented in this work can deliver accurate viscosity and density modeling and prediction results over wide ranges of reservoir conditions, including the compositional changes induced by recovery processes such as gas injection....
SHMF: Interest Prediction Model with Social Hub Matrix Factorization
Directory of Open Access Journals (Sweden)
Chaoyuan Cui
2017-01-01
Full Text Available With the development of social networks, microblogs have become a major social communication tool. Microblogs carry a lot of valuable information, such as personal preferences, public opinion, and marketing signals. Consequently, research on user interest prediction in microblogs has positive practical significance. In fact, it is not easy to extract information associated with user interest orientation from constantly updated blog posts. Existing prediction approaches based on probabilistic factor analysis use blog posts published by the user to predict user interest. However, these methods are not very effective for users who post little but browse a lot. In this paper, we propose a new prediction model, called SHMF, using social hub matrix factorization. SHMF constructs the interest prediction model by combining the information from blog posts published by both the user and the direct neighbors in the user’s social hub. Our proposed model predicts user interest by integrating the user’s historical behavior and a temporal factor as well as the user’s friendships, thus achieving accurate forecasts of the user’s future interests. The experimental results on Sina Weibo show the efficiency and effectiveness of our proposed model.
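The matrix-factorization core that SHMF builds on can be sketched with plain SGD on a toy user-by-topic matrix; unobserved entries are predicted from two low-rank factors. The social-hub weighting and temporal factors of the actual model are omitted, and the data below are invented:

```python
import numpy as np

rng = np.random.default_rng(7)
# Toy user-by-interest-topic matrix (rows: users, cols: topics);
# zeros mark unobserved interests to be predicted.
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)
mask = R > 0
k = 2                                        # number of latent factors
P = 0.1 * rng.standard_normal((R.shape[0], k))   # user factors
Q = 0.1 * rng.standard_normal((R.shape[1], k))   # topic factors

lr, reg = 0.01, 0.02                         # step size, L2 regularization
for _ in range(2000):                        # SGD over observed entries only
    for u, i in zip(*np.nonzero(mask)):
        e = R[u, i] - P[u] @ Q[i]            # prediction error
        P[u] += lr * (e * Q[i] - reg * P[u])
        Q[i] += lr * (e * P[u] - reg * Q[i])

rmse = float(np.sqrt((((P @ Q.T - R) ** 2)[mask]).mean()))
print(round(rmse, 3))
```

After training, `P @ Q.T` fills in the zero entries with predicted interest scores; SHMF additionally blends in factors learned from each user's social-hub neighbors.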
Three-model ensemble wind prediction in southern Italy
Torcasio, Rosa Claudia; Federico, Stefano; Calidonna, Claudia Roberta; Avolio, Elenio; Drofa, Oxana; Landi, Tony Christian; Malguzzi, Piero; Buzzi, Andrea; Bonasoni, Paolo
2016-03-01
Quality of wind prediction is of great importance, since a good wind forecast allows the prediction of available wind power, improving the penetration of renewable energies into the energy market. Here, a 1-year (1 December 2012 to 30 November 2013) three-model ensemble (TME) experiment for wind prediction is considered. The models employed, run operationally at the National Research Council - Institute of Atmospheric Sciences and Climate (CNR-ISAC), are RAMS (Regional Atmospheric Modelling System), BOLAM (BOlogna Limited Area Model), and MOLOCH (MOdello LOCale in H coordinates). The area considered for the study is southern Italy and the measurements used for the forecast verification are those of the GTS (Global Telecommunication System). Comparison with observations is made every 3 h up to a forecast lead time of 48 h. Results show that the three-model ensemble outperforms the forecast of each individual model. The RMSE improvement compared to the best model is between 22 and 30 %, depending on the season. It is also shown that the three-model ensemble outperforms the IFS (Integrated Forecasting System) of the ECMWF (European Centre for Medium-Range Weather Forecasts) for the surface wind forecasts. Notably, the three-model ensemble forecast performs better than each unbiased model, showing the added value of the ensemble technique. Finally, the sensitivity of the three-model ensemble RMSE to the length of the training period is analysed.
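Why a multi-model ensemble can beat even its best member is easy to demonstrate with synthetic forecasts whose errors are (idealized as) independent: averaging partially cancels the random error components. The numbers below are illustrative, not the RAMS/BOLAM/MOLOCH data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
truth = 8.0 + 2.0 * rng.standard_normal(n)   # "observed" 10 m wind speed

# Three synthetic models with small biases and independent random errors.
f1 = truth + 0.3 + 1.0 * rng.standard_normal(n)
f2 = truth - 0.2 + 1.2 * rng.standard_normal(n)
f3 = truth + 0.1 + 1.1 * rng.standard_normal(n)
ens = (f1 + f2 + f3) / 3.0                   # equal-weight three-model ensemble

def rmse(f):
    return float(np.sqrt(((f - truth) ** 2).mean()))

print(rmse(ens) < min(rmse(f1), rmse(f2), rmse(f3)))
```

In practice member errors are correlated, so the gain is smaller than the idealized 1/sqrt(3) factor, which is consistent with the 22-30 % RMSE improvement the study reports.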
Predictive modeling of coral disease distribution within a reef system.
Directory of Open Access Journals (Sweden)
Gareth J Williams
2010-02-01
Full Text Available Diseases often display complex and distinct associations with their environment due to differences in etiology, modes of transmission between hosts, and the shifting balance between pathogen virulence and host resistance. Statistical modeling has been underutilized in coral disease research to explore the spatial patterns that result from this triad of interactions. We tested the hypotheses that: (1) coral diseases show distinct associations with multiple environmental factors; (2) incorporating interactions (synergistic collinearities) among environmental variables is important when predicting coral disease spatial patterns; and (3) modeling overall coral disease prevalence (the prevalence of multiple diseases as a single proportion value) will increase predictive error relative to modeling the same diseases independently. Four coral diseases: Porites growth anomalies (PorGA), Porites tissue loss (PorTL), Porites trematodiasis (PorTrem), and Montipora white syndrome (MWS), and their interactions with 17 predictor variables were modeled using boosted regression trees (BRT) within a reef system in Hawaii. Each disease showed distinct associations with the predictors. The environmental predictors showing the strongest overall associations with the coral diseases were both biotic and abiotic. PorGA was optimally predicted by a negative association with turbidity, PorTL and MWS by declines in butterflyfish and juvenile parrotfish abundance respectively, and PorTrem by a modal relationship with Porites host cover. Incorporating interactions among predictor variables contributed to the predictive power of our models, particularly for PorTrem. Combining diseases (using overall disease prevalence as the model response) led to an average six-fold increase in cross-validation predictive deviance over modeling the diseases individually. We therefore recommend coral diseases to be modeled separately, unless known to have etiologies that respond in a similar manner to
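Boosted regression trees can be sketched from scratch as stagewise fitting of regression stumps to residuals (the negative gradient under squared-error loss). The predictors and prevalence-like signal below are synthetic; real BRT implementations use deeper trees, stochastic subsampling, and cross-validated stopping:

```python
import numpy as np

def fit_stump(X, r):
    """Best single-split regression stump (feature, threshold, leaf means)
    for residuals r, chosen by exhaustive search over split points."""
    best, best_sse = None, np.inf
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j])[:-1]:   # keep both sides non-empty
            left = X[:, j] <= thr
            ml, mr = r[left].mean(), r[~left].mean()
            sse = ((r[left] - ml) ** 2).sum() + ((r[~left] - mr) ** 2).sum()
            if sse < best_sse:
                best, best_sse = (j, thr, ml, mr), sse
    return best

def stump_predict(stump, X):
    j, thr, ml, mr = stump
    return np.where(X[:, j] <= thr, ml, mr)

rng = np.random.default_rng(3)
X = rng.uniform(0.0, 1.0, size=(200, 3))    # e.g. turbidity, fish, host cover
y = np.sin(3.0 * X[:, 0]) + 0.5 * X[:, 1]   # hypothetical prevalence signal
F = np.full(len(y), y.mean())               # stage 0: predict the mean
for _ in range(50):                         # stagewise boosting rounds
    stump = fit_stump(X, y - F)             # residuals = negative gradient
    F = F + 0.1 * stump_predict(stump, X)   # shrinkage (learning rate) 0.1
print(round(float(((y - F) ** 2).mean()), 4))
```

Because each stump conditions on the current ensemble, interactions among predictors emerge naturally, which is the property the study exploits when testing hypothesis (2).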
Robust Model Predictive Control of a Wind Turbine
DEFF Research Database (Denmark)
Mirzaei, Mahmood; Poulsen, Niels Kjølstad; Niemann, Hans Henrik
2012-01-01
In this work the problem of robust model predictive control (robust MPC) of a wind turbine in the full load region is considered. A minimax robust MPC approach is used to tackle the problem. Nonlinear dynamics of the wind turbine are derived by combining blade element momentum (BEM) theory...... of the uncertain system is employed and a norm-bounded uncertainty model is used to formulate a minimax model predictive control. The resulting optimization problem is simplified by semidefinite relaxation and the controller obtained is applied on a full complexity, high fidelity wind turbine model. Finally...... and first principle modeling of the turbine flexible structure. Thereafter the nonlinear model is linearized using Taylor series expansion around system operating points. Operating points are determined by effective wind speed and an extended Kalman filter (EKF) is employed to estimate this. In addition...
International Nuclear Information System (INIS)
Ojiri, Hiroya; Mendenhall, William M.; Stringer, Scott P.; Johnson, Patches L.; Mancuso, Anthony A.
2002-01-01
Purpose: To establish whether the extent of neck disease on postradiation therapy (RT) computed tomography (CT) can predict the likelihood of positive neck nodes and, thereby, the necessity of planned post-RT neck dissection. Methods and Materials: Ninety-five patients who underwent post-RT neck dissection within 2 months for squamous cell carcinoma of the head and neck were eligible. In these 95 patients, 37 (32.7%) of 113 hemineck specimens were pathologically positive. On post-RT CT imaging studies, the number and size of lymph nodes >1 cm were recorded. Internal focal defects and the likelihood of extracapsular spread were graded. Results: If lymph nodes on post-RT CT were ≤15 mm, free of significant internal focal low-attenuation or calcification, and without imaging evidence of extracapsular spread, the surgical hemineck specimen was positive in 1 (3.4%) of the 29 hemineck specimens. A focal low-attenuation defect (p=0.0078) and evidence of extracapsular spread (p=0.0721) seen in the residual nodal mass on CT were independent predictors of a positive surgical specimen by multivariate analysis. Conclusion: CT findings on post-RT neck studies can help predict the likelihood of residual disease and, thereby, the necessity of planned post-RT neck dissection
Initial CGE Model Results Summary Exogenous and Endogenous Variables Tests
Energy Technology Data Exchange (ETDEWEB)
Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rivera, Michael Kelly [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-08-07
The following discussion presents initial results of tests of the most recent version of the National Infrastructure Simulation and Analysis Center Dynamic Computable General Equilibrium (CGE) model developed by Los Alamos National Laboratory (LANL). The intent of this effort is to test and assess the model’s behavioral properties. The tests evaluated whether the predicted impacts are reasonable from a qualitative perspective, that is, whether a predicted change, be it an increase or decrease in other model variables, is consistent with prior economic intuition and expectations. One of the purposes of this effort is to determine whether model changes are needed in order to improve the model's behavior qualitatively and quantitatively.
A Predictive Maintenance Model for Railway Tracks
DEFF Research Database (Denmark)
Li, Rui; Wen, Min; Salling, Kim Bang
2015-01-01
presents a mathematical model based on Mixed Integer Programming (MIP) which is designed to optimize the predictive railway tamping activities for ballasted track for a time horizon of up to four years. The objective function is set up to minimize the actual costs for the tamping machine (measured by time......). Five technical and economic aspects are taken into account to schedule tamping: (1) track degradation of the standard deviation of the longitudinal level over time; (2) track geometrical alignment; (3) track quality thresholds based on the train speed limits; (4) the dependency of the track quality
Predictive Capability Maturity Model for computational modeling and simulation.
Energy Technology Data Exchange (ETDEWEB)
Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.
2007-10-01
The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.
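A toy encoding of the six PCMM elements with four maturity levels (0-3). The element names and four-level scale come from the report; the min-aggregation rule is our own conservative assumption for illustration, since the PCMM assesses elements individually rather than prescribing a single aggregate score:

```python
# The six contributing elements listed in the PCMM report.
PCMM_ELEMENTS = [
    "representation and geometric fidelity",
    "physics and material model fidelity",
    "code verification",
    "solution verification",
    "model validation",
    "uncertainty quantification and sensitivity analysis",
]

def pcmm_aggregate(ratings):
    """Hypothetical conservative roll-up: the weakest element (lowest of the
    0-3 maturity levels) limits the credibility of the whole M&S effort."""
    assert set(ratings) == set(PCMM_ELEMENTS), "rate every element"
    return min(ratings.values())

ratings = {element: 2 for element in PCMM_ELEMENTS}
ratings["model validation"] = 1        # one weak element drags the roll-up down
print(pcmm_aggregate(ratings))         # → 1
```

In an actual assessment the per-element scorecard, not the roll-up, is the deliverable; the roll-up merely flags where the effort is least mature.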
Effective modelling for predictive analytics in data science ...
African Journals Online (AJOL)
Effective modelling for predictive analytics in data science. ... the nearabsence of empirical or factual predictive analytics in the mainstream research going on ... Keywords: Predictive Analytics, Big Data, Business Intelligence, Project Planning.
Hierarchical Neural Regression Models for Customer Churn Prediction
Directory of Open Access Journals (Sweden)
Golshan Mohammadi
2013-01-01
Full Text Available As customers are the main assets of each industry, customer churn prediction is becoming a major task for companies that want to remain competitive. In the literature, the better applicability and efficiency of hierarchical data mining techniques has been reported. This paper considers three hierarchical models obtained by combining four different data mining techniques for churn prediction: back-propagation artificial neural networks (ANN), self-organizing maps (SOM), alpha-cut fuzzy c-means (α-FCM), and the Cox proportional hazards regression model. The hierarchical models are ANN + ANN + Cox, SOM + ANN + Cox, and α-FCM + ANN + Cox. In particular, the first component of each model aims to cluster data into churner and nonchurner groups and also to filter out unrepresentative data or outliers. The clustered data are then used by the second technique to assign customers to churner and nonchurner groups. Finally, the correctly classified data are used to create the Cox proportional hazards model. To evaluate the performance of the hierarchical models, an Iranian mobile dataset is considered. The experimental results show that the hierarchical models outperform the single Cox regression baseline model in terms of prediction accuracy, Types I and II errors, RMSE, and MAD metrics. In addition, the α-FCM + ANN + Cox model performs significantly better than the two other hierarchical models.
Enhancing pavement performance prediction models for the Illinois Tollway System
Directory of Open Access Journals (Sweden)
Laxmikanth Premkumar
2016-01-01
Full Text Available Accurate pavement performance prediction plays an important role in prioritizing future maintenance and rehabilitation needs and in predicting future pavement condition in a pavement management system. The Illinois State Toll Highway Authority (Tollway), with over 2000 lane miles of pavement, utilizes the condition rating survey (CRS) methodology to rate pavement performance. Pavement performance models developed in the past for the Illinois Department of Transportation (IDOT) are used by the Tollway to predict the future condition of its network. The model projects future CRS ratings based on pavement type, thickness, traffic, pavement age and current CRS rating. However, with time and the inclusion of newer pavement types, there was a need to calibrate the existing pavement performance models, as well as to develop models for the newer pavement types. This study presents the results of calibrating the existing models and developing new models for the various pavement types in the Illinois Tollway network. The predicted future condition of the pavements is used in estimating their remaining service life to failure, which is of immediate use in recommending future maintenance and rehabilitation requirements for the network. Keywords: Pavement performance models, Remaining life, Pavement management
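The remaining-service-life calculation described here can be sketched with a hypothetical power-law CRS deterioration curve. The functional form, coefficients, and failure threshold below are illustrative assumptions, not the calibrated Tollway models:

```python
def crs(t, crs0=9.0, a=0.08, b=1.4):
    """Hypothetical deterioration curve: CRS falls from crs0 (best condition
    on a 1-9 scale) as a power law of pavement age t in years."""
    return crs0 - a * t ** b

def remaining_life(t_now, threshold=4.5, crs0=9.0, a=0.08, b=1.4):
    """Years until the predicted CRS drops to the failure threshold,
    found by inverting the deterioration curve analytically."""
    t_fail = ((crs0 - threshold) / a) ** (1.0 / b)
    return max(0.0, t_fail - t_now)

# A 10-year-old section: how many years until it reaches the threshold?
print(round(remaining_life(10.0), 1))
```

A calibrated system would fit `a` and `b` per pavement type (and per traffic/thickness class) from observed CRS histories, then rank sections by remaining life to schedule maintenance and rehabilitation.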
Predicting nucleic acid binding interfaces from structural models of proteins.
Dror, Iris; Shazman, Shula; Mukherjee, Srayanta; Zhang, Yang; Glaser, Fabian; Mandel-Gutfreund, Yael
2012-02-01
The function of DNA- and RNA-binding proteins can be inferred from the characterization and accurate prediction of their binding interfaces. However, the main pitfall of various structure-based methods for predicting nucleic acid binding function is that they are all limited to a relatively small number of proteins for which high-resolution three-dimensional structures are available. In this study, we developed a pipeline for extracting functional electrostatic patches from surfaces of protein structural models, obtained using the I-TASSER protein structure predictor. The largest positive patches are extracted from the protein surface using the patchfinder algorithm. We show that functional electrostatic patches extracted from an ensemble of structural models highly overlap the patches extracted from high-resolution structures. Furthermore, by testing our pipeline on a set of 55 known nucleic acid binding proteins for which I-TASSER produces high-quality models, we show that the method accurately identifies the nucleic acid binding interface on structural models of proteins. Employing a combined patch approach, we show that patches extracted from an ensemble of models predict the real nucleic acid binding interfaces better than patches extracted from independent models. Overall, these results suggest that combining information from a collection of low-resolution structural models could be a valuable approach for functional annotation. We suggest that our method will also be applicable to predicting other functional surfaces of proteins with unknown structure. Copyright © 2011 Wiley Periodicals, Inc.
Improving Saliency Models by Predicting Human Fixation Patches
Dubey, Rachit
2015-04-16
There is growing interest in studying the Human Visual System (HVS) to supplement and improve the performance of computer vision tasks. A major challenge for current visual saliency models is predicting saliency in cluttered scenes (i.e. high false positive rate). In this paper, we propose a fixation patch detector that predicts image patches that contain human fixations with high probability. Our proposed model detects sparse fixation patches with an accuracy of 84% and eliminates non-fixation patches with an accuracy of 84%, demonstrating that low-level image features can indeed be used to short-list and identify human fixation patches. We then show how these detected fixation patches can be used as saliency priors for popular saliency models, thus reducing false positives while maintaining true positives. Extensive experimental results show that our proposed approach allows state-of-the-art saliency methods to achieve better prediction performance on benchmark datasets.
Improving Saliency Models by Predicting Human Fixation Patches
Dubey, Rachit; Dave, Akshat; Ghanem, Bernard
2015-01-01
There is growing interest in studying the Human Visual System (HVS) to supplement and improve the performance of computer vision tasks. A major challenge for current visual saliency models is predicting saliency in cluttered scenes (i.e. high false positive rate). In this paper, we propose a fixation patch detector that predicts image patches that contain human fixations with high probability. Our proposed model detects sparse fixation patches with an accuracy of 84% and eliminates non-fixation patches with an accuracy of 84%, demonstrating that low-level image features can indeed be used to short-list and identify human fixation patches. We then show how these detected fixation patches can be used as saliency priors for popular saliency models, thus reducing false positives while maintaining true positives. Extensive experimental results show that our proposed approach allows state-of-the-art saliency methods to achieve better prediction performance on benchmark datasets.
Model Predictive Control of the Hybrid Ventilation for Livestock
DEFF Research Database (Denmark)
Wu, Zhuang; Stoustrup, Jakob; Trangbæk, Klaus
2006-01-01
In this paper, design and simulation results of Model Predictive Control (MPC) strategy for livestock hybrid ventilation systems and associated indoor climate through variable valve openings and exhaust fans are presented. The design is based on thermal comfort parameters for poultry in barns...
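The receding-horizon idea behind MPC can be illustrated with a deliberately small sketch: a scalar barn-temperature model whose coefficients, setpoint, and horizon are all invented for illustration (the actual livestock ventilation model in the paper is multivariable and constrained). At each step the controller solves an unconstrained least-squares problem over the horizon and applies only the first input.

```python
import numpy as np

# Toy barn temperature model: T[k+1] = a*T[k] + b*u[k] + c  (u = ventilation rate)
a, b, c = 0.9, -0.5, 2.0   # hypothetical discrete-time coefficients
T_ref, N = 20.0, 10        # comfort setpoint and prediction horizon

def mpc_step(T0, lam=0.1):
    """Choose the input sequence minimizing sum (T - T_ref)^2 + lam*u^2
    over the horizon (unconstrained least squares); return the first input."""
    # Stack the horizon dynamics: T_k = a^k*T0 + sum_j a^(k-1-j)*b*u_j + drift
    Phi = np.zeros((N, N))
    free = np.zeros(N)
    for k in range(1, N + 1):
        free[k - 1] = a**k * T0 + c * sum(a**j for j in range(k))
        for j in range(k):
            Phi[k - 1, j] = a**(k - 1 - j) * b
    # Least-squares solve of [Phi; sqrt(lam)*I] u ~= [T_ref - free; 0]
    A = np.vstack([Phi, np.sqrt(lam) * np.eye(N)])
    rhs = np.concatenate([T_ref - free, np.zeros(N)])
    u = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return u[0]

# Receding-horizon simulation from a hot start
T = 26.0
for _ in range(30):
    u = mpc_step(T)
    T = a * T + b * u + c
print(f"temperature after 30 steps: {T:.2f}")
```

A real implementation would add the valve-opening and fan constraints the paper mentions, which turns each step into a quadratic program rather than a least-squares solve.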
Atmospheric modelling for seasonal prediction at the CSIR
CSIR Research Space (South Africa)
Landman, WA
2014-10-01
Full Text Available re-forecasts) made at lead-times which are the result of forcing the CCAM with predicted SST (while the sea-ice remains specified as climatological values) in order to determine how the model can be expected to perform under real-time operational...
Combining GPS measurements and IRI model predictions
International Nuclear Information System (INIS)
Hernandez-Pajares, M.; Juan, J.M.; Sanz, J.; Bilitza, D.
2002-01-01
The free electrons distributed in the ionosphere (between one hundred and thousands of km in height) produce a frequency-dependent effect on Global Positioning System (GPS) signals: a delay in the pseudo-range and an advance in the carrier phase. These effects are proportional to the columnar electron density between the satellite and receiver, i.e. the integrated electron density along the ray path. Global ionospheric TEC (total electron content) maps can be obtained with GPS data from a network of ground IGS (international GPS service) reference stations with an accuracy of a few TEC units. The comparison with the TOPEX TEC, mainly measured over the oceans far from the IGS stations, shows a mean bias and standard deviation of about 2 and 5 TECUs, respectively. The discrepancies between the STEC predictions and the observed values show an RMS typically below 5 TECUs (which also includes the alignment code noise). The existence of a growing database of 2-hourly global TEC maps with a resolution of 5x2.5 degrees in longitude and latitude can be used to improve the IRI prediction capability of the TEC. When the IRI predictions and the GPS estimations are compared for a three-month period around the solar maximum, they are in good agreement for middle latitudes. An over-determination of IRI TEC has been found at the extreme latitudes, the IRI predictions being typically two times higher than the GPS estimations. Finally, local fits of the IRI model can be done by tuning the SSN from STEC GPS observations.
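The frequency-dependent effect mentioned above follows, to first order, the standard relation delay [m] ≈ 40.3 · TEC / f², which makes the TEC-unit error figures easy to translate into range error:

```python
# First-order ionospheric group delay on a GPS signal: delay [m] = 40.3 * TEC / f^2,
# with TEC in electrons/m^2 and f in Hz (1 TECU = 1e16 electrons/m^2).
def iono_delay_m(tec_tecu, f_hz):
    return 40.3 * tec_tecu * 1e16 / f_hz**2

F_L1 = 1575.42e6  # GPS L1 carrier frequency [Hz]
F_L2 = 1227.60e6  # GPS L2 carrier frequency [Hz]

# A 5 TECU error (the typical RMS quoted above) expressed as L1 range error:
print(f"L1 delay for 5 TECU: {iono_delay_m(5, F_L1):.3f} m")
# The carrier phase is advanced by the same magnitude that the code is delayed,
# which is what makes the dual-frequency combination observable.
```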
Mathematical models for indoor radon prediction
International Nuclear Information System (INIS)
Malanca, A.; Pessina, V.; Dallara, G.
1995-01-01
It is known that the indoor radon (Rn) concentration can be predicted by means of mathematical models. The simplest model relies on two variables only: the Rn source strength and the air exchange rate. In the Lawrence Berkeley Laboratory (LBL) model several environmental parameters are combined into a complex equation; besides, a correlation between the ventilation rate and the Rn entry rate from the soil is admitted. The measurements were carried out using activated carbon canisters. Seventy-five measurements of Rn concentrations were made inside two rooms placed on the second floor of a building block. One of the rooms had a single-glazed window whereas the other room had a double pane window. During three different experimental protocols, the mean Rn concentration was always higher in the room with the double-glazed window. That behavior can be accounted for by the simplest model. A further set of 450 Rn measurements was collected inside a ground-floor room with a grounding well in it. This trend may be accounted for by the LBL model
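The simplest two-variable model mentioned above reduces, at steady state, to a mass balance between radon entry and removal by ventilation and decay. The sketch below uses invented room numbers; it reproduces the qualitative finding that a tighter (double-glazed, low air-exchange) room accumulates more radon.

```python
import math

# Simplest indoor radon model: at steady state the concentration balances the
# entry rate against removal by ventilation and radioactive decay:
#   C = S / (V * (lambda_v + lambda_rn))
# S: Rn entry rate [Bq/h], V: room volume [m^3],
# lambda_v: air-exchange rate [1/h], lambda_rn: Rn-222 decay constant [1/h].
LAMBDA_RN = math.log(2) / (3.82 * 24)  # half-life ~3.82 days -> 1/h

def radon_steady_state(S, V, lambda_v):
    return S / (V * (lambda_v + LAMBDA_RN))

# Hypothetical room: 10 Bq/h entering a 50 m^3 room
print(radon_steady_state(10, 50, 0.5))   # leakier single-glazed room [Bq/m^3]
print(radon_steady_state(10, 50, 0.1))   # tighter double-glazed room [Bq/m^3]
```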
Towards predictive models for transitionally rough surfaces
Abderrahaman-Elena, Nabil; Garcia-Mayoral, Ricardo
2017-11-01
We analyze and model the previously presented decomposition for flow variables in DNS of turbulence over transitionally rough surfaces. The flow is decomposed into two contributions: one produced by the overlying turbulence, which has no footprint of the surface texture, and one induced by the roughness, which is essentially the time-averaged flow around the surface obstacles, but modulated in amplitude by the first component. The roughness-induced component closely resembles the laminar steady flow around the roughness elements at the same non-dimensional roughness size. For small - yet transitionally rough - textures, the roughness-free component is essentially the same as over a smooth wall. Based on these findings, we propose predictive models for the onset of the transitionally rough regime. Project supported by the Engineering and Physical Sciences Research Council (EPSRC).
PEEX Modelling Platform for Seamless Environmental Prediction
Baklanov, Alexander; Mahura, Alexander; Arnold, Stephen; Makkonen, Risto; Petäjä, Tuukka; Kerminen, Veli-Matti; Lappalainen, Hanna K.; Ezau, Igor; Nuterman, Roman; Zhang, Wen; Penenko, Alexey; Gordov, Evgeny; Zilitinkevich, Sergej; Kulmala, Markku
2017-04-01
The Pan-Eurasian EXperiment (PEEX) is a multidisciplinary, multi-scale research programme started in 2012 and aimed at resolving the major uncertainties in Earth System Science and global sustainability issues concerning the Arctic and boreal Northern Eurasian regions and China. Such challenges include climate change, air quality, biodiversity loss, chemicalization, food supply, and the use of natural resources by mining, industry, energy production and transport. The research infrastructure introduces the current state-of-the-art modeling platform and observation systems in the Pan-Eurasian region and presents the future baselines for the coherent and coordinated research infrastructures in the PEEX domain. The PEEX modeling Platform is characterized by a complex seamless integrated Earth System Modeling (ESM) approach, in combination with specific models of different processes and elements of the system, acting on different temporal and spatial scales. The ensemble approach is taken to the integration of modeling results from different models, participants and countries. PEEX utilizes the full potential of a hierarchy of models: scenario analysis, inverse modeling, and modeling based on measurement needs and processes. The models are validated and constrained by available in-situ and remote sensing data of various spatial and temporal scales using data assimilation and top-down modeling. The analyses of the anticipated large volumes of data produced by available models and sensors will be supported by a dedicated virtual research environment developed for these purposes.
Resource-estimation models and predicted discovery
International Nuclear Information System (INIS)
Hill, G.W.
1982-01-01
Resources have been estimated by predictive extrapolation from past discovery experience, by analogy with better explored regions, or by inference from evidence of depletion of targets for exploration. Changes in technology and new insights into geological mechanisms have occurred sufficiently often in the long run to form part of the pattern of mature discovery experience. The criterion, that a meaningful resource estimate needs an objective measure of its precision or degree of uncertainty, excludes 'estimates' based solely on expert opinion. This is illustrated by development of error measures for several persuasive models of discovery and production of oil and gas in USA, both annually and in terms of increasing exploration effort. Appropriate generalizations of the models resolve many points of controversy. This is illustrated using two USA data sets describing discovery of oil and of U3O8; the latter set highlights an inadequacy of available official data. Review of the oil-discovery data set provides a warrant for adjusting the time-series prediction to a higher resource figure for USA petroleum. (author)
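One classic member of the family of discovery-and-production models discussed above is the logistic (Hubbert-type) curve; the sketch below uses invented parameters, not fitted USA values, to show how cumulative discovery and the annual discovery rate relate.

```python
import numpy as np

# Logistic (Hubbert-type) model of cumulative discovery:
#   Q(t) = URR / (1 + exp(-k*(t - tm)))
# URR: ultimately recoverable resource, tm: midpoint year, k: growth rate.
def cumulative_discovery(t, urr, k, tm):
    return urr / (1 + np.exp(-k * (t - tm)))

def annual_discovery(t, urr, k, tm):
    q = cumulative_discovery(t, urr, k, tm)
    return k * q * (1 - q / urr)  # dQ/dt, which peaks at t = tm

years = np.arange(1900, 2001)
rate = annual_discovery(years, urr=200.0, k=0.06, tm=1960)
print(f"peak discovery year: {years[rate.argmax()]}")
```

Fitting such a curve to an incomplete discovery record is what yields the resource estimate, and the fit residuals provide the objective error measure the abstract calls for.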
Study on prediction model of irradiation embrittlement for reactor pressure vessel steel
International Nuclear Information System (INIS)
Wang Rongshan; Xu Chaoliang; Huang Ping; Liu Xiangbing; Ren Ai; Chen Jun; Li Chengliang
2014-01-01
The study of prediction models of irradiation embrittlement for reactor pressure vessel (RPV) steel is an important method for long term operation. Based on a deep analysis of the previous prediction models developed worldwide, the drawbacks of these models were identified and a new irradiation embrittlement prediction model, PMIE-2012, was developed. A corresponding reliability assessment was carried out using irradiation surveillance data. The assessment results show that PMIE-2012 has high reliability and accuracy in irradiation embrittlement prediction. (authors)
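The abstract does not give the PMIE-2012 formula itself, so as an illustration of the general shape of such embrittlement trend curves, the sketch below uses the well-known U.S. Regulatory Guide 1.99 Rev. 2 fluence function; the chemistry factor value is arbitrary.

```python
import math

# Illustrative irradiation-embrittlement trend curve (not the PMIE-2012 model):
# the Reg. Guide 1.99 Rev. 2 form widely used for RPV steels,
#   delta_RT_NDT = CF * f^(0.28 - 0.10*log10(f)),
# where CF is a chemistry factor [degC or degF] and f the fast-neutron
# fluence in units of 1e19 n/cm^2 (E > 1 MeV).
def transition_temp_shift(cf, f):
    return cf * f ** (0.28 - 0.10 * math.log10(f))

for f in (0.5, 1.0, 3.0):
    print(f"fluence {f}e19 n/cm^2: shift = {transition_temp_shift(100.0, f):.1f}")
```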
Predictions of titanium alloy properties using thermodynamic modeling tools
Zhang, F.; Xie, F.-Y.; Chen, S.-L.; Chang, Y. A.; Furrer, D.; Venkatesh, V.
2005-12-01
Thermodynamic modeling tools have become essential in understanding the effect of alloy chemistry on the final microstructure of a material. Implementation of such tools to improve titanium processing via parameter optimization has resulted in significant cost savings through the elimination of shop/laboratory trials and tests. In this study, a thermodynamic modeling tool developed at CompuTherm, LLC, is being used to predict β transus, phase proportions, phase chemistries, partitioning coefficients, and phase boundaries of multicomponent titanium alloys. This modeling tool includes Pandat, software for multicomponent phase equilibrium calculations, and PanTitanium, a thermodynamic database for titanium alloys. Model predictions are compared with experimental results for one α-β alloy (Ti-64) and two near-β alloys (Ti-17 and Ti-10-2-3). The alloying elements, especially the interstitial elements O, N, H, and C, have been shown to have a significant effect on the β transus temperature, and are discussed in more detail herein.
Predicting turns in proteins with a unified model.
Directory of Open Access Journals (Sweden)
Qi Song
Full Text Available MOTIVATION: Turns are a critical element of the structure of a protein; turns play a crucial role in loops, folds, and interactions. Current prediction methods are well developed for the prediction of individual turn types, including α-turn, β-turn, and γ-turn, etc. However, for further protein structure and function prediction it is necessary to develop a uniform model that can accurately predict all types of turns simultaneously. RESULTS: In this study, we present a novel approach, TurnP, which offers the ability to investigate all the turns in a protein based on a unified model. The main characteristics of TurnP are: (i) using newly exploited features of structural evolution information (secondary structure and shape string of protein) based on structure homologies, (ii) considering all types of turns in a unified model, and (iii) practical capability of accurate prediction of all turns simultaneously for a query. TurnP utilizes predicted secondary structures and predicted shape strings, both of which have greater accuracy, based on innovative technologies which were both developed by our group. Then, sequence and structural evolution features, namely the profiles of sequence, secondary structure, and shape strings, are generated by sequence and structure alignment. When TurnP was validated on a non-redundant dataset (4,107 entries) by five-fold cross-validation, we achieved an accuracy of 88.8% and a sensitivity of 71.8%, which exceeded state-of-the-art predictors for individual turn types. Newly determined sequences and the EVA and CASP9 datasets were used as independent tests, and the results we achieved were outstanding for turn predictions and confirmed the good performance of TurnP for practical applications.
Predictive modeling of pedestal structure in KSTAR using EPED model
Energy Technology Data Exchange (ETDEWEB)
Han, Hyunsun; Kim, J. Y. [National Fusion Research Institute, Daejeon 305-806 (Korea, Republic of); Kwon, Ohjin [Department of Physics, Daegu University, Gyeongbuk 712-714 (Korea, Republic of)
2013-10-15
A predictive calculation is given for the structure of edge pedestal in the H-mode plasma of the KSTAR (Korea Superconducting Tokamak Advanced Research) device using the EPED model. Particularly, the dependence of pedestal width and height on various plasma parameters is studied in detail. The two codes, ELITE and HELENA, are utilized for the stability analysis of the peeling-ballooning and kinetic ballooning modes, respectively. Summarizing the main results, the pedestal slope and height have a strong dependence on plasma current, rapidly increasing with it, while the pedestal width is almost independent of it. The plasma density or collisionality gives initially a mild stabilization, increasing the pedestal slope and height, but above some threshold value its effect turns to a destabilization, reducing the pedestal width and height. Among several plasma shape parameters, the triangularity gives the most dominant effect, rapidly increasing the pedestal width and height, while the effect of elongation and squareness appears to be relatively weak. Implications of these edge results, particularly in relation to the global plasma performance, are discussed.
FIRE BEHAVIOR PREDICTING MODELS EFFICIENCY IN BRAZILIAN COMMERCIAL EUCALYPT PLANTATIONS
Directory of Open Access Journals (Sweden)
Benjamin Leonardo Alves White
2016-12-01
Full Text Available Knowing how a wildfire will behave is extremely important in order to assist in fire suppression and prevention operations. Since the 1940s, mathematical models to estimate how a fire will behave have been developed worldwide; however, none of them, until now, had their efficiency tested in Brazilian commercial eucalypt plantations or in other vegetation types in the country. This study aims to verify the accuracy of the Rothermel (1972) fire spread model, the Byram (1959) flame length model, and the fire spread and length equations derived from the McArthur (1962) control burn meters. To meet these objectives, 105 experimental laboratory fires were conducted and their results compared with the values predicted by the models tested. The Rothermel and Byram models predicted better than McArthur's; nevertheless, all of them underestimated the fire behavior aspects evaluated and were statistically different from the experimental data.
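Of the models named above, Byram's flame length relation is compact enough to show directly; in metric form it is L = 0.0775·I^0.46, with fireline intensity I in kW/m and flame length L in metres:

```python
# Byram (1959) flame length model, metric form: L = 0.0775 * I^0.46,
# with I the fireline intensity [kW/m] and L the flame length [m].
def byram_flame_length(intensity_kw_m):
    return 0.0775 * intensity_kw_m ** 0.46

for intensity in (100, 500, 1000):
    print(f"I = {intensity:4d} kW/m -> L = {byram_flame_length(intensity):.2f} m")
```

Testing a model of this kind against laboratory burns amounts to comparing these predicted lengths with the observed flame lengths at measured intensities.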
Modeling and prediction of flotation performance using support vector regression
Directory of Open Access Journals (Sweden)
Despotović Vladimir
2017-01-01
Full Text Available Continuous efforts have been made in recent years to improve the process of paper recycling, as it is of critical importance for saving wood, water and energy resources. Flotation deinking is considered one of the key methods for separation of ink particles from the cellulose fibres. Attempts to model the flotation deinking process have often resulted in complex models that are difficult to implement and use. In this paper a model for prediction of flotation performance based on Support Vector Regression (SVR) is presented. Representative data samples were created in the laboratory, under a variety of practical control variables for the flotation deinking process, including different reagents, pH values and flotation residence time. A predictive model trained on these data samples was created, and the flotation performance was assessed, showing that Support Vector Regression is a promising method even when the dataset used for training the model is limited.
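An SVR model of this kind can be sketched in a few lines. The data below are synthetic stand-ins for the laboratory samples (the response surface, feature ranges, and hyperparameters are all invented), but the structure — control variables in, flotation performance out — matches the abstract.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(42)
# Synthetic flotation data (hypothetical): columns = [reagent dose, pH, residence time]
X = rng.uniform([0.5, 6.0, 2.0], [2.0, 10.0, 12.0], size=(80, 3))
# Invented response: efficiency rising with dose and time, peaking near pH 8
y = (40 + 15 * X[:, 0] + 3 * X[:, 2] - 2.0 * (X[:, 1] - 8.0) ** 2
     + rng.normal(0, 1.0, 80))

model = SVR(kernel="rbf", C=100.0, epsilon=0.5).fit(X, y)
pred = model.predict([[1.2, 8.0, 8.0]])
print(f"predicted flotation efficiency: {pred[0]:.1f}")
```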
An Operational Model for the Prediction of Jet Blast
2012-01-09
This paper presents an operational model for the prediction of jet blast. The model was developed based upon three modules: a jet exhaust model, a jet centerline decay model and an aircraft motion model. The final analysis was compared with d...
A Novel Exercise Thermophysiology Comfort Prediction Model with Fuzzy Logic
Directory of Open Access Journals (Sweden)
Nan Jia
2016-01-01
Full Text Available Participation in a regular exercise program can improve health status and contribute to an increase in life expectancy. However, exercise accidents such as dehydration, exertional heatstroke, syncope, and even sudden death can occur. If these accidents can be analyzed or predicted before they happen, uncomfortable or dangerous conditions can be alleviated or avoided. Therefore, an exercise thermophysiology comfort prediction model is needed. In this paper, coupling the thermal interactions among the human body, clothing, and environment (HCE) as well as the human body's physiological properties, a human thermophysiology regulatory model is designed to enhance the human thermophysiology simulation in the HCE system. Some important thermal and physiological performances can be simulated. Based on the simulation results, a human exercise thermophysiology comfort prediction method using a fuzzy inference system is proposed. The experiment results show that the predicted thermophysiological comfort trend agrees with the experimental trend. Finally, a mobile application platform for human exercise comfort prediction is designed and implemented.
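The fuzzy inference step can be sketched as a minimal Mamdani-style system. Everything here — the input variables, membership functions, rules, and output scale — is invented for illustration; the paper's actual system is driven by the thermophysiology simulation outputs.

```python
import numpy as np

# Minimal Mamdani-style fuzzy inference (all membership functions invented):
# inputs core temperature [degC] and heart rate [bpm], output a 0-100 risk score.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def exercise_risk(core_temp, heart_rate):
    temp_high = tri(core_temp, 37.5, 39.0, 40.5)
    temp_ok = tri(core_temp, 36.0, 37.0, 38.0)
    hr_high = tri(heart_rate, 140, 180, 220)
    hr_ok = tri(heart_rate, 60, 100, 150)

    z = np.linspace(0, 100, 201)       # output universe
    risk_high = tri(z, 50, 100, 150)   # shoulder toward 100
    risk_low = tri(z, -50, 0, 50)      # shoulder toward 0

    # Rules: IF temp high AND hr high THEN risk high;
    #        IF temp ok AND hr ok THEN risk low.
    w_high = min(temp_high, hr_high)
    w_low = min(temp_ok, hr_ok)
    agg = np.maximum(np.minimum(w_high, risk_high), np.minimum(w_low, risk_low))
    if agg.sum() == 0:
        return 50.0  # no rule fires: neutral score
    return float((z * agg).sum() / agg.sum())  # centroid defuzzification

print(f"resting:  {exercise_risk(36.9, 95):.0f}")
print(f"strained: {exercise_risk(39.2, 185):.0f}")
```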
In silico modeling to predict drug-induced phospholipidosis
International Nuclear Information System (INIS)
Choi, Sydney S.; Kim, Jae S.; Valerio, Luis G.; Sadrieh, Nakissa
2013-01-01
Drug-induced phospholipidosis (DIPL) is a preclinical finding during pharmaceutical drug development that has implications on the course of drug development and regulatory safety review. A principal characteristic of drugs inducing DIPL is known to be a cationic amphiphilic structure. This provides evidence for a structure-based explanation and an opportunity to analyze properties and structures of drugs with the histopathologic findings for DIPL. In previous work from the FDA, in silico quantitative structure–activity relationship (QSAR) modeling using machine learning approaches has shown promise with a large dataset of drugs, but included unconfirmed data as well. In this study, we report the construction and validation of a battery of complementary in silico QSAR models using the FDA's updated database on phospholipidosis, new algorithms and predictive technologies, and in particular, we address high performance with a high-confidence dataset. The results of our modeling for DIPL include rigorous external validation tests showing 80–81% concordance. Furthermore, the predictive performance characteristics include models with high sensitivity and specificity, in most cases ≥ 80%, leading to the desired high negative and positive predictivity. These models are intended to be utilized for regulatory toxicology applied science needs in screening new drugs for DIPL. - Highlights: • New in silico models for predicting drug-induced phospholipidosis (DIPL) are described. • The training set data in the models is derived from the FDA's phospholipidosis database. • We find excellent predictivity values of the models based on external validation. • The models can support drug screening and regulatory decision-making on DIPL
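The validation metrics quoted above (concordance, sensitivity, specificity, and positive/negative predictivity) all derive from a 2x2 confusion matrix; the counts in this sketch are made up for illustration.

```python
# Validation metrics for a binary classifier from a 2x2 confusion matrix
# (tp/fp/tn/fn counts here are invented for illustration).
def qsar_metrics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),          # true positive rate
        "specificity": tn / (tn + fp),          # true negative rate
        "ppv": tp / (tp + fp),                  # positive predictivity
        "npv": tn / (tn + fn),                  # negative predictivity
        "concordance": (tp + tn) / (tp + fp + tn + fn),
    }

m = qsar_metrics(tp=85, fp=15, tn=77, fn=13)
for name, value in m.items():
    print(f"{name}: {value:.2%}")
```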
Predictive modeling of mosquito abundance and dengue transmission in Kenya
Caldwell, J.; Krystosik, A.; Mutuku, F.; Ndenga, B.; LaBeaud, D.; Mordecai, E.
2017-12-01
Approximately 390 million people are exposed to dengue virus every year, and with no widely available treatments or vaccines, predictive models of disease risk are valuable tools for vector control and disease prevention. The aim of this study was to modify and improve climate-driven predictive models of dengue vector abundance (Aedes spp. mosquitoes) and viral transmission to people in Kenya. We simulated disease transmission using a temperature-driven mechanistic model and compared model predictions with vector trap data for larvae, pupae, and adult mosquitoes collected between 2014 and 2017 at four sites across urban and rural villages in Kenya. We tested predictive capacity of our models using four temperature measurements (minimum, maximum, range, and anomalies) across daily, weekly, and monthly time scales. Our results indicate seasonal temperature variation is a key driving factor of Aedes mosquito abundance and disease transmission. These models can help vector control programs target specific locations and times when vectors are likely to be present, and can be modified for other Aedes-transmitted diseases and arboviral endemic regions around the world.
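Temperature-driven mechanistic models of this kind typically represent mosquito trait thermal responses with unimodal curves such as the Brière function; the parameter values below are illustrative placeholders, not fitted Aedes values.

```python
import math

# Briere thermal-response curve commonly used in temperature-driven mosquito
# models: trait(T) = c * T * (T - T0) * sqrt(Tm - T) for T0 < T < Tm, else 0.
# Parameter values are illustrative, not fitted Aedes values.
def briere(T, c=2.0e-4, T0=13.0, Tm=40.0):
    if T <= T0 or T >= Tm:
        return 0.0
    return c * T * (T - T0) * math.sqrt(Tm - T)

temps = range(10, 45)
peak_T = max(temps, key=briere)
print(f"trait peaks near {peak_T} degC")
```

The unimodal shape is what makes seasonal temperature variation a driver of predicted abundance: sites that cross the peak seasonally produce strongly time-varying transmission risk.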
Prediction of high level vibration test results by use of available inelastic analysis techniques
International Nuclear Information System (INIS)
Hofmayer, C.H.; Park, Y.J.; Costello, J.F.
1991-01-01
As part of a cooperative study between the United States and Japan, the US Nuclear Regulatory Commission and the Ministry of International Trade and Industry of Japan agreed to perform a test program that would subject a large scale piping model to significant plastic strains under excitation conditions much greater than the design condition for nuclear power plants. The objective was to compare the results of the tests with state-of-the-art analyses. Comparisons were done at different excitation levels from elastic to elastic-plastic to levels where cracking was induced in the test model. The program was called the High Level Vibration Test (HLVT). The HLVT was performed on the seismic table at the Tadotsu Engineering Laboratory of Nuclear Power Engineering Test Center in Japan. The test model was constructed by modifying the 1/2.5 scale model of one loop of a PWR primary coolant system which was previously tested by NUPEC as part of their seismic proving test program. A comparison of various analysis techniques with test results shows a higher prediction error in the detailed strain values than in the overall response values. This prediction error is magnified as the plasticity in the test model increases. There is no significant difference in the peak responses between the simplified and the detailed analyses. A comparison between various detailed finite element model runs indicates that the material properties and plasticity modeling have a significant impact on the plastic strain responses under dynamic loading reversals. 5 refs., 12 figs
Data driven propulsion system weight prediction model
Gerth, Richard J.
1994-10-01
The objective of the research was to develop a method to predict the weight of paper engines, i.e., engines that are in the early stages of development. The impetus for the project was the Single Stage To Orbit (SSTO) project, where engineers need to evaluate alternative engine designs. Since the SSTO is a performance driven project the performance models for alternative designs were well understood. The next tradeoff is weight. Since it is known that engine weight varies with thrust levels, a model is required that would allow discrimination between engines that produce the same thrust. Above all, the model had to be rooted in data with assumptions that could be justified based on the data. The general approach was to collect data on as many existing engines as possible and build a statistical model of the engines weight as a function of various component performance parameters. This was considered a reasonable level to begin the project because the data would be readily available, and it would be at the level of most paper engines, prior to detailed component design.
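A statistical weight model of the kind described is often fit as a power law in log-log space; the sketch below uses synthetic engine data (all thrust and weight numbers invented) to show the approach.

```python
import numpy as np

# Statistical engine-weight model: a power law W = a * F^b fit in log-log
# space. The engine "data" here is synthetic, generated from a known law
# plus multiplicative noise.
rng = np.random.default_rng(1)
thrust = np.array([100, 200, 400, 800, 1600, 3200], dtype=float)   # kN
weight = 12.0 * thrust**0.9 * rng.lognormal(0, 0.05, thrust.size)  # "observed" kg

b, log_a = np.polyfit(np.log(thrust), np.log(weight), 1)
a = np.exp(log_a)
print(f"fitted model: W = {a:.1f} * F^{b:.2f}")

# Engines producing the same thrust are discriminated by their residuals,
# which can then be regressed against component performance parameters.
resid = np.log(weight) - (log_a + b * np.log(thrust))
```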
Predictive modeling of emergency cesarean delivery.
Directory of Open Access Journals (Sweden)
Carlos Campillo-Artero
Full Text Available To increase discriminatory accuracy (DA) for emergency cesarean sections (ECSs), we prospectively collected data on and studied all 6,157 births occurring in 2014 at four public hospitals located in three different autonomous communities of Spain. To identify risk factors (RFs) for ECS, we used likelihood ratios and logistic regression, fitted a classification tree (CTREE), and analyzed a random forest model (RFM). We used the areas under the receiver-operating-characteristic (ROC) curves (AUCs) to assess their DA. The magnitude of the LR+ for all putative individual RFs and of the ORs in the logistic regression models was low to moderate. Except for parity, all putative RFs were positively associated with ECS, including hospital fixed effects and night-shift delivery. The DA of all logistic models ranged from 0.74 to 0.81. The most relevant RFs (pH, induction, and previous C-section) in the CTREEs showed the highest ORs in the logistic models. The DA of the RFM and its most relevant interaction terms was even higher (AUC = 0.94; 95% CI: 0.93-0.95). Putative fetal, maternal, and contextual RFs alone fail to achieve reasonable DA for ECS. It is the combination of these RFs and the interactions between them at each hospital that makes it possible to improve the DA for the type of delivery and tailor interventions through prediction to improve the appropriateness of ECS indications.
International Nuclear Information System (INIS)
Collin, Blaise P.; Petti, David A.; Demkowicz, Paul A.; Maki, John T.
2014-01-01
The PARFUME (PARticle FUel ModEl) code was used to predict fission product release from tristructural isotropic (TRISO) coated fuel particles and compacts during the first irradiation experiment (AGR-1) of the Advanced Gas Reactor Fuel Development and Qualification program. The PARFUME model for the AGR-1 experiment used the fuel compact volume average temperature for each of the 620 days of irradiation to calculate the release of the fission products silver, cesium, and strontium from a representative particle for a select number of AGR-1 compacts. Post-irradiation examination (PIE) measurements provided data on release of fission products from fuel compacts and fuel particles, and retention of fission products in the compacts outside of the silicon carbide (SiC) layer. PARFUME-predicted fractional release of these fission products was determined and compared to the PIE measurements. Results show an overall over-prediction of the fractional release of cesium by PARFUME. For particles with failed SiC layers, the over-prediction is by a factor of about two, corresponding to an over-estimation of the diffusivity in uranium oxycarbide (UCO) by a factor of about 100. For intact particles, whose release is much lower, the over-prediction is by an average of about an order of magnitude, which could additionally be attributed to an over-estimated diffusivity in SiC by about 30%. The release of strontium from intact particles is also over-estimated by PARFUME, which also points towards an over-estimated diffusivity of strontium in either SiC or UCO, or possibly both. The measured strontium fractional release from intact particles varied considerably from compact to compact, making it difficult to assess the effective over-estimation of the diffusivities. Furthermore, the release of strontium from particles with failed SiC is difficult to observe experimentally due to the release from intact particles, preventing any conclusions from being made on the accuracy or validity of the
Predictive model for convective flows induced by surface reactivity contrast
Davidson, Scott M.; Lammertink, Rob G. H.; Mani, Ali
2018-05-01
Concentration gradients in a fluid adjacent to a reactive surface due to contrast in surface reactivity generate convective flows. These flows result from contributions by electro- and diffusio-osmotic phenomena. In this study, we have analyzed reactive patterns that release and consume protons, analogous to bimetallic catalytic conversion of peroxide. Similar systems have typically been studied using either scaling analysis to predict trends or costly numerical simulation. Here, we present a simple analytical model, bridging the gap in quantitative understanding between scaling relations and simulations, to predict the induced potentials and consequent velocities in such systems without the use of any fitting parameters. Our model is tested against direct numerical solutions to the coupled Poisson, Nernst-Planck, and Stokes equations. Predicted slip velocities from the model and simulations agree to within a factor of ≈2 over a multiple order-of-magnitude change in the input parameters. Our analysis can be used to predict enhancement of mass transport and the resulting impact on overall catalytic conversion, and is also applicable to predicting the speed of catalytic nanomotors.
Model Predictive Control based on Finite Impulse Response Models
DEFF Research Database (Denmark)
Prasath, Guru; Jørgensen, John Bagterp
2008-01-01
We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations...... and related to the uncertainty of the impulse response coefficients. The simulations can be used to benchmark l2 MPC against FIR based robust MPC as well as to estimate the maximum performance improvements by robust MPC....
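A minimal unconstrained sketch of the idea — predict outputs through an FIR convolution model and choose future inputs by regularized least squares — with an illustrative impulse response. The paper's controller additionally enforces input and input-rate constraints, which this sketch omits.

```python
import numpy as np

# Hypothetical impulse response of a stable SISO plant (not from the paper).
h = 0.5 * 0.8 ** np.arange(20)     # FIR coefficients, 20 taps
N = 30                             # prediction horizon

# Prediction matrix: y[i] = sum_j h[i-j] * u[j]  (truncated convolution)
Phi = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1):
        if i - j < h.size:
            Phi[i, j] = h[i - j]

r = np.ones(N)                     # output reference trajectory
lam = 0.1                          # l2 weight on input moves (regularization)
D = np.eye(N) - np.eye(N, k=-1)    # first differences: (D u)[k] = u[k] - u[k-1]

# Regularized least squares: min_u ||Phi u - r||^2 + lam ||D u||^2
u = np.linalg.solve(Phi.T @ Phi + lam * D.T @ D, Phi.T @ r)
y = Phi @ u
print(round(y[-1], 3))             # end-of-horizon output close to the reference
```

The move penalty plays the role of the l2 regularization discussed in the abstract: it trades tracking speed against input smoothness, which is also where robustness to uncertain impulse-response coefficients enters.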
Modeling and Prediction of Soil Water Vapor Sorption Isotherms
DEFF Research Database (Denmark)
Arthur, Emmanuel; Tuller, Markus; Moldrup, Per
2015-01-01
Soil water vapor sorption isotherms describe the relationship between water activity (aw) and moisture content along adsorption and desorption paths. The isotherms are important for modeling numerous soil processes and are also used to estimate several soil (specific surface area, clay content.......93) for a wide range of soils; and (ii) develop and test regression models for estimating the isotherms from clay content. Preliminary results show reasonable fits of the majority of the investigated empirical and theoretical models to the measured data although some models were not capable to fit both sorption...... directions accurately. Evaluation of the developed prediction equations showed good estimation of the sorption/desorption isotherms for tested soils....
Depeursinge, Adrien; Kurtz, Camille; Beaulieu, Christopher; Napel, Sandy; Rubin, Daniel
2014-08-01
We describe a framework to model visual semantics of liver lesions in CT images in order to predict the visual semantic terms (VST) reported by radiologists in describing these lesions. Computational models of VST are learned from image data using linear combinations of high-order steerable Riesz wavelets and support vector machines (SVM). In a first step, these models are used to predict the presence of each semantic term that describes liver lesions. In a second step, the distances between all VST models are calculated to establish a nonhierarchical computationally-derived ontology of VST containing inter-term synonymy and complementarity. A preliminary evaluation of the proposed framework was carried out using 74 liver lesions annotated with a set of 18 VSTs from the RadLex ontology. A leave-one-patient-out cross-validation resulted in an average area under the ROC curve of 0.853 for predicting the presence of each VST. The proposed framework is expected to foster human-computer synergies for the interpretation of radiological images while using rotation-covariant computational models of VSTs to 1) quantify their local likelihood and 2) explicitly link them with pixel-based image content in the context of a given imaging domain.
Some Results on Mean Square Error for Factor Score Prediction
Krijnen, Wim P.
2006-01-01
For the confirmatory factor model a series of inequalities is given with respect to the mean square error (MSE) of three main factor score predictors. The eigenvalues of these MSE matrices are a monotonic function of the eigenvalues of the matrix Γ_ρ = Θ^{1/2} Λ_ρ′ Ψ_ρ^{…
Energy Technology Data Exchange (ETDEWEB)
Collin, Blaise P.; Demkowicz, Paul A.; Baldwin, Charles A.; Harp, Jason M.; Hunn, John D.
2016-11-01
The PARFUME (PARticle FUel ModEl) code was used to predict silver release from tristructural isotropic (TRISO) coated fuel particles and compacts during the second irradiation experiment (AGR-2) of the Advanced Gas Reactor Fuel Development and Qualification program. The PARFUME model for the AGR-2 experiment used the fuel compact volume average temperature for each of the 559 days of irradiation to calculate the release of fission product silver from a representative particle for a select number of AGR-2 compacts and individual fuel particles containing either mixed uranium carbide/oxide (UCO) or 100% uranium dioxide (UO2) kernels. Post-irradiation examination (PIE) measurements were performed to provide data on release of silver from these compacts and individual fuel particles. The available experimental fractional releases of silver were compared to their corresponding PARFUME predictions. Preliminary comparisons show that PARFUME under-predicts the PIE results in UCO compacts and is in reasonable agreement with experimental data for UO2 compacts. The accuracy of PARFUME predictions is impacted by the code limitations in the modeling of the temporal and spatial distributions of the temperature across the compacts. Nevertheless, the comparisons on silver release lie within the same order of magnitude.
Methodology for Designing Models Predicting Success of Infertility Treatment
Alireza Zarinara; Mohammad Mahdi Akhondi; Hojjat Zeraati; Koorsh Kamali; Kazem Mohammad
2016-01-01
Abstract Background: Prediction models for infertility treatment success have been presented for 25 years. There are scientific principles for designing and applying prediction models, and these principles also apply to predicting the success rate of infertility treatment. The purpose of this study is to provide basic principles for designing models to predict infertility treatment success. Materials and Methods: In this paper, the principles for developing predictive models are explained and...
New tips for structure prediction by comparative modeling
Rayan, Anwar
2009-01-01
Comparative modelling is utilized to predict the 3-dimensional conformation of a given protein (the target) based on its sequence alignment to an experimentally determined protein structure (the template). The use of this technique is already rewarding and increasingly widespread in biological research and drug development. The accuracy of the predictions, as commonly accepted, depends on the degree of sequence identity between the target protein and the template. To assess the relationship between sequence identity and model quality, we carried out an analysis of a set of 4753 sequence and structure alignments. Throughout this research, model accuracy was measured by the root mean square deviation of Cα atoms between the target and template structures. Surprisingly, the results show that the sequence identity of the target protein to the template is not a good predictor of the accuracy of the 3-D structure model. Indeed, in a large number of cases, comparative modelling with lower target-template sequence identity led to a more accurate 3-D structure model. As a consequence of this study, we suggest new tips for improving the quality of comparative models, particularly for models whose target-template sequence identity is below 50%. PMID:19255646
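The Cα root-mean-square deviation used above as the accuracy measure is conventionally computed after optimal rigid-body superposition. A minimal sketch (Kabsch algorithm, toy coordinates rather than real protein structures):

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two (N, 3) coordinate sets after optimal
    rigid-body superposition (Kabsch algorithm)."""
    P = P - P.mean(axis=0)              # remove translation
    Q = Q - Q.mean(axis=0)
    V, _, Wt = np.linalg.svd(P.T @ Q)   # SVD of the covariance matrix
    d = np.sign(np.linalg.det(V) * np.linalg.det(Wt))
    V[:, -1] *= d                       # guard against improper rotation
    R = V @ Wt                          # optimal rotation
    diff = P @ R - Q
    return np.sqrt((diff ** 2).sum() / len(P))

# Toy "C-alpha traces": a structure and a rotated copy plus small noise.
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
Q = P @ Rz.T + 0.01 * rng.normal(size=(50, 3))
print(round(kabsch_rmsd(P, Q), 3))      # small residual from the added noise
```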
Evolutionary neural network modeling for software cumulative failure time prediction
International Nuclear Information System (INIS)
Tian Liang; Noore, Afzel
2005-01-01
An evolutionary neural network modeling approach for software cumulative failure time prediction based on a multiple-delayed-input single-output architecture is proposed. A genetic algorithm is used to globally optimize the number of delayed input neurons and the number of neurons in the hidden layer of the neural network architecture. A modification of the Levenberg-Marquardt algorithm with Bayesian regularization is used to improve the ability to predict software cumulative failure time. The performance of our proposed approach has been compared using real-time control and flight dynamic application data sets. Numerical results show that both the goodness-of-fit and the next-step predictability of our proposed approach are more accurate in predicting software cumulative failure time than existing approaches.
The EURAD model: Design and first results
International Nuclear Information System (INIS)
1989-01-01
The contributions are abridged versions of lectures delivered at the presentation meeting of the EURAD project on 20 and 21 February 1989 in Cologne. EURAD stands for European Acid Deposition Model. The project takes one of the possible and necessary routes in the search for scientific answers to the questions raised by anthropogenic modification of the atmosphere. One objective is to develop a realistic numerical model of the long-range transport of harmful substances in the troposphere over Europe, and to use this model both to investigate pollutant distribution and to support its experimental study. The EURAD model consists of two parts: a meteorological mesoscale model and a chemical transport model. In the first part of the presentation these components are introduced, and questions concerning the implementation of the entire model on the CRAY X-MP/22 computer system are discussed. Results of test calculations for the 'Chernobyl' and 'Alpex' cases are then reported. Thereafter, selected problems concerning the treatment of meteorological and air-chemistry processes, as well as the parametrization of subscale processes within the model, are discussed. The presentation concludes with two lectures on emission estimates and emission scenarios. (orig./KW) [de
New Temperature-based Models for Predicting Global Solar Radiation
International Nuclear Information System (INIS)
Hassan, Gasser E.; Youssef, M. Elsayed; Mohamed, Zahraa E.; Ali, Mohamed A.; Hanafy, Ahmed A.
2016-01-01
Highlights: • New temperature-based models for estimating solar radiation are investigated. • The models are validated against 20 years of measured global solar radiation data. • The new temperature-based model shows the best performance for coastal sites. • The new temperature-based model is more accurate than the sunshine-based models. • The new model is highly applicable with weather temperature forecast techniques. - Abstract: This study presents new ambient-temperature-based models for estimating global solar radiation as alternatives to the widely used sunshine-based models, owing to the unavailability of sunshine data at all locations around the world. Seventeen new temperature-based models are established, validated and compared with three other models proposed in the literature (the Annandale, Allen and Goodin models) to estimate the monthly average daily global solar radiation on a horizontal surface. These models are developed using a 20-year measured dataset of global solar radiation for the case study location (lat. 30°51′N, long. 29°34′E), and then the general formulae of the newly suggested models are examined for ten different locations around Egypt. Moreover, local formulae for the models are established and validated for two coastal locations where the general formulae give inaccurate predictions. The most common statistical error measures are utilized to evaluate the performance of these models and identify the most accurate model. The obtained results show that the local formula of the most accurate new model provides good predictions for global solar radiation at different locations, especially at coastal sites. Moreover, the local and general formulae of the most accurate temperature-based model also perform better than the two most accurate sunshine-based models from the literature. The quick and accurate estimation of global solar radiation using this approach can be employed in the design and evaluation of performance for
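As an illustration of the general class of temperature-based models (not one of the seventeen new models of this study), the classical Hargreaves-Samani estimate derives global radiation from the diurnal temperature range:

```python
import numpy as np

def hargreaves_samani(t_max, t_min, h0, k_rs=0.16):
    """Estimate global solar radiation H (same units as h0) from the
    diurnal temperature range. h0 is the extraterrestrial radiation;
    k_rs is an empirical coefficient, ~0.16 for interior sites and
    ~0.19 for coastal sites."""
    return k_rs * np.sqrt(t_max - t_min) * h0

# Example with illustrative monthly mean values (MJ m-2 day-1 for h0):
h = hargreaves_samani(t_max=30.0, t_min=18.0, h0=35.0)
print(round(h, 2))  # → 19.4
```

The coastal-vs-interior coefficient split mirrors the paper's finding that coastal sites need their own local formulae.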
A physiological production model for cacao : results of model simulations
Zuidema, P.A.; Leffelaar, P.A.
2002-01-01
CASE2 is a physiological model for cocoa (Theobroma cacao L.) growth and yield. This report introduces the CAcao Simulation Engine for water-limited production in a non-technical way and presents simulation results obtained with the model.
Revised predictive equations for salt intrusion modelling in estuaries
Gisen, J.I.A.; Savenije, H.H.G.; Nijzink, R.C.
2015-01-01
For one-dimensional salt intrusion models to be predictive, we need predictive equations to link model parameters to observable hydraulic and geometric variables. The one-dimensional model of Savenije (1993b) made use of predictive equations for the Van der Burgh coefficient $K$ and the dispersion
Ensemble ecosystem modeling for predicting ecosystem response to predator reintroduction.
Baker, Christopher M; Gordon, Ascelin; Bode, Michael
2017-04-01
Introducing a new or extirpated species to an ecosystem is risky, and managers need quantitative methods that can predict the consequences for the recipient ecosystem. Proponents of keystone predator reintroductions commonly argue that the presence of the predator will restore ecosystem function, but this has not always been the case, and mathematical modeling has an important role to play in predicting how reintroductions will likely play out. We devised an ensemble modeling method that integrates species interaction networks and dynamic community simulations and used it to describe the range of plausible consequences of 2 keystone-predator reintroductions: wolves (Canis lupus) to Yellowstone National Park and dingoes (Canis dingo) to a national park in Australia. Although previous methods for predicting ecosystem responses to such interventions focused on predicting changes around a given equilibrium, we used Lotka-Volterra equations to predict changing abundances through time. We applied our method to interaction networks for wolves in Yellowstone National Park and for dingoes in Australia. Our model replicated the observed dynamics in Yellowstone National Park and produced a larger range of potential outcomes for the dingo network. However, we also found that changes in small vertebrates or invertebrates gave a good indication about the potential future state of the system. Our method allowed us to predict when the systems were far from equilibrium. Our results showed that the method can also be used to predict which species may increase or decrease following a reintroduction and can identify species that are important to monitor (i.e., species whose changes in abundance give extra insight into broad changes in the system). Ensemble ecosystem modeling can also be applied to assess the ecosystem-wide implications of other types of interventions including assisted migration, biocontrol, and invasive species eradication. © 2016 Society for Conservation Biology.
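The ensemble idea — many Lotka-Volterra parameterizations sharing one interaction topology, each integrated through time — can be sketched as follows. The network below (one predator, three prey) and all rates are illustrative, not the Yellowstone or dingo networks of the paper:

```python
import numpy as np

def glv_step(n, r, A, dt=0.01):
    """One Euler step of generalized Lotka-Volterra dynamics:
    dn_i/dt = n_i * (r_i + sum_j A_ij * n_j)."""
    return np.maximum(n + dt * n * (r + A @ n), 0.0)

rng = np.random.default_rng(1)
species, members, steps = 4, 100, 5000
outcomes = []
for _ in range(members):
    # Ensemble member: interaction signs fixed by the topology,
    # magnitudes drawn at random (illustrative values).
    A = -np.abs(rng.normal(0.0, 0.05, (species, species)))
    np.fill_diagonal(A, -0.1)             # self-limitation
    A[0, 1:] = np.abs(A[0, 1:])           # species 0 preys on the others
    r = np.array([-0.1, 0.5, 0.5, 0.5])   # predator declines, prey grow
    n = np.full(species, 1.0)
    for _ in range(steps):
        n = glv_step(n, r, A)
    outcomes.append(n)

outcomes = np.array(outcomes)
# Range of plausible long-run abundances across the ensemble:
print(outcomes.min(axis=0).round(2), outcomes.max(axis=0).round(2))
```

The spread of final abundances across ensemble members is what the paper reports as the "range of plausible consequences" of a reintroduction.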
Neutrino nucleosynthesis in supernovae: Shell model predictions
International Nuclear Information System (INIS)
Haxton, W.C.
1989-01-01
Almost all of the 3 × 10⁵³ ergs liberated in a core collapse supernova is radiated as neutrinos by the cooling neutron star. I will argue that these neutrinos interact with nuclei in the ejected shells of the supernovae to produce new elements. It appears that this nucleosynthesis mechanism is responsible for the galactic abundances of ⁷Li, ¹¹B, ¹⁹F, ¹³⁸La, and ¹⁸⁰Ta, and contributes significantly to the abundances of about 15 other light nuclei. I discuss shell model predictions for the charged and neutral current allowed and first-forbidden responses of the parent nuclei, as well as the spallation processes that produce the new elements. 18 refs., 1 fig., 1 tab
Hierarchical Model Predictive Control for Resource Distribution
DEFF Research Database (Denmark)
Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob
2010-01-01
This paper deals with hierarchical model predictive control (MPC) of distributed systems. A three-level hierarchical approach is proposed, consisting of a high-level MPC controller, a second level of so-called aggregators controlled by an online MPC-like algorithm, and a lower level of autonomous units. The approach is inspired by smart-grid electric power production and consumption systems, where the flexibility of a large number of power-producing and/or power-consuming units can be exploited in a smart-grid solution. The objective is to accommodate the load variation on the grid, arising on one hand from varying consumption and on the other hand from natural variations in power production, e.g. from wind turbines. The approach presented is based on quadratic optimization and possesses the properties of low algorithmic complexity and of scalability. In particular, the proposed design methodology...
Distributed model predictive control made easy
Negenborn, Rudy
2014-01-01
The rapid evolution of computer science, communication, and information technology has enabled the application of control techniques to systems beyond the possibilities of control theory just a decade ago. Critical infrastructures such as electricity, water, traffic and intermodal transport networks are now in the scope of control engineers. The sheer size of such large-scale systems requires the adoption of advanced distributed control approaches. Distributed model predictive control (MPC) is one of the promising control methodologies for control of such systems. This book provides a state-of-the-art overview of distributed MPC approaches, while at the same time making clear directions of research that deserve more attention. The core and rationale of 35 approaches are carefully explained. Moreover, detailed step-by-step algorithmic descriptions of each approach are provided. These features make the book a comprehensive guide both for those seeking an introduction to distributed MPC as well as for those ...
Model predictions for auxiliary heating in spheromaks
International Nuclear Information System (INIS)
Fauler, T.K.; Khua, D.D.
1997-01-01
Calculations are presented of the plasma temperatures expected under auxiliary heating in spheromaks. A model that gave good agreement with the results of earlier joule-heating experiments is used. The model includes heat losses due to magnetic fluctuations and shows that plasma temperatures of the order of a kilo-electron-volt may be achieved in a small device with a radius of only 0.3 m
A disaggregate model to predict the intercity travel demand
Energy Technology Data Exchange (ETDEWEB)
Damodaran, S.
1988-01-01
This study was directed towards developing disaggregate models to predict intercity travel demand in Canada. A conceptual framework for intercity travel behavior was proposed; under this framework, a nested multinomial logit model structure that combined mode choice and trip generation was developed. The CTS (Canadian Travel Survey) data base was used for testing the structure and for determining the viability of using this data base for intercity travel-demand prediction. Mode-choice and trip-generation models were calibrated for four modes (auto, bus, rail and air) for both business and non-business trips. The models were linked through the inclusive value variable, also referred to in the literature as the logsum of the denominator. Results of the study indicated that the structure used in this study could be applied to intercity travel-demand modeling. However, some limitations of the data base were identified. It is believed that, with some modifications, the CTS data could be used for predicting intercity travel demand. Future research can identify the factors affecting intercity travel behavior, which will facilitate collection of useful data for intercity travel prediction and policy analysis.
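The inclusive value ("logsum") that links the mode-choice and trip-generation levels of such a nested structure can be illustrated with hypothetical mode utilities (the utilities and nesting parameter below are made up, not calibrated values from the study):

```python
import numpy as np

def nested_logit(V, mu=0.7):
    """Within-nest choice probabilities for modes with systematic
    utilities V and nesting parameter mu (0 < mu <= 1), plus the
    inclusive value ("logsum") passed up to the trip-generation level."""
    V = np.asarray(V, dtype=float)
    iv = np.log(np.sum(np.exp(V / mu)))    # inclusive value of the nest
    p_within = np.exp(V / mu) / np.exp(iv)
    return p_within, mu * iv               # mu * iv enters the upper level

# Hypothetical utilities for auto, bus, rail, air on a business trip.
V = [-0.5, -1.2, -0.9, -1.0]
p, logsum = nested_logit(V)
print(p.round(3), round(logsum, 3))        # probabilities sum to 1
```

The upper-level (trip-generation) model then treats `logsum` as a measure of the overall attractiveness of traveling, which is exactly the linkage the abstract describes.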
Model predictive control of a wind turbine modelled in Simpack
International Nuclear Information System (INIS)
Jassmann, U; Matzke, D; Reiter, M; Abel, D; Berroth, J; Schelenz, R; Jacobs, G
2014-01-01
Wind turbines (WT) are steadily growing in size to increase their power production, which also causes increasing loads acting on the turbine's components. At the same time, large structures such as the blades and the tower become more flexible. To minimize this impact, the classical control loops for keeping the power production in an optimum state are more and more extended by load alleviation strategies. These additional control loops can be unified by a multiple-input multiple-output (MIMO) controller to achieve better balancing of tuning parameters. One example of MIMO control that has recently received more attention in the wind industry is Model Predictive Control (MPC). In an MPC framework a simplified model of the WT is used to predict its controlled outputs. Based on a user-defined cost function, an online optimization calculates the optimal control sequence. MPC can thereby intrinsically incorporate constraints, e.g. of actuators. Turbine models used for calculation within the MPC are typically simplified. For testing and verification, usually multi body simulations such as FAST, BLADED or FLEX5 are used to model system dynamics, but they are still limited in the number of degrees of freedom (DOF). Detailed information about load distribution (e.g. inside the gearbox) cannot be provided by such models. In this paper a Model Predictive Controller is presented and tested in a co-simulation with SIMPACK, a multi body system (MBS) simulation framework used for detailed load analysis. The analyses are performed on the basis of the IME6.0 MBS WT model, described in this paper. It is based on the rotor of the NREL 5MW WT and consists of a detailed representation of the drive train. This takes into account a flexible main shaft and its main bearings with a planetary gearbox, where all components are modelled as flexible, as well as a supporting flexible main frame. The wind loads are simulated using the NREL AERODYN v13 code which has been implemented as a routine
Model predictive control for spacecraft rendezvous in elliptical orbit
Li, Peng; Zhu, Zheng H.
2018-05-01
This paper studies the control of spacecraft rendezvous with attitude stable or spinning targets in an elliptical orbit. The linearized Tschauner-Hempel equation is used to describe the motion of spacecraft and the problem is formulated by model predictive control. The control objective is to maximize control accuracy and smoothness simultaneously to avoid unexpected change or overshoot of trajectory for safe rendezvous. It is achieved by minimizing the weighted summations of control errors and increments. The effects of two sets of horizons (control and predictive horizons) in the model predictive control are examined in terms of fuel consumption, rendezvous time and computational effort. The numerical results show the proposed control strategy is effective.
Directory of Open Access Journals (Sweden)
César Hernández-Hernández
2017-06-01
Full Text Available Electricity load forecasting, optimal power system operation and energy management play key roles that can bring significant operational advantages to microgrids. This paper studies how methods based on time series and neural networks can be used to predict energy demand and production, allowing them to be combined with model predictive control. Comparisons of different prediction methods and different optimum energy distribution scenarios are provided, permitting us to determine when short-term energy prediction models should be used. The proposed prediction models in addition to the model predictive control strategy appear as a promising solution to energy management in microgrids. The controller has the task of performing the management of electricity purchase and sale to the power grid, maximizing the use of renewable energy sources and managing the use of the energy storage system. Simulations were performed with different weather conditions of solar irradiation. The obtained results are encouraging for future practical implementation.
Construction Worker Fatigue Prediction Model Based on System Dynamic
Directory of Open Access Journals (Sweden)
Wahyu Adi Tri Joko
2017-01-01
Full Text Available Construction accidents can be caused by internal and external factors such as worker fatigue and an unsafe project environment. The tight schedules of construction projects force construction workers to work overtime for long periods. This situation leads to worker fatigue. This paper proposes a model to predict construction worker fatigue based on system dynamics (SD). System dynamics is used to represent the correlations among internal and external factors and to simulate the level of worker fatigue. To validate the model, 93 construction workers who worked on high-rise building construction projects were used as a case study. The result shows that excessive workload, working elevation and age are the main factors leading to construction worker fatigue. The simulation result also shows that these factors can increase the worker fatigue level by 21.2% compared to the normal condition. Besides predicting the worker fatigue level, this model can also be used as an early warning system to prevent construction worker accidents
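A minimal stock-and-flow sketch of the system-dynamics idea — fatigue as a stock fed by workload and drained by recovery — with hypothetical rate constants, not the calibrated coefficients of the paper:

```python
# System-dynamics sketch: fatigue is a stock; workload is an inflow,
# recovery during rest hours is an outflow. All rates are illustrative.

def simulate_fatigue(hours_worked_per_day, days=30, dt=1.0):
    fatigue = 0.0                   # stock, arbitrary units capped at 100
    history = []
    for _ in range(days):
        accumulation = 0.8 * hours_worked_per_day          # inflow: workload
        recovery = 0.15 * fatigue * (24 - hours_worked_per_day) / 16
        fatigue = min(100.0, max(0.0, fatigue + dt * (accumulation - recovery)))
        history.append(fatigue)
    return history

normal = simulate_fatigue(8)[-1]     # regular schedule
overtime = simulate_fatigue(12)[-1]  # sustained overtime
print(round(normal, 1), round(overtime, 1))  # overtime ends markedly higher
```

The overtime scenario both raises the inflow and shrinks the recovery window, which is the feedback structure the SD model captures.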
International Nuclear Information System (INIS)
Moon, Jin Woo; Yoon, Younju; Jeon, Young-Hoon; Kim, Sooyoung
2017-01-01
Highlights: • Initial ANN model was developed for predicting the time to the setback temperature. • Initial model was optimized for producing accurate output. • Optimized model proved its prediction accuracy. • ANN-based algorithms were developed and tested for their performance. • ANN-based algorithms presented superior thermal comfort or energy efficiency. - Abstract: In this study, a temperature control algorithm was developed to apply a setback temperature predictively for the cooling system of a residential building during periods occupied by residents. An artificial neural network (ANN) model was developed to determine the required time for raising the current indoor temperature to the setback temperature. This study involved three phases: development of the initial ANN-based prediction model, optimization and testing of the initial model, and development and testing of three control algorithms. The development and performance testing of the model and algorithms were conducted using TRNSYS and MATLAB. Through the development and optimization process, the final ANN model employed the indoor temperature and the temperature difference between the current and target setback temperature as two input neurons. The optimal number of hidden layers, number of neurons, learning rate, and moment were determined to be 4, 9, 0.6, and 0.9, respectively. The tangent-sigmoid and pure-linear transfer functions were used in the hidden and output neurons, respectively. The ANN model used 100 training data sets with a sliding-window method for data management. The Levenberg-Marquardt training method was employed for model training. The optimized model achieved a root mean square error of 0.9097 when compared with the simulated results. Employing the ANN model, the ANN-based algorithms maintained indoor temperatures better within target ranges. Compared to the conventional algorithm, the ANN-based algorithms reduced the duration of time in which the indoor temperature
The development of U. S. soil erosion prediction and modeling
Directory of Open Access Journals (Sweden)
John M. Laflen
2013-09-01
Full Text Available Soil erosion prediction technology began over 70 years ago when Austin Zingg published a relationship between soil erosion (by water) and land slope and length, followed shortly by a relationship by Dwight Smith that expanded this equation to include conservation practices. But it was nearly 20 years before this work's expansion resulted in the Universal Soil Loss Equation (USLE), perhaps the foremost achievement in soil erosion prediction in the last century. The USLE has increased in application and complexity, and its usefulness and limitations have led to the development of additional technologies and new science in soil erosion research and prediction. Chief among these new technologies is the Water Erosion Prediction Project (WEPP) model, which has helped to overcome many of the shortcomings of the USLE and increased the scale over which erosion by water can be predicted. Areas of application of erosion prediction include almost all land types: urban, rural, cropland, forests, rangeland, and construction sites. Specialty applications of WEPP include prediction of radioactive material movement with soils at a Superfund cleanup site, and near real-time daily estimation of soil erosion for the entire state of Iowa.
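The USLE itself is a simple product of empirical factors, A = R · K · LS · C · P. A sketch with illustrative factor values (not from any specific site):

```python
def usle(R, K, LS, C, P):
    """Universal Soil Loss Equation: average annual soil loss
    A = R * K * LS * C * P (e.g., tons per acre per year in
    US customary units)."""
    return R * K * LS * C * P

# Illustrative factor values only:
A = usle(R=170.0,   # rainfall-runoff erosivity
         K=0.32,    # soil erodibility
         LS=1.2,    # slope length-steepness
         C=0.25,    # cover-management
         P=0.6)     # support practice
print(round(A, 2))  # → 9.79
```

The multiplicative form is why the equation is easy to apply but hard to extend, which is the gap process-based models such as WEPP were built to fill.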
Predicted continuum spectra of type II supernovae - LTE results
Shaviv, G.; Wehrse, R.; Wagoner, R. V.
1985-01-01
The continuum spectral energy distribution of the flux emerging from type II supernovae is calculated from quasi-static radiative transfer through a power-law density gradient, assuming radiative equilibrium and LTE. It is found that the Balmer jump disappears at high effective temperatures and low densities, while the spectrum resembles that of a dilute blackbody but is flatter with a sharper cutoff at the short-wavelength end. A significant UV excess is found in all models calculated. The calculation should be considered exploratory because of significant effects which are anticipated to arise from departure from LTE.
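The "dilute blackbody" comparison can be made concrete: the emergent flux follows a Planck function scaled by a dilution factor W < 1. The temperature and dilution factor below are illustrative, not fitted supernova values:

```python
import math

H = 6.62607015e-27   # Planck constant, erg s
C = 2.99792458e10    # speed of light, cm s-1
KB = 1.380649e-16    # Boltzmann constant, erg K-1

def dilute_blackbody(nu, T, W):
    """Specific intensity proportional to a diluted Planck function
    W * B_nu(T); W < 1 mimics a scattering-dominated photosphere."""
    return W * (2 * H * nu**3 / C**2) / math.expm1(H * nu / (KB * T))

# The peak of B_nu sits near h*nu ~ 2.82 kT; dilution keeps the shape
# but lowers the level by the factor W.
T, W = 1.5e4, 0.3
nu_peak = 2.82 * KB * T / H
ratio = dilute_blackbody(nu_peak, T, W) / dilute_blackbody(nu_peak, T, 1.0)
print(ratio)   # the ratio is just the dilution factor W
```

The abstract's point is that the computed spectra depart from this simple form: flatter, with a sharper short-wavelength cutoff and a UV excess.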
PNN-based Rockburst Prediction Model and Its Applications
Directory of Open Access Journals (Sweden)
Yu Zhou
2017-07-01
Full Text Available Rock burst is one of the main engineering geological problems significantly threatening the safety of construction. Prediction of rock burst is always an important issue concerning the safety of workers and equipment in tunnels. In this paper, a novel PNN-based rock burst prediction model is proposed to determine whether rock burst will happen in underground rock projects and how intense it will be. The probabilistic neural network (PNN) is developed based on Bayesian criteria of multivariate pattern classification. Because the PNN has the advantages of low training complexity, high stability, quick convergence, and simple construction, it can be well applied to the prediction of rock burst. Some main control factors, such as the rock's maximum tangential stress, uniaxial compressive strength, uniaxial tensile strength, and elastic energy index, are chosen as the characteristic vector of the PNN. The PNN model is obtained through training on data sets of rock burst samples from underground rock projects at home and abroad. Other samples are tested with the model, and the testing results agree with the practical records. In addition, two real-world applications are used to verify the proposed method. The predictions match the results of existing methods as well as what actually happened on site, which verifies the effectiveness and applicability of our proposed work.
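A PNN reduces to a Gaussian Parzen-window density estimate per class followed by an argmax. A minimal sketch with toy rock burst features; the feature values, labels, and smoothing parameter are illustrative, not the paper's training set:

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=0.5):
    """Probabilistic neural network: a Gaussian Parzen-window density
    estimate per class; the pattern is assigned to the class with the
    highest estimated density (Bayes decision rule, equal priors)."""
    classes = np.unique(train_y)
    scores = []
    for c in classes:
        Xc = train_X[train_y == c]
        d2 = ((Xc - x) ** 2).sum(axis=1)          # squared distances
        scores.append(np.exp(-d2 / (2 * sigma**2)).mean())
    return classes[int(np.argmax(scores))]

# Toy features: [tangential stress / UCS, elastic energy index];
# labels 0 = none, 1 = moderate, 2 = strong (illustrative values only).
X = np.array([[0.2, 2.0], [0.25, 2.2], [0.5, 4.0],
              [0.55, 4.5], [0.8, 7.0], [0.85, 7.5]])
y = np.array([0, 0, 1, 1, 2, 2])
print(pnn_classify(np.array([0.82, 7.2]), X, y))  # → 2
```

The single smoothing parameter `sigma` is the only quantity to tune, which is the source of the low training complexity the abstract cites.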
INTRAVAL test case 1b - modelling results
International Nuclear Information System (INIS)
Jakob, A.; Hadermann, J.
1991-07-01
This report presents results obtained within Phase I of the INTRAVAL study. Six different models are fitted to the results of four infiltration experiments with ²³³U tracer on small samples of crystalline bore cores originating from deep drillings in Northern Switzerland. Four of these are dual-porosity media models taking into account advection and dispersion in water-conducting zones (either tube-like veins or planar fractures), matrix diffusion out of these into pores of the solid phase, and either non-linear or linear sorption of the tracer onto inner surfaces. The remaining two are equivalent porous media models (excluding matrix diffusion) including either non-linear sorption onto surfaces of a single fissure family or linear sorption onto surfaces of several different fissure families. The fits to the experimental data have been carried out by a Marquardt-Levenberg procedure, yielding error estimates of the parameters, correlation coefficients and also, as a measure of the goodness of the fits, the minimum values of the χ² merit function. The effects of different upstream boundary conditions are demonstrated, and the penetration depth for matrix diffusion is discussed briefly for both alternative flow-path scenarios. The calculations show that the dual-porosity media models fit the experimental data significantly better than the single-porosity media concepts. Moreover, it is matrix diffusion rather than the non-linearity of the sorption isotherm that is responsible for the tailing part of the breakthrough curves. The extracted parameter values for some models, for both the linear and non-linear (Freundlich) sorption isotherms, are consistent with the results of independent static batch sorption experiments. From the fits, it is generally not possible to discriminate between the two alternative flow-path geometries. On the basis of the modelling results, some proposals for further experiments are presented. (author) 15 refs., 23 figs., 7 tabs
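The Marquardt-Levenberg procedure with a χ² merit function can be illustrated on a toy two-parameter exponential model (a stand-in for the transport models, not the dual-porosity equations themselves):

```python
import math

def levenberg_marquardt(t, y, a, b, iters=50, lam=1e-3):
    """Minimal Marquardt-Levenberg fit of y ~ a*exp(-b*t):
    damped normal equations (J^T J + lam*diag) * delta = -J^T r."""
    for _ in range(iters):
        r = [yi - a * math.exp(-b * ti) for ti, yi in zip(t, y)]
        Ja = [-math.exp(-b * ti) for ti in t]            # d(residual)/da
        Jb = [a * ti * math.exp(-b * ti) for ti in t]    # d(residual)/db
        g11 = sum(x * x for x in Ja) * (1 + lam)
        g22 = sum(x * x for x in Jb) * (1 + lam)
        g12 = sum(p * q for p, q in zip(Ja, Jb))
        r1 = -sum(p * ri for p, ri in zip(Ja, r))
        r2 = -sum(q * ri for q, ri in zip(Jb, r))
        det = g11 * g22 - g12 * g12
        a += (r1 * g22 - r2 * g12) / det
        b += (g11 * r2 - g12 * r1) / det
    chi2 = sum((yi - a * math.exp(-b * ti)) ** 2 for ti, yi in zip(t, y))
    return a, b, chi2

# Synthetic noise-free "breakthrough" data with a = 2.0, b = 0.5
t = [0.25 * i for i in range(17)]
y = [2.0 * math.exp(-0.5 * ti) for ti in t]
a_fit, b_fit, chi2 = levenberg_marquardt(t, y, a=1.5, b=0.7)
```

The inverse of the damped normal matrix at the optimum also supplies the parameter error estimates and correlation coefficients the report quotes.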
Characteristics of prompt fission gamma-ray emission - Experimental results and predictions
International Nuclear Information System (INIS)
Oberstedt, Andreas; Billnert, Robert; Oberstedt, Stephan
2014-01-01
Systematics from 2001, describing prompt fission gamma-ray spectra (PFGS) characteristics as a function of mass and atomic number of the fissioning system, have been revisited and the parameters revised based on recent experimental results. Although originally derived for spontaneous and thermal-neutron-induced fission, validity for fast neutrons was assumed and the systematics applied to predict PFGS characteristics for the reaction n + ²³⁸U up to incident neutron energies of E_n = 20 MeV. The results from this work are in good agreement with corresponding results from both model calculations and experiments. (authors)
Modeling pitting growth data and predicting degradation trend
International Nuclear Information System (INIS)
Viglasky, T.; Awad, R.; Zeng, Z.; Riznic, J.
2007-01-01
A non-statistical modeling approach to predict material degradation is presented in this paper. In this approach, the original data series is processed using an Accumulated Generating Operation (AGO). With the aid of the AGO, which weakens the random fluctuation embedded in the data series, an approximately exponential curve is established. The generated data series described by the exponential curve is then modeled by a differential equation. The coefficients of the differential equation can be deduced by an approximate difference formula based on a least-squares algorithm. By solving the differential equation and applying an inverse AGO, a predictive model can be obtained. As this approach is not established on the basis of statistics, the prediction can be performed with a limited amount of data. Implementation of this approach is demonstrated by predicting the pitting growth rate in specimens and the wear trend in steam generator tubes. The analysis results indicate that this approach provides a powerful tool with reasonable precision to predict material degradation. (author)
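The AGO/inverse-AGO pipeline described here is the classical grey GM(1,1) model. A compact sketch of the whole chain (accumulation, least-squares fit of the first-order equation, and differencing back), using a hypothetical growth series rather than actual pitting data:

```python
import math

def gm11(x0, n_forecast=2):
    """Grey GM(1,1): AGO smooths the series, the whitened equation
    dx1/dt + a*x1 = b is fitted by least squares using background
    values z, and an inverse AGO (differencing) recovers predictions."""
    n = len(x0)
    x1 = [sum(x0[: i + 1]) for i in range(n)]               # AGO
    z = [0.5 * (x1[i] + x1[i - 1]) for i in range(1, n)]    # background values
    # Least squares for x0[k] = b - a*z[k]
    m = n - 1
    sz, sy = sum(z), sum(x0[1:])
    szz = sum(zi * zi for zi in z)
    szy = sum(zi * yi for zi, yi in zip(z, x0[1:]))
    a = (sz * sy - m * szy) / (m * szz - sz * sz)
    b = (szz * sy - sz * szy) / (m * szz - sz * sz)
    # Solution of the whitened equation, evaluated past the data
    x1_hat = [(x0[0] - b / a) * math.exp(-a * k) + b / a
              for k in range(n + n_forecast)]
    # Inverse AGO: first differences give the predicted original series
    return [x1_hat[0]] + [x1_hat[k] - x1_hat[k - 1]
                          for k in range(1, len(x1_hat))]

pred = gm11([1.0, 1.1, 1.21, 1.331, 1.4641])  # hypothetical degradation data
```

With only five observations the model still extrapolates the trend, which is the "limited amount of data" advantage the abstract highlights.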
Thermal Model Predictions of Advanced Stirling Radioisotope Generator Performance
Wang, Xiao-Yen J.; Fabanich, William Anthony; Schmitz, Paul C.
2014-01-01
This paper presents recent thermal model results of the Advanced Stirling Radioisotope Generator (ASRG). The three-dimensional (3D) ASRG thermal power model was built using the Thermal Desktop(trademark) thermal analyzer. The model was correlated with ASRG engineering unit test data and ASRG flight unit predictions from Lockheed Martin's (LM's) I-deas(trademark) TMG thermal model. The auxiliary cooling system (ACS) of the ASRG is also included in the ASRG thermal model. The ACS is designed to remove waste heat from the ASRG so that it can be used to heat spacecraft components. The performance of the ACS is reported under nominal conditions and during a Venus flyby scenario. The results for the nominal case are validated with data from Lockheed Martin. Transient thermal analysis results of ASRG for a Venus flyby with a representative trajectory are also presented. In addition, model results of an ASRG mounted on a Cassini-like spacecraft with a sunshade are presented to show a way to mitigate the high temperatures of a Venus flyby. It was predicted that the sunshade can lower the temperature of the ASRG alternator by 20 C for the representative Venus flyby trajectory. The 3D model also was modified to predict generator performance after a single Advanced Stirling Convertor failure. The geometry of the Microtherm HT insulation block on the outboard side was modified to match deformation and shrinkage observed during testing of a prototypic ASRG test fixture by LM. Test conditions and test data were used to correlate the model by adjusting the thermal conductivity of the deformed insulation to match the post-heat-dump steady state temperatures. Results for these conditions showed that the performance of the still-functioning inboard ACS was unaffected.
Predictive integrated modelling for ITER scenarios
International Nuclear Information System (INIS)
Artaud, J.F.; Imbeaux, F.; Aniel, T.; Basiuk, V.; Eriksson, L.G.; Giruzzi, G.; Hoang, G.T.; Huysmans, G.; Joffrin, E.; Peysson, Y.; Schneider, M.; Thomas, P.
2005-01-01
The uncertainty in the prediction of ITER scenarios is evaluated. Two transport models that have been extensively validated against the multi-machine database are used for the computation of the transport coefficients. The first model is GLF23; the second, called Kiauto, is a model in which the profile of the diffusion coefficient is a gyro-Bohm-like analytical function, renormalized in order to obtain profiles consistent with a given global energy confinement scaling. The CRONOS package of codes is used; it gives access to the dynamics of the discharge and allows the study of the interplay between heat transport, current diffusion and sources. The main motivation of this work is to study the influence of parameters such as plasma current, heat, density, impurities and toroidal momentum transport. We can draw the following conclusions: 1) the target Q = 10 can be obtained in the ITER hybrid scenario at Ip = 13 MA, using either the DS03 two-term scaling or the GLF23 model based on the same pedestal; 2) at Ip = 11.3 MA, Q = 10 can be reached only by assuming a very peaked pressure profile and a low pedestal; 3) at fixed Greenwald fraction, Q increases with density peaking; 4) achieving a stationary q-profile with q > 1 requires a large non-inductive current fraction (80%) that could be provided by 20 to 40 MW of LHCD; and 5) owing to the high temperature, q-profile penetration is delayed and q = 1 is reached at about 600 s in the ITER hybrid scenario at Ip = 13 MA, in the absence of active q-profile control. (A.C.)
Candido Dos Reis, Francisco J; Wishart, Gordon C; Dicks, Ed M; Greenberg, David; Rashbass, Jem; Schmidt, Marjanka K; van den Broek, Alexandra J; Ellis, Ian O; Green, Andrew; Rakha, Emad; Maishman, Tom; Eccles, Diana M; Pharoah, Paul D P
2017-05-22
PREDICT is a breast cancer prognostic and treatment benefit model implemented online. The overall fit of the model has been good in multiple independent case series, but PREDICT has been shown to underestimate breast cancer specific mortality in women diagnosed under the age of 40. Another limitation is the use of discrete categories for tumour size and node status, resulting in 'step' changes in risk estimates on moving between categories. We have refitted the PREDICT prognostic model using the original cohort of cases from East Anglia with updated survival time in order to take into account age at diagnosis and to smooth out the survival function for tumour size and node status. Multivariable Cox regression models were used to fit separate models for ER negative and ER positive disease. Continuous variables were fitted using fractional polynomials, and a smoothed baseline hazard was obtained by regressing the baseline cumulative hazard for each patient against time using fractional polynomials. The fit of the prognostic models was then tested in three independent data sets that had also been used to validate the original version of PREDICT. In the model fitting data, after adjusting for other prognostic variables, there is an increase in risk of breast cancer specific mortality in younger and older patients with ER positive disease, with a substantial increase in risk for women diagnosed before the age of 35. In ER negative disease the risk increases slightly with age. The association between breast cancer specific mortality and both tumour size and number of positive nodes was non-linear, with a more marked increase in risk with increasing size and increasing number of nodes in ER positive disease. The overall calibration and discrimination of the new version of PREDICT (v2) was good and comparable to that of the previous version in both model development and validation data sets. However, the calibration of v2 improved over v1 in patients diagnosed under the age of 40.
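The smoothing step can be illustrated with a one-term fractional polynomial: regress the log baseline cumulative hazard on a fractional power of time, then evaluate the fitted curve at any follow-up time. The hazard values below are hypothetical, not the East Anglia cohort:

```python
import math

# Illustrative baseline cumulative hazard at yearly follow-up times
# (hypothetical numbers, not PREDICT's fitted baseline).
times = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
cum_hazard = [0.010, 0.022, 0.031, 0.042, 0.050,
              0.061, 0.068, 0.077, 0.085, 0.092]

# One-term fractional polynomial: log H(t) = b0 + b1 * t**0.5
xs = [math.sqrt(t) for t in times]
ys = [math.log(h) for h in cum_hazard]
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
b1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b0 = (sy - b1 * sx) / n

def smooth_hazard(t):
    """Smoothed baseline cumulative hazard, continuous in t -
    no 'step' changes between discrete follow-up categories."""
    return math.exp(b0 + b1 * math.sqrt(t))
```

In practice the powers are chosen from the standard fractional-polynomial set (-2, -1, -0.5, 0, 0.5, 1, 2, 3) by model selection; a single power is used here only to keep the sketch short.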
Energy Technology Data Exchange (ETDEWEB)
Aleksandrov, B.L.; Esipko, O.A.; Dakhkilgov, T.D.
1981-12-01
Results of statistical processing of the data of prediction of pore pressures in the course of well sinking, according to the material of oil field and geophysical investigations in different areas, are presented. Likewise, the errors of pressure prediction, their causes, geological models of series with anomalously high formation pressure, and methods for prediction of pore and formation pressures under different geological conditions are considered. 12 refs.
Results on three predictions for July 2012 federal elections in Mexico based on past regularities.
Directory of Open Access Journals (Sweden)
H Hernández-Saldaña
Full Text Available The Presidential Election in Mexico of July 2012 was the third time that PREP, the Previous Electoral Results Program, has operated. PREP gives voting outcomes based on the electoral certificates of each polling station that arrive at capture centers. In previous elections, some statistical regularities had been observed; three of them were selected to make predictions and were published in arXiv:1207.0078 [physics.soc-ph]. Using the database made public in July 2012, two of the predictions were completely fulfilled, while the third one was measured and confirmed using the database obtained upon request from the electoral authorities. The first two predictions confirmed by actual measures are: (ii) The Partido Revolucionario Institucional, PRI, is a sprinter and has a better performance in polling stations arriving late at capture centers during the process. (iii) The distribution of votes for this party is well described by a smooth function named a Daisy model. A Gamma distribution, compatible with a Daisy model, fits the distribution as well. The third prediction confirms that errare humanum est, since the error distributions of all the self-consistency variables appeared as a central power law with lateral lobes, as in the 2000 and 2006 electoral processes. The three measured regularities appeared regardless of the political environment.
Results on three predictions for July 2012 federal elections in Mexico based on past regularities.
Hernández-Saldaña, H
2013-01-01
The Presidential Election in Mexico of July 2012 was the third time that PREP, the Previous Electoral Results Program, has operated. PREP gives voting outcomes based on the electoral certificates of each polling station that arrive at capture centers. In previous elections, some statistical regularities had been observed; three of them were selected to make predictions and were published in arXiv:1207.0078 [physics.soc-ph]. Using the database made public in July 2012, two of the predictions were completely fulfilled, while the third one was measured and confirmed using the database obtained upon request from the electoral authorities. The first two predictions confirmed by actual measures are: (ii) The Partido Revolucionario Institucional, PRI, is a sprinter and has a better performance in polling stations arriving late at capture centers during the process. (iii) The distribution of votes for this party is well described by a smooth function named a Daisy model. A Gamma distribution, compatible with a Daisy model, fits the distribution as well. The third prediction confirms that errare humanum est, since the error distributions of all the self-consistency variables appeared as a central power law with lateral lobes, as in the 2000 and 2006 electoral processes. The three measured regularities appeared regardless of the political environment.
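A Gamma fit of the kind mentioned can be reproduced in outline with a method-of-moments estimator; the per-polling-station vote counts below are simulated, not the PREP database:

```python
import math
import random

def fit_gamma_moments(data):
    """Method-of-moments Gamma fit (shape k, scale theta):
    k = mean^2/var, theta = var/mean."""
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / (n - 1)
    return mean * mean / var, var / mean

random.seed(42)
# Hypothetical per-polling-station vote counts drawn from a Gamma law
sample = [random.gammavariate(4.0, 25.0) for _ in range(5000)]
k, theta = fit_gamma_moments(sample)  # should recover roughly (4, 25)
```

A goodness-of-fit test against the empirical vote-share histogram would then decide whether the fitted Gamma is also compatible with the Daisy-model family.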
Validating predictions from climate envelope models.
Directory of Open Access Journals (Sweden)
James I Watling
Full Text Available Climate envelope models are a potentially important conservation tool, but their ability to accurately forecast species' distributional shifts using independent survey data has not been fully evaluated. We created climate envelope models for 12 species of North American breeding birds previously shown to have experienced poleward range shifts. For each species, we evaluated three different approaches to climate envelope modeling that differed in the way they treated climate-induced range expansion and contraction, using random forests and maximum entropy modeling algorithms. All models were calibrated using occurrence data from 1967-1971 (t1) and evaluated using occurrence data from 1998-2002 (t2). Model sensitivity (the ability to correctly classify species presences) was greater using the maximum entropy algorithm than the random forest algorithm. Although sensitivity did not differ significantly among approaches, for many species, sensitivity was maximized using a hybrid approach that assumed range expansion, but not contraction, in t2. Species for which the hybrid approach resulted in the greatest improvement in sensitivity have been reported from more land cover types than species for which there was little difference in sensitivity between hybrid and dynamic approaches, suggesting that habitat generalists may be buffered somewhat against climate-induced range contractions. Specificity (the ability to correctly classify species absences) was maximized using the random forest algorithm and was lowest using the hybrid approach. Overall, our results suggest cautious optimism for the use of climate envelope models to forecast range shifts, but also underscore the importance of considering non-climate drivers of species range limits. The use of alternative climate envelope models that make different assumptions about range expansion and contraction is a new and potentially useful way to help inform our understanding of climate change effects on species.
Validating predictions from climate envelope models
Watling, J.; Bucklin, D.; Speroterra, C.; Brandt, L.; Cabal, C.; Romañach, Stephanie S.; Mazzotti, Frank J.
2013-01-01
Climate envelope models are a potentially important conservation tool, but their ability to accurately forecast species’ distributional shifts using independent survey data has not been fully evaluated. We created climate envelope models for 12 species of North American breeding birds previously shown to have experienced poleward range shifts. For each species, we evaluated three different approaches to climate envelope modeling that differed in the way they treated climate-induced range expansion and contraction, using random forests and maximum entropy modeling algorithms. All models were calibrated using occurrence data from 1967–1971 (t1) and evaluated using occurrence data from 1998–2002 (t2). Model sensitivity (the ability to correctly classify species presences) was greater using the maximum entropy algorithm than the random forest algorithm. Although sensitivity did not differ significantly among approaches, for many species, sensitivity was maximized using a hybrid approach that assumed range expansion, but not contraction, in t2. Species for which the hybrid approach resulted in the greatest improvement in sensitivity have been reported from more land cover types than species for which there was little difference in sensitivity between hybrid and dynamic approaches, suggesting that habitat generalists may be buffered somewhat against climate-induced range contractions. Specificity (the ability to correctly classify species absences) was maximized using the random forest algorithm and was lowest using the hybrid approach. Overall, our results suggest cautious optimism for the use of climate envelope models to forecast range shifts, but also underscore the importance of considering non-climate drivers of species range limits. The use of alternative climate envelope models that make different assumptions about range expansion and contraction is a new and potentially useful way to help inform our understanding of climate change effects on species.
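The sensitivity and specificity used to compare the modeling approaches reduce to simple confusion-matrix counts over the t2 survey cells. A minimal sketch with hypothetical presence/absence vectors:

```python
def sensitivity_specificity(predicted, observed):
    """Sensitivity = fraction of observed presences correctly predicted;
    specificity = fraction of observed absences correctly predicted."""
    tp = sum(1 for p, o in zip(predicted, observed) if p and o)
    tn = sum(1 for p, o in zip(predicted, observed) if not p and not o)
    fn = sum(1 for p, o in zip(predicted, observed) if not p and o)
    fp = sum(1 for p, o in zip(predicted, observed) if p and not o)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical t2 outcome for one species across 10 survey cells
pred = [1, 1, 1, 0, 0, 1, 0, 0, 1, 0]   # model-predicted occupancy
obs  = [1, 1, 0, 0, 0, 1, 0, 1, 1, 0]   # independent survey result
sens, spec = sensitivity_specificity(pred, obs)  # -> (0.8, 0.8)
```

The hybrid approach's trade-off shows up directly in these two numbers: assuming expansion but not contraction tends to raise sensitivity (fewer missed presences) at the cost of specificity (more false presences).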
Rate-Based Model Predictive Control of Turbofan Engine Clearance
DeCastro, Jonathan A.
2006-01-01
An innovative model predictive control strategy is developed for control of nonlinear aircraft propulsion systems and sub-systems. At the heart of the controller is a rate-based linear parameter-varying model that propagates the state derivatives across the prediction horizon, extending prediction fidelity to transient regimes where conventional models begin to lose validity. The new control law is applied to a demanding active clearance control application, where the objectives are to tightly regulate blade tip clearances and also anticipate and avoid detrimental blade-shroud rub occurrences by optimally maintaining a predefined minimum clearance. Simulation results verify that the rate-based controller is capable of satisfying the objectives during realistic flight scenarios where both a conventional Jacobian-based model predictive control law and an unconstrained linear-quadratic optimal controller are incapable of doing so. The controller is evaluated using a variety of different actuators, illustrating the efficacy and versatility of the control approach. It is concluded that the new strategy has promise for this and other nonlinear aerospace applications that place high importance on the attainment of control objectives during transient regimes.
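The core idea of a rate-based prediction model can be sketched in a few lines: the linearized dynamics propagate the state *derivative* across the horizon, and the state itself is recovered by integrating that rate. The scalar system below (A, B, step size, horizon) is hypothetical, not the engine clearance model:

```python
def predict_horizon(x0, xdot0, u_rates, A, B, dt):
    """Rate-based prediction: propagate the state derivative with the
    linearized dynamics, then integrate the rate to recover the state."""
    x, xdot = x0, xdot0
    trajectory = [x]
    for udot in u_rates:
        xdot = xdot + dt * (A * xdot + B * udot)  # derivative dynamics
        x = x + dt * xdot                          # integrate rate -> state
        trajectory.append(x)
    return trajectory

# Scalar toy example: response of a clearance-like state to a constant
# actuator rate over a 10-step prediction horizon (values hypothetical)
traj = predict_horizon(x0=1.0, xdot0=0.0, u_rates=[0.5] * 10,
                       A=-2.0, B=1.0, dt=0.05)
```

An MPC layer would then optimize the sequence `u_rates` subject to a minimum-clearance constraint on every entry of `trajectory`; because the derivative is a state, the prediction stays faithful during fast transients where a conventional point-linearized model drifts.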
Formability prediction for AHSS materials using damage models
Amaral, R.; Santos, Abel D.; José, César de Sá; Miranda, Sara
2017-05-01
Advanced high-strength steels (AHSS) are seeing increased use, mostly due to lightweight design in the automobile industry and strict regulations on safety and greenhouse gas emissions. However, these materials, characterized by a high strength-to-weight ratio, stiffness and high work hardening at early stages of plastic deformation, have posed many challenges in the sheet metal industry, mainly their low formability and different behaviour compared to traditional steels. This represents a challenging task, both in obtaining a successful component and in using numerical simulation to predict material behaviour and its fracture limits. Although numerical prediction of critical strains in sheet metal forming processes is still very often based on classic forming limit diagrams, alternative approaches can use damage models, which are based on stress states to predict failure during the forming process and can be classified as empirical, physics-based and phenomenological models. In the present paper a comparative analysis of different ductile damage models is carried out in order to numerically evaluate two isotropic coupled damage models, proposed by Johnson-Cook and Gurson-Tvergaard-Needleman (GTN), corresponding to the first two groups of the previous classification. Finite element analysis is used considering these damage mechanics approaches, and the obtained results are compared with experimental Nakajima tests, making it possible to evaluate and validate the ability of the previously defined approaches to predict damage and formability limits.
Formability prediction for AHSS materials using damage models
International Nuclear Information System (INIS)
Amaral, R.; Miranda, Sara; Santos, Abel D.; José, César de Sá
2017-01-01
Advanced high-strength steels (AHSS) are seeing increased use, mostly due to lightweight design in the automobile industry and strict regulations on safety and greenhouse gas emissions. However, these materials, characterized by a high strength-to-weight ratio, stiffness and high work hardening at early stages of plastic deformation, have posed many challenges in the sheet metal industry, mainly their low formability and different behaviour compared to traditional steels. This represents a challenging task, both in obtaining a successful component and in using numerical simulation to predict material behaviour and its fracture limits. Although numerical prediction of critical strains in sheet metal forming processes is still very often based on classic forming limit diagrams, alternative approaches can use damage models, which are based on stress states to predict failure during the forming process and can be classified as empirical, physics-based and phenomenological models. In the present paper a comparative analysis of different ductile damage models is carried out in order to numerically evaluate two isotropic coupled damage models, proposed by Johnson-Cook and Gurson-Tvergaard-Needleman (GTN), corresponding to the first two groups of the previous classification. Finite element analysis is used considering these damage mechanics approaches, and the obtained results are compared with experimental Nakajima tests, making it possible to evaluate and validate the ability of the previously defined approaches to predict damage and formability limits. (paper)
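The Johnson-Cook side of the comparison follows a simple pattern: a stress-state-dependent failure strain and linear damage accumulation until D reaches 1. A sketch with hypothetical material constants (not calibrated AHSS values):

```python
import math

def jc_failure_strain(triax, rate_ratio, homologous_T, D):
    """Johnson-Cook failure strain
    eps_f = [D1 + D2*exp(D3*triax)] * [1 + D4*ln(rate)] * [1 + D5*T*].
    D = (D1..D5) are material constants; values used below are hypothetical."""
    D1, D2, D3, D4, D5 = D
    return ((D1 + D2 * math.exp(D3 * triax))
            * (1.0 + D4 * math.log(max(rate_ratio, 1e-12)))
            * (1.0 + D5 * homologous_T))

def accumulate_damage(strain_increments, triax, rate_ratio, T, D):
    """Linear accumulation D_sum = sum(d_eps / eps_f); element fails
    when D_sum reaches 1 (the coupled model would also degrade stress)."""
    eps_f = jc_failure_strain(triax, rate_ratio, T, D)
    damage = 0.0
    for d_eps in strain_increments:
        damage += d_eps / eps_f
        if damage >= 1.0:
            return damage, True
    return damage, False

D = (0.05, 3.44, -2.12, 0.002, 0.61)  # hypothetical constants
dmg, failed = accumulate_damage([0.02] * 40, triax=0.33,
                                rate_ratio=1.0, T=0.0, D=D)
```

In a finite element run the triaxiality and increments vary per element and per step, which is how the stress-state dependence distinguishes these models from a strain-based forming limit diagram.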
Comprehensive fluence model for absolute portal dose image prediction
International Nuclear Information System (INIS)
Chytyk, K.; McCurdy, B. M. C.
2009-01-01
Amorphous silicon (a-Si) electronic portal imaging devices (EPIDs) continue to be investigated as treatment verification tools, with a particular focus on intensity modulated radiation therapy (IMRT). This verification could be accomplished through a comparison of measured portal images to predicted portal dose images. A general fluence determination tailored to portal dose image prediction would be a great asset in order to model the complex modulation of IMRT. A proposed physics-based parameter fluence model was commissioned by matching predicted EPID images to corresponding measured EPID images of multileaf collimator (MLC) defined fields. The two-source fluence model was composed of a focal Gaussian and an extrafocal Gaussian-like source. Specific aspects of the MLC and secondary collimators were also modeled (e.g., jaw and MLC transmission factors, MLC rounded leaf tips, tongue and groove effect, interleaf leakage, and leaf offsets). Several unique aspects of the model were developed based on the results of detailed Monte Carlo simulations of the linear accelerator including (1) use of a non-Gaussian extrafocal fluence source function, (2) separate energy spectra used for focal and extrafocal fluence, and (3) different off-axis energy spectra softening used for focal and extrafocal fluences. The predicted energy fluence was then convolved with Monte Carlo generated, EPID-specific dose kernels to convert incident fluence to dose delivered to the EPID. Measured EPID data were obtained with an a-Si EPID for various MLC-defined fields (from 1×1 to 20×20 cm²) over a range of source-to-detector distances. These measured profiles were used to determine the fluence model parameters in a process analogous to the commissioning of a treatment planning system. The resulting model was tested on 20 clinical IMRT plans, including ten prostate and ten oropharyngeal cases. The model predicted the open-field profiles within 2%, 2 mm, while a mean of 96.6% of pixels over all
Nonlinear Model Predictive Control with Constraint Satisfactions for a Quadcopter
Wang, Ye; Ramirez-Jaime, Andres; Xu, Feng; Puig, Vicenç
2017-01-01
This paper presents a nonlinear model predictive control (NMPC) strategy combined with constraint satisfactions for a quadcopter. The full dynamics of the quadcopter describing the attitude and position are nonlinear, which are quite sensitive to changes of inputs and disturbances. By means of constraint satisfactions, partial nonlinearities and modeling errors of the control-oriented model of full dynamics can be transformed into the inequality constraints. Subsequently, the quadcopter can be controlled by an NMPC controller with the updated constraints generated by constraint satisfactions. Finally, the simulation results applied to a quadcopter simulator are provided to show the effectiveness of the proposed strategy.
Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.
Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen
2014-02-01
The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards that were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., a relatively static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are first classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP), which uses the background modeled from the original input frames as the long-term reference, and the background difference prediction (BDP), which predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency by using the higher-quality background as the reference, whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that the BMAP can achieve at least twice the compression ratio on surveillance videos as AVC (MPEG-4 Advanced Video Coding) high profile, with only a slight additional encoding complexity. Moreover, for the foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
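The two ingredients, a background model built from original frames and a background-difference residual, can be sketched with a running-average background (a simple stand-in for the paper's background modeling) on a tiny 2x2 "video":

```python
def update_background(background, frame, alpha=0.05):
    """Running-average background model: slowly blend each new frame in,
    so the static scene dominates and moving objects are averaged out."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(background, frame)]

def background_difference(frame, background):
    """BDP idea: code the residual after subtracting background pixels;
    static regions become near-zero and therefore cheap to encode."""
    return [[f - b for b, f in zip(brow, frow)]
            for brow, frow in zip(background, frame)]

# Static background of 100, one "moving object" pixel at 180
bg = [[100.0, 100.0], [100.0, 100.0]]
frame = [[100.0, 100.0], [100.0, 180.0]]
residual = background_difference(frame, bg)  # mostly zeros
bg = update_background(bg, frame)
```

BRP corresponds to referencing `bg` directly as a long-term reference picture; BDP corresponds to predicting in the `residual` domain, which helps blocks that mix foreground and background pixels.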
Shin, Yung C.; Bailey, Neil; Katinas, Christopher; Tan, Wenda
2018-01-01
This paper presents an overview of vertically integrated comprehensive predictive modeling capabilities for directed energy deposition processes, which have been developed at Purdue University. The overall predictive model consists of several vertically integrated modules, including a powder flow model, a molten pool model, a microstructure prediction model and a residual stress model, which can be used for predicting the mechanical properties of parts additively manufactured by directed energy deposition with blown powder, as well as by other additive manufacturing processes. The critical governing equations of each model and how the various modules are connected are illustrated. Various illustrative results, along with corresponding experimental validation, are presented to demonstrate the capabilities and fidelity of the models. The good correlation with experimental results shows that the integrated models can be used to design metal additive manufacturing processes and predict the resulting microstructure and mechanical properties.
Ground Motion Prediction Model Using Artificial Neural Network
Dhanya, J.; Raghukanth, S. T. G.
2018-03-01
This article focuses on developing a ground motion prediction equation based on the artificial neural network (ANN) technique for shallow crustal earthquakes. A hybrid technique combining a genetic algorithm and the Levenberg-Marquardt technique is used for training the model. The present model is developed to predict peak ground velocity and 5% damped spectral acceleration. The input parameters for the prediction are moment magnitude (Mw), closest distance to the rupture plane (Rrup), shear wave velocity in the region (Vs30) and focal mechanism (F). A total of 13,552 ground motion records from 288 earthquakes provided by the updated NGA-West2 database released by the Pacific Earthquake Engineering Research Center are utilized to develop the model. The ANN architecture considered for the model consists of 192 unknowns, including the weights and biases of all the interconnected nodes. The performance of the model is observed to be within the prescribed error limits. In addition, the results from the study are found to be comparable with existing relations in the global database. The developed model is further demonstrated by estimating site-specific response spectra for Shimla city, located in the Himalayan region.
Technical note: A linear model for predicting δ13Cprotein.
Pestle, William J; Hubbe, Mark; Smith, Erin K; Stevenson, Joseph M
2015-08-01
Development of a model for the prediction of δ13Cprotein from δ13Ccollagen and Δ13Cap-co. Model-generated values could, in turn, serve as "consumer" inputs for multisource mixture modeling of paleodiet. Linear regression analysis of previously published controlled diet data facilitated the development of a mathematical model for predicting δ13Cprotein (and an experimentally generated error term) from isotopic data routinely generated during the analysis of osseous remains (δ13Cco and Δ13Cap-co). Regression analysis resulted in a two-term linear model (δ13Cprotein (‰) = (0.78 × δ13Cco) − (0.58 × Δ13Cap-co) − 4.7), possessing a high R-value of 0.93 (r² = 0.86) when applied to the analysis of human osseous remains. These predicted values are ideal for use in multisource mixture modeling of dietary protein source contribution. © 2015 Wiley Periodicals, Inc.
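The published two-term model is simple enough to state directly in code; the collagen and apatite-collagen spacing values passed in below are hypothetical illustrations, not data from the study:

```python
def predict_d13c_protein(d13c_collagen, delta13c_ap_co):
    """Two-term linear model from the abstract (values in permil):
    d13C_protein = 0.78*d13C_collagen - 0.58*Delta13C_ap-co - 4.7."""
    return 0.78 * d13c_collagen - 0.58 * delta13c_ap_co - 4.7

# Hypothetical bone values (permil, VPDB): collagen -19.0, spacing 4.5
est = predict_d13c_protein(-19.0, 4.5)  # -> about -22.13 permil
```

The experimentally derived error term quoted in the note would then be attached to `est` before it is used as a consumer endpoint in mixture modeling.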
Predictive modeling of liquid-sodium thermal–hydraulics experiments and computations
International Nuclear Information System (INIS)
Arslan, Erkan; Cacuci, Dan G.
2014-01-01
Highlights: • We applied the predictive modeling method of Cacuci and Ionescu-Bujor (2010). • We assimilated data from sodium flow experiments. • We used computational fluid dynamics simulations of sodium experiments. • The predictive modeling method greatly reduced uncertainties in predicted results. - Abstract: This work applies the predictive modeling procedure formulated by Cacuci and Ionescu-Bujor (2010) to assimilate data from liquid-sodium thermal–hydraulics experiments in order to reduce systematically the uncertainties in the predictions of computational fluid dynamics (CFD) simulations. The predicted CFD results for the best-estimate model parameters and results describing sodium-flow velocities and temperature distributions are shown to be significantly more precise than the original computations and experiments, in that the predicted uncertainties for the best-estimate results and model parameters are significantly smaller than both the originally computed and the experimental uncertainties.
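The central claim, that assimilated best-estimate uncertainties are smaller than both the computed and the measured ones, has a simple one-dimensional analogue in inverse-variance-weighted data assimilation. This is only a minimal sketch of that principle, not the Cacuci–Ionescu-Bujor formalism; the numbers are invented:

```python
import math

def assimilate(x_comp, s_comp, x_meas, s_meas):
    """Combine a computed and a measured estimate of the same quantity
    (e.g. a sodium temperature) by inverse-variance weighting.
    Returns the best-estimate value and its reduced standard deviation."""
    w_c, w_m = 1.0 / s_comp**2, 1.0 / s_meas**2
    x_best = (w_c * x_comp + w_m * x_meas) / (w_c + w_m)
    s_best = math.sqrt(1.0 / (w_c + w_m))
    return x_best, s_best

# Hypothetical temperature: CFD says 550 ± 5 K, experiment says 556 ± 4 K
x_best, s_best = assimilate(550.0, 5.0, 556.0, 4.0)
# s_best < min(5.0, 4.0): the assimilated uncertainty is smaller than
# either the computed or the experimental one.
```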
Development of estrogen receptor beta binding prediction model using large sets of chemicals.
Sakkiah, Sugunadevi; Selvaraj, Chandrabose; Gong, Ping; Zhang, Chaoyang; Tong, Weida; Hong, Huixiao
2017-11-03
We developed an ERβ binding prediction model to facilitate identification of chemicals that specifically bind ERβ or ERα, to be used together with our previously developed ERα binding model. Decision Forest was used to train the ERβ binding prediction model based on a large set of compounds obtained from EADB. Model performance was estimated through 1000 iterations of 5-fold cross validations. Prediction confidence was analyzed using predictions from the cross validations. Informative chemical features for ERβ binding were identified through analysis of the frequency data of the chemical descriptors used in the models in the 5-fold cross validations. 1000 permutations were conducted to assess chance correlation. The average accuracy of the 5-fold cross validations was 93.14%, with a standard deviation of 0.64%. Prediction confidence analysis indicated that the higher the prediction confidence, the more accurate the predictions. Permutation testing revealed that the prediction model is unlikely to have been generated by chance. Eighteen informative descriptors were identified as important to ERβ binding prediction. Application of the prediction model to data from the ToxCast project yielded a very high sensitivity of 90-92%. Our results demonstrate that ERβ binding of chemicals can be accurately predicted using the developed model. Coupled with our previously developed ERα prediction model, this model is expected to facilitate drug development through identification of chemicals that specifically bind ERβ or ERα.
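The 5-fold cross-validation mechanics used to estimate model performance can be sketched as follows. The classifier here is a trivial single-descriptor threshold stump standing in for Decision Forest, and the data are synthetic; only the fold bookkeeping mirrors the procedure in the abstract:

```python
import numpy as np

def five_fold_accuracy(X, y, n_folds=5, seed=0):
    """One iteration of k-fold cross validation with a trivial
    threshold classifier (a stand-in for the actual Decision Forest)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, n_folds)
    accs = []
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        thr = X[train, 0].mean()                 # "train" the stump
        pred = (X[test, 0] > thr).astype(int)
        accs.append((pred == y[test]).mean())
    return float(np.mean(accs)), float(np.std(accs))

# Synthetic data: descriptor 0 separates non-binders (0) from binders (1)
X = np.vstack([np.random.default_rng(1).normal(0, 1, (50, 3)),
               np.random.default_rng(2).normal(3, 1, (50, 3))])
y = np.array([0] * 50 + [1] * 50)
mean_acc, sd_acc = five_fold_accuracy(X, y)
```

Repeating this over 1000 random permutations of the fold assignment (and, for the chance-correlation test, over shuffled labels) yields the accuracy mean and standard deviation reported in the study.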
Estimating Predictive Variance for Statistical Gas Distribution Modelling
International Nuclear Information System (INIS)
Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo
2009-01-01
Recent publications in statistical gas distribution modelling have proposed algorithms that model both the mean and the variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but rather a significant step forward for the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance makes it possible to evaluate model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
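A minimal illustration of the likelihood-based evaluation the paper advocates, assuming for simplicity that the model emits Gaussian predictive distributions (the actual gas-distribution models need not be Gaussian):

```python
import math

def avg_nlpd(means, variances, observations):
    """Average negative log predictive density under Gaussian predictions
    N(mean, variance) -- lower is better.  This is the kind of score that
    only becomes available once the model estimates predictive variance."""
    total = 0.0
    for m, v, x in zip(means, variances, observations):
        total += 0.5 * math.log(2 * math.pi * v) + (x - m) ** 2 / (2 * v)
    return total / len(observations)

# An over-confident model (tiny variance) is penalised far harder on the
# same errors than a model whose variance matches the fluctuations.
calibrated = avg_nlpd([1.0, 2.0], [1.0, 1.0], [2.0, 1.0])
overconfident = avg_nlpd([1.0, 2.0], [0.01, 0.01], [2.0, 1.0])
```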
High Precision Clock Bias Prediction Model in Clock Synchronization System
Directory of Open Access Journals (Sweden)
Zan Liu
2016-01-01
Full Text Available Time synchronization is a fundamental requirement for many services provided by a distributed system. Clock calibration through a time signal is the usual way to realize synchronization among the clocks used in a distributed system. Interference with time signal transmission, or equipment failures, may cause synchronization to fail. To solve this problem, a clock bias prediction module is run in parallel with the clock calibration system. To improve the precision of clock bias prediction, the first-order grey model with one variable, GM(1,1), is proposed. In the traditional GM(1,1) model, the combination of parameters determined by the least squares criterion is not optimal; therefore, particle swarm optimization (PSO) is used to optimize the GM(1,1) model. At the same time, in order to prevent PSO from getting stuck in local optima and to improve its efficiency, mechanisms of double subgroups and a nonlinearly decreasing inertia weight are proposed. In order to test the precision of the improved model, we designed clock calibration experiments in which the time signal was transferred via radio and wired channels, respectively. The improved model was built on the basis of the clock bias acquired in the experiments. The results show that the improved model is superior to other models both in precision and in stability. The precision of the improved model increased by 66.4%-76.7%.
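A textbook GM(1,1) sketch, without the paper's PSO parameter tuning: accumulate the series, fit the grey parameters by least squares on the background values, and restore forecasts by differencing. The test series is an invented near-exponential bias drift, which GM(1,1) models almost exactly:

```python
import numpy as np

def gm11_fit_predict(x0, n_ahead=1):
    """Basic GM(1,1) grey model (least-squares parameters; no PSO tuning).
    x0: 1-D array of positive observations (e.g. clock bias samples).
    Returns the fitted/predicted series of length len(x0) + n_ahead."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                              # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                   # background values
    B = np.column_stack([-z1, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # grey parameters
    k = np.arange(n + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # time response
    x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)])  # restore by differencing
    return x0_hat

# Illustrative near-exponential clock-bias drift
series = 10.0 * 1.05 ** np.arange(8)
pred = gm11_fit_predict(series, n_ahead=2)
```

In the paper, PSO replaces the plain least-squares step above by searching for a better (a, b) combination.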
Results of the Marine Ice Sheet Model Intercomparison Project, MISMIP
Directory of Open Access Journals (Sweden)
F. Pattyn
2012-05-01
Full Text Available Predictions of marine ice-sheet behaviour require models that are able to robustly simulate grounding line migration. We present results of an intercomparison exercise for marine ice-sheet models. Verification is effected by comparison with approximate analytical solutions for flux across the grounding line using simplified geometrical configurations (no lateral variations, no effects of lateral buttressing). Unique steady state grounding line positions exist for ice sheets on a downward sloping bed, while hysteresis occurs across an overdeepened bed, and stable steady state grounding line positions only occur on the downward-sloping sections. Models based on the shallow ice approximation, which does not resolve extensional stresses, do not reproduce the approximate analytical results unless appropriate parameterizations for ice flux are imposed at the grounding line. For extensional-stress resolving "shelfy stream" models, differences between model results were mainly due to the choice of spatial discretization. Moving grid methods were found to be the most accurate at capturing grounding line evolution, since they track the grounding line explicitly. Adaptive mesh refinement can further improve accuracy, including for fixed grid models, which generally perform poorly at coarse resolution. Fixed grid models with nested grid representations of the grounding line are able to generate accurate steady state positions, but can be inaccurate over transients. Only one full-Stokes model was included in the intercomparison, and consequently the accuracy of shelfy stream models as approximations of full-Stokes models remains to be determined in detail, especially during transients.
Predictive Models for Photovoltaic Electricity Production in Hot Weather Conditions
Directory of Open Access Journals (Sweden)
Jabar H. Yousif
2017-07-01
Full Text Available The process of finding a correct forecast equation for photovoltaic electricity production from renewable sources is an important matter, since knowing the factors affecting the increase in the proportion of renewable energy production and reducing the cost of the product has economic and scientific benefits. This paper proposes a mathematical model for forecasting energy production in photovoltaic (PV) panels based on a self-organizing feature map (SOFM) model. The proposed model is compared with other models, including the multi-layer perceptron (MLP) and support vector machine (SVM) models. Moreover, a mathematical model based on a polynomial function for fitting the desired output is proposed. Different practical measurement methods are used to validate the findings of the proposed neural and mathematical models, such as mean square error (MSE), mean absolute error (MAE), correlation coefficient (R), and coefficient of determination (R²). The proposed SOFM model achieved a final MSE of 0.0007 in the training phase and 0.0005 in the cross-validation phase. In contrast, the SVM model resulted in a small MSE value equal to 0.0058, while the MLP model achieved a final MSE of 0.026 with a correlation coefficient of 0.9989, which indicates a strong relationship between the input and output variables. The proposed SOFM model closely fits the desired results based on the R² value, which is equal to 0.9555. Finally, the comparison of MAE results for the three models shows that the SOFM model achieved the best result of 0.36156, whereas the SVM and MLP models yielded 4.53761 and 3.63927, respectively. A small MAE value indicates that the output of the SOFM model closely fits the actual results and predicts the desired output.
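The four validation measures used to compare the models are standard and easily computed; the sample arrays below are illustrative, not the paper's data:

```python
import numpy as np

def evaluate(y_true, y_pred):
    """MSE, MAE, correlation coefficient R and coefficient of
    determination R^2 -- the four measures used in the comparison."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    mse = np.mean((y_true - y_pred) ** 2)
    mae = np.mean(np.abs(y_true - y_pred))
    r = np.corrcoef(y_true, y_pred)[0, 1]
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return mse, mae, r, r2

mse, mae, r, r2 = evaluate([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```

Note that R measures linear association while R² measures explained variance; a model can have high R but poor R² if it is biased, which is why both are reported.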
Hand Posture Prediction Using Neural Networks within a Biomechanical Model
Directory of Open Access Journals (Sweden)
Marta C. Mora
2012-10-01
Full Text Available This paper proposes the use of artificial neural networks (ANNs) in the framework of a biomechanical hand model for grasping. ANNs enhance the model capabilities as they substitute estimated data for the experimental inputs required by the grasping algorithm used. These inputs are the tentative grasping posture and the most open posture during grasping. As a consequence, more realistic grasping postures are predicted by the grasping algorithm, along with the contact information required by the dynamic biomechanical model (contact points and normals). Several neural network architectures are tested and compared in terms of prediction errors, leading to encouraging results. The performance of the overall proposal is also shown through simulation, where a grasping experiment is replicated and compared to the real grasping data collected by a data glove device.
Models of Affective Decision Making: How Do Feelings Predict Choice?
Charpentier, Caroline J; De Neve, Jan-Emmanuel; Li, Xinyi; Roiser, Jonathan P; Sharot, Tali
2016-06-01
Intuitively, how you feel about potential outcomes will determine your decisions. Indeed, an implicit assumption in one of the most influential theories in psychology, prospect theory, is that feelings govern choice. Surprisingly, however, very little is known about the rules by which feelings are transformed into decisions. Here, we specified a computational model that used feelings to predict choices. We found that this model predicted choice better than existing value-based models, showing a unique contribution of feelings to decisions, over and above value. Similar to the value function in prospect theory, our feeling function showed diminished sensitivity to outcomes as value increased. However, loss aversion in choice was explained by an asymmetry in how feelings about losses and gains were weighted when making a decision, not by an asymmetry in the feelings themselves. The results provide new insights into how feelings are utilized to reach a decision. © The Author(s) 2016.
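A hedged sketch of the kind of model the abstract describes: a feeling function with diminishing sensitivity, and loss aversion entering as an asymmetric decision weight on feelings rather than as an asymmetry in the feelings themselves. The functional forms and parameter values are illustrative assumptions, not the authors' fitted model:

```python
import math

def feeling(x, rho=0.7):
    """Feeling about an outcome x; diminishing sensitivity via a power
    function, symmetric for gains and losses (rho is hypothetical)."""
    return math.copysign(abs(x) ** rho, x)

def p_accept(gain, loss, w_loss=1.5, temp=1.0):
    """Probability of accepting a 50/50 mixed gamble.  Loss aversion
    enters as a decision weight w_loss > 1 on the feeling about the
    loss, not as an asymmetry in the feelings themselves."""
    net = 0.5 * feeling(gain) + 0.5 * w_loss * feeling(-abs(loss))
    return 1.0 / (1.0 + math.exp(-temp * net))

p = p_accept(gain=10.0, loss=10.0)  # weighted losses loom larger, so p < 0.5
```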
Large urban fire environment: trends and model city predictions
International Nuclear Information System (INIS)
Larson, D.A.; Small, R.D.
1983-01-01
The urban fire environment that would result from a megaton-yield nuclear weapon burst is considered. The dependence of temperatures and velocities on fire size, burning intensity, turbulence, and radiation is explored, and specific calculations for three model urban areas are presented. In all cases, high velocity fire winds are predicted. The model-city results show the influence of building density and urban sprawl on the fire environment. Additional calculations consider large-area fires with the burning intensity reduced in a blast-damaged urban center.
Meteorological uncertainty of atmospheric dispersion model results (MUD)
Energy Technology Data Exchange (ETDEWEB)
Havskov Soerensen, J.; Amstrup, B.; Feddersen, H. [Danish Meteorological Institute, Copenhagen (Denmark)] [and others]
2013-08-15
The MUD project addresses assessment of uncertainties of atmospheric dispersion model predictions, as well as possibilities for optimum presentation to decision makers. Previously, it has not been possible to estimate such uncertainties quantitatively, but merely to calculate the 'most likely' dispersion scenario. However, recent developments in numerical weather prediction (NWP) include probabilistic forecasting techniques, which can be utilised also for long-range atmospheric dispersion models. The ensemble statistical methods developed and applied to NWP models aim at describing the inherent uncertainties of the meteorological model results. These uncertainties stem from e.g. limits in meteorological observations used to initialise meteorological forecast series. By perturbing e.g. the initial state of an NWP model run in agreement with the available observational data, an ensemble of meteorological forecasts is produced from which uncertainties in the various meteorological parameters are estimated, e.g. probabilities for rain. Corresponding ensembles of atmospheric dispersion can now be computed from which uncertainties of predicted radionuclide concentration and deposition patterns can be derived. (Author)
Predicting the ungauged basin: Model validation and realism assessment
Directory of Open Access Journals (Sweden)
Tim van Emmerik
2015-10-01
Full Text Available The hydrological decade on Predictions in Ungauged Basins (PUB) led to many new insights in model development, calibration strategies, data acquisition and uncertainty analysis. Due to the limited number of published studies on genuinely ungauged basins, model validation and realism assessment of model outcomes have not been discussed to a great extent. With this paper we aim to contribute to the discussion on how one can determine the value and validity of a hydrological model developed for an ungauged basin. As in many cases no local, or even regional, data are available, alternative methods should be applied. Using a PUB case study in a genuinely ungauged basin in southern Cambodia, we give several examples of how one can use different types of soft data to improve model design, calibrate and validate the model, and assess the realism of the model output. A rainfall-runoff model was coupled to an irrigation reservoir, allowing the use of additional and unconventional data. The model was mainly forced with remote sensing data, and local knowledge was used to constrain the parameters. Model realism assessment was done using data from surveys. This resulted in a successful reconstruction of the reservoir dynamics, and revealed the different hydrological characteristics of the two topographical classes. This paper does not present a generic approach that can be transferred to other ungauged catchments, but it aims to show how clever model design and alternative data acquisition can result in a valuable hydrological model for an ungauged catchment.
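The coupling that made reservoir observations usable can be sketched as a single-bucket rainfall-runoff model feeding an irrigation reservoir. This is a minimal toy, not the paper's model, and every parameter value below is invented for illustration:

```python
def simulate(rain, pet, s_max=200.0, k=0.1, res_cap=500.0, demand=2.0):
    """Toy bucket rainfall-runoff model coupled to an irrigation reservoir.
    rain, pet: per-step rainfall and potential evaporation (mm).
    Returns the reservoir level trace, the observable that soft data
    (surveys, local knowledge) can constrain in an ungauged basin."""
    soil, res = 50.0, 100.0            # initial storages (illustrative)
    res_levels = []
    for p, e in zip(rain, pet):
        soil = soil + p - min(e, soil)                 # wetting minus evaporation
        runoff = max(soil - s_max, 0.0) + k * min(soil, s_max)
        soil = min(soil, s_max) - k * min(soil, s_max)
        res = min(res + runoff - min(demand, res), res_cap)  # inflow minus irrigation
        res_levels.append(res)
    return res_levels

levels = simulate(rain=[10, 0, 30, 0, 0, 20], pet=[4, 4, 4, 4, 4, 4])
```

The point of the coupling is that even without discharge gauges, the simulated reservoir trace can be checked against survey-based water-level recollections.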
Computational neurorehabilitation: modeling plasticity and learning to predict recovery.
Reinkensmeyer, David J; Burdet, Etienne; Casadio, Maura; Krakauer, John W; Kwakkel, Gert; Lang, Catherine E; Swinnen, Stephan P; Ward, Nick S; Schweighofer, Nicolas
2016-04-30
Despite progress in using computational approaches to inform medicine and neuroscience in the last 30 years, there have been few attempts to model the mechanisms underlying sensorimotor rehabilitation. We argue that a fundamental understanding of neurologic recovery, and as a result accurate predictions at the individual level, will be facilitated by developing computational models of the salient neural processes, including plasticity and learning systems of the brain, and integrating them into a context specific to rehabilitation. Here, we therefore discuss Computational Neurorehabilitation, a newly emerging field aimed at modeling plasticity and motor learning to understand and improve movement recovery of individuals with neurologic impairment. We first explain how the emergence of robotics and wearable sensors for rehabilitation is providing data that make development and testing of such models increasingly feasible. We then review key aspects of plasticity and motor learning that such models will incorporate. We proceed by discussing how computational neurorehabilitation models relate to the current benchmark in rehabilitation modeling - regression-based, prognostic modeling. We then critically discuss the first computational neurorehabilitation models, which have primarily focused on modeling rehabilitation of the upper extremity after stroke, and show how even simple models have produced novel ideas for future investigation. Finally, we conclude with key directions for future research, anticipating that soon we will see the emergence of mechanistic models of motor recovery that are informed by clinical imaging results and driven by the actual movement content of rehabilitation therapy as well as wearable sensor-based records of daily activity.
Enhanced PID vs model predictive control applied to BLDC motor
Gaya, M. S.; Muhammad, Auwal; Aliyu Abdulkadir, Rabiu; Salim, S. N. S.; Madugu, I. S.; Tijjani, Aminu; Aminu Yusuf, Lukman; Dauda Umar, Ibrahim; Khairi, M. T. M.
2018-01-01
The brushless direct current (BLDC) motor is a multivariable and highly complex nonlinear system. Variation of internal parameter values with the environment or reference signal increases the difficulty of controlling the BLDC motor effectively. Advanced control strategies (like model predictive control) often have to be integrated to satisfy the control requirements. Enhancing or properly tuning a conventional algorithm can achieve the desired performance. This paper presents a performance comparison of an enhanced PID and Model Predictive Control (MPC) applied to a brushless direct current motor. The simulation results demonstrate that the PSO-PID (the enhanced, particle-swarm-tuned PID) is slightly better than the PID and MPC in tracking the trajectory of the reference signal. The proposed schemes could be useful algorithms for such systems.
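A minimal sketch of the conventional controller being tuned: a discrete PI(D) speed loop around a first-order motor model. The plant constants and gains here are arbitrary placeholders (the paper's PSO would search for the gains), and the toy first-order plant stands in for the full nonlinear BLDC dynamics:

```python
def simulate_pid(kp=5.0, ki=10.0, kd=0.0, setpoint=1.0, dt=0.01, steps=2000):
    """Discrete PID speed loop around a toy first-order motor model:
    dw/dt = -a*w + b*u.  Returns the final speed after `steps` updates."""
    a, b = 1.0, 1.0                  # illustrative plant constants
    w, integ, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - w
        integ += err * dt            # integral term removes steady-state error
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        prev_err = err
        w += dt * (-a * w + b * u)   # forward-Euler plant update
    return w

w_final = simulate_pid()             # settles at the speed setpoint
```

A PSO-based enhancement would wrap this simulation in a fitness function (e.g. integrated absolute error) and search over (kp, ki, kd).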
Assessing Discriminative Performance at External Validation of Clinical Prediction Models.
Directory of Open Access Journals (Sweden)
Daan Nieboer
Full Text Available External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from the development to the external validation setting. We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in a validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation populations. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients.
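The c-statistic at the centre of the debate is just pairwise concordance. A brute-force version (fine for small validation sets; rank-based formulas scale better) can be written as:

```python
from itertools import product

def c_statistic(scores_pos, scores_neg):
    """Concordance (c-statistic / AUC): the fraction of event/non-event
    pairs in which the event receives the higher predicted risk;
    ties count one half."""
    pairs = list(product(scores_pos, scores_neg))
    concordant = sum(1.0 for p, n in pairs if p > n)
    ties = sum(0.5 for p, n in pairs if p == n)
    return (concordant + ties) / len(pairs)

c = c_statistic([0.9, 0.6], [0.7, 0.4])  # 3 of 4 pairs concordant -> 0.75
```

Because concordance depends on the spread of predicted risks, a more homogeneous validation case-mix lowers the c-statistic even when the regression coefficients are correct, which is precisely the confounding the abstract warns about.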
Nonconvex model predictive control for commercial refrigeration
Gybel Hovgaard, Tobias; Boyd, Stephen; Larsen, Lars F. S.; Bagterp Jørgensen, John
2013-08-01
We consider the control of a commercial multi-zone refrigeration system, consisting of several cooling units that share a common compressor, used to cool multiple areas or rooms. In each time period we choose the cooling capacity of each unit and a common evaporation temperature. The goal is to minimise the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear and the constraints are convex. The cost function, however, is nonconvex due to the temperature dependence of thermodynamic efficiency. To handle this nonconvexity we propose a sequential convex optimisation method, which typically converges in 5 or so iterations. We employ a fast convex quadratic programming solver to carry out the iterations, which is more than fast enough to run in real time. We demonstrate our method on a realistic model, with a full year simulation and 15-minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost savings, on the order of 30%, compared to a standard thermostat-based control system. Perhaps more importantly, we see that the method exhibits a sophisticated response to real-time variations in electricity prices. This demand response is critical to help balance real-time uncertainties in generation capacity associated with large penetration of intermittent renewable energy sources in a future smart grid.
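The sequential convex idea can be shown on a toy scalar problem (nothing like the refrigeration model itself): replace the nonconvex part of the cost by a convex quadratic upper bound at the current iterate, minimise the surrogate in closed form, and repeat. The objective below is invented purely for illustration:

```python
import math

def seq_convex_min(x0=1.0, iters=100):
    """Minimise f(x) = x^2 + 5*cos(x) by majorization-minimization.
    Since |d^2/dx^2 cos(x)| <= 1, the quadratic
      cos(x) <= cos(xk) - sin(xk)(x - xk) + 0.5*(x - xk)^2
    is a valid convex upper bound, and each surrogate minimiser has a
    closed form (in the paper, a QP solver plays this role)."""
    x = x0
    for _ in range(iters):
        # argmin_x of x^2 + 5*(cos(xk) - sin(xk)*(x-xk) + 0.5*(x-xk)^2)
        x = (5.0 * math.sin(x) + 5.0 * x) / 7.0
    return x

x_star = seq_convex_min()
# At convergence, stationarity of f holds: f'(x) = 2x - 5*sin(x) = 0.
```

Each surrogate minimisation only decreases the true cost, which is why such schemes tend to settle in a handful of iterations, consistent with the "5 or so" reported.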
Decline curve based models for predicting natural gas well performance
Directory of Open Access Journals (Sweden)
Arash Kamari
2017-06-01
Full Text Available The productivity of a gas well declines over its production life, eventually to the point where it can no longer cover economic requirements. To address such problems, the production performance of gas wells should be predicted by applying reliable methods to analyse the decline trend. Therefore, reliable models are developed in this study on the basis of powerful artificial intelligence techniques, viz. the artificial neural network (ANN) modelling strategy, the least squares support vector machine (LSSVM) approach, the adaptive neuro-fuzzy inference system (ANFIS), and the decision tree (DT) method, for the prediction of cumulative gas production as well as initial decline rate multiplied by time, as a function of the Arps decline curve exponent and the ratio of initial gas flow rate to total gas flow rate. It was concluded that the results obtained from the models developed in the current study are in satisfactory agreement with the actual gas well production data. Furthermore, the results of the comparative study performed demonstrate that the LSSVM strategy is superior to the other models investigated for the prediction of both cumulative gas production and initial decline rate multiplied by time.
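The Arps decline-curve quantities the models are trained on are given by closed-form expressions. The hyperbolic case (0 < b < 1) is shown below with illustrative, not field, values:

```python
def arps_rate(qi, di, b, t):
    """Arps hyperbolic decline: q(t) = qi / (1 + b*di*t)**(1/b).
    qi: initial rate, di: initial decline rate, b: decline exponent."""
    return qi / (1.0 + b * di * t) ** (1.0 / b)

def arps_cum(qi, di, b, t):
    """Cumulative production for the hyperbolic case (0 < b < 1):
    Np = qi**b * (qi**(1-b) - q**(1-b)) / ((1-b) * di)."""
    q = arps_rate(qi, di, b, t)
    return qi ** b * (qi ** (1 - b) - q ** (1 - b)) / ((1 - b) * di)

# Illustrative well: qi = 1000 units/day, di = 0.1 /day, b = 0.5
q10 = arps_rate(qi=1000.0, di=0.1, b=0.5, t=10.0)   # = 1000 / 1.5**2
```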
Predictive models reduce talent development costs in female gymnastics.
Pion, Johan; Hohmann, Andreas; Liu, Tianbiao; Lenoir, Matthieu; Segers, Veerle
2017-04-01
This retrospective study focuses on the comparison of different predictive models based on the results of a talent identification test battery for female gymnasts. We studied to what extent these models have the potential to optimise selection procedures and, at the same time, reduce talent development costs in female artistic gymnastics. The dropout rate of 243 female elite gymnasts was investigated, 5 years past talent selection, using linear (discriminant analysis) and non-linear predictive models (Kohonen feature maps and multilayer perceptron). The coaches classified 51.9% of the participants correctly. Discriminant analysis improved the correct classification rate to 71.6%, while the non-linear technique of Kohonen feature maps reached 73.7% correctness. Application of the multilayer perceptron even classified 79.8% of the gymnasts correctly. The combination of different predictive models for talent selection can avoid deselection of high-potential female gymnasts. The selection procedure based upon the different statistical analyses results in a 33.3% cost decrease, because the pool of selected athletes can be reduced to 92 instead of 138 gymnasts (as selected by the coaches). Reduction of the costs allows the limited resources to be fully invested in the high-potential athletes.
Predictive Modelling of Heavy Metals in Urban Lakes
Lindström, Martin
2000-01-01
Heavy metals are well-known environmental pollutants. In this thesis predictive models for heavy metals in urban lakes are discussed and new models presented. The base of predictive modelling is empirical data from field investigations of many ecosystems covering a wide range of ecosystem characteristics. Predictive models focus on the variabilities among lakes and processes controlling the major metal fluxes. Sediment and water data for this study were collected from ten small lakes in the ...
Building and Verifying a Predictive Model of Interruption Resumption
2012-03-01
Invited paper: Help from a robot, to allow a human storyteller to continue. [Abstract fragments] ... the gardener to remember those plants (and whether they need to be removed), and so will not commit resources to remember that information. The overall ... camera), the storyteller needed help much less often. This result suggests that when there is no one to help them remember the last thing they said ...
Phytoadaptation in Desert Soil Prediction Using Fuzzy Logic Modeling
S. Bouharati; F. Allag; M. Belmahdi; M. Bounechada
2014-01-01
In terms of forecasting the ecological effects of desertification, the purpose of this study is to develop a predictive model of the growth and adaptation of species under arid environmental and bioclimatic conditions. The impact of climate change and desertification is the result of the combined magnitude and frequency of these phenomena. Because the data involved in the phytopathogenic process and bacterial growth in arid soil occur in an uncertain environment owing to their complexity, it ...
Sjögren, Erik; Thörn, Helena; Tannergren, Christer
2016-06-06
particle size. In conclusion, it was shown that all three software packages are useful to guide formulation development. However, as a consequence of the high fraction of inaccurate predictions (prediction error >2-fold) and the clear trend toward decreased accuracy with decreased predicted fabs observed with Simcyp, the results indicate that GI-Sim and GastroPlus perform better than Simcyp in predicting the intestinal absorption of the incompletely absorbed drugs when a higher degree of accuracy is needed. In addition, this study suggests that modeling and simulation research groups should perform systematic model evaluations using their own input data to maximize confidence in model performance and output.
Theoretical model for cavitation erosion prediction in centrifugal pump impeller
International Nuclear Information System (INIS)
Rayan, M.A.; Mahgob, M.M.; Mostafa, N.H.
1990-01-01
Cavitation is known to have great effects on pump hydraulic and mechanical characteristics. These effects are mainly described by deviation in pump performance, increased vibration and noise levels, as well as erosion of blade and casing materials. In the present work, only the hydrodynamic aspect of cavitation was considered. The efforts were directed toward the study of cavitation inception, cavity mechanics and material erosion in order to clarify the macrohydrodynamic aspects of cavitation erosive wear in real machines. As a result of this study, it was found that cavitation damage can be predicted from model data. The obtained theoretical results show good agreement with the experimental results obtained in this investigation and with the results of some other investigations. Application of the findings of this work will help the design engineer predict the erosion rate under different operating conditions. (author)
Scaling predictive modeling in drug development with cloud computing.
Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola
2015-01-26
Growing data sets and increasing analysis times are hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Compute Cloud (EC2). We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compared with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investment makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.
Application of a predictive Bayesian model to environmental accounting.
Anex, R P; Englehardt, J D
2001-03-30
Environmental accounting techniques are intended to capture important environmental costs and benefits that are often overlooked in standard accounting practices. Environmental accounting methods themselves often ignore or inadequately represent large but highly uncertain environmental costs and costs conditioned by specific prior events. Use of a predictive Bayesian model is demonstrated for the assessment of such highly uncertain environmental and contingent costs. The predictive Bayesian approach presented generates probability distributions for the quantity of interest (rather than parameters thereof). A spreadsheet implementation of a previously proposed predictive Bayesian model, extended to represent contingent costs, is described and used to evaluate whether a firm should undertake an accelerated phase-out of its PCB-containing transformers. Variability and uncertainty (due to lack of information) in transformer accident frequency and severity are assessed simultaneously using a combination of historical accident data, engineering model-based cost estimates, and subjective judgement. Model results are compared using several different risk measures. Use of the model for incorporation of environmental risk management into a company's overall risk management strategy is discussed.
Real estate value prediction using multivariate regression models
Manjula, R.; Jain, Shubham; Srivastava, Sharad; Rajiv Kher, Pranav
2017-11-01
The real estate market is one of the most competitive in terms of pricing, and prices vary significantly with a large number of factors, making it a prime field for applying machine learning to predict prices with high accuracy. In this paper we therefore present a set of important features to use when predicting housing prices with good accuracy. We describe regression models that use various features to achieve a lower residual sum of squares error. When using features in a regression model, some feature engineering is required for better prediction; often a set of features (multiple regression) or polynomial regression (raising the features to various powers) is used to improve the model fit. Because these models are susceptible to overfitting, ridge regression is used to reduce it. This paper thus points to the best application of regression models, in addition to other techniques, to optimize the result.
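The combination the abstract describes, polynomial features plus ridge regression to curb overfitting, can be sketched in a few lines. This is a hedged illustration, not the authors' implementation: the data, feature scales, and penalty value are all invented.

```python
import numpy as np

# Synthetic housing data: price driven by floor area (in 100 m^2 units).
rng = np.random.default_rng(0)
area = rng.uniform(0.5, 2.5, size=40)
price = 120.0 * area + 80.0 * area**2 + rng.normal(0, 5.0, size=40)  # k$

# Polynomial regression: raise the feature to various powers (degree 3).
X = np.column_stack([area**d for d in range(4)])   # column 0 is the intercept

def ridge_fit(X, y, lam):
    """Closed-form ridge regression: w = (X'X + lam*I)^(-1) X'y."""
    penalty = lam * np.eye(X.shape[1])
    penalty[0, 0] = 0.0          # leave the intercept unpenalized
    return np.linalg.solve(X.T @ X + penalty, X.T @ y)

w_ols = ridge_fit(X, price, 0.0)    # ordinary least squares (lam = 0)
w_ridge = ridge_fit(X, price, 5.0)  # shrunken coefficients
rss = float(np.sum((price - X @ w_ridge) ** 2))
```

Raising the penalty shrinks the polynomial coefficients toward zero, trading a small increase in residual sum of squares for lower variance on unseen listings.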
Discussion of gas trade model (GTM) results
International Nuclear Information System (INIS)
Manne, A.
1989-01-01
This is in response to your invitation to comment on the structure of GTM and on the differences between its results and those of other models participating in EMF9. First, a word on the structure. GTM was originally designed to provide both regional and sectoral detail within the North American market for natural gas at a single point in time, e.g. the year 2000. It is a spatial equilibrium model in which a solution is obtained by maximizing a nonlinear function, the sum of consumers' and producers' surplus. Since transport costs are included in producers' costs, this formulation automatically ensures that geographical price differentials will not exceed transport costs. For the purposes of EMF9, GTM was modified to allow for resource development and depletion over time
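The spatial-equilibrium logic described above can be illustrated with a deliberately tiny one-link example (all demand, cost and transport numbers are invented): maximizing the sum of consumers' and producers' surplus drives the regional price differential to exactly the transport cost.

```python
# One producing region ships gas to one consuming region.
a, b = 10.0, 0.5   # inverse demand P(q) = a - b*q in the consuming region
c, t = 2.0, 1.0    # constant marginal production cost and unit transport cost

def surplus(q):
    # Gross consumer benefit (integral of inverse demand) minus delivered cost;
    # this is consumers' plus producers' surplus for shipment quantity q.
    gross_benefit = a * q - 0.5 * b * q * q
    return gross_benefit - (c + t) * q

# Brute-force search over shipment quantities.
qs = [i * 0.001 for i in range(0, 20001)]
q_star = max(qs, key=surplus)

# At the optimum the demand price equals delivered cost, a - b*q* = c + t,
# so the regional price differential equals the transport cost exactly.
analytic = (a - c - t) / b
```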
Predicting future glacial lakes in Austria using different modelling approaches
Otto, Jan-Christoph; Helfricht, Kay; Prasicek, Günther; Buckel, Johannes; Keuschnig, Markus
2017-04-01
Glacier retreat is one of the most apparent consequences of temperature rise in the 20th and 21st centuries in the European Alps. In Austria, more than 240 new lakes have formed in glacier forefields since the Little Ice Age. A similar signal is reported from many mountain areas worldwide. Glacial lakes can have important environmental and socio-economic impacts on high mountain systems, including water resource management, sediment delivery, natural hazards, energy production and tourism. Their development significantly modifies the landscape configuration and visual appearance of high mountain areas. Knowledge of the location, number and extent of these future lakes can be used to assess potential impacts on high mountain geo-ecosystems and upland-lowland interactions. Information on new lakes is critical to appraise emerging threats and potentials for society. The recent development of regional ice thickness models and their combination with high-resolution glacier surface data allows the topography below current glaciers to be predicted by subtracting ice thickness from the glacier surface. Analyzing these modelled glacier bed surfaces reveals overdeepenings that represent potential locations for future lakes. In order to predict the location of future glacial lakes below recent glaciers in the Austrian Alps we apply different ice thickness models using high-resolution terrain data and glacier outlines. The results are compared and validated with ice thickness data from geophysical surveys. Additionally, we run the models on three different glacier extents provided by the Austrian Glacier Inventories from 1969, 1998 and 2006. Results of this historical glacier extent modelling are compared to existing glacier lakes and discussed with a focus on geomorphological impacts on lake evolution. We discuss model performance and observed differences in the results in order to assess the approach for a realistic prediction of future lake locations. The presentation delivers
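The core step described above, subtracting modelled ice thickness from the glacier surface and flagging overdeepenings in the resulting bed, can be sketched on a toy 1D profile. Real analyses work on 2D DEMs; the elevations and thicknesses here are invented.

```python
# Toy glacier centreline: surface elevation and modelled ice thickness (m).
surface   = [300, 295, 290, 286, 284, 283, 284, 286, 289, 293]
thickness = [  0,   5,  20,  40,  55,  60,  50,  30,  10,   0]

# Predicted bed topography = surface minus ice thickness.
bed = [s - h for s, h in zip(surface, thickness)]

def lake_depths(profile):
    """Water depth per cell if every overdeepening fills to its spill point."""
    n = len(profile)
    depths = []
    for i in range(n):
        # Lowest enclosing rim: the lower of the highest bed to the left
        # and the highest bed to the right of cell i.
        spill = min(max(profile[: i + 1]), max(profile[i:]))
        depths.append(max(0.0, spill - profile[i]))
    return depths

depths = lake_depths(bed)
lake_cells = [i for i, d in enumerate(depths) if d > 0]  # future lake extent
```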
Prediction Markets and Beliefs about Climate: Results from Agent-Based Simulations
Gilligan, J. M.; John, N. J.; van der Linden, M.
2015-12-01
Climate scientists have long been frustrated by the persistent doubts that a large portion of the public expresses toward the scientific consensus about anthropogenic global warming. The political and ideological polarization of this doubt led Vandenbergh, Raimi, and Gilligan [1] to propose that prediction markets for climate change might influence the opinions of those who mistrust the scientific community but do trust the power of markets. We have developed an agent-based simulation of a climate prediction market in which traders buy and sell futures contracts that pay off at some future year with a value that depends on the global average temperature at that time. The traders form a heterogeneous population with different ideological positions, different beliefs about anthropogenic global warming, and different degrees of risk aversion. We also vary characteristics of the market, including the topology of social networks among the traders, the number of traders, and the completeness of the market. Traders adjust their beliefs about climate according to the gains and losses that they and other traders in their social network experience. This model predicts that if global temperature is predominantly driven by greenhouse gas concentrations, prediction markets will cause traders' beliefs to converge toward correctly accepting anthropogenic warming as real. This convergence is largely independent of the structure of the market and the characteristics of the population of traders. However, it may take considerable time for beliefs to converge. Conversely, if temperature does not depend on greenhouse gases, the model predicts that traders' beliefs will not converge. We will discuss the policy relevance of these results and, more generally, the use of agent-based market simulations for policy analysis regarding climate change, seasonal agricultural weather forecasts, and other applications. [1] MP Vandenbergh, KT Raimi, & JM Gilligan. UCLA Law Rev. 61, 1962 (2014).
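The belief-updating mechanism the abstract describes can be sketched in a stripped-down form: traders forecast a warming trend, are paid according to forecast accuracy, and nudge their beliefs toward whoever profited most. This is a hypothetical simplification (fully connected population, no risk aversion or network topology); all numbers are invented.

```python
import random

random.seed(42)

TRUE_TREND = 0.02          # deg C per step; the "ground truth" driving payoffs
N, STEPS = 50, 200

# Heterogeneous initial beliefs about the warming trend, including deniers (< 0).
beliefs = [random.uniform(-0.05, 0.05) for _ in range(N)]

for t in range(1, STEPS + 1):
    temperature = TRUE_TREND * t + random.gauss(0.0, 0.01)
    # Each trader forecasts b*t; payoff falls with squared forecast error.
    payoffs = [-(b * t - temperature) ** 2 for b in beliefs]
    best = beliefs[payoffs.index(max(payoffs))]
    # Traders move a small step toward the most successful trader's belief.
    beliefs = [b + 0.05 * (best - b) for b in beliefs]

mean_belief = sum(beliefs) / N   # converges toward TRUE_TREND
```

When the payoff really does depend on the trend (as above), beliefs converge to roughly the true value; if the payoff were decoupled from the trend, no such convergence would occur, mirroring the paper's qualitative result.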
Seasonal predictability of Kiremt rainfall in coupled general circulation models
Gleixner, Stephanie; Keenlyside, Noel S.; Demissie, Teferi D.; Counillon, François; Wang, Yiguo; Viste, Ellen
2017-11-01
The Ethiopian economy and population are strongly dependent on rainfall. Operational seasonal predictions for the main rainy season (Kiremt, June-September) are based on statistical approaches with Pacific sea surface temperatures (SST) as the main predictor. Here we analyse dynamical predictions from 11 coupled general circulation models for the Kiremt seasons from 1985 to 2005, with the forecasts starting from the beginning of May. We find skillful predictions from three of the 11 models, but no model beats a simple linear prediction model based on the predicted Niño3.4 indices. The skill of the individual models in dynamically predicting Kiremt rainfall depends on the strength of the teleconnection between Kiremt rainfall and concurrent Pacific SST in the models. Models that do not simulate this teleconnection fail to capture the observed relationship between Kiremt rainfall and the large-scale Walker circulation.
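The "simple linear prediction model" used as a benchmark above is just an ordinary least-squares fit of seasonal rainfall on a predicted Niño3.4 index. A hedged sketch with invented index and rainfall values:

```python
# Hypothetical training data: predicted Nino3.4 index vs Kiremt rainfall (mm).
nino34   = [-1.2, -0.5, 0.1, 0.8, 1.5, -0.9, 0.4, 1.1]
rainfall = [ 620,  590, 560, 520, 480,  605, 545, 500]

n = len(nino34)
mx = sum(nino34) / n
my = sum(rainfall) / n

# Ordinary least squares for a one-predictor linear model.
slope = (sum((x - mx) * (y - my) for x, y in zip(nino34, rainfall))
         / sum((x - mx) ** 2 for x in nino34))
intercept = my - slope * mx

def predict(index):
    """Benchmark rainfall forecast from a predicted Nino3.4 index."""
    return intercept + slope * index

forecast = predict(0.6)   # expected Kiremt rainfall for a moderate El Nino
```

With data like these the slope is negative: warm (El Niño) conditions go with reduced Kiremt rainfall, which is the teleconnection the dynamical models must reproduce to compete with this benchmark.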
The Danish national passenger model – Model specification and results
DEFF Research Database (Denmark)
Rich, Jeppe; Hansen, Christian Overgaard
2016-01-01
The paper describes the structure of the new Danish National Passenger model and provides on this basis a general discussion of large-scale model design, cost-damping and model validation. The paper aims at providing three main contributions to the existing literature. Firstly, at the general level......, the paper provides a description of a large-scale forecast model with a discussion of the linkage between population synthesis, demand and assignment. Secondly, the paper gives specific attention to model specification and in particular choice of functional form and cost-damping. Specifically we suggest...... a family of logarithmic spline functions and illustrate how it is applied in the model. Thirdly and finally, we evaluate model sensitivity and performance by evaluating the distance distribution and elasticities. In the paper we present results where the spline-function is compared with more traditional...
MODELLING OF DYNAMIC SPEED LIMITS USING THE MODEL PREDICTIVE CONTROL
Directory of Open Access Journals (Sweden)
Andrey Borisovich Nikolaev
2017-09-01
Full Text Available The article considers issues of traffic management using the intelligent “Car-Road” system (IVHS), which consists of interacting intelligent vehicles (IV) and intelligent roadside controllers. Vehicles are organized in convoys with small distances between them. All vehicles are assumed to be fully automated (throttle control, braking, steering). Approaches are proposed for determining speed limits for traffic on the motorway using model predictive control (MPC). The article proposes an approach to dynamic speed limits that minimizes the delay of vehicles in traffic.
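The receding-horizon idea behind MPC-based dynamic speed limits can be sketched with a toy one-segment queue model: enumerate candidate limit sequences over a short horizon, simulate the predicted queue, and apply only the first limit of the best sequence. The dynamics, cost weights, and admissible limits below are all invented for illustration.

```python
from itertools import product

LIMITS = [60, 80, 100]     # admissible speed limits, km/h
HORIZON = 3                # prediction horizon, control steps

def step(queue, limit, inflow=30.0):
    """Toy queue update: the service rate saturates as the limit rises."""
    outflow = min(queue + inflow, 0.4 * limit)
    return max(0.0, queue + inflow - outflow)

def mpc_choice(queue):
    """Return the first limit of the sequence minimizing predicted cost."""
    best_cost, best_first = None, None
    for seq in product(LIMITS, repeat=HORIZON):
        q, cost = queue, 0.0
        for u in seq:
            q = step(q, u)
            cost += q + 0.1 * u   # congestion plus a mild penalty on high limits
        if best_cost is None or cost < best_cost:
            best_cost, best_first = cost, seq[0]
    return best_first

limit_now = mpc_choice(queue=50.0)   # re-solved every step as new data arrive
```

With a long queue the controller raises the limit to drain the backlog; with no queue it settles on the cheapest limit that still matches the inflow, which is the trade-off dynamic speed limits are meant to manage.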
Superconducting solenoid model magnet test results
Energy Technology Data Exchange (ETDEWEB)
Carcagno, R.; Dimarco, J.; Feher, S.; Ginsburg, C.M.; Hess, C.; Kashikhin, V.V.; Orris, D.F.; Pischalnikov, Y.; Sylvester, C.; Tartaglia, M.A.; Terechkine, I.; /Fermilab
2006-08-01
Superconducting solenoid magnets suitable for the room temperature front end of the Fermilab High Intensity Neutrino Source (formerly known as Proton Driver), an 8 GeV superconducting H- linac, have been designed and fabricated at Fermilab, and tested in the Fermilab Magnet Test Facility. We report here results of studies on the first model magnets in this program, including the mechanical properties during fabrication and testing in liquid helium at 4.2 K, quench performance, and magnetic field measurements. We also describe new test facility systems and instrumentation that have been developed to accomplish these tests.
Superconducting solenoid model magnet test results
International Nuclear Information System (INIS)
Carcagno, R.; Dimarco, J.; Feher, S.; Ginsburg, C.M.; Hess, C.; Kashikhin, V.V.; Orris, D.F.; Pischalnikov, Y.; Sylvester, C.; Tartaglia, M.A.; Terechkine, I.; Tompkins, J.C.; Wokas, T.; Fermilab
2006-01-01
Superconducting solenoid magnets suitable for the room temperature front end of the Fermilab High Intensity Neutrino Source (formerly known as Proton Driver), an 8 GeV superconducting H- linac, have been designed and fabricated at Fermilab, and tested in the Fermilab Magnet Test Facility. We report here results of studies on the first model magnets in this program, including the mechanical properties during fabrication and testing in liquid helium at 4.2 K, quench performance, and magnetic field measurements. We also describe new test facility systems and instrumentation that have been developed to accomplish these tests
Surovyatkina, Elena; Stolbova, Veronika; Kurths, Jurgen
2017-04-01
started to decrease, and after two days meteorological stations reported 'No rain' in the EG and also in areas located across the subcontinent in the direction from North Pakistan to the Bay of Bengal. Hence, the date of monsoon withdrawal, October 10th, predicted 70 days in advance, lies within our prediction interval. Our results show that our method allows prediction of a future monsoon, not only retrospectively (hindcast). In 2016 we predicted the onset and withdrawal dates of the Southwest monsoon over the Eastern Ghats region in Central India 40 and 70 days in advance, respectively. Our general framework for predicting spatio-temporal critical transitions is applicable to systems of different nature. It allows predicting the future from observational data only, when a model of the transition does not yet exist. [1] Stolbova, V., E. Surovyatkina, B. Bookhagen, and J. Kurths (2016): Tipping elements of the Indian monsoon: Prediction of onset and withdrawal. Geophys. Res. Lett., 43, 1-9. [2] https://www.pik-potsdam.de/news/press-releases/indian-monsoon-novel-approach-allows-early-forecasting?set_language=en [3] https://www.pik-potsdam.de/kontakt/pressebuero/fotos/monsoon-withdrawal/view
Methodology and Results of Mathematical Modelling of Complex Technological Processes
Mokrova, Nataliya V.
2018-03-01
The methodology of system analysis allows us to derive a mathematical model of a complex technological process. A mathematical description of the plasma-chemical process is proposed. The importance of the quenching rate and of the initial temperature decrease time for producing the maximum amount of the target product was confirmed. The results of numerical integration of the system of differential equations can be used to describe reagent concentrations, plasma jet rate and temperature, in order to achieve the optimal quenching mode. Such models are applicable both for solving control problems and for predicting future states of sophisticated technological systems.
Results of the eruptive column model inter-comparison study
Costa, Antonio; Suzuki, Yujiro; Cerminara, M.; Devenish, Ben J.; Esposti Ongaro, T.; Herzog, Michael; Van Eaton, Alexa; Denby, L.C.; Bursik, Marcus; de' Michieli Vitturi, Mattia; Engwell, S.; Neri, Augusto; Barsotti, Sara; Folch, Arnau; Macedonio, Giovanni; Girault, F.; Carazzo, G.; Tait, S.; Kaminski, E.; Mastin, Larry G.; Woodhouse, Mark J.; Phillips, Jeremy C.; Hogg, Andrew J.; Degruyter, Wim; Bonadonna, Costanza
2016-01-01
This study compares and evaluates one-dimensional (1D) and three-dimensional (3D) numerical models of volcanic eruption columns in a set of different inter-comparison exercises. The exercises were designed as a blind test in which a set of common input parameters was given for two reference eruptions, representing a strong and a weak eruption column under different meteorological conditions. Comparing the results of the different models allows us to evaluate their capabilities and target areas for future improvement. Despite their different formulations, the 1D and 3D models provide reasonably consistent predictions of some of the key global descriptors of the volcanic plumes. Variability in plume height, estimated from the standard deviation of model predictions, is within ~ 20% for the weak plume and ~ 10% for the strong plume. Predictions of neutral buoyancy level are also in reasonably good agreement among the different models, with a standard deviation ranging from 9 to 19% (the latter for the weak plume in a windy atmosphere). Overall, these discrepancies are in the range of observational uncertainty of column height. However, there are important differences amongst models in terms of local properties along the plume axis, particularly for the strong plume. Our analysis suggests that the simplified treatment of entrainment in 1D models is adequate to resolve the general behaviour of the weak plume. However, it is inadequate to capture complex features of the strong plume, such as large vortices, partial column collapse, or gravitational fountaining that strongly enhance entrainment in the lower atmosphere. We conclude that there is a need to more accurately quantify entrainment rates, improve the representation of plume radius, and incorporate the effects of column instability in future versions of 1D volcanic plume models.
A grey NGM(1,1, k) self-memory coupling prediction model for energy consumption prediction.
Guo, Xiaojun; Liu, Sifeng; Wu, Lifeng; Tang, Lingling
2014-01-01
Energy consumption prediction is an important issue for governments, energy sector investors, and other related corporations. Although there are several prediction techniques, selection of the most appropriate technique is of vital importance. For the approximately nonhomogeneous exponential data sequences that often emerge in the energy system, a novel grey NGM(1,1, k) self-memory coupling prediction model is put forward to improve predictive performance. It achieves an organic integration of the self-memory principle of dynamic systems and the grey NGM(1,1, k) model. The traditional grey model's weakness of being sensitive to initial values can be overcome by the self-memory principle. In this study, the total energy, coal, and electricity consumption of China are used for demonstration with the proposed coupling prediction technique. The results show the superiority of the NGM(1,1, k) self-memory coupling prediction model when compared with results from the literature. Its excellent prediction performance lies in the fact that the proposed coupling model can take full advantage of the systematic multi-time historical data and catch the stochastic fluctuation tendency. This work also makes a significant contribution to the enrichment of grey prediction theory and the extension of its application span.
A Grey NGM(1,1, k) Self-Memory Coupling Prediction Model for Energy Consumption Prediction
Guo, Xiaojun; Liu, Sifeng; Wu, Lifeng; Tang, Lingling
2014-01-01
Energy consumption prediction is an important issue for governments, energy sector investors, and other related corporations. Although there are several prediction techniques, selection of the most appropriate technique is of vital importance. For the approximately nonhomogeneous exponential data sequences that often emerge in the energy system, a novel grey NGM(1,1, k) self-memory coupling prediction model is put forward to improve predictive performance. It achieves an organic integration of the self-memory principle of dynamic systems and the grey NGM(1,1, k) model. The traditional grey model's weakness of being sensitive to initial values can be overcome by the self-memory principle. In this study, the total energy, coal, and electricity consumption of China are used for demonstration with the proposed coupling prediction technique. The results show the superiority of the NGM(1,1, k) self-memory coupling prediction model when compared with results from the literature. Its excellent prediction performance lies in the fact that the proposed coupling model can take full advantage of the systematic multi-time historical data and catch the stochastic fluctuation tendency. This work also makes a significant contribution to the enrichment of grey prediction theory and the extension of its application span. PMID:25054174
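For readers unfamiliar with grey prediction, the classical GM(1,1) model, the simpler ancestor of the NGM(1,1, k) self-memory coupling model above, can be sketched briefly: accumulate the series, fit the grey parameters by least squares, and difference the exponential response back. The consumption figures are invented; this is not the paper's coupled model.

```python
import numpy as np

x0 = np.array([102.0, 110.5, 120.3, 131.6, 143.9])   # e.g., annual energy use

x1 = np.cumsum(x0)                # accumulated generating operation (AGO)
z1 = 0.5 * (x1[1:] + x1[:-1])     # mean sequence of adjacent x1 values

# Least squares for a, b in the grey equation x0[k] + a*z1[k] = b.
B = np.column_stack([-z1, np.ones_like(z1)])
a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]

def gm11_forecast(k):
    """Exponential response of x1, differenced back to the original series."""
    if k == 0:
        return x0[0]
    x1_hat  = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x1_prev = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
    return x1_hat - x1_prev

fitted = [gm11_forecast(k) for k in range(len(x0))]
next_value = gm11_forecast(len(x0))     # one-step-ahead forecast
```

The NGM(1,1, k) variant generalizes the right-hand side to handle approximately nonhomogeneous exponential sequences, and the self-memory coupling then reduces the sensitivity to the initial value that this classical form exhibits.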
Solubility of water in fluorocarbons: Experimental and COSMO-RS prediction results
International Nuclear Information System (INIS)
Freire, Mara G.; Carvalho, Pedro J.; Santos, Luis M.N.B.F.; Gomes, Ligia R.; Marrucho, Isabel M.; Coutinho, Joao A.P.
2010-01-01
This work aims at providing experimental and theoretical information about water-perfluorocarbon molecular interactions. For that purpose, experimental solubility results for water in cyclic and aromatic perfluorocarbons (PFCs), over the temperature range between (288.15 and 318.15) K and at atmospheric pressure, were obtained and are presented. From the experimental solubility dependence on temperature, the partial molar solution and solvation thermodynamic functions, such as the Gibbs free energy, enthalpy and entropy, were determined and are discussed. The process of dissolution of water in PFCs is shown to be spontaneous for cyclic and aromatic compounds. It is demonstrated that the interactions between non-aromatic PFCs and water are negligible, while those between aromatic PFCs and water are favourable. The COSMO-RS predictive capability was explored for the description of water solubility in PFCs and other substituted fluorocompounds. COSMO-RS is shown to be a useful model providing reasonable predictions of the solubility values, as well as of their dependence on temperature and structural modifications. Moreover, the molar Gibbs free energy and molar enthalpy of solution of water are predicted remarkably well by COSMO-RS, while the main deviations appear in the prediction of the molar entropy of solution.
MJO prediction skill of the subseasonal-to-seasonal (S2S) prediction models
Son, S. W.; Lim, Y.; Kim, D.
2017-12-01
The Madden-Julian Oscillation (MJO), the dominant mode of tropical intraseasonal variability, provides the primary source of tropical and extratropical predictability on subseasonal to seasonal timescales. To better understand its predictability, this study conducts a quantitative evaluation of MJO prediction skill in the state-of-the-art operational models participating in the subseasonal-to-seasonal (S2S) prediction project. Based on a bivariate correlation coefficient of 0.5, the S2S models exhibit MJO prediction skill ranging from 12 to 36 days. These prediction skills are affected by both the MJO amplitude and phase errors, the latter becoming more important with forecast lead time. Consistent with previous studies, MJO events with stronger initial amplitude are typically better predicted. However, essentially no sensitivity to the initial MJO phase is observed. Overall MJO prediction skill and its inter-model spread are further related to the model mean biases in moisture fields and longwave cloud-radiation feedbacks. In most models, a dry bias quickly builds up in the deep tropics, especially across the Maritime Continent, weakening the horizontal moisture gradient. This likely dampens the organization and propagation of the MJO. Most S2S models also underestimate the longwave cloud-radiation feedbacks in the tropics, which may affect the maintenance of the MJO convective envelope. In general, the models with smaller biases in horizontal moisture gradient and longwave cloud-radiation feedbacks show higher MJO prediction skill, suggesting that improving those processes would enhance MJO prediction skill.
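The bivariate correlation coefficient used above as the skill threshold is the standard pattern correlation between observed and forecast pairs of MJO index components (typically the RMM1/RMM2 indices). A minimal sketch, with invented index values:

```python
import math

def bivariate_correlation(obs1, obs2, fcst1, fcst2):
    """Bivariate correlation between two-component observed and forecast
    index series: sum(a1*b1 + a2*b2) normalized by the component energies."""
    num = sum(a1 * b1 + a2 * b2
              for a1, a2, b1, b2 in zip(obs1, obs2, fcst1, fcst2))
    den = math.sqrt(sum(a1 * a1 + a2 * a2 for a1, a2 in zip(obs1, obs2)) *
                    sum(b1 * b1 + b2 * b2 for b1, b2 in zip(fcst1, fcst2)))
    return num / den

# Invented RMM-like components for a few verification days.
obs1, obs2 = [1.0, 0.5, -0.5], [0.0, 0.8, 0.9]
```

A perfect forecast scores 1.0; a forecast whose MJO is rotated 90 degrees in phase space (pure phase error) scores 0.0, which is why phase errors dominate skill loss at longer leads. Skill is then quoted as the lead time at which this correlation drops below 0.5.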
Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics
Directory of Open Access Journals (Sweden)
Cecilia Noecker
2015-03-01
Full Text Available Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1–2 weeks post-infection. Several interventions against HIV, including vaccines and antiretroviral prophylaxis, target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because predictions of such models have rarely been compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the “standard” mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to provide the observed rate of SIV load increase in the blood of infected animals. Interestingly, and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode of virus production. Second, this mathematical model was not able to accurately describe the change in the experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral
On AEP prediction and wake modelling at Anholt
DEFF Research Database (Denmark)
Pena Diaz, Alfredo; Hansen, Kurt Schaldemose; Volker, Patrick
and direction. We show that the WRF model is able to reproduce such gradients relatively well by comparison to the wind farm’s SCADA. About 1.5 yr of such SCADA data, further quality-controlled and filtered, reveal an average wake loss of only 3.87%, whereas results from three wake models, Park, Larsen and FUGA......, show average wake losses of 3.46%, 3.69%, and 3.38%, respectively. We employ a bootstrap method to estimate the uncertainty of the wake models. As this is performed with reference to the SCADA, the results provide an idea of the uncertainty of the AEP prediction. We find all wake models
International Nuclear Information System (INIS)
Laan, J.G. van der; Akiba, M.; Seki, M.; Hassanein, A.; Tanchuk, V.
1991-01-01
An evaluation is given of the predictions of disruption erosion in the International Thermonuclear Experimental Reactor (ITER). First, a description is given of the relation between plasma operating parameters and system dimensions and the predicted loading parameters of Plasma Facing Components (PFC) in off-normal events. Numerical results from the ITER parties on the prediction of disruption erosion are compared for a few typical cases and discussed. Apart from some differences in the codes, the observed discrepancies can be ascribed to different input data for material properties and boundary conditions. Some physical models for vapour shielding and their effects on numerical results are mentioned. Experimental results from the ITER parties, obtained with electron and laser beams, are also compared. Erosion rates for the candidate ITER PFC materials are shown to depend very strongly on the energy deposition parameters, which are based on plasma physics considerations, and on the assumed material loss mechanisms. Lifetime estimates for the divertor plate and first wall armour are given for carbon, tungsten and beryllium, based on the erosion in the thermal quench phase. (orig.)
Predicting the emission from an incineration plant - a modelling approach
International Nuclear Information System (INIS)
Rohyiza Baan
2004-01-01
The emissions from the combustion of Municipal Solid Waste (MSW) have become an important issue in incineration technology. As a result of unstable combustion conditions, undesirable compounds such as CO, SO2, NOx, PM10 and dioxins form and become sources of pollution in the atmosphere. The impact of emissions on criteria air pollutant concentrations can be obtained directly using ambient air monitoring equipment or predicted using dispersion modelling. The literature shows that the complicated atmospheric processes that occur in nature can be described using mathematical models. This paper highlights the air dispersion model as a tool to relate and simulate the release and dispersion of air pollutants in the atmosphere. The technique is based on a programming approach to develop a ground-level concentration air dispersion model using the Gaussian plume equation and Pasquill stability classes. This model is useful for studying the consequences of various sources of air pollutants and estimating the amount of pollutants released into the air from existing emission sources. It was found that the model's predictions differ from actual measured conditions by about 5%. (Author)
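The ground-level concentration calculation at the heart of such a model can be sketched from the textbook Gaussian plume equation. This is a hedged illustration, not the paper's code: the power-law dispersion coefficients below are invented stand-ins for the Pasquill-Gifford curves, which in practice depend on the stability class.

```python
import math

def ground_level_concentration(Q, u, x, y, H):
    """Gaussian plume C(x, y, 0) for a continuous point source, with full
    ground reflection.

    Q : emission rate (g/s), u : wind speed (m/s),
    x : downwind and y : crosswind distance (m), H : effective stack height (m).
    """
    # Illustrative power-law stand-ins for Pasquill-Gifford sigma curves.
    sigma_y = 0.08 * x ** 0.9
    sigma_z = 0.06 * x ** 0.85
    return (Q / (math.pi * u * sigma_y * sigma_z)
            * math.exp(-y * y / (2.0 * sigma_y ** 2))
            * math.exp(-H * H / (2.0 * sigma_z ** 2)))

# Centreline concentration 1 km downwind of a hypothetical stack (g/m^3).
c_axis = ground_level_concentration(Q=50.0, u=4.0, x=1000.0, y=0.0, H=40.0)
```

Evaluating this over a receptor grid for each emission source, and summing the contributions, gives exactly the kind of ground-level concentration map the abstract describes comparing against ambient monitoring data.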
Models for predicting objective function weights in prostate cancer IMRT
International Nuclear Information System (INIS)
Boutilier, Justin J.; Lee, Taewoo; Craig, Tim; Sharpe, Michael B.; Chan, Timothy C. Y.
2015-01-01
Purpose: To develop and evaluate the clinical applicability of advanced machine learning models that simultaneously predict multiple optimization objective function weights from patient geometry for intensity-modulated radiation therapy of prostate cancer. Methods: A previously developed inverse optimization method was applied retrospectively to determine optimal objective function weights for 315 treated patients. The authors used an overlap volume ratio (OV) of bladder and rectum for different PTV expansions and overlap volume histogram slopes (OVSR and OVSB for the rectum and bladder, respectively) as explanatory variables that quantify patient geometry. Using the optimal weights as ground truth, the authors trained and applied three prediction models: logistic regression (LR), multinomial logistic regression (MLR), and weighted K-nearest neighbor (KNN). The population average of the optimal objective function weights was also calculated. Results: The OV at 0.4 cm and OVSR at 0.1 cm features were found to be the most predictive of the weights. The authors observed comparable performance (i.e., no statistically significant difference) between LR, MLR, and KNN methodologies, with LR appearing to perform the best. All three machine learning models outperformed the population average by a statistically significant amount over a range of clinical metrics including bladder/rectum V53Gy, bladder/rectum V70Gy, and dose to the bladder, rectum, CTV, and PTV. When comparing the weights directly, the LR model predicted bladder and rectum weights that had, on average, a 73% and 74% relative improvement over the population average weights, respectively. The treatment plans resulting from the LR weights had, on average, a rectum V70Gy that was 35% closer to the clinical plan and a bladder V70Gy that was 29% closer, compared to the population average weights. Similar results were observed for all other clinical metrics. Conclusions: The authors demonstrated that the KNN and MLR
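The weighted K-nearest-neighbour predictor described above can be sketched compactly: the objective-function weights for a new patient are an inverse-distance-weighted average of the optimal weights of geometrically similar past patients. The feature values (overlap-volume metrics) and training weights below are invented for illustration; they are not the paper's data.

```python
import math

# Hypothetical training set: (OV, OVSR) geometry features -> optimal
# (bladder weight, rectum weight) found by inverse optimization.
train = [
    ((0.30, 0.10), (0.20, 0.55)),
    ((0.45, 0.18), (0.35, 0.40)),
    ((0.60, 0.25), (0.50, 0.25)),
    ((0.75, 0.33), (0.60, 0.15)),
]

def knn_predict(features, k=3):
    """Inverse-distance-weighted average of the k nearest patients' weights."""
    nearest = sorted((math.dist(features, f), w) for f, w in train)[:k]
    inv = [1.0 / (d + 1e-9) for d, _ in nearest]   # epsilon guards d == 0
    total = sum(inv)
    n_out = len(nearest[0][1])
    return tuple(
        sum(wi * w[j] for wi, (_, w) in zip(inv, nearest)) / total
        for j in range(n_out)
    )

pred = knn_predict((0.50, 0.20))   # predicted (bladder, rectum) weights
```

As the overlap between bladder and target grows, the predicted bladder weight rises at the expense of the rectum weight, which is the geometry-to-weights mapping the models are trained to capture.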
Models for predicting objective function weights in prostate cancer IMRT
Energy Technology Data Exchange (ETDEWEB)
Boutilier, Justin J., E-mail: j.boutilier@mail.utoronto.ca; Lee, Taewoo [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King’s College Road, Toronto, Ontario M5S 3G8 (Canada); Craig, Tim [Radiation Medicine Program, UHN Princess Margaret Cancer Centre, 610 University of Avenue, Toronto, Ontario M5T 2M9, Canada and Department of Radiation Oncology, University of Toronto, 148 - 150 College Street, Toronto, Ontario M5S 3S2 (Canada); Sharpe, Michael B. [Radiation Medicine Program, UHN Princess Margaret Cancer Centre, 610 University of Avenue, Toronto, Ontario M5T 2M9 (Canada); Department of Radiation Oncology, University of Toronto, 148 - 150 College Street, Toronto, Ontario M5S 3S2 (Canada); Techna Institute for the Advancement of Technology for Health, 124 - 100 College Street, Toronto, Ontario M5G 1P5 (Canada); Chan, Timothy C. Y. [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King’s College Road, Toronto, Ontario M5S 3G8, Canada and Techna Institute for the Advancement of Technology for Health, 124 - 100 College Street, Toronto, Ontario M5G 1P5 (Canada)
2015-04-15
Purpose: To develop and evaluate the clinical applicability of advanced machine learning models that simultaneously predict multiple optimization objective function weights from patient geometry for intensity-modulated radiation therapy of prostate cancer. Methods: A previously developed inverse optimization method was applied retrospectively to determine optimal objective function weights for 315 treated patients. The authors used an overlap volume ratio (OV) of bladder and rectum for different PTV expansions and overlap volume histogram slopes (OVSR and OVSB for the rectum and bladder, respectively) as explanatory variables that quantify patient geometry. Using the optimal weights as ground truth, the authors trained and applied three prediction models: logistic regression (LR), multinomial logistic regression (MLR), and weighted K-nearest neighbor (KNN). The population average of the optimal objective function weights was also calculated. Results: The OV at 0.4 cm and OVSR at 0.1 cm features were found to be the most predictive of the weights. The authors observed comparable performance (i.e., no statistically significant difference) between LR, MLR, and KNN methodologies, with LR appearing to perform the best. All three machine learning models outperformed the population average by a statistically significant amount over a range of clinical metrics including bladder/rectum V53Gy, bladder/rectum V70Gy, and dose to the bladder, rectum, CTV, and PTV. When comparing the weights directly, the LR model predicted bladder and rectum weights that had, on average, a 73% and 74% relative improvement over the population average weights, respectively. The treatment plans resulting from the LR weights had, on average, a rectum V70Gy that was 35% closer to the clinical plan and a bladder V70Gy that was 29% closer, compared to the population average weights. Similar results were observed for all other clinical metrics. Conclusions: The authors demonstrated that the KNN and MLR
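The weighted KNN approach described above can be sketched in a few lines: neighbours in geometry-feature space vote on the objective-function weight vector, weighted by inverse distance. This is a minimal illustration, not the authors' implementation; the feature values and weight vectors below are invented for the example.

```python
import math

def knn_predict_weights(train, query, k=3):
    """Weighted K-nearest-neighbour prediction of an objective-function
    weight vector from patient geometry features.

    train: list of (features, weights) pairs; query: feature tuple.
    Neighbours are weighted by inverse Euclidean distance.
    """
    dists = sorted((math.dist(feats, query), w) for feats, w in train)
    nearest = dists[:k]
    # Inverse-distance weights (small epsilon guards exact matches).
    inv = [1.0 / (d + 1e-9) for d, _ in nearest]
    total = sum(inv)
    n = len(nearest[0][1])
    return [sum(iv * w[j] for iv, (_, w) in zip(inv, nearest)) / total
            for j in range(n)]

# Hypothetical (OV, OVSR) features mapped to (bladder, rectum) weights.
train = [((0.1, 0.5), [0.7, 0.3]),
         ((0.4, 0.2), [0.4, 0.6]),
         ((0.9, 0.9), [0.2, 0.8])]
```

Because the prediction is a convex combination of training weight vectors, weights that sum to one stay normalized.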
Modelling the electrical properties of concrete for shielding effectiveness prediction
International Nuclear Information System (INIS)
Sandrolini, L; Reggiani, U; Ogunsola, A
2007-01-01
Concrete is a porous, heterogeneous material whose abundant use in numerous applications demands a detailed understanding of its electrical properties. Besides experimental measurements, theoretical material models can be useful to investigate its behaviour with respect to frequency, moisture content or other factors. These models can be used in electromagnetic compatibility (EMC) to predict the shielding effectiveness of a concrete structure against external electromagnetic waves. This paper presents the development of a dispersive material model for concrete from experimental measurement data to account for the frequency dependence of concrete's electrical properties. The model is implemented in a numerical simulator and compared with the classical transmission-line approach in shielding effectiveness calculations of simple concrete walls of different moisture content. The comparative results show good agreement in all cases; a possible relation between shielding effectiveness and the electrical properties of concrete, and the limits of the proposed model, are discussed.
Hauri, D D; Huss, A; Zimmermann, F; Kuehni, C E; Röösli, M
2013-10-01
Radon plays an important role for human exposure to natural sources of ionizing radiation. The aim of this article is to compare two approaches to estimate mean radon exposure in the Swiss population: model-based predictions at individual level and measurement-based predictions based on measurements aggregated at municipality level. A nationwide model was used to predict radon levels in each household and for each individual based on the corresponding tectonic unit, building age, building type, soil texture, degree of urbanization, and floor. Measurement-based predictions were carried out within a health impact assessment on residential radon and lung cancer. Mean measured radon levels were corrected for the average floor distribution and weighted with population size of each municipality. Model-based predictions yielded a mean radon exposure of the Swiss population of 84.1 Bq/m³. Measurement-based predictions yielded an average exposure of 78 Bq/m³. This study demonstrates that the model- and the measurement-based predictions provided similar results. The advantage of the measurement-based approach is its simplicity, which is sufficient for assessing exposure distribution in a population. The model-based approach allows prediction of radon levels at specific sites, which is needed in an epidemiological study, and the results do not depend on how the measurement sites have been selected. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
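The measurement-based approach (municipal mean levels weighted by population size) reduces to a population-weighted average. A minimal sketch with invented municipal data; the figures below are illustrative, not the Swiss survey values.

```python
def population_weighted_exposure(municipalities):
    """Population-weighted mean radon exposure in Bq/m^3.

    municipalities: list of (mean_radon_bq_m3, population) pairs,
    mirroring the measurement-based approach of weighting municipal
    mean radon levels by population size.
    """
    total_pop = sum(p for _, p in municipalities)
    return sum(r * p for r, p in municipalities) / total_pop

# Hypothetical municipal mean levels and populations (illustrative only).
example = [(60.0, 400_000), (90.0, 250_000), (120.0, 100_000)]
```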
Social network models predict movement and connectivity in ecological landscapes
Fletcher, Robert J.; Acevedo, M.A.; Reichert, Brian E.; Pias, Kyle E.; Kitchens, Wiley M.
2011-01-01
Network analysis is on the rise across scientific disciplines because of its ability to reveal complex, and often emergent, patterns and dynamics. Nonetheless, a growing concern in network analysis is the use of limited data for constructing networks. This concern is strikingly relevant to ecology and conservation biology, where network analysis is used to infer connectivity across landscapes. In this context, movement among patches is the crucial parameter for interpreting connectivity but because of the difficulty of collecting reliable movement data, most network analysis proceeds with only indirect information on movement across landscapes rather than using observed movement to construct networks. Statistical models developed for social networks provide promising alternatives for landscape network construction because they can leverage limited movement information to predict linkages. Using two mark-recapture datasets on individual movement and connectivity across landscapes, we test whether commonly used network constructions for interpreting connectivity can predict actual linkages and network structure, and we contrast these approaches to social network models. We find that currently applied network constructions for assessing connectivity consistently, and substantially, overpredict actual connectivity, resulting in considerable overestimation of metapopulation lifetime. Furthermore, social network models provide accurate predictions of network structure, and can do so with remarkably limited data on movement. Social network models offer a flexible and powerful way for not only understanding the factors influencing connectivity but also for providing more reliable estimates of connectivity and metapopulation persistence in the face of limited data.
Using Random Forest Models to Predict Organizational Violence
Levine, Burton; Bobashev, Georgiy
2012-01-01
We present a methodology to assess the proclivity of an organization to commit violence against nongovernment personnel. We fitted a Random Forest model using the Minority at Risk Organizational Behavior (MAROS) dataset. The MAROS data is longitudinal, so individual observations are not independent. We propose a modification to the standard Random Forest methodology to account for the violation of the independence assumption. We present the results of the model fit, an example of predicting violence for an organization, and finally a summary of the forest in a "meta-tree,"
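One common way to respect the non-independence of longitudinal observations, in the spirit of the modification described, is to bootstrap whole organizations (clusters) rather than individual rows when building each tree's training sample. This is a generic sketch of a cluster bootstrap, not the authors' exact procedure; field names are invented.

```python
import random

def cluster_bootstrap(rows, key, rng=None):
    """Bootstrap sample that resamples whole clusters (e.g. organizations)
    rather than individual rows, preserving within-cluster dependence.

    rows: list of dicts; key: field identifying the cluster.
    """
    rng = rng or random.Random()
    clusters = {}
    for row in rows:
        clusters.setdefault(row[key], []).append(row)
    ids = list(clusters)
    sample = []
    for _ in ids:  # draw as many clusters as exist, with replacement
        sample.extend(clusters[rng.choice(ids)])
    return sample
```

Each draw copies an entire cluster, so within-organization correlation structure survives into every bootstrap replicate.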
Butterfly, Recurrence, and Predictability in Lorenz Models
Shen, B. W.
2017-12-01
Over the span of 50 years, the original three-dimensional Lorenz model (3DLM; Lorenz, 1963) and its high-dimensional versions (e.g., Shen 2014a and references therein) have been used for improving our understanding of the predictability of weather and climate with a focus on chaotic responses. Although the Lorenz studies focus on nonlinear processes and chaotic dynamics, people often apply a "linear" conceptual model to understand the nonlinear processes in the 3DLM. In this talk, we present examples to illustrate common misunderstandings regarding the butterfly effect and discuss the importance of the recurrence and boundedness of solutions in the 3DLM and high-dimensional LMs. The first example concerns the following folklore, widely used as an analogy of the butterfly effect: "For want of a nail, the shoe was lost. For want of a shoe, the horse was lost. For want of a horse, the rider was lost. For want of a rider, the battle was lost. For want of a battle, the kingdom was lost. And all for the want of a horseshoe nail." However, in 2008, Prof. Lorenz stated that he did not feel that this verse described true chaos but that it better illustrated the simpler phenomenon of instability, and that the verse implicitly suggests that subsequent small events will not reverse the outcome (Lorenz, 2008). Lorenz's comments suggest that the verse neither describes negative (nonlinear) feedback nor indicates recurrence, the latter of which is required for the appearance of a butterfly pattern. The second example illustrates that the divergence of two nearby trajectories should be bounded and recurrent, as shown in Figure 1. Furthermore, we will discuss how high-dimensional LMs were derived to illustrate (1) negative nonlinear feedback that stabilizes the system within the five- and seven-dimensional LMs (5D and 7D LMs; Shen 2014a; 2015a; 2016); (2) positive nonlinear feedback that destabilizes the system within the 6D and 8D LMs (Shen 2015b; 2017); and (3
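The bounded, recurrent divergence of two nearby trajectories in the 3DLM can be demonstrated in a few lines. The sketch below integrates the classic Lorenz equations (sigma = 10, rho = 28, beta = 8/3) with a fourth-order Runge-Kutta step; the step size, duration, and initial perturbation are illustrative choices, not taken from the talk.

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One fourth-order Runge-Kutta step of the classic 3D Lorenz model."""
    def f(s):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

    def add(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))

    k1 = f(state)
    k2 = f(add(state, k1, dt / 2))
    k3 = f(add(state, k2, dt / 2))
    k4 = f(add(state, k3, dt))
    return tuple(s + dt / 6 * (p + 2 * q + 2 * r + w)
                 for s, p, q, r, w in zip(state, k1, k2, k3, k4))

# Two trajectories differing by 1e-8 in z: they separate (sensitive
# dependence) yet both remain bounded on the attractor.
a, b = (1.0, 1.0, 1.0), (1.0, 1.0, 1.0 + 1e-8)
for _ in range(3000):  # 30 time units
    a, b = lorenz_step(a), lorenz_step(b)
```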
A Neural Network Model for Prediction of Sound Quality
DEFF Research Database (Denmark)
Nielsen, Lars Bramsløw
An artificial neural network structure has been specified, implemented and optimized for the purpose of predicting the perceived sound quality for normal-hearing and hearing-impaired subjects. The network was implemented by means of commercially available software and optimized to predict results … obtained in subjective sound quality rating experiments based on input data from an auditory model. Various types of input data and data representations from the auditory model were used as input data for the chosen network structure, which was a three-layer perceptron. This network was trained by means … the physical signal parameters and the subjectively perceived sound quality. No simple objective-subjective relationship was evident from this analysis. …
Auditing predictive models : a case study in crop growth
Metselaar, K.
1999-01-01
Methods were developed to assess and quantify the predictive quality of simulation models, with the intent of contributing to the evaluation of model studies by non-scientists. In a case study, two models of different complexity, LINTUL and SUCROS87, were used to predict the yield of forage maize.
Models for predicting compressive strength and water absorption of ...
African Journals Online (AJOL)
This work presents a mathematical model for predicting the compressive strength and water absorption of laterite-quarry dust cement block using augmented Scheffe's simplex lattice design. The statistical models developed can predict the mix proportion that will yield the desired property. The models were tested for lack of ...
Statistical and Machine Learning Models to Predict Programming Performance
Bergin, Susan
2006-01-01
This thesis details a longitudinal study on factors that influence introductory programming success and on the development of machine learning models to predict incoming student performance. Although numerous studies have developed models to predict programming success, the models struggled to achieve high accuracy in predicting the likely performance of incoming students. Our approach overcomes this by providing a machine learning technique, using a set of three significant...
Genomic value prediction for quantitative traits under the epistatic model
Directory of Open Access Journals (Sweden)
Xu Shizhong
2011-01-01
Full Text Available Abstract Background Most quantitative traits are controlled by multiple quantitative trait loci (QTL). The contribution of each locus may be negligible but the collective contribution of all loci is usually significant. Genome selection, which uses markers of the entire genome to predict the genomic values of individual plants or animals, can be more efficient than selection on phenotypic values and pedigree information alone for genetic improvement. When a quantitative trait is contributed by epistatic effects, using all markers (main effects) and marker pairs (epistatic effects) to predict the genomic values of plants can achieve the maximum efficiency for genetic improvement. Results In this study, we created 126 recombinant inbred lines of soybean and genotyped 80 markers across the genome. We applied the genome selection technique to predict the genomic value of somatic embryo number (a quantitative trait) for each line. Cross validation analysis showed that the squared correlation coefficient between the observed and predicted embryo numbers was 0.33 when only main (additive) effects were used for prediction. When the interaction (epistatic) effects were also included in the model, the squared correlation coefficient reached 0.78. Conclusions This study provides an excellent example of the application of genome selection to plant breeding.
Computational Model-Based Prediction of Human Episodic Memory Performance Based on Eye Movements
Sato, Naoyuki; Yamaguchi, Yoko
Subjects' episodic memory performance is not simply reflected by eye movements. We use a ‘theta phase coding’ model of the hippocampus to predict subjects' memory performance from their eye movements. Results demonstrate the ability of the model to predict subjects' memory performance. These studies provide a novel approach to computational modeling in the human-machine interface.
Effects of uncertainty in model predictions of individual tree volume on large area volume estimates
Ronald E. McRoberts; James A. Westfall
2014-01-01
Forest inventory estimates of tree volume for large areas are typically calculated by adding model predictions of volumes for individual trees. However, the uncertainty in the model predictions is generally ignored with the result that the precision of the large area volume estimates is overestimated. The primary study objective was to estimate the effects of model...
Dinucleotide controlled null models for comparative RNA gene prediction
Directory of Open Access Journals (Sweden)
Gesell Tanja
2008-05-01
Full Text Available Abstract Background Comparative prediction of RNA structures can be used to identify functional noncoding RNAs in genomic screens. It was shown recently by Babak et al. [BMC Bioinformatics. 8:33] that RNA gene prediction programs can be biased by the genomic dinucleotide content, in particular those programs using a thermodynamic folding model including stacking energies. As a consequence, there is a need for dinucleotide-preserving control strategies to assess the significance of such predictions. While randomization algorithms for single sequences have existed for many years, the problem has remained challenging for multiple alignments, and there is currently no algorithm available. Results We present a program called SISSIz that simulates multiple alignments of a given average dinucleotide content. Meeting additional requirements of an accurate null model, the randomized alignments are on average of the same sequence diversity and preserve local conservation and gap patterns. We make use of a phylogenetic substitution model that includes overlapping dependencies and site-specific rates. Using fast heuristics and a distance-based approach, a tree is estimated under this model which is used to guide the simulations. The new algorithm is tested on vertebrate genomic alignments and the effect on RNA structure predictions is studied. In addition, we directly combined the new null model with the RNAalifold consensus folding algorithm, giving a new variant of a thermodynamic structure-based RNA gene finding program that is not biased by the dinucleotide content. Conclusion SISSIz implements an efficient algorithm to randomize multiple alignments preserving dinucleotide content. It can be used to get more accurate estimates of false positive rates of existing programs, to produce negative controls for the training of machine learning based programs, or as a standalone RNA gene finding program. Other applications in comparative genomics that require
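The invariant that such null models must preserve is the overlapping dinucleotide content. A minimal sketch of how that invariant can be counted and checked; this is not the SISSIz algorithm itself, which additionally preserves conservation and gap patterns across the alignment.

```python
from collections import Counter

def dinucleotide_counts(seq):
    """Counts of overlapping dinucleotides, the statistic that
    dinucleotide-preserving randomization null models keep fixed."""
    return Counter(seq[i:i + 2] for i in range(len(seq) - 1))

def same_dinucleotide_content(a, b):
    """True if two sequences share identical dinucleotide counts."""
    return dinucleotide_counts(a) == dinucleotide_counts(b)
```

Note that mononucleotide shuffles generally fail this check, which is exactly why stacking-energy-based folding scores become biased under them.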
Development of a Predictive Model for Induction Success of Labour
Directory of Open Access Journals (Sweden)
Cristina Pruenza
2018-03-01
Full Text Available Induction of the labour process is an extraordinarily common procedure in some pregnancies. Obstetricians face the need to end a pregnancy, usually for medical reasons (maternal or fetal requirements) or, less frequently, for social reasons (elective inductions for convenience). The success of the induction procedure is conditioned by a multitude of maternal and fetal variables that appear before or during pregnancy or the birth process, each with a low predictive value. Failure of the induction process entails performing a caesarean section. This project arises from the clinical need to resolve a situation of uncertainty that occurs frequently in clinical practice. Since the weight of the clinical variables is not adequately quantified, it is very useful to know a priori the probability of a successful induction, in order to dismiss those inductions with a high probability of failure, avoiding unnecessary procedures or postponing the end of pregnancy where possible. We developed a predictive model of induced labour success as a support tool in clinical decision making. Improving the predictability of a successful induction is one of the current challenges of obstetrics because of the negative impact of failed inductions. Identifying patients with a high chance of failure will allow us to offer them better care, improving their health outcomes (adverse perinatal outcomes for mother and newborn), costs (medication, hospitalization, qualified staff) and perceived quality of care. A Clinical Decision Support System was therefore developed to support obstetricians. In this article, we propose a robust method to explore and model a source of clinical information with the purpose of extracting all possible knowledge. Generally, in classification models it is difficult to know the contribution that each attribute provides to the model. We have worked in this direction to offer transparency to models that may otherwise be considered black boxes. The positive results obtained from both the
Probabilistic Modeling and Visualization for Bankruptcy Prediction
DEFF Research Database (Denmark)
Antunes, Francisco; Ribeiro, Bernardete; Pereira, Francisco Camara
2017-01-01
In accounting and finance domains, bankruptcy prediction is of great utility for all of the economic stakeholders. The challenge of accurately assessing business failure prediction, especially under scenarios of financial crisis, is known to be complicated. Although there have been many successful … studies on bankruptcy detection, probabilistic approaches have seldom been carried out. In this paper we assume a probabilistic point of view by applying Gaussian Processes (GP) in the context of bankruptcy prediction, comparing them against Support Vector Machines (SVM) and Logistic Regression (LR). … Using real-world bankruptcy data, an in-depth analysis is conducted showing that, in addition to a probabilistic interpretation, the GP can effectively improve bankruptcy prediction performance with high accuracy when compared to the other approaches. We additionally generate a complete graphical …
A Grey NGM(1,1,k) Self-Memory Coupling Prediction Model for Energy Consumption Prediction
Directory of Open Access Journals (Sweden)
Xiaojun Guo
2014-01-01
Full Text Available Energy consumption prediction is an important issue for governments, energy sector investors, and other related corporations. Although there are several prediction techniques, selection of the most appropriate technique is of vital importance. For the approximately nonhomogeneous exponential data sequences that often emerge in the energy system, a novel grey NGM(1,1,k) self-memory coupling prediction model is put forward in order to improve the predictive performance. It achieves an organic integration of the self-memory principle of dynamic systems and the grey NGM(1,1,k) model. The traditional grey model's weakness of being sensitive to initial values can be overcome by the self-memory principle. In this study, the total energy, coal, and electricity consumption of China is adopted for demonstration using the proposed coupling prediction technique. The results show the superiority of the NGM(1,1,k) self-memory coupling prediction model when compared with the results from the literature. Its excellent prediction performance lies in the fact that the proposed coupling model can take full advantage of the systematic multi-time historical data and catch the stochastic fluctuation tendency. This work also makes a significant contribution to the enrichment of grey prediction theory and the extension of its application span.
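For readers unfamiliar with grey models, the base GM(1,1) forecast, which the NGM(1,1,k) self-memory variant extends, can be sketched as follows. The accumulation, background-value, and least-squares steps are the standard textbook construction; the data below are invented for illustration, not the Chinese energy series used in the paper.

```python
import math

def gm11_forecast(x0, steps=1):
    """Classic GM(1,1) grey forecast, the base model that the
    NGM(1,1,k) self-memory variant extends.

    x0: positive observation sequence; returns fitted values for the
    observed points followed by `steps` forecast values.
    """
    n = len(x0)
    # 1-AGO accumulation and background values z
    x1, s = [], 0.0
    for v in x0:
        s += v
        x1.append(s)
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]
    y = x0[1:]
    # Least squares for the grey equation x0[k] = -a*z[k] + b
    szz = sum(zi * zi for zi in z)
    sz = sum(z)
    szy = sum(zi * yi for zi, yi in zip(z, y))
    sy = sum(y)
    m = n - 1
    det = szz * m - sz * sz
    a = -(szy * m - sz * sy) / det
    b = (szz * sy - sz * szy) / det
    # Continuous-time response, then de-accumulate back to x0 scale
    c = x0[0] - b / a
    x1_hat = [c * math.exp(-a * k) + b / a for k in range(n + steps)]
    return [x1_hat[0]] + [x1_hat[k] - x1_hat[k - 1]
                          for k in range(1, n + steps)]
```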
A new ensemble model for short term wind power prediction
DEFF Research Database (Denmark)
Madsen, Henrik; Albu, Razvan-Daniel; Felea, Ioan
2012-01-01
As the objective of this study, a non-linear ensemble system is used to develop a new model for predicting wind speed on a short-term time scale. Short-term wind power prediction has become an extremely important field of research for the energy sector. Regardless of the recent advancements in the research … of prediction models, it has been observed that different models have different capabilities and also that no single model is suitable under all situations. The idea behind EPS (ensemble prediction systems) is to take advantage of the unique features of each subsystem to capture the diverse patterns that exist in the dataset …
Testing the predictive power of nuclear mass models
International Nuclear Information System (INIS)
Mendoza-Temis, J.; Morales, I.; Barea, J.; Frank, A.; Hirsch, J.G.; Vieyra, J.C. Lopez; Van Isacker, P.; Velazquez, V.
2008-01-01
A number of tests are introduced which probe the ability of nuclear mass models to extrapolate. Three models are analyzed in detail: the liquid drop model, the liquid drop model plus empirical shell corrections, and the Duflo-Zuker mass formula. If predicted nuclei are close to the fitted ones, average errors in predicted and fitted masses are similar. However, the challenge of predicting nuclear masses in a region stabilized by shell effects (e.g., the lead region) is far more difficult. The Duflo-Zuker mass formula emerges as a powerful predictive tool.
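The liquid drop baseline that these tests start from is the semi-empirical mass formula. A sketch with one common textbook coefficient set (coefficient values vary between fits, and this simple form is exactly what empirical shell corrections are added to):

```python
def binding_energy_mev(Z, N):
    """Semi-empirical (liquid drop) binding energy in MeV.

    Uses a typical textbook coefficient set: volume, surface, Coulomb,
    asymmetry, and pairing terms. Coefficients vary between fits.
    """
    A = Z + N
    a_v, a_s, a_c, a_a, a_p = 15.8, 18.3, 0.714, 23.2, 12.0
    if Z % 2 == 0 and N % 2 == 0:       # even-even: bound more tightly
        pairing = a_p / A ** 0.5
    elif Z % 2 == 1 and N % 2 == 1:     # odd-odd: bound less tightly
        pairing = -a_p / A ** 0.5
    else:
        pairing = 0.0
    return (a_v * A
            - a_s * A ** (2 / 3)
            - a_c * Z * (Z - 1) / A ** (1 / 3)
            - a_a * (N - Z) ** 2 / A
            + pairing)
```

With these coefficients the binding energy per nucleon comes out near 8.8 MeV for Fe-56 and falls for heavy nuclei, reproducing the broad trend but not the shell-effect plateaus the abstract highlights.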
Scale Model Thruster Acoustic Measurement Results
Vargas, Magda; Kenny, R. Jeremy
2013-01-01
The Space Launch System (SLS) Scale Model Acoustic Test (SMAT) is a 5% scale representation of the SLS vehicle, mobile launcher, tower, and launch pad trench. The SLS launch propulsion system will be comprised of the Rocket Assisted Take-Off (RATO) motors representing the solid boosters and 4 gaseous hydrogen (GH2) thrusters representing the core engines. The GH2 thrusters were tested in a horizontal configuration in order to characterize their performance. In Phase 1, a single thruster was fired to determine the engine performance parameters necessary for scaling a single engine. A cluster configuration, consisting of the 4 thrusters, was tested in Phase 2 to integrate the system and determine the combined performance. Acoustic and overpressure data were collected during both test phases in order to characterize the system's acoustic performance. The results from the single-thruster and 4-thruster systems are discussed and compared.
CMS standard model Higgs boson results
Directory of Open Access Journals (Sweden)
Garcia-Abia Pablo
2013-11-01
Full Text Available In July 2012 CMS announced the discovery of a new boson with properties resembling those of the long-sought Higgs boson. The analysis of the proton-proton collision data recorded by the CMS detector at the LHC, corresponding to integrated luminosities of 5.1 fb−1 at √s = 7 TeV and 19.6 fb−1 at √s = 8 TeV, confirms the Higgs-like nature of the new boson, with a signal strength associated with vector bosons and fermions consistent with the expectations for a standard model (SM) Higgs boson, and spin-parity clearly favouring the scalar nature of the new boson. In this note I review the updated results of the CMS experiment.
Modelling Extortion Racket Systems: Preliminary Results
Nardin, Luis G.; Andrighetto, Giulia; Székely, Áron; Conte, Rosaria
Mafias are highly powerful and deeply entrenched organised criminal groups that cause both economic and social damage. Overcoming, or at least limiting, their harmful effects is a societally beneficial objective, which makes understanding their dynamics an objective of both scientific and political interest. We propose an agent-based simulation model aimed at understanding how independent and combined effects of legal and social norm-based processes help to counter mafias. Our results show that legal processes are effective in directly countering mafias by reducing their activities and changing the behaviour of the rest of the population, yet they are not able to change people's mind-set, which renders the change fragile. When combined with social norm-based processes, however, people's mind-set shifts towards a culture of legality, rendering the observed behaviour resilient to change.
New results in the Dual Parton Model
International Nuclear Information System (INIS)
Van, J.T.T.; Capella, A.
1984-01-01
In this paper, the similarities between the x distributions for particle production and the fragmentation functions observed in e+e- collisions and in deep inelastic scattering are presented. Based on these observations, the authors develop a complete approach to multiparticle production which incorporates the most important features and concepts learned about high energy collisions. 1. Topological expansion: the dominant diagram at high energy corresponds to the simplest topology. 2. Unitarity: diagrams of various topologies contribute to the cross sections in a way that unitarity is preserved. 3. Regge behaviour and duality. 4. Partonic structure of hadrons. These general theoretical ideas result from many joint experimental and theoretical efforts on the study of soft hadron physics. The dual parton model is able to explain all the experimental features from FNAL to SPS collider energies. It has all the properties of an S-matrix theory and provides a unified description of hadron-hadron, hadron-nucleus and nucleus-nucleus collisions.
Mirams, Gary R; Davies, Mark R; Brough, Stephen J; Bridgland-Taylor, Matthew H; Cui, Yi; Gavaghan, David J; Abi-Gerges, Najah
2014-01-01
Detection of drug-induced pro-arrhythmic risk is a primary concern for pharmaceutical companies and regulators. Increased risk is linked to prolongation of the QT interval on the body surface ECG. Recent studies have shown that multiple ion channel interactions can be required to predict changes in ventricular repolarisation and therefore QT intervals. In this study we attempt to predict the result of the human clinical Thorough QT (TQT) study, using multiple ion channel screening which is available early in drug development. Ion current reduction was measured, in the presence of marketed drugs which have had a TQT study, for channels encoded by hERG, CaV1.2, NaV1.5, KCNQ1/MinK, and Kv4.3/KChIP2.2. The screen was performed on two platforms: IonWorks Quattro (all 5 channels, 34 compounds) and IonWorks Barracuda (hERG & CaV1.2, 26 compounds). Concentration-effect curves were fitted to the resulting data and used to calculate a percentage reduction in each current at a given concentration. Action potential simulations were then performed using the ten Tusscher and Panfilov (2006), Grandi et al. (2010) and O'Hara et al. (2011) human ventricular action potential models, pacing at 1 Hz and running to steady state, for a range of concentrations. We compared simulated action potential duration predictions with the QT prolongation observed in the TQT studies. At the estimated concentrations, simulations tended to underestimate any observed QT prolongation. When considering a wider range of concentrations, and conventional patch clamp rather than screening data for hERG, prolongation of ≥5 ms was predicted with up to 79% sensitivity and 100% specificity. This study provides a proof-of-principle for the prediction of human TQT study results using data available early in drug development. We highlight a number of areas that need refinement to improve the method's predictive power, but the results suggest that such approaches will provide a useful tool in cardiac safety
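Converting a fitted concentration-effect curve into the fractional current reduction used to scale channel conductances in the action potential models amounts to evaluating a Hill equation. A minimal sketch; the parameter values in the test are illustrative, not fitted screen data.

```python
def fraction_blocked(conc, ic50, hill=1.0):
    """Fractional reduction of an ion current at drug concentration
    `conc`, from a concentration-effect (Hill) curve with the given
    IC50 and Hill coefficient. The remaining conductance is then
    scaled by (1 - fraction_blocked) in an action potential model."""
    if conc <= 0.0:
        return 0.0
    return 1.0 / (1.0 + (ic50 / conc) ** hill)
```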
The prediction of late-onset preeclampsia: Results from a longitudinal proteomics study
Erez, Offer; Romero, Roberto; Maymon, Eli; Chaemsaithong, Piya; Done, Bogdan; Pacora, Percy; Panaitescu, Bogdan; Chaiworapongsa, Tinnakorn; Hassan, Sonia S.
2017-01-01
Background Late-onset preeclampsia is the most prevalent phenotype of this syndrome; nevertheless, only a few biomarkers for its early diagnosis have been reported. We sought to correct this deficiency using a high-throughput proteomic platform. Methods A case-control longitudinal study was conducted, including 90 patients with normal pregnancies and 76 patients with late-onset preeclampsia (diagnosed at ≥34 weeks of gestation). Maternal plasma samples were collected throughout gestation (normal pregnancy: 2–6 samples per patient, median of 2; late-onset preeclampsia: 2–6, median of 5). The abundance of 1,125 proteins was measured using an aptamer-based proteomics technique. Protein abundance in normal pregnancies was modeled using linear mixed-effects models to estimate mean abundance as a function of gestational age. Data were then expressed as multiples-of-the-mean (MoM) values in normal pregnancies. Multi-marker prediction models were built using data from one of five gestational age intervals (8–16, 16.1–22, 22.1–28, 28.1–32, and 32.1–36 weeks of gestation). The predictive performance of the best combination of proteins was compared to placental growth factor (PlGF) using bootstrap. Results 1) At 8–16 weeks of gestation, the best prediction model included only one protein, matrix metalloproteinase 7 (MMP-7), which had a sensitivity of 69% at a false positive rate (FPR) of 20% (AUC = 0.76); 2) at 16.1–22 weeks of gestation, MMP-7 was the single best predictor of late-onset preeclampsia, with a sensitivity of 70% at a FPR of 20% (AUC = 0.82); 3) after 22 weeks of gestation, PlGF was the best predictor of late-onset preeclampsia, identifying 1/3 to 1/2 of the patients destined to develop this syndrome (FPR = 20%); 4) 36 proteins were associated with late-onset preeclampsia in at least one interval of gestation (after adjustment for covariates); 5) several biological processes, such as positive regulation of vascular endothelial growth factor
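Reporting sensitivity at a fixed false-positive rate, as done throughout these results, amounts to thresholding the biomarker at the appropriate control quantile and counting the cases that exceed it. A minimal sketch with invented scores, not the study's MoM data:

```python
def sensitivity_at_fpr(case_scores, control_scores, fpr=0.20):
    """Sensitivity of a biomarker at a fixed false-positive rate.

    The threshold is placed so that about `fpr` of controls score above
    it; sensitivity is the fraction of cases above that threshold."""
    ranked = sorted(control_scores, reverse=True)
    k = int(len(ranked) * fpr)
    threshold = ranked[k] if k < len(ranked) else min(ranked) - 1
    return sum(s > threshold for s in case_scores) / len(case_scores)
```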
Roche, Daniel B; Buenavista, Maria T; Tetchner, Stuart J; McGuffin, Liam J
2011-07-01
The IntFOLD server is a novel independent server that integrates several cutting-edge methods for the prediction of structure and function from sequence. Our guiding principles behind the server development were as follows: (i) to provide a simple unified resource that makes our prediction software accessible to all and (ii) to produce integrated output for predictions that can be easily interpreted. The output for predictions is presented as a simple table that summarizes all results graphically via plots and annotated 3D models. The raw machine-readable data files for each set of predictions are also provided for developers; these comply with the Critical Assessment of Methods for Protein Structure Prediction (CASP) data standards. The server comprises an integrated suite of five novel methods: nFOLD4, for tertiary structure prediction; ModFOLD 3.0, for model quality assessment; DISOclust 2.0, for disorder prediction; DomFOLD 2.0, for domain prediction; and FunFOLD 1.0, for ligand binding site prediction. Predictions from the IntFOLD server were found to be competitive in several categories in the recent CASP9 experiment. The IntFOLD server is available at the following web site: http://www.reading.ac.uk/bioinf/IntFOLD/.
Regression Models and Fuzzy Logic Prediction of TBM Penetration Rate
Directory of Open Access Journals (Sweden)
Minh Vu Trieu
2017-03-01
Full Text Available This paper presents statistical analyses of rock engineering properties and the measured penetration rate of tunnel boring machine (TBM) based on the data of an actual project. The aim of this study is to analyze the influence of rock engineering properties including uniaxial compressive strength (UCS), Brazilian tensile strength (BTS), rock brittleness index (BI), the distance between planes of weakness (DPW), and the alpha angle (Alpha) between the tunnel axis and the planes of weakness on the TBM rate of penetration (ROP). Four (4) statistical regression models (two linear and two nonlinear) are built to predict the ROP of TBM. Finally, a fuzzy logic model is developed as an alternative method and compared to the four statistical regression models. Results show that the fuzzy logic model provides better estimations and can be applied to predict the TBM performance. The R-squared value (R2) of the fuzzy logic model scores the highest value of 0.714 over the second runner-up of 0.667 from the multiple variables nonlinear regression model.
Regression Models and Fuzzy Logic Prediction of TBM Penetration Rate
Minh, Vu Trieu; Katushin, Dmitri; Antonov, Maksim; Veinthal, Renno
2017-03-01
This paper presents statistical analyses of rock engineering properties and the measured penetration rate of tunnel boring machine (TBM) based on the data of an actual project. The aim of this study is to analyze the influence of rock engineering properties including uniaxial compressive strength (UCS), Brazilian tensile strength (BTS), rock brittleness index (BI), the distance between planes of weakness (DPW), and the alpha angle (Alpha) between the tunnel axis and the planes of weakness on the TBM rate of penetration (ROP). Four (4) statistical regression models (two linear and two nonlinear) are built to predict the ROP of TBM. Finally a fuzzy logic model is developed as an alternative method and compared to the four statistical regression models. Results show that the fuzzy logic model provides better estimations and can be applied to predict the TBM performance. The R-squared value (R2) of the fuzzy logic model scores the highest value of 0.714 over the second runner-up of 0.667 from the multiple variables nonlinear regression model.
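The R-squared comparison reported above (0.714 for the fuzzy model vs. 0.667 for the nonlinear regression) can be illustrated with a minimal sketch of the regression step. The data below are synthetic stand-ins for the five rock properties named in the abstract, not the project's measurements:

```python
import numpy as np

# Hypothetical stand-in data: rows are tunnel sections, columns are the five
# rock properties from the abstract (UCS, BTS, BI, DPW, Alpha).
rng = np.random.default_rng(0)
X = rng.uniform([50, 5, 10, 0.05, 10], [200, 20, 40, 2.0, 90], size=(60, 5))
# Synthetic ROP (m/h), loosely decreasing with UCS and increasing with DPW.
y = 4.0 - 0.01 * X[:, 0] + 0.8 * X[:, 3] + rng.normal(0, 0.2, 60)

# Multiple linear regression by least squares: ROP = b0 + b . x
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef

# R-squared, the figure of merit used to rank the models in the paper.
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - np.mean(y)) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"R2 = {r2:.3f}")
```

The nonlinear and fuzzy models in the paper would replace the linear design matrix, but the R-squared comparison works the same way.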
From Predictive Models to Instructional Policies
Rollinson, Joseph; Brunskill, Emma
2015-01-01
At their core, Intelligent Tutoring Systems consist of a student model and a policy. The student model captures the state of the student and the policy uses the student model to individualize instruction. Policies require different properties from the student model. For example, a mastery threshold policy requires the student model to have a way…
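The mastery-threshold policy mentioned above can be sketched with Bayesian Knowledge Tracing, a common student model in intelligent tutoring systems. The parameter values below are illustrative, not taken from the paper:

```python
# Bayesian Knowledge Tracing (BKT): the student model tracks P(mastered),
# and a mastery-threshold policy stops giving practice items once that
# probability exceeds a cutoff. All parameter values are illustrative.
P_INIT, P_LEARN = 0.2, 0.15   # prior mastery, learning rate per item
P_SLIP, P_GUESS = 0.1, 0.25   # slip and guess probabilities
THRESHOLD = 0.95

def bkt_update(p_mastery, correct):
    """Posterior P(mastered) after one observed response, then a learning step."""
    if correct:
        num = p_mastery * (1 - P_SLIP)
        den = num + (1 - p_mastery) * P_GUESS
    else:
        num = p_mastery * P_SLIP
        den = num + (1 - p_mastery) * (1 - P_GUESS)
    posterior = num / den
    return posterior + (1 - posterior) * P_LEARN  # learning transition

# Mastery-threshold policy: keep practicing until the model says "mastered".
p = P_INIT
responses = [True, True, False, True, True, True, True, True]
items_given = 0
for r in responses:
    if p >= THRESHOLD:
        break
    p = bkt_update(p, r)
    items_given += 1
print(items_given, round(p, 3))
```

This makes the paper's point concrete: the policy (when to stop) only needs the student model to expose a mastery probability, which is a property not every student model provides.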
Automatic prediction of facial trait judgments: appearance vs. structural models.
Directory of Open Access Journals (Sweden)
Mario Rojas
Full Text Available Evaluating other individuals with respect to personality characteristics plays a crucial role in human relations and it is the focus of attention for research in diverse fields such as psychology and interactive computer systems. In psychology, face perception has been recognized as a key component of this evaluation system. Multiple studies suggest that observers use face information to infer personality characteristics. Interactive computer systems are trying to take advantage of these findings and apply them to increase the natural aspect of interaction and to improve the performance of interactive computer systems. Here, we experimentally test whether the automatic prediction of facial trait judgments (e.g. dominance) can be made by using the full appearance information of the face and whether a reduced representation of its structure is sufficient. We evaluate two separate approaches: a holistic representation model using the facial appearance information and a structural model constructed from the relations among facial salient points. State-of-the-art machine learning methods are applied to (a) derive a facial trait judgment model from training data and (b) predict a facial trait value for any face. Furthermore, we address the issue of whether there are specific structural relations among facial points that predict perception of facial traits. Experimental results over a set of labeled data (9 different trait evaluations) and classification rules (4 rules) suggest that (a) prediction of perception of facial traits is learnable by both holistic and structural approaches; (b) the most reliable prediction of facial trait judgments is obtained by certain type of holistic descriptions of the face appearance; and (c) for some traits such as attractiveness and extroversion, there are relationships between specific structural features and social perceptions.
Predicting coastal cliff erosion using a Bayesian probabilistic model
Hapke, Cheryl J.; Plant, Nathaniel G.
2010-01-01
Regional coastal cliff retreat is difficult to model due to the episodic nature of failures and the along-shore variability of retreat events. There is a growing demand, however, for predictive models that can be used to forecast areas vulnerable to coastal erosion hazards. Increasingly, probabilistic models are being employed that require data sets of high temporal density to define the joint probability density function that relates forcing variables (e.g. wave conditions) and initial conditions (e.g. cliff geometry) to erosion events. In this study we use a multi-parameter Bayesian network to investigate correlations between key variables that control and influence variations in cliff retreat processes. The network uses Bayesian statistical methods to estimate event probabilities using existing observations. Within this framework, we forecast the spatial distribution of cliff retreat along two stretches of cliffed coast in Southern California. The input parameters are the height and slope of the cliff, a descriptor of material strength based on the dominant cliff-forming lithology, and the long-term cliff erosion rate that represents prior behavior. The model is forced using predicted wave impact hours. Results demonstrate that the Bayesian approach is well-suited to the forward modeling of coastal cliff retreat, with the correct outcomes forecast in 70–90% of the modeled transects. The model also performs well in identifying specific locations of high cliff erosion, thus providing a foundation for hazard mapping. This approach can be employed to predict cliff erosion at time-scales ranging from storm events to the impacts of sea-level rise at the century-scale.
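The core Bayesian step, estimating event probabilities from existing observations, can be sketched with discretized variables. The states and records below are hypothetical placeholders, not the Southern California data:

```python
# Minimal sketch of the Bayesian-network step: estimate P(erosion | parents)
# from co-occurrence counts in discretized observations. Each record is a
# hypothetical transect: (material_strength_bin, wave_impact_bin, eroded?).
observations = [
    ("weak", "high", True), ("weak", "high", True), ("weak", "low", False),
    ("strong", "high", False), ("strong", "low", False), ("weak", "high", True),
    ("strong", "high", True), ("weak", "low", True), ("strong", "low", False),
]

def p_erosion(strength, waves):
    """Conditional probability of erosion given the two parent states."""
    matches = [e for s, w, e in observations if s == strength and w == waves]
    return sum(matches) / len(matches) if matches else None

p_weak_high = p_erosion("weak", "high")
print(p_weak_high)  # fraction of weak-rock, high-wave transects that eroded
```

A full Bayesian network adds more parents (cliff height, slope, prior retreat rate) and handles sparse state combinations, but the forecast is built from conditional probability tables of this form.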
Medium- and Long-term Prediction of LOD Change by the Leap-step Autoregressive Model
Wang, Qijie
2015-08-01
The accuracy of medium- and long-term prediction of length-of-day (LOD) change based on the combined least-squares and autoregressive (LS+AR) model deteriorates gradually. The leap-step autoregressive (LSAR) model can significantly reduce the edge effect of the observation sequence. In particular, the LSAR model greatly improves the resolution of the signals' low-frequency components, and can therefore improve prediction efficiency. In this work, LSAR is used to forecast the LOD change. The LOD series from EOP 08 C04, provided by the IERS, is modeled by both the LSAR and AR models. The results of the two models are analyzed and compared. When the prediction length is between 10 and 30 days, the accuracy improvement is less than 10%. When the prediction length exceeds 30 days, the accuracy improves markedly, with a maximum improvement of around 19%. The results show that the LSAR model has higher prediction accuracy and stability in medium- and long-term prediction.
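The AR component of such forecasts can be sketched as fitting the coefficients by least squares and iterating the model forward, feeding each prediction back in as input. The series below is synthetic; the real input would be the LOD residuals after the least-squares part of LS+AR, and the AR order here is illustrative:

```python
import numpy as np

# Fit an AR(p) model by least squares and forecast ahead, as in the AR step
# of LS+AR. Synthetic quasi-periodic series stands in for LOD residuals.
rng = np.random.default_rng(1)
t = np.arange(400)
series = np.sin(2 * np.pi * t / 60) + 0.05 * rng.normal(size=t.size)

p = 4  # AR order (illustrative)
# Design matrix: row j holds the p values preceding series[j + p].
X = np.column_stack([series[i : len(series) - p + i] for i in range(p)])
y = series[p:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Iterative multi-step forecast: feed predictions back as inputs.
history = list(series[-p:])
forecast = []
for _ in range(30):  # 30-day horizon, as in the comparison above
    nxt = float(np.dot(coef, history[-p:]))
    forecast.append(nxt)
    history.append(nxt)
print(len(forecast), round(forecast[0], 3))
```

The leap-step variant (LSAR) changes how the lagged samples are selected, sampling the series at larger strides to emphasize low-frequency components, but the fitting and iteration scheme is the same.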
Febrian Umbara, Rian; Tarwidi, Dede; Budi Setiawan, Erwin
2018-03-01
The paper discusses the prediction of the Jakarta Composite Index (JCI) on the Indonesia Stock Exchange. The study is based on JCI historical data for 1286 days to predict the value of JCI one day ahead. This paper proposes a prediction done in two stages: the first stage uses Fuzzy Time Series (FTS) to predict the values of ten technical indicators, and the second stage uses Support Vector Regression (SVR) to predict the value of JCI one day ahead, resulting in a hybrid FTS-SVR prediction model. The performance of this combined prediction model is compared with the performance of the single-stage prediction model using SVR only. Ten technical indicators are used as input for each model.
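The second stage, SVR mapping ten technical indicators to the next day's index value, can be sketched with scikit-learn. The indicators below are random stand-ins, not real JCI data or the FTS-predicted indicator values used in the paper, and the hyperparameters are illustrative:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# SVR sketch: ten indicators on day t -> index value on day t+1.
rng = np.random.default_rng(42)
n_days, n_indicators = 300, 10
X = rng.normal(size=(n_days, n_indicators))
# Synthetic target: a linear blend of the indicators plus noise.
w = rng.normal(size=n_indicators)
y = X @ w + 0.1 * rng.normal(size=n_days)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X[:-50], y[:-50])          # train on the first 250 days
pred = model.predict(X[-50:])        # predict the held-out 50 days
rmse = float(np.sqrt(np.mean((pred - y[-50:]) ** 2)))
print(f"hold-out RMSE: {rmse:.3f}")
```

In the hybrid model, the test-time inputs would come from the FTS stage's indicator forecasts rather than from observed indicators.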
The Complexity of Developmental Predictions from Dual Process Models
Stanovich, Keith E.; West, Richard F.; Toplak, Maggie E.
2011-01-01
Drawing developmental predictions from dual-process theories is more complex than is commonly realized. Overly simplified predictions drawn from such models may lead to premature rejection of the dual process approach as one of many tools for understanding cognitive development. Misleading predictions can be avoided by paying attention to several…
Sweat loss prediction using a multi-model approach.
Xu, Xiaojiang; Santee, William R
2011-07-01
A new multi-model approach (MMA) for sweat loss prediction is proposed to improve prediction accuracy. MMA was computed as the average of sweat loss predicted by two existing thermoregulation models: i.e., the rational model SCENARIO and the empirical model Heat Strain Decision Aid (HSDA). Three independent physiological datasets, a total of 44 trials, were used to compare predictions by MMA, SCENARIO, and HSDA. The observed sweat losses were collected under different combinations of uniform ensembles, environmental conditions (15-40°C, RH 25-75%), and exercise intensities (250-600 W). Root mean square deviation (RMSD), residual plots, and paired t tests were used to compare predictions with observations. Overall, MMA reduced RMSD by 30-39% in comparison with either SCENARIO or HSDA, and increased the prediction accuracy to 66% from 34% or 55%. Of the MMA predictions, 70% fell within the range of mean observed value ± SD, while only 43% of SCENARIO and 50% of HSDA predictions fell within the same range. Paired t tests showed that differences between observations and MMA predictions were not significant, but differences between observations and SCENARIO or HSDA predictions were significantly different for two datasets. Thus, MMA predicted sweat loss more accurately than either of the two single models for the three datasets used. Future work will be to evaluate MMA using additional physiological data to expand the scope of populations and conditions.
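The MMA combination rule itself is simple: average the two models' predictions, then compare each predictor against observations by RMSD. The numbers below are made up for illustration (the real inputs would be SCENARIO and HSDA outputs); note the two models' errors are deliberately anti-correlated here, which is exactly the situation where averaging helps:

```python
import numpy as np

# Multi-model approach (MMA): average two models' sweat-loss predictions.
# All values are hypothetical (g/h), not data from the paper's 44 trials.
observed = np.array([410.0, 520.0, 300.0, 640.0, 480.0])
scenario = np.array([350.0, 560.0, 250.0, 700.0, 430.0])
hsda     = np.array([470.0, 460.0, 360.0, 600.0, 540.0])
mma = (scenario + hsda) / 2.0

def rmsd(pred, obs):
    """Root mean square deviation between predictions and observations."""
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

print(rmsd(scenario, observed), rmsd(hsda, observed), rmsd(mma, observed))
```

When one model over-predicts where the other under-predicts, the averaged errors partly cancel, reproducing the RMSD reduction the paper reports.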
Predictive model for determining the quality of a call
Voznak, M.; Rozhon, J.; Partila, P.; Safarik, J.; Mikulec, M.; Mehic, M.
2014-05-01
In this paper, a predictive model for speech quality estimation is described. This model allows its user to obtain information about the speech quality in VoIP networks without performing an actual call and the consequent time-consuming sound file evaluation. This greatly increases the usability of speech quality measurement, especially in highly loaded networks, where the actual processing of all calls is rendered difficult or even impossible. The model can reach results that are highly conformant with the PESQ algorithm based only on network state parameters that are easily obtainable by commonly used software tools. Experiments were carried out to investigate whether different languages (English, Czech) have an effect on perceived voice quality for the same network conditions, and the language factor was incorporated directly into the model.
Sensitivity analysis of predictive models with an automated adjoint generator
International Nuclear Information System (INIS)
Pin, F.G.; Oblow, E.M.
1987-01-01
The adjoint method is a well established sensitivity analysis methodology that is particularly efficient in large-scale modeling problems. The coefficients of sensitivity of a given response with respect to every parameter involved in the modeling code can be calculated from the solution of a single adjoint run of the code. Sensitivity coefficients provide a quantitative measure of the importance of the model data in calculating the final results. The major drawback of the adjoint method is the requirement for calculations of very large numbers of partial derivatives to set up the adjoint equations of the model. ADGEN is a software system that has been designed to eliminate this drawback and automatically implement the adjoint formulation in computer codes. The ADGEN system will be described and its use for improving performance assessments and predictive simulations will be discussed. 8 refs., 1 fig
Optimization of arterial age prediction models based in pulse wave
Energy Technology Data Exchange (ETDEWEB)
Scandurra, A G [Bioengineering Laboratory, Electronic Department, Mar del Plata University (Argentina); Meschino, G J [Bioengineering Laboratory, Electronic Department, Mar del Plata University (Argentina); Passoni, L I [Bioengineering Laboratory, Electronic Department, Mar del Plata University (Argentina); Dai Pra, A L [Engineering Applied Artificial Intelligence Group, Mathematics Department, Mar del Plata University (Argentina); Introzzi, A R [Bioengineering Laboratory, Electronic Department, Mar del Plata University (Argentina); Clara, F M [Bioengineering Laboratory, Electronic Department, Mar del Plata University (Argentina)
2007-11-15
We propose the detection of early arterial ageing through a prediction model of arterial age based on the assumed coherence between pulse wave morphology and the patient's chronological age. Several methods were evaluated, and a Sugeno fuzzy inference system was selected. Model optimization is approached using hybrid methods: parameter adaptation with Artificial Neural Networks and Genetic Algorithms. Feature selection was performed according to the features' projection onto the main factors of a Principal Components Analysis. Model performance was tested using the .632E bootstrap error estimate. The model presented an error smaller than 8.5%. This result encourages including this process as a diagnosis module in the device for pulse analysis that has been developed by the Bioengineering Laboratory staff.
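The bootstrap error estimate used above can be sketched in its plain .632 form: a weighted blend of the optimistic training error and the pessimistic out-of-bag error. The model below is a trivial 1-D linear fit on synthetic data, standing in for the paper's fuzzy system (the .632E variant adds a correction for overfitting not shown here):

```python
import numpy as np

# Sketch of the .632 bootstrap error estimate on synthetic data.
rng = np.random.default_rng(7)
x = rng.uniform(20, 80, 100)                 # e.g. chronological age (years)
y = 0.9 * x + rng.normal(0, 4.0, 100)        # hypothetical "arterial age" proxy

def fit_predict(x_tr, y_tr, x_te):
    """Fit a 1-D linear model and predict at new points."""
    slope, intercept = np.polyfit(x_tr, y_tr, 1)
    return slope * x_te + intercept

def mse(a, b):
    return float(np.mean((a - b) ** 2))

err_train = mse(fit_predict(x, y, x), y)     # apparent (optimistic) error
oob_errors = []
for _ in range(200):                         # 200 bootstrap resamples
    idx = rng.integers(0, len(x), len(x))    # sample with replacement
    oob = np.setdiff1d(np.arange(len(x)), idx)
    if oob.size == 0:
        continue
    oob_errors.append(mse(fit_predict(x[idx], y[idx], x[oob]), y[oob]))
err_oob = float(np.mean(oob_errors))         # out-of-bag (pessimistic) error

err_632 = 0.368 * err_train + 0.632 * err_oob
print(round(err_632, 2))
```

The 0.632 weight comes from the expected fraction of distinct samples in a bootstrap resample (1 − 1/e ≈ 0.632).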
Mathematical models to predict rheological parameters of lateritic hydromixtures
Directory of Open Access Journals (Sweden)
Gabriel Hernández-Ramírez
2017-10-01
Full Text Available The objective of the present work was to establish mathematical models for predicting the rheological parameters of lateritic pulp at solids concentrations from 35% to 48%, preheated hydromixture temperatures above 82 °C, and mineral numbers between 3 and 16. Four samples of lateritic pulp from different process locations were used in the study. The results show that, under the conditions of this study, the plastic properties of the lateritic pulp conform to the Herschel-Bulkley model for real plastics. In addition, they show that for current operating conditions, and even for new situations, the UPD mathematical models have a greater ability to predict rheological parameters than the least-squares mathematical models.
Optimization of arterial age prediction models based in pulse wave
International Nuclear Information System (INIS)
Scandurra, A G; Meschino, G J; Passoni, L I; Dai Pra, A L; Introzzi, A R; Clara, F M
2007-01-01
We propose the detection of early arterial ageing through a prediction model of arterial age based on the assumed coherence between pulse wave morphology and the patient's chronological age. Several methods were evaluated, and a Sugeno fuzzy inference system was selected. Model optimization is approached using hybrid methods: parameter adaptation with Artificial Neural Networks and Genetic Algorithms. Feature selection was performed according to the features' projection onto the main factors of a Principal Components Analysis. Model performance was tested using the .632E bootstrap error estimate. The model presented an error smaller than 8.5%. This result encourages including this process as a diagnosis module in the device for pulse analysis that has been developed by the Bioengineering Laboratory staff.
Early experiences building a software quality prediction model
Agresti, W. W.; Evanco, W. M.; Smith, M. C.
1990-01-01
Early experiences building a software quality prediction model are discussed. The overall research objective is to establish a capability to project a software system's quality from an analysis of its design. The technical approach is to build multivariate models for estimating reliability and maintainability. Data from 21 Ada subsystems were analyzed to test hypotheses about various design structures leading to failure-prone or unmaintainable systems. Current design variables highlight the interconnectivity and visibility of compilation units. Other model variables provide for the effects of reusability and software changes. Reported results are preliminary because additional project data is being obtained and new hypotheses are being developed and tested. Current multivariate regression models are encouraging, explaining 60 to 80 percent of the variation in error density of the subsystems.
Comparisons of Faulting-Based Pavement Performance Prediction Models
Directory of Open Access Journals (Sweden)
Weina Wang
2017-01-01
Full Text Available Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with the problem of low prediction accuracy, which causes costly maintenance. Although many researchers have developed performance prediction models, the accuracy of prediction has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Then three models, including the multivariate nonlinear regression (MNLR) model, the artificial neural network (ANN) model, and the Markov chain (MC) model, are tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and not explicitly related to quantitative physical parameters. This paper then suggests that the further direction for developing performance prediction models is to combine the advantages and disadvantages of different models to obtain better accuracy.
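The Markov chain model mentioned above can be sketched directly: discretize pavement condition into states, estimate a one-year transition matrix from inspections, and propagate the state distribution forward. The states and transition probabilities below are illustrative, not estimated from the paper's survey data:

```python
import numpy as np

# Markov-chain pavement deterioration sketch: condition states 1 (no
# faulting) to 4 (severe). P is an illustrative one-year transition matrix;
# rows sum to 1, and pavements never improve without repair (upper-triangular).
P = np.array([
    [0.85, 0.15, 0.00, 0.00],
    [0.00, 0.80, 0.20, 0.00],
    [0.00, 0.00, 0.75, 0.25],
    [0.00, 0.00, 0.00, 1.00],
])
state = np.array([1.0, 0.0, 0.0, 0.0])   # all sections start in state 1

for year in range(10):                   # propagate 10 years ahead
    state = state @ P

print(np.round(state, 3))                # distribution over states at year 10
```

This also makes the paper's criticism concrete: the states are ordinal inspection ratings, so the forecast is a probability over categories rather than a physical faulting depth.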
Climate Modeling and Causal Identification for Sea Ice Predictability
Energy Technology Data Exchange (ETDEWEB)
Hunke, Elizabeth Clare [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Urrego Blanco, Jorge Rolando [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Urban, Nathan Mark [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2018-02-12
This project aims to better understand causes of ongoing changes in the Arctic climate system, particularly as decreasing sea ice trends have been observed in recent decades and are expected to continue in the future. As part of the Sea Ice Prediction Network, a multi-agency effort to improve sea ice prediction products on seasonal-to-interannual time scales, our team is studying the sensitivity of sea ice to a collection of physical processes and feedback mechanisms in the coupled climate system. During 2017 we completed a set of climate model simulations using the fully coupled ACME-HiLAT model. The simulations consisted of experiments in which cloud, sea ice, and air-ocean turbulent exchange parameters previously identified as important for driving output uncertainty in climate models were perturbed to account for parameter uncertainty in simulated climate variables. We conducted a sensitivity study to these parameters, which built upon a previous study we made for standalone simulations (Urrego-Blanco et al., 2016, 2017). Using the results from the ensemble of coupled simulations, we are examining robust relationships between climate variables that emerge across the experiments. We are also using causal discovery techniques to identify interaction pathways among climate variables which can help identify physical mechanisms and provide guidance in predictability studies. This work further builds on and leverages the large ensemble of standalone sea ice simulations produced in our previous w14_seaice project.
Risk assessment and remedial policy evaluation using predictive modeling
International Nuclear Information System (INIS)
Linkov, L.; Schell, W.R.
1996-01-01
As a result of nuclear industry operation and accidents, large areas of natural ecosystems have been contaminated by radionuclides and toxic metals. Extensive societal pressure has been exerted to decrease the radiation dose to the population and to the environment. Thus, in making abatement and remediation policy decisions, not only economic costs but also human and environmental risk assessments are desired. This paper introduces a general framework for risk assessment and remedial policy evaluation using predictive modeling. Ecological risk assessment requires evaluation of the radionuclide distribution in ecosystems. The FORESTPATH model is used for predicting the radionuclide fate in forest compartments after deposition, as well as for evaluating the efficiency of remedial policies. The time of intervention and the radionuclide deposition profile were found to be crucial for remediation efficiency. Risk assessment conducted for a critical group of forest users in Belarus shows that consumption of forest products (berries and mushrooms) leads to about 0.004% risk of a fatal cancer annually. Cost-benefit analysis for forest cleanup suggests that complete removal of the organic layer is too expensive for application in Belarus and a better methodology is required. In conclusion, the FORESTPATH modeling framework could have wide applications in environmental remediation of radionuclides and toxic metals, as well as in dose reconstruction and risk assessment.
Developing and Validating a Predictive Model for Stroke Progression
Directory of Open Access Journals (Sweden)
L.E. Craig
2011-12-01
Full Text Available Background: Progression is believed to be a common and important complication in acute stroke, and has been associated with increased mortality and morbidity. Reliable identification of predictors of early neurological deterioration could potentially benefit routine clinical care. The aim of this study was to identify predictors of early stroke progression using two independent patient cohorts. Methods: Two patient cohorts were used for this study – the first cohort formed the training data set, which included consecutive patients admitted to an urban teaching hospital between 2000 and 2002, and the second cohort formed the test data set, which included patients admitted to the same hospital between 2003 and 2004. A standard definition of stroke progression was used. The first cohort (n = 863) was used to develop the model. Variables that were statistically significant (p 0.1) in turn. The second cohort (n = 216) was used to test the performance of the model. The performance of the predictive model was assessed in terms of both calibration and discrimination. Multiple imputation methods were used for dealing with the missing values. Results: Variables shown to be significant predictors of stroke progression were conscious level, history of coronary heart disease, presence of hyperosmolarity, CT lesion, living alone on admission, Oxfordshire Community Stroke Project classification, presence of pyrexia and smoking status. The model appears to have reasonable discriminative properties [the median receiver-operating characteristic curve value was 0.72 (range 0.72–0.73)] and to fit well with the observed data, which is indicated by the high goodness-of-fit p value [the median p value from the Hosmer-Lemeshow test was 0.90 (range 0.50–0.92)]. Conclusion: The predictive model developed in this study contains variables that can be easily collected in practice, therefore increasing its usability in clinical practice. Using this analysis approach, the…
Developing and Validating a Predictive Model for Stroke Progression
Craig, L.E.; Wu, O.; Gilmour, H.; Barber, M.; Langhorne, P.
2011-01-01
Background Progression is believed to be a common and important complication in acute stroke, and has been associated with increased mortality and morbidity. Reliable identification of predictors of early neurological deterioration could potentially benefit routine clinical care. The aim of this study was to identify predictors of early stroke progression using two independent patient cohorts. Methods Two patient cohorts were used for this study – the first cohort formed the training data set, which included consecutive patients admitted to an urban teaching hospital between 2000 and 2002, and the second cohort formed the test data set, which included patients admitted to the same hospital between 2003 and 2004. A standard definition of stroke progression was used. The first cohort (n = 863) was used to develop the model. Variables that were statistically significant (p 0.1) in turn. The second cohort (n = 216) was used to test the performance of the model. The performance of the predictive model was assessed in terms of both calibration and discrimination. Multiple imputation methods were used for dealing with the missing values. Results Variables shown to be significant predictors of stroke progression were conscious level, history of coronary heart disease, presence of hyperosmolarity, CT lesion, living alone on admission, Oxfordshire Community Stroke Project classification, presence of pyrexia and smoking status. The model appears to have reasonable discriminative properties [the median receiver-operating characteristic curve value was 0.72 (range 0.72–0.73)] and to fit well with the observed data, which is indicated by the high goodness-of-fit p value [the median p value from the Hosmer-Lemeshow test was 0.90 (range 0.50–0.92)]. Conclusion The predictive model developed in this study contains variables that can be easily collected in practice, therefore increasing its usability in clinical practice. Using this analysis approach, the discrimination and…
Regional differences in prediction models of lung function in Germany
Directory of Open Access Journals (Sweden)
Schäper Christoph
2010-04-01
Full Text Available Abstract Background Little is known about the influencing potential of specific characteristics on lung function in different populations. The aim of this analysis was to determine whether lung function determinants differ between subpopulations within Germany and whether prediction equations developed for one subpopulation are also adequate for another subpopulation. Methods Within three studies (KORA C, SHIP-I, ECRHS-I) in different areas of Germany, 4059 adults performed lung function tests. The available data consisted of forced expiratory volume in one second (FEV1), forced vital capacity (FVC) and peak expiratory flow rate (PEF). For each study, multivariate regression models were developed to predict lung function, and Bland-Altman plots were established to evaluate the agreement between predicted and measured values. Results The final regression equations for FEV1 and FVC showed adjusted r-squared values between 0.65 and 0.75, and for PEF they were between 0.46 and 0.61. In all studies gender, age, height and pack-years were significant determinants, each with a similar effect size. Regarding other predictors there were some, although not statistically significant, differences between the studies. Bland-Altman plots indicated that the regression models for each individual study adequately predict medium (i.e. normal but not extremely high or low) lung function values in the whole study population. Conclusions Simple models with gender, age and height explain a substantial part of lung function variance, whereas further determinants add less than 5% to the total explained r-squared, at least for FEV1 and FVC. Thus, for different adult subpopulations of Germany, one simple model for each lung function measure is still sufficient.
Predictive modelling of gene expression from transcriptional regulatory elements.
Budden, David M; Hurley, Daniel G; Crampin, Edmund J
2015-07-01
Predictive modelling of gene expression provides a powerful framework for exploring the regulatory logic underpinning transcriptional regulation. Recent studies have demonstrated the utility of such models in identifying dysregulation of gene and miRNA expression associated with abnormal patterns of transcription factor (TF) binding or nucleosomal histone modifications (HMs). Despite the growing popularity of such approaches, a comparative review of the various modelling algorithms and feature extraction methods is lacking. We define and compare three methods of quantifying pairwise gene-TF/HM interactions and discuss their suitability for integrating the heterogeneous chromatin immunoprecipitation (ChIP)-seq binding patterns exhibited by TFs and HMs. We then construct log-linear and ϵ-support vector regression models from various mouse embryonic stem cell (mESC) and human lymphoblastoid (GM12878) data sets, considering both ChIP-seq- and position weight matrix (PWM)-derived in silico TF-binding. The two algorithms are evaluated both in terms of their modelling prediction accuracy and ability to identify the established regulatory roles of individual TFs and HMs. Our results demonstrate that TF-binding and HMs are highly predictive of gene expression as measured by mRNA transcript abundance, irrespective of algorithm or cell type selection and considering both ChIP-seq and PWM-derived TF-binding. As we encourage other researchers to explore and develop these results, our framework is implemented using open-source software and made available as a preconfigured bootable virtual environment.
Hybrid ATDL-gamma distribution model for predicting area source acid gas concentrations
Energy Technology Data Exchange (ETDEWEB)
Jakeman, A J; Taylor, J A
1985-01-01
An air quality model is developed to predict the distribution of concentrations of acid gas in an urban airshed. The model is hybrid in character, combining reliable features of a deterministic ATDL-based model with statistical distributional approaches. The gamma distribution was identified from a range of distributional models as the best model. The paper shows that the assumptions of a previous hybrid model may be relaxed, and presents a methodology for characterizing the uncertainty associated with model predictions. Results are demonstrated for the 98th-percentile predictions of 24-h average data over annual periods at six monitoring sites. This percentile relates to the World Health Organization goal for acid gas concentrations.
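The distributional step, fitting a gamma distribution to 24-h average concentrations and reading off the 98th percentile, can be sketched with SciPy. The data below are synthetic stand-ins (not the study's monitoring data), and the units are assumed:

```python
import numpy as np
from scipy import stats

# Fit a gamma distribution to a year of (synthetic) 24-h average acid-gas
# concentrations and extract the 98th percentile, the statistic tied to the
# WHO goal discussed in the paper. Units assumed to be ppb.
rng = np.random.default_rng(3)
daily_avg = rng.gamma(shape=2.0, scale=15.0, size=365)

# Fit with the location fixed at zero, since concentrations are non-negative.
shape, loc, scale = stats.gamma.fit(daily_avg, floc=0)
p98 = float(stats.gamma.ppf(0.98, shape, loc=loc, scale=scale))
p98_empirical = float(np.quantile(daily_avg, 0.98))
print(round(p98, 1), round(p98_empirical, 1))
```

Fitting a distribution and taking its percentile, rather than using the empirical percentile directly, is what lets the hybrid model extrapolate to percentiles beyond the range well-sampled by one year of data.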
Finiteness results for Abelian tree models
Draisma, J.; Eggermont, R.H.
2015-01-01
Equivariant tree models are statistical models used in the reconstruction of phylogenetic trees from genetic data. Here equivariant refers to a symmetry group imposed on the root distribution and on the transition matrices in the model. We prove that if that symmetry group is Abelian, then the
Finiteness results for Abelian tree models
Draisma, J.; Eggermont, R.H.
2012-01-01
Equivariant tree models are statistical models used in the reconstruction of phylogenetic trees from genetic data. Here equivariant refers to a symmetry group imposed on the root distribution and on the transition matrices in the model. We prove that if that symmetry group is Abelian, then the
Modeling of Complex Life Cycle Prediction Based on Cell Division
Directory of Open Access Journals (Sweden)
Fucheng Zhang
2017-01-01
Full Text Available Effective fault diagnosis and reasonable life-expectancy estimation are of great significance and practical engineering value for the safety, reliability, and maintenance cost of equipment and its working environment. Current equipment life prediction methods include life prediction based on condition monitoring, combined forecasting models, and data-driven approaches; most of them require a large amount of data. To address this issue, we propose learning from the mechanism of cell division in organisms. By studying a complex, multifactor correlated life model, we established a life prediction model of moderate complexity. In this paper, we model life prediction on cell division. Experiments show that our model can effectively simulate the state of cell division. Using this model as a reference, we apply it to complex equipment life prediction.
Risk prediction model: Statistical and artificial neural network approach
Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim
2017-04-01
Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand suitable approaches and the development and validation process of such models. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was carried out. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, artificial neural network approaches to developing prediction models were more accurate than statistical approaches. However, only limited published literature currently discusses which approach is more accurate for risk prediction model development.
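A minimal sketch of the statistical approach the review contrasts with neural networks: a logistic risk model fitted on a development set and validated on a held-out set using discrimination (AUC). All data, risk factors, and coefficients here are simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
# Hypothetical standardized risk factors and a binary outcome.
X = rng.normal(size=(n, 3))
logit = 0.8 * X[:, 0] - 1.2 * X[:, 1] + 0.5
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Split into development and validation sets, as the review recommends.
X_dev, X_val, y_dev, y_val = X[:700], X[700:], y[:700], y[700:]

# Fit a logistic model by gradient descent (the "statistical approach").
A = np.hstack([X_dev, np.ones((700, 1))])
w = np.zeros(4)
for _ in range(2000):
    p = 1 / (1 + np.exp(-A @ w))
    w -= 0.1 * A.T @ (p - y_dev) / 700

# Validate by discrimination: fraction of concordant case/control pairs (AUC).
A_val = np.hstack([X_val, np.ones((300, 1))])
p_val = 1 / (1 + np.exp(-A_val @ w))
cases, controls = p_val[y_val == 1], p_val[y_val == 0]
auc = (cases[:, None] > controls[None, :]).mean()
```

Calibration (e.g. comparing predicted and observed risk by decile) would normally accompany discrimination in a full validation.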
Precise predictions within the two-Higgs-doublet model
International Nuclear Information System (INIS)
Altenkamp, Lukas
2017-01-01
We consider the Two-Higgs-Doublet Model (THDM), in which the Standard Model (SM) field content is extended by a further Higgs-boson doublet. This results in five Higgs bosons: two CP-even, one CP-odd, and a charged Higgs boson together with its antiparticle. In order to provide accurate and reliable predictions within this model, next-to-leading-order calculations are necessary. To this end, we perform a renormalization procedure and adopt four new renormalization schemes. The counterterm Feynman rules as well as the renormalization conditions are implemented into a FeynArts model file, making it possible to generate amplitudes and squared matrix elements for arbitrary processes, which is a major contribution to the automation of higher-order calculations. As an application, we investigate the decay of a light, CP-even, SM-like Higgs boson into four fermions in the THDM. To this end, we extend the program Prophecy4f and compute the partial decay widths for different benchmark scenarios. For all investigated scenarios, we observe that the THDM widths are bounded by the SM widths and that the deviations are larger at higher order. The renormalization group equations have been solved in order to investigate the renormalization scale dependence, which gives an estimate of the theoretical uncertainty arising from the truncation of the perturbation series. By comparing the results of different renormalization schemes, we determine for which parameter regions each scheme provides reliable predictions.
Precise predictions within the two-Higgs-doublet model
Energy Technology Data Exchange (ETDEWEB)
Altenkamp, Lukas
2017-02-21
We consider the Two-Higgs-Doublet Model (THDM), in which the Standard Model (SM) field content is extended by a further Higgs-boson doublet. This results in five Higgs bosons: two CP-even, one CP-odd, and a charged Higgs boson together with its antiparticle. In order to provide accurate and reliable predictions within this model, next-to-leading-order calculations are necessary. To this end, we perform a renormalization procedure and adopt four new renormalization schemes. The counterterm Feynman rules as well as the renormalization conditions are implemented into a FeynArts model file, making it possible to generate amplitudes and squared matrix elements for arbitrary processes, which is a major contribution to the automation of higher-order calculations. As an application, we investigate the decay of a light, CP-even, SM-like Higgs boson into four fermions in the THDM. To this end, we extend the program Prophecy4f and compute the partial decay widths for different benchmark scenarios. For all investigated scenarios, we observe that the THDM widths are bounded by the SM widths and that the deviations are larger at higher order. The renormalization group equations have been solved in order to investigate the renormalization scale dependence, which gives an estimate of the theoretical uncertainty arising from the truncation of the perturbation series. By comparing the results of different renormalization schemes, we determine for which parameter regions each scheme provides reliable predictions.
Predictive modeling and reducing cyclic variability in autoignition engines
Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob
2016-08-30
Methods and systems are provided for controlling a vehicle engine to reduce cycle-to-cycle combustion variation. A predictive model is applied to predict cycle-to-cycle combustion behavior of an engine based on observed engine performance variables. Conditions are identified, based on the predicted cycle-to-cycle combustion behavior, that indicate high cycle-to-cycle combustion variation. Corrective measures are then applied to prevent the predicted high cycle-to-cycle combustion variation.
Cloud-based Predictive Modeling System and its Application to Asthma Readmission Prediction
Chen, Robert; Su, Hang; Khalilia, Mohammed; Lin, Sizhe; Peng, Yue; Davis, Tod; Hirsh, Daniel A; Searles, Elizabeth; Tejedor-Sojo, Javier; Thompson, Michael; Sun, Jimeng
2015-01-01
The predictive modeling process is time consuming and requires clinical researchers to handle complex electronic health record (EHR) data in restricted computational environments. To address this problem, we implemented a cloud-based predictive modeling system via a hybrid setup combining a secure private server with the Amazon Web Services (AWS) Elastic MapReduce platform. EHR data are preprocessed on a private server and the resulting de-identified event sequences are hosted on AWS. Based on user-specified modeling configurations, an on-demand web service launches a cluster of Elastic Compute 2 (EC2) instances on AWS to perform feature selection and classification algorithms in a distributed fashion. Afterwards, the secure private server aggregates results and displays them via interactive visualization. We tested the system on a pediatric asthma readmission task with a de-identified EHR dataset of 2,967 patients. We also conducted a larger-scale experiment on the CMS Linkable 2008–2010 Medicare Data Entrepreneurs' Synthetic Public Use File dataset of 2 million patients, which achieved over 25-fold speedup compared to sequential execution. PMID:26958172
Predicting Madura cattle growth curve using non-linear model
Widyas, N.; Prastowo, S.; Widi, T. S. M.; Baliarti, E.
2018-03-01
Madura cattle are an Indonesian native breed, a composite that has undergone hundreds of years of selection and domestication to reach its present remarkable uniformity. Crossbreeding has reached the isle of Madura, and the Madrasin, a cross between Madura cows and Limousin semen, has emerged. This paper aimed to compare the growth curves of the Madrasin and one type of pure Madura cow, the common Madura cattle (Madura), using non-linear models. Madura cattle are kept traditionally, so reliable records are hardly available. Data were collected from smallholder farmers in Madura. Cows from different age classes (5 years) were observed, and body measurements (chest girth, body length and wither height) were taken. In total, 63 Madura and 120 Madrasin records were obtained. A linear model was built with cattle sub-population and age as explanatory variables. Body weights were estimated based on chest girth. Growth curves were built using logistic regression. Results showed that, within the same age, Madrasin had a significantly larger body than Madura (p < 0.05). The logistic models fit the Madura and Madrasin cattle data well; the estimated MSE for these models was 39.09 and 759.28, with prediction accuracies of 99 and 92% for Madura and Madrasin, respectively. Prediction of the growth curve using the logistic model performed well in both types of Madura cattle. However, efforts to collect accurate data on Madura cattle are necessary to better characterize and study these cattle.
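A sketch of fitting a logistic growth curve to body weights estimated by age, as in the comparison above. The weights and ages below are hypothetical illustrative values, not the Madura/Madrasin records, and the curve is fitted with `scipy.optimize.curve_fit`.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, A, k, t0):
    """Logistic growth: asymptotic weight A, growth rate k, inflection age t0."""
    return A / (1 + np.exp(-k * (t - t0)))

# Hypothetical body weights (kg) by age (years), in the spirit of the
# chest-girth-estimated weights used in the study.
age = np.array([0.5, 1, 2, 3, 4, 5, 6, 7])
weight = np.array([66, 92, 153, 209, 246, 265, 274, 278])

params, _ = curve_fit(logistic, age, weight, p0=[300, 1.0, 2.0])
A, k, t0 = params
mse = np.mean((logistic(age, *params) - weight) ** 2)
```

Comparing the fitted asymptote A and rate k between sub-populations is one way such growth curves are contrasted; the study reports MSE as its fit criterion.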
Mathematical modeling and computational prediction of cancer drug resistance.
Sun, Xiaoqiang; Hu, Bin
2017-06-23
Diverse forms of resistance to anticancer drugs can lead to the failure of chemotherapy. Drug resistance is one of the most intractable issues for successfully treating cancer in current clinical practice. Effective clinical approaches that could counter drug resistance by restoring the sensitivity of tumors to the targeted agents are urgently needed. As numerous experimental results on resistance mechanisms have been obtained and a mass of high-throughput data has been accumulated, mathematical modeling and computational predictions using systematic and quantitative approaches have become increasingly important, as they can potentially provide deeper insights into resistance mechanisms, generate novel hypotheses or suggest promising treatment strategies for future testing. In this review, we first briefly summarize the current progress of experimentally revealed resistance mechanisms of targeted therapy, including genetic mechanisms, epigenetic mechanisms, posttranslational mechanisms, cellular mechanisms, microenvironmental mechanisms and pharmacokinetic mechanisms. Subsequently, we list several currently available databases and Web-based tools related to drug sensitivity and resistance. Then, we focus primarily on introducing some state-of-the-art computational methods used in drug resistance studies, including mechanism-based mathematical modeling approaches (e.g. molecular dynamics simulation, kinetic model of molecular networks, ordinary differential equation model of cellular dynamics, stochastic model, partial differential equation model, agent-based model, pharmacokinetic-pharmacodynamic model, etc.) and data-driven prediction methods (e.g. omics data-based conventional screening approach for node biomarkers, static network approach for edge biomarkers and module biomarkers, dynamic network approach for dynamic network biomarkers and dynamic module network biomarkers, etc.). Finally, we discuss several further questions and future directions for the use of
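Among the mechanism-based approaches listed above is the ordinary differential equation model of cellular dynamics. The following is a toy two-compartment ODE sketch of resistance emergence under targeted therapy; the structure (logistic growth, drug kill of sensitive cells, mutation flux) and all parameter values are illustrative assumptions, not a model from the review.

```python
from scipy.integrate import solve_ivp

# Sensitive cells S are killed by the drug, resistant cells R are not;
# both compete for a shared carrying capacity K, and S mutates to R at rate mut.
def tumor(t, y, r=0.3, K=1e9, drug_kill=0.5, mut=1e-6):
    S, R = y
    growth = r * (1 - (S + R) / K)
    dS = S * (growth - drug_kill) - mut * S
    dR = R * growth + mut * S
    return [dS, dR]

# Start from a mostly sensitive tumor with a tiny resistant subclone.
sol = solve_ivp(tumor, (0, 200), [1e8, 1e2], rtol=1e-8)
S_end, R_end = sol.y[:, -1]
```

Under these assumptions the sensitive population collapses while the resistant clone regrows the tumor, reproducing the relapse dynamics that motivate such models.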
Comparative Evaluation of Some Crop Yield Prediction Models ...
African Journals Online (AJOL)
A computer program was adopted from the work of Hill et al. (1982) to calibrate and test three of the existing yield prediction models using tropical cowpea yield–weather data. The models tested were the Hanks model (first and second versions), the Stewart model (first and second versions) and the Hall–Butcher model. Three sets of ...
Ocean wave prediction using numerical and neural network models
Digital Repository Service at National Institute of Oceanography (India)
Mandal, S.; Prabaharan, N.
This paper presents an overview of the development of the numerical wave prediction models and recently used neural networks for ocean wave hindcasting and forecasting. The numerical wave models express the physical concepts of the phenomena...
Immersive visualization of dynamic CFD model results
International Nuclear Information System (INIS)
Comparato, J.R.; Ringel, K.L.; Heath, D.J.
2004-01-01
With immersive visualization the engineer has the means for vividly understanding problem causes and discovering opportunities to improve design. Software can generate an interactive world in which collaborators experience the results of complex mathematical simulations such as computational fluid dynamic (CFD) modeling. Such software, while providing unique benefits over traditional visualization techniques, presents special development challenges. The visualization of large quantities of data interactively requires both significant computational power and shrewd data management. On the computational front, commodity hardware is outperforming large workstations in graphical quality and frame rates. Also, 64-bit commodity computing shows promise in enabling interactive visualization of large datasets. Initial interactive transient visualization methods and examples are presented, as well as development trends in commodity hardware and clustering. Interactive, immersive visualization relies on relevant data being stored in active memory for fast response to user requests. For large or transient datasets, data management becomes a key issue. Techniques for dynamic data loading and data reduction are presented as means to increase visualization performance. (author)
Minetti, A E; Ardigò, L P; Susta, D; Cotelli, F
1998-12-01
The use of muscles as power dissipators is investigated in this study, from both the modelling and the experimental points of view. Theoretical predictions of the drop-landing manoeuvre for a range of initial conditions were obtained by accounting for the mechanical characteristics of the knee extensor muscles and the limb geometry, and by assuming maximal neural activation. The resulting dynamics were represented in the phase plane (vertical displacement versus speed) to better classify the damping performance. Predictions of safe landing in sedentary subjects were associated with dropping from a maximum (feet) height of 1.6-2.0 m (about 11 m on the moon). Athletes can extend this up to 2.6-3.0 m, while for obese males (m = 100 kg, standard stature) the limit reduces to 0.9-1.3 m. These results were calculated by including in the model the estimated stiffness of the 'global elastic elements' acting below the squat position. Experimental landings from heights of 0.4, 0.7 and 1.1 m (sedentary males (SM) and male (AM) and female (AF) athletes from the alpine ski national team) showed dynamics similar to the model predictions. While the peak power (for a drop height of about 0.7 m) was similar in SM and AF (AM showed a +40% increase, about 33 W/kg), AF stopped the downward movement within a time interval from touch-down (0.219 +/- 0.030 s) that was significantly (20%) shorter than in SM. The landing strategy and the effect of anatomical constraints are discussed in the paper.
Model for predicting the injury severity score.
Hagiwara, Shuichi; Oshima, Kiyohiro; Murata, Masato; Kaneko, Minoru; Aoki, Makoto; Kanbe, Masahiko; Nakamura, Takuro; Ohyama, Yoshio; Tamura, Jun'ichi
2015-07-01
To determine a formula that predicts the injury severity score from parameters obtained in the emergency department on arrival, we reviewed the medical records of trauma patients transferred to the emergency department of Gunma University Hospital between January 2010 and December 2010. The injury severity score, age, mean blood pressure, heart rate, Glasgow coma scale, hemoglobin, hematocrit, red blood cell count, platelet count, fibrinogen, international normalized ratio of prothrombin time, activated partial thromboplastin time, and fibrin degradation products were examined in those patients on arrival. To determine the formula predicting the injury severity score, multiple linear regression analysis was carried out, with the injury severity score as the dependent variable and the other parameters as candidate explanatory variables. IBM SPSS Statistics 20 was used for the statistical analysis. Statistical significance was set at P < 0.05; the Durbin–Watson ratio was 2.200. A formula for predicting the injury severity score in trauma patients was developed from readily available parameters such as fibrin degradation products and mean blood pressure. This formula is useful because the injury severity score can be predicted easily in the emergency department.
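A sketch of the multiple linear regression step: the injury severity score regressed on admission parameters. The data below are simulated, and the two predictors and their coefficients are assumptions chosen to echo the paper's strongest reported predictors (fibrin degradation products and mean blood pressure), not its actual formula.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 120
# Hypothetical admission parameters.
fdp = rng.gamma(2.0, 20.0, n)          # fibrin degradation products, ug/mL
mbp = rng.normal(85, 15, n)            # mean blood pressure, mmHg
# Simulated ISS with an assumed positive FDP effect and negative MBP effect.
iss = 5 + 0.15 * fdp - 0.08 * (mbp - 85) + rng.normal(0, 3, n)

# Multiple linear regression with ISS as the dependent variable.
A = np.column_stack([fdp, mbp, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, iss, rcond=None)
pred = A @ coef
```

The fitted `coef` plays the role of the prediction formula; in the paper, residual autocorrelation was additionally checked via the Durbin–Watson ratio.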
Predicting Career Advancement with Structural Equation Modelling
Heimler, Ronald; Rosenberg, Stuart; Morote, Elsa-Sofia
2012-01-01
Purpose: The purpose of this paper is to use the authors' prior findings concerning basic employability skills in order to determine which skills best predict career advancement potential. Design/methodology/approach: Utilizing survey responses of human resource managers, the employability skills showing the largest relationships to career…
Kennedy, Curtis E; Turley, James P
2011-10-24
) training models for various data subsets; and 10) measuring model performance characteristics in unseen data to estimate their external validity. We have proposed a ten step process that results in data sets that contain time series features and are suitable for predictive modeling by a number of methods. We illustrated the process through an example of cardiac arrest prediction in a pediatric intensive care setting.
Model-predictive control based on Takagi-Sugeno fuzzy model for electrical vehicles delayed model
DEFF Research Database (Denmark)
Khooban, Mohammad-Hassan; Vafamand, Navid; Niknam, Taher
2017-01-01
Electric vehicles (EVs) play a significant role in different applications, such as commuter vehicles and short-distance transport applications. This study presents a new structure of model-predictive control based on the Takagi-Sugeno fuzzy model, linear matrix inequalities, and a non-quadratic Lyapunov function for the speed control of EVs, including time-delay states and parameter uncertainty. Experimental data, using the Federal Test Procedure (FTP-75), are applied to test the performance and robustness of the suggested controller in the presence of time-varying parameters. Besides, a comparison is made between the results of the suggested robust strategy and those obtained from some of the most recent studies on the same topic, to assess the efficiency of the suggested controller. Finally, the experimental results based on a TMS320F28335 DSP are performed on a direct current motor. Simulation...
Statistical model based gender prediction for targeted NGS clinical panels
Directory of Open Access Journals (Sweden)
Palani Kannan Kandavel
2017-12-01
The reference test dataset is used to test the model. The sensitivity of gender prediction has been increased over the current "genotype composition in ChrX"-based approach. In addition, the prediction score given by the model can be used to evaluate the quality of a clinical dataset: a higher prediction score towards the respective gender indicates higher quality of the sequenced data.
On Practical tuning of Model Uncertainty in Wind Turbine Model Predictive Control
DEFF Research Database (Denmark)
Odgaard, Peter Fogh; Hovgaard, Tobias
2015-01-01
Model predictive control (MPC) has in previous works been applied to wind turbines with promising results. These results apply linear MPC, i.e., linear models linearized at different operational points depending on the wind speed. The linearized models are derived from a nonlinear first-principles model of a wind turbine. In this paper, we investigate the impact of this approach on the performance of a wind turbine. In particular, we focus on the most non-linear operational ranges of a wind turbine. The MPC controller is designed for, tested, and evaluated at an industrial high-fidelity wind...
A predictive pilot model for STOL aircraft landing
Kleinman, D. L.; Killingsworth, W. R.
1974-01-01
An optimal control approach has been used to model pilot performance during STOL flare and landing. The model is used to predict pilot landing performance for three STOL configurations, each having a different level of automatic control augmentation. Model predictions are compared with flight simulator data. It is concluded that the model can be an effective design tool for studying analytically the effects of display modifications, different stability augmentation systems, and proposed changes in the landing-area geometry.
Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?
Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander
2016-01-01
Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
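A toy numerical illustration of the distinction between the two criteria, under stated assumptions: a one-parameter linear "crop model", with parameter uncertainty represented by a normal distribution. This is a Monte Carlo sketch, not the authors' random-effects ANOVA estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy "crop model": yield responds linearly to rainfall with uncertain slope b.
rain = rng.uniform(200, 800, 50)
obs = 0.01 * rain + rng.normal(0, 0.5, 50)   # true slope 0.01 plus observation noise

b_hat = 0.012                                 # calibrated (fixed) parameter value

# MSEP_fixed: squared prediction error of the single fixed-parameter model.
pred_fixed = b_hat * rain
msep_fixed = np.mean((obs - pred_fixed) ** 2)

# MSEP_uncertain(X): error averaged over the parameter distribution b ~ N(b_hat, sd).
b_draws = rng.normal(b_hat, 0.002, 1000)
preds = np.outer(b_draws, rain)               # one prediction set per parameter draw
msep_uncertain = np.mean((obs[None, :] - preds) ** 2)

# The model variance term that separates the two criteria.
model_var = preds.var(axis=0).mean()
```

Because the parameter draws are centered on the calibrated value, MSEP_uncertain(X) exceeds MSEP_fixed by roughly the model variance term, illustrating why the former is the more complete uncertainty measure.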
Model-based uncertainty in species range prediction
DEFF Research Database (Denmark)
Pearson, R. G.; Thuiller, Wilfried; Bastos Araujo, Miguel
2006-01-01
Aim Many attempts to predict the potential range of species rely on environmental niche (or 'bioclimate envelope') modelling, yet the effects of using different niche-based methodologies require further investigation. Here we investigate the impact that the choice of model can have on predictions...
Wind turbine control and model predictive control for uncertain systems
DEFF Research Database (Denmark)
Thomsen, Sven Creutz
as disturbance models for controller design. The theoretical study deals with Model Predictive Control (MPC). MPC is an optimal control method which is characterized by the use of a receding prediction horizon. MPC has risen in popularity due to its inherent ability to systematically account for time...
Testing and analysis of internal hardwood log defect prediction models
R. Edward Thomas
2011-01-01
The severity and location of internal defects determine the quality and value of lumber sawn from hardwood logs. Models have been developed to predict the size and position of internal defects based on external defect indicator measurements. These models were shown to predict approximately 80% of all internal knots based on external knot indicators. However, the size...
Models for predicting fuel consumption in sagebrush-dominated ecosystems
Clinton S. Wright
2013-01-01
Fuel consumption predictions are necessary to accurately estimate or model fire effects, including pollutant emissions during wildland fires. Fuel and environmental measurements on a series of operational prescribed fires were used to develop empirical models for predicting fuel consumption in big sagebrush (Artemisia tridentata Nutt.) ecosystems....
Refining the Committee Approach and Uncertainty Prediction in Hydrological Modelling
Kayastha, N.
2014-01-01
Due to the complexity of hydrological systems, a single model may be unable to capture the full range of a catchment response and accurately predict the streamflows. The multi-modelling approach opens up possibilities for handling such difficulties and allows improving the predictive capability of
A new, accurate predictive model for incident hypertension
DEFF Research Database (Denmark)
Völzke, Henry; Fung, Glenn; Ittermann, Till
2013-01-01
Data mining represents an alternative approach to identifying new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures.
Prediction models for successful external cephalic version: a systematic review
Velzel, Joost; de Hundt, Marcella; Mulder, Frederique M.; Molkenboer, Jan F. M.; van der Post, Joris A. M.; Mol, Ben W.; Kok, Marjolein
2015-01-01
To provide an overview of existing prediction models for successful ECV, and to assess their quality, development and performance. We searched MEDLINE, EMBASE and the Cochrane Library to identify all articles reporting on prediction models for successful ECV published from inception to January 2015.
Hidden Markov Model for quantitative prediction of snowfall
Indian Academy of Sciences (India)
A Hidden Markov Model (HMM) has been developed for prediction of quantitative snowfall in Pir-Panjal and Great Himalayan mountain ranges of Indian Himalaya. The model predicts snowfall for two days in advance using daily recorded nine meteorological variables of past 20 winters from 1992–2012. There are six ...
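A minimal sketch of the forward recursion at the heart of such an HMM, with hypothetical two-state parameters and a binary snow/no-snow observation sequence. The actual model is larger and is trained on nine daily meteorological variables from 20 winters; everything below is an illustrative assumption.

```python
import numpy as np

# Hypothetical two-state HMM (states: "dry spell", "snow spell").
pi = np.array([0.7, 0.3])          # initial state probabilities
A = np.array([[0.8, 0.2],          # state transition matrix
              [0.3, 0.7]])
B = np.array([[0.9, 0.1],          # emission probabilities P(obs | state)
              [0.2, 0.8]])         # obs 0 = no snowfall, 1 = snowfall

obs = [0, 0, 1, 1, 1]

# Forward algorithm: joint probability of the observations and each state.
alpha = pi * B[:, obs[0]]
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]
seq_prob = alpha.sum()

# One-step-ahead prediction: probability of snowfall on the next day.
state_filt = alpha / alpha.sum()
p_snow_next = (state_filt @ A) @ B[:, 1]
```

After three consecutive snow days, the filtered state mass sits mostly on the "snow spell" state, so the one-step-ahead snowfall probability comes out above one half; a quantitative-snowfall model would attach precipitation amounts to the states rather than a binary emission.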
Mathematical model for dissolved oxygen prediction in Cirata ...
African Journals Online (AJOL)
This paper presents the implementation and performance of a mathematical model to predict the concentration of dissolved oxygen in the Cirata Reservoir, West Java, using an Artificial Neural Network (ANN). The simulation program was created using Visual Studio 2012 C# software with the ANN model implemented in it. Prediction ...
Linkage of PRA models. Phase 1, Results
Energy Technology Data Exchange (ETDEWEB)
Smith, C.L.; Knudsen, J.K.; Kelly, D.L.
1995-12-01
The goal of the Phase 1 work of the "Linkage of PRA Models" project was to postulate methods of providing guidance for US Nuclear Regulatory Commission (NRC) personnel on the selection and usage of probabilistic risk assessment (PRA) models that are best suited to the analysis they are performing. In particular, methods and associated features are provided for (a) the selection of an appropriate PRA model for a particular analysis, (b) complementary evaluation tools for the analysis, and (c) a PRA model cross-referencing method. As part of this work, three areas adjoining the "linking" of analyses to PRA models were investigated: (a) the PRA models that are currently available, (b) the various types of analyses that are performed within the NRC, and (c) the difficulty of trying to provide a "generic" classification scheme to group plants based upon a particular plant attribute.
Linkage of PRA models. Phase 1, Results
International Nuclear Information System (INIS)
Smith, C.L.; Knudsen, J.K.; Kelly, D.L.
1995-12-01
The goal of the Phase 1 work of the "Linkage of PRA Models" project was to postulate methods of providing guidance for US Nuclear Regulatory Commission (NRC) personnel on the selection and usage of probabilistic risk assessment (PRA) models that are best suited to the analysis they are performing. In particular, methods and associated features are provided for (a) the selection of an appropriate PRA model for a particular analysis, (b) complementary evaluation tools for the analysis, and (c) a PRA model cross-referencing method. As part of this work, three areas adjoining the "linking" of analyses to PRA models were investigated: (a) the PRA models that are currently available, (b) the various types of analyses that are performed within the NRC, and (c) the difficulty of trying to provide a "generic" classification scheme to group plants based upon a particular plant attribute.
Nonlinear dynamical modeling and prediction of the terrestrial magnetospheric activity
International Nuclear Information System (INIS)
Vassiliadis, D.
1992-01-01
The irregular activity of the magnetosphere results from its complex internal dynamics as well as the external influence of the solar wind. The dominating self-organization of the magnetospheric plasma gives rise to repetitive, large-scale coherent behavior manifested in phenomena such as the magnetic substorm. Based on the nonlinearity of the global dynamics this dissertation examines the magnetosphere as a nonlinear dynamical system using time series analysis techniques. Initially the magnetospheric activity is modeled in terms of an autonomous system. A dimension study shows that its observed time series is self-similar, but the correlation dimension is high. The implication of a large number of degrees of freedom is confirmed by other state space techniques such as Poincare sections and search for unstable periodic orbits. At the same time a stability study of the time series in terms of Lyapunov exponents suggests that the series is not chaotic. The absence of deterministic chaos is supported by the low predictive capability of the autonomous model. Rather than chaos, it is an external input which is largely responsible for the irregularity of the magnetospheric activity. In fact, the external driving is so strong that the above state space techniques give results for magnetospheric and solar wind time series that are at least qualitatively similar. Therefore the solar wind input has to be included in a low-dimensional nonautonomous model. Indeed it is shown that such a model can reproduce the observed magnetospheric behavior up to 80-90 percent. The characteristic coefficients of the model show little variation depending on the external disturbance. The impulse response is consistent with earlier results of linear prediction filters. The model can be easily extended to contain nonlinear features of the magnetospheric activity and in particular the loading-unloading behavior of substorms
CIEMAT model results for Esthwaite Water
International Nuclear Information System (INIS)
Aguero, A.; Garcia-Olivares, A.
2000-01-01
This study used the transfer model PRYMA-LO, developed by CIEMAT-IMA, Madrid, Spain, to simulate the transfer of Cs-137 in watershed scenarios. The main processes considered by the model include: transfer of the fallout to the ground, incorporation of the fallout radioisotopes into the water flow, and their removal from the system. The model was tested against observation data obtained in water and sediments of Esthwaite Water, Lake District, UK. This comparison made it possible to calibrate the parameters of the model to the specific scenario
Preliminary results of steel containment vessel model test
International Nuclear Information System (INIS)
Matsumoto, T.; Komine, K.; Arai, S.
1997-01-01
A high-pressure test of a mixed-scale model (1:10 in geometry and 1:4 in shell thickness) of a steel containment vessel (SCV), representing an improved boiling water reactor (BWR) Mark II containment, was conducted on December 11-12, 1996 at Sandia National Laboratories. This paper describes the preliminary results of the high-pressure test. In addition, the preliminary post-test measurement data and a preliminary comparison of test data with pretest analysis predictions are presented.
Qualitative and quantitative guidelines for the comparison of environmental model predictions
International Nuclear Information System (INIS)
Scott, M.
1995-03-01
The question of how to assess or compare predictions from a number of models is of concern in the validation of models, in understanding the effects of different models and model parameterizations on model output, and ultimately in assessing model reliability. Comparison of model predictions with observed data is the basic tool of model validation, while comparison of predictions amongst different models provides one measure of model credibility. The guidance given here is intended to provide qualitative and quantitative approaches (including graphical and statistical techniques) to such comparisons for use within the BIOMOVS II project; it is hoped that others may find it useful. It contains little technical detail on the actual methods, but several references are provided for the interested reader. The guidelines are illustrated on data from the VAMP CB scenario. Unfortunately, these data do not permit all of the possible approaches to be demonstrated, since predicted uncertainties were not provided. The questions considered concern (a) intercomparison of model predictions and (b) comparison of model predictions with the observed data. A series of examples illustrating some of the different types of data structure and some possible analyses has been constructed. A bibliography of references on model validation is provided. It is important to note that the results of the various techniques discussed here, whether qualitative or quantitative, should not be considered in isolation. Overall model performance must also include an evaluation of model structure and formulation, i.e. conceptual model uncertainties, and results for performance measures must be interpreted in this context. Consider a number of models which are used to provide predictions of a number of quantities at a number of time points. In the case of the VAMP CB scenario, the results include predictions of total deposition of Cs-137 and time-dependent concentrations in various
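Quantitative model-observation comparisons of the kind discussed typically rest on a few summary statistics. A small illustrative sketch (metric names are generic choices, not the specific BIOMOVS II performance measures):

```python
import numpy as np

def comparison_stats(predicted, observed):
    """Common measures for comparing model predictions with observations:
    additive bias, root-mean-square error, and the geometric mean of the
    predicted/observed ratio (1.0 means no multiplicative bias)."""
    p = np.asarray(predicted, dtype=float)
    o = np.asarray(observed, dtype=float)
    bias = (p - o).mean()
    rmse = np.sqrt(((p - o) ** 2).mean())
    gmr = np.exp(np.log(p / o).mean())   # requires positive quantities
    return {"bias": bias, "rmse": rmse, "geometric_mean_ratio": gmr}

# made-up predictions and observations of a positive quantity
stats = comparison_stats([1.2, 0.9, 1.5, 2.0], [1.0, 1.0, 1.4, 2.2])
print({k: round(v, 3) for k, v in stats.items()})
```

The geometric mean ratio is often preferred for environmental concentrations, which span orders of magnitude and are better compared on a log scale.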
NOx PREDICTION FOR FBC BOILERS USING EMPIRICAL MODELS
Directory of Open Access Journals (Sweden)
Jiří Štefanica
2014-02-01
Reliable prediction of NOx emissions can provide useful information for boiler design and fuel selection. The kinetic prediction models currently used for FBC boilers are overly complex and require large computing capacity; even so, there are many uncertainties in the case of FBC boilers. The empirical modeling approach to NOx prediction has so far been used exclusively for PCC boilers, and no reference is available for adapting this method to FBC conditions. This paper presents the possible advantages of empirical modeling for predicting NOx emissions from FBC boilers, together with a discussion of its limitations. Empirical models are reviewed and applied to operating data from FBC boilers combusting Czech lignite coal or coal-biomass mixtures. Modifications to the models are proposed in accordance with theoretical knowledge and prediction accuracy.
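An empirical model in this sense is a regression fitted to operating data rather than a kinetic simulation. The sketch below fits a generic linear correlation to synthetic data; the regressors (bed temperature, excess O2) and all numbers are assumptions for illustration, not the paper's actual model or measurements:

```python
import numpy as np

rng = np.random.default_rng(1)
temp = rng.uniform(800, 900, 50)   # bed temperature [deg C], synthetic
o2 = rng.uniform(3.0, 6.0, 50)     # excess oxygen [%], synthetic
# synthetic "measured" NOx with a known linear structure plus noise
nox = 50 + 0.4 * temp + 12.0 * o2 + rng.normal(0, 5, 50)

# least-squares fit of NOx = a + b*temp + c*o2
X = np.column_stack([np.ones_like(temp), temp, o2])
coef, *_ = np.linalg.lstsq(X, nox, rcond=None)
pred = X @ coef
rmse = np.sqrt(((pred - nox) ** 2).mean())
print(coef.round(2), round(rmse, 2))
```

The appeal for FBC applications is exactly what the abstract notes: such a fit is trivial to evaluate compared with a kinetic model, at the cost of validity only within the range of the training data.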
Analytical model for local scour prediction around hydrokinetic turbine foundations
Musa, M.; Heisel, M.; Hill, C.; Guala, M.
2017-12-01
Marine and hydrokinetic renewable energy is an emerging sustainable and secure technology which produces clean energy by harnessing water currents, mostly from tidal and fluvial waterways. Hydrokinetic turbines are typically anchored at the bottom of the channel, which can be erodible or non-erodible. Recent experiments demonstrated the interactions between operating turbines and an erodible surface with sediment transport, resulting in a remarkable localized erosion-deposition pattern significantly larger than those observed around static in-river structures such as bridge piers. Predicting local scour geometry at the base of hydrokinetic devices is extremely important for foundation design, installation, operation, and maintenance (IO&M), and long-term structural integrity. An analytical modeling framework is proposed which applies the phenomenological theory of turbulence to the flow structures that promote the scouring process at the base of a turbine. The evolution of scour is directly linked to device operating conditions through the turbine drag force, which is inferred to locally dictate the energy dissipation rate in the scour region. The predictive model is validated using experimental data obtained at the University of Minnesota's St. Anthony Falls Laboratory (SAFL), covering two sediment mobility regimes (clear water and live bed), different turbine designs, hydraulic parameters, grain size distributions and bedform types. The model is applied to a potential prototype-scale deployment in the lower Mississippi River, demonstrating its practical relevance and endorsing the feasibility of hydrokinetic energy power plants in large sandy rivers. Multi-turbine deployments are further studied experimentally by monitoring both local and non-local geomorphic effects introduced by a twelve-turbine staggered array model installed in a wide channel at SAFL. Local scour behind each turbine is well captured by the theoretical predictive model. However, multi
Multivariate Models for Prediction of Human Skin Sensitization Hazard
Strickland, Judy; Zang, Qingda; Paris, Michael; Lehmann, David M.; Allen, David; Choksi, Neepa; Matheson, Joanna; Jacobs, Abigail; Casey, Warren; Kleinstreuer, Nicole
2016-01-01
One of ICCVAM’s top priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary to produce skin sensitization suggests that no single alternative method will replace the currently accepted animal tests. ICCVAM is evaluating an integrated approach to testing and assessment based on the adverse outcome pathway for skin sensitization that uses machine learning approaches to predict human skin sensitization hazard. We combined data from three in chemico or in vitro assays—the direct peptide reactivity assay (DPRA), human cell line activation test (h-CLAT), and KeratinoSens™ assay—six physicochemical properties, and an in silico read-across prediction of skin sensitization hazard into 12 variable groups. The variable groups were evaluated using two machine learning approaches, logistic regression (LR) and support vector machine (SVM), to predict human skin sensitization hazard. Models were trained on 72 substances and tested on an external set of 24 substances. The six models (three LR and three SVM) with the highest accuracy (92%) used: (1) DPRA, h-CLAT, and read-across; (2) DPRA, h-CLAT, read-across, and KeratinoSens; or (3) DPRA, h-CLAT, read-across, KeratinoSens, and log P. The models performed better at predicting human skin sensitization hazard than the murine local lymph node assay (accuracy = 88%), any of the alternative methods alone (accuracy = 63–79%), or test batteries combining data from the individual methods (accuracy = 75%). These results suggest that computational methods are promising tools to effectively identify potential human skin sensitizers without animal testing. PMID:27480324
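The logistic-regression half of the two machine learning approaches can be sketched from scratch. The three features below stand in for assay readouts such as DPRA, h-CLAT and KeratinoSens scores, and the data are synthetic, not the ICCVAM reference substances:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))                    # stand-in assay features
true_w = np.array([2.0, -1.5, 1.0])            # synthetic ground truth
y = (X @ true_w + rng.normal(0, 0.5, n) > 0).astype(float)  # 1 = sensitizer

# plain gradient-descent logistic regression
w, b, lr = np.zeros(3), 0.0, 0.1
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))     # sigmoid probabilities
    w -= lr * (X.T @ (p - y)) / n              # gradient of log-loss
    b -= lr * (p - y).mean()

acc = (((X @ w + b) > 0) == (y == 1)).mean()   # training accuracy
print(round(acc, 2))
```

The SVM variant differs mainly in the loss (hinge instead of log-loss); in practice both are available in standard libraries, and the paper's contribution lies in the choice and combination of input variable groups rather than the classifiers themselves.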
Modeling and predicting historical volatility in exchange rate markets
Lahmiri, Salim
2017-04-01
Volatility modeling and forecasting of currency exchange rates is important in several areas of business risk management, including treasury risk management, derivatives pricing, and portfolio risk evaluation. The purpose of this study is to present a simple and effective approach for predicting the historical volatility of currency exchange rates. The approach is based on a limited set of technical indicators used as inputs to artificial neural networks (ANN). To show the effectiveness of the proposed approach, it was applied to forecast US/Canada and US/Euro exchange rate volatilities. The forecasting results show that our simple approach outperformed the conventional GARCH and EGARCH with different distribution assumptions, and also the hybrid GARCH and EGARCH with ANN, in terms of mean absolute error, mean squared error, and Theil's inequality coefficient. Because of its simplicity and effectiveness, the approach is promising for US currency volatility prediction tasks.
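Historical volatility, the target series in this kind of study, is commonly computed as a rolling standard deviation of log returns. The sketch below builds that series from a simulated price path (not real exchange-rate data) and scores a naive persistence baseline with the mean absolute error, one of the paper's metrics:

```python
import numpy as np

rng = np.random.default_rng(2)
# synthetic exchange-rate path: geometric random walk around 1.3
prices = 1.3 * np.exp(np.cumsum(rng.normal(0, 0.005, 500)))
returns = np.diff(np.log(prices))              # daily log returns

window = 20                                    # assumed rolling window
vol = np.array([returns[i - window:i].std()
                for i in range(window, len(returns))])

# persistence baseline: tomorrow's volatility = today's
mae = np.abs(vol[1:] - vol[:-1]).mean()
print(len(vol), round(mae, 5))
```

Any forecasting model, ANN or GARCH-family, would need to beat this kind of trivial baseline on MAE (and MSE, Theil's U) to be worth its complexity.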
Directory of Open Access Journals (Sweden)
Mihaela Simionescu
2014-12-01
There are many types of econometric models used in predicting the inflation rate, but in this study we used a Bayesian shrinkage combination approach. This methodology is used in order to improve prediction accuracy by including information that is not captured by the econometric models. Therefore, experts' forecasts are utilized as prior information; for Romania these predictions are provided by the Institute for Economic Forecasting (Dobrescu macromodel), the National Commission for Prognosis, and the European Commission. The empirical results for Romanian inflation show the superiority of a fixed effects model compared to other types of econometric models, such as VAR, Bayesian VAR, simultaneous equations model, dynamic model, and log-linear model. The Bayesian combinations that used experts' predictions as priors, when the shrinkage parameter tends to infinity, improved the accuracy of all forecasts based on individual models, also outperforming zero-weight and equal-weight predictions and naïve forecasts.
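The limiting behavior described, the combined forecast tending to the expert prior as the shrinkage parameter grows, can be shown with a stylized one-parameter combination. The weighting form and all numbers are illustrative assumptions, not the paper's estimator or the Romanian data:

```python
def shrinkage_combine(model_forecast, expert_prior, lam):
    """Shrink the model forecast toward the expert prior; lam = 0 keeps
    the model forecast, lam -> infinity recovers the expert value."""
    return (model_forecast + lam * expert_prior) / (1.0 + lam)

model_f, expert_f = 4.2, 3.5     # made-up inflation forecasts [%]
for lam in (0.0, 1.0, 100.0):
    print(lam, round(shrinkage_combine(model_f, expert_f, lam), 3))
```

With lam = 0 the output is the pure model forecast (4.2), lam = 1 gives the equal-weight average (3.85), and large lam pulls the combination onto the expert prior.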
Modeling and prediction of Turkey's electricity consumption using Support Vector Regression
International Nuclear Information System (INIS)
Kavaklioglu, Kadir
2011-01-01
Support Vector Regression (SVR) methodology is used to model and predict Turkey's electricity consumption. Among various SVR formalisms, ε-SVR method was used since the training pattern set was relatively small. Electricity consumption is modeled as a function of socio-economic indicators such as population, Gross National Product, imports and exports. In order to facilitate future predictions of electricity consumption, a separate SVR model was created for each of the input variables using their current and past values; and these models were combined to yield consumption prediction values. A grid search for the model parameters was performed to find the best ε-SVR model for each variable based on Root Mean Square Error. Electricity consumption of Turkey is predicted until 2026 using data from 1975 to 2006. The results show that electricity consumption can be modeled using Support Vector Regression and the models can be used to predict future electricity consumption. (author)
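The grid search over ε-SVR parameters scored by root mean square error can be sketched with scikit-learn. The data below are a synthetic near-linear series, not Turkey's socio-economic indicators, and the parameter grid is an illustrative assumption:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(3)
X = rng.uniform(0, 10, (60, 1))                 # stand-in input variable
y = 2.0 * X.ravel() + rng.normal(0, 0.5, 60)    # near-linear "consumption"

# cross-validated grid search for the best epsilon-SVR, scored by RMSE
grid = GridSearchCV(
    SVR(kernel="rbf"),
    {"C": [1, 10, 100], "epsilon": [0.01, 0.1, 0.5], "gamma": [0.1, 1.0]},
    scoring="neg_root_mean_squared_error",
    cv=3,
)
grid.fit(X, y)
rmse = -grid.best_score_                        # best cross-validated RMSE
print(grid.best_params_, round(rmse, 2))
```

The ε parameter is what makes this formalism attractive for small training sets, as the abstract notes: it defines an insensitivity tube within which errors are not penalized, limiting the number of support vectors.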
[Application of ARIMA model on prediction of malaria incidence].
Jing, Xia; Hua-Xun, Zhang; Wen, Lin; Su-Jian, Pei; Ling-Cong, Sun; Xiao-Rong, Dong; Mu-Min, Cao; Dong-Ni, Wu; Shunxiang, Cai
2016-01-29
To predict the incidence of local malaria in Hubei Province, the Autoregressive Integrated Moving Average (ARIMA) model was applied. SPSS 13.0 software was used to construct the ARIMA model based on the monthly local malaria incidence in Hubei Province from 2004 to 2009. The local malaria incidence data of 2010 were used for model validation and evaluation. The ARIMA (1, 1, 1) (1, 1, 0)12 model was found to be the optimal one, with an AIC of 76.085 and an SBC of 84.395. All the actual incidence data fell within the 95% CI of the values predicted by the model. The prediction performance of the model was acceptable. The ARIMA model could effectively fit and predict the incidence of local malaria in Hubei Province.
Sun, Ruochen; Yuan, Huiling; Liu, Xiaoli
2017-11-01
The heteroscedasticity treatment in residual error models directly impacts the model calibration and prediction uncertainty estimation. This study compares three methods to deal with the heteroscedasticity, including the explicit linear modeling (LM) method and nonlinear modeling (NL) method using hyperbolic tangent function, as well as the implicit Box-Cox transformation (BC). Then a combined approach (CA) combining the advantages of both LM and BC methods has been proposed. In conjunction with the first order autoregressive model and the skew exponential power (SEP) distribution, four residual error models are generated, namely LM-SEP, NL-SEP, BC-SEP and CA-SEP, and their corresponding likelihood functions are applied to the Variable Infiltration Capacity (VIC) hydrologic model over the Huaihe River basin, China. Results show that the LM-SEP yields the poorest streamflow predictions with the widest uncertainty band and unrealistic negative flows. The NL and BC methods can better deal with the heteroscedasticity and hence their corresponding predictive performances are improved, yet the negative flows cannot be avoided. The CA-SEP produces the most accurate predictions with the highest reliability and effectively avoids the negative flows, because the CA approach is capable of addressing the complicated heteroscedasticity over the study basin.
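The effect of the implicit Box-Cox (BC) option can be shown in a few lines: residual spread that grows with flow becomes roughly constant after the transformation z = (y^λ - 1)/λ, which approaches the log transform as λ → 0. Flows, error structure and λ below are synthetic illustrations, not the VIC/Huaihe setup:

```python
import numpy as np

def box_cox(y, lam=0.01):
    """Box-Cox transform; small lam approximates the log transform."""
    return (y ** lam - 1.0) / lam

rng = np.random.default_rng(5)
flow = rng.uniform(10, 1000, 5000)                 # synthetic "true" flows
obs = flow * (1 + rng.normal(0, 0.1, 5000))        # error grows with flow

# residual spread in original vs transformed space, low- vs high-flow half
raw_low = np.std((obs - flow)[flow < 500])
raw_high = np.std((obs - flow)[flow >= 500])
bc_low = np.std((box_cox(obs) - box_cox(flow))[flow < 500])
bc_high = np.std((box_cox(obs) - box_cox(flow))[flow >= 500])
print(round(raw_high / raw_low, 1), round(bc_high / bc_low, 1))
```

In raw space the high-flow residuals are several times more spread out than the low-flow ones; after the transform the ratio is near 1, which is the homoscedasticity that the SEP likelihood then assumes.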
Mobility Modelling through Trajectory Decomposition and Prediction
Faghihi, Farbod
2017-01-01
The ubiquity of mobile devices with positioning sensors makes it possible to derive a user's location at any time. However, constantly sensing the position in order to track the user's movement is not feasible, either due to the unavailability of sensors, or due to computational and storage burdens. In this thesis, we present and evaluate a novel approach for efficiently tracking users' movement trajectories using decomposition and prediction of trajectories. We facilitate tracking by taking advantage ...
Modelling earth current precursors in earthquake prediction
Directory of Open Access Journals (Sweden)
R. Di Maio
1997-06-01
This paper deals with the theory of earth current precursors of earthquakes. A dilatancy-diffusion-polarization model is proposed to explain the anomalies of the electric potential which are observed on the ground surface prior to some earthquakes. The electric polarization is believed to be an electrokinetic effect due to the invasion of fluids into new pores which are opened inside a stressed, dilated rock body. The time and space variation of the distribution of the electric potential in a layered earth, as well as in a faulted half-space, is studied in detail. The results show that the surface response depends on the underground conductivity distribution and on the relative disposition of the measuring dipole with respect to the buried bipole source. A field procedure based on an areal layout of the recording sites is proposed, in order to obtain the most complete information on the time and space evolution of the precursory phenomena in any given seismic region.