WorldWideScience

Sample records for models predict performance

  1. Calibration of PMIS pavement performance prediction models.

    Science.gov (United States)

    2012-02-01

    Improve the accuracy of TxDOT's existing pavement performance prediction models by calibrating these models with actual field data obtained from the Pavement Management Information System (PMIS). Ensure logical performance superiority patte...

  2. Iowa calibration of MEPDG performance prediction models.

    Science.gov (United States)

    2013-06-01

    This study aims to improve the accuracy of AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG) pavement performance predictions for Iowa pavement systems through local calibration of MEPDG prediction models. A total of 130 representative p...

  3. A statistical model for predicting muscle performance

    Science.gov (United States)

    Byerly, Diane Leslie De Caix

    The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th order Autoregressive (AR) model and statistical regression analysis. It was determined that an AR derived parameter, the mean average magnitude of AR poles, significantly correlated with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-space flight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned enhancing astronaut performance and safety. Potential ground-based, medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing safety guidelines in the industry for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing
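
    The AR-derived statistic described above can be illustrated with a short sketch: fit a 5th-order autoregressive model to an SEMG-like signal and compute the mean magnitude of the AR poles. The signal, window length, and variable names below are illustrative assumptions, not the study's actual data or code.

```python
# Sketch: 5th-order AR fit and mean AR-pole magnitude (illustrative data, not the study's).
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(0)
semg = rng.standard_normal(2000)              # stand-in for one repetition's SEMG window

res = AutoReg(semg, lags=5, trend="c").fit()  # 5th-order AR model
ar_coefs = res.params[1:]                     # a1..a5 (skip the constant term)

# Poles are the roots of z^5 - a1*z^4 - ... - a5 = 0.
poles = np.roots(np.concatenate(([1.0], -ar_coefs)))
mean_pole_magnitude = np.mean(np.abs(poles))
print(f"mean AR-pole magnitude: {mean_pole_magnitude:.3f}")
```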

  4. Modelling the predictive performance of credit scoring

    Directory of Open Access Journals (Sweden)

    Shi-Wei Shen

    2013-07-01

    Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A goodness-of-fit test demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI), micro- and macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality of the study was that three models were developed to predict corporate firms' defaults based on different microeconomic and macroeconomic factors such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit measures and receiver operating characteristic analyses during the examination of the robustness of the predictive power of these factors.
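
    As a minimal sketch of the kind of logistic-regression scoring and ROC comparison described above (synthetic data and made-up feature names; the TCRI and the macroeconomic series themselves are not reproduced here):

```python
# Sketch: logistic default-risk scoring with and without macro variables (synthetic, illustrative).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
micro = rng.standard_normal((n, 3))           # e.g. credit index, asset growth, leverage (illustrative)
macro = rng.standard_normal((n, 2))           # e.g. stock index, GDP growth (illustrative)
logit = -2.0 + micro @ [0.8, -0.5, 0.3] + macro @ [0.4, -0.6]
default = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_micro_tr, X_micro_te, X_all_tr, X_all_te, y_tr, y_te = train_test_split(
    micro, np.hstack([micro, macro]), default, test_size=0.3, random_state=0)

auc_micro = roc_auc_score(y_te, LogisticRegression(max_iter=1000).fit(X_micro_tr, y_tr).predict_proba(X_micro_te)[:, 1])
auc_full = roc_auc_score(y_te, LogisticRegression(max_iter=1000).fit(X_all_tr, y_tr).predict_proba(X_all_te)[:, 1])
print(f"AUC micro-only: {auc_micro:.3f}  AUC micro+macro: {auc_full:.3f}")
```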

  5. Numerical modeling capabilities to predict repository performance

    International Nuclear Information System (INIS)

    1979-09-01

    This report presents a summary of current numerical modeling capabilities that are applicable to the design and performance evaluation of underground repositories for the storage of nuclear waste. The report includes codes that are available in-house, within Golder Associates and Lawrence Livermore Laboratories, as well as those that are generally available within industry and universities. The first listing of programs consists of in-house codes in the subject areas of hydrology, solute transport, thermal and mechanical stress analysis, and structural geology. The second listing of programs is divided by subject into the following categories: site selection, structural geology, mine structural design, mine ventilation, hydrology, and mine design/construction/operation. These programs are not specifically designed for use in the design and evaluation of an underground repository for nuclear waste, but several or most of them may be used for that purpose.

  6. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with the problem of low prediction accuracy, which causes costly maintenance. Although many researchers have developed performance prediction models, the accuracy of prediction has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Then three models, including the multivariate nonlinear regression (MNLR) model, artificial neural network (ANN) model, and Markov chain (MC) model, are tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems to be a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and is not explicitly related to quantitative physical parameters. This paper then suggests that a promising direction for developing performance prediction models is to combine the advantages of the different models to obtain better accuracy.
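
    A Markov-chain pavement model of the kind mentioned above propagates a distribution over discrete condition states with a transition probability matrix; the sketch below uses made-up states and probabilities purely for illustration, not calibrated values from the paper.

```python
# Sketch: Markov-chain condition prediction (illustrative transition matrix, not calibrated values).
import numpy as np

# Condition states 1 (best) .. 4 (worst); each row must sum to 1.
P = np.array([[0.80, 0.15, 0.04, 0.01],
              [0.00, 0.75, 0.20, 0.05],
              [0.00, 0.00, 0.70, 0.30],
              [0.00, 0.00, 0.00, 1.00]])

state = np.array([1.0, 0.0, 0.0, 0.0])    # all sections start in state 1
for year in range(1, 11):
    state = state @ P                      # one-year transition
    expected_state = state @ np.arange(1, 5)
    print(f"year {year:2d}: expected condition state = {expected_state:.2f}")
```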

  7. Assessment of performance of survival prediction models for cancer prognosis

    Directory of Open Access Journals (Sweden)

    Chen Hung-Chia

    2012-07-01

    Background: Cancer survival studies are commonly analyzed using survival-time prediction models for cancer prognosis. A number of different performance metrics are used to ascertain the concordance between the predicted risk score of each patient and the actual survival time, but these metrics can sometimes conflict. Alternatively, patients are sometimes divided into two classes according to a survival-time threshold, and binary classifiers are applied to predict each patient's class. Although this approach has several drawbacks, it does provide natural performance metrics such as positive and negative predictive values to enable unambiguous assessments. Methods: We compare the survival-time prediction and survival-time threshold approaches to analyzing cancer survival studies. We review and compare common performance metrics for the two approaches. We present new randomization tests and cross-validation methods to enable unambiguous statistical inferences for several performance metrics used with the survival-time prediction approach. We consider five survival prediction models consisting of one clinical model, two gene expression models, and two models from combinations of clinical and gene expression models. Results: A public breast cancer dataset was used to compare several performance metrics using five prediction models. (1) For some prediction models, the hazard ratio from fitting a Cox proportional hazards model was significant, but the two-group comparison was insignificant, and vice versa. (2) The randomization test and cross-validation were generally consistent with the p-values obtained from the standard performance metrics. (3) Binary classifiers highly depended on how the risk groups were defined; a slight change of the survival threshold for assignment of classes led to very different prediction results. Conclusions: (1) Different performance metrics for evaluation of a survival prediction model may give different conclusions in
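
    The survival-time prediction approach discussed above is typically scored with a concordance index from a Cox proportional hazards fit; below is a minimal sketch using the lifelines package and its bundled Rossi dataset as a stand-in (the breast-cancer data and the five study models are not reproduced here).

```python
# Sketch: Cox proportional hazards model and concordance index (lifelines' bundled example data).
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()                          # duration column: 'week', event indicator: 'arrest'
cph = CoxPHFitter()
cph.fit(df, duration_col="week", event_col="arrest")

print(cph.summary[["coef", "exp(coef)", "p"]])
print(f"concordance index: {cph.concordance_index_:.3f}")
```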

  8. Hybrid Corporate Performance Prediction Model Considering Technical Capability

    Directory of Open Access Journals (Sweden)

    Joonhyuck Lee

    2016-07-01

    Many studies have tried to predict corporate performance and stock prices to enhance investment profitability using qualitative approaches such as the Delphi method. However, developments in data processing technology and machine-learning algorithms have resulted in efforts to develop quantitative prediction models in various managerial subject areas. We propose a quantitative corporate performance prediction model that applies the support vector regression (SVR) algorithm to solve the problem of the overfitting of training data and can be applied to regression problems. The proposed model optimizes the SVR training parameters based on the training data, using the genetic algorithm to achieve sustainable predictability in changeable markets and managerial environments. Technology-intensive companies represent an increasing share of the total economy. The performance and stock prices of these companies are affected by their financial standing and their technological capabilities. Therefore, we apply both financial indicators and technical indicators to establish the proposed prediction model. Here, we use time series data, including financial, patent, and corporate performance information of 44 electronic and IT companies. Then, we predict the performance of these companies as an empirical verification of the prediction performance of the proposed model.
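
    A minimal sketch of the SVR step described above, with the genetic-algorithm tuning replaced by an ordinary grid search for brevity (synthetic data; the financial and patent indicators are assumptions, not the study's dataset):

```python
# Sketch: SVR with a hyperparameter search standing in for the paper's genetic-algorithm tuning.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(2)
X = rng.standard_normal((300, 5))          # e.g. financial ratios + patent counts (illustrative)
y = X @ [1.0, -0.5, 0.3, 0.0, 0.2] + 0.1 * rng.standard_normal(300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
search = GridSearchCV(
    SVR(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10, 100], "epsilon": [0.01, 0.1, 0.5], "gamma": ["scale", 0.1, 1.0]},
    cv=5)
search.fit(X_tr, y_tr)
print("best params:", search.best_params_)
print("test R^2:", round(search.score(X_te, y_te), 3))
```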

  9. The predictive performance and stability of six species distribution models.

    Science.gov (United States)

    Duan, Ren-Yan; Kong, Xiao-Quan; Huang, Min-Yi; Fan, Wei-Yi; Wang, Zhi-Gao

    2014-01-01

    Predicting species' potential geographical range by species distribution models (SDMs) is central to understanding their ecological requirements. However, the effects of using different modeling techniques need further investigation. In order to improve the prediction effect, we need to assess the predictive performance and stability of different SDMs. We collected the distribution data of five common tree species (Pinus massoniana, Betula platyphylla, Quercus wutaishanica, Quercus mongolica and Quercus variabilis) and simulated their potential distribution area using 13 environmental variables and six widely used SDMs: BIOCLIM, DOMAIN, MAHAL, RF, MAXENT, and SVM. Each model run was repeated 100 times (trials). We compared the predictive performance by testing the consistency between observations and simulated distributions and assessed the stability by the standard deviation, coefficient of variation, and the 99% confidence interval of Kappa and AUC values. The mean values of AUC and Kappa from MAHAL, RF, MAXENT, and SVM trials were similar and significantly higher than those from BIOCLIM and DOMAIN trials (p < 0.05), while the associated standard deviations and coefficients of variation were larger for BIOCLIM and DOMAIN trials (p < 0.05), and the 99% confidence intervals for AUC and Kappa values were narrower for MAHAL, RF, MAXENT, and SVM. Compared to BIOCLIM and DOMAIN, the other SDMs (MAHAL, RF, MAXENT, and SVM) had higher prediction accuracy, smaller confidence intervals, and were more stable and less affected by the random variable (randomly selected pseudo-absence points). According to the prediction performance and stability of SDMs, we can divide these six SDMs into two categories: a high performance and stability group including MAHAL, RF, MAXENT, and SVM, and a low performance and stability group consisting of BIOCLIM and DOMAIN. We highlight that choosing appropriate SDMs to address a specific problem is an important part of the modeling process.

  10. Aqua/Aura Updated Inclination Adjust Maneuver Performance Prediction Model

    Science.gov (United States)

    Boone, Spencer

    2017-01-01

    This presentation will discuss the updated Inclination Adjust Maneuver (IAM) performance prediction model that was developed for Aqua and Aura following the 2017 IAM series. This updated model uses statistical regression methods to identify potential long-term trends in maneuver parameters, yielding improved predictions when re-planning past maneuvers. The presentation has been reviewed and approved by Eric Moyer, ESMO Deputy Project Manager.

  11. The predictive performance and stability of six species distribution models.

    Directory of Open Access Journals (Sweden)

    Ren-Yan Duan

    Predicting species' potential geographical range by species distribution models (SDMs) is central to understanding their ecological requirements. However, the effects of using different modeling techniques need further investigation. In order to improve the prediction effect, we need to assess the predictive performance and stability of different SDMs. We collected the distribution data of five common tree species (Pinus massoniana, Betula platyphylla, Quercus wutaishanica, Quercus mongolica and Quercus variabilis) and simulated their potential distribution area using 13 environmental variables and six widely used SDMs: BIOCLIM, DOMAIN, MAHAL, RF, MAXENT, and SVM. Each model run was repeated 100 times (trials). We compared the predictive performance by testing the consistency between observations and simulated distributions and assessed the stability by the standard deviation, coefficient of variation, and the 99% confidence interval of Kappa and AUC values. The mean values of AUC and Kappa from MAHAL, RF, MAXENT, and SVM trials were similar and significantly higher than those from BIOCLIM and DOMAIN trials (p < 0.05), while the associated standard deviations and coefficients of variation were larger for BIOCLIM and DOMAIN trials (p < 0.05), and the 99% confidence intervals for AUC and Kappa values were narrower for MAHAL, RF, MAXENT, and SVM. Compared to BIOCLIM and DOMAIN, the other SDMs (MAHAL, RF, MAXENT, and SVM) had higher prediction accuracy, smaller confidence intervals, and were more stable and less affected by the random variable (randomly selected pseudo-absence points). According to the prediction performance and stability of SDMs, we can divide these six SDMs into two categories: a high performance and stability group including MAHAL, RF, MAXENT, and SVM, and a low performance and stability group consisting of BIOCLIM and DOMAIN. We highlight that choosing appropriate SDMs to address a specific problem is an important part of the modeling process.
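
    The repeated-trial AUC and Kappa comparison used above can be reproduced in outline as below; a single placeholder classifier on synthetic presence/absence data stands in for the pseudo-absence sampling and the six SDM algorithms.

```python
# Sketch: repeated-trial AUC and Kappa evaluation (placeholder classifier, synthetic presence/absence data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.uniform(size=(1000, 13))                        # 13 environmental variables (illustrative)
y = (X[:, 0] + 0.5 * X[:, 1] + 0.2 * rng.standard_normal(1000) > 1.0).astype(int)

aucs, kappas = [], []
for trial in range(100):                                # 100 trials, mirroring the study design
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=trial)
    model = RandomForestClassifier(n_estimators=100, random_state=trial).fit(X_tr, y_tr)
    prob = model.predict_proba(X_te)[:, 1]
    aucs.append(roc_auc_score(y_te, prob))
    kappas.append(cohen_kappa_score(y_te, (prob > 0.5).astype(int)))

print(f"AUC   mean={np.mean(aucs):.3f} sd={np.std(aucs):.3f}")
print(f"Kappa mean={np.mean(kappas):.3f} sd={np.std(kappas):.3f}")
```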

  12. Adding propensity scores to pure prediction models fails to improve predictive performance

    Directory of Open Access Journals (Sweden)

    Amy S. Nowacki

    2013-08-01

    Background. Propensity score usage seems to be growing in popularity, leading researchers to question the possible role of propensity scores in prediction modeling, despite the lack of a theoretical rationale. It is suspected that such requests are due to the lack of differentiation regarding the goals of predictive modeling versus causal inference modeling. Therefore, the purpose of this study is to formally examine the effect of propensity scores on predictive performance. Our hypothesis is that a multivariable regression model that adjusts for all covariates will perform as well as or better than those models utilizing propensity scores with respect to model discrimination and calibration. Methods. The most commonly encountered statistical scenarios for medical prediction (logistic and proportional hazards regression) were used to investigate this research question. Random cross-validation was performed 500 times to correct for optimism. The multivariable regression models adjusting for all covariates were compared with models that included adjustment for or weighting with the propensity scores. The methods were compared based on three predictive performance measures: (1) concordance indices; (2) Brier scores; and (3) calibration curves. Results. Multivariable models adjusting for all covariates had the highest average concordance index, the lowest average Brier score, and the best calibration. Propensity score adjustment and inverse probability weighting models without adjustment for all covariates performed worse than full models and failed to improve predictive performance with full covariate adjustment. Conclusion. Propensity score techniques did not improve prediction performance measures beyond multivariable adjustment. Propensity scores are not recommended if the analytical goal is pure prediction modeling.
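
    A compressed sketch of the comparison described above: a logistic model using all covariates versus one using only treatment plus the estimated propensity score, scored by concordance (c-statistic) and Brier score on synthetic data. The variable names and data-generating process are assumptions, not the study's data or code.

```python
# Sketch: full-covariate model vs. propensity-score-adjusted model (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, brier_score_loss
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 4000
X = rng.standard_normal((n, 6))                                   # baseline covariates
treat = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1]))))
outcome = rng.binomial(1, 1 / (1 + np.exp(-(-1 + 0.8 * treat + X @ [0.6, -0.4, 0.3, 0.2, 0.0, 0.1]))))

X_tr, X_te, t_tr, t_te, y_tr, y_te = train_test_split(X, treat, outcome, test_size=0.3, random_state=0)

# Propensity score: P(treatment | covariates), estimated on the training split.
ps_model = LogisticRegression(max_iter=1000).fit(X_tr, t_tr)
ps_tr, ps_te = ps_model.predict_proba(X_tr)[:, 1], ps_model.predict_proba(X_te)[:, 1]

full = LogisticRegression(max_iter=1000).fit(np.column_stack([t_tr, X_tr]), y_tr)
ps_only = LogisticRegression(max_iter=1000).fit(np.column_stack([t_tr, ps_tr]), y_tr)

for name, model, Z in [("full covariates", full, np.column_stack([t_te, X_te])),
                       ("treatment + propensity score", ps_only, np.column_stack([t_te, ps_te]))]:
    p = model.predict_proba(Z)[:, 1]
    print(f"{name:30s} c-statistic={roc_auc_score(y_te, p):.3f}  Brier={brier_score_loss(y_te, p):.3f}")
```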

  13. Thermal Model Predictions of Advanced Stirling Radioisotope Generator Performance

    Science.gov (United States)

    Wang, Xiao-Yen J.; Fabanich, William Anthony; Schmitz, Paul C.

    2014-01-01

    This paper presents recent thermal model results of the Advanced Stirling Radioisotope Generator (ASRG). The three-dimensional (3D) ASRG thermal power model was built using the Thermal Desktop(trademark) thermal analyzer. The model was correlated with ASRG engineering unit test data and ASRG flight unit predictions from Lockheed Martin's (LM's) I-deas(trademark) TMG thermal model. The auxiliary cooling system (ACS) of the ASRG is also included in the ASRG thermal model. The ACS is designed to remove waste heat from the ASRG so that it can be used to heat spacecraft components. The performance of the ACS is reported under nominal conditions and during a Venus flyby scenario. The results for the nominal case are validated with data from Lockheed Martin. Transient thermal analysis results of ASRG for a Venus flyby with a representative trajectory are also presented. In addition, model results of an ASRG mounted on a Cassini-like spacecraft with a sunshade are presented to show a way to mitigate the high temperatures of a Venus flyby. It was predicted that the sunshade can lower the temperature of the ASRG alternator by 20 C for the representative Venus flyby trajectory. The 3D model also was modified to predict generator performance after a single Advanced Stirling Convertor failure. The geometry of the Microtherm HT insulation block on the outboard side was modified to match deformation and shrinkage observed during testing of a prototypic ASRG test fixture by LM. Test conditions and test data were used to correlate the model by adjusting the thermal conductivity of the deformed insulation to match the post-heat-dump steady state temperatures. Results for these conditions showed that the performance of the still-functioning inboard ACS was unaffected.

  14. Introducing Model Predictive Control for Improving Power Plant Portfolio Performance

    DEFF Research Database (Denmark)

    Edlund, Kristian Skjoldborg; Bendtsen, Jan Dimon; Børresen, Simon

    2008-01-01

    This paper introduces a model predictive control (MPC) approach for construction of a controller for balancing the power generation against consumption in a power system. The objective of the controller is to coordinate a portfolio consisting of multiple power plant units in the effort to perform...... implementation consisting of a distributed PI controller structure, both in terms of minimising the overall cost but also in terms of the ability to minimise deviation, which is the classical objective....
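
    A toy version of the balancing problem described above can be written as a finite-horizon MPC optimisation; the sketch below uses cvxpy with made-up plant limits and a made-up consumption forecast, and is not the controller from the paper.

```python
# Sketch: one MPC step balancing portfolio generation against forecast consumption (toy numbers).
import numpy as np
import cvxpy as cp

H = 12                                                   # prediction horizon (e.g. 12 five-minute steps)
demand = 500 + 50 * np.sin(np.linspace(0, np.pi, H))     # illustrative consumption forecast [MW]
p0, p_max, ramp = 480.0, 700.0, 15.0                     # current output, capacity, ramp limit per step [MW]

p = cp.Variable(H)                                       # planned generation trajectory
cost = cp.sum_squares(p - demand) + 0.1 * cp.sum_squares(cp.diff(p))
constraints = [p >= 0, p <= p_max,
               cp.abs(p[0] - p0) <= ramp,
               cp.abs(cp.diff(p)) <= ramp]
cp.Problem(cp.Minimize(cost), constraints).solve()

# MPC-style: apply only the first move, then re-solve at the next step with a fresh forecast.
print("first control move:", round(p.value[0], 1), "MW")
```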

  15. Decline curve based models for predicting natural gas well performance

    Directory of Open Access Journals (Sweden)

    Arash Kamari

    2017-06-01

    The productivity of a gas well declines over its production life, eventually to the point where it cannot cover economic requirements. To overcome such problems, the production performance of gas wells should be predicted by applying reliable methods to analyse the decline trend. Therefore, reliable models are developed in this study on the basis of powerful artificial intelligence techniques, viz. the artificial neural network (ANN) modelling strategy, least square support vector machine (LSSVM) approach, adaptive neuro-fuzzy inference system (ANFIS), and decision tree (DT) method, for the prediction of cumulative gas production as well as initial decline rate multiplied by time, as a function of the Arps' decline curve exponent and the ratio of initial gas flow rate over total gas flow rate. It was concluded that the results obtained from the models developed in the current study are in satisfactory agreement with the actual gas well production data. Furthermore, the results of the comparative study performed demonstrate that the LSSVM strategy is superior to the other models investigated for the prediction of both cumulative gas production and initial decline rate multiplied by time.
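
    The inputs named above come from Arps-type decline analysis; a short sketch of the underlying hyperbolic decline relation and its cumulative form, with illustrative parameter values rather than the paper's data:

```python
# Sketch: Arps hyperbolic decline curve and cumulative production (illustrative parameters).
import numpy as np

qi, Di, b = 1.0e6, 0.002, 0.5          # initial rate [scf/day], initial decline [1/day], Arps exponent

def arps_rate(t_days):
    """Hyperbolic Arps decline: q(t) = qi / (1 + b*Di*t)**(1/b)."""
    return qi / (1.0 + b * Di * t_days) ** (1.0 / b)

def arps_cumulative(t_days):
    """Cumulative production for hyperbolic decline (b != 1)."""
    return qi / (Di * (1.0 - b)) * (1.0 - (1.0 + b * Di * t_days) ** (1.0 - 1.0 / b))

t = np.array([365.0, 5 * 365.0, 10 * 365.0])
print("rates [scf/day]:", arps_rate(t))
print("cumulative gas [scf]:", arps_cumulative(t))
```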

  16. Integrating geophysics and hydrology for reducing the uncertainty of groundwater model predictions and improved prediction performance

    DEFF Research Database (Denmark)

    Christensen, Nikolaj Kruse; Christensen, Steen; Ferre, Ty

    constructed from geological and hydrological data. However, geophysical data are increasingly used to inform hydrogeologic models because they are collected at lower cost and much higher density than geological and hydrological data. Despite increased use of geophysics, it is still unclear whether...... the integration of geophysical data in the construction of a groundwater model increases the prediction performance. We suggest that modelers should perform a hydrogeophysical “test-bench” analysis of the likely value of geophysics data for improving groundwater model prediction performance before actually...... collecting geophysical data. At a minimum, an analysis should be conducted assuming settings that are favorable for the chosen geophysical method. If the analysis suggests that data collected by the geophysical method is unlikely to improve model prediction performance under these favorable settings...

  17. Fuzzy regression modeling for tool performance prediction and degradation detection.

    Science.gov (United States)

    Li, X; Er, M J; Lim, B S; Zhou, J H; Gan, O P; Rutkowski, L

    2010-10-01

    In this paper, the viability of using Fuzzy-Rule-Based Regression Modeling (FRM) algorithm for tool performance and degradation detection is investigated. The FRM is developed based on a multi-layered fuzzy-rule-based hybrid system with Multiple Regression Models (MRM) embedded into a fuzzy logic inference engine that employs Self Organizing Maps (SOM) for clustering. The FRM converts a complex nonlinear problem to a simplified linear format in order to further increase the accuracy in prediction and rate of convergence. The efficacy of the proposed FRM is tested through a case study - namely to predict the remaining useful life of a ball nose milling cutter during a dry machining process of hardened tool steel with a hardness of 52-54 HRc. A comparative study is further made between four predictive models using the same set of experimental data. It is shown that the FRM is superior as compared with conventional MRM, Back Propagation Neural Networks (BPNN) and Radial Basis Function Networks (RBFN) in terms of prediction accuracy and learning speed.

  18. Maintenance personnel performance simulation (MAPPS): a model for predicting maintenance performance reliability in nuclear power plants

    International Nuclear Information System (INIS)

    Knee, H.E.; Krois, P.A.; Haas, P.M.; Siegel, A.I.; Ryan, T.G.

    1983-01-01

    The NRC has developed a structured, quantitative, predictive methodology in the form of a computerized simulation model for assessing maintainer task performance. Objective of the overall program is to develop, validate, and disseminate a practical, useful, and acceptable methodology for the quantitative assessment of NPP maintenance personnel reliability. The program was organized into four phases: (1) scoping study, (2) model development, (3) model evaluation, and (4) model dissemination. The program is currently nearing completion of Phase 2 - Model Development

  19. Performance prediction model for distributed applications on multicore clusters

    CSIR Research Space (South Africa)

    Khanyile, NP

    2012-07-01

    Distributed processing offers a way of successfully dealing with computationally demanding applications such as scientific problems. Over the years, researchers have investigated ways to predict the performance of parallel algorithms. Amdahl’s law...
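
    Amdahl's law, named above as the classical starting point, bounds the achievable speedup by the serial fraction of the work; a one-function sketch (the 95% parallel fraction is an arbitrary illustration):

```python
# Sketch: Amdahl's law speedup bound, S(n) = 1 / ((1 - p) + p / n).
def amdahl_speedup(parallel_fraction: float, n_cores: int) -> float:
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cores)

for n in (2, 8, 64, 1024):
    print(n, round(amdahl_speedup(0.95, n), 2))   # 95% parallelizable workload (illustrative)
```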

  20. A unified tool for performance modelling and prediction

    International Nuclear Information System (INIS)

    Gilmore, Stephen; Kloul, Leila

    2005-01-01

    We describe a novel performability modelling approach, which facilitates the efficient solution of performance models extracted from high-level descriptions of systems. The notation which we use for our high-level designs is the Unified Modelling Language (UML) graphical modelling language. The technology which provides the efficient representation capability for the underlying performance model is the multi-terminal binary decision diagram (MTBDD)-based PRISM probabilistic model checker. The UML models are compiled through an intermediate language, the stochastic process algebra PEPA, before translation into MTBDDs for solution. We illustrate our approach on a real-world analysis problem from the domain of mobile telephony

  1. In Silico Modeling of Gastrointestinal Drug Absorption: Predictive Performance of Three Physiologically Based Absorption Models.

    Science.gov (United States)

    Sjögren, Erik; Thörn, Helena; Tannergren, Christer

    2016-06-06

    Gastrointestinal (GI) drug absorption is a complex process determined by formulation, physicochemical and biopharmaceutical factors, and GI physiology. Physiologically based in silico absorption models have emerged as a widely used and promising supplement to traditional in vitro assays and preclinical in vivo studies. However, there remains a lack of comparative studies between different models. The aim of this study was to explore the strengths and limitations of the in silico absorption models Simcyp 13.1, GastroPlus 8.0, and GI-Sim 4.1, with respect to their performance in predicting human intestinal drug absorption. This was achieved by adopting an a priori modeling approach and using well-defined input data for 12 drugs associated with incomplete GI absorption and related challenges in predicting the extent of absorption. This approach better mimics the real situation during formulation development where predictive in silico models would be beneficial. Plasma concentration-time profiles for 44 oral drug administrations were calculated by convolution of model-predicted absorption-time profiles and reported pharmacokinetic parameters. Model performance was evaluated by comparing the predicted plasma concentration-time profiles, Cmax, tmax, and exposure (AUC) with observations from clinical studies. The overall prediction accuracies for AUC, given as the absolute average fold error (AAFE) values, were 2.2, 1.6, and 1.3 for Simcyp, GastroPlus, and GI-Sim, respectively. The corresponding AAFE values for Cmax were 2.2, 1.6, and 1.3, respectively, and those for tmax were 1.7, 1.5, and 1.4, respectively. Simcyp was associated with underprediction of AUC and Cmax; the accuracy decreased with decreasing predicted fabs. A tendency for underprediction was also observed for GastroPlus, but there was no correlation with predicted fabs. There were no obvious trends for over- or underprediction for GI-Sim. The models performed similarly in capturing dependencies on dose and
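
    The AAFE metric reported above is the antilog of the mean absolute log10 fold error between predicted and observed values; a small sketch (the numbers are made up, not the study's results):

```python
# Sketch: absolute average fold error (AAFE) between predicted and observed exposure values.
import numpy as np

def aafe(predicted, observed):
    """AAFE = 10 ** mean(|log10(predicted / observed)|); 1.0 means perfect agreement."""
    predicted, observed = np.asarray(predicted, float), np.asarray(observed, float)
    return 10 ** np.mean(np.abs(np.log10(predicted / observed)))

pred = [120.0, 80.0, 45.0, 200.0]      # illustrative predicted AUCs
obs = [100.0, 95.0, 60.0, 150.0]       # illustrative observed AUCs
print(f"AAFE = {aafe(pred, obs):.2f}")
```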

  2. Thermal Model Predictions of Advanced Stirling Radioisotope Generator Performance

    Science.gov (United States)

    Wang, Xiao-Yen J.; Fabanich, William Anthony; Schmitz, Paul C.

    2014-01-01

    This presentation describes the capabilities of the three-dimensional thermal power model of the Advanced Stirling Radioisotope Generator (ASRG). The performance of the ASRG is presented for different scenarios, such as a Venus flyby with or without the auxiliary cooling system.

  3. Competency-Based Model for Predicting Construction Project Managers Performance

    OpenAIRE

    Dainty, A. R. J.; Cheng, M.; Moore, D. R.

    2005-01-01

    Using behavioral competencies to influence human resource management decisions is gaining popularity in business organizations. This study identifies the core competencies associated with the construction management role and, further, develops a predictive model to inform human resource selection and development decisions within large construction organizations. A range of construction managers took part in behavioral event interviews in which they were asked to recount critical management inci...

  4. Predicting the Impacts of Intravehicular Displays on Driving Performance with Human Performance Modeling

    Science.gov (United States)

    Mitchell, Diane Kuhl; Wojciechowski, Josephine; Samms, Charneta

    2012-01-01

    A challenge facing the U.S. National Highway Traffic Safety Administration (NHTSA), as well as international safety experts, is the need to educate car drivers about the dangers associated with performing distraction tasks while driving. Researchers working for the U.S. Army Research Laboratory have developed a technique for predicting the increase in mental workload that results when distraction tasks are combined with driving. They implement this technique using human performance modeling. They have predicted workload associated with driving combined with cell phone use. In addition, they have predicted the workload associated with driving military vehicles combined with threat detection. Their technique can be used by safety personnel internationally to demonstrate the dangers of combining distracter tasks with driving and to mitigate the safety risks.

  5. Computational Model-Based Prediction of Human Episodic Memory Performance Based on Eye Movements

    Science.gov (United States)

    Sato, Naoyuki; Yamaguchi, Yoko

    Subjects' episodic memory performance is not simply reflected by eye movements. We use a ‘theta phase coding’ model of the hippocampus to predict subjects' memory performance from their eye movements. Results demonstrate the ability of the model to predict subjects' memory performance. These studies provide a novel approach to computational modeling in the human-machine interface.

  6. Simplified Predictive Models for CO2 Sequestration Performance Assessment

    Science.gov (United States)

    Mishra, Srikanta; RaviGanesh, Priya; Schuetter, Jared; Mooney, Douglas; He, Jincong; Durlofsky, Louis

    2014-05-01

    We present results from an ongoing research project that seeks to develop and validate a portfolio of simplified modeling approaches that will enable rapid feasibility and risk assessment for CO2 sequestration in deep saline formation. The overall research goal is to provide tools for predicting: (a) injection well and formation pressure buildup, and (b) lateral and vertical CO2 plume migration. Simplified modeling approaches that are being developed in this research fall under three categories: (1) Simplified physics-based modeling (SPM), where only the most relevant physical processes are modeled, (2) Statistical-learning based modeling (SLM), where the simulator is replaced with a "response surface", and (3) Reduced-order method based modeling (RMM), where mathematical approximations reduce the computational burden. The system of interest is a single vertical well injecting supercritical CO2 into a 2-D layered reservoir-caprock system with variable layer permeabilities. In the first category (SPM), we use a set of well-designed full-physics compositional simulations to understand key processes and parameters affecting pressure propagation and buoyant plume migration. Based on these simulations, we have developed correlations for dimensionless injectivity as a function of the slope of fractional-flow curve, variance of layer permeability values, and the nature of vertical permeability arrangement. The same variables, along with a modified gravity number, can be used to develop a correlation for the total storage efficiency within the CO2 plume footprint. In the second category (SLM), we develop statistical "proxy models" using the simulation domain described previously with two different approaches: (a) classical Box-Behnken experimental design with a quadratic response surface fit, and (b) maximin Latin Hypercube sampling (LHS) based design with a Kriging metamodel fit using a quadratic trend and Gaussian correlation structure. For roughly the same number of
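
    The second (SLM) category described above builds a statistical proxy from a designed set of simulator runs; the sketch below pairs a Latin hypercube design from scipy with a Gaussian-process (Kriging-style) metamodel from scikit-learn, using a toy analytic function in place of the reservoir simulator and made-up input ranges.

```python
# Sketch: Latin hypercube sampling + Gaussian-process (Kriging) proxy for an expensive simulator.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

def toy_simulator(x):
    """Stand-in for the compositional reservoir simulator (response vs. two scaled inputs)."""
    perm_var, grav_num = x[:, 0], x[:, 1]
    return np.exp(-perm_var) + 0.5 * np.sin(3 * grav_num)

sampler = qmc.LatinHypercube(d=2, seed=0)
X_design = qmc.scale(sampler.random(n=30), [0.0, 0.0], [2.0, 1.0])   # 30 training runs
y_design = toy_simulator(X_design)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[0.5, 0.5]),
                              normalize_y=True).fit(X_design, y_design)

X_new = qmc.scale(sampler.random(n=5), [0.0, 0.0], [2.0, 1.0])
pred, std = gp.predict(X_new, return_std=True)
print(np.column_stack([pred, toy_simulator(X_new), std]))   # proxy vs. "simulator", plus uncertainty
```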

  7. Using dynamical uncertainty models estimating uncertainty bounds on power plant performance prediction

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Stoustrup, Jakob; Mataji, B.

    2007-01-01

    of the prediction error. These proposed dynamical uncertainty models result in an upper and lower bound on the predicted performance of the plant. The dynamical uncertainty models are used to estimate the uncertainty of the predicted performance of a coal-fired power plant. The proposed scheme, which uses dynamical...

  8. Performance prediction of industrial centrifuges using scale-down models.

    Science.gov (United States)

    Boychyn, M; Yim, S S S; Bulmer, M; More, J; Bracewell, D G; Hoare, M

    2004-12-01

    Computational fluid dynamics was used to model the high flow forces found in the feed zone of a multichamber-bowl centrifuge and reproduce these in a small, high-speed rotating disc device. Linking the device to scale-down centrifugation permitted good estimation of the performance of various continuous-flow centrifuges (disc stack, multichamber bowl, CARR Powerfuge) for shear-sensitive protein precipitates. Critically, the ultra scale-down centrifugation process proved to be a much more accurate predictor of production multichamber-bowl performance than was the pilot centrifuge.

  9. Modelling and Predicting Backstroke Start Performance Using Non-Linear And Linear Models

    Directory of Open Access Journals (Sweden)

    de Jesus Karla

    2018-03-01

    Our aim was to compare non-linear and linear mathematical model responses for backstroke start performance prediction. Ten swimmers randomly completed eight 15 m backstroke starts with feet over the wedge, four with hands on the highest horizontal and four on the vertical handgrip. Swimmers were videotaped using a dual media camera set-up, with the starts being performed over an instrumented block with four force plates. Artificial neural networks were applied to predict 5 m start time using kinematic and kinetic variables and to determine the accuracy of the mean absolute percentage error. Artificial neural networks predicted start time more robustly than the linear model with respect to changing training to the validation dataset for the vertical handgrip (3.95 ± 1.67 vs. 5.92 ± 3.27%). Artificial neural networks obtained a smaller mean absolute percentage error than the linear model in the horizontal (0.43 ± 0.19 vs. 0.98 ± 0.19%) and vertical handgrip (0.45 ± 0.19 vs. 1.38 ± 0.30%) using all input data. The best artificial neural network validation revealed a smaller mean absolute error than the linear model for the horizontal (0.007 vs. 0.04 s) and vertical handgrip (0.01 vs. 0.03 s). Artificial neural networks should be used for backstroke 5 m start time prediction due to the quite small differences among the elite level performances.

  10. Modelling and Predicting Backstroke Start Performance Using Non-Linear and Linear Models.

    Science.gov (United States)

    de Jesus, Karla; Ayala, Helon V H; de Jesus, Kelly; Coelho, Leandro Dos S; Medeiros, Alexandre I A; Abraldes, José A; Vaz, Mário A P; Fernandes, Ricardo J; Vilas-Boas, João Paulo

    2018-03-01

    Our aim was to compare non-linear and linear mathematical model responses for backstroke start performance prediction. Ten swimmers randomly completed eight 15 m backstroke starts with feet over the wedge, four with hands on the highest horizontal and four on the vertical handgrip. Swimmers were videotaped using a dual media camera set-up, with the starts being performed over an instrumented block with four force plates. Artificial neural networks were applied to predict 5 m start time using kinematic and kinetic variables and to determine the accuracy of the mean absolute percentage error. Artificial neural networks predicted start time more robustly than the linear model with respect to changing training to the validation dataset for the vertical handgrip (3.95 ± 1.67 vs. 5.92 ± 3.27%). Artificial neural networks obtained a smaller mean absolute percentage error than the linear model in the horizontal (0.43 ± 0.19 vs. 0.98 ± 0.19%) and vertical handgrip (0.45 ± 0.19 vs. 1.38 ± 0.30%) using all input data. The best artificial neural network validation revealed a smaller mean absolute error than the linear model for the horizontal (0.007 vs. 0.04 s) and vertical handgrip (0.01 vs. 0.03 s). Artificial neural networks should be used for backstroke 5 m start time prediction due to the quite small differences among the elite level performances.
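
    A schematic version of the comparison above: fit a small neural network and a linear model to the same inputs and compare them by mean absolute percentage error. Random synthetic data stands in for the force-plate and video-derived kinematic/kinetic variables, which are not reproduced here.

```python
# Sketch: ANN vs. linear regression compared by mean absolute percentage error (synthetic data).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(5)
X = rng.standard_normal((200, 8))                     # stand-ins for kinematic and kinetic variables
start_time = 2.0 + 0.1 * np.tanh(X @ rng.normal(size=8)) + 0.02 * rng.standard_normal(200)

X_tr, X_te, y_tr, y_te = train_test_split(X, start_time, test_size=0.3, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0).fit(X_tr, y_tr)
lin = LinearRegression().fit(X_tr, y_tr)

for name, model in [("ANN", ann), ("linear", lin)]:
    mape = 100 * mean_absolute_percentage_error(y_te, model.predict(X_te))
    print(f"{name}: MAPE = {mape:.2f}%")
```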

  11. Predicting Student Academic Performance in an Engineering Dynamics Course: A Comparison of Four Types of Predictive Mathematical Models

    Science.gov (United States)

    Huang, Shaobo; Fang, Ning

    2013-01-01

    Predicting student academic performance has long been an important research topic in many academic disciplines. The present study is the first study that develops and compares four types of mathematical models to predict student academic performance in engineering dynamics--a high-enrollment, high-impact, and core course that many engineering…

  12. Prediction of Human Glomerular Filtration Rate from Preterm Neonates to Adults: Evaluation of Predictive Performance of Several Empirical Models.

    Science.gov (United States)

    Mahmood, Iftekhar; Staschen, Carl-Michael

    2016-03-01

    The objective of this study was to evaluate the predictive performance of several allometric empirical models (body weight dependent, age dependent, fixed exponent 0.75, a data-dependent single exponent, and maturation models) to predict glomerular filtration rate (GFR) in preterm and term neonates, infants, children, and adults without any renal disease. In this analysis, the models were developed from GFR data obtained from inulin clearance (preterm neonates to adults; n = 93), and the predictive performance of these models was evaluated in 335 subjects (preterm neonates to adults). The primary end point was the prediction of GFR from the empirical allometric models and the comparison of the predicted GFR with measured GFR. A prediction error within ±30% was considered acceptable. Overall, the predictive performance of the four models (BDE, ADE, and two maturation models) for the prediction of mean GFR was good across all age groups, but the prediction of GFR in individual healthy subjects, especially in neonates and infants, was erratic and may be clinically unacceptable.

  13. Analysis on fuel thermal conductivity model of the computer code for performance prediction of fuel rods

    International Nuclear Information System (INIS)

    Li Hai; Huang Chen; Du Aibing; Xu Baoyu

    2014-01-01

    The thermal conductivity is one of the most important parameters in computer codes for the performance prediction of fuel rods. Several fuel thermal conductivity models used in foreign computer codes, including thermal conductivity models for MOX fuel and UO2 fuel, were introduced in this paper. Thermal conductivities were calculated using these models, and the results were compared and analyzed. Finally, a thermal conductivity model was recommended for the native computer code for the performance prediction of fuel rods in fast reactors. (authors)

  14. A Bayesian Performance Prediction Model for Mathematics Education: A Prototypical Approach for Effective Group Composition

    Science.gov (United States)

    Bekele, Rahel; McPherson, Maggie

    2011-01-01

    This research work presents a Bayesian Performance Prediction Model that was created in order to determine the strength of personality traits in predicting the level of mathematics performance of high school students in Addis Ababa. It is an automated tool that can be used to collect information from students for the purpose of effective group…

  15. A Prediction Model for Community Colleges Using Graduation Rate as the Performance Indicator

    Science.gov (United States)

    Moosai, Susan

    2010-01-01

    In this thesis a prediction model using graduation rate as the performance indicator is obtained for community colleges for three cohort years, 2003, 2004, and 2005 in the states of California, Florida, and Michigan. Multiple Regression analysis, using an aggregate of seven predictor variables, was employed in determining this prediction model.…

  16. Regional climate model performance and prediction of seasonal ...

    African Journals Online (AJOL)

    Knowledge about future climate provides valuable insights into how the challenges posed by climate change and variability can be addressed. ... Impacts Studies) in simulating rainfall and temperature over Uganda and also assess future impacts of climate when forced by an ensemble of two Global Climate Models (GCMs) ...

  17. The European computer model for optronic system performance prediction (ECOMOS)

    NARCIS (Netherlands)

    Kessler, S.; Bijl, P.; Labarre, L.; Repasi, E.; Wittenstein, W.; Bürsing, H.

    2017-01-01

    ECOMOS is a multinational effort within the framework of an EDA Project Arrangement. Its aim is to provide a generally accepted and harmonized European computer model for computing nominal Target Acquisition (TA) ranges of optronic imagers operating in the Visible or thermal Infrared (IR). The

  18. Predicting Adaptive Performance in Multicultural Teams: A Causal Model

    Science.gov (United States)

    2008-02-01

    International Personality Item Pool – Five-Factor Model (IPIP-FFM), http://ipip.ori.org/, were used in the present study to assess neuroticism as an... IPIP personality scale. Based on Matsumoto et al.’s (2001) results, only those items that exceeded their established criterion for factor loadings... IPIP) were combined in a composite score representing cultural adjustment (α = .75). As described below, the factor of emotion regulation will be

  19. Algorithms and Methods for High-Performance Model Predictive Control

    DEFF Research Database (Denmark)

    Frison, Gianluca

    routines employed in the numerical tests. The main focus of this thesis is on linear MPC problems. In this thesis, both the algorithms and their implementation are equally important. About the implementation, a novel implementation strategy for the dense linear algebra routines in embedded optimization...... is proposed, aiming at improving the computational performance in case of small matrices. About the algorithms, they are built on top of the proposed linear algebra, and they are tailored to exploit the high-level structure of the MPC problems, with special care on reducing the computational complexity....

  20. Assessing the performance of prediction models: a framework for traditional and novel measures

    DEFF Research Database (Denmark)

    Steyerberg, Ewout W; Vickers, Andrew J; Cook, Nancy R

    2010-01-01

    (NRI), and integrated discrimination improvement (IDI). Moreover, decision-analytic measures have been proposed, including decision curves to plot the net benefit achieved by making decisions based on model predictions.We aimed to define the role of these relatively novel approaches in the evaluation...... be important for a prediction model. Decision-analytic measures should be reported if the predictive model is to be used for clinical decisions. Other measures of performance may be warranted in specific applications, such as reclassification metrics to gain insight into the value of adding a novel predictor...... of the performance of prediction models. For illustration, we present a case study of predicting the presence of residual tumor versus benign tissue in patients with testicular cancer (n = 544 for model development, n = 273 for external validation).We suggest that reporting discrimination and calibration will always...

  1. A human capital predictive model for agent performance in contact centres

    Directory of Open Access Journals (Sweden)

    Chris Jacobs

    2011-10-01

    Research purpose: The primary focus of this article was to develop a theoretically derived human capital predictive model for agent performance in contact centres and Business Process Outsourcing (BPO) based on a review of current empirical research literature. Motivation for the study: The study was motivated by the need for a human capital predictive model that can predict agent and overall business performance. Research design: A non-empirical (theoretical) research paradigm was adopted for this study and, more specifically, a theory- or model-building approach was followed. A systematic review of published empirical research articles (for the period 2000–2009) in scholarly search portals was performed. Main findings: Eight building blocks of the human capital predictive model for agent performance in contact centres were identified. Forty-two of the human capital contact centre-related articles are detailed in this study. Key empirical findings suggest that person–environment fit, job demands-resources, human resources management practices, engagement, agent well-being, agent competence, turnover intention, and agent performance are related to contact centre performance. Practical/managerial implications: The human capital predictive model serves as an operational management model that has performance implications for agents and ultimately influences the contact centre’s overall business performance. Contribution/value-add: This research can contribute to the fields of human resource management (HRM), human capital and performance management within the contact centre and BPO environment.

  2. Building Predictive Human Performance Models of Skill Acquisition in a Data Entry Task

    National Research Council Canada - National Science Library

    Fu, Wai-Tat; Gonzalez, Cleotilde; Healy, Alice F; Kole, James A; Bourne, Jr., Lyle E

    2006-01-01

    .... Since data entry is a central component in most human-machine interaction, a predictive model of performance will provide useful information that informs interface design and effectiveness of training...

  3. A predictive model of flight crew performance in automated air traffic control and flight management operations

    Science.gov (United States)

    1995-01-01

    Prepared ca. 1995. This paper describes Air-MIDAS, a model of pilot performance in interaction with varied levels of automation in flight management operations. The model was used to predict the performance of a two-person flight crew responding to c...

  4. Assessing the performance of prediction models: A framework for traditional and novel measures

    NARCIS (Netherlands)

    E.W. Steyerberg (Ewout); A.J. Vickers (Andrew); N.R. Cook (Nancy); T.A. Gerds (Thomas); M. Gonen (Mithat); N. Obuchowski (Nancy); M. Pencina (Michael); M.W. Kattan (Michael)

    2010-01-01

    The performance of prediction models can be assessed using a variety of methods and metrics. Traditional measures for binary and survival outcomes include the Brier score to indicate overall model performance, the concordance (or c) statistic for discriminative ability (or area under the

  5. Risk Prediction Models for Incident Heart Failure: A Systematic Review of Methodology and Model Performance.

    Science.gov (United States)

    Sahle, Berhe W; Owen, Alice J; Chin, Ken Lee; Reid, Christopher M

    2017-09-01

    Numerous models predicting the risk of incident heart failure (HF) have been developed; however, evidence of their methodological rigor and reporting remains unclear. This study critically appraises the methods underpinning incident HF risk prediction models. EMBASE and PubMed were searched for articles published between 1990 and June 2016 that reported at least 1 multivariable model for prediction of HF. Model development information, including study design, variable coding, missing data, and predictor selection, was extracted. Nineteen studies reporting 40 risk prediction models were included. Existing models have acceptable discriminative ability (C-statistics > 0.70), although only 6 models were externally validated. Candidate variable selection was based on statistical significance from a univariate screening in 11 models, whereas it was unclear in 12 models. Continuous predictors were retained in 16 models, whereas it was unclear how continuous variables were handled in 16 models. Missing values were excluded in 19 of 23 models that reported missing data, and the number of events per variable was models. Only 2 models presented recommended regression equations. There was significant heterogeneity in discriminative ability of models with respect to age (P prediction models that had sufficient discriminative ability, although few are externally validated. Methods not recommended for the conduct and reporting of risk prediction modeling were frequently used, and resulting algorithms should be applied with caution. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Standardizing the performance evaluation of short-term wind prediction models

    DEFF Research Database (Denmark)

    Madsen, Henrik; Pinson, Pierre; Kariniotakis, G.

    2005-01-01

    Short-term wind power prediction is a primary requirement for efficient large-scale integration of wind generation in power systems and electricity markets. The choice of an appropriate prediction model among the numerous available models is not trivial, and has to be based on an objective...... evaluation of model performance. This paper proposes a standardized protocol for the evaluation of short-term wind-power prediction systems. A number of reference prediction models are also described, and their use for performance comparison is analysed. The use of the protocol is demonstrated using results...... from both on-shore and off-shore wind farms. The work was developed in the frame of the Anemos project (EU R&D project), where the protocol has been used to evaluate more than 10 prediction systems.
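
    Evaluation protocols of this kind typically score forecasts per look-ahead time with error measures normalised by installed capacity; the sketch below computes such horizon-wise NMAE/NRMSE on made-up forecast data and is an illustration of that idea, not the Anemos reference models or protocol code.

```python
# Sketch: horizon-wise NMAE and NRMSE, normalised by installed capacity (made-up forecast data).
import numpy as np

capacity = 20.0                                        # installed capacity [MW]
horizons = np.arange(1, 7)                             # look-ahead times [h]
rng = np.random.default_rng(6)
observed = capacity * rng.uniform(0.1, 0.9, size=(200, horizons.size))
predicted = observed + rng.normal(0, 0.05 * capacity * horizons, size=observed.shape)

errors = predicted - observed
for i, h in enumerate(horizons):
    nmae = np.mean(np.abs(errors[:, i])) / capacity * 100
    nrmse = np.sqrt(np.mean(errors[:, i] ** 2)) / capacity * 100
    print(f"+{h}h: NMAE = {nmae:.1f}%  NRMSE = {nrmse:.1f}%")
```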

  7. Stata Modules for Calculating Novel Predictive Performance Indices for Logistic Models.

    Science.gov (United States)

    Barkhordari, Mahnaz; Padyab, Mojgan; Hadaegh, Farzad; Azizi, Fereidoun; Bozorgmanesh, Mohammadreza

    2016-01-01

    Prediction is a fundamental part of the prevention of cardiovascular diseases (CVD). The development of prediction algorithms based on multivariate regression models emerged several decades ago. In parallel with predictive model development, biomarker research emerged on an impressively large scale. The key question is how best to assess and quantify the improvement in risk prediction offered by new biomarkers or, more basically, how to assess the performance of a risk prediction model. Discrimination, calibration, and added predictive value have recently been suggested for use when comparing the predictive performance of predictive models with and without novel biomarkers. A lack of user-friendly statistical software has restricted implementation of these novel model assessment methods when examining novel biomarkers. We intended, thus, to develop user-friendly software that could be used by researchers with few programming skills. We have written a Stata command that is intended to help researchers obtain cut-point-free and cut-point-based net reclassification improvement (NRI) indices and relative and absolute integrated discrimination improvement (IDI) indices for logistic-based regression analyses. We applied the commands to real data on women participating in the Tehran Lipid and Glucose Study (TLGS) to examine whether information on a family history of premature CVD, waist circumference, and fasting plasma glucose can improve the predictive performance of the Framingham "general CVD risk" algorithm. The command is addpred, for logistic regression models. The Stata package provided herein can encourage the use of novel methods in examining the predictive capacity of the ever-emerging plethora of novel biomarkers.
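
    For readers outside Stata, the cut-point-free (continuous) NRI and the absolute IDI that such a command reports can be computed directly from the two models' predicted probabilities; below is a hedged Python sketch of those standard formulas on made-up inputs (an illustration of the metrics, not a port of the addpred module).

```python
# Sketch: continuous NRI and absolute IDI from predicted risks of a baseline and an extended model.
import numpy as np

def continuous_nri(y, p_old, p_new):
    """Category-free NRI: net proportion of events moving up plus non-events moving down."""
    y, up = np.asarray(y, bool), np.asarray(p_new) > np.asarray(p_old)
    down = np.asarray(p_new) < np.asarray(p_old)
    nri_events = up[y].mean() - down[y].mean()
    nri_nonevents = down[~y].mean() - up[~y].mean()
    return nri_events + nri_nonevents

def absolute_idi(y, p_old, p_new):
    """IDI: change in discrimination slope (mean risk in events minus mean risk in non-events)."""
    y = np.asarray(y, bool)
    slope_new = np.mean(np.asarray(p_new)[y]) - np.mean(np.asarray(p_new)[~y])
    slope_old = np.mean(np.asarray(p_old)[y]) - np.mean(np.asarray(p_old)[~y])
    return slope_new - slope_old

# Illustrative inputs: outcome and predicted risks from models without/with a new biomarker.
y = np.array([1, 0, 0, 1, 0, 1, 0, 0, 1, 0])
p_old = np.array([0.6, 0.3, 0.2, 0.5, 0.4, 0.4, 0.1, 0.3, 0.7, 0.2])
p_new = np.array([0.7, 0.2, 0.2, 0.6, 0.3, 0.5, 0.1, 0.4, 0.8, 0.1])
print("continuous NRI:", round(continuous_nri(y, p_old, p_new), 3))
print("absolute IDI:", round(absolute_idi(y, p_old, p_new), 3))
```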

  8. Computerized heat balance models to predict performance of operating nuclear power plants

    International Nuclear Information System (INIS)

    Breeding, C.L.; Carter, J.C.; Schaefer, R.C.

    1983-01-01

    The use of computerized heat balance models has greatly enhanced the decision making ability of TVA's Division of Nuclear Power. These models are utilized to predict the effects of various operating modes and to analyze changes in plant performance resulting from turbine cycle equipment modifications with greater speed and accuracy than was possible before. Computer models have been successfully used to optimize plant output by predicting the effects of abnormal condenser circulating water conditions. They were utilized to predict the degradation in performance resulting from installation of a baffle plate assembly to replace damaged low-pressure blading, thereby providing timely information allowing an optimal economic judgement as to when to replace the blading. Future use will be for routine performance test analysis. This paper presents the benefits of utility use of computerized heat balance models

  9. Interactions of Team Mental Models and Monitoring Behaviors Predict Team Performance in Simulated Anesthesia Inductions

    Science.gov (United States)

    Burtscher, Michael J.; Kolbe, Michaela; Wacker, Johannes; Manser, Tanja

    2011-01-01

    In the present study, we investigated how two team mental model properties (similarity vs. accuracy) and two forms of monitoring behavior (team vs. systems) interacted to predict team performance in anesthesia. In particular, we were interested in whether the relationship between monitoring behavior and team performance was moderated by team…

  10. Evaluation of wavelet performance via an ANN-based electrical conductivity prediction model.

    Science.gov (United States)

    Ravansalar, Masoud; Rajaee, Taher

    2015-06-01

    The prediction of water quality parameters plays an important role in water resources and environmental systems. Electrical conductivity (EC) is one of the important water quality indicators used for estimating the amount of mineralization. This study describes the application of artificial neural network (ANN) and hybrid wavelet-neural network (WANN) models to predict the monthly EC of the Asi River at the Demirköprü gauging station, Turkey. In the proposed hybrid WANN model, the discrete wavelet transform (DWT) was linked to the ANN model for EC prediction using a feed-forward back propagation (FFBP) training algorithm. For this purpose, the original time series of monthly EC and discharge (Q) values were decomposed into several sub-time series by DWT, and these sub-time series were then presented to the ANN model as an input dataset to predict the monthly EC. Comparing the values predicted by the models indicated that the performance of the proposed WANN model was better than the conventional ANN model. The coefficients of determination (R2) were 0.949 and 0.381 for the WANN and ANN models, respectively. The results indicate that the peak EC values predicted by the WANN model are closer to the observed values, and this model simulates the hysteresis phenomena at an acceptable level as well.
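
    A minimal sketch of the DWT-plus-ANN idea described above: decompose the driver series into wavelet sub-series, then feed those sub-series to a small neural network. Synthetic monthly data and an arbitrary db4 wavelet stand in for the Asi River records and the study's chosen mother wavelet.

```python
# Sketch: wavelet decomposition of an input series feeding an ANN (synthetic data, db4 wavelet).
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
months = np.arange(240)
discharge = 10 + 3 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 0.5, months.size)
ec = 300 - 5 * discharge + rng.normal(0, 2, months.size)      # target: electrical conductivity

# Reconstruct one sub-series per decomposition level by zeroing all other coefficients.
coeffs = pywt.wavedec(discharge, "db4", level=3)
subseries = []
for i in range(len(coeffs)):
    keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    subseries.append(pywt.waverec(keep, "db4")[: months.size])

X = np.column_stack(subseries)                                # DWT sub-series as ANN inputs
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0).fit(X, ec)
print("training R^2:", round(model.score(X, ec), 3))
```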

  11. Predictive performance of DSGE model for small open economy – the case study of Czech Republic

    Directory of Open Access Journals (Sweden)

    Tomáš Jeřábek

    2013-01-01

    Full Text Available Multivariate time series forecasting is applied in a wide range of economic activities related to regional competitiveness and is the basis of almost all macroeconomic analysis. From the point of view of political practice, it is appropriate to seek a model that achieves good prediction performance for all the variables. The monitored variables were GDP growth, inflation and interest rates. The paper focuses on evaluating the prediction performance of a small open economy New Keynesian DSGE model for the Czech Republic, whose parameters are estimated with Bayesian methods, against different types of Bayesian models and a naive random walk model. The performance of the models is assessed using historical data covering the domestic economy and the foreign economy, which is represented by the countries of the Eurozone. The results indicate that the DSGE model generates estimates that are competitive with the other models used in this paper.

  12. A Unified Model of Performance for Predicting the Effects of Sleep and Caffeine

    Science.gov (United States)

    Ramakrishnan, Sridhar; Wesensten, Nancy J.; Kamimori, Gary H.; Moon, James E.; Balkin, Thomas J.; Reifman, Jaques

    2016-01-01

    Study Objectives: Existing mathematical models of neurobehavioral performance cannot predict the beneficial effects of caffeine across the spectrum of sleep loss conditions, limiting their practical utility. Here, we closed this research gap by integrating a model of caffeine effects with the recently validated unified model of performance (UMP) into a single, unified modeling framework. We then assessed the accuracy of this new UMP in predicting performance across multiple studies. Methods: We hypothesized that the pharmacodynamics of caffeine vary similarly during both wakefulness and sleep, and that caffeine has a multiplicative effect on performance. Accordingly, to represent the effects of caffeine in the UMP, we multiplied a dose-dependent caffeine factor (which accounts for the pharmacokinetics and pharmacodynamics of caffeine) by the performance estimated in the absence of caffeine. We assessed the UMP predictions in 14 distinct laboratory- and field-study conditions, including 7 different sleep-loss schedules (from 5 h of sleep per night to continuous sleep loss for 85 h) and 6 different caffeine doses (from placebo to repeated 200 mg doses to a single dose of 600 mg). Results: The UMP accurately predicted group-average psychomotor vigilance task performance data across the different sleep loss and caffeine conditions (errors < 6%), with more accurate predictions for milder sleep loss conditions than for more severe cases. Overall, accounting for the effects of caffeine resulted in improved predictions (after caffeine consumption) by up to 70%. Conclusions: The UMP provides the first comprehensive tool for accurate selection of combinations of sleep schedules and caffeine countermeasure strategies to optimize neurobehavioral performance. Citation: Ramakrishnan S, Wesensten NJ, Kamimori GH, Moon JE, Balkin TJ, Reifman J. A unified model of performance for predicting the effects of sleep and caffeine. SLEEP 2016;39(10):1827–1841. PMID:27397562
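    The multiplicative structure described in this record can be shown with a toy sketch. The functional forms and parameter values below are illustrative assumptions only, not the published UMP equations.

        # Illustrative sketch of a multiplicative caffeine factor applied to a baseline
        # performance prediction. Functional forms and parameters are assumptions for
        # illustration only, not the published UMP equations.
        import numpy as np

        def caffeine_factor(t_hours, dose_mg, t_dose=0.0, ka=2.0, ke=0.2, m=0.01):
            """Dose-dependent factor g(t) <= 1 from a toy one-compartment PK profile."""
            dt = np.maximum(t_hours - t_dose, 0.0)
            conc = dose_mg * (np.exp(-ke * dt) - np.exp(-ka * dt)) * (dt > 0)  # PK profile
            return 1.0 / (1.0 + m * conc)   # PD: higher concentration -> smaller factor

        def baseline_lapses(t_hours, s=0.5):
            """Toy homeostatic rise in PVT lapses during extended wakefulness."""
            return 2.0 + s * t_hours

        t = np.linspace(0, 48, 97)                     # 48 h of continuous wakefulness
        predicted = baseline_lapses(t) * caffeine_factor(t, dose_mg=200, t_dose=24)
        print("lapses at 24 h (just before dose):", round(baseline_lapses(24.0), 1))
        print("lapses at 26 h (with caffeine):   ", round(float(np.interp(26, t, predicted)), 1))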

  13. The better model to predict and improve pediatric health care quality: performance or importance-performance?

    Science.gov (United States)

    Olsen, Rebecca M; Bryant, Carol A; McDermott, Robert J; Ortinau, David

    2013-01-01

    The perpetual search for ways to improve pediatric health care quality has resulted in a multitude of assessments and strategies; however, there is little research evidence as to their conditions for maximum effectiveness. A major reason for the lack of evaluation research and successful quality improvement initiatives is the methodological challenge of measuring quality from the parent perspective. Comparison of performance-only and importance-performance models was done to determine the better predictor of pediatric health care quality and more successful method for improving the quality of care provided to children. Fourteen pediatric health care centers serving approximately 250,000 patients in 70,000 households in three West Central Florida counties were studied. A cross-sectional design was used to determine the importance and performance of 50 pediatric health care attributes and four global assessments of pediatric health care quality. Exploratory factor analysis revealed five dimensions of care (physician care, access, customer service, timeliness of services, and health care facility). Hierarchical multiple regression compared the performance-only and the importance-performance models. In-depth interviews, participant observations, and a direct cognitive structural analysis identified 50 health care attributes included in a mailed survey to parents(n = 1,030). The tailored design method guided survey development and data collection. The importance-performance multiplicative additive model was a better predictor of pediatric health care quality. Attribute importance moderates performance and quality, making the importance-performance model superior for measuring and providing a deeper understanding of pediatric health care quality and a better method for improving the quality of care provided to children. Regardless of attribute performance, if the level of attribute importance is not taken into consideration, health care organizations may spend valuable

  14. Deep Recurrent Model for Server Load and Performance Prediction in Data Center

    Directory of Open Access Journals (Sweden)

    Zheng Huang

    2017-01-01

    Full Text Available Recurrent neural networks (RNN) have been widely applied to many sequential tagging tasks such as natural language processing (NLP) and time series analysis, and it has been shown that RNNs work well in those areas. In this paper, we propose using an RNN with long short-term memory (LSTM) units for server load and performance prediction. Classical methods for performance prediction focus on building a relation between performance and the time domain, which relies on many unrealistic hypotheses. Our model is built based on events (user requests), which are the root cause of server performance. We predict the performance of the servers using RNN-LSTM by analyzing the logs of servers in a data center, which contain the users' access sequences. Previous work on workload prediction could not generate detailed simulated workloads, which are useful for testing the working condition of servers. Our method provides a new way to reproduce user request sequences to solve this problem by using RNN-LSTM. Experimental results show that our models achieve good performance in generating load and predicting performance on a data set logged from an online service. We performed experiments with the nginx web server and the mysql database server, and our methods can be easily applied to other servers in a data center.
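    A minimal sketch of an LSTM load predictor in the spirit of this record, assuming PyTorch; the request-count series, window width, and network size are synthetic placeholders rather than the paper's data-center logs.

        # Minimal sketch of an LSTM predictor for server load from a request-count
        # sequence, assuming PyTorch; the data here are synthetic, not the paper's logs.
        import torch
        import torch.nn as nn

        torch.manual_seed(0)
        # Synthetic per-minute request counts with a daily-like cycle.
        t = torch.arange(0, 2000, dtype=torch.float32)
        load = 100 + 30 * torch.sin(2 * torch.pi * t / 1440) + 5 * torch.randn_like(t)

        def windows(series, width=60):
            X = torch.stack([series[i:i + width] for i in range(len(series) - width)])
            y = series[width:]
            return X.unsqueeze(-1), y.unsqueeze(-1)   # (N, width, 1), (N, 1)

        X, y = windows((load - load.mean()) / load.std())

        class LoadLSTM(nn.Module):
            def __init__(self, hidden=32):
                super().__init__()
                self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
                self.head = nn.Linear(hidden, 1)
            def forward(self, x):
                out, _ = self.lstm(x)
                return self.head(out[:, -1, :])        # predict the next step

        model = LoadLSTM()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for epoch in range(5):                          # a few epochs for illustration
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(X), y)
            loss.backward()
            opt.step()
            print(f"epoch {epoch}: mse={loss.item():.4f}")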

  15. Improved Fuzzy Modelling to Predict the Academic Performance of Distance Education Students

    Directory of Open Access Journals (Sweden)

    Osman Yildiz

    2013-12-01

    Full Text Available It is essential to predict distance education students’ year-end academic performance early during the course of the semester and to take precautions using such prediction-based information. This will, in particular, help enhance their academic performance and, therefore, improve the overall educational quality. The present study was on the development of a mathematical model intended to predict distance education students’ year-end academic performance using the first eight-week data on the learning management system. First, two fuzzy models were constructed, namely the classical fuzzy model and the expert fuzzy model, the latter being based on expert opinion. Afterwards, a gene-fuzzy model was developed optimizing membership functions through genetic algorithm. The data on distance education were collected through Moodle, an open source learning management system. The data were on a total of 218 students who enrolled in Basic Computer Sciences in 2012. The input data consisted of the following variables: When a student logged on to the system for the last time after the content of a lesson was uploaded, how often he/she logged on to the system, how long he/she stayed online in the last login, what score he/she got in the quiz taken in Week 4, and what score he/she got in the midterm exam taken in Week 8. A comparison was made among the predictions of the three models concerning the students’ year-end academic performance.

  16. Multi-Site Validation of the SWAT Model on the Bani Catchment: Model Performance and Predictive Uncertainty

    Directory of Open Access Journals (Sweden)

    Jamilatou Chaibou Begou

    2016-04-01

    Full Text Available The objective of this study was to assess the performance and predictive uncertainty of the Soil and Water Assessment Tool (SWAT model on the Bani River Basin, at catchment and subcatchment levels. The SWAT model was calibrated using the Generalized Likelihood Uncertainty Estimation (GLUE approach. Potential Evapotranspiration (PET and biomass were considered in the verification of model outputs accuracy. Global Sensitivity Analysis (GSA was used for identifying important model parameters. Results indicated a good performance of the global model at daily as well as monthly time steps with adequate predictive uncertainty. PET was found to be overestimated but biomass was better predicted in agricultural land and forest. Surface runoff represents the dominant process on streamflow generation in that region. Individual calibration at subcatchment scale yielded better performance than when the global parameter sets were applied. These results are very useful and provide a support to further studies on regionalization to make prediction in ungauged basins.

  17. Phenobarbital in intensive care unit pediatric population: predictive performances of population pharmacokinetic model.

    Science.gov (United States)

    Marsot, Amélie; Michel, Fabrice; Chasseloup, Estelle; Paut, Olivier; Guilhaumou, Romain; Blin, Olivier

    2017-10-01

    An external evaluation of the phenobarbital population pharmacokinetic model described by Marsot et al. was performed in a pediatric intensive care unit. Model evaluation is an important issue for dose adjustment. This external evaluation should allow confirming the proposed dosage adaptation and extending these recommendations to the entire intensive care pediatric population. The external evaluation of the published population pharmacokinetic model of Marsot et al. was carried out on a new retrospective dataset of 35 patients hospitalized in a pediatric intensive care unit. The published population pharmacokinetic model was implemented in NONMEM 7.3. Predictive performance was assessed by quantifying the bias and inaccuracy of model predictions. Normalized prediction distribution errors (NPDE) and visual predictive checks (VPC) were also evaluated. A total of 35 patients were studied, with a mean age of 33.5 weeks (range: 12 days-16 years) and a mean weight of 12.6 kg (range: 2.7-70.0 kg). The model predicted the observed phenobarbital concentrations with reasonable bias and inaccuracy. The median prediction error was 3.03% (95% CI: -8.52 to 58.12%), and the median absolute prediction error was 26.20% (95% CI: 13.07-75.59%). No trends in NPDE and VPC were observed. The model previously proposed by Marsot et al. in neonates hospitalized in the intensive care unit was externally validated for IV infusion administration. The model-based dosing regimen was extended to the entire pediatric intensive care unit population to optimize treatment. Due to inter- and intraindividual variability in the pharmacokinetic model, this dosing regimen should be combined with therapeutic drug monitoring. © 2017 Société Française de Pharmacologie et de Thérapeutique.
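    The bias and inaccuracy metrics reported above reduce to simple formulas. The sketch below computes the median prediction error and median absolute prediction error in Python for hypothetical observed and model-predicted concentrations (the numbers are placeholders, not the study data).

        # Sketch of the bias/inaccuracy metrics used for external evaluation of a
        # population PK model: median prediction error (bias) and median absolute
        # prediction error (inaccuracy). Concentrations below are synthetic placeholders.
        import numpy as np

        observed = np.array([18.0, 22.5, 30.1, 25.4, 15.2, 40.3])   # mg/L, hypothetical
        predicted = np.array([17.1, 24.0, 27.8, 26.9, 16.0, 35.5])  # model predictions

        pe = 100.0 * (predicted - observed) / observed      # prediction error (%)
        mpe = np.median(pe)                                  # bias
        mape = np.median(np.abs(pe))                         # inaccuracy
        print(f"median prediction error = {mpe:.2f}%")
        print(f"median absolute prediction error = {mape:.2f}%")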

  18. Development of Simple Drying Model for Performance Prediction of Solar Dryer: Theoretical Analysis

    DEFF Research Database (Denmark)

    Singh, Shobhana; Kumar, Subodh

    2012-01-01

    An analytical moisture diffusion model which considers the influence of external resistance to mass transfer is developed to predict the thermal performance of a dryer system. The moisture diffusion coefficient, Deff, that is necessary to evaluate the prediction model has been determined in terms of experimental drying parameters. A laboratory model of a mixed-mode solar dryer system is tested with cylindrical potato samples of thickness 5 and 18 mm under simulated indoor conditions. The potato samples were dried at a constant absorbed thermal energy of 750 W/m2 and an air mass flow rate of 0.011 kg/sec. The proposed model with the computed moisture diffusion coefficient, Deff, has been utilized to predict the dimensionless moisture content, φ, for each test condition of a given dryer design. In order to validate the model, statistical test methods such as mean absolute error (MAE) and root mean square error (RMSE)...

  19. Performance of ANFIS versus MLP-NN dissolved oxygen prediction models in water quality monitoring.

    Science.gov (United States)

    Najah, A; El-Shafie, A; Karim, O A; El-Shafie, Amr H

    2014-02-01

    We discuss the accuracy and performance of the adaptive neuro-fuzzy inference system (ANFIS) in training and prediction of dissolved oxygen (DO) concentrations. The model was used to analyze historical data generated through continuous monitoring of water quality parameters at several stations on the Johor River to predict DO concentrations. Four water quality parameters were selected for ANFIS modeling, including temperature, pH, nitrate (NO3) concentration, and ammoniacal nitrogen concentration (NH3-NL). Sensitivity analysis was performed to evaluate the effects of the input parameters. The inputs with the greatest effect were those related to oxygen content (NO3) or oxygen demand (NH3-NL). Temperature was the parameter with the least effect, whereas pH provided the lowest contribution to the proposed model. To evaluate the performance of the model, three statistical indices were used: the coefficient of determination (R (2)), the mean absolute prediction error, and the correlation coefficient. The performance of the ANFIS model was compared with an artificial neural network model. The ANFIS model was capable of providing greater accuracy, particularly in the case of extreme events.

  20. Dynamic Model of Centrifugal Compressor for Prediction of Surge Evolution and Performance Variations

    International Nuclear Information System (INIS)

    Jung, Mooncheong; Han, Jaeyoung; Yu, Sangseok

    2016-01-01

    When a control algorithm is developed to protect automotive compressors from surge, the simulation model typically relies on an empirically determined look-up table. However, it is difficult for a control-oriented empirical model to capture the surge characteristics of the supercharger. In this study, a dynamic supercharger model is developed to predict the performance of a centrifugal compressor under dynamic load follow-up. The model is developed in the Simulink® environment, and is composed of a compressor, throttle body, valves, and chamber. Greitzer's compressor model is used, and the geometric parameters are obtained from the actual supercharger. The simulation model is validated with experimental data. It is shown that compressor surge is effectively predicted by this dynamic compressor model under various operating conditions.
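    The Greitzer lumped-parameter model referred to above can be sketched in a few lines. The cubic compressor characteristic, throttle law, and parameter values below are illustrative assumptions, not the geometry of the supercharger in the paper, and SciPy is assumed for the integration.

        # Sketch of the Greitzer lumped-parameter surge model integrated with SciPy;
        # the cubic compressor characteristic and parameter values are illustrative
        # assumptions, not the geometry of the supercharger in the paper.
        import numpy as np
        from scipy.integrate import solve_ivp

        B = 1.8           # Greitzer stability parameter (large B favours deep surge)
        k_throttle = 0.6  # throttle setting

        def psi_c(phi):
            """Cubic steady-state compressor characteristic (Moore-Greitzer form)."""
            return 0.3 + 0.9 * (1 + 1.5 * (phi / 0.5 - 1) - 0.5 * (phi / 0.5 - 1) ** 3)

        def rhs(xi, state):
            phi, psi = state                       # flow and pressure-rise coefficients
            dphi = B * (psi_c(phi) - psi)          # compressor duct momentum balance
            dpsi = (phi - k_throttle * np.sqrt(max(psi, 0.0))) / B   # plenum mass balance
            return [dphi, dpsi]

        sol = solve_ivp(rhs, (0.0, 200.0), [0.5, 0.66], max_step=0.05)
        print("flow coefficient range:", sol.y[0].min().round(3), "to", sol.y[0].max().round(3))
        # Large oscillations (including flow reversal, phi < 0) indicate surge cycles.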

  1. A Unified Model of Performance for Predicting the Effects of Sleep and Caffeine.

    Science.gov (United States)

    Ramakrishnan, Sridhar; Wesensten, Nancy J; Kamimori, Gary H; Moon, James E; Balkin, Thomas J; Reifman, Jaques

    2016-10-01

    Existing mathematical models of neurobehavioral performance cannot predict the beneficial effects of caffeine across the spectrum of sleep loss conditions, limiting their practical utility. Here, we closed this research gap by integrating a model of caffeine effects with the recently validated unified model of performance (UMP) into a single, unified modeling framework. We then assessed the accuracy of this new UMP in predicting performance across multiple studies. We hypothesized that the pharmacodynamics of caffeine vary similarly during both wakefulness and sleep, and that caffeine has a multiplicative effect on performance. Accordingly, to represent the effects of caffeine in the UMP, we multiplied a dose-dependent caffeine factor (which accounts for the pharmacokinetics and pharmacodynamics of caffeine) by the performance estimated in the absence of caffeine. We assessed the UMP predictions in 14 distinct laboratory- and field-study conditions, including 7 different sleep-loss schedules (from 5 h of sleep per night to continuous sleep loss for 85 h) and 6 different caffeine doses (from placebo to repeated 200 mg doses to a single dose of 600 mg). The UMP accurately predicted group-average psychomotor vigilance task performance data across the different sleep loss and caffeine conditions (errors < 6%), with more accurate predictions for milder sleep loss conditions than for more severe cases. Overall, accounting for the effects of caffeine resulted in improved predictions (after caffeine consumption) by up to 70%. The UMP provides the first comprehensive tool for accurate selection of combinations of sleep schedules and caffeine countermeasure strategies to optimize neurobehavioral performance. © 2016 Associated Professional Sleep Societies, LLC.

  2. Review and evaluation of performance measures for survival prediction models in external validation settings.

    Science.gov (United States)

    Rahman, M Shafiqur; Ambler, Gareth; Choodari-Oskooei, Babak; Omar, Rumana Z

    2017-04-18

    When developing a prediction model for survival data it is essential to validate its performance in external validation settings using appropriate performance measures. Although a number of such measures have been proposed, there is only limited guidance regarding their use in the context of model validation. This paper reviewed and evaluated a wide range of performance measures to provide some guidelines for their use in practice. An extensive simulation study based on two clinical datasets was conducted to investigate the performance of the measures in external validation settings. Measures were selected from categories that assess the overall performance, discrimination and calibration of a survival prediction model. Some of these have been modified to allow their use with validation data, and a case study is provided to describe how these measures can be estimated in practice. The measures were evaluated with respect to their robustness to censoring and ease of interpretation. All measures are implemented, or are straightforward to implement, in statistical software. Most of the performance measures were reasonably robust to moderate levels of censoring. One exception was Harrell's concordance measure which tended to increase as censoring increased. We recommend that Uno's concordance measure is used to quantify concordance when there are moderate levels of censoring. Alternatively, Gönen and Heller's measure could be considered, especially if censoring is very high, but we suggest that the prediction model is re-calibrated first. We also recommend that Royston's D is routinely reported to assess discrimination since it has an appealing interpretation. The calibration slope is useful for both internal and external validation settings and recommended to report routinely. Our recommendation would be to use any of the predictive accuracy measures and provide the corresponding predictive accuracy curves. In addition, we recommend to investigate the characteristics
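    Two of the measures discussed in this record can be computed directly from a validation dataset. The sketch below, assuming the lifelines package and a synthetic validation set, estimates Harrell's concordance and the calibration slope; the paper itself covers many more measures (including Uno's C and Royston's D), which are not reproduced here.

        # Sketch of two external-validation measures for a survival prediction model:
        # Harrell's concordance and the calibration slope, assuming the lifelines
        # package and a synthetic validation set.
        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter
        from lifelines.utils import concordance_index

        rng = np.random.default_rng(2)
        n = 500
        prognostic_index = rng.normal(0, 1, n)                 # from a previously fitted model
        true_time = rng.exponential(np.exp(-prognostic_index))
        censor_time = rng.exponential(1.5, n)
        df = pd.DataFrame({
            "time": np.minimum(true_time, censor_time),
            "event": (true_time <= censor_time).astype(int),
            "pi": prognostic_index,
        })

        # Harrell's C: higher predicted risk should correspond to shorter survival.
        c = concordance_index(df["time"], -df["pi"], df["event"])

        # Calibration slope: coefficient of the prognostic index refitted on validation data
        # (a value near 1.0 indicates predictions are neither too extreme nor too moderate).
        slope = CoxPHFitter().fit(df, "time", "event").params_["pi"]
        print(f"Harrell's C = {c:.3f}, calibration slope = {slope:.2f}")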

  3. A prediction model to identify hospitalised, older adults with reduced physical performance

    DEFF Research Database (Denmark)

    Hansen Bruun, Inge; Maribo, Thomas; Nørgaard, Birgitte

    2017-01-01

    , but 76 patients (65%) had persistent reduced physical performance when compared to their baseline (30s-CST ≤ 8). The number of potential predictors was reduced in order to create a simplified prediction model based on 4 variables, namely the use of a walking aid before hospitalisation (score = 1.5), a 30...

  4. Predictive Modeling of Student Performances for Retention and Academic Support in a Diagnostic Medical Sonography Program

    Science.gov (United States)

    Borghese, Peter; Lacey, Sandi

    2014-01-01

    As part of a retention and academic support program, data was collected to develop a predictive model of student performances in core classes in a Diagnostic Medical Sonography (DMS) program. The research goal was to identify students likely to have difficulty with coursework and provide supplemental tutorial support. The focus was on the…

  5. Re-parametrization of a swine model to predict growth performance of broilers

    OpenAIRE

    Dukhta, G.; van Milgen, Jacob; Kövér, G.; Halas, V.

    2017-01-01

    The aim of the study was to investigate whether a pig growth model is suitable to be modified and adapted for broilers. As monogastric animals, pigs and poultry share many similarities in their digestion and metabolism; many structures (body protein and lipid stores) and the nutrient flows of the underlying metabolic pathways are similar among species. For that purpose, the InraPorc model was used as a basis to predict growth performance and body composition at slaughter in broilers. First...

  6. Prediction of Cognitive Performance and Subjective Sleepiness Using a Model of Arousal Dynamics.

    Science.gov (United States)

    Postnova, Svetlana; Lockley, Steven W; Robinson, Peter A

    2018-04-01

    A model of arousal dynamics is applied to predict objective performance and subjective sleepiness measures, including lapses and reaction time on a visual Performance Vigilance Test (vPVT), performance on a mathematical addition task (ADD), and the Karolinska Sleepiness Scale (KSS). The arousal dynamics model comprises a physiologically based flip-flop switch between the wake- and sleep-active neuronal populations and a dynamic circadian oscillator, thus allowing prediction of sleep propensity. Published group-level experimental constant routine (CR) and forced desynchrony (FD) data are used to calibrate the model to predict performance and sleepiness. Only studies conducted under dim light were used, covering performance measures during CR and FD protocols with sleep-wake cycles ranging from 20 to 42.85 h and a 2:1 wake-to-sleep ratio. New metrics relating model outputs to performance and sleepiness data are developed and tested against group average outcomes from 7 (vPVT lapses), 5 (ADD), and 8 (KSS) experimental protocols, showing good quantitative and qualitative agreement with the data (root mean squared error of 0.38, 0.19, and 0.35, respectively). The weights of the homeostatic and circadian effects are found to be different between the measures, with KSS having a stronger homeostatic influence compared with the objective measures of performance. Using FD data in addition to CR data allows us to challenge the model in conditions of both acute sleep deprivation and structured circadian misalignment, ensuring that the role of the circadian and homeostatic drives in performance is properly captured.

  7. Mortality prediction models for pediatric intensive care: comparison of overall and subgroup specific performance.

    Science.gov (United States)

    Visser, Idse H E; Hazelzet, Jan A; Albers, Marcel J I J; Verlaat, Carin W M; Hogenbirk, Karin; van Woensel, Job B; van Heerde, Marc; van Waardenburg, Dick A; Jansen, Nicolaas J G; Steyerberg, Ewout W

    2013-05-01

    To validate the paediatric index of mortality (PIM) and pediatric risk of mortality (PRISM) models within the overall population as well as in specific subgroups in pediatric intensive care units (PICUs). Variants of the PIM and PRISM prediction models were compared with respect to calibration (agreement between predicted risks and observed mortality) and discrimination (area under the receiver operating characteristic curve, AUC). We considered performance in the overall study population and in subgroups defined by diagnoses, age and urgency at admission, and length of stay (LoS) at the PICU. We analyzed data from consecutive patients younger than 16 years admitted to the eight PICUs in the Netherlands between February 2006 and October 2009. Patients referred to another ICU or deceased within 2 h after admission were excluded. A total of 12,040 admissions were included, with 412 deaths. Variants of PIM2 were best calibrated. All models discriminated well, and mortality was predicted accurately in most (12 out of 14) diagnostic categories. Discrimination was poorer for all models for patients with a LoS of more than 6 days at the PICU. All models also discriminated well in most subgroups, including neonates, but had difficulties predicting mortality for patients staying more than 6 days at the PICU. In a western European setting, both the PIM2(-ANZ06) and a recalibrated version of PRISM3-24 are suited for overall individualized risk prediction.

  8. Integrating geophysics and hydrology for reducing the uncertainty of groundwater model predictions and improved prediction performance

    DEFF Research Database (Denmark)

    Christensen, Nikolaj Kruse; Christensen, Steen; Ferre, Ty

    constructed from geological and hydrological data. However, geophysical data are increasingly used to inform hydrogeologic models because they are collected at lower cost and much higher density than geological and hydrological data. Despite increased use of geophysics, it is still unclear whether......, ‘true’, hydrogeological and geophysical systems. The two types of ‘true’ systems can be used together with corresponding forward codes to generate hydrological and geophysical datasets, respectively. These synthetic datasets can be interpreted using any hydrogeophysical inversion scheme...

  9. 10 km running performance predicted by a multiple linear regression model with allometrically adjusted variables.

    Science.gov (United States)

    Abad, Cesar C C; Barros, Ronaldo V; Bertuzzi, Romulo; Gagliardi, João F L; Lima-Silva, Adriano E; Lambert, Mike I; Pires, Flavio O

    2016-06-01

    The aim of this study was to verify the power of VO2max, peak treadmill running velocity (PTV), and running economy (RE), unadjusted or allometrically adjusted, in predicting 10 km running performance. Eighteen male endurance runners performed: 1) an incremental test to exhaustion to determine VO2max and PTV; 2) a constant submaximal run at 12 km·h-1 on an outdoor track for RE determination; and 3) a 10 km running race. Unadjusted (VO2max, PTV and RE) and adjusted variables (VO2max0.72, PTV0.72 and RE0.60) were investigated through independent multiple regression models to predict 10 km running race time. There were no significant correlations between 10 km running time and either the adjusted or unadjusted VO2max. Significant correlations, with values above 0.84 and statistical power above 0.88, were found for the remaining variables. The allometrically adjusted predictive model was composed of PTV0.72 and RE0.60 and explained 83% of the variance in 10 km running time with a standard error of the estimate (SEE) of 1.5 min. The unadjusted model, composed of a single variable (PTV), accounted for 72% of the variance in 10 km running time (SEE of 1.9 min). Both regression models provided powerful estimates of 10 km running time; however, the unadjusted PTV may provide an uncomplicated estimation.
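    The allometric adjustment described above amounts to raising the predictors to fixed exponents before fitting an ordinary multiple regression. The sketch below, with synthetic runner data and the exponents 0.72 and 0.60 taken from the abstract, shows the pattern; everything else is illustrative.

        # Sketch of an allometrically adjusted multiple regression for 10 km time,
        # with synthetic runner data; exponents 0.72 and 0.60 come from the abstract,
        # all other numbers are illustrative.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(3)
        n = 18
        ptv = rng.normal(19.0, 1.5, n)            # peak treadmill velocity, km/h
        re = rng.normal(210.0, 15.0, n)           # running economy, ml/kg/km
        time_10k = 90 - 2.2 * ptv + 0.05 * re + rng.normal(0, 1.0, n)  # minutes

        # Allometric adjustment: raise predictors to the reported exponents.
        X_adj = np.column_stack([ptv ** 0.72, re ** 0.60])
        model = LinearRegression().fit(X_adj, time_10k)
        r2 = model.score(X_adj, time_10k)
        resid = time_10k - model.predict(X_adj)
        see = np.sqrt(np.sum(resid ** 2) / (n - X_adj.shape[1] - 1))   # standard error of estimate
        print(f"R^2 = {r2:.2f}, SEE = {see:.2f} min")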

  10. Performance Prediction Toolkit

    Energy Technology Data Exchange (ETDEWEB)

    2017-09-25

    The Performance Prediction Toolkit (PPT) is a scalable co-design tool that contains the hardware and middleware models, which accept proxy applications as input for runtime prediction. PPT relies on Simian, a parallel discrete event simulation engine in Python or Lua that uses the process concept, where each computing unit (host, node, core) is a Simian entity. Processes perform their task through message exchanges to remain active, sleep, wake up, begin and end. The PPT hardware model of a compute core (such as a Haswell core) consists of a set of parameters, such as clock speed, memory hierarchy levels, their respective sizes, cache lines, access times for different cache levels, average cycle counts of ALU operations, etc. These parameters are ideally read off a spec sheet or are learned using regression models trained on hardware counter (PAPI) data. The compute core model offers an API to the software model, a function called time_compute(), which takes as input a tasklist. A tasklist is an unordered set of ALU and other CPU-type operations (in particular virtual memory loads and stores). The PPT application model mimics the loop structure of the application and replaces the computational kernels with a call to the hardware model's time_compute() function, giving tasklists as input that model the compute kernel. A PPT application model thus consists of tasklists representing kernels and the higher-level loop structure that we like to think of as pseudo code. The key challenge for the hardware model's time_compute() function is to translate virtual memory accesses into actual cache hierarchy level hits and misses. PPT also contains another CPU core level hardware model, the Analytical Memory Model (AMM). The AMM solves this challenge soundly, whereas our previous alternatives explicitly include the L1, L2, L3 hit rates as inputs to the tasklists. Explicit hit rates inevitably only reflect the application modeler's best guess, perhaps informed by a few
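    The tasklist/time_compute() interface described in this record can be illustrated with a toy sketch. The task categories, parameter values, and kernel tasklist below are invented for illustration; they do not reproduce PPT's actual hardware models or the Analytical Memory Model.

        # Toy sketch of the tasklist/time_compute() idea described above. Parameter
        # values and task categories are invented for illustration only.
        CORE = {
            "clock_hz": 2.6e9,
            "cycles_per_alu_op": 1.0,
            "mem_access_time_s": {"L1": 1.5e-9, "L2": 5.0e-9, "L3": 20.0e-9, "DRAM": 90.0e-9},
        }

        def time_compute(tasklist, core=CORE):
            """Return a predicted execution time (s) for an unordered set of operation counts."""
            t = 0.0
            for task, count in tasklist:
                if task == "alu":
                    t += count * core["cycles_per_alu_op"] / core["clock_hz"]
                elif task in core["mem_access_time_s"]:            # cache-level hits
                    t += count * core["mem_access_time_s"][task]
                else:
                    raise ValueError(f"unknown task type: {task}")
            return t

        # Pseudo-code-style application model: a kernel issuing ALU ops and memory accesses.
        kernel_tasklist = [("alu", 4.0e8), ("L1", 3.0e8), ("L2", 2.0e7), ("DRAM", 1.0e6)]
        print(f"predicted kernel time: {time_compute(kernel_tasklist) * 1e3:.1f} ms")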

  11. Relative performance of different numerical weather prediction models for short term prediction of wind energy

    Energy Technology Data Exchange (ETDEWEB)

    Giebel, G.; Landberg, L. [Risoe National Lab., Wind Energy and Atmospheric Physics Dept., Roskilde (Denmark)]; Moennich, K.; Waldl, H.P. [Carl von Ossietzky Univ., Faculty of Physics, Dept. of Energy and Semiconductor, Oldenburg (Germany)]

    1999-03-01

    In several approaches presented in other papers in this conference, short term forecasting of wind power for a time horizon covering the next two days is done on the basis of Numerical Weather Prediction (NWP) models. This paper explores the relative merits of HIRLAM, which is the model used by the Danish Meteorological Institute, the Deutschlandmodell from the German Weather Service and the Nested Grid Model used in the US. The performance comparison will be mainly done for a site in Germany which is in the forecasting area of both the Deutschlandmodell and HIRLAM. In addition, a comparison of measured data with the forecasts made for one site in Iowa will be included, which allows conclusions on the merits of all three models. Differences in the relative performances could be due to a better tailoring of one model to its country, or to a tighter grid, or could be a function of the distance between the grid points and the measuring site. Also the amount, in which the performance can be enhanced by the use of model output statistics (topic of other papers in this conference) could give insights into the performance of the models. (au)

  12. Assess and Predict Automatic Generation Control Performances for Thermal Power Generation Units Based on Modeling Techniques

    Science.gov (United States)

    Zhao, Yan; Yang, Zijiang; Gao, Song; Liu, Jinbiao

    2018-02-01

    Automatic generation control(AGC) is a key technology to maintain real time power generation and load balance, and to ensure the quality of power supply. Power grids require each power generation unit to have a satisfactory AGC performance, being specified in two detailed rules. The two rules provide a set of indices to measure the AGC performance of power generation unit. However, the commonly-used method to calculate these indices is based on particular data samples from AGC responses and will lead to incorrect results in practice. This paper proposes a new method to estimate the AGC performance indices via system identification techniques. In addition, a nonlinear regression model between performance indices and load command is built in order to predict the AGC performance indices. The effectiveness of the proposed method is validated through industrial case studies.

  13. A new model for predicting performance of fin-and-tube heat exchanger under frost condition

    Energy Technology Data Exchange (ETDEWEB)

    Cui, J. [Key Lab. of Ocean Energy Utilization and Energy Conservation of Ministry of Education, Dalian University of Technology, Dalian 116024 (China); Li, W.Z., E-mail: wzhongli@dlut.edu.c [Key Lab. of Ocean Energy Utilization and Energy Conservation of Ministry of Education, Dalian University of Technology, Dalian 116024 (China); Liu, Y.; Zhao, Y.S. [Key Lab. of Ocean Energy Utilization and Energy Conservation of Ministry of Education, Dalian University of Technology, Dalian 116024 (China)

    2011-02-15

    Accurate prediction of frost characteristics has crucial influence on designing effective heat exchangers. In this paper, a new CFD (Computational Fluid Dynamics) model has been proposed to predict the frost behaviour. The initial period of frost formation can be predicted and the influence of surface structure can be considered. The numerical simulations have been carried out to investigate the performance of fin-and-tube heat exchanger under frost condition. The results have been validated by comparison of simulations with the data computed by empirical formulas. The transient local frost formation has been obtained. The average frost thickness, heat exchanger coefficient and pressure drop on air side has been analysed as well. In addition, the influence factors have also been discussed, such as fin pitch, relative humidity, air flow rate and evaporating temperature of refrigerant.

  14. Performance Prediction Modelling for Flexible Pavement on Low Volume Roads Using Multiple Linear Regression Analysis

    Directory of Open Access Journals (Sweden)

    C. Makendran

    2015-01-01

    Full Text Available Prediction models for low volume village roads in India are developed to evaluate the progression of different types of distress such as roughness, cracking, and potholes. Even though the Government of India is investing a huge amount of money in road construction every year, poor control over the quality of road construction and its subsequent maintenance is leading to faster road deterioration. In this regard, it is essential that scientific maintenance procedures be evolved on the basis of the performance of low volume flexible pavements. Considering the above, an attempt has been made in this research endeavor to develop prediction models to understand the progression of roughness, cracking, and potholes in flexible pavements exposed to little or no routine maintenance. Distress data were collected from low volume rural roads covering about 173 stretches spread across the Tamil Nadu state in India. Based on the above collected data, distress prediction models have been developed using multiple linear regression analysis. Further, the models have been validated using independent field data. It can be concluded that the models developed in this study can serve as useful tools for the practicing engineers maintaining flexible pavements on low volume roads.

  15. Predictive modeling of performance of a helium charged Stirling engine using an artificial neural network

    International Nuclear Information System (INIS)

    Özgören, Yaşar Önder; Çetinkaya, Selim; Sarıdemir, Suat; Çiçek, Adem; Kara, Fuat

    2013-01-01

    Highlights: ► Max torque and power values were obtained at 3.5 bar Pch, 1273 K Hst and 1.4:1 r. ► According to ANOVA, the most influential parameter on power was Hst with 48.75%. ► According to ANOVA, the most influential parameter on torque was Hst with 41.78%. ► ANN (R2 = 99.8% for T, P) was superior to the regression method (R2 = 92% for T, 81% for P). ► LM was the best learning algorithm in predicting both power and torque. - Abstract: In this study, an artificial neural network (ANN) model was developed to predict the torque and power of a beta-type Stirling engine using helium as the working fluid. The best results were obtained by 5-11-7-1 and 5-13-7-1 network architectures, with double hidden layers, for the torque and power respectively. For these network architectures, the Levenberg-Marquardt (LM) learning algorithm was used. Engine performance values predicted with the developed ANN model were compared with the actual performance values measured experimentally, and closely matching results were observed. After ANN training, the correlation coefficients (R2) of both engine performance values for the testing and training data were very close to 1. Similarly, the root-mean-square error (RMSE) and mean error percentage (MEP) values for the testing and training data were less than 0.02% and 3.5% respectively. These results showed that the ANN is an acceptable model for prediction of the torque and power of the beta-type Stirling engine
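    A minimal sketch of an MLP surrogate for engine torque in the spirit of this record, assuming scikit-learn and synthetic data. Note that scikit-learn does not provide the Levenberg-Marquardt training used in the paper, so LBFGS is used instead; the hidden layers (11, 7) mirror the reported 5-11-7-1 architecture, while the input variables and data are invented.

        # Sketch of an MLP surrogate for Stirling engine torque, assuming scikit-learn.
        # LBFGS replaces the paper's Levenberg-Marquardt training; data are synthetic.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline
        from sklearn.metrics import r2_score

        rng = np.random.default_rng(4)
        n = 300
        charge_p = rng.uniform(1.0, 4.5, n)        # charge pressure, bar
        hot_T = rng.uniform(900, 1300, n)          # heater wall temperature, K
        speed = rng.uniform(200, 1200, n)          # rpm
        swept_r = rng.uniform(1.0, 1.6, n)         # swept volume ratio
        cool_T = rng.uniform(290, 330, n)          # cooler temperature, K
        torque = (2.0 * charge_p + 0.01 * (hot_T - cool_T) - 1e-6 * (speed - 700) ** 2
                  + 1.5 * swept_r + rng.normal(0, 0.3, n))

        X = np.column_stack([charge_p, hot_T, speed, swept_r, cool_T])
        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(11, 7), solver="lbfgs",
                                           max_iter=5000, random_state=0))
        model.fit(X[:240], torque[:240])
        print("test R^2:", round(r2_score(torque[240:], model.predict(X[240:])), 3))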

  16. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    Science.gov (United States)

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    obtained from the iterative two-stage method also improved predictive performance of the individual models and model averaging in both synthetic and experimental studies.

  17. Analytical predictions for the performance of a reinforced concrete containment model subject to overpressurization

    International Nuclear Information System (INIS)

    Weatherby, J.R.; Clauss, D.B.

    1987-01-01

    Under the sponsorship of the US Nuclear Regulatory Commission, Sandia National Laboratories is investigating methods for predicting the structural performance of nuclear reactor containment buildings under hypothesized severe accident conditions. As part of this program, a 1/6th-scale reinforced concrete containment model will be pressurized to failure in early 1987. Data generated by the test will be compared to analytical predictions of the structural response in order to assess the accuracy and reliability of the analytical techniques. As part of the pretest analysis effort, Sandia has conducted a number of analyses of the containment structure using the ABAQUS general purpose finite element code. This paper describes results from a nonlinear axisymmetric shell analysis as well as the material models and failure criteria used in conjunction with the analysis

  18. Evaluation of a Nutrition Model in Predicting Performance of Vietnamese Cattle

    Directory of Open Access Journals (Sweden)

    David Parsons

    2012-09-01

    Full Text Available The objective of this study was to evaluate the predictions of dry matter intake (DMI) and average daily gain (ADG) of Vietnamese Yellow (Vang) purebred and crossbred (Vang with Red Sindhi or Brahman) bulls fed under Vietnamese conditions using two levels of solution (1 and 2) of the large ruminant nutrition system (LRNS) model. Animal information and feed chemical characterization were obtained from five studies. The initial mean body weight (BW) of the animals was 186 kg, with a standard deviation of ±33.2 kg. Animals were fed ad libitum commonly available feedstuffs, including cassava powder, corn grain, Napier grass, rice straw and bran, and minerals and vitamins, for 50 to 80 d. The adequacy of the predictions was assessed with the Model Evaluation System using the root of the mean square error of prediction (RMSEP), accuracy (Cb), coefficient of determination (r2), and mean bias (MB). When all treatment means were used, both levels of solution predicted DMI similarly, with low precision (r2 of 0.389 and 0.45 for levels 1 and 2, respectively) and medium accuracy (Cb of 0.827 and 0.859, respectively). The LRNS clearly over-predicted the intake of one study. When this study was removed from the comparison, the precision and accuracy considerably increased for the level 1 solution. Metabolisable protein was limiting ADG for more than 68% of the treatment averages. The two levels differed regarding precision and accuracy. While the level 1 solution had the smallest MB compared with level 2 (0.058 and 0.159 kg/d, respectively), the precision was greater for level 2 than level 1 (0.89 and 0.70, respectively). The accuracy (Cb) was similar between level 1 and level 2 (p = 0.8997; 0.977 and 0.871, respectively). The RMSEP indicated that ADG was on average under- or over-predicted by about 190 g/d by both levels, suggesting that even though the accuracy (Cb) was greater for level 1 compared to level 2, both levels are likely to mispredict ADG by the same amount. Our analyses indicated that the

  19. A Study of Performance in Low-Power Tokamak Reactor with Integrated Predictive Modeling Code

    International Nuclear Information System (INIS)

    Pianroj, Y.; Onjun, T.; Suwanna, S.; Picha, R.; Poolyarat, N.

    2009-07-01

    Full text: A fusion hybrid, or a low-power tokamak reactor with a small fusion power output, is presented as another useful application of nuclear fusion. Such a tokamak can be used for fuel breeding, high-level waste transmutation, hydrogen production at high temperature, and testing of nuclear fusion technology components. In this work, an investigation of the plasma performance in a small fusion power output design is carried out using the BALDUR predictive integrated modeling code. The simulations of the plasma performance in this design are carried out using the empirical-based Mixed Bohm/gyro-Bohm (B/gB) model, whereas the pedestal temperature model is based on a magnetic and flow shear stabilization pedestal width scaling (Δ ∝ ρs2). The preliminary results using this core transport model show that the central ion and electron temperatures are rather pessimistic. To improve the performance, an optimization approach is carried out by varying some parameters, such as the plasma current and auxiliary heating power, which results in some improvement of plasma performance

  20. REVIEW OF MECHANISTIC UNDERSTANDING AND MODELING AND UNCERTAINTY ANALYSIS METHODS FOR PREDICTING CEMENTITIOUS BARRIER PERFORMANCE

    Energy Technology Data Exchange (ETDEWEB)

    Langton, C.; Kosson, D.

    2009-11-30

    Cementitious barriers for nuclear applications are one of the primary controls for preventing or limiting radionuclide release into the environment. At the present time, performance and risk assessments do not fully incorporate the effectiveness of engineered barriers because the processes that influence performance are coupled and complicated. Better understanding of the behavior of cementitious barriers is necessary to evaluate and improve the design of materials and structures used for radioactive waste containment, life extension of current nuclear facilities, and design of future nuclear facilities, including those needed for nuclear fuel storage and processing, nuclear power production and waste management. The focus of the Cementitious Barriers Partnership (CBP) literature review is to document the current level of knowledge with respect to: (1) mechanisms and processes that directly influence the performance of cementitious materials, (2) methodologies for modeling the performance of these mechanisms and processes and (3) approaches to addressing and quantifying uncertainties associated with performance predictions. This will serve as an important reference document for the professional community responsible for the design and performance assessment of cementitious materials in nuclear applications. This review also provides a multi-disciplinary foundation for the identification, research, development and demonstration of improvements in conceptual understanding, measurements and performance modeling that would lead to significant reductions in uncertainties and improved confidence in estimating the long-term performance of cementitious materials in nuclear applications. This report identifies: (1) technology gaps that may be filled by the CBP project and also (2) information and computational methods that are currently being applied in related fields but have not yet been incorporated into performance assessments of cementitious barriers. The various

  1. South African seasonal rainfall prediction performance by a coupled ocean-atmosphere model

    CSIR Research Space (South Africa)

    Landman, WA

    2010-12-01

    Full Text Available Evidence is presented that coupled ocean-atmosphere models can already outscore computationally less expensive atmospheric models. However, if the atmospheric models are forced with highly skillful SST predictions, they may still be a very strong...

  2. A Family of High-Performance Solvers for Linear Model Predictive Control

    DEFF Research Database (Denmark)

    Frison, Gianluca; Sokoler, Leo Emil; Jørgensen, John Bagterp

    2014-01-01

    In Model Predictive Control (MPC), an optimization problem has to be solved at each sampling time, and this has traditionally limited the use of MPC to systems with slow dynamics. In this paper, we propose an efficient solution strategy for the unconstrained sub-problems that give the search direction..., and techniques such as inexact search direction and mixed precision computation. Finally, we test our HPMPC toolbox, a family of high-performance solvers tailored for MPC and implemented using these techniques, that is shown to be several times faster than current state-of-the-art solvers for linear MPC.

  3. Predicting tissue outcome from acute stroke magnetic resonance imaging: improving model performance by optimal sampling of training data.

    Science.gov (United States)

    Jonsdottir, Kristjana Yr; Østergaard, Leif; Mouridsen, Kim

    2009-09-01

    It has been hypothesized that algorithms predicting the final outcome in acute ischemic stroke may provide future tools for identifying salvageable tissue and hence guide individualized therapy. We developed means of quantifying predictive model performance to identify model training strategies that optimize performance and reduce bias in predicted lesion volumes. We optimized predictive performance based on the area under the receiver operating curve for logistic regression and used simulated data to illustrate the effect of an unbalanced (unequal number of infarcting and surviving voxels) training set on predicted infarct risk. We then tested the performance and optimality of models based on perfusion-weighted, diffusion-weighted, and structural MRI modalities by changing the proportion of mismatch voxels in balanced training material. Predictive performance (area under the receiver operating curve) based on all brain voxels is excessively optimistic and lacks sensitivity in performance in mismatch tissue. The ratio of infarcting and noninfarcting voxels used for training predictive algorithms significantly biases tissue infarct risk estimates. Optimal training strategy is obtained using a balanced training set. We show that 60% of noninfarcted voxels consists of mismatch voxels in an optimal balanced training set for the patient data presented. An equal number of infarcting and noninfarcting voxels should be used when training predictive models. The choice of test and training sets critically affects predictive model performance and should be closely evaluated before comparisons across patient cohorts.
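    The balanced-training idea recommended in this record can be illustrated with a short sketch: sample equal numbers of infarcting and surviving voxels for training, then evaluate the AUC on held-out data. The voxel features below are synthetic stand-ins for the MRI modalities, and scikit-learn is assumed.

        # Sketch of the balanced-training strategy: equal numbers of infarcting and
        # surviving voxels in the training set, with AUC evaluated on held-out data.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(5)
        n = 20000
        features = rng.normal(0, 1, (n, 3))                    # e.g. DWI, PWI, structural
        risk = 1 / (1 + np.exp(-(-3.0 + 1.2 * features[:, 0] + 0.8 * features[:, 1])))
        infarct = rng.binomial(1, risk)                        # unbalanced: few infarcting voxels

        train = np.arange(n) < 15000
        test = ~train

        # Balanced training set: sample as many surviving voxels as there are infarcting ones.
        pos = np.where(train & (infarct == 1))[0]
        neg = rng.choice(np.where(train & (infarct == 0))[0], size=len(pos), replace=False)
        balanced = np.concatenate([pos, neg])

        clf = LogisticRegression().fit(features[balanced], infarct[balanced])
        auc = roc_auc_score(infarct[test], clf.predict_proba(features[test])[:, 1])
        print(f"held-out AUC with balanced training: {auc:.3f}")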

  4. Rotary engine performance limits predicted by a zero-dimensional model

    Science.gov (United States)

    Bartrand, Timothy A.; Willis, Edward A.

    1992-01-01

    A parametric study was performed to determine the performance limits of a rotary combustion engine. This study shows how well increasing the combustion rate, insulating, and turbocharging increase brake power and decrease fuel consumption. Several generalizations can be made from the findings. First, it was shown that the fastest combustion rate is not necessarily the best combustion rate. Second, several engine insulation schemes were employed for a turbocharged engine. Performance improved only for a highly insulated engine. Finally, the variability of turbocompounding and the influence of exhaust port shape were calculated. Rotary engines performance was predicted by an improved zero-dimensional computer model based on a model developed at the Massachusetts Institute of Technology in the 1980's. Independent variables in the study include turbocharging, manifold pressures, wall thermal properties, leakage area, and exhaust port geometry. Additions to the computer programs since its results were last published include turbocharging, manifold modeling, and improved friction power loss calculation. The baseline engine for this study is a single rotor 650 cc direct-injection stratified-charge engine with aluminum housings and a stainless steel rotor. Engine maps are provided for the baseline and turbocharged versions of the engine.

  5. Artificial neural network model for prediction of safety performance indicators goals in nuclear plants

    International Nuclear Information System (INIS)

    Souto, Kelling C.; Nunes, Wallace W.; Machado, Marcelo D.

    2011-01-01

    Safety performance indicators have been developed to provide a quantitative indication of performance and safety in various industry sectors. These indexes can provide an assessment of aspects ranging from production, design, and human performance up to management issues, in accordance with the policy, objectives and goals of the company. The use of safety performance indicators in nuclear power plants around the world is a reality. However, it is necessary to periodically set goal values. Such goals are targets relating to each of the indicators to be achieved by the plant over a predetermined period of operation. The current process of defining these goals is carried out by experts in a subjective way, based on actual data from the plant and comparison with global indices. Artificial neural networks are computational techniques that present a mathematical model inspired by the neural structure of intelligent organisms, which acquire knowledge through experience. This paper proposes an artificial neural network model aimed at predicting the values of goals to be used in the evaluation of safety performance indicators for nuclear power plants. (author)

  6. Correlation of Amine Swingbed On-Orbit CO2 Performance with a Hardware Independent Predictive Model

    Science.gov (United States)

    Papale, William; Sweterlitsch, Jeffery

    2015-01-01

    The Amine Swingbed Payload is an experimental system deployed on the International Space Station (ISS) that includes a two-bed, vacuum-regenerated, amine-based carbon dioxide (CO2) removal subsystem as the principal item under investigation. The amine-based subsystem, also described previously in various publications as CAMRAS 3, was originally designed, fabricated and tested by Hamilton Sundstrand Space Systems International, Inc. (HSSSI) and delivered to NASA in November 2008. The CAMRAS 3 unit was subsequently designed into a flight payload experiment in 2010 and 2011, with flight test integration activities accomplished on-orbit between January 2012 and March 2013. Payload activation was accomplished in May 2013, followed by a 1000-hour experimental period. The experimental nature of the Payload and the interaction with the dynamic ISS environment present unique scientific and engineering challenges, in particular to the verification and validation of the expected Payload CO2 removal performance. A modeling and simulation approach that incorporates principles of chemical reaction engineering has been developed for the amine-based system to predict the dynamic cabin CO2 partial pressure with given inputs of sorbent bed size, process air flow, operating temperature, half-cycle time, CO2 generation rate, cabin volume and the magnitude of vacuum available. Simulation runs using the model to predict ambient CO2 concentrations show good correlation with on-orbit performance measurements and ISS dynamic concentrations for the assumed operating conditions. The dynamic predictive modeling could benefit operational planning to help ensure ISS CO2 concentrations are maintained below prescribed limits, and could be used for the Orion vehicle to simulate various operating conditions, scenarios and transients.
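    The kind of dynamic cabin CO2 prediction described in this record reduces, at its simplest, to a mass balance between crew generation and cyclic scrubber removal. The toy sketch below integrates such a balance with SciPy; all rates, volumes, cycle times, and removal efficiencies are illustrative assumptions, not ISS or CAMRAS 3 values.

        # Toy cabin CO2 mass balance in the spirit of the dynamic model described above:
        # generation by crew minus removal by a cyclic two-bed scrubber. All parameter
        # values are illustrative assumptions, not ISS or CAMRAS 3 values.
        import numpy as np
        from scipy.integrate import solve_ivp

        V = 100.0            # cabin free volume, m^3 (assumed)
        gen = 1.0 / 24.0     # CO2 generation, kg/h (roughly one crew member, assumed)
        half_cycle_h = 0.25  # bed swap every 15 min (assumed)

        def removal_rate(t_h, c):
            """Removal (kg/h) that falls within each half-cycle as the adsorbing bed loads."""
            phase = (t_h % half_cycle_h) / half_cycle_h
            efficiency = 1.0 - 0.4 * phase          # fresh bed removes more than a loaded one
            return 1.2 * c * efficiency             # proportional to cabin concentration

        def dcdt(t_h, y):
            c = y[0]                                # CO2 concentration, kg/m^3
            return [(gen - removal_rate(t_h, c)) / V]

        sol = solve_ivp(dcdt, (0.0, 48.0), [0.001], max_step=0.01)
        print(f"cabin CO2 concentration after 48 h: {sol.y[0][-1] * 1000:.2f} g/m^3")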

  7. A Model for Predicting Student Performance on High-Stakes Assessment

    Science.gov (United States)

    Dammann, Matthew Walter

    2010-01-01

    This research study examined the use of student achievement on reading and math state assessments to predict success on the science state assessment. Multiple regression analysis was utilized to test the prediction for all students in grades 5 and 8 in a mid-Atlantic state. The prediction model developed from the analysis explored the combined…

  8. Improved Fuzzy Modelling to Predict the Academic Performance of Distance Education Students

    Science.gov (United States)

    Yildiz, Osman; Bal, Abdullah; Gulsecen, Sevinc

    2013-01-01

    It is essential to predict distance education students' year-end academic performance early during the course of the semester and to take precautions using such prediction-based information. This will, in particular, help enhance their academic performance and, therefore, improve the overall educational quality. The present study was on the…

  9. High-performance small-scale solvers for linear Model Predictive Control

    DEFF Research Database (Denmark)

    Frison, Gianluca; Sørensen, Hans Henrik Brandenborg; Dammann, Bernd

    2014-01-01

    In Model Predictive Control (MPC), an optimization problem needs to be solved at each sampling time, and this has traditionally limited the use of MPC to systems with slow dynamics. In recent years, there has been an increasing interest in the area of fast small-scale solvers for linear MPC, with the two main research areas of explicit MPC and tailored on-line MPC. State-of-the-art solvers in this second class can outperform optimized linear-algebra libraries (BLAS) only for very small problems, and do not explicitly exploit the hardware capabilities, relying on compilers for that. This approach can attain only a small fraction of the peak performance on modern processors. In our paper, we combine high-performance computing techniques with tailored solvers for MPC, and use the specific instruction sets of the target architectures. The resulting software (called HPMPC) can solve linear MPC...

  10. Experimental quadrotor flight performance using computationally efficient and recursively feasible linear model predictive control

    Science.gov (United States)

    Jaffery, Mujtaba H.; Shead, Leo; Forshaw, Jason L.; Lappas, Vaios J.

    2013-12-01

    A new linear model predictive control (MPC) algorithm in a state-space framework is presented based on the fusion of two past MPC control laws: steady-state optimal MPC (SSOMPC) and Laguerre optimal MPC (LOMPC). The new controller, SSLOMPC, is demonstrated to have improved feasibility, tracking performance and computation time than its predecessors. This is verified in both simulation and practical experimentation on a quadrotor unmanned air vehicle in an indoor motion-capture testbed. The performance of the control law is experimentally compared with proportional-integral-derivative (PID) and linear quadratic regulator (LQR) controllers in an unconstrained square manoeuvre. The use of soft control output and hard control input constraints is also examined in single and dual constrained manoeuvres.

  11. Performance Assessment of the VSC Using Two Model Predictive Control Schemes

    DEFF Research Database (Denmark)

    Al hasheem, Mohamed; Abdelhakim, Ahmed; Dragicevic, Tomislav

    2018-01-01

    Finite control set model predictive control (FCS-MPC) methods in different power electronics applications are gaining attention due to their simplicity and fast dynamics. This paper introduces an experimental assessment of the two-level three-phase voltage source converter (2L-VSC) using two FCS-MPC algorithms. In order to perform such a comparative evaluation, the 2L-VSC efficiency and total harmonic distortion of the voltage (THDv) have been measured considering both a linear load and a non-linear load. The new algorithm gives better results than the conventional algorithm in terms of the THD and 2L-VSC efficiency. The results also demonstrate the performance of the system using carrier-based pulse width modulation (CB-PWM). These findings have been validated for both linear and non-linear loads through experimental verification on a 4 kW 2L-VSC prototype. It can be concluded that a comparable
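    The FCS-MPC principle assessed in this record can be sketched as a single control step: enumerate the converter's switching states, predict the load current one sample ahead, and pick the state with the lowest tracking cost. The sketch below assumes a simple RL load and illustrative parameters; it is not either of the paper's two algorithms.

        # Sketch of one FCS-MPC step for a two-level three-phase VSC feeding an RL load:
        # enumerate the 8 switch states, predict the load current one sample ahead, and
        # pick the state minimizing the current-tracking cost. Parameters are illustrative.
        import numpy as np

        Vdc, R, L, Ts = 400.0, 2.0, 5e-3, 50e-6     # DC link, load R/L, sampling period

        def clarke(abc):
            """Amplitude-invariant abc -> alpha-beta transform."""
            a, b, c = abc
            return np.array([(2 * a - b - c) / 3.0, (b - c) / np.sqrt(3.0)])

        # All 8 switching states (Sa, Sb, Sc) and their output voltage vectors.
        states = [np.array(s) for s in np.ndindex(2, 2, 2)]
        voltages = [clarke(Vdc * (s - s.mean())) for s in states]   # phase-to-neutral voltages

        def fcs_mpc_step(i_meas, i_ref):
            """Return the switching state minimizing the predicted current error."""
            best_cost, best_state = np.inf, None
            for s, v in zip(states, voltages):
                # Forward-Euler prediction of the RL load current in alpha-beta coordinates.
                i_pred = i_meas + (Ts / L) * (v - R * i_meas)
                cost = np.sum((i_ref - i_pred) ** 2)
                if cost < best_cost:
                    best_cost, best_state = cost, s
            return best_state, best_cost

        state, cost = fcs_mpc_step(i_meas=np.array([2.0, -1.0]), i_ref=np.array([5.0, 0.0]))
        print("selected (Sa, Sb, Sc):", tuple(int(x) for x in state), "cost:", round(cost, 3))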

  12. Design and off-design thermodynamic model of a gas turbine for performance prediction

    Energy Technology Data Exchange (ETDEWEB)

    Monteiro, Ulisses A. [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE). Lab. de Ensaios de Modelos de Engenharia (LEME)]. E-mail: ulisses@peno.coppe.ufrj.br; Belchior, Carlos Rodrigues Pereira [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-Graduacao de Engenharia (COPPE). Lab. de Maquinas Termicas (LMT)]. E-mail: belchior@peno.coppe.ufrj.br

    2008-07-01

    There are some types of faults that do not leave 'signatures' in the vibration spectrum of a gas turbine. These faults can only be detected by other analysis techniques. One of these techniques is gas turbine performance analysis, or gas path analysis, which relates the efficiency, mass flow, temperature, pressure, fuel consumption and power to gas turbine faults. In this paper, the methodology used in the development of a thermodynamic model that simulates the design and off-design operation of a gas turbine with a free power turbine is presented. The results obtained are used to predict the gas turbine performance at both the design and off-design operating points, and also to simulate some types of faults. (author)

  13. Simple-II: A new numerical thermal model for predicting thermal performance of Stirling engines

    International Nuclear Information System (INIS)

    Babaelahi, Mojtaba; Sayyaadi, Hoseyn

    2014-01-01

    A new thermal model called Simple-II was presented, based on a modification of the original Simple analysis. First, the engine was modeled considering adiabatic expansion and compression spaces, in which the effect of gas leakage from the cylinder to the buffer space and the shuttle effect of the displacer were implemented in the basic differential equations. Moreover, the non-ideal thermal operation of the regenerator and the longitudinal heat conduction between the heater and the cooler through the regenerator wall were considered. Based on the magnitudes of the pressure drops in the heat exchangers, the values of pressure in the expansion and compression spaces were corrected. Furthermore, based on the theory of finite speed thermodynamics (FST), the power losses due to piston motion and mechanical friction were considered. Simple-II was employed for the thermal simulation of a prototype Stirling engine. Finally, the results of the new model were evaluated by a comprehensive comparison of experimental results with those of previous models. The output power and thermal efficiency were predicted with +20.7% and +7.1% errors, respectively. The regenerator was also shown to be the main source of power and heat losses; nevertheless, other loss mechanisms have non-negligible effects on the output power and/or thermal efficiency of Stirling engines. - Highlights: • A new thermal model was presented based on various loss mechanisms. • Shuttle effect and mass leakage were integrated into the differential equations. • FST, mechanical friction and longitudinal conduction losses were considered. • A methodology was presented for numerical solution and correcting results based on losses. • The new model predicted the thermal performance of the engine with higher accuracy.

  14. Use of Neural Networks for modeling and predicting boiler's operating performance

    International Nuclear Information System (INIS)

    Kljajić, Miroslav; Gvozdenac, Dušan; Vukmirović, Srdjan

    2012-01-01

    The need for high boiler operating performance requires the application of improved techniques for the rational use of energy. The analysis presented here is guided by an effort to identify ways in which energy resources can be used wisely to secure a more efficient final energy supply. However, the biggest challenges are related to the variety and stochastic nature of the influencing factors. The paper presents a method for modeling, assessing, and predicting the efficiency of boilers based on measured operating performance. The method utilizes a neural network approach to analyze and predict boiler efficiency and also to discover possibilities for enhancing efficiency. The analysis is based on energy surveys of 65 randomly selected boilers located at over 50 sites in the northern province of Serbia. These surveys included a representative range of industrial, public and commercial users of steam and hot water. The sample covered approximately 25% of all boilers in the province and yielded reliable and relevant results. Creating a database, combined with soft-computing assistance, opens a wide range of possibilities for identifying and assessing factors of influence and for critically evaluating supply-side practices as a source of the identified inefficiency. -- Highlights: ► We develop a model for assessing and predicting the efficiency of boilers. ► The method implies the use of an Artificial Neural Network approach for analysis. ► The results obtained correspond well to the collected and measured data. ► Findings confirm the value of a preventive or proactive approach. ► The analysis reveals and specifies opportunities for increasing boiler efficiency.
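
    As a rough illustration of the kind of data-driven mapping described above, the sketch below fits a small feed-forward network to synthetic operating data. The input variables, their ranges, the "true" relationship, and the scikit-learn estimator are illustrative assumptions; the study's own survey data and network design are not reproduced here.

```python
# Sketch: a small feed-forward regressor mapping measured operating variables
# (illustrative: load factor, excess-air ratio, flue-gas temperature in deg C)
# to boiler efficiency in %. Synthetic data stands in for survey measurements.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform([0.3, 1.05, 120.0], [1.0, 1.6, 220.0], size=(200, 3))
# A made-up relationship just to have something to learn from:
y = 92.0 - 8.0 * (X[:, 1] - 1.05) - 0.03 * (X[:, 2] - 120.0) + rng.normal(0, 0.3, 200)

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y)
print(model.predict([[0.8, 1.2, 160.0]]))   # predicted efficiency in %
```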

  15. EPRI MOV performance prediction program

    International Nuclear Information System (INIS)

    Hosler, J.F.; Damerell, P.S.; Eidson, M.G.; Estep, N.E.

    1994-01-01

    An overview of the EPRI Motor-Operated Valve (MOV) Performance Prediction Program is presented. The objectives of this Program are to better understand the factors affecting the performance of MOVs and to develop and validate methodologies to predict MOV performance. The Program involves valve analytical modeling, separate-effects testing to refine the models, and flow-loop and in-plant MOV testing to provide a basis for model validation. The ultimate product of the Program is an MOV Performance Prediction Methodology applicable to common gate, globe, and butterfly valves. The methodology predicts thrust and torque requirements at design-basis flow and differential pressure conditions, assesses the potential for gate valve internal damage, and provides test methods to quantify potential variations in actuator output thrust with loading condition. Key findings and their potential impact on MOV design and engineering application are summarized.

  16. Progress in sensor performance testing, modeling and range prediction using the TOD method: an overview

    Science.gov (United States)

    Bijl, Piet; Hogervorst, Maarten A.; Toet, Alexander

    2017-05-01

    The Triangle Orientation Discrimination (TOD) methodology includes i) a widely applicable, accurate end-to-end EO/IR sensor test, ii) an image-based sensor system model and iii) a Target Acquisition (TA) range model. The method has been extensively validated against TA field performance for a wide variety of well- and under-sampled imagers, systems with advanced image processing techniques such as dynamic super resolution and local adaptive contrast enhancement, and sensors showing smear or noise drift, for both static and dynamic test stimuli and as a function of target contrast. Recently, significant progress has been made in various directions. Dedicated visual and NIR test charts for lab and field testing are available and thermal test benches are on the market. Automated sensor testing using an objective synthetic human observer is within reach. Both an analytical and an image-based TOD model have recently been developed and are being implemented in the European Target Acquisition model ECOMOS and in the EOSTAR TDA. Further, the methodology is being applied for design optimization of high-end security camera systems. Finally, results from a recent perception study suggest that DRI ranges for real targets can be predicted by replacing the relevant distinctive target features by TOD test patterns of the same characteristic size and contrast, enabling a new TA modeling approach. This paper provides an overview.

  17. Damage based constitutive model for predicting the performance degradation of concrete

    Directory of Open Access Journals (Sweden)

    Zhi Wang

    Full Text Available An anisotropic elastic-damage coupled constitutive model for plain concrete is developed, which describes the concrete performance degradation. The damage variable, related to the surface density of micro-cracks and micro-voids, and represented by a second order tensor, is governed by the principal tension strain components. To adequately describe the partial crack opening/closure effect under tension and compression for concrete, a new suitable thermodynamic potential is proposed to express the state equations for modeling the mechanical behaviors. Within the framework of this thermodynamic potential, concrete strain mechanisms are identified in the proposed anisotropic damage model, while each state variable is physically explained and justified. The strain equivalence hypothesis is used for deriving the constitutive equations, which leads to the development of a decoupled algorithm for effective stress computation and damage evolution. Additionally, a detailed numerical algorithm is described and simulations are shown for uni-axial compression, tension and multi-axial loadings. For verifying the numerical results, a series of experiments on concrete was carried out. Reasonably good agreement between experimental results and the predicted values was observed. The proposed constitutive model can be used to accurately model concrete behavior under uni-axial compression, tension and multi-axial loadings. Additionally, the presented work is expected to be very useful in the nonlinear finite element analysis of large-scale concrete structures.

  18. Urban climate model MUKLIMO_3 in prediction mode - evaluation of model performance based on the case study of Vienna

    Science.gov (United States)

    Hollosi, Brigitta; Zuvela-Aloise, Maja

    2017-04-01

    To reduce the negative health impacts of extreme heat load in urban areas, the application of early warning systems that use weather forecast models to predict forthcoming heat events is of utmost importance. In state-of-the-art operational heat warning systems, the meteorological information relies on weather forecasts from regional numerical models and on monitoring stations that do not include details of the urban structure. In this study, the dynamical urban climate model MUKLIMO_3 (horizontal resolution of 100 - 200 m) is initialized with vertical profiles from the archived daily forecast data of the ZAMG, produced by the hydrostatic ALARO numerical weather prediction model run at 0600 UTC, to simulate the development of the urban heat island in Vienna on a daily basis. The aim is to evaluate the performance of the urban climate model, so far applied only in climatological studies, in a weather prediction mode, using the summer period 2011-2015 as a test period. The focus of the investigation is on the assessment of the urban heat load during the daytime. The model output has been evaluated against monitoring data at weather stations in the area of the city. The model results for daily maximum temperature show good agreement with the observations, especially at the urban and suburban stations, where the mean bias is low. The results are highly dependent on the input data from the meso-scale model, which leads to larger deviations from the observations if the forecast is not representative of the given day. This study can be used to support urban planning strategies and to improve existing practices for alerting decision-makers and the public to impending dangers of excessive heat.

  19. Prediction Model of Photovoltaic Module Temperature for Power Performance of Floating PVs

    Directory of Open Access Journals (Sweden)

    Waithiru Charles Lawrence Kamuyu

    2018-02-01

    Full Text Available Rapid reduction in the price of photovoltaic (solar PV) cells and modules has resulted in a rapid increase in solar system deployments to an annual expected capacity of 200 GW by 2020. Achieving high PV cell and module efficiency is necessary for many solar manufacturers to break even. In addition, new innovative installation methods are emerging to complement the drive to lower the $/W PV system price. The floating PV (FPV) solar market space has emerged as a method for utilizing the cool ambient environment of the FPV system near the water surface, based on successful FPV module (FPVM) reliability studies that showed degradation rates below 0.5% p.a. with new encapsulation material. PV module temperature analysis is another critical area, governing the efficiency performance of solar cells and modules. In this paper, data collected over five-minute intervals from a PV system over a year are analyzed. We use MATLAB to derive equation coefficients for predictable environmental variables and obtain the first FPVM module temperature operation models. When comparing the theoretical prediction to real field PV module operation temperature, the corresponding model errors range between 2% and 4%, depending on the number of equation coefficients incorporated. This study is useful for validating the results of other studies that show FPV systems producing 10% more energy than land-based systems.

  20. Exploring uncertainty and model predictive performance concepts via a modular snowmelt-runoff modeling framework

    Science.gov (United States)

    Tyler Jon Smith; Lucy Amanda. Marshall

    2010-01-01

    Model selection is an extremely important aspect of many hydrologic modeling studies because of the complexity, variability, and uncertainty that surrounds the current understanding of watershed-scale systems. However, development and implementation of a complete precipitation-runoff modeling framework, from model selection to calibration and uncertainty analysis, are...

  1. Comparative Evaluation of the Predictive Performances of Three Different Structural Population Pharmacokinetic Models To Predict Future Voriconazole Concentrations.

    Science.gov (United States)

    Farkas, Andras; Daroczi, Gergely; Villasurda, Phillip; Dolton, Michael; Nakagaki, Midori; Roberts, Jason A

    2016-11-01

    Bayesian methods for voriconazole therapeutic drug monitoring (TDM) have been reported previously, but there are only sparse reports comparing the accuracy and precision of predictions of published models. Furthermore, the comparative accuracy of linear, mixed linear and nonlinear, or entirely nonlinear models may be of high clinical relevance. In this study, models were coded into individually designed optimum dosing strategies (ID-ODS), with voriconazole concentration data analyzed using inverse Bayesian modeling. The data used were from two independent data sets, patients with proven or suspected invasive fungal infections (n = 57) and hematopoietic stem cell transplant recipients (n = 10). Observed voriconazole concentrations were predicted such that, for each concentration value, only the data available up to that point were used to predict it. The mean prediction error (ME) and mean squared prediction error (MSE) and their 95% confidence intervals (95% CI) were calculated to measure absolute bias and precision, while ΔME and ΔMSE and their 95% CI were used to measure relative bias and precision, respectively. A total of 519 voriconazole concentrations were analyzed using three models. MEs (95% CI) were 0.09 (-0.02, 0.22), 0.23 (0.04, 0.42), and 0.35 (0.16, 0.54), while the MSEs (95% CI) were 2.1 (1.03, 3.17), 4.98 (0.90, 9.06), and 4.97 (-0.54, 10.48) for the linear, mixed, and nonlinear models, respectively. In conclusion, while simulations with the linear model were found to be slightly more accurate and similarly precise, the small difference in accuracy is likely negligible from the clinical point of view, making all three approaches appropriate for use in a voriconazole TDM program. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
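
    The bias and precision metrics quoted above can be reproduced from paired observed and predicted concentrations. A minimal sketch follows; the sample values and the normal-approximation confidence intervals are illustrative assumptions, not the study's own code or data.

```python
# Sketch: mean prediction error (ME), mean squared prediction error (MSE)
# and their approximate 95% confidence intervals from paired data.
import numpy as np

observed  = np.array([2.1, 3.4, 1.8, 5.0, 4.2])   # illustrative concentrations (mg/L)
predicted = np.array([2.3, 3.1, 2.0, 4.6, 4.5])

err = predicted - observed

def mean_with_ci(values):
    m  = values.mean()
    se = values.std(ddof=1) / np.sqrt(len(values))  # standard error of the mean
    return m, (m - 1.96 * se, m + 1.96 * se)        # normal-approximation 95% CI

me,  me_ci  = mean_with_ci(err)        # absolute bias
mse, mse_ci = mean_with_ci(err ** 2)   # precision
print(f"ME  = {me:.3f}  95% CI {me_ci}")
print(f"MSE = {mse:.3f}  95% CI {mse_ci}")
```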

  2. In vitro models for the prediction of in vivo performance of oral dosage forms

    NARCIS (Netherlands)

    Kostewicz, E.S.; Abrahamsson, B.; Brewster, M.; Brouwers, J.; Butler, J.; Carlert, S.; Dickinson, P.A.; Dressman, J.; Holm, R.; Klein, S.; Mann, J.; McAllister, M.; Minekus, M.; Muenster, U.; Müllertz, A.; Verwei, M.; Vertzoni, M.; Weitschies, W.; Augustijns, P.

    2014-01-01

    Accurate prediction of the in vivo biopharmaceutical performance of oral drug formulations is critical to efficient drug development. Traditionally, in vitro evaluation of oral drug formulations has focused on disintegration and dissolution testing for quality control (QC) purposes. The connection

  3. In vitro models for the prediction of in vivo performance of oral dosage forms

    DEFF Research Database (Denmark)

    Kostewicz, Edmund S; Abrahamsson, Bertil; Brewster, Marcus

    2014-01-01

    Accurate prediction of the in vivo biopharmaceutical performance of oral drug formulations is critical to efficient drug development. Traditionally, in vitro evaluation of oral drug formulations has focused on disintegration and dissolution testing for quality control (QC) purposes. The connectio...

  4. Performance prediction and validation of equilibrium modeling for gasification of cashew nut shell char

    Directory of Open Access Journals (Sweden)

    M. Venkata Ramanan

    2008-09-01

    Full Text Available Cashew nut shell, a waste product obtained during deshelling of cashew kernels, had in the past been deemed unfit as a fuel for gasification owing to its high occluded oil content. The oil, a source of natural phenol, oozes out upon gasification, thereby clogging the gasifier throat, downstream equipment and associated utilities, resulting in ineffective gasification and premature failure of utilities due to its corrosive characteristics. To overcome this drawback, the cashew shells were de-oiled by charring in closed chambers and were subsequently gasified in an autothermal downdraft gasifier. Equilibrium modeling was carried out to predict the producer gas composition under varying performance-influencing parameters, viz., equivalence ratio (ER), reaction temperature (RT) and moisture content (MC). The results were compared with the experimental output and are presented in this paper. The model agrees quite satisfactorily with the experimental outcome at the ERs applicable to gasification systems, i.e., 0.15 to 0.30. The results show that the mole fractions of (i) H2, CO and CH4 decrease while (N2 + H2O) and CO2 increase with ER, (ii) H2 and CO increase while CH4, (N2 + H2O) and CO2 decrease with reaction temperature, and (iii) H2, CH4, CO2 and (N2 + H2O) increase while CO decreases with moisture content. However, at an equivalence ratio of less than 0.15, the model predicts an unrealistic composition and is observed to be invalid below this ER.

  5. Predictive performance for population models using stochastic differential equations applied on data from an oral glucose tolerance test

    DEFF Research Database (Denmark)

    Møller, Jonas Bech; Overgaard, R.V.; Madsen, Henrik

    2010-01-01

    Several articles have investigated stochastic differential equations (SDEs) in PK/PD models, but few have quantitatively investigated the benefits to predictive performance of models based on real data. Estimation of first phase insulin secretion which reflects beta-cell function using models...

  6. Optimization of Computational Performance and Accuracy in 3-D Transient CFD Model for CFB Hydrodynamics Predictions

    Science.gov (United States)

    Rampidis, I.; Nikolopoulos, A.; Koukouzas, N.; Grammelis, P.; Kakaras, E.

    2007-09-01

    This work aims to present a pure 3-D CFD model, accurate and efficient, for the simulation of pilot-scale CFB hydrodynamics. The accuracy of the model was investigated as a function of the numerical parameters, in order to derive an optimum model setup with respect to computational cost. The necessity of an in-depth examination of hydrodynamics emerges from the trend to scale up CFBCs. This scale-up brings forward numerous design problems and uncertainties, which can be successfully elucidated by CFD techniques. Deriving guidelines for setting up a computationally efficient model is important as the scale of CFBs grows fast while computational power remains limited. However, the question of optimum efficiency has not been investigated thoroughly in the literature, as authors have been more concerned with their models' accuracy and validity. The objective of this work is to investigate the parameters that influence the efficiency and accuracy of CFB computational fluid dynamics models, to find the optimum set of these parameters, and thus to establish this technique as a competitive method for the simulation and design of industrial, large-scale beds, where the computational cost is otherwise prohibitive. During the tests performed in this work, the influence of the turbulence modeling approach, the temporal and spatial resolution, and the discretization schemes was investigated on a 1.2 MWth CFB test rig. Using Fourier analysis, dominant frequencies were extracted in order to estimate an adequate time period for the averaging of all instantaneous values. Agreement with the experimental measurements was very good. The basic differences between the predictions arising from the various model setups were pointed out and analyzed. The results showed that a model with high-order space discretization schemes, applied on a coarse grid with averaging of the instantaneous scalar values over a 20 s period, adequately described the transient hydrodynamic behaviour of a pilot CFB while the...

  7. A prediction model to identify hospitalised, older adults with reduced physical performance.

    Science.gov (United States)

    Bruun, Inge H; Maribo, Thomas; Nørgaard, Birgitte; Schiøttz-Christensen, Berit; Mogensen, Christian B

    2017-12-07

    Identifying older adults with reduced physical performance at the time of hospital admission can significantly affect patient management and trajectory. For example, such patients could receive targeted hospital interventions such as routine mobilisation. Furthermore, at the time of discharge, health systems could offer these patients additional therapy to maintain or improve health and prevent institutionalisation or readmission. The principal aim of this study was to identify predictors of persisting reduced physical performance in older adults following acute hospitalisation. This was a prospective cohort study that enrolled 117 medical patients, aged 65 or older, who were admitted to a short-stay unit in a Danish emergency department. Patients were included in the study if, at the time of admission, they performed ≤8 repetitions in the 30-s Chair-Stand Test (30s-CST). The primary outcome measure was the number of 30s-CST repetitions (≤8 or >8) performed at the time of follow-up, 34 days after admission. Potential predictors within the first 48 h of admission included: age, gender, ability to climb stairs and walk 400 m, difficulties with activities of daily living before admission, falls, physical activity level, self-rated health, use of a walking aid before admission, number of prescribed medications, 30s-CST, and the De Morton Mobility Index. A total of 78 (67%) patients improved in physical performance in the interval between admission and follow-up assessment, but 76 patients (65%) had persistent reduced physical performance when compared to their baseline (30s-CST ≤ 8). The number of potential predictors was reduced in order to create a simplified prediction model based on 4 variables, namely the use of a walking aid before hospitalisation (score = 1.5), a 30s-CST ≤ 5 (1.8), age > 85 (0.1), and female gender (0.6). A score > 1.8 identified 78% of the older adults who continued to have reduced physical performance following...
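
    The simplified model above reduces to a four-item additive score with a cut-off of 1.8. A minimal sketch using the weights reported in the abstract follows; the function and variable names are illustrative, not taken from the study.

```python
# Sketch of the simplified four-variable risk score reported above.
# A total score > 1.8 flags likely persistent reduced physical performance.
def reduced_performance_score(uses_walking_aid: bool, cst_30s: int,
                              age: int, is_female: bool) -> float:
    score = 0.0
    if uses_walking_aid:
        score += 1.5          # walking aid before hospitalisation
    if cst_30s <= 5:
        score += 1.8          # 30-s Chair-Stand Test <= 5 repetitions
    if age > 85:
        score += 0.1
    if is_female:
        score += 0.6
    return score

s = reduced_performance_score(uses_walking_aid=True, cst_30s=4, age=82, is_female=True)
print(s, "-> at risk" if s > 1.8 else "-> not flagged")
```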

  8. Significance of uncertainties derived from settling tank model structure and parameters on predicting WWTP performance - A global sensitivity analysis study

    DEFF Research Database (Denmark)

    Ramin, Elham; Sin, Gürkan; Mikkelsen, Peter Steen

    2011-01-01

    Uncertainty derived from one of the process models – such as one-dimensional secondary settling tank (SST) models – can impact the output of the other process models, e.g., biokinetic (ASM1), as well as the integrated wastewater treatment plant (WWTP) models. The model structure and parameter uncertainty of settler models can therefore propagate, and add to the uncertainties in prediction of any plant performance criteria. Here we present an assessment of the relative significance of secondary settling model performance in WWTP simulations. We perform a global sensitivity analysis (GSA) based... The outcome of this study contributes to a better understanding of uncertainty in WWTPs, and explicitly demonstrates the significance of secondary settling processes that are crucial elements of model prediction under dry and wet-weather loading conditions.

  9. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
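
    As a rough illustration of the interim simulation idea described above (uncorrelated hourly speed samples drawn from a fitted distribution and converted to power), the sketch below draws samples from an assumed Weibull distribution and applies an idealised power curve. The Weibull parameters, turbine figures, and the power-curve shape are illustrative assumptions, not values from the Goldstone study.

```python
# Sketch: draw uncorrelated hourly wind-speed samples from an assumed Weibull
# distribution and convert them to turbine power with a simple cubic power curve.
import numpy as np

rng = np.random.default_rng(0)
k, c = 2.0, 7.0                       # Weibull shape and scale (m/s), assumed
speeds = c * rng.weibull(k, size=24)  # one day of hourly samples

def turbine_power(v, v_cut_in=3.0, v_rated=12.0, v_cut_out=25.0, p_rated=100.0):
    """Idealised power curve in kW: cubic rise between cut-in and rated speed."""
    if v < v_cut_in or v > v_cut_out:
        return 0.0
    if v >= v_rated:
        return p_rated
    return p_rated * ((v - v_cut_in) / (v_rated - v_cut_in)) ** 3

print([round(turbine_power(v), 1) for v in speeds])
```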

  10. Mortality prediction models for pediatric intensive care: comparison of overall and subgroup specific performance

    NARCIS (Netherlands)

    Visser, Idse H. E.; Hazelzet, Jan A.; Albers, Marcel J. I. J.; Verlaat, Carin W. M.; Hogenbirk, Karin; van Woensel, Job B.; van Heerde, Marc; van Waardenburg, Dick A.; Jansen, Nicolaas J. G.; Steyerberg, Ewout W.

    2013-01-01

    To validate paediatric index of mortality (PIM) and pediatric risk of mortality (PRISM) models within the overall population as well as in specific subgroups in pediatric intensive care units (PICUs). Variants of PIM and PRISM prediction models were compared with respect to calibration (agreement

  11. Mortality prediction models for pediatric intensive care : comparison of overall and subgroup specific performance

    NARCIS (Netherlands)

    Visser, Idse H. E.; Hazelzet, Jan A.; Albers, Marcel J. I. J.; Verlaat, Carin W. M.; Hogenbirk, Karin; van Woensel, Job B.; van Heerde, Marc; van Waardenburg, Dick A.; Jansen, Nicolaas J. G.; Steyerberg, Ewout W.

    To validate paediatric index of mortality (PIM) and pediatric risk of mortality (PRISM) models within the overall population as well as in specific subgroups in pediatric intensive care units (PICUs). Variants of PIM and PRISM prediction models were compared with respect to calibration (agreement

  12. The effects of sampling bias and model complexity on the predictive performance of MaxEnt species distribution models.

    Directory of Open Access Journals (Sweden)

    Mindy M Syfert

    Full Text Available Species distribution models (SDMs) trained on presence-only data are frequently used in ecological research and conservation planning. However, users of SDM software are faced with a variety of options, and it is not always obvious how selecting one option over another will affect model performance. Working with MaxEnt software and with tree fern presence data from New Zealand, we assessed whether (a) choosing to correct for geographical sampling bias and (b) using complex environmental response curves have strong effects on goodness of fit. SDMs were trained on tree fern data, obtained from an online biodiversity data portal, with two sources that differed in size and geographical sampling bias: a small, widely-distributed set of herbarium specimens and a large, spatially clustered set of ecological survey records. We attempted to correct for geographical sampling bias by incorporating sampling bias grids in the SDMs, created from all georeferenced vascular plants in the datasets, and explored model complexity issues by fitting a wide variety of environmental response curves (known as "feature types" in MaxEnt). In each case, goodness of fit was assessed by comparing predicted range maps with tree fern presences and absences using an independent national dataset to validate the SDMs. We found that correcting for geographical sampling bias led to major improvements in goodness of fit, but did not entirely resolve the problem: predictions made with clustered ecological data were inferior to those made with the herbarium dataset, even after sampling bias correction. We also found that the choice of feature type had negligible effects on predictive performance, indicating that simple feature types may be sufficient once sampling bias is accounted for. Our study emphasizes the importance of reducing geographical sampling bias, where possible, in datasets used to train SDMs, and the effectiveness and essentialness of sampling bias correction within MaxEnt.

  13. The effects of sampling bias and model complexity on the predictive performance of MaxEnt species distribution models.

    Science.gov (United States)

    Syfert, Mindy M; Smith, Matthew J; Coomes, David A

    2013-01-01

    Species distribution models (SDMs) trained on presence-only data are frequently used in ecological research and conservation planning. However, users of SDM software are faced with a variety of options, and it is not always obvious how selecting one option over another will affect model performance. Working with MaxEnt software and with tree fern presence data from New Zealand, we assessed whether (a) choosing to correct for geographical sampling bias and (b) using complex environmental response curves have strong effects on goodness of fit. SDMs were trained on tree fern data, obtained from an online biodiversity data portal, with two sources that differed in size and geographical sampling bias: a small, widely-distributed set of herbarium specimens and a large, spatially clustered set of ecological survey records. We attempted to correct for geographical sampling bias by incorporating sampling bias grids in the SDMs, created from all georeferenced vascular plants in the datasets, and explored model complexity issues by fitting a wide variety of environmental response curves (known as "feature types" in MaxEnt). In each case, goodness of fit was assessed by comparing predicted range maps with tree fern presences and absences using an independent national dataset to validate the SDMs. We found that correcting for geographical sampling bias led to major improvements in goodness of fit, but did not entirely resolve the problem: predictions made with clustered ecological data were inferior to those made with the herbarium dataset, even after sampling bias correction. We also found that the choice of feature type had negligible effects on predictive performance, indicating that simple feature types may be sufficient once sampling bias is accounted for. Our study emphasizes the importance of reducing geographical sampling bias, where possible, in datasets used to train SDMs, and the effectiveness and essentialness of sampling bias correction within MaxEnt.

  14. Comparison of ITER performance predicted by semi-empirical and theory-based transport models

    International Nuclear Information System (INIS)

    Mukhovatov, V.; Shimomura, Y.; Polevoi, A.

    2003-01-01

    The values of Q = (fusion power)/(auxiliary heating power) predicted for ITER by three different methods, i.e., a transport model based on empirical confinement scaling, a dimensionless scaling technique, and theory-based transport models, are compared. The energy confinement time given by the ITERH-98(y,2) scaling for an inductive scenario with a plasma current of 15 MA and a plasma density 15% below the Greenwald value is 3.6 s, with one technical standard deviation of ±14%. These data translate into a Q interval of [7-13] at an auxiliary heating power Paux = 40 MW and [7-28] at the minimum heating power satisfying a good-confinement ELMy H-mode. Predictions of dimensionless scalings and theory-based transport models such as Weiland, MMM and IFS/PPPL overlap with the empirical scaling predictions within the margins of uncertainty. (author)

  15. Comparison of the Predictive Performance and Interpretability of Random Forest and Linear Models on Benchmark Data Sets.

    Science.gov (United States)

    Marchese Robinson, Richard L; Palczewska, Anna; Palczewski, Jan; Kidley, Nathan

    2017-08-28

    The ability to interpret the predictions made by quantitative structure-activity relationships (QSARs) offers a number of advantages. While QSARs built using nonlinear modeling approaches, such as the popular Random Forest algorithm, might sometimes be more predictive than those built using linear modeling approaches, their predictions have been perceived as difficult to interpret. However, a growing number of approaches have been proposed for interpreting nonlinear QSAR models in general and Random Forest in particular. In the current work, we compare the performance of Random Forest to those of two widely used linear modeling approaches: linear Support Vector Machines (SVMs) (or Support Vector Regression (SVR)) and partial least-squares (PLS). We compare their performance in terms of their predictivity as well as the chemical interpretability of the predictions using novel scoring schemes for assessing heat map images of substructural contributions. We critically assess different approaches for interpreting Random Forest models as well as for obtaining predictions from the forest. We assess the models on a large number of widely employed public-domain benchmark data sets corresponding to regression and binary classification problems of relevance to hit identification and toxicology. We conclude that Random Forest typically yields comparable or possibly better predictive performance than the linear modeling approaches and that its predictions may also be interpreted in a chemically and biologically meaningful way. In contrast to earlier work looking at interpretation of nonlinear QSAR models, we directly compare two methodologically distinct approaches for interpreting Random Forest models. The approaches for interpreting Random Forest assessed in our article were implemented using open-source programs that we have made available to the community. These programs are the rfFC package ( https://r-forge.r-project.org/R/?group_id=1725 ) for the R statistical

  16. Prediction of maize single-cross performance by mixed linear models with microsatellite marker information.

    Science.gov (United States)

    Balestre, M; Von Pinho, R G; Souza, J C

    2010-06-11

    We evaluated the potential of the best linear unbiased predictor (BLUP) along with the relationship coefficient for predicting the performance of untested maize single-cross hybrids. Ninety S(0:2) progenies arising from three single-cross hybrids were used. The 90 progenies were genotyped with 25 microsatellite markers, with nine markers linked to quantitative trait loci for grain yield. Based on genetic similarities, 17 partial inbred lines were selected and crossed in a partial diallel design. Similarity and relationship coefficients were used to construct the additive and dominance genetic matrices; along with BLUP, they provided predictions for untested single-crosses. Five degrees of imbalance were simulated (5, 10, 20, 30, and 40 hybrids). The correlation values between the predicted genotypic values and the observed phenotypic means varied from 0.55 to 0.70, depending on the degree of imbalance. A similar result was observed for the specific combining ability predictions; they varied from 0.61 to 0.70. It was also found that the relationship coefficient based on BLUP provided more accurate predictions than similarity-in-state predictions. We conclude that BLUP methodology is a viable alternative for the prediction of untested crosses in early progenies.
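
    The additive and dominance matrices mentioned above typically enter the prediction through a mixed linear model whose random effects are obtained as BLUPs. The form below is a generic textbook formulation assumed here for illustration, not quoted from the paper.

```latex
% Generic mixed model assumed for illustration:
% y: phenotypes, X: design matrix of fixed effects,
% Z_a, Z_d: incidence matrices of additive and dominance effects,
% A, D: additive (relationship/similarity-based) and dominance matrices.
y = X\beta + Z_a u_a + Z_d u_d + e,
\qquad u_a \sim N(0, A\sigma_a^2),\quad
u_d \sim N(0, D\sigma_d^2),\quad
e \sim N(0, I\sigma_e^2)
```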

  17. Predicting Performance Ratings Using Motivational Antecedents

    National Research Council Canada - National Science Library

    Zazania, Michelle

    1998-01-01

    .... LISREL8 was used to test a path model predicting performance ratings. Results showed observer ratings of effort and self-reported task self-efficacy played a role in predicting ratings of task-specific performance...

  18. Gate valve performance prediction

    International Nuclear Information System (INIS)

    Harrison, D.H.; Damerell, P.S.; Wang, J.K.; Kalsi, M.S.; Wolfe, K.J.

    1994-01-01

    The Electric Power Research Institute is carrying out a program to improve the performance prediction methods for motor-operated valves. As part of this program, an analytical method to predict the stem thrust required to stroke a gate valve has been developed and has been assessed against data from gate valve tests. The method accounts for the loads applied to the disc by fluid flow and for the detailed mechanical interaction of the stem, disc, guides, and seats. To support development of the method, two separate-effects test programs were carried out. One test program determined friction coefficients for contacts between gate valve parts by using material specimens in controlled environments. The other test program investigated the interaction of the stem, disc, guides, and seat using a special fixture with full-sized gate valve parts. The method has been assessed against flow-loop and in-plant test data. These tests include valve sizes from 3 to 18 in. and cover a considerable range of flow, temperature, and differential pressure. Stem thrust predictions for the method bound measured results. In some cases, the bounding predictions are substantially higher than the stem loads required for valve operation, as a result of the bounding nature of the friction coefficients in the method

  19. Performance Assessment of Turbulence Models for the Prediction of the Reactor Internal Flow in the Scale-down APR+

    International Nuclear Information System (INIS)

    Lee, Gonghee; Bang, Youngseok; Woo, Swengwoong; Kim, Dohyeong; Kang, Minku

    2013-01-01

    The types of errors in CFD simulation can be divided into two main categories: numerical errors and model errors. The turbulence model is one of the important sources of model error. In this study, in order to assess the prediction performance of Reynolds-averaged Navier-Stokes (RANS)-based two-equation turbulence models for the analysis of the flow distribution inside a 1/5 scale-down APR+, the simulation was conducted with the commercial CFD software ANSYS CFX V. 14. Both the standard k-ε model and the SST model predicted a similar flow pattern inside the reactor. It was therefore concluded that the prediction performance of both turbulence models is nearly the same. Complex thermal-hydraulic characteristics exist inside the reactor because the reactor internals consist of the fuel assemblies, the control rod assemblies, and the internal structures. Both flow distribution tests on scale-down reactor models and computational fluid dynamics (CFD) simulations have been conducted to understand these complex thermal-hydraulic features inside the reactor.

  20. Development of a Stochastically-driven, Forward Predictive Performance Model for PEMFCs

    Science.gov (United States)

    Harvey, David Benjamin Paul

    A one-dimensional multi-scale coupled, transient, and mechanistic performance model for a PEMFC membrane electrode assembly has been developed. The model explicitly includes each of the 5 layers within a membrane electrode assembly and solves for the transport of charge, heat, mass, species, dissolved water, and liquid water. Key features of the model include the use of a multi-step implementation of the HOR reaction on the anode, agglomerate catalyst sub-models for both the anode and cathode catalyst layers, a unique approach that links the composition of the catalyst layer to key properties within the agglomerate model and the implementation of a stochastic input-based approach for component material properties. The model employs a new methodology for validation using statistically varying input parameters and statistically-based experimental performance data; this model represents the first stochastic input driven unit cell performance model. The stochastic input driven performance model was used to identify optimal ionomer content within the cathode catalyst layer, demonstrate the role of material variation in potential low performing MEA materials, provide explanation for the performance of low-Pt loaded MEAs, and investigate the validity of transient-sweep experimental diagnostic methods.

  1. Prediction models for performance and emissions of a dual fuel CI ...

    Indian Academy of Sciences (India)

    ...use artificial intelligence modelling techniques like fuzzy logic, Artificial Neural Network (ANN), Genetic Algorithm (GA), etc. This paper uses a neuro-fuzzy modelling technique, the Adaptive Neuro Fuzzy Inference System (ANFIS), for developing prediction models for the performance and emission parameters of a dual fuel engine.

  2. Prediction Performance of an Artificial Neural Network Model for the Amount of Cooling Energy Consumption in Hotel Rooms

    Directory of Open Access Journals (Sweden)

    Jin Woo Moon

    2015-08-01

    Full Text Available This study was conducted to develop an artificial neural network (ANN)-based prediction model that can calculate the amount of cooling energy consumed during the setback period of accommodation buildings. By comparing the amount of energy needed for diverse setback temperatures, the most energy-efficient optimal setback temperature can be found and applied in the thermal control logic. Three major processes, each using numerical simulation, were conducted to develop the ANN model, optimize it, and test its prediction performance. First, the structure and learning method of the initial ANN model were determined for predicting the amount of cooling energy consumption during the setback period. Then, the initial structure and learning methods of the ANN model were optimized through parametric analysis comparing prediction accuracy levels. Finally, performance tests of the optimized model demonstrated its prediction accuracy, with the coefficient of variation of the root mean square error (CVRMSE) between the simulated and predicted results remaining below generally accepted levels. In conclusion, the proposed ANN model proved its potential to be applied in the thermal control logic for setting the most energy-efficient setback temperature.
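
    The CVRMSE accuracy metric cited above is straightforward to compute from paired simulated and predicted values. A minimal sketch follows; the sample values are illustrative only.

```python
# Sketch: coefficient of variation of the RMSE (CVRMSE) between simulated
# and ANN-predicted cooling energy, expressed as a percentage.
import numpy as np

simulated = np.array([12.4, 15.1, 9.8, 11.3, 14.0])   # e.g. kWh per setback period
predicted = np.array([12.9, 14.6, 10.4, 11.0, 13.5])

rmse   = np.sqrt(np.mean((predicted - simulated) ** 2))
cvrmse = 100.0 * rmse / simulated.mean()
print(f"CVRMSE = {cvrmse:.1f} %")
```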

  3. A cycle simulation model for predicting the performance of a diesel engine fuelled by diesel and biodiesel blends

    International Nuclear Information System (INIS)

    Gogoi, T.K.; Baruah, D.C.

    2010-01-01

    Among the alternative fuels, biodiesel and its blends are considered suitable and the most promising fuels for diesel engines. The properties of biodiesel are found to be similar to those of diesel. Many researchers have experimentally evaluated the performance characteristics of conventional diesel engines fuelled by biodiesel and its blends. However, experiments require enormous effort, money and time. Hence, a cycle simulation model incorporating a thermodynamics-based single-zone combustion model is developed to predict the performance of a diesel engine. The effect of engine speed and compression ratio on brake power and brake thermal efficiency is analysed through the model. The fuels considered for the analysis are diesel and 20%, 40% and 60% blends of diesel with biodiesel derived from Karanja oil (Pongamia glabra). The model predicts similar performance for diesel and the 20% and 40% blends. However, with the 60% blend, it reveals better performance in terms of brake power and brake thermal efficiency.

  4. A model for sequential decoding overflow due to a noisy carrier reference. [communication performance prediction

    Science.gov (United States)

    Layland, J. W.

    1974-01-01

    An approximate analysis of the effect of a noisy carrier reference on the performance of sequential decoding is presented. The analysis uses previously developed techniques for evaluating noisy reference performance for medium-rate uncoded communications adapted to sequential decoding for data rates of 8 to 2048 bits/s. In estimating the 10^-4 deletion probability thresholds for Helios, the model agrees with experimental data to within the experimental tolerances. The computational problem involved in sequential decoding, carrier loop effects, the main characteristics of the medium-rate model, modeled decoding performance, and perspectives on future work are discussed.

  5. High-Performance First-Principles Molecular Dynamics for Predictive Theory and Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Gygi, Francois [Univ. of California, Davis, CA (United States). Dept. of Computer Science; Galli, Giulia [Univ. of Chicago, IL (United States); Schwegler, Eric [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-12-03

    This project focused on developing high-performance software tools for First-Principles Molecular Dynamics (FPMD) simulations, and applying them in investigations of materials relevant to energy conversion processes. FPMD is an atomistic simulation method that combines a quantum-mechanical description of electronic structure with the statistical description provided by molecular dynamics (MD) simulations. This reliance on fundamental principles allows FPMD simulations to provide a consistent description of structural, dynamical and electronic properties of a material. This is particularly useful in systems for which reliable empirical models are lacking. FPMD simulations are increasingly used as a predictive tool for applications such as batteries, solar energy conversion, light-emitting devices, electro-chemical energy conversion devices and other materials. During the course of the project, several new features were developed and added to the open-source Qbox FPMD code. The code was further optimized for scalable operation of large-scale, Leadership-Class DOE computers. When combined with Many-Body Perturbation Theory (MBPT) calculations, this infrastructure was used to investigate structural and electronic properties of liquid water, ice, aqueous solutions, nanoparticles and solid-liquid interfaces. Computing both ionic trajectories and electronic structure in a consistent manner enabled the simulation of several spectroscopic properties, such as Raman spectra, infrared spectra, and sum-frequency generation spectra. The accuracy of the approximations used allowed for direct comparisons of results with experimental data such as optical spectra, X-ray and neutron diffraction spectra. The software infrastructure developed in this project, as applied to various investigations of solids, liquids and interfaces, demonstrates that FPMD simulations can provide a detailed, atomic-scale picture of structural, vibrational and electronic properties of complex systems

  6. Predicting detection performance with model observers: Fourier domain or spatial domain?

    Science.gov (United States)

    Chen, Baiyu; Yu, Lifeng; Leng, Shuai; Kofler, James; Favazza, Christopher; Vrieze, Thomas; McCollough, Cynthia

    2016-02-27

    The use of Fourier domain model observers is challenged by iterative reconstruction (IR), because IR algorithms are nonlinear and IR images have noise texture different from that of FBP. A modified Fourier domain model observer, which incorporates nonlinear noise and resolution properties, has been proposed for IR and needs to be validated with human detection performance. On the other hand, the spatial domain model observer is theoretically applicable to IR, but more computationally intensive than the Fourier domain method. The purpose of this study is to compare the modified Fourier domain model observer to the spatial domain model observer with both FBP and IR images, using human detection performance as the gold standard. A phantom with inserts of various low contrast levels and sizes was repeatedly scanned 100 times on a third-generation, dual-source CT scanner at 5 dose levels and reconstructed using FBP and IR algorithms. The human detection performance of the inserts was measured via a 2-alternative-forced-choice (2AFC) test. In addition, two model observer performances were calculated, namely a Fourier domain non-prewhitening model observer and a spatial domain channelized Hotelling observer. The performance of these two model observers was compared in terms of how well they correlated with human observer performance. Our results demonstrated that the spatial domain model observer correlated well with human observers across various dose levels, object contrast levels, and object sizes. The Fourier domain observer correlated well with human observers using FBP images, but overestimated the detection performance using IR images.

  7. Identifying the Gene Signatures from Gene-Pathway Bipartite Network Guarantees the Robust Model Performance on Predicting the Cancer Prognosis

    Directory of Open Access Journals (Sweden)

    Li He

    2014-01-01

    Full Text Available For the purpose of improving the prediction of cancer prognosis in clinical research, various algorithms have been developed to construct predictive models with the gene signatures detected by DNA microarrays. Due to the heterogeneity of clinical samples, the list of differentially expressed genes (DEGs) generated by statistical methods or machine learning algorithms often involves a number of false positive genes, which are not associated with the phenotypic differences between the compared clinical conditions, and this subsequently impacts the reliability of the predictive models. In this study, we proposed a strategy, which combined a statistical algorithm with gene-pathway bipartite networks, to generate reliable lists of cancer-related DEGs, and constructed models using support vector machines for predicting the prognosis of three types of cancer, namely, breast cancer, acute myeloid leukemia, and glioblastoma. Our results demonstrated that, combined with the gene-pathway bipartite networks, our proposed strategy can efficiently generate reliable cancer-related DEG lists for constructing predictive models. In addition, the model performance in the swap analysis was similar to that in the original analysis, indicating the robustness of the models in predicting cancer outcomes.

  8. Predictive models for observer performance in CT: applications in protocol optimization

    Science.gov (United States)

    Richard, S.; Li, X.; Yadava, G.; Samei, E.

    2011-03-01

    The relationship between theoretical descriptions of imaging performance (Fourier-based) and the performance of real human observers was investigated for detection tasks in multi-slice CT. The detectability index for the Fisher-Hotelling model observer and non-prewhitening model observer (with and without internal noise and eye filter) was computed using: 1) the measured modulation transfer function (MTF) and noise-power spectrum (NPS) for CT; and 2) a Fourier description of the imaging task. Based upon CT images of human patients with added simulated lesions, human observer performance was assessed via an observer study in terms of the area under the ROC curve (Az). The degree to which the detectability index correlated with human observer performance was investigated, and results for the non-prewhitening model observer with internal noise and eye filter (NPWE) were found to agree best with human performance over a broad range of imaging conditions. Results provided initial validation that CT image acquisition and reconstruction parameters can be optimized for observer performance rather than system performance (i.e., contrast-to-noise ratio, MTF, and NPS). The NPWE model was further applied for the comparison of FBP with a novel model-based iterative reconstruction algorithm to assess its potential for dose reduction.
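
    For reference, the Fourier-domain detectability index of the NPWE observer highlighted above is commonly written in terms of the task function W, the system MTF, the NPS, and the eye filter E. The form below is a standard textbook expression assumed here, not necessarily the exact formulation used in the study.

```latex
d'^2_{\mathrm{NPWE}} =
\frac{\left[\iint |W(u,v)|^2 \, \mathrm{MTF}^2(u,v) \, E^2(u,v) \, \mathrm{d}u \, \mathrm{d}v\right]^2}
     {\iint |W(u,v)|^2 \, \mathrm{MTF}^2(u,v) \, E^4(u,v) \, \mathrm{NPS}(u,v) \, \mathrm{d}u \, \mathrm{d}v}
```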

  9. Comparison of HSPF and SWAT models performance for runoff and sediment yield prediction.

    Science.gov (United States)

    Im, Sangjun; Brannan, Kevin M; Mostaghimi, Saied; Kim, Sang Min

    2007-09-01

    A watershed model can be used to better understand the relationship between land use activities and the hydrologic/water quality processes that occur within a watershed. A physically based, distributed-parameter model (SWAT) and a conceptual, lumped-parameter model (HSPF) were selected and their performance was compared in simulating runoff and sediment yields from the Polecat Creek watershed in Virginia, which is 12,048 ha in size. A monitoring project was conducted in the Polecat Creek watershed during the period of October 1994 to June 2000. The observed data (stream flow and sediment yield) from the monitoring project were used in the calibration and validation of the models. The period of September 1996 to June 2000 was used for calibration and October 1994 to December 1995 was used for validation of the models. The outputs from the models were compared to the observed data at several sub-watershed outlets and at the watershed outlet of the Polecat Creek watershed. The results indicated that both models were generally able to simulate stream flow and sediment yields well during both the calibration and validation periods. For annual and monthly loads, HSPF simulated hydrology and sediment yield more accurately than SWAT at all monitoring sites within the watershed. The results of this study indicate that both the SWAT and HSPF watershed models performed sufficiently well in the simulation of stream flow and sediment yield, with HSPF performing moderately better than SWAT for simulation time-steps greater than a month.

  10. Assessment of three turbulence model performances in predicting water jet flow plunging into a liquid pool

    Directory of Open Access Journals (Sweden)

    Zidouni Kendil Faiza

    2010-01-01

    Full Text Available The main purpose of the current study is to numerically investigate, through computational fluid dynamics modeling, a water jet injected vertically downward through a straight circular pipe into a water bath. The study also aims to obtain a better understanding of jet behavior, air entrainment and the dispersion of bubbles in the developing flow region. For these purposes, three-dimensional air and water flows were modeled using the volume of fluid technique. The equations in question were formulated using the density and viscosity of a 'gas-liquid mixture', described in terms of the phase volume fraction. Three high-Reynolds-number turbulence models have been considered, i.e., the standard k-ε model, the realizable k-ε model, and the Reynolds stress model. The predicted flow patterns for the realizable k-ε model match well with experimental measurements found in the available literature. Nevertheless, some discrepancies regarding velocity relaxation and the turbulent momentum distribution in the pool are still observed for both the standard k-ε and the Reynolds stress models.

  11. A Prediction Model for the High-Temperature Performance of Lump Coal Used in Corex

    Science.gov (United States)

    She, Yuan; Liu, Qihang; Wu, Keng; Ren, Hailiang

    By observing the microstructures and microscopic morphologies of different coal chars during the coking process, the characteristics of COREX coal char's microstructure and properties were analyzed. MCRI and MCSR were used to describe, respectively, the reactivity and the post-reaction strength of coal char, with those of blast furnace (BF) coke for reference, and the physical meanings of MCRI and MCSR were then given. MCRI and MCSR are determined by the composition of the coal char. For the two kinds of COREX coal, a formula to forecast the hot performance of coal char was set up by multiple linear regression between the hot performance and the microstructure of the coal char at 1100°C, and the predicted values of the coal char's hot performance at 600°C, 800°C and 1000°C coincided well with the measured values.

  12. Performance of predictive models in phase equilibria of complex associating systems: PC-SAFT and CEOS/GE

    Directory of Open Access Journals (Sweden)

    N. Bender

    2013-03-01

    Full Text Available Cubic equations of state combined with predictive excess Gibbs energy models (like UNIFAC) and equations of state based on applied statistical mechanics are among the main alternatives for predicting phase equilibria involving polar substances over wide temperature and pressure ranges. In this work, the predictive performance of PC-SAFT with an association contribution is compared with that of Peng-Robinson (PR) combined with UNIFAC (Do) through mixing rules. Binary and multi-component systems involving polar and non-polar substances were analyzed, and the results were compared to experimental data available in the literature. The results show a similar predictive performance for PC-SAFT with association and for cubic equations combined with UNIFAC (Do) through mixing rules. Although PC-SAFT with association requires fewer parameters, it is more complex and requires more computation time.

  13. TRITIUM RESERVOIR STRUCTURAL PERFORMANCE PREDICTION

    Energy Technology Data Exchange (ETDEWEB)

    Lam, P.S.; Morgan, M.J

    2005-11-10

    The burst test is used to assess the material performance of tritium reservoirs in the surveillance program, in which reservoirs have been in service for extended periods of time. A materials system model and finite element procedure were developed under a Savannah River Site Plant-Directed Research and Development (PDRD) program to predict the structural response under a full range of loading and aged material conditions of the reservoir. The results show that the predicted burst pressure and volume ductility are in good agreement with the actual burst test results for the unexposed units. The material tensile properties used in the calculations were obtained from a curved tensile specimen harvested from a companion reservoir by Electric Discharge Machining (EDM). In the absence of tensile data for exposed and aged material, literature data were used to demonstrate the methodology in terms of the helium-3 concentration in the metal and the depth of penetration in the reservoir sidewall. It can be shown that the volume ductility decreases significantly with the presence of tritium and its decay product, helium-3, in the metal, as was observed in the laboratory-controlled burst tests. The model and analytical procedure provide a predictive tool for reservoir structural integrity under aging conditions. It is recommended that benchmark tests and analysis for aged materials be performed. The methodology can be augmented to predict performance for reservoirs with flaws.

  14. An evaluation of 1D loss model collections for the off-design performance prediction of automotive turbocharger compressors

    International Nuclear Information System (INIS)

    Harley, P; Spence, S; Early, J; Filsinger, D; Dietrich, M

    2013-01-01

    Single-zone modelling is used to assess different collections of impeller 1D loss models. Three collections of loss models have been identified in the literature, and the background to each of these collections is discussed. Each collection is evaluated using three modern automotive turbocharger-style centrifugal compressors, and comparisons of performance for each of the collections are made. An empirical data set taken from standard hot gas stand tests for each turbocharger is used as a baseline for comparison. Compressor range is predicted in this study; impeller diffusion ratio is shown to be a useful method of predicting compressor surge in 1D, and choke is predicted using basic compressible flow theory. The compressor designer can use this as a guide to identify the most compatible collection of losses for turbocharger compressor design applications. The analysis indicates the most appropriate collection for the design of automotive turbocharger centrifugal compressors.

  15. Validation of the hospital outcome prediction equation (HOPE) model for monitoring clinical performance.

    Science.gov (United States)

    Duke, G J; Graco, M; Santamaria, J; Shann, F

    2009-05-01

    The aim of this study was to validate a risk-adjusted hospital outcome prediction equation (HOPE) using a statewide administrative dataset. Retrospective observational study using multivariate logistic regression modelling. Calibration and discrimination were assessed by standardized mortality ratio (SMR), area under the receiver operating characteristic plot (ROC AUC), Hosmer-Lemeshow contingency tables and goodness-of-fit statistic in an independent dataset, and in all 23 important tertiary, metropolitan and regional hospitals. The dependent variable was in-hospital death. All consecutive adult hospital separations between 1 July 2004 and 30 June 2006, excluding obstetric and day-case only admissions, from all acute health services within the State of Victoria, Australia were included. A total of 379 676 consecutive records (1 July 2004 to 30 June 2005) was used to derive the HOPE model. Six variables (age, male sex, admission diagnosis, emergency admission, aged-care resident and inter-hospital transfer) were selected for inclusion in the final model. It was validated in the 384 489 consecutive records from the following year (1 July 2005 to 30 June 2006). The 95% confidence interval for the SMR was 0.98-1.02, and for the ROC AUC, 0.87-0.88. Discrimination and (one or more) calibration criteria were achieved in 22 (96%) of the 23 hospitals. The HOPE model is a simple risk-adjusted outcome prediction tool, based on six variables from data that are routinely collected for administrative purposes and appears to be a reliable predictor of hospital outcome.
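
    Calibration (SMR) and discrimination (ROC AUC) of a risk-adjustment model of this kind can be checked with a few lines of code. The sketch below is a generic illustration on synthetic admission records split into derivation and validation halves; the predictors and coefficients are invented and do not reproduce the HOPE model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical admissions: [age, emergency admission flag] -> in-hospital death
n = 5000
X = np.column_stack([rng.normal(65, 15, n), rng.integers(0, 2, n)])
logit = -6.0 + 0.05 * X[:, 0] + 1.0 * X[:, 1]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression().fit(X[:2500], y[:2500])   # derivation half
p = model.predict_proba(X[2500:])[:, 1]                # validation half

smr = y[2500:].sum() / p.sum()                         # observed / expected deaths
auc = roc_auc_score(y[2500:], p)                       # discrimination
print(f"SMR = {smr:.2f}, ROC AUC = {auc:.2f}")
```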

  16. Prediction of mill performance

    Energy Technology Data Exchange (ETDEWEB)

    P.A. Bennett [CoalTech Pty Ltd. (Australia)

    2005-07-01

    This Australian Coal Association Research Program (ACARP) project aimed to demonstrate that the Hardgrove Grindability Index (HGI), coupled with standard petrographic analysis, can be used to greatly improve the prediction of mill power requirements, mill throughput and product size. The project examined the mill test data from ACIRL's pilot-scale vertical spindle mill on 96 coals. A total of 360 mill tests, conducted under a wide range of throughputs, roll pressures and classifier settings, were included in the data set. The mill performance of maceral groups or microlithotypes was assumed to be additive, that is, each maceral group or microlithotype behaved independently and a size fraction of the product PF was the volume-weighted sum of the petrographic components of that size fraction. Based on this assumption it was possible to determine the size distribution of the product PF, for a wide range of milling conditions, based solely on petrographic analysis. Microlithotypes were not determined directly but were estimated from the maceral analysis. The size distribution of individual maceral groups or microlithotypes can also be estimated from the developed correlations. The size distribution determined from petrographic analysis proved to be a better estimate than that determined from the HGI. Mill power can be estimated from petrographic analysis, but the HGI was found to be a better predictor of mill power. 19 refs., 4 figs., 1 tab.
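
    The additivity assumption described above reduces to a volume-weighted sum over the petrographic components. A minimal sketch, with hypothetical maceral fractions and size data rather than the ACARP test values:

```python
# Hypothetical volume fractions of maceral groups in the feed coal
volume_fraction = {"vitrinite": 0.60, "inertinite": 0.30, "liptinite": 0.10}

# Hypothetical fraction of each maceral group reporting below 75 um after milling
passing_75um = {"vitrinite": 0.72, "inertinite": 0.55, "liptinite": 0.65}

# Product PF passing 75 um = volume-weighted sum over the petrographic components
pf_passing = sum(volume_fraction[m] * passing_75um[m] for m in volume_fraction)
print(f"predicted product passing 75 um: {pf_passing:.1%}")
```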

  17. Predicting the Consequences of Workload Management Strategies with Human Performance Modeling

    Science.gov (United States)

    Mitchell, Diane Kuhl; Samma, Charneta

    2011-01-01

    Human performance modelers at the US Army Research Laboratory have developed an approach for establishing high Soldier workload that can be used for analyses of proposed system designs. Their technique includes three key components. To implement the approach in an experiment, the researcher would create two experimental conditions: a baseline and a design alternative. Next they would identify a scenario in which the test participants perform all their representative concurrent interactions with the system. This scenario should include any events that would trigger a different set of goals for the human operators. They would collect workload values during both the baseline and the alternative design condition to see if the alternative increased workload and decreased performance. They have successfully implemented this approach for military vehicle designs using the human performance modeling tool IMPRINT. Although ARL researchers use IMPRINT to implement their approach, it can be applied to any workload analysis. Researchers using other modeling and simulation tools, or conducting experiments or field tests, can use the same approach.

  18. Performance of Reynolds Averaged Navier-Stokes Models in Predicting Separated Flows: Study of the Hump Flow Model Problem

    Science.gov (United States)

    Cappelli, Daniele; Mansour, Nagi N.

    2012-01-01

    Separation can be seen in most aerodynamic flows, but accurate prediction of separated flows is still a challenging problem for computational fluid dynamics (CFD) tools. The behavior of several Reynolds Averaged Navier-Stokes (RANS) models in predicting the separated flow over a wall-mounted hump is studied. The strengths and weaknesses of the most popular RANS models (Spalart-Allmaras, k-epsilon, k-omega, k-omega-SST) are evaluated using the open source software OpenFOAM. The hump flow modeled in this work has been documented in the 2004 CFD Validation Workshop on Synthetic Jets and Turbulent Separation Control. Only the baseline case is treated; the slot flow control cases are not considered in this paper. Particular attention is given to predicting the size of the recirculation bubble, the position of the reattachment point, and the velocity profiles downstream of the hump.

  19. Predictive Maturity of Multi-Scale Simulation Models for Fuel Performance

    International Nuclear Information System (INIS)

    Atamturktur, Sez; Unal, Cetin; Hemez, Francois; Williams, Brian; Tome, Carlos

    2015-01-01

    The project proposed to provide a Predictive Maturity Framework with its companion metrics that (1) introduce a formalized, quantitative means to communicate information between interested parties, (2) provide scientifically dependable means to claim completion of Validation and Uncertainty Quantification (VU) activities, and (3) guide the decision makers in the allocation of Nuclear Energy's resources for code development and physical experiments. The project team proposed to develop this framework based on two complementary criteria: (1) the extent of experimental evidence available for the calibration of simulation models and (2) the sophistication of the physics incorporated in simulation models. The proposed framework is capable of quantifying the interaction between the required number of physical experiments and the degree of physics sophistication. The project team has developed this framework and implemented it with a multi-scale model for simulating creep of a reactor core cladding. The multi-scale model is composed of the viscoplastic self-consistent (VPSC) code at the meso-scale, which represents the visco-plastic behavior and changing properties of a highly anisotropic material, and a Finite Element (FE) code at the macro-scale to represent the elastic behavior and apply the loading. The framework developed takes advantage of the transparency provided by partitioned analysis, where independent constituent codes are coupled in an iterative manner. This transparency allows model developers to better understand and remedy the source of biases and uncertainties, whether they stem from the constituents or the coupling interface, by exploiting separate-effect experiments conducted within the constituent domain and integral-effect experiments conducted within the full-system domain. The project team has implemented this procedure with the multi-scale VPSC-FE model and demonstrated its ability to improve the predictive capability of the model. Within this

  20. Predictive Maturity of Multi-Scale Simulation Models for Fuel Performance

    Energy Technology Data Exchange (ETDEWEB)

    Atamturktur, Sez [Clemson Univ., SC (United States); Unal, Cetin [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hemez, Francois [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Williams, Brian [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Tome, Carlos [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-03-16

    The project proposed to provide a Predictive Maturity Framework with its companion metrics that (1) introduce a formalized, quantitative means to communicate information between interested parties, (2) provide scientifically dependable means to claim completion of Validation and Uncertainty Quantification (VU) activities, and (3) guide the decision makers in the allocation of Nuclear Energy’s resources for code development and physical experiments. The project team proposed to develop this framework based on two complementary criteria: (1) the extent of experimental evidence available for the calibration of simulation models and (2) the sophistication of the physics incorporated in simulation models. The proposed framework is capable of quantifying the interaction between the required number of physical experiments and the degree of physics sophistication. The project team has developed this framework and implemented it with a multi-scale model for simulating creep of a reactor core cladding. The multi-scale model is composed of the viscoplastic self-consistent (VPSC) code at the meso-scale, which represents the visco-plastic behavior and changing properties of a highly anisotropic material, and a Finite Element (FE) code at the macro-scale to represent the elastic behavior and apply the loading. The framework developed takes advantage of the transparency provided by partitioned analysis, where independent constituent codes are coupled in an iterative manner. This transparency allows model developers to better understand and remedy the source of biases and uncertainties, whether they stem from the constituents or the coupling interface, by exploiting separate-effect experiments conducted within the constituent domain and integral-effect experiments conducted within the full-system domain. The project team has implemented this procedure with the multi-scale VPSC-FE model and demonstrated its ability to improve the predictive capability of the model. Within this

  1. State-Space Modeling and Performance Analysis of Variable-Speed Wind Turbine Based on a Model Predictive Control Approach

    Directory of Open Access Journals (Sweden)

    H. Bassi

    2017-04-01

    Full Text Available Advancements in wind energy technologies have led wind turbines from fixed-speed to variable-speed operation. This paper introduces an innovative version of a variable-speed wind turbine based on a model predictive control (MPC) approach. The proposed approach provides maximum power point tracking (MPPT), whose main objective is to capture the maximum wind energy despite the variable nature of the wind speed. The proposed MPC approach also addresses the constraints of the two main operating regions of the wind turbine: the full-load and partial-load segments. The pitch angle for full load and the rotor torque for partial load are set concurrently in order to balance power generation and to reduce pitch-angle actuation. A mathematical analysis of the proposed system using a state-space approach is introduced. The simulation results using MATLAB/SIMULINK show that the performance of the wind turbine with the MPC approach is improved compared to the traditional PID controller at both low and high wind speeds.

  2. Development of an electrical model for a PV/battery system for performance prediction

    Energy Technology Data Exchange (ETDEWEB)

    Zahedi, A. [Monash Univ., Electrical and Computer Systems Engineering, Caulfield East, VIC (Australia)

    1998-09-01

    This paper presents an electrical model of a photovoltaic-battery system. The model helps in understanding the behaviour of a solar-battery system under various load and irradiance conditions and assists in investigating the performance of the system. Hybrid energy systems use different energy sources, such as solar and wind, with a backup unit such as a battery or a diesel generator. They are an economical option in areas remote from the national grid. In this context, the ability of the system to supply electric power efficiently is important. The difficulty arises from the uncertain renewable energy supply and load, and also from the non-linear characteristics of the components in the system. The purpose of this study is to examine the behaviour of a solar-battery system under various load and irradiance conditions and to investigate the performance of the system. As a result, an optimum system configuration and a correct, cost-effective sizing of the Balance of System (BOS) can be achieved. In this paper, the complete electrical circuit of the entire hybrid system, the mathematical model and the computational technique are presented. (Author)

  3. Simplified predictive models for CO2 sequestration performance assessment

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Srikanta [Battelle Memorial Inst., Columbus, OH (United States); Ganesh, Priya [Battelle Memorial Inst., Columbus, OH (United States); Schuetter, Jared [Battelle Memorial Inst., Columbus, OH (United States); He, Jincong [Battelle Memorial Inst., Columbus, OH (United States); Jin, Zhaoyang [Battelle Memorial Inst., Columbus, OH (United States); Durlofsky, Louis J. [Battelle Memorial Inst., Columbus, OH (United States)

    2015-09-30

    Latin Hypercube sampling (LHS) based design with a multidimensional kriging metamodel fit. For roughly the same number of simulations, the LHS-based metamodel yields a more robust predictive model, as verified by a k-fold cross-validation approach (with data split into training and test sets) as well as by validation with an independent dataset. In the third category, a reduced-order modeling procedure is utilized that combines proper orthogonal decomposition (POD) for reducing problem dimensionality with trajectory-piecewise linearization (TPWL) in order to represent the system response at new control settings from a limited number of training runs. Significant savings in computational time are observed, with reasonable accuracy, from the POD-TPWL reduced-order model for both vertical and horizontal well problems – which could be important in the context of history matching, uncertainty quantification and optimization problems. The simplified-physics and statistical-learning based models are also validated using an uncertainty analysis framework. Reference cumulative distribution functions of key model outcomes (i.e., plume radius and reservoir pressure buildup) generated using a 97-run full-physics simulation are successfully validated against the CDFs from 10,000-sample probabilistic simulations using the simplified models. The main contribution of this research project is the development and validation of a portfolio of simplified modeling approaches that will enable rapid feasibility and risk assessment for CO2 sequestration in deep saline formations.
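
    The LHS design with a kriging metamodel assessed by k-fold cross-validation can be sketched as follows; the two inputs, their ranges, and the placeholder response function are invented stand-ins and are not the study's reservoir simulator.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import cross_val_score

# Latin Hypercube design over two hypothetical inputs: permeability (mD), injection rate (Mt/yr)
sampler = qmc.LatinHypercube(d=2, seed=1)
X = qmc.scale(sampler.random(n=60), l_bounds=[10.0, 0.5], u_bounds=[500.0, 5.0])

# Placeholder response standing in for a full-physics simulator output (e.g. plume radius, km)
y = 0.8 * np.log(X[:, 0]) + 1.5 * np.sqrt(X[:, 1]) + np.random.default_rng(1).normal(0, 0.1, 60)

# Kriging (Gaussian process) metamodel, assessed by 5-fold cross-validation (R^2 per fold)
gp = GaussianProcessRegressor(normalize_y=True)
scores = cross_val_score(gp, X, y, cv=5, scoring="r2")
print("k-fold R^2 scores:", np.round(scores, 3))
```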

  4. Analytical model for predicting the performance of cross-flow thermoelectric liquid coolers

    International Nuclear Information System (INIS)

    Mathiprakasam, B.; Sutikno, T.

    1984-01-01

    An analytical model of a cross-flow thermoelectric liquid cooler was formulated, and its details are presented in this paper. The model accounts for the changes in hot and cold stream temperatures as they flow over the hot/cold junctions through the use of energy conservation equations. Accordingly, the temperatures of the hot and cold junctions are position-dependent. Further, finite heat transfer coefficients between the junctions and the bulk fluid streams have also been incorporated in this model. A closed-form solution of the resulting heat transfer equations was used to design a 350 W liquid cooler. The current flow and the electric power requirements to deliver the design cooling capacity were calculated using this solution. The effects of the area/length ratio of the thermoelectric elements, the mass flow rates, the inlet temperatures of the cold and hot streams, and the heat transfer coefficients on the cold and hot sides on cooler performance were also studied.

  5. Chromosomal regions involved in hybrid performance and heterosis: their AFLP(R)-based identification and practical use in prediction models.

    Science.gov (United States)

    Vuylsteke, M; Kuiper, M; Stam, P

    2000-09-01

    In this paper, a novel approach towards the prediction of hybrid performance and heterosis is presented. Here, we describe an approach based on: (i) the assessment of associations between AFLP(R) markers and hybrid performance and specific combining ability (SCA) across a set of hybrids; and (ii) the assumption that the joint effect of genetic factors (loci) determined this way can be obtained by addition. Estimated gene effects for grain yield varied from additive, partial dominance to overdominance. This procedure was applied to 53 interheterotic hybrids out of a 13 by 13 half-diallel among maize inbreds, evaluated for grain yield. The hybrid value, representing the joint effect of the genetic factors, accounted for up to 62.4% of the variation in the hybrid performance observed, whereas the corresponding efficiency of the SCA model was 36.8%. Efficiency of the prediction for hybrid performance was evaluated by means of a cross-validation procedure for grain yield of (i) the 53 interheterotic hybrids and (ii) 16 hybrids partly related to the 13 by 13 half-diallel. Comparisons in prediction efficiency with the 'distance' model were made. Because the map position of the selected markers is known, putative quantitative trait loci (QTL) affecting grain yield, in terms of hybrid performance or heterosis, are identified. Some QTL of grain yield detected in the present study were located in the vicinity of loci reported earlier as having quantitative effects on grain yield.

  6. Performance prediction of PM 2.5 removal of real fibrous filters with a novel model considering rebound effect

    OpenAIRE

    Cai, Rong-Rong; Zhang, Li-Zhi; Yan, Yuying

    2017-01-01

    Fibrous filters have proved to be one of the most cost-effective ways of removing particulate matter (specifically PM2.5). However, due to the complex structure of real fibrous filters, it is difficult to accurately predict the performance of PM2.5 removal. In this study, a new 3D filtration modeling approach is proposed to predict the removal efficiencies of particles by real fibrous filters, taking the particle rebound effect into consideration. A real filter is considered and ...

  7. Predicting quality of life after breast cancer surgery using ANN-based models: performance comparison with MR.

    Science.gov (United States)

    Tsai, Jinn-Tsong; Hou, Ming-Feng; Chen, Yao-Mei; Wan, Thomas T H; Kao, Hao-Yun; Shi, Hon-Yi

    2013-05-01

    The goal was to develop models for predicting long-term quality of life (QOL) after breast cancer surgery. Data were obtained from 203 breast cancer patients who completed the SF-36 health survey before and 2 years after surgery. Two of the models used to predict QOL after surgery were artificial neural networks (ANNs): one multilayer perceptron (MLP) network and one radial basis function (RBF) network. The third model was a multiple regression (MR) model. The criteria for evaluating the accuracy of the models were the mean square error (MSE) and the mean absolute percentage error (MAPE). Compared to the MR model, the ANN-based models generally had smaller MSE and MAPE values in the test data set; the one exception was the second-year MSE in the test set. Most MAPE values for the ANN models ranged from 10% to 20%, the exception being the 6-month physical component summary score (PCS), which ranged from 23.19% to 26.86%. Comparison of the evaluation criteria showed that the ANN-based systems outperformed the MR system in terms of prediction accuracy. In both the MLP and RBF networks, surgical procedure type was the most sensitive parameter affecting the PCS, and preoperative functional status was the most sensitive parameter affecting the mental component summary score. The three systems can be combined to obtain a conservative prediction, and such a combined approach is a potential supplemental tool for predicting long-term QOL after surgical treatment for breast cancer. Patients should also be advised that their postoperative QOL might depend not only on the success of their operations but also on their preoperative functional status.
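
    The two evaluation criteria named here, MSE and MAPE, are straightforward to compute. A minimal sketch with hypothetical score data (not the study's SF-36 results):

```python
import numpy as np

def mse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean((y_true - y_pred) ** 2)

def mape(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0

# Hypothetical 2-year physical component summary (PCS) scores for five patients
actual = [42.0, 51.5, 38.2, 47.0, 55.1]
ann    = [44.1, 49.8, 40.0, 46.2, 53.9]   # e.g. a neural network's output
mr     = [46.5, 47.0, 43.1, 44.0, 51.0]   # a multiple regression's output

for name, pred in [("ANN", ann), ("MR", mr)]:
    print(f"{name}: MSE = {mse(actual, pred):.2f}, MAPE = {mape(actual, pred):.1f}%")
```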

  8. Performance of in-hospital mortality prediction models for acute hospitalization: Hospital Standardized Mortality Ratio in Japan

    Directory of Open Access Journals (Sweden)

    Motomura Noboru

    2008-11-01

    Full Text Available Abstract Objective In-hospital mortality is an important performance measure for quality improvement, although it requires proper risk adjustment. We set out to develop in-hospital mortality prediction models for acute hospitalization using a nation-wide electronic administrative record system in Japan. Methods Administrative records of 224,207 patients (discharged from 82 hospitals in Japan between July 1, 2002 and October 31, 2002) were randomly split into preliminary (179,156 records) and test (45,051 records) groups. Study variables included Major Diagnostic Category, age, gender, ambulance use, admission status, length of hospital stay, comorbidity, and in-hospital mortality. ICD-10 codes were converted to calculate comorbidity scores based on Quan's methodology. Multivariate logistic regression analysis was then performed using in-hospital mortality as a dependent variable. C-indexes were calculated across risk groups in order to evaluate model performances. Results In-hospital mortality rates were 2.68% and 2.76% for the preliminary and test datasets, respectively. C-index values were 0.869 for the model that excluded length of stay and 0.841 for the model that included length of stay. Conclusion Risk models developed in this study included a set of variables easily accessible from administrative data, and still successfully exhibited a high degree of prediction accuracy. These models can be used to estimate in-hospital mortality rates of various diagnoses and procedures.

  9. Prediction Models for Licensure Examination Performance using Data Mining Classifiers for Online Test and Decision Support System

    Directory of Open Access Journals (Sweden)

    Ivy M. Tarun

    2017-05-01

    Full Text Available This study focused on two main points: the generation of licensure examination performance prediction models, and the development of a Decision Support System. In this study, data mining classifiers were used to generate the models using WEKA (Waikato Environment for Knowledge Analysis). These models were integrated into the Decision Support System as default models to support decision making as far as appropriate interventions during review sessions are concerned. The system developed mainly involves the repeated generation of MR models for performance prediction and also provides a Mock Board Exam for the reviewees to take. From the models generated, it is established that the General Weighted Average of the reviewees in their General Education subjects, the result of the Mock Board Exam, and whether the reviewee is conducting a self-review are good predictors of licensure examination performance. Further, it is concluded that the General Weighted Average of the reviewees in their Major or Content courses is the best predictor of licensure examination performance. Based on the evaluation results, the system satisfied its intended functions and is efficient, usable, reliable and portable. Hence, it can already be used, not as a substitute for face-to-face review sessions, but to enhance the reviewees' licensure examination review and to allow early identification of those who are likely to have difficulty in passing the licensure examination, thereby providing sufficient time and opportunities for appropriate interventions.

  10. Predictive validity of a three-dimensional model of performance anxiety in the context of tae-kwon-do.

    Science.gov (United States)

    Cheng, Wen-Nuan Kara; Hardy, Lew; Woodman, Tim

    2011-02-01

    We tested the predictive validity of the recently validated three-dimensional model of performance anxiety (Chang, Hardy, & Markland, 2009) with elite tae-kwon-do competitors (N = 99). This conceptual framework emphasized the adaptive potential of anxiety by including a regulatory dimension (reflected by perceived control) along with the intensity-oriented dimensions of cognitive and physiological anxiety. Anxiety was assessed 30 min before a competitive contest using the Three-Factor Anxiety Inventory. Competitors rated their performance on a tae-kwon-do-specific performance scale within 30 min after completion of their contest. Moderated hierarchical regression analyses revealed initial support for the predictive validity of the three-dimensional performance anxiety model. The regulatory dimension of anxiety (perceived control) revealed significant main and interactive effects on performance. This dimension appeared to be adaptive, as performance was better under high than low perceived control, and best vs. worst performance was associated with highest vs. lowest perceived control, respectively. Results are discussed in terms of the importance of the regulatory dimension of anxiety.

  11. Ages and transit times as important diagnostics of model performance for predicting carbon dynamics in terrestrial vegetation models

    Science.gov (United States)

    Ceballos-Núñez, Verónika; Richardson, Andrew D.; Sierra, Carlos A.

    2018-03-01

    The global carbon cycle is strongly controlled by the source/sink strength of vegetation as well as the capacity of terrestrial ecosystems to retain this carbon. These dynamics, as well as processes such as the mixing of old and newly fixed carbon, have been studied using ecosystem models, but different assumptions regarding the carbon allocation strategies and other model structures may result in highly divergent model predictions. We assessed the influence of three different carbon allocation schemes on the C cycling in vegetation. First, we described each model with a set of ordinary differential equations. Second, we used published measurements of ecosystem C compartments from the Harvard Forest Environmental Measurement Site to find suitable parameters for the different model structures. And third, we calculated C stocks, release fluxes, radiocarbon values (based on the bomb spike), ages, and transit times. We obtained model simulations in accordance with the available data, but the time series of C in foliage and wood need to be complemented with other ecosystem compartments in order to reduce the high parameter collinearity that we observed, and reduce model equifinality. Although the simulated C stocks in ecosystem compartments were similar, the different model structures resulted in very different predictions of age and transit time distributions. In particular, the inclusion of two storage compartments resulted in the prediction of a system mean age that was 12-20 years older than in the models with one or no storage compartments. The age of carbon in the wood compartment of this model was also distributed towards older ages, whereas fast cycling compartments had an age distribution that did not exceed 5 years. As expected, models with C distributed towards older ages also had longer transit times. These results suggest that ages and transit times, which can be indirectly measured using isotope tracers, serve as important diagnostics of model structure
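
    For a linear, autonomous compartmental system dx/dt = Ax + u at steady state, the mean transit time and mean system age follow directly from the compartmental matrix. The sketch below uses an invented two-pool (foliage/wood) parameterization, not the Harvard Forest fits, and assumes the standard steady-state identities (transit time = total stock / total input; mean system age = -1ᵀA⁻¹x* / 1ᵀx*).

```python
import numpy as np

# Hypothetical two-pool vegetation carbon model: foliage -> wood, with respiration losses
# dx/dt = A @ x + u   (u in PgC/yr, rate constants in 1/yr)
u = np.array([1.0, 0.0])            # all newly fixed carbon enters the foliage pool
A = np.array([[-0.80, 0.00],        # foliage turns over at 0.8/yr
              [ 0.30, -0.05]])      # 0.3/yr of foliage losses go to wood; wood cycles at 0.05/yr

x_star = np.linalg.solve(-A, u)     # steady-state stocks, x* = -A^{-1} u

transit_time = x_star.sum() / u.sum()                            # mean transit time = stock / input
system_age = np.linalg.solve(-A, x_star).sum() / x_star.sum()    # mean system age

print("stocks:", np.round(x_star, 2),
      "| transit time:", round(transit_time, 1), "yr",
      "| mean system age:", round(system_age, 1), "yr")
```

    As the abstract suggests, adding a slow pool (the wood compartment here) pushes the mean system age well beyond the mean transit time.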

  12. Ages and transit times as important diagnostics of model performance for predicting carbon dynamics in terrestrial vegetation models

    Directory of Open Access Journals (Sweden)

    V. Ceballos-Núñez

    2018-03-01

    Full Text Available The global carbon cycle is strongly controlled by the source/sink strength of vegetation as well as the capacity of terrestrial ecosystems to retain this carbon. These dynamics, as well as processes such as the mixing of old and newly fixed carbon, have been studied using ecosystem models, but different assumptions regarding the carbon allocation strategies and other model structures may result in highly divergent model predictions. We assessed the influence of three different carbon allocation schemes on the C cycling in vegetation. First, we described each model with a set of ordinary differential equations. Second, we used published measurements of ecosystem C compartments from the Harvard Forest Environmental Measurement Site to find suitable parameters for the different model structures. And third, we calculated C stocks, release fluxes, radiocarbon values (based on the bomb spike, ages, and transit times. We obtained model simulations in accordance with the available data, but the time series of C in foliage and wood need to be complemented with other ecosystem compartments in order to reduce the high parameter collinearity that we observed, and reduce model equifinality. Although the simulated C stocks in ecosystem compartments were similar, the different model structures resulted in very different predictions of age and transit time distributions. In particular, the inclusion of two storage compartments resulted in the prediction of a system mean age that was 12–20 years older than in the models with one or no storage compartments. The age of carbon in the wood compartment of this model was also distributed towards older ages, whereas fast cycling compartments had an age distribution that did not exceed 5 years. As expected, models with C distributed towards older ages also had longer transit times. These results suggest that ages and transit times, which can be indirectly measured using isotope tracers, serve as important

  13. A Universal Model to Predict Roadheaders' Cutting Performance / Uniwersalny Model Do Prognozowania Postępu Prac Kombajnów Do Drążenia Tuneli

    Science.gov (United States)

    Ebrahimabadi, Arash; Goshtasbi, Kamran; Shahriar, Kourosh; Seifabad, Masoud Cheraghi

    2012-12-01

    The paper intends to generate a universal model to predict the performance of roadheaders for all kinds of rock formations. In this regard, we first take into account the outcomes of previous attempts to explore the performance of roadheaders in the Tabas Coal Mine project (the largest, fully mechanized coal mine in Iran). During those investigations, the rock mass brittleness index (RMBI) was defined in order to relate intact rock and rock mass characteristics to machine performance. The statistical analysis of data acquired from the Tabas field demonstrated that RMBI was highly correlated with the instantaneous cutting rate (ICR) of roadheaders (R² = 0.92). With the aim of constructing a universal model for predicting roadheader performance, we have now established a database consisting of measured cutting rates of roadheaders together with data gathered from field studies of the Tabas Coal Mine project and the Besiktas, Kurucesme, Baltalimani, Eyup and Halic tunnels in Turkey. Extensive modeling and analysis found a fair relationship, resulting in a new universal model for predicting the cutting rate of roadheaders (R² = 0.73). Applying both the local and the universal model at the Tabas Coal Mine showed a remarkable difference between measured and predicted ICR: the universal model gave a mean relative error of 0.359%, whereas the local model gave a lower value (mean relative error of 0.100%). It can thus be concluded that, instead of generating a universal model, separate localized models for different ground and machine conditions should be developed to improve the accuracy and reliability of performance prediction models.

  14. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    In medical statistics, many alternative strategies are available for building a prediction model based on training data. Prediction models are routinely compared by means of their prediction performance in independent validation data. If only one data set is available for training and validation......, then rival strategies can still be compared based on repeated bootstraps of the same data. Often, however, the overall performance of rival strategies is similar and it is thus difficult to decide for one model. Here, we investigate the variability of the prediction models that results when the same...... to distinguish rival prediction models with similar prediction performances. Furthermore, on the subject level a confidence score may provide useful supplementary information for new patients who want to base a medical decision on predicted risk. The ideas are illustrated and discussed using data from cancer...

  15. Geochemical modelling for predicting the long-term performance of zeolite-PRB to treat lead contaminated groundwater

    Science.gov (United States)

    Obiri-Nyarko, Franklin; Kwiatkowska-Malina, Jolanta; Malina, Grzegorz; Kasela, Tomasz

    2015-06-01

    The feasibility of using geochemical modelling to predict the performance of a zeolite permeable reactive barrier (PRB) for treating lead (Pb2+)-contaminated water was investigated in this study. A short-term laboratory column experiment was first performed with the zeolite (clinoptilolite) until the elution of 50 PV (1 PV = ca. 283 mL). Geochemical simulations of the one-dimensional transport of Pb2+, considering removal processes including ion exchange, adsorption and complexation, the concomitant release of exchangeable cations (Ca2+, Mg2+, Na+ and K+), and the changes in pH, were subsequently performed using the geochemical model PHREEQC. The results showed reasonable agreement between the experimental results and the numerical simulations, with the exception of Ca2+, for which a large discrepancy was observed. The model also indicated the formation of secondary mineral precipitates such as goethite and hematite throughout the experiment, whose effect on the hydraulic conductivity was found to be negligible. The results were further used to extrapolate the long-term performance of the zeolite; we found that its capacity would be completely exhausted at PV = 250 (ca. 3 days). The study thus demonstrates the applicability of PHREEQC for predicting the short- and long-term performance of zeolite PRBs, and it can be used to assist in the design and management of such barriers.

  16. Towards to a Predictive Model of Academic Performance Using Data Mining in the UTN - FRRe

    Directory of Open Access Journals (Sweden)

    David L. La Red Martínez

    2016-04-01

    Full Text Available Students completing the courses required to become an Engineer in Information Systems at the Resistencia Regional Faculty, National Technological University, Argentina (UTN-FRRe) face the challenge of attending classes and fulfilling course regularization requirements, often for correlative courses. Such is the case of the freshman course Algorithms and Data Structures: it must be regularized in order to be able to attend several second- and third-year courses. Based on the results of the project entitled "Profiling of students and academic performance through the use of data mining", 25/L059 - UTI1719, implemented in the aforementioned course in 2013-2015, a new project has started, which takes the descriptive analysis (what happened) as a starting point and uses advanced analytics to try to explain why it happened, what will happen, and how we can address it. Different data mining tools will be used for the study: clustering, neural networks, Bayesian networks, decision trees, regression and time series, etc. These tools allow different results to be obtained from different perspectives for the given problem. In this way, potentially problematic situations will be detected at the beginning of courses, and the necessary measures can be taken to address them. The aim of this project is therefore to identify students who are at risk of dropping out of the degree program, so that they can be given special support and that outcome avoided. Decision trees are mainly used as the predictive classification technique.

  17. Predicting Academic Performance

    OpenAIRE

    Marcos Gallacher

    2005-01-01

    This paper discusses the advantages and disadvantages associated with the use of "admission tests" as predictors of performance in undergraduate study programs. The paper analyzes the performance of economics and business administration students and links this performance to admission test results. It also analyzes aspects of performance related to (i) differential progress through time, and (ii) differences in the extent to which students have "areas of interest/ability". The paper conclu...

  18. The Ahuachapan geothermal field, El Salvador: Exploitation model, performance predictions, economic analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ripperda, M.; Bodvarsson, G.S.; Lippmann, M.J.; Witherspoon, P.A.; Goranson, C.

    1991-05-01

    The Earth Sciences Division of Lawrence Berkeley Laboratory (LBL) is conducting a reservoir evaluation study of the Ahuachapan geothermal field in El Salvador. This work is being performed in cooperation with the Comision Ejecutiva Hidroelectrica del Rio Lempa (CEL) and the Los Alamos National Laboratory (LANL) with funding from the US Agency for International Development (USAID). This report describes the work done during the second year of the study (FY89--90). The first year's report included (1) the development of geological and conceptual models of the field, (2) the evaluation of the reservoir's initial thermodynamic and chemical conditions and their changes during exploitation, (3) the evaluation of interference test data and the observed reservoir pressure decline and (4) the development of a natural state model for the field. In the present report the results of reservoir engineering studies to evaluate different production-injection scenarios for the Ahuachapan geothermal field are discussed. The purpose of the work was to evaluate possible reservoir management options to enhance as well as to maintain the productivity of the field during a 30-year period (1990--2020). The ultimate objective was to determine the feasibility of increasing the electrical power output at Ahuachapan from the current level of about 50 MW{sub e} to the total installed capacity of 95 MW{sub e}. 20 refs., 75 figs., 10 tabs.

  19. An analytical model for prediction of two-phase (noncondensable) flow pump performance

    International Nuclear Information System (INIS)

    Furuya, O.

    1985-01-01

    During operational transients or a hypothetical LOCA (loss of coolant accident) condition, the recirculating coolant of a PWR (pressurized water reactor) may flash into steam due to a loss of line pressure. Under such two-phase flow conditions, it is well known that the recirculation pump becomes unable to generate the same head as in the single-phase flow case. Similar situations also exist in oil well submersible pumps, where a fair amount of gas is contained in the oil. Based on the one-dimensional control volume method, an analytical method has been developed to determine the performance of pumps operating under two-phase flow conditions. The analytical method incorporates pump geometry, void fraction, flow slippage and flow regime into the basic formulation, but neglects compressibility and condensation effects. During the course of model development, it was found that the head degradation is mainly caused by greater acceleration of the liquid phase and deceleration of the gas phase than in the case of single-phase flow. The numerical results for head degradation and torque obtained with the model compared favorably with the air/water two-phase flow test data of Babcock and Wilcox (1/3 scale) and Creare (1/20 scale) pumps.

  20. Artificial intelligence models for predicting the performance of biological wastewater treatment plant in the removal of Kjeldahl Nitrogen from wastewater

    Science.gov (United States)

    Manu, D. S.; Thalla, Arun Kumar

    2017-11-01

    The current work demonstrates support vector machine (SVM) and adaptive neuro-fuzzy inference system (ANFIS) modeling for assessing the Kjeldahl nitrogen removal efficiency of a full-scale aerobic biological wastewater treatment plant. Influent variables such as pH, chemical oxygen demand, total solids (TS), free ammonia, ammonia nitrogen and Kjeldahl nitrogen are used as inputs during modeling. Model development focused on postulating an adaptive, functional, real-time and alternative approach for modeling the removal efficiency of Kjeldahl nitrogen. The input variables used for modeling were daily time-series data recorded at a wastewater treatment plant (WWTP) located in Mangalore during the period June 2014-September 2014. The performance of the ANFIS models developed using Gbell and trapezoidal membership functions (MFs), and of the SVM model, is assessed using statistical indices such as the root mean square error (RMSE), correlation coefficient (CC) and Nash-Sutcliffe efficiency (NSE). The errors in the prediction of effluent Kjeldahl nitrogen concentration by the SVM model appeared to be reasonable when compared to those of the ANFIS models with Gbell and trapezoidal MFs. From the performance evaluation of the developed SVM model, it is observed that the approach is capable of defining the inter-relationships between various wastewater quality variables, and thus SVM can potentially be applied to evaluate the efficiency of aerobic biological processes in WWTPs.
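
    A minimal sketch of the SVM-regression workflow together with the statistical indices named above (RMSE, CC, NSE); the influent data and the underlying relationship are synthetic, not the Mangalore WWTP records.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

# Hypothetical daily influent records: [pH, COD, total solids, free ammonia, NH3-N, TKN]
X = rng.normal([7.2, 450, 320, 4.0, 28, 45], [0.3, 60, 40, 1.0, 5, 8], size=(120, 6))
y = 0.60 + 0.0004 * X[:, 1] - 0.002 * X[:, 4] + rng.normal(0, 0.01, 120)  # placeholder removal efficiency

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X[:90], y[:90])            # train on the first 90 days
pred = model.predict(X[90:])         # predict the remaining 30 days

rmse = np.sqrt(np.mean((y[90:] - pred) ** 2))
cc = np.corrcoef(y[90:], pred)[0, 1]
nse = 1.0 - np.sum((y[90:] - pred) ** 2) / np.sum((y[90:] - y[90:].mean()) ** 2)
print(f"RMSE = {rmse:.3f}, CC = {cc:.2f}, NSE = {nse:.2f}")
```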

  1. Laser line scan performance prediction

    Science.gov (United States)

    Mahoney, Kevin L.; Schofield, Oscar; Kerfoot, John; Giddings, Tom; Shirron, Joe; Twardowski, Mike

    2007-09-01

    The effectiveness of sensors that use optical measurements for the laser detection and identification of subsurface mines is directly related to water clarity. The primary objective of the work presented here was to use optical data collected by UUV (Slocum Glider) surveys of an operational area to estimate the performance of an electro-optical identification (EOID) Laser Line Scan (LLS) system during RIMPAC 06, an international naval exercise off the coast of Hawaii. Measurements of optical backscattering and beam attenuation were made with a Wet Labs, Inc. Scattering Absorption Meter (SAM), mounted on a Rutgers University/Webb Research Slocum glider. The optical data universally indicated extremely clear water in the operational area, except very close to shore. The beam-c values from the SAM sensor were integrated to three attenuation lengths to provide an estimate of how well the LLS would perform in detecting and identifying mines in the operational area. Additionally, the processed in situ optical data served as near-real-time input to the Electro-Optic Detection Simulator, ver. 3 (EODES-3; Metron, Inc.) model for EOID performance prediction. Both methods of predicting LLS performance suggested a high probability of detection and probability of identification. These predictions were validated by the actual performance of the LLS, as the EOID system yielded imagery from which reliable mine identification could be made. Future plans include repeating this work in more optically challenging water types to demonstrate the utility of pre-mission UUV surveys of operational areas as a tactical decision aid for planning EOID missions.

  2. A Generic Model for Prediction of Separation Performance of Olefin/Paraffin Mixture by Glassy Polymer Membranes

    Directory of Open Access Journals (Sweden)

    A.A. Ghoreyshi

    2008-02-01

    Full Text Available The separation of olefin/paraffin mixtures is an important process in the petrochemical industries, traditionally performed by low-temperature distillation with high energy consumption, or by complex extractive distillation and adsorption techniques. Membrane separation is emerging as an alternative to these traditional processes because of its low energy demand and simple operation. Investigations by various researchers on polymeric membranes have found that certain glassy polymers are suitable materials for olefin/paraffin mixture separation. In this regard, some knowledge of the possible transport mechanisms of these processes would play a significant role in their design and application. In this study, the separation behavior of olefin/paraffin mixtures through glassy polymers was modeled by three different approaches: the so-called dual transport model, the basic adsorption-diffusion theory and the general Maxwell-Stefan formulation. The systems chosen to validate the developed transport models are the separation of an ethane-ethylene mixture by a 6FDA-6FpDA polyimide membrane and of a propane-propylene mixture by a 6FDA-TrMPD polyimide membrane, for which the individual sorption and permeation data are available in the literature. A critical examination of the dual transport model shows that this model clearly fails to predict even the proper trend for selectivities. The adjustment of permeabilities by accounting for the contribution of non-selective bulk flow in the transport model introduced no improvement in the predictability of the model. The modeling results based on the basic adsorption-diffusion theory revealed that an acceptable result is attainable in this approach only by using mixed-gas permeability data, which negates the advantage of predicting multicomponent separation performance from pure-component data. Finally, the results obtained from the model developed based on the Maxwell-Stefan formulation approach show a

  3. Performance of Multi Model Canonical Correlation Analysis (MMCCA) for prediction of Indian summer monsoon rainfall using GCMs output

    Science.gov (United States)

    Singh, Ankita; Acharya, Nachiketa; Mohanty, Uma Charan; Mishra, Gopbandhu

    2013-02-01

    The emerging advances in the field of dynamical prediction of the monsoon using state-of-the-art General Circulation Models (GCMs) have led to the development of various multi-model ensemble (MME) techniques. In the present study, the concept of Canonical Correlation Analysis is used to construct an MME (referred to as Multi Model Canonical Correlation Analysis, or MMCCA) for the prediction of Indian summer monsoon rainfall (ISMR) during June-July-August-September (JJAS). This method has been applied to the rainfall outputs of six different GCMs for the period 1982 to 2008. The prediction skill of ISMR by MMCCA is compared with the simple composite method (SCM) (i.e. the arithmetic mean of all GCMs), which is taken as a benchmark. After a rigorous analysis using different skill metrics such as the correlation coefficient and the index of agreement, the superiority of MMCCA over SCM is illustrated. The performance of both methods is also evaluated during six typical monsoon years, and the results indicate the potential of MMCCA over SCM in capturing the spatial pattern during extreme years.
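
    A CCA-based multi-model combination can be contrasted with the simple composite (plain multi-model mean) in a few lines. The sketch below uses scikit-learn's CCA on synthetic single-point rainfall anomalies as a loose stand-in for the study's MMCCA scheme, which operates on gridded GCM fields.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(3)

# Hypothetical JJAS rainfall anomalies: 27 years of observations and 6 GCM hindcasts
truth = rng.normal(0, 1, 27)
gcms = np.column_stack([0.6 * truth + rng.normal(0, 0.8, 27) for _ in range(6)])

train, test = slice(0, 20), slice(20, 27)

# Simple composite method (SCM): arithmetic mean of all GCMs
scm = gcms[test].mean(axis=1)

# CCA-based combination, trained on the earlier years
cca = CCA(n_components=1).fit(gcms[train], truth[train].reshape(-1, 1))
mmcca = cca.predict(gcms[test]).ravel()

for name, pred in [("SCM", scm), ("CCA", mmcca)]:
    print(name, "correlation with observations:",
          round(np.corrcoef(truth[test], pred)[0, 1], 2))
```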

  4. Assessment of performance and utility of mortality prediction models in a single Indian mixed tertiary intensive care unit.

    Science.gov (United States)

    Sathe, Prachee M; Bapat, Sharda N

    2014-01-01

    To assess the performance and utility of two mortality prediction models viz. Acute Physiology and Chronic Health Evaluation II (APACHE II) and Simplified Acute Physiology Score II (SAPS II) in a single Indian mixed tertiary intensive care unit (ICU). Secondary objectives were bench-marking and setting a base line for research. In this observational cohort, data needed for calculation of both scores were prospectively collected for all consecutive admissions to 28-bedded ICU in the year 2011. After excluding readmissions, discharges within 24 h and age predicted mortality had strong association with true mortality (R (2) = 0.98 for APACHE II and R (2) = 0.99 for SAPS II). Both models performed poorly in formal Hosmer-Lemeshow goodness-of-fit testing (Chi-square = 12.8 (P = 0.03) for APACHE II, Chi-square = 26.6 (P = 0.001) for SAPS II) but showed good discrimination (area under receiver operating characteristic curve 0.86 ± 0.013 SE (P care and comparing performances of different units without customization. Considering comparable performance and simplicity of use, efforts should be made to adapt SAPS II.

  5. Validating the Performance of the Modified Early Obstetric Warning System Multivariable Model to Predict Maternal Intensive Care Unit Admission.

    Science.gov (United States)

    Ryan, Helen M; Jones, Meghan A; Payne, Beth A; Sharma, Sumedha; Hutfield, Anna M; Lee, Tang; Ukah, U Vivian; Walley, Keith R; Magee, Laura A; von Dadelszen, Peter

    2017-09-01

    To evaluate the performance of the Modified Early Obstetric Warning System (MEOWS) to predict maternal ICU admission in an obstetric population. Case-control study. Two maternity units in Vancouver, Canada, one with ICU facilities, between January 1, 2000, and December 31, 2011. Pregnant or recently delivered (≤6 weeks) women admitted to the hospital for >24 hours. Three control patients were randomly selected per case and matched for year of admission. Retrospective, observational, case-control validation study investigating the physiologic predictors of admission in the 24-hour period preceding either ICU admission >24 hours (cases) or following admission (control patients). Model performance was assessed based on sensitivity, specificity, and predictive values. Forty-six women were admitted to the ICU for >24 hours (0.51/1000 deliveries); the study included 138 randomly selected control patients. There were no maternal deaths in the cohort. MEOWS had high sensitivity (0.96) but low specificity (0.54) for ICU admission >24 hours, whereas ≥1 red trigger maintained sensitivity (0.96) and improved specificity (0.73). Altering MEOWS trigger parameters may improve the accuracy of MEOWS in predicting ICU admission. Formal modelling of a MEOWS scoring system is required to support evidence-based care. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
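
    Sensitivity, specificity and predictive values follow directly from the 2x2 case-control table. The counts below are illustrative values chosen to be roughly consistent with the reported sensitivity and specificity; they are not the published table.

```python
def diagnostic_performance(tp, fn, fp, tn):
    """Sensitivity, specificity and predictive values from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }

# Hypothetical counts: cases = ICU admissions >24 h, controls = matched admissions;
# "trigger positive" means the warning system fired in the preceding 24 h
perf = diagnostic_performance(tp=44, fn=2, fp=63, tn=75)
print({k: round(v, 2) for k, v in perf.items()})
```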

  6. Predictive performance for population models using stochastic differential equations applied on data from an oral glucose tolerance test.

    Science.gov (United States)

    Møller, Jonas B; Overgaard, Rune V; Madsen, Henrik; Hansen, Torben; Pedersen, Oluf; Ingwersen, Steen H

    2010-02-01

    Several articles have investigated stochastic differential equations (SDEs) in PK/PD models, but few have quantitatively investigated the benefits to predictive performance of models based on real data. Estimation of first-phase insulin secretion, which reflects beta-cell function, using models of the OGTT is a difficult problem in need of further investigation. The present work aimed at investigating the power of SDEs to predict the first-phase insulin secretion (AIR(0-8)) in the IVGTT based on parameters obtained from the minimal model of the OGTT, published by Breda et al. (Diabetes 50(1):150-158, 2001). In total, 174 subjects underwent both an OGTT and a tolbutamide-modified IVGTT. Estimation of parameters in the oral minimal model (OMM) was performed using the FOCE method in NONMEM VI on insulin and C-peptide measurements. The suggested SDE models were based on a continuous AR(1) process, i.e. the Ornstein-Uhlenbeck process, and the extended Kalman filter was implemented in order to estimate the parameters of the models. Inclusion of the Ornstein-Uhlenbeck (OU) process improved the description of the variation in the data, as measured by the autocorrelation function (ACF) of the one-step prediction errors. A main result was that application of SDE models improved the correlation between the individual first-phase indexes obtained from the OGTT and AIR(0-8) (r = 0.36 to r = 0.49 and r = 0.32 to r = 0.47 with C-peptide and insulin measurements, respectively). In addition to the increased correlation, the indexes obtained using the SDE models also more correctly reflected the properties of the first-phase indexes obtained from the IVGTT. In general, it is concluded that the presented SDE approach not only caused the autocorrelation of errors to decrease but also improved the estimation of clinical measures obtained from the glucose tolerance tests. Since the estimation time of the extended models was not heavily increased compared to the basic models, the applied method
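
    The continuous AR(1) noise term referred to above is the Ornstein-Uhlenbeck process. A minimal Euler-Maruyama simulation is sketched below, with invented parameters that are unrelated to the insulin model.

```python
import numpy as np

def simulate_ou(theta=1.0, mu=0.0, sigma=0.5, x0=0.0, dt=0.01, n_steps=5000, seed=4):
    """Euler-Maruyama simulation of dX_t = theta * (mu - X_t) dt + sigma dW_t."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        x[k + 1] = x[k] + theta * (mu - x[k]) * dt + sigma * dw
    return x

path = simulate_ou()
print("sample mean:", round(path.mean(), 3), "| sample std:", round(path.std(), 3))
# For long runs the sample std should approach sigma / sqrt(2 * theta) ~= 0.354
```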

  7. South African mid-summer seasonal rainfall prediction performance by a coupled ocean-atmosphere model

    CSIR Research Space (South Africa)

    Landman, WA

    2011-01-01

    Full Text Available Such a so-called two-tiered procedure to predict the outcome of the rainfall season has been employed in South Africa for a number of years already (e.g., Landman et al., 2001). The advent of fully coupled ocean...

  8. Cultural Resource Predictive Modeling

    Science.gov (United States)

    2017-10-01

    refining formal, inductive predictive models is the quality of the archaeological and environmental data. To build models efficiently, relevant...geomorphology, and historic information. Lessons Learned: The original model was focused on the identification of prehistoric resources. This...system but uses predictive modeling informally. For example, there is no probability for buried archaeological deposits on the Burton Mesa, but there is

  9. Comparison of Predictive Models for Photovoltaic Module Performance under Sudanese-Sahelian Climate

    Directory of Open Access Journals (Sweden)

    Njomo Donatien

    2012-06-01

    Full Text Available This paper investigates various approaches to the modeling of photovoltaic systems and tests their accuracy under a tropical climate. In particular, the single-diode model is used to estimate the electrical behavior of the cell with respect to changes in the environmental parameters of temperature and irradiance. A typical MXS60 solar panel is used for model evaluation, and the results are compared with points taken directly from experiments performed on the same panel in a tropical Sudanese-type climate. The accuracy of the models was computed and the best model was determined for local conditions. The analysis of the curves shows that the single-diode model has the best accuracy, whereas the Photovoltaic Geographical Information System (PVGIS) approach appears not to be appropriate for the region.

  10. A RANS modelling approach for predicting powering performance of ships in waves

    Directory of Open Access Journals (Sweden)

    Björn Windén

    2014-06-01

    Full Text Available In this paper, a modelling technique for simulating self-propelled ships in waves is presented. The flow is modelled using a RANS solver coupled with an actuator disk model for the propeller. The motion of the ship is taken into consideration in the definition of the actuator disk region as well as in the advance ratio of the propeller. The RPM of the propeller is controlled using a PID controller with constraints on the maximum permissible RPM increase rate. Results are presented for a freely surging model in regular waves with different constraints placed on the PID controller. The described method shows promising results and allows several factors relating to self-propulsion to be studied. However, more validation data are needed to judge the accuracy of the model.
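    The RPM control strategy described above lends itself to a compact illustration. The sketch below is a generic rate-limited PID loop, not the authors' implementation; the gains, the rate limit, and the interface to the flow solver are assumptions.

```python
# Generic sketch of a PID loop driving propeller RPM towards a target ship speed,
# with a cap on the permissible RPM change per step. Gains and limits are illustrative.
class RateLimitedPID:
    def __init__(self, kp, ki, kd, max_rpm_rate, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.max_rpm_rate = max_rpm_rate  # max allowed RPM change per time step
        self.dt = dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target_speed, measured_speed, current_rpm):
        error = target_speed - measured_speed
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        delta_rpm = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Constrain the RPM increase (and decrease) rate, as described above.
        delta_rpm = max(-self.max_rpm_rate, min(self.max_rpm_rate, delta_rpm))
        return current_rpm + delta_rpm

controller = RateLimitedPID(kp=50.0, ki=5.0, kd=1.0, max_rpm_rate=10.0, dt=0.1)
rpm = controller.step(target_speed=2.0, measured_speed=1.8, current_rpm=600.0)
```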

  11. Predicting emergency diesel starting performance

    International Nuclear Information System (INIS)

    DeBey, T.M.

    1989-01-01

    The US Department of Energy effort to extend the operational lives of commercial nuclear power plants has examined methods for predicting the performance of specific equipment. This effort focuses on performance prediction as a means for reducing equipment surveillance, maintenance, and outages. Realizing these goals will result in nuclear plants that are more reliable, have lower maintenance costs, and have longer lives. This paper describes a monitoring system that has been developed to predict starting performance in emergency diesels. A prototype system has been built and tested on an engine at Sandia National Laboratories. 2 refs

  12. Improved prediction models for PCC pavement performance-related specifications, volume I : final report.

    Science.gov (United States)

    2000-12-01

    The current performance-related specifications (PRS) methodology has been under development by the Federal : Highway Administration (FHWA) for several years and has now reached a level at which it can be implemented by : State highway agencies. PRS f...

  13. Mechanistic-Empirical Pavement Design Guide Flexible Pavement Performance Prediction Models Volume I Executive Research Summary

    Science.gov (United States)

    2007-08-01

    The objective of this research study was to develop performance characteristics or variables (e.g., ride quality, rutting, : fatigue cracking, transverse cracking) of flexible pavements in Montana, and to use these characteristics in the : implementa...

  14. Mechanistic-Empirical Pavement Design Guide Flexible Pavement Performance Prediction Models Volume II Reference Manual

    Science.gov (United States)

    2007-08-01

    The objective of this research study was to develop performance characteristics or variables (e.g., ride quality, rutting, : fatigue cracking, transverse cracking) of flexible pavements in Montana, and to use these characteristics in the : implementa...

  15. An analytic model for predicting the performance of distributed applications on multicore clusters

    CSIR Research Space (South Africa)

    Khanyile, NP

    2012-08-01

    Full Text Available Bandwidth may be increased to improve the performance of a certain system and compensate for the propagation delay. However, increasing the bandwidth does not automatically guarantee a performance gain. In order to benefit from high bandwidth... the results to a 4 GB file. The application has minimal interprocessor communication; only the file names are broadcast. Fig. 9: Halo exchange between three processes. The algorithm [24] proposes three ways of dealing with boundary pixels: file locking...

  16. Use of structure-activity landscape index curves and curve integrals to evaluate the performance of multiple machine learning prediction models

    Directory of Open Access Journals (Sweden)

    LeDonne Norman C

    2011-02-01

    Full Text Available Abstract Background Standard approaches to address the performance of predictive models that used common statistical measurements for the entire data set provide an overview of the average performance of the models across the entire predictive space, but give little insight into applicability of the model across the prediction space. Guha and Van Drie recently proposed the use of structure-activity landscape index (SALI) curves via the SALI curve integral (SCI) as a means to map the predictive power of computational models within the predictive space. This approach evaluates model performance by assessing the accuracy of pairwise predictions, comparing compound pairs in a manner similar to that done by medicinal chemists. Results The SALI approach was used to evaluate the performance of continuous prediction models for MDR1-MDCK in vitro efflux potential. Efflux models were built with ADMET Predictor neural net, support vector machine, kernel partial least squares, and multiple linear regression engines, as well as SIMCA-P+ partial least squares, and random forest from Pipeline Pilot as implemented by AstraZeneca, using molecular descriptors from SimulationsPlus and AstraZeneca. Conclusion The results indicate that the choice of training sets used to build the prediction models is of great importance in the resulting model quality and that the SCI values calculated for these models were very similar to their Kendall τ values, leading to our suggestion of an approach to use this SALI/SCI paradigm to evaluate predictive model performance that will allow more informed decisions regarding model utility. The use of SALI graphs and curves provides an additional level of quality assessment for predictive models.

  17. Use of structure-activity landscape index curves and curve integrals to evaluate the performance of multiple machine learning prediction models.

    Science.gov (United States)

    Ledonne, Norman C; Rissolo, Kevin; Bulgarelli, James; Tini, Leonard

    2011-02-07

    Standard approaches to address the performance of predictive models that used common statistical measurements for the entire data set provide an overview of the average performance of the models across the entire predictive space, but give little insight into applicability of the model across the prediction space. Guha and Van Drie recently proposed the use of structure-activity landscape index (SALI) curves via the SALI curve integral (SCI) as a means to map the predictive power of computational models within the predictive space. This approach evaluates model performance by assessing the accuracy of pairwise predictions, comparing compound pairs in a manner similar to that done by medicinal chemists. The SALI approach was used to evaluate the performance of continuous prediction models for MDR1-MDCK in vitro efflux potential. Efflux models were built with ADMET Predictor neural net, support vector machine, kernel partial least squares, and multiple linear regression engines, as well as SIMCA-P+ partial least squares, and random forest from Pipeline Pilot as implemented by AstraZeneca, using molecular descriptors from SimulationsPlus and AstraZeneca. The results indicate that the choice of training sets used to build the prediction models is of great importance in the resulting model quality and that the SCI values calculated for these models were very similar to their Kendall τ values, leading to our suggestion of an approach to use this SALI/SCI paradigm to evaluate predictive model performance that will allow more informed decisions regarding model utility. The use of SALI graphs and curves provides an additional level of quality assessment for predictive models.
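    Since the SALI/SCI idea rests on pairwise comparisons, a small sketch of pairwise ordering accuracy may help. It is only a simplified stand-in, closely related to the Kendall τ mentioned in the abstract, and omits the structural-similarity weighting of true SALI curves.

```python
from itertools import combinations

# Sketch of the pairwise idea: for every pair of compounds, check whether the model
# ranks the pair in the same order as the experimental activities.
def pairwise_ordering_accuracy(y_true, y_pred):
    correct, total = 0, 0
    for i, j in combinations(range(len(y_true)), 2):
        if y_true[i] == y_true[j]:
            continue                       # ties carry no ordering information
        total += 1
        same_order = (y_true[i] - y_true[j]) * (y_pred[i] - y_pred[j]) > 0
        correct += int(same_order)
    return correct / total if total else float("nan")

# Toy example with hypothetical efflux values and model predictions:
print(pairwise_ordering_accuracy([1.2, 3.4, 2.2, 0.7], [1.0, 2.9, 2.5, 1.1]))
```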

  18. Predictive modeling of complications.

    Science.gov (United States)

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.

  19. SIMPLIFIED PREDICTIVE MODELS FOR CO₂ SEQUESTRATION PERFORMANCE ASSESSMENT RESEARCH TOPICAL REPORT ON TASK #3 STATISTICAL LEARNING BASED MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Srikanta; Schuetter, Jared

    2014-11-01

    We compare two approaches for building a statistical proxy model (metamodel) for CO₂ geologic sequestration from the results of full-physics compositional simulations. The first approach involves a classical Box-Behnken or Augmented Pairs experimental design with a quadratic polynomial response surface. The second approach uses a space-filling maximin Latin Hypercube sampling or maximum entropy design with the choice of five different meta-modeling techniques: quadratic polynomial, kriging with constant and quadratic trend terms, multivariate adaptive regression spline (MARS) and additivity and variance stabilization (AVAS). Simulation results for CO₂ injection into a reservoir-caprock system with 9 design variables (and 97 samples) were used to generate the data for developing the proxy models. The fitted models were validated using an independent data set and a cross-validation approach for three different performance metrics: total storage efficiency, CO₂ plume radius and average reservoir pressure. The Box-Behnken–quadratic polynomial metamodel performed the best, followed closely by the maximin LHS–kriging metamodel.
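    A rough sketch of the second approach (a space-filling Latin Hypercube design followed by a quadratic polynomial response surface) is given below; the simulator, variable ranges, and response are placeholders, and the kriging, MARS, and AVAS alternatives are not shown.

```python
import numpy as np
from scipy.stats import qmc

# Latin Hypercube design over 9 design variables, then a quadratic response surface
# fitted by least squares. placeholder_simulator stands in for the full-physics run.
n_vars, n_samples = 9, 97
sampler = qmc.LatinHypercube(d=n_vars, seed=42)
X = sampler.random(n=n_samples)                      # samples in the unit hypercube

def placeholder_simulator(x):                        # illustrative response only
    return 2.0 + x @ np.arange(1, n_vars + 1) + 0.5 * (x[:, 0] * x[:, 1]) ** 2

y = placeholder_simulator(X)

def quadratic_design_matrix(X):
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(X.shape[1])]                      # linear terms
    cols += [X[:, i] * X[:, j]                                        # squares and interactions
             for i in range(X.shape[1]) for j in range(i, X.shape[1])]
    return np.column_stack(cols)

beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
y_hat = quadratic_design_matrix(X) @ beta            # in practice, validate on held-out runs
```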

  20. Prediction models in complex terrain

    DEFF Research Database (Denmark)

    Marti, I.; Nielsen, Torben Skov; Madsen, Henrik

    2001-01-01

    The objective of the work is to investigate the performance of HIRLAM in complex terrain when used as input to energy production forecasting models, and to develop a statistical model to adapt HIRLAM predictions to the wind farm. The features of the terrain, especially the topography, influence the performance of HIRLAM, in particular with respect to wind predictions. To estimate the performance of the model, two spatial resolutions (0.5 deg. and 0.2 deg.) and different sets of HIRLAM variables were used to predict wind speed and energy production. The predictions of energy production for the wind farms are calculated using on-line measurements of power production as well as HIRLAM predictions as input, thus taking advantage of the auto-correlation which is present in the power production for shorter prediction horizons. Statistical models are used to describe the relationship between observed energy production...

  1. A model-based analysis of the predictive performance of different renal function markers for cefepime clearance in the ICU.

    Science.gov (United States)

    Jonckheere, Stijn; De Neve, Nikolaas; De Beenhouwer, Hans; Berth, Mario; Vermeulen, An; Van Bocxlaer, Jan; Colin, Pieter

    2016-09-01

    Several population pharmacokinetic models for cefepime in critically ill patients have been described, which all indicate that variability in renal clearance is the main determinant of the observed variability in exposure. The main objective of this study was to determine which renal marker best predicts cefepime clearance. A pharmacokinetic model was developed using NONMEM based on 208 plasma and 51 urine samples from 20 ICU patients during a median follow-up of 3 days. Four serum-based kidney markers (creatinine, cystatin C, urea and uromodulin) and two urinary markers [measured creatinine clearance (CLCR) and kidney injury molecule-1] were evaluated as covariates in the model. A two-compartment model incorporating a renal and non-renal clearance component along with an additional term describing haemodialysis clearance provided an adequate description of the data. The Cockcroft-Gault formula was the best predictor for renal cefepime clearance. Compared with the base model without covariates, the objective function value decreased from 1971.7 to 1948.1, the median absolute prediction error from 42.4% to 29.9% and the between-subject variability in renal cefepime clearance from 135% to 50%. Other creatinine- and cystatin C-based formulae and measured CLCR performed similarly. Monte Carlo simulations using the Sanford guide dose recommendations indicated an insufficient dose reduction in patients with a decreased kidney function, leading to potentially toxic levels. The Cockcroft-Gault formula was the best predictor for cefepime clearance in critically ill patients, although other creatinine- and cystatin C-based formulae and measured CLCR performed similarly. © The Author 2016. Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
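    For reference, the Cockcroft-Gault creatinine clearance estimate identified above as the best covariate can be written as a one-line function; this is the standard textbook formula, and how it was scaled within the population PK model is not reproduced here.

```python
# Cockcroft-Gault estimate of creatinine clearance (mL/min).
def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female):
    """CrCl = (140 - age) * weight / (72 * SCr), times 0.85 for women."""
    crcl = (140 - age_years) * weight_kg / (72.0 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

# Example with hypothetical patient values:
print(cockcroft_gault(age_years=65, weight_kg=80, serum_creatinine_mg_dl=1.2, female=False))
```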

  2. A Performance Prediction Method for Pumps as Turbines (PAT Using a Computational Fluid Dynamics (CFD Modeling Approach

    Directory of Open Access Journals (Sweden)

    Emma Frosina

    2017-01-01

    Full Text Available Small and micro hydropower systems represent an attractive solution for generating electricity at low cost and with low environmental impact. The pump-as-turbine (PAT) approach has promise in this application due to its low purchase and maintenance costs. In this paper, a new method to predict the inverse characteristic of industrial centrifugal pumps is presented. This method is based on results of simulations performed with commercial three-dimensional Computational Fluid Dynamics (CFD) software. Model results have been first validated in pumping mode using data supplied by pump manufacturers. Then, the results have been compared to experimental data for a pump running in reverse. Experimentation has been performed on a dedicated test bench installed in the Department of Civil Construction and Environmental Engineering of the University of Naples Federico II. Three different pumps, with different specific speeds, have been analyzed. Using the model results, the inverse characteristic and the best efficiency point have been evaluated. Finally, results have been compared to prediction methods available in the literature.

  3. Development of a CSP plant energy yield calculation tool applying predictive models to analyze plant performance sensitivities

    Science.gov (United States)

    Haack, Lukas; Peniche, Ricardo; Sommer, Lutz; Kather, Alfons

    2017-06-01

    At early project stages, the main CSP plant design parameters such as turbine capacity, solar field size, and thermal storage capacity are varied during the techno-economic optimization to determine the most suitable plant configurations. In general, a typical meteorological year with at least hourly time resolution is used to analyze each plant configuration. Different software tools are available to simulate the annual energy yield. Software tools offering a thermodynamic modeling approach of the power block and the CSP thermal cycle, such as EBSILONProfessional®, allow a flexible definition of plant topologies. In EBSILON, the thermodynamic equilibrium for each time step is calculated iteratively (quasi steady state), which requires approximately 45 minutes to process one year with hourly time resolution. For better presentation of gradients, 10 min time resolution is recommended, which increases processing time by a factor of 5. Therefore, when analyzing a large number of plant sensitivities, as required during the techno-economic optimization procedure, the detailed thermodynamic simulation approach becomes impracticable. Suntrace has developed an in-house CSP simulation tool (CSPsim), based on EBSILON and applying predictive models, to approximate the CSP plant performance for central receiver and parabolic trough technology. CSPsim significantly increases the speed of energy yield calculations by a factor of ≥35 and has automated the simulation run of all predefined design configurations in sequential order during the optimization procedure. To develop the predictive models, multiple linear regression techniques and Design of Experiments methods are applied. The annual energy yield and derived LCOE calculated by the predictive model deviate by less than ±1.5% from the thermodynamic simulation in EBSILON, and the approach effectively identifies the optimal range of main design parameters for further, more specific analysis.

  4. Saccharomyces cerevisiae and S. kudriavzevii Synthetic Wine Fermentation Performance Dissected by Predictive Modeling

    Directory of Open Access Journals (Sweden)

    David Henriques

    2018-02-01

    Full Text Available Wineries face unprecedented challenges due to new market demands and climate change effects on wine quality. New yeast starters including non-conventional Saccharomyces species, such as S. kudriavzevii, may help to deal with some of these challenges. The design of new fermentations using non-conventional yeasts requires an improved understanding of the physiology and metabolism of these cells. Dynamic modeling brings the potential of exploring the most relevant mechanisms and designing optimal processes more systematically. In this work we explore mechanisms by means of a model selection, reduction and cross-validation pipeline which enables us to dissect the most relevant fermentation features for the species under consideration, Saccharomyces cerevisiae T73 and Saccharomyces kudriavzevii CR85. The pipeline involved the comparison of a collection of models which incorporate several alternative mechanisms with emphasis on the inhibitory effects due to temperature and ethanol. We focused on defining a minimal model with the minimum number of parameters, to maximize the identifiability and the quality of cross-validation. The selected model was then used to highlight differences in behavior between species. The analysis of model parameters indicates that the specific growth rate and the transport of hexoses at initial times are higher for S. cerevisiae T73, while S. kudriavzevii CR85 diverts more flux to glycerol production and cellular maintenance. As a result, the fermentations with S. kudriavzevii CR85 are typically slower and produce less ethanol but more glycerol. Finally, we also explored optimal initial inoculation and process temperature to find the best compromise between final product characteristics and fermentation duration. Results reveal that the production of glycerol is distinctive in S. kudriavzevii CR85; it was not possible to achieve the same production of glycerol with S. cerevisiae T73 in any of the conditions tested.

  5. Prediction models for performance and emissions of a dual fuel CI ...

    Indian Academy of Sciences (India)

    Dual fuel engines are being used these days to overcome shortage of fossil fuels and fulfill stringent exhaust gas emission regulations. They have several advantages over conventional diesel engines. In this context, this paper makes use of experimental results obtained from a dual fuel engine for developing models to ...

  6. Comparing the performance of 11 crop simulation models in predicting yield response to nitrogen fertilization

    DEFF Research Database (Denmark)

    Salo, T J; Palosuo, T; Kersebaum, K C

    2016-01-01

    Eleven widely used crop simulation models (APSIM, CERES, CROPSYST, COUP, DAISY, EPIC, FASSET, HERMES, MONICA, STICS and WOFOST) were tested using spring barley (Hordeum vulgare L.) data set under varying nitrogen (N) fertilizer rates from three experimental years in the boreal climate of Jokioinen...

  7. A Bayesian model for predicting face recognition performance using image quality

    NARCIS (Netherlands)

    Dutta, A.; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan

    2014-01-01

    Quality of a pair of facial images is a strong indicator of the uncertainty in a decision about identity based on that image pair. In this paper, we describe a Bayesian approach to model the relation between image quality (like pose, illumination, noise, sharpness, etc.) and the corresponding face recognition performance.

  8. Performance of Lynch syndrome predictive models in quantifying the likelihood of germline mutations in patients with abnormal MLH1 immunoexpression.

    Science.gov (United States)

    Cabreira, Verónica; Pinto, Carla; Pinheiro, Manuela; Lopes, Paula; Peixoto, Ana; Santos, Catarina; Veiga, Isabel; Rocha, Patrícia; Pinto, Pedro; Henrique, Rui; Teixeira, Manuel R

    2017-01-01

    Lynch syndrome (LS) accounts for up to 4% of all colorectal cancers (CRC). Detection of a pathogenic germline mutation in one of the mismatch repair genes is the definitive criterion for LS diagnosis, but it is time-consuming and expensive. Immunohistochemistry is the most sensitive prescreening test and its predictive value is very high for loss of expression of MSH2, MSH6, and (isolated) PMS2, but not for MLH1. We evaluated whether LS predictive models have a role in improving the molecular testing algorithm in this specific setting by studying 38 individuals referred for molecular testing who were subsequently shown to have loss of MLH1 immunoexpression in their tumors. For each proband we calculated a risk score, which represents the probability that the patient with CRC carries a pathogenic MLH1 germline mutation, using the PREMM1,2,6 and MMRpro predictive models. Of the 38 individuals, 18.4% had a pathogenic MLH1 germline mutation. MMRpro performed better for the purpose of this study, presenting an AUC of 0.83 (95% CI 0.67-0.9; P < 0.001) compared with an AUC of 0.68 (95% CI 0.51-0.82, P = 0.09) for PREMM1,2,6. Considering a threshold of 5%, MMRpro would eliminate unnecessary germline mutation analysis in a significant proportion of cases while keeping very high sensitivity. We conclude that MMRpro is useful to correctly predict who should be screened for a germline MLH1 gene mutation, and we propose an algorithm to improve the cost-effectiveness of LS diagnosis.

  9. Create full-scale predictive economic models on ROI and innovation with performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Joseph, Earl C. [IDC Research, Inc., Framingham, MA (United States); Conway, Steve [IDC Research, Inc., Framingham, MA (United States)

    2017-10-27

    The U.S. Department of Energy (DOE), the world's largest buyer and user of supercomputers, awarded IDC Research, Inc. a grant to create two macroeconomic models capable of quantifying, respectively, financial and non-financial (innovation) returns on investments in HPC resources. Following a 2013 pilot study in which we created the models and tested them on about 200 real-world HPC cases, DOE authorized us to conduct a full-out, three-year grant study to collect and measure many more examples, a process that would also subject the methodology to further testing and validation. A secondary, "stretch" goal of the full-out study was to advance the methodology from association toward (but not all the way to) causation, by eliminating the effects of some of the other factors that might be contributing, along with HPC investments, to the returns produced in the investigated projects.

  10. Paint Pavement Marking Performance Prediction Model That Includes the Impacts of Snow Removal Operations

    Science.gov (United States)

    2011-03-01

    It was hypothesized that snow plows wear down mountain road pavement markings. Craig et al. (2007) found that edge lines degrade more slowly than center/skip lines ... retroreflectivity to create the models. They discovered that paint pavement markings last 80% longer on Portland Cement Concrete than on Asphalt Concrete at low AADT ... retroreflectivity, while yellow markings lost 21%. Lu and Barter attributed the sizable degradation to snow removal, sand application, and studded

  11. PREDICTING PERFORMANCE OF WEB SERVICES USING SMTQA

    OpenAIRE

    Ch Ram Mohan Reddy; D Evangelin Geetha; KG Srinivasa; T V Suresh Kumar; K Rajani Kanth

    2011-01-01

    A Web service is an interface which implements business logic. Performance is an important quality aspect of Web services because of their distributed nature. Predicting the performance of web services during the early stages of software development is significant. In this paper we model a web service using the Unified Modeling Language (Use Case Diagram, Sequence Diagram). We obtain the performance metrics by simulating the web services model using a simulation tool, Simulation of Multi-Tie...

  12. Early Performance Prediction of Web Services

    OpenAIRE

    Reddy, Ch Ram Mohan; Geetha, D. Evangelin; Srinivasa, K. G.; Kumar, T. V. Suresh; Kanth, K. Rajani

    2012-01-01

    A Web service is an interface which implements business logic. Performance is an important quality aspect of Web services because of their distributed nature. Predicting the performance of web services during the early stages of software development is significant. In this paper we model a web service using the Unified Modeling Language (Use Case Diagram, Sequence Diagram, Deployment Diagram). We obtain the performance metrics by simulating the web services model using a simulation tool, Simulation of Mult...

  13. Performance of a New Model for Predicting End of Flowering Date (bbch 69) of Grapevine (Vitis Vinifera L.)

    Science.gov (United States)

    Gentilucci, Matteo

    2017-04-01

    The end of flowering date (BBCH 69) is an important phenological stage for grapevine (Vitis vinifera L.); in fact, up to this date growth is focused on the plant and gradually passes on to the berries through fruit set. The aim of this study is to develop a model to predict the date of the end of flowering (BBCH 69) for some grapevine varieties. The research was carried out using three cultivars of grapevine (Maceratino, Montepulciano, Sangiovese) in three different locations (Macerata, Morrovalle and Potenza Picena), corresponding to an equal number of wine farms, for the time interval between 2006 and 2013. In order to obtain reliable temperatures for each location, the data of 6 weather stations near these farms were interpolated using cokriging methods with elevation as the independent variable. The procedure to predict the end of flowering date starts with an investigation of the cardinal temperatures typical of each grapevine cultivar. The analysis is characterized by four temperature thresholds (cardinals): minimum activity temperature (TCmin: below this temperature there is no growth for the plant), lower optimal temperature (TLopt: above this temperature there is maximum growth), upper optimal temperature (TUopt: below this temperature there is maximum growth) and maximum activity temperature (TCmax: above this temperature there is no growth). The model thus takes into consideration the maximum, mean and minimum daily temperatures of each location, relating them to the four cultivar temperature thresholds. In this way, 32 possible cases were obtained, corresponding to as many equations, depending on the position of the temperatures relative to the thresholds, in order to calculate the amount of growing degree units (GDU) for each day. Several iterative tests (about 1000 for each cultivar) were performed, changing the values of the temperature thresholds and GDU in order to find the best possible combination which minimizes error
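    A simplified sketch of a cardinal-temperature growth response of the kind described is shown below; the full model distinguishes 32 cases based on where the daily minimum, mean and maximum temperatures fall relative to the four thresholds, whereas this sketch uses only the daily mean and illustrative threshold values.

```python
# Trapezoidal daily response to mean temperature, bounded by four cardinal temperatures.
# Threshold values are illustrative, not the cultivar-specific values found in the study.
def daily_gdu(t_mean, tc_min=10.0, tl_opt=20.0, tu_opt=25.0, tc_max=35.0):
    """Growing degree units for one day, scaled by the sub-optimal range (tl_opt - tc_min)."""
    if t_mean <= tc_min or t_mean >= tc_max:
        return 0.0
    if t_mean < tl_opt:                               # sub-optimal, rising response
        frac = (t_mean - tc_min) / (tl_opt - tc_min)
    elif t_mean <= tu_opt:                            # optimal plateau
        frac = 1.0
    else:                                             # supra-optimal, falling response
        frac = (tc_max - t_mean) / (tc_max - tu_opt)
    return frac * (tl_opt - tc_min)

# End of flowering would be predicted on the day the cumulative GDU reaches a
# cultivar-specific total (values below are arbitrary daily mean temperatures).
cumulative = sum(daily_gdu(t) for t in [12.5, 18.0, 22.0, 26.5, 30.0])
```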

  14. Performance of Various Models in Predicting Vital Capacity Changes Caused by Breathing High Oxygen Partial Pressures

    Science.gov (United States)

    2007-10-01

    average data is moderately high at 0.43. Model 9. Delayed

  15. Prediction of hybrid performance in maize with a ridge regression model employed to DNA markers and mRNA transcription profiles.

    Science.gov (United States)

    Zenke-Philippi, Carola; Thiemann, Alexander; Seifert, Felix; Schrag, Tobias; Melchinger, Albrecht E; Scholten, Stefan; Frisch, Matthias

    2016-03-29

    Ridge regression models can be used for predicting heterosis and hybrid performance. Their application to mRNA transcription profiles has not yet been investigated. Our objective was to compare the prediction accuracy of models employing mRNA transcription profiles with that of models employing genome-wide markers, using a data set of 98 maize hybrids from a breeding program. We predicted hybrid performance and mid-parent heterosis for grain yield and grain dry matter content and employed cross validation to assess the prediction accuracy. Prediction with a ridge regression model using random effects for mRNA transcription profiles resulted in prediction accuracies similar to those obtained by applying the model to DNA markers. For hybrids for which neither parental inbred line was part of the training set, the ridge regression model did not reach the prediction accuracy that was obtained with a model using transcriptome-based distances. We conclude that mRNA transcription profiles are a promising alternative to DNA markers for hybrid prediction, but further studies with larger data sets are required to investigate the superiority of alternative prediction models.
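    A minimal ridge regression sketch of the kind used for hybrid prediction follows; it can be applied to either a marker matrix or a transcription-profile matrix. The shrinkage parameter and toy data are illustrative, and the study estimated variance components rather than fixing a penalty.

```python
import numpy as np

# Closed-form ridge regression: beta = (X'X + lambda * I)^-1 X'y,
# with X a (hybrids x features) matrix of markers or transcript levels.
def ridge_fit(X, y, lam):
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

rng = np.random.default_rng(1)
X_train = rng.normal(size=(80, 500))        # e.g. 80 hybrids, 500 transcript features
y_train = X_train[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.5, size=80)
beta = ridge_fit(X_train, y_train, lam=10.0)
y_pred = rng.normal(size=(18, 500)) @ beta  # predict performance of untested hybrids
```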

  16. The extent to which a model of motivated learning best predicts the academic performance of college students majoring in science

    Science.gov (United States)

    Sharman, Sandra Jeanne

    This study investigated three factors in the Corno and Mandinach (1983) model of motivated learning that account for the academic performance of honors and traditionally placed students majoring in biological sciences as represented by their college grade point averages. The following research questions guided the study: (a) Which academic factors or combination of academic factors from the Corno and Mandinach (1983) model correlate with the college grade point averages of honors and traditionally placed students majoring in biological science? (b) Which academic factor or combination of academic factors from the Corno and Mandinach (1983) model best predicts college grade point averages for honors and traditionally placed college students majoring in biological science? and (c) Are there significant differences between the perceived nonacademic factors accounting for the success of honors and traditionally placed students majoring in biological science? Scholastic Aptitude Test scores and cumulative grade point averages for the participants were collected and evaluated to ascertain a past record of academic capability and a present record of students' academic performance. Four self-report measures were used to assess students' cognitive and nonacademic traits. Correlations, descriptive statistics, regression analyses, and chi-square statistics were generated, and estimations of accounted variance (rsp²) indicated how the variables evaluated in the study contributed to college students' academic performance. The correlation analysis indicated that none of the factors under investigation significantly correlated with the college grade point averages of the two groups of students. The regression analyses indicated that no factor or combination of factors significantly predicted college grade point average for the two groups of students. There was no significance even when the subscales were collapsed into separate categories. According to the chi-square statistics

  17. [Development and Application of a Performance Prediction Model for Home Care Nursing Based on a Balanced Scorecard using the Bayesian Belief Network].

    Science.gov (United States)

    Noh, Wonjung; Seomun, Gyeongae

    2015-06-01

    This study was conducted to develop key performance indicators (KPIs) for home care nursing (HCN) based on a balanced scorecard, and to construct a performance prediction model of strategic objectives using the Bayesian Belief Network (BBN). This methodological study included four steps: establishment of KPIs, performance prediction modeling, development of a performance prediction model using BBN, and simulation of a suggested nursing management strategy. An HCN expert group and a staff group participated. The content validity index was analyzed using STATA 13.0, and BBN was analyzed using HUGIN 8.0. We generated a list of KPIs composed of 4 perspectives, 10 strategic objectives, and 31 KPIs. In the validity test of the performance prediction model, the factor with the greatest variance for increasing profit was maximum cost reduction of HCN services. The factor with the smallest variance for increasing profit was a minimum image improvement for HCN. During sensitivity analysis, the probability of the expert group did not affect the sensitivity. Furthermore, simulation of a 10% image improvement predicted the most effective way to increase profit. KPIs of HCN can estimate financial and non-financial performance. The performance prediction model for HCN will be useful to improve performance.

  18. Archaeological predictive model set.

    Science.gov (United States)

    2015-03-01

    This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to : develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...

  19. Integrating Genomics with Nutrition Models to Improve the Prediction of Cattle Performance and Carcass Composition under Feedlot Conditions

    Science.gov (United States)

    Tedeschi, Luis O.

    2015-01-01

    Cattle body composition is difficult to model because several factors affect the composition of the average daily gain (ADG) of growing animals. The objective of this study was to identify commercial single nucleotide polymorphism (SNP) panels that could improve the predictability of days on feed (DOF) to reach a target United States Department of Agriculture (USDA) grade given animal, diet, and environmental information under feedyard conditions. The data for this study was comprised of crossbred heifers (n = 681) and steers (n = 836) from commercial feedyards. Eleven molecular breeding value (MBV) scores derived from SNP panels of candidate gene polymorphisms and two-leptin gene SNP (UASMS2 and E2FB) were evaluated. The empty body fat (EBF) and the shrunk body weight (SBW) at 28% EBF (AFSBW) were computed by the Cattle Value Discovery System (CVDS) model using hip height (EBFHH and AFSBWHH) or carcass traits (EBFCT and AFSBWCT) of the animals. The DOFHH was calculated when AFSBWHH and ADGHH were used and DOFCT was calculated when AFSBWCT and ADGCT were used. The CVDS estimates dry matter required (DMR) by individuals fed in groups when observed ADG and AFSBW are provided. The AFSBWCT was assumed more accurate than the AFSBWHH because it was computed using carcass traits. The difference between AFSBWCT and AFSBWHH, DOFCT and DOFHH, and DMR and dry matter intake (DMI) were regressed on the MBV scores and leptin gene SNP to explain the variation. Our results indicate quite a large range of correlations among MBV scores and model input and output variables, but MBV ribeye area was the most strongly correlated with the differences in DOF, AFSBW, and DMI by explaining 8, 13.2 and 6.5%, respectively, of the variation. This suggests that specific MBV scores might explain additional variation of input and output variables used by nutritional models in predicting individual animal performance. PMID:26599759

  20. Integrating Genomics with Nutrition Models to Improve the Prediction of Cattle Performance and Carcass Composition under Feedlot Conditions.

    Directory of Open Access Journals (Sweden)

    Luis O Tedeschi

    Full Text Available Cattle body composition is difficult to model because several factors affect the composition of the average daily gain (ADG) of growing animals. The objective of this study was to identify commercial single nucleotide polymorphism (SNP) panels that could improve the predictability of days on feed (DOF) to reach a target United States Department of Agriculture (USDA) grade given animal, diet, and environmental information under feedyard conditions. The data for this study was comprised of crossbred heifers (n = 681) and steers (n = 836) from commercial feedyards. Eleven molecular breeding value (MBV) scores derived from SNP panels of candidate gene polymorphisms and two-leptin gene SNP (UASMS2 and E2FB) were evaluated. The empty body fat (EBF) and the shrunk body weight (SBW) at 28% EBF (AFSBW) were computed by the Cattle Value Discovery System (CVDS) model using hip height (EBFHH and AFSBWHH) or carcass traits (EBFCT and AFSBWCT) of the animals. The DOFHH was calculated when AFSBWHH and ADGHH were used and DOFCT was calculated when AFSBWCT and ADGCT were used. The CVDS estimates dry matter required (DMR) by individuals fed in groups when observed ADG and AFSBW are provided. The AFSBWCT was assumed more accurate than the AFSBWHH because it was computed using carcass traits. The difference between AFSBWCT and AFSBWHH, DOFCT and DOFHH, and DMR and dry matter intake (DMI) were regressed on the MBV scores and leptin gene SNP to explain the variation. Our results indicate quite a large range of correlations among MBV scores and model input and output variables, but MBV ribeye area was the most strongly correlated with the differences in DOF, AFSBW, and DMI by explaining 8, 13.2 and 6.5%, respectively, of the variation. This suggests that specific MBV scores might explain additional variation of input and output variables used by nutritional models in predicting individual animal performance.

  1. An improved model for predicting performance of finned tube heat exchanger under frosting condition, with frost thickness variation along fin

    Energy Technology Data Exchange (ETDEWEB)

    Tso, C.P. [Multimedia University, Jalan Ayer Keroh Lama, Melaka (Malaysia). Faculty of Engineering and Technology; Cheng, Y.C.; Lai, A.C.K. [Nanyang Technological University, Singapore (Singapore). School of Mechanical and Aerospace Engineering

    2006-01-15

    Frost accumulation on a heat exchanger, a direct result of combined heat and mass transfer between the moist air and the cold surface it flows across, causes heat transfer performance degradation due to the insulating effect of the frost layer and the coil blockage as the frost grows. The complex geometry of finned tube heat exchangers leads to uneven wall and air temperature distribution inside the coil, and causes variations of frost growth rate and densification along the coil. In this study, a general distributed model with frost formation was developed. The equations for the finned tube heat exchanger were derived in a non-steady-state manner, while the frost model was treated as quasi-steady-state. In order to make the model more realistic, the variation of frost along the fin due to uneven temperature distribution was included. The presented model is able to predict the dynamic behavior of an air cooler under both non-frosting and frosting conditions. Comparisons were made based on the frost mass accumulation, pressure drop across the coil and energy transfer coefficient, and results were found to agree well with reported experimental results. (author)

  2. A predictive model for a radioactive contamination in an urban environment and its performance capability to the EMRAS project

    International Nuclear Information System (INIS)

    Hwang, Won Tae; Han, Moon Hee; Jeong, Hyo Joon; Kim, Eun Han

    2008-01-01

    A model, called METRO-K, has been developed for radiological dose assessment following radioactive contamination of the Korean urban environment. The model has taken part in the Urban Remediation Working Group within the IAEA's EMRAS project, which provides an opportunity to compare the modeling approaches and the predictive results of models that describe the behavior of radionuclides in an urban environment. The modeling approaches of METRO-K and the predictive results obtained as a part of the Working Group's activities are presented and discussed. The contribution of contaminated surfaces to absorbed dose rates differed distinctly with the location of a receptor. (author)

  3. Optimization of biomathematical model predictions for cognitive performance impairment in individuals: accounting for unknown traits and uncertain states in homeostatic and circadian processes.

    Science.gov (United States)

    Van Dongen, Hans P A; Mott, Christopher G; Huang, Jen-Kuang; Mollicone, Daniel J; McKenzie, Frederic D; Dinges, David F

    2007-09-01

    Current biomathematical models of fatigue and performance do not accurately predict cognitive performance for individuals with a priori unknown degrees of trait vulnerability to sleep loss, do not predict performance reliably when initial conditions are uncertain, and do not yield statistically valid estimates of prediction accuracy. These limitations diminish their usefulness for predicting the performance of individuals in operational environments. To overcome these 3 limitations, a novel modeling approach was developed, based on the expansion of a statistical technique called Bayesian forecasting. The expanded Bayesian forecasting procedure was implemented in the two-process model of sleep regulation, which has been used to predict performance on the basis of the combination of a sleep homeostatic process and a circadian process. Employing the two-process model with the Bayesian forecasting procedure to predict performance for individual subjects in the face of unknown traits and uncertain states entailed subject-specific optimization of 3 trait parameters (homeostatic build-up rate, circadian amplitude, and basal performance level) and 2 initial state parameters (initial homeostatic state and circadian phase angle). Prior information about the distribution of the trait parameters in the population at large was extracted from psychomotor vigilance test (PVT) performance measurements in 10 subjects who had participated in a laboratory experiment with 88 h of total sleep deprivation. The PVT performance data of 3 additional subjects in this experiment were set aside beforehand for use in prospective computer simulations. The simulations involved updating the subject-specific model parameters every time the next performance measurement became available, and then predicting performance 24 h ahead. Comparison of the predictions to the subjects' actual data revealed that as more data became available for the individuals at hand, the performance predictions became
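    A simplified sketch of the two-process structure referred to above (homeostatic build-up plus a circadian oscillation) is given below; the functional forms, time constant, amplitude and phase are illustrative, and the Bayesian forecasting procedure described in the abstract updates subject-specific versions of such parameters as new PVT measurements arrive.

```python
import numpy as np

# Additive combination of a homeostatic pressure S that builds exponentially during
# wakefulness and a sinusoidal circadian process C. Parameter values are illustrative.
def predicted_impairment(hours_awake, circadian_phase_h=0.0,
                         build_up_tau=18.0, circadian_amplitude=1.0, basal_level=0.0):
    s = 1.0 - np.exp(-hours_awake / build_up_tau)                    # homeostatic build-up
    c = circadian_amplitude * np.cos(2 * np.pi * (hours_awake - circadian_phase_h) / 24.0)
    return basal_level + s - c                                       # higher value = more impaired

t = np.arange(0, 88, 1.0)                                            # e.g. 88 h of total sleep deprivation
impairment = predicted_impairment(t, circadian_phase_h=6.0)
```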

  4. Zephyr - the prediction models

    DEFF Research Database (Denmark)

    Nielsen, Torben Skov; Madsen, Henrik; Nielsen, Henrik Aalborg

    2001-01-01

    This paper briefly describes new models and methods for predicting the wind power output from wind farms. The system is being developed in a project which has the research organization Risø and the department of Informatics and Mathematical Modelling (IMM) as the modelling team and all the Danish utilities as partners and users. The new models are evaluated for five wind farms in Denmark as well as one wind farm in Spain. It is shown that the predictions based on conditional parametric models are superior to the predictions obtained by state-of-the-art parametric models.

  5. Predicting Expressive Dynamics in Piano Performances using Neural Networks

    NARCIS (Netherlands)

    van Herwaarden, Sam; Grachten, Maarten; de Haas, W. Bas

    2014-01-01

    This paper presents a model for predicting expressive accentuation in piano performances with neural networks. Using Restricted Boltzmann Machines (RBMs), features are learned from performance data, after which these features are used to predict performed loudness. During feature learning, data

  6. Predictive performance of two PK-PD models of D2 receptor occupancy of the antipsychotics risperidone and paliperidone in rats

    NARCIS (Netherlands)

    Kozielska, Magdalena; Johnson, Martin; Pilla Reddy, Venkatesh; Vermeulen, An; de Greef, Rik; Li, Cheryl; Grimwood, Sarah; Liu, Jing; Groothuis, Genoveva; Danhof, Meindert; Proost, Johannes

    2010-01-01

    Objectives: The level of dopamine D2 receptor occupancy is predictive of efficacy and safety in schizophrenia. Population PK-PD modelling has been used to link observed plasma and brain concentrations to receptor occupancy. The objective of this study was to compare the predictive performance of two

  7. A comparison of the predictive performance of three pharmacokinetic models for propofol using measured values obtained during target-controlled infusion

    NARCIS (Netherlands)

    Glen, J. B.; White, M.

    2014-01-01

    We compared the predictive performance of the existing Diprifusor and Schnider models, used for target-controlled infusion of propofol, with a new modification of the Diprifusor model (White) incorporating age and sex covariates. The bias and inaccuracy (precision) of each model were determined

  8. Inverse and Predictive Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Syracuse, Ellen Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-27

    The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from simple one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions, to complex multidimensional models that are constrained by several types of data and result in more accurate predictions. Team members typically build models of geophysical characteristics of Earth and source distributions at scales of 1 to 1000s of km, but the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  9. Long-term performance of marine structures in the Netherlands - validation of predictive models for chloride ingress

    NARCIS (Netherlands)

    Breugel, K. van; Polder, R.B.; Rooij, M.R. de

    2017-01-01

    For many concrete infrastructural works a service life of 80, 100 or 200 years is required. To convince owners and authorities that these requirements can be met probability-based models for service life predictions have been developed. These models are based on theoretical and experimental

  10. Prediction of the lifetime productive and reproductive performance of Holstein cows managed for different lactation durations, using a model of lifetime nutrient partitioning

    DEFF Research Database (Denmark)

    Gaillard, Charlotte; Martin, O; Blavy, P

    2016-01-01

    The GARUNS model is a lifetime performance model taking into account the changing physiological priorities of an animal during its life and through repeated reproduction cycles. This dynamic and stochastic model has been previously used to predict the productive and reproductive performance of various genotypes of cows across feeding systems. In the present paper, we used this model to predict the lifetime productive and reproductive performance of Holstein cows for different lactation durations, with the aim of determining the lifetime scenario that optimizes cows' performance defined by lifetime efficiency (ratio of total milk energy yield to total energy intake) and pregnancy rate. To evaluate the model, data from a 16-mo extended lactation experiment on Holstein cows were used. Generally, the model could consistently fit body weight, milk yield, and milk components of these cows...

  11. Modelling bankruptcy prediction models in Slovak companies

    Directory of Open Access Journals (Sweden)

    Kovacova Maria

    2017-01-01

    Full Text Available Intensive research by academics and practitioners has been carried out on models for bankruptcy prediction and credit risk management. In spite of numerous studies focusing on forecasting bankruptcy using traditional statistical techniques (e.g. discriminant analysis and logistic regression) and early artificial intelligence models (e.g. artificial neural networks), there is a trend of transition to machine learning models (support vector machines, bagging, boosting, and random forest) to predict bankruptcy one year prior to the event. Comparing the performance of these unconventional approaches with results obtained by discriminant analysis, logistic regression, and neural networks, it has been found that bagging, boosting, and random forest models outperform the other techniques, and that prediction accuracy in the testing sample improves when additional variables are included. On the other hand, the prediction accuracy of older and well-known bankruptcy prediction models is quite high. Therefore, we aim to analyse these older models on a dataset of Slovak companies to validate their prediction ability under specific conditions. Furthermore, these models will be modified according to new trends by calculating the influence of the elimination of selected variables on their overall prediction ability.
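    A small sketch of the kind of comparison discussed above, contrasting a classical logistic regression with a machine-learning ensemble on synthetic financial-ratio data; the features, labels and scikit-learn setup are placeholders, not the Slovak company dataset or the original models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for financial ratios (liquidity, leverage, profitability, ...)
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
y = (X[:, 0] - 0.8 * X[:, 1] + rng.normal(scale=1.0, size=1000) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("random forest", RandomForestClassifier(n_estimators=300, random_state=0))]:
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```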

  12. Improving a two-equation eddy-viscosity turbulence model to predict the aerodynamic performance of thick wind turbine airfoils

    Science.gov (United States)

    Bangga, Galih; Kusumadewi, Tri; Hutomo, Go; Sabila, Ahmad; Syawitri, Taurista; Setiadi, Herlambang; Faisal, Muhamad; Wiranegara, Raditya; Hendranata, Yongki; Lastomo, Dwi; Putra, Louis; Kristiadi, Stefanus

    2018-03-01

    Numerical simulations for relatively thick airfoils are carried out in the present studies. An attempt to improve the accuracy of the numerical predictions is done by adjusting the turbulent viscosity of the eddy-viscosity Menter Shear-Stress-Transport (SST) model. The modification involves the addition of a damping factor on the wall-bounded flows incorporating the ratio of the turbulent kinetic energy to its specific dissipation rate for separation detection. The results are compared with available experimental data and CFD simulations using the original Menter SST model. The present model improves the lift polar prediction even though the stall angle is still overestimated. The improvement is caused by the better prediction of separated flow under a strong adverse pressure gradient. The results show that the Reynolds stresses are damped near the wall causing variation of the logarithmic velocity profiles.
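    A schematic way to write the described modification in the notation of the standard Menter SST eddy-viscosity limiter is given below; the damping function f_d is left unspecified because the abstract does not give its exact form, so this is an illustration of where such a factor enters, not the paper's formulation.

```latex
% Standard Menter SST eddy-viscosity limiter, multiplied near the wall by an assumed
% damping function f_d that depends on the ratio k/omega:
\[
  \nu_t^{\mathrm{mod}} \;=\; f_d\!\left(\tfrac{k}{\omega}\right)\,
  \frac{a_1 k}{\max\left(a_1 \omega,\; S F_2\right)},
  \qquad 0 \le f_d \le 1 ,
\]
% where k is the turbulent kinetic energy, \omega its specific dissipation rate,
% S the strain-rate magnitude, F_2 a blending function and a_1 a model constant.
% The exact form of f_d used for separation detection is not reproduced here.
```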

  13. Quadratic Prediction Models for the Performance Comparison of a Marine Engine Fuelled with Biodiesels B5 and B20

    Directory of Open Access Journals (Sweden)

    Chedthawut Poompipatpong

    2014-01-01

    Full Text Available According to Thailand's renewable energy development plan, biodiesel is one of the interesting alternative energies. In this research, biodiesels B5 and B20 are tested in a marine engine. The experimental results are then compared using three different techniques: (1) the conventional technique, (2) the average of point-to-point comparisons, and (3) a comparison using quadratic prediction models. This research aims to present the procedures of these techniques in depth. The results show that the comparison using quadratic prediction models can accurately predict a large number of results and makes the comparison more logical. The results are compatible with those of the conventional technique, while the average of the point-to-point comparisons shows diverse results. These results are also explained on a fuel property basis, confirming that the quadratic prediction model and the conventional technique are practical, but the average of the point-to-point comparison technique gives an inaccurate result. The benefit of this research is that the quadratic prediction model is more flexible for future science and engineering experimental design, thus reducing cost and time usage. The details of the calculation, results, and discussion are presented in the paper.
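    A brief sketch of technique (3): fit a quadratic model of a performance variable against engine speed for each fuel and compare the fitted curves at common operating points. All numbers below are hypothetical, not the paper's marine-engine measurements.

```python
import numpy as np

# Hypothetical brake power vs. engine speed data for the two fuels.
speed = np.array([1000, 1200, 1400, 1600, 1800, 2000], dtype=float)   # rpm
power_b5 = np.array([38.0, 47.5, 55.0, 61.0, 64.5, 66.0])             # kW, illustrative
power_b20 = np.array([37.0, 46.0, 53.5, 59.0, 62.5, 63.5])            # kW, illustrative

coef_b5 = np.polyfit(speed, power_b5, deg=2)        # quadratic prediction model for B5
coef_b20 = np.polyfit(speed, power_b20, deg=2)      # quadratic prediction model for B20

grid = np.linspace(1000, 2000, 11)
difference = np.polyval(coef_b5, grid) - np.polyval(coef_b20, grid)
print(difference.round(2))                          # B5 minus B20 across the speed range
```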

  14. Predictive accuracy of novel risk factors and markers: A simulation study of the sensitivity of different performance measures for the Cox proportional hazards regression model

    NARCIS (Netherlands)

    P.C. Austin (Peter); Pencinca, M.J. (Michael J.); E.W. Steyerberg (Ewout)

    2017-01-01

    Predicting outcomes that occur over time is important in clinical, population health, and health services research. We compared changes in different measures of performance when a novel risk factor or marker was added to an existing Cox proportional hazards regression model. We performed

  15. Using high-performance mathematical modelling tools to predict erosion and sediment fluxes in peri-urban catchments

    Science.gov (United States)

    Pereira, André; Conde, Daniel; Ferreira, Carla S. S.; Walsh, Rory; Ferreira, Rui M. L.

    2017-04-01

    Deforestation and urbanization generally lead to increased soil erosion through the indirect effect of increased overland flow and peak flood discharges. Mathematical modelling tools can be helpful for predicting the spatial distribution of erosion and the morphological changes on the channel network. This is especially useful to predict the impacts of land-use changes in parts of the watershed, namely due to urbanization. However, given the size of the computational domain (normally the watershed itself), the need for high-spatial-resolution data to accurately model sediment transport processes and the possible need to model transcritical flows, the computational cost is high and requires high-performance computing techniques. The aim of this work is to present the latest developments of the hydrodynamic and morphological model STAV2D and its applicability to predict runoff and erosion at the watershed scale. STAV2D was developed at CEris - Instituto Superior Técnico, Universidade de Lisboa - as a tool particularly appropriate for modelling strong transient flows in complex and dynamic geometries. It is based on an explicit, first-order 2DH finite-volume discretization scheme for unstructured triangular meshes, in which a flux-splitting technique is paired with a reviewed Roe-Riemann solver, yielding a model applicable to discontinuous flows over time-evolving geometries. STAV2D features solid transport in both Eulerian and Lagrangian forms, with the aim of describing the transport of fine natural sediments as well as of large individual debris. The model has been validated with theoretical solutions and laboratory experiments (Canelas et al., 2013 & Conde et al., 2015). STAV2D now supports fully distributed and heterogeneous simulations where multiple different hardware devices can be used to accelerate computation time within a unified Object-Oriented approach: the source code for CPU and GPU has the same compilation units and requires no device-specific branches, like

  16. Evaluation of the Performance and the Predictive Capacity of Build-Up and Wash-Off Models on Different Temporal Scales

    Directory of Open Access Journals (Sweden)

    Saja Al Ali

    2016-07-01

    Stormwater quality modeling has arisen as a promising tool to develop mitigation strategies. The aim of this paper is to assess the build-up and wash-off processes and investigate the capacity of several water quality models to accurately simulate and predict the temporal variability of suspended solids concentrations in runoff, based on a long-term data set. A Markov Chain Monte-Carlo (MCMC) technique is applied to calibrate the models and analyze the parameters' uncertainty. The short-term predictive capacity of the models is assessed based on inter- and intra-event approaches. Results suggest that the performance of the wash-off model is related to the dynamics of pollutant transport, where the best fit is recorded for first flush events. Assessment of the SWMM (Storm Water Management Model) exponential build-up model reveals that better performance is obtained on short periods and that build-up models relying only on the antecedent dry weather period as an explanatory variable cannot satisfactorily predict the accumulated mass on the surface. The predictive inter-event capacity of the SWMM exponential model proves its inability to predict the pollutograph, while the intra-event approach based on data assimilation proves its efficiency for first flush events only. This method is very interesting for management practices because of its simplicity and easy implementation.
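    The record above describes calibrating wash-off parameters with MCMC. As a point of reference only, the sketch below shows a minimal random-walk Metropolis calibration of a single exponential wash-off coefficient against synthetic event data; the model form, priors, proposal widths and data are placeholder assumptions, not the setup used in the paper.

```python
# Minimal Metropolis MCMC sketch for calibrating an exponential wash-off
# parameter against observed event loads. Illustrative only: the model form,
# priors, and data are placeholders, not the paper's calibration setup.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observed" event data: runoff rate q (mm/h) and washed-off mass (kg/ha)
q = np.array([2.0, 5.0, 8.0, 12.0, 20.0])
obs = np.array([0.8, 1.7, 2.3, 2.9, 3.6])

M0 = 5.0  # assumed initial surface load (kg/ha)

def washoff(k, q, duration=1.0):
    """Mass washed off during an event of given duration (exponential model)."""
    return M0 * (1.0 - np.exp(-k * q * duration))

def log_posterior(theta):
    k, sigma = theta
    if k <= 0 or sigma <= 0:          # flat priors restricted to positive values
        return -np.inf
    resid = obs - washoff(k, q)
    return -0.5 * np.sum((resid / sigma) ** 2) - len(obs) * np.log(sigma)

# Random-walk Metropolis sampler
theta = np.array([0.05, 0.5])
logp = log_posterior(theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(scale=[0.01, 0.05])
    logp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < logp_prop - logp:
        theta, logp = prop, logp_prop
    samples.append(theta.copy())

samples = np.array(samples[5000:])            # discard burn-in
print("posterior mean k, sigma:", samples.mean(axis=0))
print("95% interval for k:", np.percentile(samples[:, 0], [2.5, 97.5]))
```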

  17. Academic Performance of First-Year Students at a College of Pharmacy in East Tennessee: Models for Prediction

    Science.gov (United States)

    Clavier, Cheri Whitehead

    2013-01-01

    With the increase of students applying to pharmacy programs, it is imperative that admissions committees choose appropriate measures to analyze student readiness. The purpose of this research was to identify significant factors that predict the academic performance, defined as grade point average (GPA) at the end of the first professional year, of…

  18. Chromosomal regions involved in hybrid performance and heterosis : their AFLP-based identification and practical use in prediction models

    NARCIS (Netherlands)

    Vuylsteke, M.; Kuiper, M.; Stam, P.

    2000-01-01

    In this paper, a novel approach towards the prediction of hybrid performance and heterosis is presented. Here, we describe an approach based on: (i) the assessment of associations between AFLP® (AFLP® is a registered trademark of Keygene N.V.; the methylation AFLP® method is subject to a patent

  19. Performance of a coupled lagged ensemble weather and river runoff prediction model system for the Alpine Ammer River catchment

    Science.gov (United States)

    Smiatek, G.; Kunstmann, H.; Werhahn, J.

    2012-04-01

    The Ammer River catchment, located in the Bavarian Ammergau Alps and alpine forelands, Germany, with elevations reaching 2185 m and annual mean precipitation between 1100 and 2000 mm, represents a very demanding test ground for a river runoff prediction system. Large flooding events in 1999 and 2005 motivated the development of a physically based prediction tool in this area. Such a tool is the coupled high-resolution numerical weather and river runoff forecasting system AM-POE, which has been studied in several configurations in various experiments starting from the year 2005. Cornerstones of the coupled system are the hydrological water balance model WaSiM-ETH run at 100 m grid resolution, the numerical weather prediction model (NWP) MM5 run at 3.5 km grid cell resolution, and the Perl Object Environment (POE) framework. POE implements the input data download from various sources, the input data provision via SOAP-based WEB services, as well as the runs of the hydrology model both with observed and with NWP-predicted meteorology input. The one-way coupled system utilizes a lagged ensemble prediction system (EPS) taking into account combinations of recent and previous NWP forecasts. Results obtained in the years 2005-2011 reveal that river runoff simulations show high correlation with observed runoff when driven with monitored observations in hindcast experiments. The quality of the runoff forecasts depends on lead time in the lagged ensemble prediction and still shows limitations resulting from errors in the timing and total amount of the predicted precipitation in the complex mountainous area. The presentation describes the system implementation and demonstrates the application of the POE framework in networking, distributed computing and the setup of various experiments, as well as long-term results of the system application in the years 2005-2011.

  20. Principles of Sonar Performance Modeling

    NARCIS (Netherlands)

    Ainslie, M.A.

    2010-01-01

    Sonar performance modelling (SPM) is concerned with the prediction of quantitative measures of sonar performance, such as probability of detection. It is a multidisciplinary subject, requiring knowledge and expertise in the disparate fields of underwater acoustics, acoustical oceanography, sonar

  1. What do saliency models predict?

    Science.gov (United States)

    Koehler, Kathryn; Guo, Fei; Zhang, Sheng; Eckstein, Miguel P.

    2014-01-01

    Saliency models have been frequently used to predict eye movements made during image viewing without a specified task (free viewing). Use of a single image set to systematically compare free viewing to other tasks has never been performed. We investigated the effect of task differences on the ability of three models of saliency to predict the performance of humans viewing a novel database of 800 natural images. We introduced a novel task where 100 observers made explicit perceptual judgments about the most salient image region. Other groups of observers performed a free viewing task, saliency search task, or cued object search task. Behavior on the popular free viewing task was not best predicted by standard saliency models. Instead, the models most accurately predicted the explicit saliency selections and eye movements made while performing saliency judgments. Observers' fixations varied similarly across images for the saliency and free viewing tasks, suggesting that these two tasks are related. The variability of observers' eye movements was modulated by the task (lowest for the object search task and greatest for the free viewing and saliency search tasks) as well as the clutter content of the images. Eye movement variability in saliency search and free viewing might be also limited by inherent variation of what observers consider salient. Our results contribute to understanding the tasks and behavioral measures for which saliency models are best suited as predictors of human behavior, the relationship across various perceptual tasks, and the factors contributing to observer variability in fixational eye movements. PMID:24618107

  2. Prediction of aerodynamic performance for MEXICO rotor

    DEFF Research Database (Denmark)

    Hong, Zedong; Yang, Hua; Xu, Haoran

    2013-01-01

    The aerodynamic performance of the MEXICO (Model EXperiments In Controlled cOnditions) rotor at five tunnel wind speeds is predicted by making use of BEM and CFD methods, respectively, using commercial MATLAB and CFD software. Due to the pressure differences on both sides of the blade, the tip-fl...

  3. Testing the near field/far field model performance for prediction of particulate matter emissions in a paint factory

    DEFF Research Database (Denmark)

    Koivisto, A.J.; Jensen, A.C.Ø.; Levin, Marcus

    2015-01-01

    concentration levels in a paint factory. PM concentration levels were measured during big bag and small bag powder pouring. Rotating drum dustiness indices were determined for the specific powders used and applied in the NF/FF model to predict mass concentrations. Modeled process-specific concentration levels… were adjusted to be similar to the measured concentration levels by adjusting the handling energy factor. The handling energy factors were found to vary considerably depending on the material and process, even though they have the same values as modifying factors in the exposure models. This suggests…

  4. Deliberate practice predicts performance throughout time in adolescent chess players and dropouts: A linear mixed models analysis.

    NARCIS (Netherlands)

    de Bruin, A.B.H.; Smits, N.; Rikers, R.M.J.P.; Schmidt, H.G.

    2008-01-01

    In this study, the longitudinal relation between deliberate practice and performance in chess was examined using a linear mixed models analysis. The practice activities and performance ratings of young elite chess players, who were either in, or had dropped out of the Dutch national chess training,

  5. SITE-94. Discrete-feature modelling of the Aespoe Site: 3. Predictions of hydrogeological parameters for performance assessment

    Energy Technology Data Exchange (ETDEWEB)

    Geier, J.E. [Golder Associates AB, Uppsala (Sweden)]

    1996-12-01

    A 3-dimensional, discrete-feature hydrological model is developed. The model integrates structural and hydrologic data for the Aespoe site, on scales ranging from semi regional fracture zones to individual fractures in the vicinity of the nuclear waste canisters. Predicted parameters for the near field include fracture spacing, fracture aperture, and Darcy velocity at each of forty canister deposition holes. Parameters for the far field include discharge location, Darcy velocity, effective longitudinal dispersion coefficient and head gradient, flow porosity, and flow wetted surface, for each canister source that discharges to the biosphere. Results are presented in the form of statistical summaries for a total of 42 calculation cases, which treat a set of 25 model variants in various combinations. The variants for the SITE-94 Reference Case model address conceptual and parametric uncertainty related to the site-scale hydrogeologic model and its properties, the fracture network within the repository, effective semi regional boundary conditions for the model, and the disturbed-rock zone around the repository tunnels and shafts. Two calculation cases simulate hydrologic conditions that are predicted to occur during future glacial episodes. 30 refs.

  6. Theoretical models to predict the transient heat transfer performance of HIFAR fuel elements under non-forced convective conditions

    International Nuclear Information System (INIS)

    Green, W.J.

    1987-04-01

    Simple theoretical models have been developed which are suitable for predicting the thermal responses of irradiated research fuel elements of markedly different geometries when they are subjected to loss-of-coolant accident conditions. These models have been used to calculate temperature responses corresponding to various non-forced convective conditions. Comparisons between experimentally observed temperatures and calculated values have shown that a suitable value for surface thermal emissivity is 0.35; modelling of the fuel element beyond the region of the fuel plate needs to be included since these areas account for approximately 25 per cent of the thermal power dissipated; general agreement between calculated and experimental temperatures for both transient and steady-state conditions is good - the maximum discrepancy between calculated and experimental temperatures for a HIFAR Mark IV/V fuel element is ∼ 70 deg C, and for an Oak Ridge Reactor (ORR) box-type fuel element ∼ 30 deg C; and axial power distribution does not significantly affect thermal responses for the conditions investigated. Overall, the comparisons have shown that the models evolved can reproduce experimental data to a level of accuracy that provides confidence in the modelling technique and the postulated heat dissipation mechanisms, and that these models can be used to predict thermal responses of fuel elements in accident conditions that are not easily investigated experimentally

  7. Clinical Prediction Performance of Glaucoma Progression Using a 2-Dimensional Continuous-Time Hidden Markov Model with Structural and Functional Measurements.

    Science.gov (United States)

    Song, Youngseok; Ishikawa, Hiroshi; Wu, Mengfei; Liu, Yu-Ying; Lucy, Katie A; Lavinsky, Fabio; Liu, Mengling; Wollstein, Gadi; Schuman, Joel S

    2018-03-20

    Previously, we introduced a state-based 2-dimensional continuous-time hidden Markov model (2D CT HMM) to model the pattern of detected glaucoma changes using structural and functional information simultaneously. The purpose of this study was to evaluate the detected glaucoma change prediction performance of the model in a real clinical setting using a retrospective longitudinal dataset. Longitudinal, retrospective study. One hundred thirty-four eyes from 134 participants diagnosed with glaucoma or as glaucoma suspects (average follow-up, 4.4±1.2 years; average number of visits, 7.1±1.8). A 2D CT HMM was trained using OCT (Cirrus HD-OCT; Zeiss, Dublin, CA) average circumpapillary retinal nerve fiber layer (cRNFL) thickness and visual field index (VFI) or mean deviation (MD; Humphrey Field Analyzer; Zeiss). The model was trained using a subset of the data (107 of 134 eyes [80%]) including all visits except for the last visit, which was used to test the prediction performance (training set). Additionally, the remaining 27 eyes were used for secondary performance testing as an independent group (validation set). The 2D CT HMM predicts 1 of 4 possible detected state changes based on 1 input state. Prediction accuracy was assessed as the percentage of correct prediction against the patient's actual recorded state. In addition, deviations of the predicted long-term detected change paths from the actual detected change paths were measured. Baseline mean ± standard deviation age was 61.9±11.4 years, VFI was 90.7±17.4, MD was -3.50±6.04 dB, and cRNFL thickness was 74.9±12.2 μm. The accuracy of detected glaucoma change prediction using the training set was comparable with the validation set (57.0% and 68.0%, respectively). Prediction deviation from the actual detected change path showed stability throughout patient follow-up. The 2D CT HMM demonstrated promising performance in predicting detected glaucoma change in a simulated clinical setting
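    Since the record describes a continuous-time model over discrete detected states, the illustrative sketch below shows the core computation such a model relies on: turning a transition-rate (generator) matrix into interval transition probabilities via the matrix exponential and reading off the most likely next state. The four states, the rate values and the follow-up interval are invented for illustration; the real model couples two measurement dimensions (cRNFL and VFI/MD) and hidden states.

```python
# Illustrative sketch (not the authors' 2D CT HMM): for a continuous-time
# Markov model, the probability of moving between discrete detected states
# over a follow-up interval t is given by the matrix exponential of the
# transition-rate (generator) matrix Q. States and rates below are made up.
import numpy as np
from scipy.linalg import expm

# Generator matrix Q for 4 hypothetical detected states
# (rows sum to zero; off-diagonal entries are transition rates per year).
Q = np.array([
    [-0.30, 0.20, 0.08, 0.02],
    [ 0.05, -0.25, 0.15, 0.05],
    [ 0.01,  0.04, -0.20, 0.15],
    [ 0.00,  0.00,  0.00, 0.00],   # absorbing "advanced change" state
])

t = 1.5                      # years until the next visit
P = expm(Q * t)              # transition probability matrix over interval t

current_state = 1            # decoded state at the last visit
probs = P[current_state]
predicted_state = int(np.argmax(probs))
print("transition probabilities:", np.round(probs, 3))
print("most likely state at next visit:", predicted_state)
```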

  8. An Examination of Pennsylvania's Classroom Diagnostic Testing as a Predictive Model of Pennsylvania System of School Assessment Performance

    Science.gov (United States)

    Matsanka, Christopher

    2017-01-01

    The purpose of this non-experimental quantitative study was to investigate the relationship between Pennsylvania's Classroom Diagnostic Tools (CDT) interim assessments and the state-mandated Pennsylvania System of School Assessment (PSSA) and to create linear regression equations that could be used as models to predict student performance on the…

  9. A Five-Stage Prediction-Observation-Explanation Inquiry-Based Learning Model to Improve Students' Learning Performance in Science Courses

    Science.gov (United States)

    Hsiao, Hsien-Sheng; Chen, Jyun-Chen; Hong, Jon-Chao; Chen, Po-Hsi; Lu, Chow-Chin; Chen, Sherry Y.

    2017-01-01

    A five-stage prediction-observation-explanation inquiry-based learning (FPOEIL) model was developed to improve students' scientific learning performance. In order to intensify the science learning effect, the repertory grid technology-assisted learning (RGTL) approach and the collaborative learning (CL) approach were utilized. A quasi-experimental…

  10. Genomic Prediction of Barley Hybrid Performance

    Directory of Open Access Journals (Sweden)

    Norman Philipp

    2016-07-01

    Hybrid breeding in barley (Hordeum vulgare L.) offers great opportunities to accelerate the rate of genetic improvement and to boost yield stability. A crucial requirement consists of the efficient selection of superior hybrid combinations. We used comprehensive phenotypic and genomic data from a commercial breeding program with the goal of examining the potential to predict the hybrid performances. The phenotypic data comprised replicated grain yield trials for 385 two-way and 408 three-way hybrids evaluated in up to 47 environments. The parental lines were genotyped using a 3k single nucleotide polymorphism (SNP) array based on an Illumina Infinium assay. We implemented ridge regression best linear unbiased prediction modeling for additive and dominance effects and evaluated the prediction ability using five-fold cross-validation. The prediction ability of hybrid performances based on general combining ability (GCA) effects was moderate, amounting to 0.56 and 0.48 for two- and three-way hybrids, respectively. The potential of GCA-based hybrid prediction requires that both parental components have been evaluated in a hybrid background. This is not necessary for genomic prediction, for which we also observed moderate cross-validated prediction abilities of 0.51 and 0.58 for two- and three-way hybrids, respectively. This exemplifies the potential of genomic prediction in hybrid barley. Interestingly, prediction ability using the two-way hybrids as training population and the three-way hybrids as test population, or vice versa, was low, presumably because of the different genetic makeup of the parental source populations. Consequently, further research is needed to optimize genomic prediction approaches combining different source populations in barley.
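    As an illustration of the kind of workflow the record describes, the sketch below fits a ridge-regression (RR-BLUP-like) model to a synthetic SNP matrix and estimates prediction ability by five-fold cross-validation. The marker matrix, phenotypes and the fixed shrinkage parameter are placeholder assumptions, not the barley data or the exact additive-plus-dominance model of the study.

```python
# Minimal sketch of genomic prediction with ridge regression and
# five-fold cross-validation; all data below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)
n_hybrids, n_markers = 300, 1000
X = rng.integers(0, 3, size=(n_hybrids, n_markers)).astype(float)  # 0/1/2 SNP codes
true_effects = rng.normal(0, 0.05, n_markers)
y = X @ true_effects + rng.normal(0, 1.0, n_hybrids)                # grain yield proxy

kf = KFold(n_splits=5, shuffle=True, random_state=1)
abilities = []
for train, test in kf.split(X):
    model = Ridge(alpha=100.0)          # shrinkage strength would normally be tuned
    model.fit(X[train], y[train])
    pred = model.predict(X[test])
    abilities.append(np.corrcoef(pred, y[test])[0, 1])  # prediction ability = r(pred, obs)

print("cross-validated prediction ability: %.2f" % np.mean(abilities))
```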

  11. Validation of a zero-dimensional model for prediction of NOx and engine performance for electronically controlled marine two-stroke diesel engines

    International Nuclear Information System (INIS)

    Scappin, Fabio; Stefansson, Sigurður H.; Haglind, Fredrik; Andreasen, Anders; Larsen, Ulrik

    2012-01-01

    The aim of this paper is to derive a methodology suitable for energy system analysis for predicting the performance and NOx emissions of marine low speed diesel engines. The paper describes a zero-dimensional model, evaluating the engine performance by means of an energy balance and a two-zone combustion model using ideal gas law equations over a complete crank cycle. The combustion process is divided into intervals, and the product composition and flame temperature are calculated in each interval. The NOx emissions are predicted using the extended Zeldovich mechanism. The model is validated using experimental data from two MAN B&W engines; one case being data subject to engine parameter changes corresponding to simulating an electronically controlled engine; the second case providing data covering almost all model input and output parameters. The first case of validation suggests that the model can predict specific fuel oil consumption and NOx emissions within the 95% confidence intervals given by the experimental measurements. The second validation confirms the capability of the model to match measured engine output parameters based on measured engine input parameters with a maximum 5% deviation. - Highlights: ► A fast realistic model of a marine two-stroke low speed diesel engine was derived. ► The model is fast and accurate enough for future complex energy systems analysis. ► The effects of engine tuning were validated with experimental tests. ► The model was validated while constrained by experimental input and output data.
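    To give a concrete flavour of what a zero-dimensional crank-angle model computes, the sketch below integrates a much simpler single-zone first-law model with a Wiebe-function heat release to produce an in-cylinder pressure trace. It deliberately omits the two-zone split, scavenging, heat losses and the extended Zeldovich NOx chemistry used in the paper, and every engine number in it is an illustrative placeholder rather than MAN B&W data.

```python
# Much-simplified single-zone sketch of a zero-dimensional engine model:
# Wiebe-function heat release plus the first law gives the in-cylinder
# pressure trace over the crank cycle. All parameters are placeholders.
import numpy as np

gamma = 1.32                     # assumed effective ratio of specific heats
Vd, Vc = 1.6, 0.12               # displaced and clearance volume (m^3), made up
R = 4.0                          # connecting rod / crank radius ratio
Q_total = 5.0e6                  # total heat released per cycle (J), made up
theta0, dtheta = np.radians(-5), np.radians(50)   # start and duration of combustion
a, m = 6.9, 1.5                  # Wiebe parameters (common textbook values)

def volume(theta):
    """Cylinder volume from slider-crank kinematics."""
    return Vc + 0.5 * Vd * (R + 1 - np.cos(theta) - np.sqrt(R**2 - np.sin(theta)**2))

def burned_fraction(theta):
    x = np.clip((theta - theta0) / dtheta, 0.0, 1.0)
    return 1.0 - np.exp(-a * x ** (m + 1))

theta = np.radians(np.linspace(-180, 180, 3601))
p = np.empty_like(theta)
p[0] = 3.0e5                                     # pressure at BDC (Pa), made up
for i in range(1, len(theta)):
    dth = theta[i] - theta[i - 1]
    V = volume(theta[i - 1])
    dV = (volume(theta[i]) - volume(theta[i - 1])) / dth
    dQ = Q_total * (burned_fraction(theta[i]) - burned_fraction(theta[i - 1])) / dth
    dp = (gamma - 1) / V * dQ - gamma * p[i - 1] / V * dV   # first-law pressure rate
    p[i] = p[i - 1] + dp * dth

print("peak pressure: %.1f bar at %.1f deg crank angle" %
      (p.max() / 1e5, np.degrees(theta[np.argmax(p)])))
```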

  12. Validation of a zero-dimensional model for prediction of NOx and engine performance for electronically controlled marine two-stroke diesel engines

    DEFF Research Database (Denmark)

    Scappin, Fabio; Stefansson, Sigurður H.; Haglind, Fredrik

    2012-01-01

    The aim of this paper is to derive a methodology suitable for energy system analysis for predicting the performance and NOx emissions of marine low speed diesel engines. The paper describes a zero-dimensional model, evaluating the engine performance by means of an energy balance and a two zone...... experimental data from two MAN B&W engines; one case being data subject to engine parameter changes corresponding to simulating an electronically controlled engine; the second case providing data covering almost all model input and output parameters. The first case of validation suggests that the model can...... predict specific fuel oil consumption and NOx emissions within the 95% confidence intervals given by the experimental measurements. The second validation confirms the capability of the model to match measured engine output parameters based on measured engine input parameters with a maximum 5% deviation....

  13. Meta-analysis approach as a gene selection method in class prediction: does it improve model performance? A case study in acute myeloid leukemia.

    Science.gov (United States)

    Novianti, Putri W; Jong, Victor L; Roes, Kit C B; Eijkemans, Marinus J C

    2017-04-11

    Aggregating gene expression data across experiments via meta-analysis is expected to increase the precision of the effect estimates and to increase the statistical power to detect a certain fold change. This study evaluates the potential benefit of using a meta-analysis approach as a gene selection method prior to predictive modeling in gene expression data. Six raw datasets from different gene expression experiments in acute myeloid leukemia (AML) and 11 different classification methods were used to build classification models to classify samples as either AML or healthy control. First, the classification models were trained on gene expression data from single experiments using conventional supervised variable selection and externally validated with the other five gene expression datasets (referred to as the individual-classification approach). Next, gene selection was performed through meta-analysis on four datasets, and predictive models were trained with the selected genes on the fifth dataset and validated on the sixth dataset. For some datasets, gene selection through meta-analysis helped classification models to achieve higher performance as compared to predictive modeling based on a single dataset; but for others, there was no major improvement. Synthetic datasets were generated from nine simulation scenarios. The effect of sample size, fold change and pairwise correlation between differentially expressed (DE) genes on the difference between MA- and individual-classification model was evaluated. The fold change and pairwise correlation significantly contributed to the difference in performance between the two methods. The gene selection via meta-analysis approach was more effective when it was conducted using a set of data with low fold change and high pairwise correlation on the DE genes. Gene selection through meta-analysis on previously published studies potentially improves the performance of a predictive model on a given gene expression data.

  14. A three-step approach for the derivation and validation of high-performing predictive models using an operational dataset: congestive heart failure readmission case study.

    Science.gov (United States)

    AbdelRahman, Samir E; Zhang, Mingyuan; Bray, Bruce E; Kawamoto, Kensaku

    2014-05-27

    The aim of this study was to propose an analytical approach to develop high-performing predictive models for congestive heart failure (CHF) readmission using an operational dataset with incomplete records and changing data over time. Our analytical approach involves three steps: pre-processing, systematic model development, and risk factor analysis. For pre-processing, variables that were absent in >50% of records were removed. Moreover, the dataset was divided into a validation dataset and derivation datasets, which were separated into three temporal subsets based on changes to the data over time. For systematic model development, using the different temporal datasets and the remaining explanatory variables, the models were developed by combining the use of various (i) statistical analyses to explore the relationships between the validation and the derivation datasets; (ii) adjustment methods for handling missing values; (iii) classifiers; (iv) feature selection methods; and (v) discretization methods. We then selected the best derivation dataset and the models with the highest predictive performance. For risk factor analysis, factors in the highest-performing predictive models were analyzed and ranked using (i) statistical analyses of the best derivation dataset, (ii) feature rankers, and (iii) a newly developed algorithm to categorize risk factors as being strong, regular, or weak. The analysis dataset consisted of 2,787 CHF hospitalizations at University of Utah Health Care from January 2003 to June 2013. In this study, we used the complete-case analysis and mean-based imputation adjustment methods; the wrapper subset feature selection method; and four ranking strategies based on information gain, gain ratio, symmetrical uncertainty, and wrapper subset feature evaluators. The best-performing models resulted from the use of a complete-case analysis derivation dataset combined with the Class-Attribute Contingency Coefficient discretization method and a voting

  15. Predictive model for bacteremia in adult patients with blood cultures performed at the emergency department: a preliminary report.

    Science.gov (United States)

    Su, Chan-Ping; Chen, Tony Hsiu-Hsi; Chen, Shey-Ying; Ghiang, Wen-Chu; Wu, Grace Hwei-Min; Sun, Hsin-Yun; Lee, Chien-Cheng; Wang, Jiun-Ling; Chang, Shan-Chwen; Chen, Yee-Chun; Yen, Amy Ming-Fang; Chen, Wen-Jone; Hsueh, Po-Ren

    2011-12-01

    Useful predictive models for identifying patients at high risk of bacteremia at the emergency department (ED) are lacking. This study attempted to provide useful predictive models for identifying patients at high risk of bacteremia at the ED. A prospective cohort study was conducted at the ED of a tertiary care hospital from October 1 to November 30, 2004. Patients aged 15 years or older, who had at least two sets of blood culture, were recruited. Data were analyzed on selected covariates, including demographic characteristics, predisposing conditions, clinical presentations, laboratory tests, and presumptive diagnosis, at the ED. An iterative procedure was used to build up a logistic model, which was then simplified into a coefficient-based scoring system. A total of 558 patients with 84 episodes of true bacteremia were enrolled. Predictors of bacteremia and their assigned scores were as follows: fever greater than or equal to 38.3°C [odds ratio (OR), 2.64], 1 point; tachycardia greater than or equal to 120/min (OR, 2.521), 1 point; lymphopenia less than 0.5×10³/μL (OR, 3.356), 2 points; aspartate transaminase greater than 40 IU/L (OR, 2.355), 1 point; C-reactive protein greater than 10 mg/dL (OR, 2.226), 1 point; procalcitonin greater than 0.5 ng/mL (OR, 3.147), 2 points; and presumptive diagnosis of respiratory tract infection (OR, 0.236), -2 points. The areas under the receiver operating characteristic curves of the original logistic model and the simplified scoring model using the aforementioned seven predictors and their assigned scores were 0.854 (95% confidence interval, 0.806-0.902) and 0.845 (95% confidence interval, 0.798-0.894), respectively. This simplified scoring system could rapidly identify high-risk patients of bacteremia at the ED. Copyright © 2011. Published by Elsevier B.V.
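    The abstract gives enough detail to show how such a coefficient-based score is applied in practice. The sketch below encodes the listed point values and sums them for a hypothetical patient; the patient record and any risk cut-off are assumptions for illustration, since the decision threshold would come from the original ROC analysis.

```python
# Sketch of applying the published coefficient-based score to a new ED
# patient. Point values are taken from the abstract; the patient findings
# and any "high risk" threshold are illustrative assumptions.
SCORE_ITEMS = {
    "fever_ge_38_3": 1,          # temperature >= 38.3 C
    "tachycardia_ge_120": 1,     # heart rate >= 120/min
    "lymphopenia_lt_0_5": 2,     # lymphocytes < 0.5 x 10^3/uL
    "ast_gt_40": 1,              # aspartate transaminase > 40 IU/L
    "crp_gt_10": 1,              # C-reactive protein > 10 mg/dL
    "procalcitonin_gt_0_5": 2,   # procalcitonin > 0.5 ng/mL
    "dx_respiratory_infection": -2,
}

def bacteremia_score(findings: dict) -> int:
    """Sum the points for every finding that is present (True)."""
    return sum(points for key, points in SCORE_ITEMS.items() if findings.get(key))

patient = {"fever_ge_38_3": True, "procalcitonin_gt_0_5": True,
           "dx_respiratory_infection": False}
print("score:", bacteremia_score(patient))   # 3 points for this hypothetical patient
# A cut-off for "high risk" would have to come from the original ROC analysis.
```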

  16. Effects of design variables predicted by a steady - state thermal performance analysis model of a loop heat pipe

    International Nuclear Information System (INIS)

    Jung, Eui Guk; Boo, Joon Hong

    2008-01-01

    This study deals with mathematical modeling of the steady-state temperature characteristics of an entire loop heat pipe. The lumped layer model was applied to each node for temperature analysis. The flat type evaporator and condenser in the model had planar dimensions of 40 mm (W) x 50 mm (L). The wick material was a sintered metal and the working fluid was methanol. The molecular kinetic theory was employed to model the phase change phenomena in the evaporator and the condenser. The liquid-vapor interface configuration was expressed using the thin film theories available in the literature. Effects of the design factors of the loop heat pipe on the thermal performance were investigated using the model proposed in this study

  17. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences]

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  18. The impacts of data constraints on the predictive performance of a general process-based crop model (PeakN-crop v1.0)

    Science.gov (United States)

    Caldararu, Silvia; Purves, Drew W.; Smith, Matthew J.

    2017-04-01

    Improving international food security under a changing climate and increasing human population will be greatly aided by improving our ability to modify, understand and predict crop growth. What we predominantly have at our disposal are either process-based models of crop physiology or statistical analyses of yield datasets, both of which suffer from various sources of error. In this paper, we present a generic process-based crop model (PeakN-crop v1.0) which we parametrise using a Bayesian model-fitting algorithm and three different sources of data: space-based vegetation indices, eddy covariance productivity measurements and regional crop yields. We show that the model parametrised without data, based on prior knowledge of the parameters, can largely capture the observed behaviour, but the data-constrained model greatly improves the model fit and reduces prediction uncertainty. We investigate the extent to which each dataset contributes to the model performance and show that while all data improve on the prior model fit, the satellite-based data and crop yield estimates are particularly important for reducing model error and uncertainty. Despite these improvements, we conclude that there are still significant knowledge gaps in terms of available data for model parametrisation, but our study can help indicate the necessary data collection to improve our predictions of crop yields and crop responses to environmental changes.

  19. Using Modeling and Simulation to Predict Operator Performance and Automation-Induced Complacency With Robotic Automation: A Case Study and Empirical Validation.

    Science.gov (United States)

    Wickens, Christopher D; Sebok, Angelia; Li, Huiyang; Sarter, Nadine; Gacy, Andrew M

    2015-09-01

    The aim of this study was to develop and validate a computational model of the automation complacency effect, as operators work on a robotic arm task, supported by three different degrees of automation. Some computational models of complacency in human-automation interaction exist, but those are formed and validated within the context of fairly simplified monitoring failures. This research extends model validation to a much more complex task, so that system designers can establish, without need for human-in-the-loop (HITL) experimentation, merits and shortcomings of different automation degrees. We developed a realistic simulation of a space-based robotic arm task that could be carried out with three different levels of trajectory visualization and execution automation support. Using this simulation, we performed HITL testing. Complacency was induced via several trials of correctly performing automation and then was assessed on trials when automation failed. Following a cognitive task analysis of the robotic arm operation, we developed a multicomponent model of the robotic operator and his or her reliance on automation, based in part on visual scanning. The comparison of model predictions with empirical results revealed that the model accurately predicted routine performance and predicted the responses to these failures after complacency developed. However, the scanning models do not account for the entire attention allocation effects of complacency. Complacency modeling can provide a useful tool for predicting the effects of different types of imperfect automation. The results from this research suggest that focus should be given to supporting situation awareness in automation development. © 2015, Human Factors and Ergonomics Society.

  20. Predicting sample size required for classification performance

    Directory of Open Access Journals (Sweden)

    Figueroa Rosa L

    2012-02-01

    Background: Supervised learning methods need annotated data in order to generate efficient models. Annotated data, however, is a relatively scarce resource and can be expensive to obtain. For both passive and active learning methods, there is a need to estimate the size of the annotated sample required to reach a performance target. Methods: We designed and implemented a method that fits an inverse power law model to points of a given learning curve created using a small annotated training set. Fitting is carried out using nonlinear weighted least squares optimization. The fitted model is then used to predict the classifier's performance and confidence interval for larger sample sizes. For evaluation, the nonlinear weighted curve fitting method was applied to a set of learning curves generated using clinical text and waveform classification tasks with active and passive sampling methods, and predictions were validated using standard goodness-of-fit measures. As a control we used an un-weighted fitting method. Results: A total of 568 models were fitted and the model predictions were compared with the observed performances. Depending on the data set and sampling method, it took between 80 and 560 annotated samples to achieve mean absolute and root mean squared errors below 0.01. Results also show that our weighted fitting method outperformed the baseline un-weighted method. Conclusions: This paper describes a simple and effective sample size prediction algorithm that conducts weighted fitting of learning curves. The algorithm outperformed an un-weighted algorithm described in previous literature. It can help researchers determine annotation sample size for supervised machine learning.
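    The fitting procedure described above is easy to illustrate. The sketch below fits an inverse power law learning curve with weighted nonlinear least squares (weights from the per-point standard deviations) and extrapolates accuracy to larger annotated sample sizes. The curve data, weights and starting values are synthetic assumptions, not the clinical text or waveform results of the paper.

```python
# Weighted inverse-power-law learning-curve fit and extrapolation.
# All learning-curve points below are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def inv_power_law(x, a, b, c):
    """Expected accuracy at training-set size x: asymptote a minus decay b*x^-c."""
    return a - b * np.power(x, -c)

# Observed learning-curve points (training-set size, mean accuracy, std dev)
sizes = np.array([50, 100, 200, 400, 800], dtype=float)
acc = np.array([0.71, 0.76, 0.80, 0.83, 0.85])
std = np.array([0.05, 0.04, 0.03, 0.02, 0.015])

# Weighted fit: points with smaller variance get more influence (sigma=std).
popt, pcov = curve_fit(inv_power_law, sizes, acc, p0=[0.9, 1.0, 0.5],
                       sigma=std, absolute_sigma=True)

for n in (1600, 3200):
    print(f"predicted accuracy at {n} samples: {inv_power_law(n, *popt):.3f}")
```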

  1. Effects of Different Missing Data Imputation Techniques on the Performance of Undiagnosed Diabetes Risk Prediction Models in a Mixed-Ancestry Population of South Africa.

    Directory of Open Access Journals (Sweden)

    Katya L Masconi

    Imputation techniques used to handle missing data are based on the principle of replacement. It is widely advocated that multiple imputation is superior to other imputation methods; however, studies have suggested that simple methods for filling missing data can be just as accurate as complex methods. The objective of this study was to implement a number of simple and more complex imputation methods, and assess the effect of these techniques on the performance of undiagnosed diabetes risk prediction models during external validation. Data from the Cape Town Bellville-South cohort served as the basis for this study. Imputation methods and models were identified via recent systematic reviews. Models' discrimination was assessed and compared using the C-statistic and non-parametric methods, before and after recalibration through simple intercept adjustment. The study sample consisted of 1256 individuals, of whom 173 were excluded due to previously diagnosed diabetes. Of the final 1083 individuals, 329 (30.4%) had missing data. Family history had the highest proportion of missing data (25%). Imputation of the outcome, undiagnosed diabetes, was highest in stochastic regression imputation (163 individuals). Overall, deletion resulted in the lowest model performances, while simple imputation yielded the highest C-statistic for the Cambridge Diabetes Risk model, Kuwaiti Risk model, Omani Diabetes Risk model and Rotterdam Predictive model. Multiple imputation only yielded the highest C-statistic for the Rotterdam Predictive model, which was matched by simpler imputation methods. Deletion was confirmed as a poor technique for handling missing data. However, despite the emphasized disadvantages of simpler imputation methods, this study showed that implementing these methods results in similar predictive utility for undiagnosed diabetes when compared to multiple imputation.

  2. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...

  3. Predictive Model Assessment for Count Data

    National Research Council Canada - National Science Library

    Czado, Claudia; Gneiting, Tilmann; Held, Leonhard

    2007-01-01

    .... In case studies, we critique count regression models for patent data, and assess the predictive performance of Bayesian age-period-cohort models for larynx cancer counts in Germany. Key words: Calibration...

  4. Staying Power of Churn Prediction Models

    NARCIS (Netherlands)

    Risselada, Hans; Verhoef, Peter C.; Bijmolt, Tammo H. A.

    In this paper, we study the staying power of various churn prediction models. Staying power is defined as the predictive performance of a model in a number of periods after the estimation period. We examine two methods, logit models and classification trees, both with and without applying a bagging

  5. Performance of an easy-to-use prediction model for renal patient survival: an external validation study using data from the ERA-EDTA Registry.

    Science.gov (United States)

    Hemke, Aline C; Heemskerk, Martin B A; van Diepen, Merel; Kramer, Anneke; de Meester, Johan; Heaf, James G; Abad Diez, José Maria; Torres Guinea, Marta; Finne, Patrik; Brunet, Philippe; Vikse, Bjørn E; Caskey, Fergus J; Traynor, Jamie P; Massy, Ziad A; Couchoud, Cécile; Groothoff, Jaap W; Nordio, Maurizio; Jager, Kitty J; Dekker, Friedo W; Hoitsma, Andries J

    2018-01-16

    An easy-to-use prediction model for long-term renal patient survival based on only four predictors [age, primary renal disease, sex and therapy at 90 days after the start of renal replacement therapy (RRT)] has been developed in The Netherlands. To assess the usability of this model for use in Europe, we externally validated the model in 10 European countries. Data from the European Renal Association-European Dialysis and Transplant Association (ERA-EDTA) Registry were used. Ten countries that reported individual patient data to the registry on patients starting RRT in the period 1995-2005 were included. The prediction model was evaluated for the 10- (primary endpoint), 5- and 3-year survival predictions by assessing the calibration and discrimination outcomes. We used a data set of 136 304 patients from 10 countries. The calibration in the large and calibration plots for 10 deciles of predicted survival probabilities showed average differences of 1.5, 3.2 and 3.4% in observed versus predicted 10-, 5- and 3-year survival, with some small variation at the country level. The concordance index, indicating the discriminatory power of the model, was 0.71 in the complete ERA-EDTA Registry cohort and varied at the country level between 0.70 and 0.75. A prediction model for long-term renal patient survival developed in a single country, based on only four easily available variables, has comparably adequate performance in a wide range of other European countries. © The Author(s) 2018. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
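    For readers unfamiliar with the two validation measures named in the record, the sketch below computes calibration in the large (mean predicted 10-year survival against the observed Kaplan-Meier estimate) and Harrell's concordance index on synthetic data, using the lifelines package. The simulated predictions, follow-up times and censoring rule are assumptions for illustration, not Registry data.

```python
# External-validation sketch: calibration in the large and concordance index
# on synthetic survival data (not Registry data).
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(7)
n = 2000
pred_surv_10y = rng.uniform(0.05, 0.9, n)          # model's predicted 10-year survival
# Synthetic follow-up: patients with higher predicted survival tend to live longer.
time = rng.exponential(scale=4 + 10 * pred_surv_10y)
event = (time < 12).astype(int)                    # administrative censoring at 12 years
time = np.minimum(time, 12)

# Calibration in the large at 10 years: observed (Kaplan-Meier) vs mean predicted.
kmf = KaplanMeierFitter().fit(time, event_observed=event)
observed_10y = float(kmf.survival_function_at_times(10).iloc[0])
print("observed 10-year survival:       %.3f" % observed_10y)
print("mean predicted 10-year survival: %.3f" % pred_surv_10y.mean())
print("calibration in the large (diff): %.3f" % (observed_10y - pred_surv_10y.mean()))

# Discrimination: Harrell's concordance index (higher score = longer survival).
print("concordance index: %.2f" % concordance_index(time, pred_surv_10y, event))
```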

  6. On the development and performance evaluation of a multiobjective GA-based RBF adaptive model for the prediction of stock indices

    Directory of Open Access Journals (Sweden)

    Babita Majhi

    2014-09-01

    This paper develops and assesses the performance of a hybrid prediction model using a radial basis function neural network and the non-dominated sorting multiobjective genetic algorithm-II (NSGA-II) for various stock market forecasts. The proposed technique simultaneously optimizes two mutually conflicting objectives: the structure (the number of centers in the hidden layer) and the output mean square error (MSE) of the model. The best compromised non-dominated solution-based model was determined from the optimal Pareto front using fuzzy set theory. The performance of this model was evaluated in terms of four different measures using Standard and Poor 500 (S&P 500) and Dow Jones Industrial Average (DJIA) stock data. The results of the simulation of the new model demonstrate a prediction performance superior to that of the conventional radial basis function (RBF)-based forecasting model in terms of the mean average percentage error (MAPE), directional accuracy (DA), Theil's U and average relative variance (ARV) values.

  7. Melanoma risk prediction models

    Directory of Open Access Journals (Sweden)

    Nikolić Jelena

    2014-01-01

    Background/Aim. The lack of effective therapy for advanced stages of melanoma emphasizes the importance of preventive measures and screenings of the population at risk. Identifying individuals at high risk should allow targeted screenings and follow-up involving those who would benefit most. The aim of this study was to identify the most significant factors for melanoma prediction in our population and to create prognostic models for identification and differentiation of individuals at risk. Methods. This case-control study included 697 participants (341 patients and 356 controls) who underwent an extensive interview and skin examination in order to check risk factors for melanoma. Pairwise univariate statistical comparison was used for the coarse selection of the most significant risk factors. These factors were fed into logistic regression (LR) and alternating decision tree (ADT) prognostic models that were assessed for their usefulness in identification of patients at risk of developing melanoma. Validation of the LR model was done by the Hosmer and Lemeshow test, whereas the ADT was validated by 10-fold cross-validation. The achieved sensitivity, specificity, accuracy and AUC for both models were calculated. The melanoma risk score (MRS) based on the outcome of the LR model was presented. Results. The LR model showed that the following risk factors were associated with melanoma: sunbeds (OR = 4.018; 95% CI 1.724-9.366 for those who sometimes used sunbeds), solar damage of the skin (OR = 8.274; 95% CI 2.661-25.730 for those with severe solar damage), hair color (OR = 3.222; 95% CI 1.984-5.231 for light brown/blond hair), the number of common naevi (over 100 naevi had OR = 3.57; 95% CI 1.427-8.931), the number of dysplastic naevi (from 1 to 10 dysplastic naevi, OR was 2.672; 95% CI 1.572-4.540; for more than 10 naevi, OR was 6.487; 95% CI 1.993-21.119), Fitzpatrick's phototype and the presence of congenital naevi. Red hair, phototype I and large congenital naevi were

  8. Large scale model predictions on the effect of GDL thermal conductivity and porosity on PEM fuel cell performance

    Directory of Open Access Journals (Sweden)

    Obaid ur Rehman

    2017-12-01

    The performance of a proton exchange membrane (PEM) fuel cell relies largely on the properties of the gas diffusion layer (GDL), which supports heat and mass transfer across the membrane electrode assembly. A novel approach is adopted in this work to analyze the activity of the GDL during fuel cell operation on a large-scale model. The model, with a mesh size of 1.3 million computational cells for a 50 cm² active area, was simulated by a parallel computing technique via a computer cluster. A grid independence study showed less than 5% deviation in the criterion parameter as the mesh size was increased to 1.8 million cells. Good approximation was achieved as the model was validated with the experimental data for a Pt loading of 1 mg cm⁻². The results showed that a GDL with higher thermal conductivity prevented the PEM from drying and led to improved protonic conduction. A GDL with higher porosity enhanced the reaction but resulted in lower output voltage, which demonstrated the effect of contact resistance. In addition, reduced porosity under the rib regions was significant, resulting in lower gas diffusion and in heat and water accumulation.

  9. Exploring the performance of the SEDD model to predict sediment yield in eucalyptus plantations. Long-term results from an experimental catchment in Southern Italy

    Science.gov (United States)

    Porto, P.; Cogliandro, V.; Callegari, G.

    2018-01-01

    In this paper, long-term sediment yield data collected in a small (1.38 ha) Calabrian catchment (W2), reafforested with eucalyptus trees (Eucalyptus occidentalis Engl.), are used to validate the performance of the SEdiment Delivery Distributed (SEDD) model in areas with high erosion rates. As a first step, the SEDD model was calibrated using field data collected in previous field campaigns undertaken during the period 1978-1994. This first phase allowed the model calibration parameter β to be calculated using direct measurements of rainfall, runoff, and sediment output. The model was then validated in its calibrated form for an independent period (2006-2016) for which new measurements of rainfall, runoff and sediment output are also available. The analysis, carried out at event and annual scales, showed good agreement between measured and predicted values of sediment yield and suggested that the SEDD model can be seen as an appropriate means of evaluating the erosion risk associated with manmade plantations in marginal areas. Further work is, however, required to test the performance of the SEDD model as a prediction tool in different geomorphic contexts.

  10. Model Performance Evaluation and Scenario Analysis (MPESA)

    Science.gov (United States)

    Model Performance Evaluation and Scenario Analysis (MPESA) assesses the performance with which models predict time series data. The tool was developed for the Hydrological Simulation Program-Fortran (HSPF) and the Stormwater Management Model (SWMM).
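    As background, evaluating how well a model reproduces an observed time series typically comes down to a handful of statistics. The sketch below computes three common ones (Nash-Sutcliffe efficiency, RMSE and percent bias) on synthetic flow data; it is a generic illustration, not MPESA's actual implementation or metric set.

```python
# Generic time-series performance statistics for model-vs-observation
# comparison. Data below are synthetic.
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, values < 0 are worse than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, sim):
    return float(np.sqrt(np.mean((np.asarray(obs, float) - np.asarray(sim, float)) ** 2)))

def pbias(obs, sim):
    """Percent bias: positive values mean the model underestimates on average."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

observed = [1.2, 3.4, 8.9, 5.1, 2.2, 1.0, 0.8]   # e.g. daily flows (m^3/s)
simulated = [1.0, 3.9, 7.5, 5.6, 2.5, 1.2, 0.7]

print("NSE:   %.3f" % nse(observed, simulated))
print("RMSE:  %.3f" % rmse(observed, simulated))
print("PBIAS: %.1f %%" % pbias(observed, simulated))
```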

  11. Model Predictive Control Fundamentals | Orukpe | Nigerian Journal ...

    African Journals Online (AJOL)

    Model Predictive Control (MPC) has developed considerably over the last two decades, both within the control research community and in industry. The MPC strategy involves the optimization of a performance index with respect to some future control sequence, using predictions of the output signal based on a process model, ...

  12. Past performance of assisted reproduction technologies as a model to predict future progress: a proposed addendum to Moore's law.

    Science.gov (United States)

    Cohen, Jacques; Alikani, Mina; Bisignano, Alexander

    2012-12-01

    The ultimate goal of IVF is to achieve healthy, single, live births following each single-embryo transfer. A timeline for this eventuality has never been defined. National implantation rates from 2003-2010 provided by the Society for Assisted Reproductive Technologies (SART) in the USA were evaluated. Regression analysis was applied to the annual trends. A high correlation was noted, showing a linear increase from year to year ranging between 0.3% and 1.5% when maternal age was not higher than 42. This relationship can be retrospectively applied to earlier SART data reports. This incline may be partly technology driven and resembles Moore's law, which describes annual improvements in microchip performance. Based on the assumption that technology will continue to drive progress, the length of time required to reach 100% implantation was calculated. The interval varied, with 43 years (AD 2053) estimated for the youngest age group. Data were provided by the Society for Assisted Reproductive Technologies (SART; www.sart.org), through which individual clinics' outcomes may be assessed. Although live birth and pregnancy are considered the gold standard of success, the investigators took the approach that those outcomes are often biased owing to the transfer of multiple embryos. The present analysis was therefore performed on individual embryos, using the implantation rate to compare national and individual clinic datasets. National implantation rates show a linear increase from year to year ranging between 0.3% and 1.5% for patients aged 42 or younger; this incline may be partly technology driven. This is an intriguing effect also seen in the computer industry, where there has been a doubling of computer speed and memory for the past 47 years, a phenomenon anticipated by Moore's law. We predict that the annual increase in implantation will also continue as new technologies become available. Based on current trends, the length of time for 100% implantation rates was calculated. Time to achieving 100% implantation varied between 43 years (AD 2053) for the youngest

  13. Performance of STICS model to predict rainfed corn evapotranspiration and biomass evaluated for 6 years between 1995 and 2006 using daily aggregated eddy covariance fluxes and ancillary measurements.

    Science.gov (United States)

    Pattey, Elizabeth; Jégo, Guillaume; Bourgeois, Gaétan

    2010-05-01

    Verifying the performance of process-based crop growth models in predicting evapotranspiration and crop biomass is a key component of the adaptation of agricultural crop production to climate variations. STICS, developed by INRA, was among the models selected by Agriculture and Agri-Food Canada to be implemented for environmental assessment studies on climate variations, because of its built-in ability to assimilate biophysical descriptors such as LAI derived from satellite imagery and its open architecture. The model prediction of shoot biomass was calibrated using destructive biomass measurements over one season, by adjusting six cultivar parameters and three generic plant parameters to define two grain corn cultivars adapted to the 1000-km long Mixedwood Plains ecozone. Its performance was then evaluated using a database of 40 site-years of corn destructive biomass and yield. In this study we evaluate the temporal response of STICS evapotranspiration and biomass accumulation predictions against estimates using daily aggregated eddy covariance fluxes. The flux tower was located at an experimental farm south of Ottawa and measurements were carried out over corn fields in 1995, 1996, 1998, 2000, 2002 and 2006. Daytime and nighttime fluxes were quality-controlled and gap-filled separately. Soil respiration was partitioned to calculate the corn net daily CO2 uptake, which was converted into dry biomass. Out of the six growing seasons, three (1995, 1998, 2002) had water stress periods during corn grain filling. Year 2000 was cool and wet, while 1996 had heat and rainfall distributed evenly over the season and 2006 had a wet spring. STICS can predict evapotranspiration using either crop coefficients, when wind speed and air moisture are not available, or a resistance approach. The first approach gave higher evapotranspiration predictions for all the years than the resistance approach and the flux measurements. The dynamics of the evapotranspiration predicted by STICS were very good for the growing seasons without

  14. Predicting the Performance of Organic Corrosion Inhibitors

    Directory of Open Access Journals (Sweden)

    David A. Winkler

    2017-12-01

    The withdrawal of effective but toxic corrosion inhibitors has provided an impetus for the discovery of new, benign organic compounds to fill that role. Concurrently, developments in the high-throughput synthesis of organic compounds, the establishment of large libraries of available chemicals, accelerated corrosion inhibition testing technologies, and the increased capability of machine learning methods have made discovery of new corrosion inhibitors much faster and cheaper than it used to be. We summarize these technical developments in the corrosion inhibition field and describe how data-driven machine learning methods can generate models linking molecular properties to corrosion inhibition that can be used to predict the performance of materials not yet synthesized or tested. We briefly summarize the literature on quantitative structure–property relationship models of small organic molecule corrosion inhibitors. The success of these models provides a paradigm for rapid discovery of novel, effective corrosion inhibitors for a range of metals and alloys in diverse environments.
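    To make the quantitative structure–property relationship idea concrete, the sketch below trains a random-forest regressor on synthetic molecular descriptors, cross-validates it, and ranks new candidates by predicted inhibition efficiency. The descriptor matrix, efficiencies and model choice are assumptions for illustration; published QSPR studies compute real descriptors (e.g. logP, frontier orbital energies) from molecular structures.

```python
# Hedged QSPR-style workflow sketch: descriptors in, predicted inhibition
# efficiency out. All descriptors and efficiencies below are random placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_molecules, n_descriptors = 120, 12
X = rng.normal(size=(n_molecules, n_descriptors))        # stand-ins for logP, HOMO/LUMO, charges, ...
y = 60 + 10 * X[:, 0] - 6 * X[:, 3] + rng.normal(0, 5, n_molecules)  # inhibition efficiency (%)

model = RandomForestRegressor(n_estimators=300, random_state=0)
r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2: %.2f +/- %.2f" % (r2.mean(), r2.std()))

# A fitted model can then rank untested candidates by predicted efficiency.
model.fit(X, y)
candidates = rng.normal(size=(5, n_descriptors))
print("predicted inhibition efficiencies:", np.round(model.predict(candidates), 1))
```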

  15. Numerical Stability and Control Analysis Towards Falling-Leaf Prediction Capabilities of Splitflow for Two Generic High-Performance Aircraft Models

    Science.gov (United States)

    Charlton, Eric F.

    1998-01-01

    Aerodynamic analyses are performed using the Lockheed-Martin Tactical Aircraft Systems (LMTAS) Splitflow computational fluid dynamics code to investigate the computational prediction capabilities for vortex-dominated flow fields of two different tailless aircraft models at large angles of attack and sideslip. These computations are performed with the goal of providing useful stability and control data to designers of high performance aircraft. Appropriate metrics for accuracy, time, and ease of use are determined in consultation with both the LMTAS Advanced Design and the Stability and Control groups. Results are obtained and compared to wind-tunnel data for all six components of forces and moments. Moment data are combined to form a "falling leaf" stability analysis. Finally, a handful of viscous simulations were also performed to further investigate nonlinearities and possible viscous effects in the differences between the accumulated inviscid computational and experimental data.

  16. Multiprocessor performance modeling with ADAS

    Science.gov (United States)

    Hayes, Paul J.; Andrews, Asa M.

    1989-01-01

    A graph managing strategy referred to as the Algorithm to Architecture Mapping Model (ATAMM) appears useful for the time-optimized execution of application algorithm graphs in embedded multiprocessors and for the performance prediction of graph designs. This paper reports the modeling of ATAMM in the Architecture Design and Assessment System (ADAS) to make an independent verification of ATAMM's performance prediction capability and to provide a user framework for the evaluation of arbitrary algorithm graphs. Following an overview of ATAMM and its major functional rules are descriptions of the ADAS model of ATAMM, methods to enter an arbitrary graph into the model, and techniques to analyze the simulation results. The performance of a 7-node graph example is evaluated using the ADAS model and verifies the ATAMM concept by substantiating previously published performance results.

  17. The Prediction Performance of Asset Pricing Models and Their Capability of Capturing the Effects of Economic Crises: The Case of Istanbul Stock Exchange

    Directory of Open Access Journals (Sweden)

    Erol Muzır

    2010-09-01

    This paper tests the common opinion that multifactor asset pricing models produce superior predictions compared to single factor models, and evaluates the performance of the Arbitrage Pricing Theory (APT) and the Capital Asset Pricing Model (CAPM). For this purpose, monthly return data from January 1996 to December 2004 for the stocks of 45 firms listed on the Istanbul Stock Exchange were used. Our factor analysis results show that 68.3% of the return variation can be explained by five factors. Although the APT model has generated a low coefficient of determination, 28.3%, it proves to be more competent in explaining stock return changes when compared to the CAPM, which has inferior explanatory power, 5.4%. Furthermore, we observed that the APT is also more robust in capturing the effects of economic crises on return variations.
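    The comparison in the record reduces to fitting two regressions and comparing their explanatory power. The sketch below estimates a single-factor CAPM regression and a multi-factor (APT-style) regression on simulated monthly returns and reports their R² values; the simulated market series, factor loadings and factor count are assumptions, not the ISE data or the factors extracted in the study.

```python
# CAPM vs multi-factor regression comparison by R^2 on simulated returns.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n_months = 108                               # roughly Jan 1996 to Dec 2004
market = rng.normal(0.01, 0.08, n_months)    # market excess return
factors = rng.normal(0, 0.05, (n_months, 5)) # five extracted factors (placeholders)
loadings = np.array([0.9, -0.4, 0.3, 0.2, -0.1])
stock = 0.6 * market + factors @ loadings + rng.normal(0, 0.05, n_months)

X_capm = market.reshape(-1, 1)
X_apt = np.column_stack([X_capm, factors])   # market plus the extracted factors

capm = LinearRegression().fit(X_capm, stock)
apt = LinearRegression().fit(X_apt, stock)

print("CAPM R^2: %.3f" % capm.score(X_capm, stock))
print("APT  R^2: %.3f" % apt.score(X_apt, stock))
```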

  18. Effect of water depth on the performance of intelligent computing models in predicting wave transmission of floating pipe breakwater.

    Digital Repository Service at National Institute of Oceanography (India)

    Patil, S.G.; Mandal, S.; Hegde, A.V.

    ...and y_i is the predicted value. Step 3 (New population): a new population is created by repeating the following steps until the new population is complete. (i) Selection: in the present study, two parent chromosomes are selected from the population according to the fitness function (eqn. 14); the roulette-wheel selection principle (Holland, 1975) is used to select chromosomes for reproduction. (ii) Crossover: with a given crossover probability, crossover of the parents is performed to form new offspring...
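
    The fragment above describes roulette-wheel (fitness-proportionate) selection and crossover. The following minimal sketch of those two genetic-algorithm operators is illustrative only; the chromosome encoding, toy fitness function, and crossover probability are assumptions, not the authors' implementation.

```python
# Sketch of the two GA operators described above: roulette-wheel (fitness-
# proportionate) selection of parents and single-point crossover.
import random

def roulette_select(population, fitness):
    """Pick one chromosome with probability proportional to its fitness."""
    total = sum(fitness)
    pick = random.uniform(0, total)
    running = 0.0
    for chrom, fit in zip(population, fitness):
        running += fit
        if running >= pick:
            return chrom
    return population[-1]

def crossover(parent1, parent2, p_crossover=0.8):
    """With probability p_crossover, swap tails at a random cut point."""
    if random.random() < p_crossover and len(parent1) > 1:
        cut = random.randint(1, len(parent1) - 1)
        return parent1[:cut] + parent2[cut:], parent2[:cut] + parent1[cut:]
    return parent1[:], parent2[:]

population = [[random.randint(0, 1) for _ in range(10)] for _ in range(6)]
fitness = [sum(chrom) for chrom in population]        # toy fitness: number of ones
p1, p2 = roulette_select(population, fitness), roulette_select(population, fitness)
print(crossover(p1, p2))
```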

  19. Model predictive control classical, robust and stochastic

    CERN Document Server

    Kouvaritakis, Basil

    2016-01-01

    For the first time, a textbook brings together classical predictive control with a treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered, and the state of the art in computationally tractable methods based on uncertainty tubes is presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...

  20. Uncertainty aggregation and reduction in structure-material performance prediction

    Science.gov (United States)

    Hu, Zhen; Mahadevan, Sankaran; Ao, Dan

    2018-02-01

    An uncertainty aggregation and reduction framework is presented for structure-material performance prediction. Different types of uncertainty sources, structural analysis model, and material performance prediction model are connected through a Bayesian network for systematic uncertainty aggregation analysis. To reduce the uncertainty in the computational structure-material performance prediction model, Bayesian updating using experimental observation data is investigated based on the Bayesian network. It is observed that the Bayesian updating results will have large error if the model cannot accurately represent the actual physics, and that this error will be propagated to the predicted performance distribution. To address this issue, this paper proposes a novel uncertainty reduction method by integrating Bayesian calibration with model validation adaptively. The observation domain of the quantity of interest is first discretized into multiple segments. An adaptive algorithm is then developed to perform model validation and Bayesian updating over these observation segments sequentially. Only information from observation segments where the model prediction is highly reliable is used for Bayesian updating; this is found to increase the effectiveness and efficiency of uncertainty reduction. A composite rotorcraft hub component fatigue life prediction model, which combines a finite element structural analysis model and a material damage model, is used to demonstrate the proposed method.

  1. Predicting performance : relative importance of students' background and past performance

    NARCIS (Netherlands)

    Stegers-Jager, Karen M.; Themmen, Axel P. N.; Cohen-Schotanus, Janke; Steyerberg, Ewout W.

    Context: Despite evidence for the predictive value of both pre-admission characteristics and past performance at medical school, their relative contribution to predicting medical school performance has not been thoroughly investigated. Objectives: This study was designed to determine the relative

  2. Beef Species Symposium: an assessment of the 1996 Beef NRC: metabolizable protein supply and demand and effectiveness of model performance prediction of beef females within extensive grazing systems.

    Science.gov (United States)

    Waterman, R C; Caton, J S; Löest, C A; Petersen, M K; Roberts, A J

    2014-07-01

    Interannual variation of forage quantity and quality driven by precipitation events influence beef livestock production systems within the Southern and Northern Plains and Pacific West, which combined represent 60% (approximately 17.5 million) of the total beef cows in the United States. The beef cattle requirements published by the NRC are an important tool and excellent resource for both professionals and producers to use when implementing feeding practices and nutritional programs within the various production systems. The objectives of this paper include evaluation of the 1996 Beef NRC model in terms of effectiveness in predicting extensive range beef cow performance within arid and semiarid environments using available data sets, identifying model inefficiencies that could be refined to improve the precision of predicting protein supply and demand for range beef cows, and last, providing recommendations for future areas of research. An important addition to the current Beef NRC model would be to allow users to provide region-specific forage characteristics and the ability to describe supplement composition, amount, and delivery frequency. Beef NRC models would then need to be modified to account for the N recycling that occurs throughout a supplementation interval and the impact that this would have on microbial efficiency and microbial protein supply. The Beef NRC should also consider the role of ruminal and postruminal supply and demand of specific limiting AA. Additional considerations should include the partitioning effects of nitrogenous compounds under different physiological production stages (e.g., lactation, pregnancy, and periods of BW loss). The intent of information provided is to aid revision of the Beef NRC by providing supporting material for changes and identifying gaps in existing scientific literature where future research is needed to enhance the predictive precision and application of the Beef NRC models.

  3. Improved Prediction Models For PCC Pavement Performance-Related Specifications, Volume II: PAVESPEC 3.0 User's Guide

    Science.gov (United States)

    2000-09-01

    The current performance-related specifications (PRS) methodology has been under development by the Federal Highway Administration (FHWA) for several years and has now reached a level at which it can be implemented by State highway agencies. PRS for h...

  4. Bootstrap prediction and Bayesian prediction under misspecified models

    OpenAIRE

    Fushiki, Tadayoshi

    2005-01-01

    We consider a statistical prediction problem under misspecified models. In a sense, Bayesian prediction is an optimal prediction method when an assumed model is true. Bootstrap prediction is obtained by applying Breiman's `bagging' method to a plug-in prediction. Bootstrap prediction can be considered to be an approximation to the Bayesian prediction under the assumption that the model is true. However, in applications, there are frequently deviations from the assumed model. In this paper, bo...

  5. Gas-particle partitioning of semi-volatile organics on organic aerosols using a predictive activity coefficient model: analysis of the effects of parameter choices on model performance

    Science.gov (United States)

    Chandramouli, Bharadwaj; Jang, Myoseon; Kamens, Richard M.

    The partitioning of a diverse set of semivolatile organic compounds (SOCs) on a variety of organic aerosols was studied using smog chamber experimental data. Existing data on the partitioning of SOCs on aerosols from wood combustion, diesel combustion, and the α-pinene–O3 reaction was augmented by carrying out smog chamber partitioning experiments on aerosols from meat cooking, and catalyzed and uncatalyzed gasoline engine exhaust. Model compositions for aerosols from meat cooking and gasoline combustion emissions were used to calculate activity coefficients for the SOCs in the organic aerosols, and the Pankow absorptive gas/particle partitioning model was used to calculate the partitioning coefficient Kp and quantitate the predictive improvements of using the activity coefficient. The slope of the log Kp vs. log pL0 correlation for partitioning on aerosols from meat cooking improved from -0.81 to -0.94 after incorporation of activity coefficients γi,om. A stepwise regression analysis of the partitioning model revealed that for the data set used in this study, partitioning predictions on α-pinene–O3 secondary aerosol and wood combustion aerosol showed statistically significant improvement after incorporation of γi,om, which can be attributed to their overall polarity. The partitioning model was sensitive to changes in aerosol composition when updated compositions for α-pinene–O3 aerosol and wood combustion aerosol were used. The octanol-air partitioning coefficient's (KOA) effectiveness as a partitioning correlator over a variety of aerosol types was evaluated. The slope of the log Kp–log KOA correlation was not constant over the aerosol types and SOCs used in the study, and the use of KOA for partitioning correlations can potentially lead to significant deviations, especially for polar aerosols.
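
    For reference, a minimal sketch of the absorptive partitioning relation commonly attributed to Pankow, with an explicit activity coefficient, is shown below; it illustrates how γom shifts Kp for polar versus nonpolar aerosols. The constant, units, and all input values are illustrative assumptions, not the study's data.

```python
# Hedged sketch of the commonly used Pankow absorptive partitioning relation:
# Kp = f_om * 760 * R * T / (MW_om * 1e6 * gamma_om * pL0),
# with R = 8.206e-5 m3 atm mol-1 K-1, pL0 in torr, MW_om in g/mol, Kp in m3/ug.
# All numbers below are illustrative.
import math

def partition_coefficient(T_K, f_om, mw_om, gamma_om, pL0_torr):
    R = 8.206e-5  # m3 atm mol-1 K-1
    return (f_om * 760.0 * R * T_K) / (mw_om * 1.0e6 * gamma_om * pL0_torr)

# Same compound on a polar vs. a nonpolar aerosol: only gamma_om differs.
for gamma in (0.3, 1.0, 3.0):
    Kp = partition_coefficient(T_K=298.0, f_om=0.8, mw_om=200.0,
                               gamma_om=gamma, pL0_torr=1.0e-6)
    print(f"gamma_om = {gamma:4.1f}  ->  log10 Kp = {math.log10(Kp):.2f}")
```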

  6. Massive Predictive Modeling using Oracle R Enterprise

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...

  7. Comparative Study of Bankruptcy Prediction Models

    Directory of Open Access Journals (Sweden)

    Isye Arieshanti

    2013-09-01

    Full Text Available Early indication of bankruptcy is important for a company. If a company is aware of the potential for its bankruptcy, it can take preventive action. In order to detect the potential for bankruptcy, a company can utilize a bankruptcy prediction model. The prediction model can be built using machine learning methods. However, the choice of machine learning method should be made carefully, because the suitability of a model depends on the specific problem. Therefore, in this paper we perform a comparative study of several machine learning methods for bankruptcy prediction. According to the comparative study of the performance of several models based on machine learning methods (k-NN, fuzzy k-NN, SVM, Bagging Nearest Neighbour SVM, Multilayer Perceptron (MLP), and a hybrid of MLP + Multiple Linear Regression), the fuzzy k-NN method achieves the best performance, with an accuracy of 77.5%.
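
    A minimal sketch of the comparison workflow is shown below, using cross-validated accuracy on synthetic data. Fuzzy k-NN, Bagging Nearest Neighbour SVM, and the MLP + regression hybrid are not available in scikit-learn, so only the standard methods appear; the printed accuracies are illustrative, not the paper's results.

```python
# Sketch of the model-comparison workflow on synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
models = {
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
    "MLP": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
}
for name, clf in models.items():
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name:4s} mean CV accuracy: {acc:.3f}")
```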

  8. Proactive Supply Chain Performance Management with Predictive Analytics

    Directory of Open Access Journals (Sweden)

    Nenad Stefanovic

    2014-01-01

    Full Text Available Today’s business climate requires supply chains to be proactive rather than reactive, which demands a new approach that incorporates data mining predictive analytics. This paper introduces a predictive supply chain performance management model which combines process modelling, performance measurement, data mining models, and web portal technologies into a unique model. It presents the supply chain modelling approach based on the specialized metamodel which allows modelling of any supply chain configuration and at different level of details. The paper also presents the supply chain semantic business intelligence (BI model which encapsulates data sources and business rules and includes the data warehouse model with specific supply chain dimensions, measures, and KPIs (key performance indicators. Next, the paper describes two generic approaches for designing the KPI predictive data mining models based on the BI semantic model. KPI predictive models were trained and tested with a real-world data set. Finally, a specialized analytical web portal which offers collaborative performance monitoring and decision making is presented. The results show that these models give very accurate KPI projections and provide valuable insights into newly emerging trends, opportunities, and problems. This should lead to more intelligent, predictive, and responsive supply chains capable of adapting to future business environment.

  9. Proactive Supply Chain Performance Management with Predictive Analytics

    Science.gov (United States)

    Stefanovic, Nenad

    2014-01-01

    Today's business climate requires supply chains to be proactive rather than reactive, which demands a new approach that incorporates data mining predictive analytics. This paper introduces a predictive supply chain performance management model which combines process modelling, performance measurement, data mining models, and web portal technologies into a unique model. It presents the supply chain modelling approach based on the specialized metamodel which allows modelling of any supply chain configuration and at different level of details. The paper also presents the supply chain semantic business intelligence (BI) model which encapsulates data sources and business rules and includes the data warehouse model with specific supply chain dimensions, measures, and KPIs (key performance indicators). Next, the paper describes two generic approaches for designing the KPI predictive data mining models based on the BI semantic model. KPI predictive models were trained and tested with a real-world data set. Finally, a specialized analytical web portal which offers collaborative performance monitoring and decision making is presented. The results show that these models give very accurate KPI projections and provide valuable insights into newly emerging trends, opportunities, and problems. This should lead to more intelligent, predictive, and responsive supply chains capable of adapting to future business environment. PMID:25386605

  10. Proactive supply chain performance management with predictive analytics.

    Science.gov (United States)

    Stefanovic, Nenad

    2014-01-01

    Today's business climate requires supply chains to be proactive rather than reactive, which demands a new approach that incorporates data mining predictive analytics. This paper introduces a predictive supply chain performance management model which combines process modelling, performance measurement, data mining models, and web portal technologies into a unique model. It presents the supply chain modelling approach based on the specialized metamodel which allows modelling of any supply chain configuration and at different level of details. The paper also presents the supply chain semantic business intelligence (BI) model which encapsulates data sources and business rules and includes the data warehouse model with specific supply chain dimensions, measures, and KPIs (key performance indicators). Next, the paper describes two generic approaches for designing the KPI predictive data mining models based on the BI semantic model. KPI predictive models were trained and tested with a real-world data set. Finally, a specialized analytical web portal which offers collaborative performance monitoring and decision making is presented. The results show that these models give very accurate KPI projections and provide valuable insights into newly emerging trends, opportunities, and problems. This should lead to more intelligent, predictive, and responsive supply chains capable of adapting to future business environment.

  11. Prediction of mirror performance from laboratory measurements

    International Nuclear Information System (INIS)

    Church, E.L.; Takacs, P.Z.

    1989-01-01

    This paper describes and illustrates a simple method of predicting the imaging performance of synchrotron mirrors from laboratory measurements of their profiles. It discusses the important role of the transverse coherence length of the incident radiation, the fractal-like form of the mirror roughness, mirror characterization, and the use of closed-form expressions for the predicted image intensities

  12. A model-based analysis of the predictive performance of different renal function markers for cefepime clearance in the ICU

    NARCIS (Netherlands)

    Jonckheere, Stijn; De Neve, Nikolaas; De Beenhouwer, Hans; Berth, Mario; Vermeulen, An; Van Bocxlaer, Jan; Colin, Pieter

    Several population pharmacokinetic models for cefepime in critically ill patients have been described, which all indicate that variability in renal clearance is the main determinant of the observed variability in exposure. The main objective of this study was to determine which renal marker best

  13. Development of a model capable of predicting the performance of piston ring-cylinder liner-like tribological interfaces

    DEFF Research Database (Denmark)

    Felter, C.L.; Vølund, A.; Imran, Tajammal

    2010-01-01

    on a measured temperature only; thus, it is not necessary to include the energy equation. Conservation of oil is ensured throughout the domain by considering the amount of oil outside the lubricated interface. A model for hard contact through asperities is also included. Second, a laboratory-scale test rig...

  14. Evaluating the Predictive Value of Growth Prediction Models

    Science.gov (United States)

    Murphy, Daniel L.; Gaertner, Matthew N.

    2014-01-01

    This study evaluates four growth prediction models--projection, student growth percentile, trajectory, and transition table--commonly used to forecast (and give schools credit for) middle school students' future proficiency. Analyses focused on vertically scaled summative mathematics assessments, and two performance standards conditions (high…

  15. Predicting Performance Ratings Using Motivational Antecedents

    National Research Council Canada - National Science Library

    Zazania, Michelle

    1998-01-01

    This research examined the role of motivation in predicting peer and trainer ratings of student performance and contrasted the relative importance of various antecedents for peer and trainer ratings...

  16. MODEL PREDICTIVE CONTROL FUNDAMENTALS

    African Journals Online (AJOL)

    2012-07-02

    Linear MPC: (1) uses a linear model, ẋ = Ax + Bu; (2) a quadratic cost function, F = xᵀQx + uᵀRu; (3) linear constraints, Hx + Gu < 0; (4) is solved as a quadratic program. Nonlinear MPC: (1) uses a nonlinear model, ẋ = f(x, u); (2) the cost function can be non-quadratic, F(x, u); (3) nonlinear constraints, h(x, u) < 0; (4) is solved as a nonlinear program.
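
    A minimal sketch of the linear case listed above, written in discrete time and solved as a quadratic program, is given below. It assumes the cvxpy package is available; the system matrices, horizon, and input bound are illustrative.

```python
# Minimal sketch of linear MPC in discrete time: quadratic cost, linear
# dynamics x+ = Ax + Bu, and input constraints, solved as a QP.
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q, R, N = np.diag([10.0, 1.0]), np.array([[0.1]]), 20
x0, u_max = np.array([1.0, 0.0]), 2.0

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
cost, constraints = 0, [x[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(x[:, k + 1], Q) + cp.quad_form(u[:, k], R)
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.abs(u[:, k]) <= u_max]
cp.Problem(cp.Minimize(cost), constraints).solve()
# In receding-horizon fashion, only the first input is applied, then re-solve.
print("first optimal input:", u.value[:, 0])
```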

  17. An empirical/theoretical model with dimensionless numbers to predict the performance of electrodialysis systems on the basis of operating conditions.

    Science.gov (United States)

    Karimi, Leila; Ghassemi, Abbas

    2016-07-01

    Among the different technologies developed for desalination, the electrodialysis/electrodialysis reversal (ED/EDR) process is one of the most promising for treating brackish water with low salinity when there is high risk of scaling. Multiple researchers have investigated ED/EDR to optimize the process, determine the effects of operating parameters, and develop theoretical/empirical models. Previously published empirical/theoretical models have evaluated the effect of the hydraulic conditions of the ED/EDR on the limiting current density using dimensionless numbers. The reason for previous studies' emphasis on limiting current density is twofold: 1) to maximize ion removal, most ED/EDR systems are operated close to limiting current conditions if there is not a scaling potential in the concentrate chamber due to a high concentration of less-soluble salts; and 2) for modeling the ED/EDR system with dimensionless numbers, it is more accurate and convenient to use limiting current density, where the boundary layer's characteristics are known at constant electrical conditions. To improve knowledge of ED/EDR systems, ED/EDR models should be also developed for the Ohmic region, where operation reduces energy consumption, facilitates targeted ion removal, and prolongs membrane life compared to limiting current conditions. In this paper, theoretical/empirical models were developed for ED/EDR performance in a wide range of operating conditions. The presented ion removal and selectivity models were developed for the removal of monovalent ions and divalent ions utilizing the dominant dimensionless numbers obtained from laboratory scale electrodialysis experiments. At any system scale, these models can predict ED/EDR performance in terms of monovalent and divalent ion removal. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Complexity factors and prediction of performance

    International Nuclear Information System (INIS)

    Braarud, Per Oeyvind

    1998-03-01

    Understanding what makes a control room situation difficult to handle is important when studying operator performance, both with respect to prediction and to improvement of human performance. A factor analytic approach identified eight factors from operators' answers to a 39-item questionnaire about the complexity of the operator's task in the control room. A Complexity Profiling Questionnaire was developed, based on the factor analytic results from the operators' conception of complexity. The validity of the identified complexity factors was studied by prediction of crew performance and prediction of plant performance from ratings of the complexity of scenarios. The scenarios were rated by both process experts and the operators participating in the scenarios, using the Complexity Profiling Questionnaire. The process experts' complexity ratings predicted both crew performance and plant performance, while the operators' ratings predicted plant performance only. The results reported are from initial studies of complexity, and imply a promising potential for further studies of the concept. The approach used in the study as well as the reported results are discussed. A chapter about the structure of the conception of complexity, and a chapter about further research conclude the report. (author)

  19. Melanoma Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing melanoma over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  20. Predictive models of moth development

    Science.gov (United States)

    Degree-day models link ambient temperature to insect life-stages, making such models valuable tools in integrated pest management. These models increase management efficacy by predicting pest phenology. In Wisconsin, the top insect pest of cranberry production is the cranberry fruitworm, Acrobasis v...
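
    As a hedged illustration of the degree-day idea, the sketch below accumulates daily degree-days with the simple averaging method and reports when a phenology threshold is crossed. The base temperature, threshold, and weather series are placeholders, not published cranberry fruitworm parameters.

```python
# Sketch of simple degree-day accumulation (daily average method) that flags
# when a phenology threshold is crossed. The 50 degF base and 400 DD threshold
# are illustrative, not published fruitworm values.
def daily_degree_days(t_min, t_max, base=50.0):
    """Degree-days for one day: average temperature above the base, floored at 0."""
    return max((t_min + t_max) / 2.0 - base, 0.0)

def first_event_day(daily_min_max, base=50.0, threshold=400.0):
    """Return the 1-based day on which cumulative degree-days reach the threshold."""
    total = 0.0
    for day, (t_min, t_max) in enumerate(daily_min_max, start=1):
        total += daily_degree_days(t_min, t_max, base)
        if total >= threshold:
            return day, total
    return None, total

weather = [(48 + 0.2 * d, 68 + 0.2 * d) for d in range(90)]  # synthetic spring warm-up
print(first_event_day(weather))
```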

  1. Predictive Bias and Sensitivity in NRC Fuel Performance Codes

    Energy Technology Data Exchange (ETDEWEB)

    Geelhood, Kenneth J.; Luscher, Walter G.; Senor, David J.; Cunningham, Mitchel E.; Lanning, Donald D.; Adkins, Harold E.

    2009-10-01

    The latest versions of the fuel performance codes, FRAPCON-3 and FRAPTRAN were examined to determine if the codes are intrinsically conservative. Each individual model and type of code prediction was examined and compared to the data that was used to develop the model. In addition, a brief literature search was performed to determine if more recent data have become available since the original model development for model comparison.

  2. Predictive Models and Computational Embryology

    Science.gov (United States)

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  3. Predicting Students' Performance in the Senior Secondary ...

    African Journals Online (AJOL)

    cce

    African Journal of Educational Studies in Mathematics and Sciences Vol. 4, 2006, 41. Predicting Students' Performance in the Senior Secondary Certificate Examinations from Performance in the Junior Secondary Certificate Examinations in Ondo State, Nigeria. Adeyemi, T. O. Department of Educational Foundations and ...

  4. ANN Model for Predicting the Impact of Submerged Aquatic Weeds Existence on the Hydraulic Performance of Branched Open Channel System Accompanied by Water Structures

    International Nuclear Information System (INIS)

    Abdeen, Mostafa A. M.; Abdin, Alla E.

    2007-01-01

    The existence of hydraulic structures in a branched open channel system urges the need for considering the gradually varied flow criterion in evaluating the different hydraulic characteristics in this type of open channel system. Computation of hydraulic characteristics such as flow rates and water surface profiles in a branched open channel system with hydraulic structures requires tremendous numerical effort, especially when the flow cannot be assumed uniform. In addition, the existence of submerged aquatic weeds in this branched open channel system adds to the complexity of evaluating the different hydraulic characteristics of the system. However, this existence of aquatic weeds cannot be neglected, since it is very common in Egyptian open channel systems. Artificial Neural Networks (ANN) have been widely utilized in the past decade in civil engineering applications for the simulation and prediction of different physical phenomena and have proven their capabilities in these different fields. The present study aims at introducing the use of the ANN technique to model and predict the impact of submerged aquatic weeds on the hydraulic performance of a branched open channel system. Specifically, the current paper investigates a branched open channel system that consists of a main channel that supplies water to two branch channels, which are infested by submerged aquatic weeds and have water structures such as clear overfall weirs and sluice gates. The results of this study showed that the ANN technique was capable, with small computational effort and high accuracy, of predicting the impact of different infestation percentages of submerged aquatic weeds on the hydraulic performance of a branched open channel system with two different hydraulic structures

  5. Predictions models with neural nets

    Directory of Open Access Journals (Sweden)

    Vladimír Konečný

    2008-01-01

    Full Text Available The contribution is oriented to the solution of basic problems in forecasting trends of economic indicators using neural networks. The problems include the choice of a suitable model and, consequently, the configuration of the neural network, the choice of the neurons' computational functions, and the way prediction learning is carried out. The contribution contains two basic models that use multilayer neural network structures and a way of determining their configuration. A simple rule is postulated for the training period of the neural network in order to obtain the most credible prediction. Experiments are carried out with real data on the evolution of the Kč/Euro exchange rate. The main reason for choosing this time series is its availability over a sufficiently long period. In carrying out the experiments, both basic prediction models are verified with the most frequently used neuron functions. The achieved prediction results are presented in both numerical and graphical forms.

  6. Clinical Prediction Models for Cardiovascular Disease: Tufts Predictive Analytics and Comparative Effectiveness Clinical Prediction Model Database.

    Science.gov (United States)

    Wessler, Benjamin S; Lai Yh, Lana; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S; Kent, David M

    2015-07-01

    Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease, there are numerous CPMs available although the extent of this literature is not well described. We conducted a systematic review for articles containing CPMs for cardiovascular disease published between January 1990 and May 2012. Cardiovascular disease includes coronary heart disease, heart failure, arrhythmias, stroke, venous thromboembolism, and peripheral vascular disease. We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year is increasing steadily over time. Seven hundred seventeen (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions, including 215 CPMs for patients with coronary artery disease, 168 CPMs for population samples, and 79 models for patients with heart failure. There are 77 distinct index/outcome pairings. Of the de novo models in this database, 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. There is an abundance of CPMs available for a wide assortment of cardiovascular disease conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models and the actual and potential clinical impact of this body of literature is poorly understood. © 2015 American Heart Association, Inc.

  7. Event rate and reaction time performance in ADHD: Testing predictions from the state regulation deficit hypothesis using an ex-Gaussian model.

    Science.gov (United States)

    Metin, Baris; Wiersema, Jan R; Verguts, Tom; Gasthuys, Roos; van Der Meere, Jacob J; Roeyers, Herbert; Sonuga-Barke, Edmund

    2014-12-06

    According to the state regulation deficit (SRD) account, ADHD is associated with a problem using effort to maintain an optimal activation state under demanding task settings such as very fast or very slow event rates. This leads to a prediction of disrupted performance at event rate extremes reflected in higher Gaussian response variability that is a putative marker of activation during motor preparation. In the current study, we tested this hypothesis using ex-Gaussian modeling, which distinguishes Gaussian from non-Gaussian variability. Twenty-five children with ADHD and 29 typically developing controls performed a simple Go/No-Go task under four different event-rate conditions. There was an accentuated quadratic relationship between event rate and Gaussian variability in the ADHD group compared to the controls. The children with ADHD had greater Gaussian variability at very fast and very slow event rates but not at moderate event rates. The results provide evidence for the SRD account of ADHD. However, given that this effect did not explain all group differences (some of which were independent of event rate) other cognitive and/or motivational processes are also likely implicated in ADHD performance deficits.
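
    A minimal sketch of the ex-Gaussian decomposition used above is given below: reaction times are modelled as a Gaussian component (mu, sigma) plus an exponential component (tau), here fitted with scipy's exponnorm distribution (tau = K × scale, sigma = scale). The simulated reaction times are illustrative.

```python
# Sketch of decomposing reaction times into Gaussian (mu, sigma) and
# exponential (tau) components with an ex-Gaussian fit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
rt = rng.normal(450, 40, size=1000) + rng.exponential(120, size=1000)  # simulated RTs, ms

K, loc, scale = stats.exponnorm.fit(rt)
mu, sigma, tau = loc, scale, K * scale
print(f"mu = {mu:.0f} ms, sigma = {sigma:.0f} ms (Gaussian variability), tau = {tau:.0f} ms")
```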

  8. Model Predictive Control based on Finite Impulse Response Models

    DEFF Research Database (Denmark)

    Prasath, Guru; Jørgensen, John Bagterp

    2008-01-01

    We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations ...
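
    As a hedged illustration of the FIR prediction structure such a controller builds on (not the authors' controller itself), the sketch below forms multi-step output predictions from an FIR model plus a constant output-disturbance estimate taken from the latest measurement. Coefficients and signals are assumed for illustration.

```python
# Minimal sketch of FIR-based prediction with a constant output-disturbance
# estimate d = y_measured - y_model carried over the prediction horizon.
import numpy as np

h = np.array([0.4, 0.3, 0.15, 0.1, 0.05])           # impulse-response coefficients h_1..h_5
u_past = [1.0, 1.0, 0.8, 0.5, 0.2, 0.0]             # past inputs, most recent last

def fir_output(h, u_hist):
    """y(k) = sum_i h_i * u(k - i); u_hist[-1] is u(k-1), u_hist[-2] is u(k-2), ..."""
    recent = np.array(u_hist[::-1][: len(h)])
    return float(np.dot(h, recent))

y_measured = 0.95
d = y_measured - fir_output(h, u_past)               # constant disturbance estimate

u_future = 0.6                                       # candidate constant future input
u_hist = list(u_past)
for k in range(5):                                   # 5-step-ahead open-loop prediction
    u_hist.append(u_future)
    print(f"y_hat(k+{k+1}) = {fir_output(h, u_hist) + d:.3f}")
```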

  9. Well performance model

    International Nuclear Information System (INIS)

    Thomas, L.K.; Evans, C.E.; Pierson, R.G.; Scott, S.L.

    1992-01-01

    This paper describes the development and application of a comprehensive oil or gas well performance model. The model contains six distinct sections: stimulation design, tubing and/or casing flow, reservoir and near-wellbore calculations, production forecasting, wellbore heat transmission, and economics. These calculations may be performed separately or in an integrated fashion with data and results shared among the different sections. The model analysis allows evaluation of all aspects of well completion design, including the effects on future production and overall well economics

  10. What predicts performance during clinical psychology training?

    Science.gov (United States)

    Scior, Katrina; Bradley, Caroline E; Potts, Henry W W; Woolf, Katherine; de C Williams, Amanda C

    2014-06-01

    While the question of who is likely to be selected for clinical psychology training has been studied, evidence on performance during training is scant. This study explored data from seven consecutive intakes of the UK's largest clinical psychology training course, aiming to identify what factors predict better or poorer outcomes. Longitudinal cross-sectional study using prospective and retrospective data. Characteristics at application were analysed in relation to a range of in-course assessments for 274 trainee clinical psychologists who had completed or were in the final stage of their training. Trainees were diverse in age, pre-training experience, and academic performance at A-level (advanced level certificate required for university admission), but not in gender or ethnicity. Failure rates across the three performance domains (academic, clinical, research) were very low, suggesting that selection was successful in screening out less suitable candidates. Key predictors of good performance on the course were better A-levels and better degree class. Non-white students performed less well on two outcomes. Type and extent of pre-training clinical experience had varied effects on outcomes. Research supervisor ratings emerged as global indicators and predicted nearly all outcomes, but may have been biased as they were retrospective. Referee ratings predicted only one of the seven outcomes examined, and interview ratings predicted none of the outcomes. Predicting who will do well or poorly in clinical psychology training is complex. Interview and referee ratings may well be successful in screening out unsuitable candidates, but appear to be a poor guide to performance on the course. © 2013 The Authors. British Journal of Clinical Psychology published by John Wiley & Sons Ltd on behalf of the British Psychological Society.

  11. Posterior predictive checking of multiple imputation models.

    Science.gov (United States)

    Nguyen, Cattram D; Lee, Katherine J; Carlin, John B

    2015-07-01

    Multiple imputation is gaining popularity as a strategy for handling missing data, but there is a scarcity of tools for checking imputation models, a critical step in model fitting. Posterior predictive checking (PPC) has been recommended as an imputation diagnostic. PPC involves simulating "replicated" data from the posterior predictive distribution of the model under scrutiny. Model fit is assessed by examining whether the analysis from the observed data appears typical of results obtained from the replicates produced by the model. A proposed diagnostic measure is the posterior predictive "p-value", an extreme value of which (i.e., a value close to 0 or 1) suggests a misfit between the model and the data. The aim of this study was to evaluate the performance of the posterior predictive p-value as an imputation diagnostic. Using simulation methods, we deliberately misspecified imputation models to determine whether posterior predictive p-values were effective in identifying these problems. When estimating the regression parameter of interest, we found that more extreme p-values were associated with poorer imputation model performance, although the results highlighted that traditional thresholds for classical p-values do not apply in this context. A shortcoming of the PPC method was its reduced ability to detect misspecified models with increasing amounts of missing data. Despite the limitations of posterior predictive p-values, they appear to have a valuable place in the imputer's toolkit. In addition to automated checking using p-values, we recommend imputers perform graphical checks and examine other summaries of the test quantity distribution. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
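
    A generic sketch of a posterior predictive check is shown below: replicated data sets are drawn from the fitted model and a test statistic is compared with its observed value, so that a p-value near 0 or 1 flags misfit. This toy normal-model example only illustrates the mechanics; it is not the imputation-specific procedure evaluated in the paper.

```python
# Generic sketch of a posterior predictive check with a skewness test statistic.
# A p-value near 0 or 1 suggests model misfit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
observed = rng.exponential(scale=2.0, size=200)       # truly skewed data
T_obs = stats.skew(observed)

n, n_rep = len(observed), 2000
count = 0
for _ in range(n_rep):
    # Approximate draw from a (misspecified) normal model: plug-in MLE here;
    # a full analysis would also sample mu and sigma from their posterior.
    y_rep = rng.normal(observed.mean(), observed.std(ddof=1), size=n)
    count += stats.skew(y_rep) >= T_obs

ppp = count / n_rep
print(f"posterior predictive p-value = {ppp:.3f}  (near 0 or 1 suggests misfit)")
```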

  12. Evaluating the Performance of a New Model for Predicting the Growth of Clostridium perfringens in Cooked, Uncured Meat and Poultry Products under Isothermal, Heating, and Dynamically Cooling Conditions.

    Science.gov (United States)

    Huang, Lihan

    2016-07-01

    Clostridium perfringens type A is a significant public health threat and its spores may germinate, outgrow, and multiply during cooling of cooked meats. This study applies a new C. perfringens growth model in the USDA Integrated Pathogen Modeling Program-Dynamic Prediction (IPMP Dynamic Prediction) to predict the growth from spores of C. perfringens in cooked uncured meat and poultry products using isothermal, dynamic heating, and cooling data reported in the literature. The residual errors of predictions (observation-prediction) are analyzed, and the root-mean-square error (RMSE) is calculated. For isothermal and heating profiles, each data point in growth curves is compared. The mean residual errors (MRE) of predictions range from -0.40 to 0.02 Log colony forming units (CFU)/g, with a RMSE of approximately 0.6 Log CFU/g. For cooling, the end point predictions are conservative in nature, with an MRE of -1.16 Log CFU/g for single-rate cooling and -0.66 Log CFU/g for dual-rate cooling. The RMSE is between 0.6 and 0.7 Log CFU/g. Compared with other models reported in the literature, this model makes more accurate and fail-safe predictions. For cooling, the percentage of accurate and fail-safe predictions is between 97.6% and 100%. Under criterion 1, the percentage of accurate predictions is 47.5% for single-rate cooling and 66.7% for dual-rate cooling, while the fail-dangerous predictions are between 0% and 2.4%. This study demonstrates that IPMP Dynamic Prediction can be used by food processors and regulatory agencies as a tool to predict the growth of C. perfringens in uncured cooked meats and evaluate the safety of cooked or heat-treated uncured meat and poultry products exposed to cooling deviations or to develop customized cooling schedules. This study also demonstrates the need for more accurate data collection during cooling. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.
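
    The sketch below shows how the error summaries quoted above are computed: residual = observed − predicted (log CFU/g), the mean residual error (MRE), and the RMSE, with a negative MRE indicating over-prediction, the fail-safe direction for a pathogen. The numbers and the 0.5-log acceptability criterion are illustrative, not the study's data.

```python
# Sketch of the residual-error summaries: MRE and RMSE of (observed - predicted)
# growth in log CFU/g, plus an illustrative accurate-or-fail-safe fraction.
import numpy as np

observed  = np.array([2.1, 3.0, 4.2, 5.5, 6.1])   # log CFU/g at end of cooling (synthetic)
predicted = np.array([2.8, 3.9, 4.9, 6.3, 7.0])

residuals = observed - predicted                   # negative => model over-predicts (fail-safe)
mre  = residuals.mean()
rmse = np.sqrt((residuals ** 2).mean())
fail_safe = np.mean(residuals <= 0.5)              # illustrative 0.5-log acceptability criterion

print(f"MRE = {mre:.2f} log CFU/g, RMSE = {rmse:.2f} log CFU/g, "
      f"accurate-or-fail-safe fraction = {fail_safe:.0%}")
```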

  13. Why Do Spatial Abilities Predict Mathematical Performance?

    Science.gov (United States)

    Tosto, Maria Grazia; Hanscombe, Ken B.; Haworth, Claire M. A.; Davis, Oliver S. P.; Petrill, Stephen A.; Dale, Philip S.; Malykh, Sergey; Plomin, Robert; Kovas, Yulia

    2014-01-01

    Spatial ability predicts performance in mathematics and eventual expertise in science, technology and engineering. Spatial skills have also been shown to rely on neuronal networks partially shared with mathematics. Understanding the nature of this association can inform educational practices and intervention for mathematical underperformance.…

  14. Predictability of steer performance in the feedlot

    African Journals Online (AJOL)

    ance and overall performance is closely correlated. A further advantage would be if the growth and feed conversion-time curves could be predicted with confidence because feedlot management can then optimize duration of feeding periods to cash in on favourable market conditions. This in turn will require particular and ...

  15. PERFORM 60: Prediction of the effects of radiation for reactor pressure vessel and in-core materials using multi-scale modelling - 60 years foreseen plant lifetime

    International Nuclear Information System (INIS)

    Al Mazouzi, A.; Alamo, A.; Lidbury, D.; Moinereau, D.; Van Dyck, S.

    2011-01-01

    Highlights: → Multi-scale and multi-physics modelling are adopted by PERFORM 60 to predict irradiation damage in nuclear structural materials. → PERFORM 60 consolidates the community and improves the interaction between universities/industries and safety authorities. → Experimental validation at the relevant scale is key to developing the multi-scale modelling methodology. - Abstract: In nuclear power plants, materials undergo degradation due to severe irradiation conditions that may limit their operational lifetime. Utilities that operate these reactors need to quantify the ageing and potential degradation of certain essential structures of the power plant to ensure their safe and reliable operation. So far, the monitoring and mitigation of these degradation phenomena rely mainly on long-term irradiation programs in test reactors as well as on mechanical or corrosion testing in specialized hot cells. Continuous progress in the physical understanding of the phenomena involved in irradiation damage and progress in computer sciences have now made possible the development of multi-scale numerical tools able to simulate the materials behaviour in a nuclear environment. Indeed, within the PERFECT project of the EURATOM framework program (FP6), a first step has been successfully reached through the development of a simulation platform that contains several advanced numerical tools aiming at the prediction of irradiation damage in both the reactor pressure vessel (RPV) and its internals using available, state-of-the-art knowledge. These tools allow simulation of irradiation effects on the nanostructure and the constitutive behaviour of the RPV low alloy steels, as well as their fracture mechanics properties. For the more highly irradiated reactor internals, which are commonly produced using austenitic stainless steels, the first partial models were established, describing radiation effects on the nanostructure and providing a first description of the

  16. What predicts performance during clinical psychology training?

    Science.gov (United States)

    Scior, Katrina; Bradley, Caroline E; Potts, Henry W W; Woolf, Katherine; de C Williams, Amanda C

    2014-01-01

    Objectives: While the question of who is likely to be selected for clinical psychology training has been studied, evidence on performance during training is scant. This study explored data from seven consecutive intakes of the UK's largest clinical psychology training course, aiming to identify what factors predict better or poorer outcomes. Design: Longitudinal cross-sectional study using prospective and retrospective data. Method: Characteristics at application were analysed in relation to a range of in-course assessments for 274 trainee clinical psychologists who had completed or were in the final stage of their training. Results: Trainees were diverse in age, pre-training experience, and academic performance at A-level (advanced level certificate required for university admission), but not in gender or ethnicity. Failure rates across the three performance domains (academic, clinical, research) were very low, suggesting that selection was successful in screening out less suitable candidates. Key predictors of good performance on the course were better A-levels and better degree class. Non-white students performed less well on two outcomes. Type and extent of pre-training clinical experience had varied effects on outcomes. Research supervisor ratings emerged as global indicators and predicted nearly all outcomes, but may have been biased as they were retrospective. Referee ratings predicted only one of the seven outcomes examined, and interview ratings predicted none of the outcomes. Conclusions: Predicting who will do well or poorly in clinical psychology training is complex. Interview and referee ratings may well be successful in screening out unsuitable candidates, but appear to be a poor guide to performance on the course. Practitioner points: While referee and selection interview ratings did not predict performance during training, they may be useful in screening out unsuitable candidates at the application stage. High school final academic performance

  17. Predicting Protein Secondary Structure with Markov Models

    DEFF Research Database (Denmark)

    Fischer, Paul; Larsen, Simon; Thomsen, Claus

    2004-01-01

    we are considering here, is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained...... in the Markov model for this task. Classifications that are purely based on statistical models might not always be biologically meaningful. We present combinatorial methods to incorporate biological background knowledge to enhance the prediction performance....
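
    As a toy illustration of the approach, the sketch below trains one first-order Markov chain over residues per structural class and labels an unknown segment by the higher log-likelihood. The training sequences are synthetic, and a real predictor would add windowing, pseudocount tuning, and the directional information discussed above.

```python
# Toy sketch: one first-order Markov chain per secondary-structure class,
# classification of a segment by log-likelihood. Sequences are synthetic.
import math

def train_markov(sequences, alphabet="ACDEFGHIKLMNPQRSTVWY", alpha=1.0):
    counts = {a: {b: alpha for b in alphabet} for a in alphabet}  # Laplace smoothing
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: counts[a][b] / sum(counts[a].values()) for b in alphabet}
            for a in alphabet}

def log_likelihood(seq, model):
    return sum(math.log(model[a][b]) for a, b in zip(seq, seq[1:]))

helix_train = ["AELLKKLLEELKG", "LEKLLKELAEKLL"]
coil_train  = ["GSGSPGNGTPSGG", "PGSGGSNPGTSGS"]
helix_model, coil_model = train_markov(helix_train), train_markov(coil_train)

segment = "LKELLEKLKELLK"
ll_helix = log_likelihood(segment, helix_model)
ll_coil = log_likelihood(segment, coil_model)
print("predicted class:", "helix" if ll_helix > ll_coil else "coil")
```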

  18. Simplified Predictive Models for CO2 Sequestration Performance Assessment: Research Topical Report on Task #4 - Reduced-Order Method (ROM) Based Models

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Srikanta; Jin, Larry; He, Jincong; Durlofsky, Louis

    2015-06-30

    Reduced-order models provide a means for greatly accelerating the detailed simulations that will be required to manage CO2 storage operations. In this work, we investigate the use of one such method, POD-TPWL, which has previously been shown to be effective in oil reservoir simulation problems. This method combines trajectory piecewise linearization (TPWL), in which the solution to a new (test) problem is represented through a linearization around the solution to a previously-simulated (training) problem, with proper orthogonal decomposition (POD), which enables solution states to be expressed in terms of a relatively small number of parameters. We describe the application of POD-TPWL for CO2-water systems simulated using a compositional procedure. Stanford’s Automatic Differentiation-based General Purpose Research Simulator (AD-GPRS) performs the full-order training simulations and provides the output (derivative matrices and system states) required by the POD-TPWL method. A new POD-TPWL capability introduced in this work is the use of horizontal injection wells that operate under rate (rather than bottom-hole pressure) control. Simulation results are presented for CO2 injection into a synthetic aquifer and into a simplified model of the Mount Simon formation. Test cases involve the use of time-varying well controls that differ from those used in training runs. Results of reasonable accuracy are consistently achieved for relevant well quantities. Runtime speedups of around a factor of 370 relative to full- order AD-GPRS simulations are achieved, though the preprocessing needed for POD-TPWL model construction corresponds to the computational requirements for about 2.3 full-order simulation runs. A preliminary treatment for POD-TPWL modeling in which test cases differ from training runs in terms of geological parameters (rather than well controls) is also presented. Results in this case involve only small differences between
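
    The sketch below illustrates only the POD half of POD-TPWL on synthetic snapshot data: an orthonormal basis is extracted from training-run states with an SVD and new states are represented by a few coefficients. The TPWL linearization and the AD-GPRS coupling are not shown, and all dimensions are assumed for illustration.

```python
# Sketch of the POD step: build a low-dimensional basis from training snapshots
# via SVD and represent a new full-order state by a few coefficients.
import numpy as np

rng = np.random.default_rng(3)
n_cells, n_snapshots = 5000, 60
modes_true = rng.normal(size=(n_cells, 4))                    # hidden low-rank structure
snapshots = modes_true @ rng.normal(size=(4, n_snapshots)) \
            + 0.01 * rng.normal(size=(n_cells, n_snapshots))  # pressure/saturation states

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1                  # retain 99.99% "energy"
Phi = U[:, :r]                                                # POD basis

x_new = snapshots[:, 0] + 0.01 * rng.normal(size=n_cells)     # a new full-order state
z = Phi.T @ x_new                                             # r reduced coordinates
x_rec = Phi @ z
print(f"r = {r}, relative reconstruction error = "
      f"{np.linalg.norm(x_new - x_rec) / np.linalg.norm(x_new):.2e}")
```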

  19. NIF capsule performance modeling

    Directory of Open Access Journals (Sweden)

    Weber S.

    2013-11-01

    Full Text Available Post-shot modeling of NIF capsule implosions was performed in order to validate our physical and numerical models. Cryogenic layered target implosions and experiments with surrogate targets produce an abundance of capsule performance data including implosion velocity, remaining ablator mass, times of peak x-ray and neutron emission, core image size, core symmetry, neutron yield, and x-ray spectra. We have attempted to match the integrated data set with capsule-only simulations by adjusting the drive and other physics parameters within expected uncertainties. The simulations include interface roughness, time-dependent symmetry, and a model of mix. We were able to match many of the measured performance parameters for a selection of shots.

  20. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.

  1. Model predictive control of smart microgrids

    DEFF Research Database (Denmark)

    Hu, Jiefeng; Zhu, Jianguo; Guerrero, Josep M.

    2014-01-01

    required to realise high-performance of distributed generations and will realise innovative control techniques utilising model predictive control (MPC) to assist in coordinating the plethora of generation and load combinations, thus enable the effective exploitation of the clean renewable energy sources...

  2. Individualized Biomathematical Modeling of Fatigue and Performance

    Science.gov (United States)

    2008-05-29

    [Abstract not recoverable; only figure-caption fragments remain. The figures show scheduled sleep periods (gray bars), nocturnal sleep periods (light gray areas), and performance predictions according to the new individualized model under conditions including total sleep deprivation, with waking-period data omitted to avoid confounds from sleep inertia.]

  3. Performance reliability prediction for thermal aging based on kalman filtering

    International Nuclear Information System (INIS)

    Ren Shuhong; Wen Zhenhua; Xue Fei; Zhao Wensheng

    2015-01-01

    The performance reliability of nuclear power plant main pipelines that fail due to thermal aging was studied using performance degradation theory. Firstly, using data obtained from accelerated thermal aging experiments, the degradation process of the impact strength and fracture toughness of the austenitic stainless steel material of the main pipeline was analyzed. A time-varying performance degradation model based on the state-space method was built, and the performance trends were predicted using Kalman filtering. Then, a multi-parameter, real-time performance reliability prediction model for main pipeline thermal aging was developed by considering the correlation between the impact properties and fracture toughness and by using stochastic process theory. Thus, the thermal aging performance reliability and reliability life of the main pipeline were obtained for multiple parameters, which provides a scientific basis for optimizing aging maintenance decision making for nuclear power plant main pipelines. (authors)
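
    A minimal sketch of the state-space idea is given below: a degrading property (for example impact strength) is tracked with a local level-plus-slope model through a Kalman filter and the filtered trend is extrapolated to forecast future degradation. All matrices, noise levels, and measurements are illustrative assumptions.

```python
# Sketch: track a degrading property with a level-plus-slope state-space model
# via a Kalman filter, then extrapolate the filtered state to forecast degradation.
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])      # [level, slope] transition
H = np.array([[1.0, 0.0]])                   # only the level is measured
Q = np.diag([1e-3, 1e-4])                    # process noise
R = np.array([[0.5]])                        # measurement noise

x = np.array([100.0, 0.0])                   # initial level and slope
P = np.eye(2)
measurements = [99.2, 98.1, 97.5, 96.0, 95.2, 93.9]   # accelerated-aging data (synthetic)

for z in measurements:
    # predict
    x, P = F @ x, F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

for step in (5, 10):                         # extrapolate the fitted trend
    level = (np.linalg.matrix_power(F, step) @ x)[0]
    print(f"forecast level after {step} more periods: {level:.1f}")
```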

  4. What predicts performance in Canadian dental schools?

    Science.gov (United States)

    Smithers, S; Catano, V M; Cunningham, D P

    2004-06-01

    The task of selecting the best dental applicants out of an extremely competitive applicant pool is a problem faced annually by dental faculties. This study examined the validity of both cognitive and noncognitive factors used for selection to Canadian dental schools. Interest in personality measurement and the prediction offered by personality measures has escalated and may be applied to the selection of dental candidates. Therefore, the study also assessed whether the addition of a personality measure would increase the validity of predicting performance beyond that achieved by an interview and the Dental Aptitude Test. Results suggest that an interview may be useful in identifying specific behavioral characteristics deemed important for success in dental training. Consistent with previous research, results show that the Dental Aptitude Test is a good predictor of preclinical academic success, with prediction declining when clinical components of the program are introduced into the criterion. Results from the personality measure indicated that Openness to Experience was significantly related to aspects of clinical education, although, contrary to expectations, this relationship was negative. A facet of Openness, Ideas, together with Positive Emotions, a facet of Extroversion, improved prediction of performance in clinical studies beyond that provided by the Dental Aptitude Test and the Interview. Implications of the findings are discussed, and recommendations regarding the admission process to Canadian dental programs are offered.

  5. Biomarker case-detection and prediction with potential for functional psychosis screening: development and validation of a model related to biochemistry, sensory neural timing and end organ performance.

    Directory of Open Access Journals (Sweden)

    Stephanie eFryar-Williams

    2016-04-01

    Full Text Available The Mental Health Biomarker Project aimed to discover case-predictive biomarkers for functional psychosis. In a retrospective, cross-sectional study, candidate marker results from 67 highly characterized symptomatic participants were compared with results from 67 gender- and age-matched controls. Urine samples were analysed for catecholamines, their metabolites and hydroxylpyrolline-2-one, an oxidative stress marker. Blood samples were analyzed for vitamin and trace element cofactors of enzymes in the catecholamine synthesis and metabolism pathways. Cognitive, auditory and visual processing measures were assessed using a simple 45-minute, office-based procedure. Receiver operating characteristic (ROC) and odds ratio analysis discovered biomarkers for deficits in folate, vitamin D and B6 and elevations in free copper to zinc ratio, catecholamines and the oxidative stress marker. Deficits were discovered in peripheral visual and auditory end-organ function, intra-cerebral auditory and visual processing speed and dichotic-listening performance. 15 ROC biomarker variables were divided into 5 functional domains. Through a repeated ROC process, individual ROC variables, followed by domains and finally the overall 15-variable model, were dichotomously scored and tallied for abnormal results, upon which it was found that a cut-off of ≥ 3 out of 5 abnormal domains achieved an AUC of 0.952 with a sensitivity of 84% and a specificity of 90%. Six additional middle ear biomarkers in a 21-biomarker set increased sensitivity to 94%. Fivefold cross-validation yielded a mean sensitivity of 85% for the 15-biomarker set. Non-parametric regression analysis confirmed that ≥ 3 out of 5 abnormally scored domains predicted > 50% risk of case-ness, whilst 4 abnormally scored domains predicted 88% risk of case-ness, and 100% diagnostic certainty was reached when all 5 domains were abnormally scored. These findings require validation in prospective cohorts and other mental
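
    The sketch below illustrates the decision rule described above on synthetic data: each of five functional domains is dichotomized as normal or abnormal, abnormal domains are tallied per participant, and a case is called when the tally is at least 3; an AUC is then computed for the tally. The abnormality probabilities and resulting figures are illustrative, not the study's results.

```python
# Sketch of the domain-tally decision rule on synthetic data: call a case when
# >= 3 of 5 domains are abnormal, and compute the AUC of the tally as a score.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n_cases, n_controls = 67, 67
p_abnormal_cases, p_abnormal_controls = 0.65, 0.15   # per-domain abnormality probabilities

tally_cases = rng.binomial(1, p_abnormal_cases, size=(n_cases, 5)).sum(axis=1)
tally_controls = rng.binomial(1, p_abnormal_controls, size=(n_controls, 5)).sum(axis=1)

y_true = np.r_[np.ones(n_cases), np.zeros(n_controls)]
tally = np.r_[tally_cases, tally_controls]

sensitivity = np.mean(tally_cases >= 3)
specificity = np.mean(tally_controls < 3)
print(f"AUC = {roc_auc_score(y_true, tally):.3f}, sensitivity = {sensitivity:.0%}, "
      f"specificity = {specificity:.0%} at >= 3 abnormal domains")
```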

  6. Which method predicts recidivism best?: A comparison of statistical, machine learning, and data mining predictive models

    OpenAIRE

    Tollenaar, N.; van der Heijden, P.G.M.

    2012-01-01

    Using criminal population conviction histories of recent offenders, prediction models are developed that predict three types of criminal recidivism: general recidivism, violent recidivism and sexual recidivism. The research question is whether prediction techniques from modern statistics, data mining and machine learning provide an improvement in predictive performance over classical statistical methods, namely logistic regression and linear discriminant analysis. These models are compared ...

  7. Performance Evaluation of FAO Model for Prediction of Yield Production, Soil Water and Solute Balance under Environmental Stresses (Case Study: Winter Wheat)

    Directory of Open Access Journals (Sweden)

    V. Rezaverdinejad

    2014-11-01

    Full Text Available In this study, the FAO agro-hydrological model was investigated and evaluated for prediction of yield production, soil water and solute balance using winter wheat field data under water and salt stresses. For this purpose, a field experiment was conducted with three irrigation water salinity levels (S1, S2 and S3, corresponding to 1.4, 4.5 and 9.6 dS/m, respectively) and four irrigation depth levels (I1, I2, I3 and I4, corresponding to 50, 75, 100 and 125% of crop water requirement, respectively) for two winter wheat varieties, Roshan and Ghods, with three replications, at an experimental farm of Birjand University during the 1384-85 period. Based on the results, the mean relative errors of the model in yield prediction for Roshan and Ghods were 9.2 and 26.1%, respectively. For both the Roshan and Ghods varieties, the maximum yield prediction errors occurred in the S1I1, S2I1 and S3I1 treatments. The relative errors of Roshan yield prediction for S1I1, S2I1 and S3I1 were 20.0, 28.1 and 26.6%, respectively, and for the Ghods variety they were 61, 94.5 and 99.9%, respectively, indicating significant overestimation under higher water stress. Across all treatments, the mean relative errors of the model in predicting soil water depletion and the electrical conductivity of the soil saturation extract were 7.1 and 5.8%, respectively, indicating good accuracy of the model in predicting soil water content and soil salinity.
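
    The relative-error statistics quoted above are simple to reproduce once observed and predicted yields are in hand. The sketch below uses made-up observed/predicted yield arrays (not the study's data) to compute per-treatment and mean relative errors.

```python
import numpy as np

# Hypothetical observed and model-predicted yields (t/ha) for six
# treatments; replace with real data to reproduce the statistics in the paper.
observed = np.array([4.1, 3.6, 2.9, 2.2, 3.8, 3.1])
predicted = np.array([4.4, 3.9, 3.5, 2.9, 3.9, 3.3])

relative_error = np.abs(predicted - observed) / observed * 100.0
print("relative error per treatment (%):", np.round(relative_error, 1))
print("mean relative error (%):", round(relative_error.mean(), 1))
```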

  8. Evaluation of genome-enabled selection for bacterial cold water disease resistance using progeny performance data in Rainbow Trout: Insights on genotyping methods and genomic prediction models

    Science.gov (United States)

    Bacterial cold water disease (BCWD) causes significant economic losses in salmonid aquaculture, and traditional family-based breeding programs aimed at improving BCWD resistance have been limited to exploiting only between-family variation. We used genomic selection (GS) models to predict genomic br...

  9. Evaluating the performance of a new model for predicting the growth of Clostridium perfringens in cooked, uncured meat and poultry products under isothermal, heating, and dynamically cooling conditions

    Science.gov (United States)

    Clostridium perfringens Type A is a significant public health threat and may germinate, outgrow, and multiply during cooling of cooked meats. This study evaluates a new C. perfringens growth model in IPMP Dynamic Prediction using the same criteria and cooling data in Mohr and others (2015), but inc...

  10. Predictive accuracy of risk factors and markers: a simulation study of the effect of novel markers on different performance measures for logistic regression models.

    Science.gov (United States)

    Austin, Peter C; Steyerberg, Ewout W

    2013-02-20

    The change in c-statistic is frequently used to summarize the change in predictive accuracy when a novel risk factor is added to an existing logistic regression model. We explored the relationship between the absolute change in the c-statistic, Brier score, generalized R², and the discrimination slope when a risk factor was added to an existing model in an extensive set of Monte Carlo simulations. The increase in model accuracy due to the inclusion of a novel marker was proportional to both the prevalence of the marker and to the odds ratio relating the marker to the outcome, but inversely proportional to the accuracy of the logistic regression model with the marker omitted. We observed greater improvements in model accuracy when the novel risk factor or marker was uncorrelated with the existing predictor variable compared with when the risk factor had a positive correlation with the existing predictor variable. We illustrated these findings by using a study on mortality prediction in patients hospitalized with heart failure. In conclusion, the increase in predictive accuracy by adding a marker should be considered in the context of the accuracy of the initial model. Copyright © 2012 John Wiley & Sons, Ltd.
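
    To make the c-statistic comparison concrete, the following sketch fits logistic regression with and without a novel marker on simulated data (not the heart-failure cohort) and reports the change in apparent AUC; the coefficients and variable names are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 5000

# Simulated baseline risk factor and a novel marker, both related to the outcome.
x_base = rng.normal(size=n)
marker = rng.normal(size=n)                  # uncorrelated with x_base
logit = -1.0 + 0.8 * x_base + 0.6 * marker
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

def apparent_auc(features):
    # Apparent (in-sample) c-statistic, for illustration only.
    model = LogisticRegression().fit(features, y)
    return roc_auc_score(y, model.predict_proba(features)[:, 1])

auc_without = apparent_auc(x_base.reshape(-1, 1))
auc_with = apparent_auc(np.column_stack([x_base, marker]))
print(f"c-statistic without marker: {auc_without:.3f}")
print(f"c-statistic with marker:    {auc_with:.3f}")
print(f"change in c-statistic:      {auc_with - auc_without:.3f}")
```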

  11. Fuzzy predictive filtering in nonlinear economic model predictive control for demand response

    DEFF Research Database (Denmark)

    Santos, Rui Mirra; Zong, Yi; Sousa, Joao M. C.

    2016-01-01

    The performance of a model predictive controller (MPC) is highly correlated with the model's accuracy. This paper introduces an economic model predictive control (EMPC) scheme based on a nonlinear model, which uses a branch-and-bound tree search for solving the inherent non-convex optimization...

  12. Model description and evaluation of model performance, scenario S. Multiple pathways assessment of the IAEA/CEC co-ordinated research programme on validation of environmental model predictions (VAMP)

    Energy Technology Data Exchange (ETDEWEB)

    Suolanen, V. [VTT Energy, Espoo (Finland). Nuclear Energy

    1996-12-01

    A modelling approach was used to predict doses from a large area deposition of 137Cs over southern and central Finland. The assumed deposition profile and quantity were both similar to those resulting from the Chernobyl accident. In the study, doses via terrestrial and aquatic environments have been analyzed. Additionally, the intermediate results of the study, such as concentrations in various foodstuffs and the resulting body burdens, were presented. The contributions of ingestion, inhalation and external doses to the total dose were estimated in the study. The considered deposition scenario formed a modelling exercise in the IAEA coordinated research programme on Validation of Environmental Model Predictions, the VAMP project. (21 refs.).

  13. Model description and evaluation of model performance, scenario S. Multiple pathways assessment of the IAEA/CEC co-ordinated research programme on validation of environmental model predictions (VAMP)

    International Nuclear Information System (INIS)

    Suolanen, V.

    1996-12-01

    A modelling approach was used to predict doses from a large area deposition of 137Cs over southern and central Finland. The assumed deposition profile and quantity were both similar to those resulting from the Chernobyl accident. In the study, doses via terrestrial and aquatic environments have been analyzed. Additionally, the intermediate results of the study, such as concentrations in various foodstuffs and the resulting body burdens, were presented. The contributions of ingestion, inhalation and external doses to the total dose were estimated in the study. The considered deposition scenario formed a modelling exercise in the IAEA coordinated research programme on Validation of Environmental Model Predictions, the VAMP project. (21 refs.)

  14. Model Predictive Control of Sewer Networks

    DEFF Research Database (Denmark)

    Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik

    2016-01-01

    The developments in solutions for management of urban drainage are of vital importance, as the amount of sewer water from urban areas continues to increase due to the increase of the world's population and the change in climate conditions. How a sewer network is structured, monitored and controlled has thus become an essential factor for efficient performance of waste water treatment plants. This paper examines methods for simplified modelling and control of a sewer network. A practical approach to the problem is used by analysing a simplified design model, which is based on the Barcelona benchmark model. Due to the inherent constraints, the applied approach is based on Model Predictive Control.
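
    Since the record above centers on constraint-driven Model Predictive Control of a simplified network model, a minimal receding-horizon sketch may help illustrate the idea. The single-tank dynamics, cost weights and pump limits below are invented for illustration and are unrelated to the Barcelona benchmark model.

```python
import numpy as np
from scipy.optimize import minimize

# Toy model: one storage tank with volume v[k+1] = v[k] + dt*(inflow[k] - u[k]),
# where u is the controlled outflow, bounded by pump capacity.
dt, horizon, u_max, v_ref = 300.0, 6, 0.5, 100.0   # s, steps, m3/s, m3
inflow = np.array([0.3, 0.4, 0.6, 0.6, 0.4, 0.3, 0.2, 0.2, 0.3, 0.4])  # m3/s

def cost(u_seq, v0, q_seq):
    # Penalize deviation from the reference volume plus control effort.
    v, j = v0, 0.0
    for u, q in zip(u_seq, q_seq):
        v = v + dt * (q - u)
        j += (v - v_ref) ** 2 + 1e3 * u ** 2
    return j

v = 120.0                       # initial stored volume (m3)
for k in range(len(inflow) - horizon):
    q_pred = inflow[k:k + horizon]          # assume a perfect inflow forecast
    res = minimize(cost, x0=np.full(horizon, 0.3), args=(v, q_pred),
                   bounds=[(0.0, u_max)] * horizon)
    u_now = res.x[0]                        # apply only the first move
    v = v + dt * (inflow[k] - u_now)
    print(f"step {k}: outflow={u_now:.3f} m3/s, stored volume={v:.1f} m3")
```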

  15. Prediction of Cone Crusher Performance Considering Liner Wear

    Directory of Open Access Journals (Sweden)

    Yanjun Ma

    2016-12-01

    Full Text Available Cone crushers are used in the aggregates and mining industries to crush rock material. The pressure on cone crusher liners is the key factor that influences the hydraulic pressure, power draw and liner wear. In order to dynamically analyze and calculate cone crusher performance along with liner wear, a series of experiments were performed to obtain crushed rock material samples from a crushing plant at different time intervals. In this study, piston die tests are carried out and a model relating the compression coefficient, compression ratio and particle size distribution to a corresponding pressure is presented. On this basis, a new wear prediction model is proposed, combining the empirical model for predicting liner wear with a time parameter. A simple and practical model, based on the wear model and interparticle breakage, is presented for calculating the compression ratio of each crushing zone along with liner wear. Furthermore, the size distribution of the product is calculated based on an existing size reduction process model. A method for analysing product size distribution and shape in the crushing process considering liner wear is proposed. Finally, the validity of the wear model is verified via testing. The results show a significant improvement in the prediction of cone crusher performance when liner wear is considered, compared with the previous model.

  16. Radionuclide migration in groundwater at a low-level waste disposal site: a comparison of predictive performance modeling versus field observations

    International Nuclear Information System (INIS)

    Robertson, D.E.; Myers, D.A.; Bergeron, M.P.; Champ, D.R.; Killey, R.W.D.; Moltyaner, G.L.; Young, J.L.

    1985-08-01

    This paper describes a project which is structured to test the concept of modeling a shallow land low-level waste burial site. The project involves a comparison of the actual observed radionuclide migration in groundwaters at a 30-year-old, well-monitored field site with the results of predictive transport modeling. The comparison is being conducted as a cooperative program with Atomic Energy of Canada Ltd. (AECL) at the low-level waste management area at the Chalk River Nuclear Laboratories, Ontario, Canada. A joint PNL-AECL field investigation was conducted in 1983 and 1984 to complement the existing extensive data base on actual radionuclide migration. Predictive transport modeling is currently being conducted for this site; first, as if it were a new location being considered for a low-level waste shallow-land burial site and only minimal information about the site were available, and second, utilizing the extensive data base available for the site. The modeling results will then be compared with the empirical observations to provide insight into the level of effort needed to reasonably predict the spatial and temporal movement of radionuclides in the groundwater environment. 8 refs., 5 figs.

  17. Reading Performance Is Predicted by More Than Phonological Processing

    Directory of Open Access Journals (Sweden)

    Michelle Y. Kibby

    2014-09-01

    Full Text Available We compared three phonological processing components (phonological awareness, rapid automatized naming and phonological memory), verbal working memory, and attention control in terms of how well they predict the various aspects of reading: word recognition, pseudoword decoding, fluency and comprehension, in a mixed sample of 182 children ages 8-12 years. Participants displayed a wide range of reading ability and attention control. Multiple regression was used to determine how well the phonological processing components, verbal working memory, and attention control predict reading performance. All equations were highly significant. Phonological memory predicted word identification and decoding. In addition, phonological awareness and rapid automatized naming predicted every aspect of reading assessed, supporting the notion that phonological processing is a core contributor to reading ability. Nonetheless, phonological processing was not the only predictor of reading performance. Verbal working memory predicted fluency, decoding and comprehension, and attention control predicted fluency. Based upon our results, when using Baddeley's model of working memory it appears that the phonological loop contributes to basic reading ability, whereas the central executive contributes to fluency and comprehension, along with decoding. Attention control was of interest as some children with ADHD have poor reading ability even if it is not sufficiently impaired to warrant diagnosis. Our finding that attention control predicts reading fluency is consistent with prior research showing that sustained attention plays a role in fluency. Taken together, our results suggest that reading is a highly complex skill that entails more than phonological processing to perform well.
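
    The multiple-regression approach described above is easy to sketch. The example below regresses a simulated fluency score on hypothetical phonological, working-memory and attention measures; the column names and effect sizes are placeholders, not the study's dataset.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 182

# Hypothetical standardized predictor scores; column names are illustrative only.
df = pd.DataFrame({
    "phon_awareness": rng.normal(size=n),
    "rapid_naming": rng.normal(size=n),
    "phon_memory": rng.normal(size=n),
    "verbal_wm": rng.normal(size=n),
    "attention_control": rng.normal(size=n),
})
# Simulated fluency outcome driven by a few of the predictors plus noise.
df["fluency"] = (0.4 * df["phon_awareness"] + 0.3 * df["rapid_naming"]
                 + 0.2 * df["verbal_wm"] + 0.2 * df["attention_control"]
                 + rng.normal(scale=0.8, size=n))

predictors = df.drop(columns="fluency")
model = LinearRegression().fit(predictors, df["fluency"])
print("R^2:", round(model.score(predictors, df["fluency"]), 3))
print(dict(zip(predictors.columns, np.round(model.coef_, 2))))
```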

  18. A predictive model for dimensional errors in fused deposition modeling

    DEFF Research Database (Denmark)

    Stolfi, A.

    2015-01-01

    This work concerns the effect of deposition angle (a) and layer thickness (L) on the dimensional performance of FDM parts using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole a-range from 0° to 177° at 3° steps and two values of L (0.254 mm, 0.330 mm) was produced by comparing predicted values with external face-to-face measurements. After removing outliers, the results show that the developed two-parameter model can serve as a tool for modeling the FDM dimensional behavior in a wide range of deposition angles.

  19. The Search Performance Evaluation and Prediction in Exploratory Search

    OpenAIRE

    LIU, FEI

    2016-01-01

    The exploratory search for complex search tasks requires an effective search behavior model to evaluate and predict user search performance. Few studies have investigated the relationship between user search behavior and search performance in exploratory search. This research adopts a mixed approach combining search system development, user search experiment, search query log analysis, and multivariate regression analysis to resolve the knowledge gap. Through this study, it is shown that expl...

  20. Measuring and Predicting Sleep and Performance During Military Operations

    Science.gov (United States)

    2012-08-23

    ... stage of sleep. Furthermore, the finding that even relatively brief sleep periods (e.g., a 4-hour daily nap following 90 hours of continuous ...) ... amounts of SWS obtained. Because normal performance levels are restored by recovery sleep periods that include much less sleep time than the amount ... [The remainder of this record is figure residue from a two-step model diagram relating work periods, sleep periods and fatigue level.]

  1. Axisymmetric thrust-vectoring nozzle performance prediction

    International Nuclear Information System (INIS)

    Wilson, E. A.; Adler, D.; Bar-Yoseph, P.Z

    1998-01-01

    Throat-hinged geometrically variable converging-diverging thrust-vectoring nozzles directly affect the jet flow geometry and rotation angle at the nozzle exit as a function of the nozzle geometry, the nozzle pressure ratio and flight velocity. Nozzle divergence in the effective-geometric nozzle relation is considered theoretically here for the first time. In this study, an explicit calculation procedure is presented as a function of nozzle geometry at constant nozzle pressure ratio, zero velocity and altitude, and compared with experimental results in a civil thrust-vectoring scenario. This procedure may be used in dynamic thrust-vectoring nozzle design performance predictions or analysis for civil and military nozzles, as well as in the definition of initial jet flow conditions in future numerical VSTOL/TV jet performance studies

  2. Prediction for Major Adverse Outcomes in Cardiac Surgery: Comparison of Three Prediction Models

    Directory of Open Access Journals (Sweden)

    Cheng-Hung Hsieh

    2007-09-01

    Conclusion: The Parsonnet score performed as well as the logistic regression models in predicting major adverse outcomes. The Parsonnet score appears to be a very suitable model for clinicians to use in risk stratification of cardiac surgery.

  3. Nonlinear model predictive control theory and algorithms

    CERN Document Server

    Grüne, Lars

    2017-01-01

    This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...

  4. Predictive capabilities of various constitutive models for arterial tissue.

    Science.gov (United States)

    Schroeder, Florian; Polzer, Stanislav; Slažanský, Martin; Man, Vojtěch; Skácel, Pavel

    2018-02-01

    The aim of this study is to validate several constitutive models by assessing their capabilities in describing and predicting the uniaxial and biaxial behavior of porcine aortic tissue. 14 samples from porcine aortas were used to perform 2 uniaxial and 5 biaxial tensile tests. Transversal strains were furthermore recorded for the uniaxial data. The experimental data were fitted by four constitutive models: the Holzapfel-Gasser-Ogden model (HGO), a model based on the generalized structure tensor (GST), the Four-Fiber-Family model (FFF) and the Microfiber model. Fitting was performed to the uniaxial and biaxial data sets separately and the descriptive capabilities of the models were compared. Their predictive capabilities were assessed in two ways. Firstly, each model was fitted to the biaxial data and its accuracy (in terms of R² and NRMSE) in predicting both uniaxial responses was evaluated. Then this procedure was performed conversely: each model was fitted to both uniaxial tests and its accuracy in predicting the 5 biaxial responses was observed. The descriptive capabilities of all models were excellent. In predicting the uniaxial response from biaxial data, the Microfiber model was the most accurate, while the other models also showed reasonable accuracy. The Microfiber and FFF models were able to reasonably predict biaxial responses from uniaxial data, while the HGO and GST models failed completely in this task. The HGO and GST models are not capable of predicting biaxial arterial wall behavior, while the FFF model is the most robust of the investigated constitutive models. Knowledge of transversal strains in uniaxial tests improves the robustness of constitutive models. Copyright © 2017 Elsevier Ltd. All rights reserved.
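
    For readers unfamiliar with this kind of model fitting, the sketch below fits a simple exponential stress-stretch relation to made-up uniaxial data with scipy and reports R². The functional form and the data are illustrative stand-ins, not the HGO, GST, FFF or Microfiber models nor the porcine data used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical uniaxial data: stretch ratio vs. Cauchy stress (kPa), with noise.
stretch = np.linspace(1.0, 1.3, 13)
stress = 5.0 * (np.exp(6.0 * (stretch - 1.0)) - 1.0)
stress += np.random.default_rng(3).normal(scale=2.0, size=stretch.size)

def exp_model(lam, a, b):
    # Simple phenomenological form often used for soft tissue.
    return a * (np.exp(b * (lam - 1.0)) - 1.0)

params, _ = curve_fit(exp_model, stretch, stress, p0=(1.0, 5.0))
pred = exp_model(stretch, *params)
ss_res = np.sum((stress - pred) ** 2)
ss_tot = np.sum((stress - stress.mean()) ** 2)
print("fitted a, b:", np.round(params, 2), " R^2:", round(1 - ss_res / ss_tot, 3))
```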

  5. Fully Closed-Loop Multiple Model Probabilistic Predictive Controller Artificial Pancreas Performance in Adolescents and Adults in a Supervised Hotel Setting.

    Science.gov (United States)

    Forlenza, Gregory P; Cameron, Faye M; Ly, Trang T; Lam, David; Howsmon, Daniel P; Baysal, Nihat; Kulina, Georgia; Messer, Laurel; Clinton, Paula; Levister, Camilla; Patek, Stephen D; Levy, Carol J; Wadwa, R Paul; Maahs, David M; Bequette, B Wayne; Buckingham, Bruce A

    2018-04-16

    Initial Food and Drug Administration-approved artificial pancreas (AP) systems will be hybrid closed-loop systems that require prandial meal announcements and will not eliminate the burden of premeal insulin dosing. Multiple model probabilistic predictive control (MMPPC) is a fully closed-loop system that uses probabilistic estimation of meals to allow for automated meal detection. In this study, we describe the safety and performance of the MMPPC system with announced and unannounced meals in a supervised hotel setting. The Android phone-based AP system with remote monitoring was tested for 72 h in six adults and four adolescents across three clinical sites with daily exercise and meal challenges involving both three announced (manual bolus by patient) and six unannounced (no bolus by patient) meals. Safety criteria were predefined. Controller aggressiveness was adapted daily based on prior hypoglycemic events. Mean 24-h continuous glucose monitor (CGM) glucose was 157.4 ± 14.4 mg/dL, with 63.6 ± 9.2% of readings between 70 and 180 mg/dL, 2.9 ± 2.3% of readings <70 mg/dL, and ... of readings >250 mg/dL. Moderate hyperglycemia was relatively common with 24.6 ± 6.2% of readings between 180 and 250 mg/dL, primarily within 3 h after a meal. Overnight mean CGM was 139.6 ± 27.6 mg/dL, with 77.9 ± 16.4% between 70 and 180 mg/dL, 3.0 ± 4.5% <70 mg/dL, and ... >250 mg/dL. Postprandial hyperglycemia was more common for unannounced meals compared with announced meals (4-h postmeal CGM 197.8 ± 44.1 vs. 140.6 ± 35.0 mg/dL; P < 0.001). No participants met safety stopping criteria. MMPPC was safe in a supervised setting despite meal and exercise challenges. Further studies are needed in a less supervised environment.
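
    The time-in-range style metrics reported above are straightforward to compute from a raw CGM trace. The sketch below uses a fabricated array of glucose readings; the 70/180/250 mg/dL cut-points match those used in the record.

```python
import numpy as np

# Fabricated CGM readings in mg/dL (one value every 5 minutes, about 24 h).
rng = np.random.default_rng(4)
cgm = rng.normal(loc=155.0, scale=45.0, size=288)

def pct(mask):
    # Percentage of readings satisfying the given condition.
    return 100.0 * mask.mean()

print(f"time 70-180 mg/dL:  {pct((cgm >= 70) & (cgm <= 180)):.1f}%")
print(f"time 180-250 mg/dL: {pct((cgm > 180) & (cgm <= 250)):.1f}%")
print(f"time <70 mg/dL:     {pct(cgm < 70):.1f}%")
print(f"time >250 mg/dL:    {pct(cgm > 250):.1f}%")
```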

  6. Link Prediction via Sparse Gaussian Graphical Model

    Directory of Open Access Journals (Sweden)

    Liangliang Zhang

    2016-01-01

    Full Text Available Link prediction is an important task in complex network analysis. Traditional link prediction methods are limited by network topology and a lack of node property information, which makes predicting links challenging. In this study, we address link prediction using a sparse Gaussian graphical model and demonstrate its theoretical and practical effectiveness. In theory, link prediction is executed by estimating the inverse covariance matrix of the samples to overcome information limits. The proposed method was evaluated with four small and four large real-world datasets. The experimental results show that the area under the curve (AUC) value obtained by the proposed method improved by an average of 3% on the small datasets and 12.5% on the large datasets compared with 13 mainstream similarity methods. This method outperforms the baseline method, and its prediction accuracy is superior to mainstream methods when using only 80% of the training set. The method also provides significantly higher AUC values when using only 60% of the training set on the Dolphin and Taro datasets. Furthermore, the error rate of the proposed method demonstrates superior performance with all datasets compared to mainstream methods.
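
    A sparse Gaussian graphical model of the kind described above can be estimated with scikit-learn's graphical lasso. The sketch below scores candidate links by the magnitude of off-diagonal entries of the estimated precision matrix; the node-observation matrix is synthetic and the scoring rule is a simplified stand-in for the paper's method.

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(5)

# Synthetic "samples x nodes" matrix; each column is a node and each row an
# observation of node activity/attributes.
n_samples, n_nodes = 200, 8
X = rng.normal(size=(n_samples, n_nodes))
X[:, 1] += 0.8 * X[:, 0]          # induce dependence between nodes 0 and 1
X[:, 4] += 0.6 * X[:, 3]          # and between nodes 3 and 4

model = GraphicalLassoCV().fit(X)
precision = model.precision_

# Score each node pair by the absolute off-diagonal precision entry;
# larger scores suggest a more likely link.
scores = {(i, j): abs(precision[i, j])
          for i in range(n_nodes) for j in range(i + 1, n_nodes)}
top = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:3]
print("top candidate links:", top)
```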

  7. Constrained bayesian inference of project performance models

    OpenAIRE

    Sunmola, Funlade

    2013-01-01

    Project performance models play an important role in the management of project success. When used for monitoring projects, they can offer predictive ability such as indications of possible delivery problems. Approaches for monitoring project performance relies on available project information including restrictions imposed on the project, particularly the constraints of cost, quality, scope and time. We study in this paper a Bayesian inference methodology for project performance modelling in ...

  8. Electrostatic ion thrusters - towards predictive modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)

    2014-02-15

    The development of electrostatic ion thrusters so far has mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This has resulted in considerable effort regarding prototype design, construction and testing, and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and spacecraft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules, a new quality in the description of electrostatic thrusters can be reached. This opens the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  9. Predicting Performance of a Face Recognition System Based on Image Quality

    NARCIS (Netherlands)

    Dutta, A.

    2015-01-01

    In this dissertation, we focus on several aspects of models that aim to predict performance of a face recognition system. Performance prediction models are commonly based on the following two types of performance predictor features: a) image quality features; and b) features derived solely from

  10. Advances in HTGR fuel performance models

    International Nuclear Information System (INIS)

    Stansfield, O.M.; Goodin, D.T.; Hanson, D.L.; Turner, R.F.

    1985-01-01

    Advances in HTGR fuel performance models have improved the agreement between observed and predicted performance and contributed to an enhanced position of the HTGR with regard to investment risk and passive safety. Heavy metal contamination is the source of about 55% of the circulating activity in the HTGR during normal operation, and the remainder comes primarily from particles which failed because of defective or missing buffer coatings. These failed particles make up a fraction of about 5 x 10^-4 of the total core inventory. In addition to predicting fuel performance during normal operation, the models are used to determine fuel failure and fission product release during core heat-up accident conditions. The mechanistic nature of the models, which incorporate all important failure modes, permits the prediction of performance from the relatively modest accident temperatures of a passively safe HTGR to the much more severe accident conditions of the larger 2240-MW(t) HTGR. (author)

  11. Statistical-learning strategies generate only modestly performing predictive models for urinary symptoms following external beam radiotherapy of the prostate: A comparison of conventional and machine-learning methods

    International Nuclear Information System (INIS)

    Yahya, Noorazrul; Ebert, Martin A.; Bulsara, Max; House, Michael J.; Kennedy, Angel; Joseph, David J.; Denham, James W.

    2016-01-01

    Purpose: Given the paucity of available data concerning radiotherapy-induced urinary toxicity, it is important to ensure derivation of the most robust models with superior predictive performance. This work explores multiple statistical-learning strategies for prediction of urinary symptoms following external beam radiotherapy of the prostate. Methods: The performance of logistic regression, elastic-net, support-vector machine, random forest, neural network, and multivariate adaptive regression splines (MARS) to predict urinary symptoms was analyzed using data from 754 participants accrued by TROG03.04-RADAR. Predictive features included dose-surface data, comorbidities, and medication-intake. Four symptoms were analyzed: dysuria, haematuria, incontinence, and frequency, each with three definitions (grade ≥ 1, grade ≥ 2 and longitudinal) with event rate between 2.3% and 76.1%. Repeated cross-validations producing matched models were implemented. A synthetic minority oversampling technique was utilized in endpoints with rare events. Parameter optimization was performed on the training data. Area under the receiver operating characteristic curve (AUROC) was used to compare performance using sample size to detect differences of ≥0.05 at the 95% confidence level. Results: Logistic regression, elastic-net, random forest, MARS, and support-vector machine were the highest-performing statistical-learning strategies in 3, 3, 3, 2, and 1 endpoints, respectively. Logistic regression, MARS, elastic-net, random forest, neural network, and support-vector machine were the best, or were not significantly worse than the best, in 7, 7, 5, 5, 3, and 1 endpoints. The best-performing statistical model was for dysuria grade ≥ 1 with AUROC ± standard deviation of 0.649 ± 0.074 using MARS. For longitudinal frequency and dysuria grade ≥ 1, all strategies produced AUROC>0.6 while all haematuria endpoints and longitudinal incontinence models produced AUROC<0.6. Conclusions
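
    For readers who want to reproduce this style of head-to-head comparison, the sketch below runs repeated stratified cross-validation for logistic regression, an elastic-net-penalized logistic model and a random forest on synthetic data and compares mean AUROC. The data, features and settings are placeholders, not the TROG 03.04-RADAR dataset or the study's tuned models, and the oversampling step is omitted.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Synthetic stand-in for dose-surface and clinical features vs. a toxicity endpoint.
X, y = make_classification(n_samples=754, n_features=30, n_informative=8,
                           weights=[0.8, 0.2], random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=2000),
    "elastic-net": LogisticRegression(penalty="elasticnet", solver="saga",
                                      l1_ratio=0.5, C=1.0, max_iter=5000),
    "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
}

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=3, random_state=0)
for name, model in models.items():
    auc = cross_val_score(model, X, y, scoring="roc_auc", cv=cv)
    print(f"{name:13s} AUROC = {auc.mean():.3f} ± {auc.std():.3f}")
```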

  12. Statistical-learning strategies generate only modestly performing predictive models for urinary symptoms following external beam radiotherapy of the prostate: A comparison of conventional and machine-learning methods

    Energy Technology Data Exchange (ETDEWEB)

    Yahya, Noorazrul, E-mail: noorazrul.yahya@research.uwa.edu.au [School of Physics, University of Western Australia, Western Australia 6009, Australia and School of Health Sciences, National University of Malaysia, Bangi 43600 (Malaysia); Ebert, Martin A. [School of Physics, University of Western Australia, Western Australia 6009, Australia and Department of Radiation Oncology, Sir Charles Gairdner Hospital, Western Australia 6008 (Australia); Bulsara, Max [Institute for Health Research, University of Notre Dame, Fremantle, Western Australia 6959 (Australia); House, Michael J. [School of Physics, University of Western Australia, Western Australia 6009 (Australia); Kennedy, Angel [Department of Radiation Oncology, Sir Charles Gairdner Hospital, Western Australia 6008 (Australia); Joseph, David J. [Department of Radiation Oncology, Sir Charles Gairdner Hospital, Western Australia 6008, Australia and School of Surgery, University of Western Australia, Western Australia 6009 (Australia); Denham, James W. [School of Medicine and Public Health, University of Newcastle, New South Wales 2308 (Australia)

    2016-05-15

    Purpose: Given the paucity of available data concerning radiotherapy-induced urinary toxicity, it is important to ensure derivation of the most robust models with superior predictive performance. This work explores multiple statistical-learning strategies for prediction of urinary symptoms following external beam radiotherapy of the prostate. Methods: The performance of logistic regression, elastic-net, support-vector machine, random forest, neural network, and multivariate adaptive regression splines (MARS) to predict urinary symptoms was analyzed using data from 754 participants accrued by TROG03.04-RADAR. Predictive features included dose-surface data, comorbidities, and medication-intake. Four symptoms were analyzed: dysuria, haematuria, incontinence, and frequency, each with three definitions (grade ≥ 1, grade ≥ 2 and longitudinal) with event rate between 2.3% and 76.1%. Repeated cross-validations producing matched models were implemented. A synthetic minority oversampling technique was utilized in endpoints with rare events. Parameter optimization was performed on the training data. Area under the receiver operating characteristic curve (AUROC) was used to compare performance using sample size to detect differences of ≥0.05 at the 95% confidence level. Results: Logistic regression, elastic-net, random forest, MARS, and support-vector machine were the highest-performing statistical-learning strategies in 3, 3, 3, 2, and 1 endpoints, respectively. Logistic regression, MARS, elastic-net, random forest, neural network, and support-vector machine were the best, or were not significantly worse than the best, in 7, 7, 5, 5, 3, and 1 endpoints. The best-performing statistical model was for dysuria grade ≥ 1 with AUROC ± standard deviation of 0.649 ± 0.074 using MARS. For longitudinal frequency and dysuria grade ≥ 1, all strategies produced AUROC>0.6 while all haematuria endpoints and longitudinal incontinence models produced AUROC<0.6. Conclusions

  13. The applicability and limitations of the geochemical models and tools used in simulating radionuclide behaviour in natural waters. Lessons learned from the Blind Predictive Modelling exercises performed in conjunction with Natural Analogue studies

    Energy Technology Data Exchange (ETDEWEB)

    Bruno, J.; Duro, L.; Grive, M. [QuantiSci SL, Parc Tecnologic del Valles (Spain)

    2001-07-01

    One of the key applications of Natural Analogue studies to the Performance Assessment (PA) of nuclear waste disposal has been the possibility to test the geochemical models and tools to be used in describing the migration of radionuclides in a future radioactive waste repository system. To this end, several geochemical modelling testing exercises (commonly denoted as Blind Predictive Modelling) have formed an integral part of Natural Analogue studies over the last decade. Consequently, this is a timely occasion to evaluate the experience gained and the lessons learnt. We have reviewed, discussed and compared the results obtained from the Blind Predictive Modelling (BPM) exercises carried out within 7 Natural Analogue studies: Oman, Pocos de Caldas, Cigar Lake, Maqarin, El Berrocal, Oklo and Palmottu. To make this comparison meaningful, we present the main geochemical characteristics of each site in order to highlight the most relevant mineralogical and hydrochemical differences. From the complete list of elements studied at all the investigated sites we have made a selection based on the relevance of a given element from a PA viewpoint and on the frequency with which this element has been included in the BPM exercises. The elements selected for discussion are: Sr, Ba, Sn, Pb, Se, Ni, Zn, REEs, Th and U. We have based our discussion on the results obtained from the speciation as well as solubility calculations. From the comparison of the results it is concluded that we can differentiate between three element categories: 1. Elements whose geochemical behaviour can be fairly well described by assuming solubility control exerted by pure solid phases of the given element (i.e. Th, U under reducing conditions and U in some sites under oxidising conditions); 2. Elements for which the association to major geochemical components of the system must be considered in order to explain their concentrations in groundwaters (i.e. Sr, Ba, Zn, Se, REEs and U under

  14. Neuro-fuzzy modeling in bankruptcy prediction

    Directory of Open Access Journals (Sweden)

    Vlachos D.

    2003-01-01

    Full Text Available For the past 30 years the problem of bankruptcy prediction has been thoroughly studied. From the paper of Altman in 1968 to the recent papers in the '90s, the progress in prediction accuracy was not satisfactory. This paper investigates an alternative modeling of the system (the firm), combining neural networks and fuzzy controllers, i.e. using neuro-fuzzy models. Classical modeling is based on mathematical models that describe the behavior of the firm under consideration. The main idea of fuzzy control, on the other hand, is to build a model of a human control expert who is capable of controlling the process without thinking in terms of a mathematical model. This control expert specifies his control action in the form of linguistic rules. These control rules are translated into the framework of fuzzy set theory, providing a calculus which can simulate the behavior of the control expert and enhance its performance. The accuracy of the model is studied using datasets from previous research papers.

  15. Evaluating the performance of an integrated CALPUFF-MM5 modeling system for predicting SO2 emission from a refinery

    Energy Technology Data Exchange (ETDEWEB)

    Abdul-Wahab, Sabah Ahmed [Sultan Qaboos University, Department of Mechanical and Industrial Engineering, College of Engineering, Muscat (Oman); Ali, Sappurd [National Engineering and Scientific Commission (NESCOM), Islamabad (Pakistan); Sardar, Sabir; Irfan, Naseem [Pakistan Institute of Engineering and Applied Sciences (PIEAS), Islamabad (Pakistan); Al-Damkhi, Ali [Public Authority for Applied Education and Training (PAAET), Department of Environmental Sciences College of Health Sciences, Salmiyah (Kuwait)

    2011-12-15

    Oil refineries are one of the proven sources of environmental pollution, as they emit more than 100 chemicals into the atmosphere, including sulfur dioxide (SO2). The dispersion patterns of SO2 emissions from the Sohar refinery were simulated by employing the California Puff (CALPUFF) model integrated with the state-of-the-art meteorological Mesoscale Model (MM5). The results of this simulation were used to quantify the ground level concentrations of SO2 in and around the refinery. The evaluation of the CALPUFF and MM5 modeling system was carried out by comparing the estimated results with the observed data for the same area. The predicted concentrations of SO2 agreed well with the observed data, with minor differences in magnitudes. In addition, the ambient air quality of the area was checked by comparing the model results with the regulatory limits for SO2 set by the Ministry of Environment and Climate Affairs (MECA) in Oman. From the analysis of the results, it was found that the concentration of SO2 in the communities near the Sohar refinery is well within the regulatory limits specified by MECA. Based on these results, it was concluded that no health risk due to SO2 emissions is present in areas adjacent to the refinery. (orig.)

  16. Modular Resource Centric Learning for Workflow Performance Prediction

    OpenAIRE

    Singh, Alok; Nguyen, Mai; Purawat, Shweta; Crawl, Daniel; Altintas, Ilkay

    2017-01-01

    Workflows provide an expressive programming model for fine-grained control of large-scale applications in distributed computing environments. Accurate estimates of complex workflow execution metrics on large-scale machines have several key advantages. The performance of scheduling algorithms that rely on estimates of execution metrics degrades when the accuracy of predicted execution metrics decreases. This in-progress paper presents a technique being developed to improve the accuracy of pred...

  17. A predictive analytic model for high-performance tunneling field-effect transistors approaching non-equilibrium Green's function simulations

    International Nuclear Information System (INIS)

    Salazar, Ramon B.; Appenzeller, Joerg; Ilatikhameneh, Hesameddin; Rahman, Rajib; Klimeck, Gerhard

    2015-01-01

    A new compact modeling approach is presented which describes the full current-voltage (I-V) characteristic of high-performance (aggressively scaled-down) tunneling field-effect-transistors (TFETs) based on homojunction direct-bandgap semiconductors. The model is based on an analytic description of two key features, which capture the main physical phenomena related to TFETs: (1) the potential profile from source to channel and (2) the elliptic curvature of the complex bands in the bandgap region. It is proposed to use 1D Poisson's equations in the source and the channel to describe the potential profile in homojunction TFETs. This allows quantification of the impact of source/drain doping on device performance, an aspect usually ignored in TFET modeling but highly relevant in ultra-scaled devices. The compact model is validated by comparison with state-of-the-art quantum transport simulations using a 3D full band atomistic approach based on non-equilibrium Green's functions. It is shown that the model reproduces with good accuracy the data obtained from the simulations in all regions of operation: the on/off states and the n/p branches of conduction. This approach allows calculation of energy-dependent band-to-band tunneling currents in TFETs, a feature that gives deep insight into the underlying device physics. The simplicity and accuracy of the approach provide a powerful tool to explore in a quantitative manner how a wide variety of parameters (material-, size-, and/or geometry-dependent) impact the TFET performance under any bias conditions. The proposed model thus presents a practical complement to computationally expensive simulations such as the 3D NEGF approach

  18. Predictive Models for Carcinogenicity and Mutagenicity ...

    Science.gov (United States)

    Mutagenicity and carcinogenicity are endpoints of major environmental and regulatory concern. These endpoints are also important targets for development of alternative methods for screening and prediction due to the large number of chemicals of potential concern and the tremendous cost (in time, money, animals) of rodent carcinogenicity bioassays. Both mutagenicity and carcinogenicity involve complex, cellular processes that are only partially understood. Advances in technologies and generation of new data will permit a much deeper understanding. In silico methods for predicting mutagenicity and rodent carcinogenicity based on chemical structural features, along with current mutagenicity and carcinogenicity data sets, have performed well for local prediction (i.e., within specific chemical classes), but are less successful for global prediction (i.e., for a broad range of chemicals). The predictivity of in silico methods can be improved by improving the quality of the data base and endpoints used for modelling. In particular, in vitro assays for clastogenicity need to be improved to reduce false positives (relative to rodent carcinogenicity) and to detect compounds that do not interact directly with DNA or have epigenetic activities. New assays emerging to complement or replace some of the standard assays include VitotoxTM, GreenScreenGC, and RadarScreen. The needs of industry and regulators to assess thousands of compounds necessitate the development of high-t

  19. Numerical simulation of a twin screw expander for performance prediction

    Science.gov (United States)

    Papes, Iva; Degroote, Joris; Vierendeels, Jan

    2015-08-01

    With the increasing use of twin screw expanders in waste heat recovery applications, the performance prediction of these machines plays an important role. This paper presents a mathematical model for calculating the performance of a twin screw expander. From the mass and energy conservation laws, differential equations are derived which are then solved, together with the appropriate Equation of State, in the instantaneous control volumes. Different flow processes that occur inside the screw expander, such as filling (accompanied by a substantial pressure loss) and leakage flows through the clearances, are accounted for in the model. The mathematical model employs all geometrical parameters such as chamber volume, suction and leakage areas. With R245fa as the working fluid, the Aungier Redlich-Kwong Equation of State has been used in order to include real gas effects. To calculate the mass flow rates through the leakage paths formed inside the screw expander, flow coefficients are considered constant; they are derived from 3D Computational Fluid Dynamics calculations at given working conditions and applied to all other working conditions. The outcome of the mathematical model is the P-V indicator diagram, which is compared to CFD results for the same twin screw expander. Since CFD calculations require significant computational time, the developed mathematical model can be used for faster performance prediction.
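
    To give a feel for the kind of control-volume balance such a model solves, here is a heavily simplified sketch: an ideal-gas chamber of fixed volume being filled through an orifice, integrated with scipy. The real model uses rotating chamber volumes, leakage paths and the Aungier Redlich-Kwong equation of state; the geometry, coefficients and ideal-gas/isothermal assumptions below are illustrative only.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative constants (not the expander's real geometry or working fluid).
R, T = 287.0, 400.0          # gas constant J/(kg K) and temperature K
V = 1.0e-4                   # fixed chamber volume, m^3
p_in = 6.0e5                 # supply pressure, Pa
rho_in = p_in / (R * T)      # supply density (ideal gas)
A, Cd = 2.0e-5, 0.7          # inlet port area (m^2) and flow coefficient

def dm_dt(t, m):
    # Mass balance: incompressible-orifice approximation for the filling flow.
    p = m[0] * R * T / V                       # chamber pressure (ideal gas)
    dp = max(p_in - p, 0.0)
    return [Cd * A * np.sqrt(2.0 * rho_in * dp)]

m0 = [1.0e5 * V / (R * T)]                     # start at 1 bar
sol = solve_ivp(dm_dt, (0.0, 0.02), m0, max_step=1e-4)
p_final = sol.y[0, -1] * R * T / V
print(f"chamber pressure after 20 ms: {p_final / 1e5:.2f} bar")
```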

  20. Reduced order modelling and predictive control of multivariable ...

    Indian Academy of Sciences (India)

    Anuj Abraham

    2018-03-16

    The performance of the constrained generalized predictive control scheme is found to be superior to that of the conventional PID controller in terms of overshoot, settling time and performance indices, mainly ISE, IAE and MSE. Keywords. Predictive control; distillation column; reduced order model; dominant pole; ...

  1. Changes in Memory Prediction Accuracy: Age and Performance Effects

    Science.gov (United States)

    Pearman, Ann; Trujillo, Amanda

    2013-01-01

    Memory performance predictions are subjective estimates of possible memory task performance. The purpose of this study was to examine possible factors related to changes in word list performance predictions made by younger and older adults. Factors included memory self-efficacy, actual performance, and perceptions of performance. The current study…

  2. Performance of the SMD and SM8 models for predicting solvation free energy of neutral solutes in methanol, dimethyl sulfoxide and acetonitrile.

    Science.gov (United States)

    Zanith, Caroline C; Pliego, Josefredo R

    2015-03-01

    The continuum solvation models SMD and SM8 were developed using 2,346 solvation free energy values for 318 neutral molecules in 91 solvents as reference. However, no solvation data of neutral solutes in methanol were used in the parametrization, while only a few solvation free energy values for solutes in dimethyl sulfoxide and acetonitrile were used. In this report, we have tested the performance of the models for these important solvents. Taking data from the literature, we have generated solvation free energy, enthalpy and entropy values for 37 solutes in methanol, 21 solutes in dimethyl sulfoxide and 19 solutes in acetonitrile. Both the SMD and SM8 models present good performance in methanol and acetonitrile, with mean unsigned errors equal to or less than 0.66 and 0.55 kcal mol(-1) in methanol and acetonitrile, respectively. However, the correlation is worse in dimethyl sulfoxide, where the SMD and SM8 methods present mean unsigned errors of 1.02 and 0.95 kcal mol(-1), respectively. Our results point out that the SMx family of models needs to be improved for the dimethyl sulfoxide solvent.

  3. Performance of the SMD and SM8 models for predicting solvation free energy of neutral solutes in methanol, dimethyl sulfoxide and acetonitrile

    Science.gov (United States)

    Zanith, Caroline C.; Pliego, Josefredo R.

    2015-03-01

    The continuum solvation models SMD and SM8 were developed using 2,346 solvation free energy values for 318 neutral molecules in 91 solvents as reference. However, no solvation data of neutral solutes in methanol were used in the parametrization, while only a few solvation free energy values for solutes in dimethyl sulfoxide and acetonitrile were used. In this report, we have tested the performance of the models for these important solvents. Taking data from the literature, we have generated solvation free energy, enthalpy and entropy values for 37 solutes in methanol, 21 solutes in dimethyl sulfoxide and 19 solutes in acetonitrile. Both the SMD and SM8 models present good performance in methanol and acetonitrile, with mean unsigned errors equal to or less than 0.66 and 0.55 kcal mol-1 in methanol and acetonitrile, respectively. However, the correlation is worse in dimethyl sulfoxide, where the SMD and SM8 methods present mean unsigned errors of 1.02 and 0.95 kcal mol-1, respectively. Our results point out that the SMx family of models needs to be improved for the dimethyl sulfoxide solvent.

  4. comparative analysis of two mathematical models for prediction

    African Journals Online (AJOL)

    A mathematical modeling for prediction of compressive strength of sandcrete blocks was performed using statistical analysis for the sandcrete block data obtained from experimental work done in this study. The models used are Scheffe's and Osadebe's optimization theories to predict the compressive strength of ...

  5. Model predictive control of a 3-DOF helicopter system using ...

    African Journals Online (AJOL)

    ... by simulation, and its performance is compared with that achieved by linear model predictive control (LMPC). Keywords: nonlinear systems, helicopter dynamics, MIMO systems, model predictive control, successive linearization. International Journal of Engineering, Science and Technology, Vol. 2, No. 10, 2010, pp. 9-19 ...

  6. Comparative Analysis of Two Mathematical Models for Prediction of ...

    African Journals Online (AJOL)

    A mathematical modeling for prediction of compressive strength of sandcrete blocks was performed using statistical analysis for the sandcrete block data obtained from experimental work done in this study. The models used are Scheffe's and Osadebe's optimization theories to predict the compressive strength of sandcrete ...

  7. Disease Prediction Models and Operational Readiness

    Energy Technology Data Exchange (ETDEWEB)

    Corley, Courtney D.; Pullum, Laura L.; Hartley, David M.; Benedum, Corey M.; Noonan, Christine F.; Rabinowitz, Peter M.; Lancaster, Mary J.

    2014-03-19

    INTRODUCTION: The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. One of the primary goals of this research was to characterize the viability of biosurveillance models to provide operationally relevant information for decision makers and to identify areas for future research. Two critical characteristics differentiate this work from other infectious disease modeling reviews. First, we reviewed models that attempted to predict the disease event, not merely its transmission dynamics. Second, we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). Methods: We searched dozens of commercial and government databases and harvested Google search results for eligible models, utilizing terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche-modeling. The publication dates of the returned search results are bounded by the dates of coverage of each database and the date on which the search was performed; however, all searching was completed by December 31, 2010. This returned 13,767 webpages and 12,152 citations. After de-duplication and removal of extraneous material, a core collection of 6,503 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. Next, PNNL's IN-SPIRE visual analytics software was used to cross-correlate these publications with the definition of a biosurveillance model, resulting in the selection of 54 documents that matched the criteria. Ten of these documents, however, dealt purely with disease spread models, inactivation of bacteria, or the modeling of human immune system responses to pathogens rather than predicting disease events. As a result, we systematically reviewed 44 papers and the

  8. Advanced wet--dry cooling tower concept performance prediction

    Energy Technology Data Exchange (ETDEWEB)

    Snyder, T.; Bentley, J.; Giebler, M.; Glicksman, L.R.; Rohsenow, W.M.

    1977-01-01

    The purpose of this year's work has been to test and analyze the new dry cooling tower surface previously developed. The model heat transfer test apparatus built last year has been instrumented for temperature, humidity and flow measurement, and performance has been measured under a variety of operating conditions. Tower tests showed approximately 40 to 50% of the total energy transfer taking place due to evaporation. This can be compared to approximately 80 to 85% for a conventional wet cooling tower. Comparison of the model tower test results with those of a computer simulation has demonstrated the validity of that simulation and its use as a design tool. Computer predictions have been made for a full-size tower system operating at several locations. Experience with this counterflow model tower has suggested that several design problems may be avoided by blowing the cooling air horizontally through the packing section. This crossflow concept was built from the previous counterflow apparatus and included the design and fabrication of new packing plates. Instrumentation and testing of the counterflow model produced data with an average experimental error of 10%. These results were compared to the predictions of a computer model written for the crossflow configuration. In 14 test runs the predicted total heat transfer differed from the measured total heat transfer by no more than 8%, with most runs coming well within 5%. With the computer model's validity established, it may now be used to help predict the performance of full-scale wet-dry towers.

  9. Model complexity control for hydrologic prediction

    NARCIS (Netherlands)

    Schoups, G.; Van de Giesen, N.C.; Savenije, H.H.G.

    2008-01-01

    A common concern in hydrologic modeling is overparameterization of complex models given limited and noisy data. This leads to problems of parameter nonuniqueness and equifinality, which may negatively affect prediction uncertainties. A systematic way of controlling model complexity is therefore

  10. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.

  11. Methodologies for predicting the part-load performance of aero-derivative gas turbines

    DEFF Research Database (Denmark)

    Haglind, Fredrik; Elmegaard, Brian

    2009-01-01

    Prediction of the part-load performance of gas turbines is advantageous in various applications. Sometimes reasonable part-load performance is sufficient, while in other cases complete agreement with the performance of an existing machine is desirable. This paper is aimed at providing some guidance on methodologies for predicting part-load performance of aero-derivative gas turbines. Two different design models – one simple and one more complex – are created. Subsequently, for each of these models, the part-load performance is predicted using component maps and turbine constants, respectively. Comparisons with manufacturer data are made. With respect to the design models, the simple model, featuring a compressor, combustor and turbines, results in equally good performance prediction in terms of thermal efficiency and exhaust temperature as does a more complex model. As for part-load predictions, the results suggest...

  12. Performance predictions and manufacturing concerns of burnable poison rods

    International Nuclear Information System (INIS)

    Copeland, R.A.; Buescher, B.J.

    1977-01-01

    Burnable poison rods for reactors designed by B and W consist of low density pellets, composed of boron carbide dispersed in an alumina matrix (Al2O3-B4C), which are contained in Zircaloy-4 tubes. To predict reliable operation of these rods, the irradiation behavior of the components must be known. Performance models were developed based on experimental irradiation data. During rod fabrication, care must be taken to limit the amount of hydrogen in the rod because of the propensity of Zircaloy to hydride in the presence of high levels of hydrogen. Furthermore, the hygroscopic nature of alumina dictates that care must be taken to avoid moisture (a primary source of hydrogen) in the rods. Manufacturing and quality testing procedures have been developed to provide conformance to the design criteria. Examinations have been performed on irradiated burnable poison rods which verify the adequacy of both performance models and manufacturing procedures.

  13. Genomic Prediction of Testcross Performance in Canola (Brassica napus).

    Science.gov (United States)

    Jan, Habib U; Abbadi, Amine; Lücke, Sophie; Nichols, Richard A; Snowdon, Rod J

    2016-01-01

    Genomic selection (GS) is a modern breeding approach where genome-wide single-nucleotide polymorphism (SNP) marker profiles are simultaneously used to estimate performance of untested genotypes. In this study, the potential of genomic selection methods to predict testcross performance for hybrid canola breeding was evaluated for various agronomic traits based on genome-wide marker profiles. A total of 475 genetically diverse spring-type canola pollinator lines were genotyped at 24,403 single-copy, genome-wide SNP loci. In parallel, the 950 F1 testcross combinations between the pollinators and two representative testers were evaluated for a number of important agronomic traits including seedling emergence, days to flowering, lodging, oil yield and seed yield along with essential seed quality characters including seed oil content and seed glucosinolate content. A ridge-regression best linear unbiased prediction (RR-BLUP) model was applied in combination with 500 cross-validations for each trait to predict testcross performance, both across the whole population as well as within individual subpopulations or clusters, based solely on SNP profiles. Subpopulations were determined using multidimensional scaling and K-means clustering. Genomic prediction accuracy across the whole population was highest for seed oil content (0.81) followed by oil yield (0.75) and lowest for seedling emergence (0.29). For seed yield, seed glucosinolate, lodging resistance and days to onset of flowering (DTF), prediction accuracies were 0.45, 0.61, 0.39 and 0.56, respectively. Prediction accuracies could be increased for some traits by treating subpopulations separately; a strategy which only led to moderate improvements for some traits with low heritability, like seedling emergence. No useful or consistent increase in accuracy was obtained by inclusion of a population substructure covariate in the model. Testcross performance prediction using genome-wide SNP markers shows considerable
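    The sketch below illustrates the general shape of this approach: a ridge-penalised regression on SNP dosages (standing in for a formal RR-BLUP mixed model), evaluated by cross-validated prediction accuracy. The marker matrix and phenotypes are simulated, not the canola data, and the dimensions are reduced for speed.

```python
# Hedged sketch of RR-BLUP-style genomic prediction with cross-validation;
# sklearn's Ridge stands in for a formal mixed-model implementation.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)
n_lines, n_snps = 475, 2000                   # far fewer SNPs than the real 24,403
X = rng.integers(0, 3, size=(n_lines, n_snps)).astype(float)   # 0/1/2 allele dosages
true_effects = rng.normal(0.0, 0.05, n_snps)
y = X @ true_effects + rng.normal(0.0, 1.0, n_lines)           # testcross phenotype

acc = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=1).split(X):
    model = Ridge(alpha=n_snps * 0.5)         # heavy shrinkage, in the spirit of RR-BLUP
    model.fit(X[train], y[train])
    pred = model.predict(X[test])
    acc.append(np.corrcoef(pred, y[test])[0, 1])   # "prediction accuracy"

print(f"mean cross-validated accuracy: {np.mean(acc):.2f}")
```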

  16. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference making as few assumptions about restrictive parametric models as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models by one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.

  17. Real-time Tsunami Inundation Prediction Using High Performance Computers

    Science.gov (United States)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2014-12-01

    earthquake occurs took about 2 minutes, which would be sufficient for practical tsunami inundation prediction. In the presentation, the computational performance of our faster-than-real-time tsunami inundation model will be shown, and a preferable tsunami wave source analysis for an accurate inundation prediction will also be discussed.

  18. Prediction of Chemical Function: Model Development and ...

    Science.gov (United States)

    The United States Environmental Protection Agency’s Exposure Forecaster (ExpoCast) project is developing both statistical and mechanism-based computational models for predicting exposures to thousands of chemicals, including those in consumer products. The high-throughput (HT) screening-level exposures developed under ExpoCast can be combined with HT screening (HTS) bioactivity data for the risk-based prioritization of chemicals for further evaluation. The functional role (e.g. solvent, plasticizer, fragrance) that a chemical performs can drive both the types of products in which it is found and the concentration in which it is present, and therefore impacts exposure potential. However, critical chemical use information (including functional role) is lacking for the majority of commercial chemicals for which exposure estimates are needed. A suite of machine-learning based models for classifying chemicals in terms of their likely functional roles in products based on structure was developed. This effort required collection, curation, and harmonization of publicly available data sources of chemical functional use information from government and industry bodies. Physicochemical and structure descriptor data were generated for chemicals with function data. Machine-learning classifier models for function were then built in a cross-validated manner from the descriptor/function data using the method of random forests. The models were applied to: 1) predict chemi
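    The following sketch shows the flavour of such a classifier: a random forest mapping numeric descriptors to functional-use labels, scored by cross-validation. The descriptors, labels and decision rule used to simulate the data are invented for illustration and do not reflect the curated ExpoCast data.

```python
# Illustrative sketch only: a random-forest classifier mapping chemical
# descriptors to functional-use categories. Data are simulated stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_chemicals, n_descriptors = 1200, 50
X = rng.normal(size=(n_chemicals, n_descriptors))         # stand-in descriptors
roles = np.array(["solvent", "plasticizer", "fragrance"])
# Assign labels with an arbitrary rule so that the classes are learnable
y = roles[(X[:, 0] + 0.5 * X[:, 1] > 0).astype(int) + (X[:, 2] > 1).astype(int)]

clf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)                  # cross-validated accuracy
print(f"cross-validated accuracy: {scores.mean():.2f}")
```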

  19. Artificial Neural Network Model for Predicting Compressive

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

    Full Text Available Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing the model with unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20% and that 88% of the output results have absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results show that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
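    A minimal sketch of this kind of model is given below, using a small feed-forward network trained by back-propagation on synthetic mix data (an Abrams-law-like rule plus noise stands in for the literature data); the input variables mirror those named in the abstract, but the numbers are not the study's.

```python
# Minimal sketch, not the authors' model: a back-propagation network mapping
# mix proportions, maximum aggregate size and slump to compressive strength.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n = 400
cement = rng.uniform(250, 500, n)       # kg/m3
water  = rng.uniform(140, 220, n)       # kg/m3
mas    = rng.choice([10, 20, 40], n)    # mm, maximum aggregate size
slump  = rng.uniform(25, 200, n)        # mm
w_c    = water / cement
# Crude synthetic target: strength falls with w/c and slump, plus noise (MPa)
fc = 120 * np.exp(-2.2 * w_c) - 0.02 * slump + rng.normal(0, 2.5, n)

X = np.column_stack([cement, water, mas, slump])
X_tr, X_te, y_tr, y_te = train_test_split(X, fc, random_state=7)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                   random_state=7))
model.fit(X_tr, y_tr)
rel_err = np.abs(model.predict(X_te) - y_te) / y_te
print(f"share of test cases within 10% error: {(rel_err < 0.10).mean():.0%}")
```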

  20. PERFORM 60 - Prediction of the effects of radiation for reactor pressure vessel and in-core materials using multi-scale modelling - 60 years foreseen plant lifetime

    Science.gov (United States)

    Leclercq, Sylvain; Lidbury, David; Van Dyck, Steven; Moinereau, Dominique; Alamo, Ana; Mazouzi, Abdou Al

    2010-11-01

    In nuclear power plants, materials may undergo degradation due to severe irradiation conditions that may limit their operational life. Utilities that operate these reactors need to quantify the ageing and the potential degradations of some essential structures of the power plant to ensure safe and reliable plant operation. So far, the material databases needed to take account of these degradations in the design and safe operation of installations mainly rely on long-term irradiation programs in test reactors as well as on mechanical or corrosion testing in specialized hot cells. Continuous progress in the physical understanding of the phenomena involved in irradiation damage and continuous progress in computer sciences have now made possible the development of multi-scale numerical tools able to simulate the effects of irradiation on materials microstructure. A first step towards this goal has been successfully reached through the development of the RPV-2 and Toughness Module numerical tools by the scientific community created around the FP6 PERFECT project. These tools make it possible to simulate irradiation effects on the constitutive behaviour of the reactor pressure vessel low alloy steel, and also on its failure properties. Relying on the existing PERFECT Roadmap, the four-year Collaborative Project PERFORM 60 has as its main objective the development of multi-scale tools aimed at predicting the combined effects of irradiation and corrosion on internals (austenitic stainless steels), and also the improvement of existing tools for the RPV (bainitic steels). PERFORM 60 is based on two technical sub-projects: (i) RPV and (ii) internals. In addition to these technical sub-projects, the Users' Group and Training sub-project shall allow representatives of constructors, utilities, research organizations and others from Europe, the USA and Japan to receive the information and training needed to form their own appraisal of the limits and potentialities of the developed tools. An important effort will also be made to teach young

  1. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...
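    A minimal sketch of the single-step prediction-error idea for a scalar state-space model is given below: a Kalman filter produces one-step prediction errors, from which both a least-squares criterion and a Gaussian maximum-likelihood criterion can be evaluated for a candidate parameter value. The model, noise levels and parameter values are assumed for illustration and are not taken from the paper.

```python
# Hedged sketch: one-step Kalman prediction errors for x[k+1] = a x[k] + w,
# y[k] = x[k] + v, and the two criteria evaluated for candidate values of a.
import numpy as np

rng = np.random.default_rng(4)

def simulate(a=0.9, q=0.1, r=0.2, n=500):
    x, ys = 0.0, []
    for _ in range(n):
        x = a * x + np.sqrt(q) * rng.normal()
        ys.append(x + np.sqrt(r) * rng.normal())
    return np.array(ys)

def prediction_error_criteria(y, a, q, r):
    x_pred, p_pred = 0.0, 1.0
    sse, loglik = 0.0, 0.0
    for yk in y:
        e = yk - x_pred                      # one-step prediction error
        s = p_pred + r                       # innovation variance
        sse += e**2
        loglik += -0.5 * (np.log(2 * np.pi * s) + e**2 / s)
        k_gain = p_pred / s                  # measurement update
        x_filt = x_pred + k_gain * e
        p_filt = (1 - k_gain) * p_pred
        x_pred = a * x_filt                  # time update
        p_pred = a**2 * p_filt + q
    return sse, loglik

y = simulate()
for a in (0.7, 0.9):
    sse, ll = prediction_error_criteria(y, a, q=0.1, r=0.2)
    print(f"a={a}: sum of squared errors {sse:.1f}, log-likelihood {ll:.1f}")
```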

  2. Stochastic Prediction of Ventilation System Performance

    DEFF Research Database (Denmark)

    Haghighat, F.; Brohus, Henrik; Frier, Christian

    The paper briefly reviews the existing techniques for predicting the airflow rate due to the random nature of forcing functions, e.g. wind speed. The effort is to establish the relationship between the statistics of the output of a system and the statistics of the random input variables and parameters...
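    As a toy illustration of propagating input randomness to an airflow statistic, the sketch below samples an assumed wind-speed distribution and pushes it through a simple orifice-flow relation; the opening area, coefficients and distribution parameters are invented for the example and are not from the paper.

```python
# Monte Carlo sketch: statistics of the airflow rate through an opening when
# the wind speed is random. All parameter values are assumed for illustration.
import numpy as np

rng = np.random.default_rng(42)

RHO = 1.2      # kg/m3, air density
CD = 0.6       # discharge coefficient of the opening
AREA = 0.05    # m2, effective opening area
CP = 0.7       # wind pressure coefficient

wind = rng.weibull(2.0, size=100_000) * 4.0        # m/s, assumed Weibull wind speed
dp = 0.5 * RHO * CP * wind**2                      # Pa, wind-induced pressure difference
q = CD * AREA * np.sqrt(2.0 * dp / RHO) * 3600.0   # m3/h through the opening

print(f"mean airflow {q.mean():.0f} m3/h, std {q.std():.0f} m3/h, "
      f"95th percentile {np.percentile(q, 95):.0f} m3/h")
```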

  3. Predicting work Performance through selection interview ratings and Psychological assessment

    Directory of Open Access Journals (Sweden)

    Liziwe Nzama

    2008-11-01

    Full Text Available The aim of the study was to establish whether selection interviews used in conjunction with psychological assessments of personality traits and cognitive functioning contribute to predicting work performance. The sample consisted of 102 managers who were appointed recently in a retail organisation. The independent variables were selection interview ratings obtained on the basis of structured competency-based interview schedules by interviewing panels, five broad dimensions of personality defined by the Five Factor Model as measured by the 15 Factor Questionnaire (15FQ+), and cognitive processing variables (current level of work, potential level of work, and 12 processing competencies measured by the Cognitive Process Profile (CPP)). Work performance was measured through annual performance ratings that focused on measurable outputs of performance objectives. Only two predictor variables correlated statistically significantly with the criterion variable, namely interview ratings (r = 0.31) and CPP Verbal Abstraction (r = 0.34). Following multiple regression, only these variables contributed significantly to predicting work performance, but only 17.8% of the variance of the criterion was accounted for.

  4. Cognitive load predicts point-of-care ultrasound simulator performance.

    Science.gov (United States)

    Aldekhyl, Sara; Cavalcanti, Rodrigo B; Naismith, Laura M

    2018-02-01

    The ability to maintain good performance with low cognitive load is an important marker of expertise. Incorporating cognitive load measurements in the context of simulation training may help to inform judgements of competence. This exploratory study investigated relationships between demographic markers of expertise, cognitive load measures, and simulator performance in the context of point-of-care ultrasonography. Twenty-nine medical trainees and clinicians at the University of Toronto with a range of clinical ultrasound experience were recruited. Participants answered a demographic questionnaire then used an ultrasound simulator to perform targeted scanning tasks based on clinical vignettes. Participants were scored on their ability to both acquire and interpret ultrasound images. Cognitive load measures included participant self-report, eye-based physiological indices, and behavioural measures. Data were analyzed using a multilevel linear modelling approach, wherein observations were clustered by participants. Experienced participants outperformed novice participants on ultrasound image acquisition. Ultrasound image interpretation was comparable between the two groups. Ultrasound image acquisition performance was predicted by level of training, prior ultrasound training, and cognitive load. There was significant convergence between cognitive load measurement techniques. A marginal model of ultrasound image acquisition performance including prior ultrasound training and cognitive load as fixed effects provided the best overall fit for the observed data. In this proof-of-principle study, the combination of demographic and cognitive load measures provided more sensitive metrics to predict ultrasound simulator performance. Performance assessments which include cognitive load can help differentiate between levels of expertise in simulation environments, and may serve as better predictors of skill transfer to clinical practice.

  5. Cognitive performance modeling based on general systems performance theory.

    Science.gov (United States)

    Kondraske, George V

    2010-01-01

    General Systems Performance Theory (GSPT) was initially motivated by problems associated with quantifying different aspects of human performance. It has proved to be invaluable for measurement development and understanding quantitative relationships between human subsystem capacities and performance in complex tasks. It is now desired to bring focus to the application of GSPT to modeling of cognitive system performance. Previous studies involving two complex tasks (i.e., driving and performing laparoscopic surgery) and incorporating measures that are clearly related to cognitive performance (information processing speed and short-term memory capacity) were revisited. A GSPT-derived method of task analysis and performance prediction termed Nonlinear Causal Resource Analysis (NCRA) was employed to determine the demand on basic cognitive performance resources required to support different levels of complex task performance. This approach is presented as a means to determine a cognitive workload profile and the subsequent computation of a single number measure of cognitive workload (CW). Computation of CW may be a viable alternative to measuring it. Various possible "more basic" performance resources that contribute to cognitive system performance are discussed. It is concluded from this preliminary exploration that a GSPT-based approach can contribute to defining cognitive performance models that are useful for both individual subjects and specific groups (e.g., military pilots).

  6. Performance prediction of rotary compressor with hydrocarbon refrigerant mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Park, M.W.; Chung, Y.G. [Hanyang University Graduate School, Seoul (Korea); Park, K.W. [LG Industrial System Corporation Limited (Korea); Park, H.Y. [Hanyang University, Seoul (Korea)

    1999-04-01

    This paper presents a modeling approach that can predict the transient behavior of a rotary compressor. Mass and energy conservation laws are applied to the control volume, and a real-gas equation of state is used to obtain the thermodynamic properties of the refrigerant. The valve equation is also solved to analyze the discharge process. A dynamic analysis of the vane and roller is carried out to obtain the friction work. From this model, the performance of the rotary compressor, including radial clearance and friction loss, is investigated numerically. The performance of each refrigerant and the possibility of using hydrocarbon refrigerant mixtures in an existing rotary compressor are estimated by applying R12, R134a, and an R290/R600a mixture. (author). 6 refs., 13 figs., 1 tab.

  7. Web tools for predictive toxicology model building.

    Science.gov (United States)

    Jeliazkova, Nina

    2012-07-01

    The development and use of web tools in chemistry have accumulated more than 15 years of history already. Powered by advances in Internet technologies, the current generation of web systems is starting to expand into areas traditional for desktop applications. The web platforms integrate data storage, cheminformatics and data analysis tools. The ease of use and the collaborative potential of the web is compelling, despite the challenges. The topic of this review is a set of recently published web tools that facilitate predictive toxicology model building. The focus is on software platforms, offering web access to chemical structure-based methods, although some of the frameworks could also provide bioinformatics or hybrid data analysis functionalities. A number of historical and current developments are cited. In order to provide comparable assessment, the following characteristics are considered: support for workflows, descriptor calculations, visualization, modeling algorithms, data management and data sharing capabilities, availability of GUI or programmatic access and implementation details. The success of the Web is largely due to its highly decentralized, yet sufficiently interoperable model for information access. The expected future convergence between cheminformatics and bioinformatics databases provides new challenges toward management and analysis of large data sets. The web tools in predictive toxicology will likely continue to evolve toward the right mix of flexibility, performance, scalability, interoperability, sets of unique features offered, friendly user interfaces, programmatic access for advanced users, platform independence, results reproducibility, curation and crowdsourcing utilities, collaborative sharing and secure access.

  8. Modeling and Prediction Using Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp

    2016-01-01

    Models based on ordinary differential equations (ODEs) are deterministic and can predict the future perfectly. A more realistic approach would be to allow for randomness in the model due to, e.g., the model being too simple or errors in input. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs) for modeling and forecasting. It is argued that this gives models and predictions which better reflect reality. The SDE approach also offers a more adequate framework for modeling and a number of efficient tools for model building. A software package (CTSM-R) for SDE-based modeling is briefly described. In the ODE setup, the variation for a single subject is described by a single parameter (or vector), namely the variance (covariance) of the residuals, and the prediction of the states is given as the solution to the ODEs and hence assumed deterministic.
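    A minimal sketch of the contrast, assuming a simple linear decay model: the ODE version is purely deterministic, while the SDE version adds system noise via an Euler-Maruyama step and is observed through separate measurement noise. Parameter values are invented for illustration; CTSM-R itself estimates such models from data.

```python
# Sketch: the same linear model simulated as an ODE (Euler) and as an SDE
# (Euler-Maruyama), with separate system and observation noise for the SDE.
import numpy as np

rng = np.random.default_rng(3)
theta, x0 = -0.5, 10.0            # decay rate and initial state (assumed values)
sigma_sys, sigma_obs = 0.3, 0.5   # system and observation noise levels
dt, n_steps = 0.01, 1000

x_ode = np.empty(n_steps)
x_sde = np.empty(n_steps)
x_ode[0] = x_sde[0] = x0
for k in range(1, n_steps):
    x_ode[k] = x_ode[k-1] + theta * x_ode[k-1] * dt                 # deterministic step
    x_sde[k] = (x_sde[k-1] + theta * x_sde[k-1] * dt
                + sigma_sys * np.sqrt(dt) * rng.normal())           # Euler-Maruyama step

y_obs = x_sde + sigma_obs * rng.normal(size=n_steps)   # noisy observations of the SDE path
print(f"final ODE state {x_ode[-1]:.3f}, final SDE state {x_sde[-1]:.3f}, "
      f"last observation {y_obs[-1]:.3f}")
```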

  9. Predictive models for arteriovenous fistula maturation.

    Science.gov (United States)

    Al Shakarchi, Julien; McGrogan, Damian; Van der Veer, Sabine; Sperrin, Matthew; Inston, Nicholas

    2016-05-07

    Haemodialysis (HD) is a lifeline therapy for patients with end-stage renal disease (ESRD). A critical factor in the survival of renal dialysis patients is the surgical creation of vascular access, and international guidelines recommend arteriovenous fistulas (AVF) as the gold standard of vascular access for haemodialysis. Despite this, AVFs have been associated with high failure rates. Although risk factors for AVF failure have been identified, their utility for predicting AVF failure through predictive models remains unclear. The objectives of this review are to systematically and critically assess the methodology and reporting of studies developing prognostic predictive models for AVF outcomes and to assess their suitability for clinical practice. Electronic databases were searched for studies reporting prognostic predictive models for AVF outcomes. Dual review was conducted to identify studies that reported on the development or validation of a model constructed to predict AVF outcome following creation. Data were extracted on study characteristics, risk predictors, statistical methodology, model type, as well as validation process. We included four different studies reporting five different predictive models. Parameters identified that were common to all scoring systems were age and cardiovascular disease. This review has found a small number of predictive models in vascular access. The disparity between the studies limits the development of a unified predictive model.

  10. Unreachable Setpoints in Model Predictive Control

    DEFF Research Database (Denmark)

    Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp

    2008-01-01

    In this work, a new model predictive controller is developed that handles unreachable setpoints better than traditional model predictive control methods. The new controller induces an interesting fast/slow asymmetry in the tracking response of the system. Nominal asymptotic stability of the optim...

  11. System Predicts Critical Runway Performance Parameters

    Science.gov (United States)

    Millen, Ernest W.; Person, Lee H., Jr.

    1990-01-01

    Runway-navigation-monitor (RNM) and critical-distances-process electronic equipment designed to provide pilot with timely and reliable predictive navigation information relating to takeoff, landing and runway-turnoff operations. Enables pilot to make critical decisions about runway maneuvers with high confidence during emergencies. Utilizes ground-referenced position data only to drive purely navigational monitor system independent of statuses of systems in aircraft.

  12. PREDICTION OF GAS INJECTION PERFORMANCE FOR HETEROGENEOUS RESERVOIRS

    Energy Technology Data Exchange (ETDEWEB)

    Martin J. Blunt; Franklin M. Orr Jr

    2000-06-01

    This final report describes research carried out in the Department of Petroleum Engineering at Stanford University from September 1996--May 2000 under a three-year grant from the Department of Energy on the ''Prediction of Gas Injection Performance for Heterogeneous Reservoirs''. The advances from the research include: new tools for streamline-based simulation including the effects of gravity, changing well conditions, and compositional displacements; analytical solutions to 1D compositional displacements which can speed-up gas injection simulation still further; and modeling and experiments that delineate the physics that is unique to three-phase flow.

  13. Prediction of Gas Injection Performance for Heterogeneous Reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Franklin M. Orr, Jr; Martin J. Blunt

    1998-03-31

    This project performs research in four main areas: laboratory experiments to measure three-phase relative permeability; network modeling to predict three-phase relative permeability; benchmark simulations of gas injection and waterflooding at the field scale; and the development of fast streamline techniques to study field-scale oil. The aim of the work is to achieve a comprehensive description of gas injection processes from the pore to the core to the reservoir scale. In this report we provide a detailed description of our measurements of three-phase relative permeability.

  14. Performance samples on academic tasks : improving prediction of academic performance

    NARCIS (Netherlands)

    Tanilon, Jenny

    2011-01-01

    This thesis is about the development and validation of a performance-based test, labeled as Performance Samples on academic tasks in Education and Child Studies (PSEd). PSEd is designed to identify students who are most able to perform the academic tasks involved in an Education and Child Studies

  15. Stage-specific predictive models for breast cancer survivability.

    Science.gov (United States)

    Kate, Rohit J; Nadig, Ramya

    2017-01-01

    Survivability rates vary widely among the various stages of breast cancer. Although machine learning models built in the past to predict breast cancer survivability were given stage as one of the features, they were not trained or evaluated separately for each stage. To investigate whether there are differences in the performance of machine learning models trained and evaluated across different stages for predicting breast cancer survivability, we used three different machine learning methods to build models that predict breast cancer survivability separately for each stage and compared them with the traditional joint models built for all the stages. We also evaluated the models separately for each stage and together for all the stages. Our results show that the most suitable model to predict survivability for a specific stage is the model trained for that particular stage. In our experiments, using additional examples of other stages during training did not help; in fact, it made it worse in some cases. The most important features for predicting survivability were also found to be different for different stages. By evaluating the models separately on different stages we found that the performance varied widely across them. We also demonstrate that evaluating predictive models for survivability on all the stages together, as was done in the past, is misleading because it overestimates performance. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  16. Hybrid approaches to physiologic modeling and prediction

    Science.gov (United States)

    Olengü, Nicholas O.; Reifman, Jaques

    2005-05-01

    This paper explores how the accuracy of a first-principles physiological model can be enhanced by integrating data-driven, "black-box" models with the original model to form a "hybrid" model system. Both linear (autoregressive) and nonlinear (neural network) data-driven techniques are separately combined with a first-principles model to predict human body core temperature. Rectal core temperature data from nine volunteers, subject to four 30/10-minute cycles of moderate exercise/rest regimen in both CONTROL and HUMID environmental conditions, are used to develop and test the approach. The results show significant improvements in prediction accuracy, with average improvements of up to 30% for prediction horizons of 20 minutes. The models developed from one subject's data are also used in the prediction of another subject's core temperature. Initial results for this approach for a 20-minute horizon show no significant improvement over the first-principles model by itself.
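    The sketch below captures the general pattern described here, on simulated data: a crude first-principles prediction of core temperature is corrected by an autoregressive model fitted to its residuals, and the one-step-ahead hybrid prediction is compared with the first-principles model alone. The model forms, coefficients and noise levels are assumptions for illustration, not the paper's models or data.

```python
# Hybrid-model sketch: first-principles prediction + AR(2) residual correction.
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(0, 160, 1.0)                            # minutes
true = 37.0 + 1.2 * (1 - np.exp(-t / 60.0)) + 0.1 * np.sin(t / 7.0)
obs = true + rng.normal(0, 0.03, t.size)              # "measured" core temperature

fp = 37.0 + 1.0 * (1 - np.exp(-t / 50.0))             # imperfect first-principles model
resid = obs - fp

# Fit an AR(2) model to the residuals by least squares
Y = resid[2:]
X = np.column_stack([resid[1:-1], resid[:-2]])
a = np.linalg.lstsq(X, Y, rcond=None)[0]

# One-step-ahead hybrid prediction: first-principles + AR residual correction
hybrid = fp[2:] + X @ a
rmse_fp = np.sqrt(np.mean((obs[2:] - fp[2:]) ** 2))
rmse_hy = np.sqrt(np.mean((obs[2:] - hybrid) ** 2))
print(f"RMSE first-principles only: {rmse_fp:.3f} C, hybrid: {rmse_hy:.3f} C")
```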

  17. Assessing Ecosystem Model Performance in Semiarid Systems

    Science.gov (United States)

    Thomas, A.; Dietze, M.; Scott, R. L.; Biederman, J. A.

    2017-12-01

    In ecosystem process modelling, comparing outputs to benchmark datasets observed in the field is an important way to validate models, allowing the modelling community to track model performance over time and compare models at specific sites. Multi-model comparison projects as well as models themselves have largely been focused on temperate forests and similar biomes. Semiarid regions, on the other hand, are underrepresented in land surface and ecosystem modelling efforts, and yet will be disproportionately impacted by disturbances such as climate change due to their sensitivity to changes in the water balance. Benchmarking models at semiarid sites is an important step in assessing and improving models' suitability for predicting the impact of disturbance on semiarid ecosystems. In this study, several ecosystem models were compared at a semiarid grassland in southwestern Arizona using PEcAn, or the Predictive Ecosystem Analyzer, an open-source eco-informatics toolbox ideal for creating the repeatable model workflows necessary for benchmarking. Models included SIPNET, DALEC, JULES, ED2, GDAY, LPJ-GUESS, MAESPA, CLM, CABLE, and FATES. Comparison between model output and benchmarks such as net ecosystem exchange (NEE) tended to produce high root mean square error and low correlation coefficients, reflecting poor simulation of seasonality and the tendency for models to create much higher carbon sources than observed. These results indicate that ecosystem models do not currently adequately represent semiarid ecosystem processes.

  18. Finite element model predicts the biomechanical performance of cervical disc replacement and fusion hybrid surgery with various geometry of ball-and-socket artificial disc.

    Science.gov (United States)

    Li, Yang; Fogel, Guy R; Liao, Zhenhua; Liu, Weiqiang

    2017-08-01

    Few finite element studies have investigated changes in cervical biomechanics with various prosthesis design parameters using hybrid surgery (HS), and none have investigated these parameters combined with different HS strategies. The aim of our study was to investigate the effect of ball-and-socket prosthesis geometry on the biomechanical performance of the cervical spine combined with two HS constructs. Two HS strategies were considered: (1) ACDF at C4-C5 and anterior cervical disc replacement (ACDR) at C5-C6 (ACDF/ACDR), and (2) ACDR/ACDF. Three different prostheses were used for each HS strategy: a prosthesis with the core located at the center of the inferior endplate with a radius of 5 mm (BS-5) or 6 mm (BS-6), or with a 5 mm radius core located 1 mm posterior to the center of the inferior endplate (PBS-5). Flexion and extension motions were simulated under displacement control. The flexion motions in the supra- and infra-adjacent levels increased in all cases. The corresponding extension motions increased with all prostheses in the ACDR/ACDF group. The stiffness in flexion and extension increased with all HS models, except for the extension stiffness with ACDF/ACDR. The facet stresses between the index and infra-adjacent level in ACDR/ACDF were significantly greater than those in the intact model. The stresses on the BS-5 UHMWPE core were greater than the yield stress. The core radius and position did not significantly affect the moments, ROM, and facet stress in extension. However, the moments and ROM in flexion were easily affected by the position. The results imply that a large core radius and posterior core position in ACDR designs may reduce the risk of subsidence and wear in the long term, as they showed relatively low stress. The ACDF/ACDR surgery at the C4-C6 level may be an optimal treatment for avoiding accelerated degeneration of the adjacent segments.

  19. Characterising performance of environmental models

    NARCIS (Netherlands)

    Bennett, N.D.; Croke, B.F.W.; Guariso, G.; Guillaume, J.H.A.; Hamilton, S.H.; Jakeman, A.J.; Marsili-Libelli, S.; Newham, L.T.H.; Norton, J.; Perrin, C.; Pierce, S.; Robson, B.; Seppelt, R.; Voinov, A.; Fath, B.D.; Andreassian, V.

    2013-01-01

    In order to use environmental models effectively for management and decision-making, it is vital to establish an appropriate level of confidence in their performance. This paper reviews techniques available across various fields for characterising the performance of environmental models with focus

  20. Predicting thermal performance in occupied dwellings

    Energy Technology Data Exchange (ETDEWEB)

    Kruger, E.; Givoni, B. [Energy Engineering Section, Department of Mechanical Engineering, Technical University of Denmark, Lyngby (Denmark)

    2004-07-01

    The main purpose of formulating methodologies for building systems' evaluation in low-cost housing is to find an effective solution for the huge Brazilian housing deficit of approximately five million housing units, mainly due to an accelerated population growth in urban centers. Low-cost housing programs are usually implemented in a broad sense, with no regard to local specific conditions. Thus, building systems of quite similar characteristics are employed in places with different climatic conditions, which leads to low-quality houses that do not respond to the users' needs. In this paper, the results of the application of formulas to predict daily indoor temperatures in three monitored low-cost houses in Curitiba, Brazil, are presented. The houses were occupied by families having neither cooling nor heating devices and are built of different building materials with different thermal properties. The monitoring of the houses took place both in winter and in summer. Measured data were also compared with simulated data. In this case, the French software COMFIE was used. Finally, the results of the thermal simulations were compared with those of predictive formulas developed by Givoni. (author)

  1. Performance samples on academic tasks: improving prediction of academic performance

    OpenAIRE

    Tanilon, Jenny

    2011-01-01

    This thesis is about the development and validation of a performance-based test, labeled as Performance Samples on academic tasks in Education and Child Studies (PSEd). PSEd is designed to identify students who are most able to perform the academic tasks involved in an Education and Child Studies bridging program. Many Dutch universities set up bridging programs that aim to prepare students with non-university degrees in the Netherlands for Master’s programs at the university level. Some univ...

  2. Modeling number of claims and prediction of total claim amount

    Science.gov (United States)

    Acar, Aslıhan Şentürk; Karabey, Uǧur

    2017-07-01

    In this study we focus on the annual number of claims in a private health insurance data set belonging to a local insurance company in Turkey. In addition to the Poisson model and the negative binomial model, the zero-inflated Poisson model and the zero-inflated negative binomial model are used to model the number of claims in order to take into account excess zeros. To investigate the impact of different distributional assumptions for the number of claims on the prediction of the total claim amount, the predictive performances of the candidate models are compared using root mean square error (RMSE) and mean absolute error (MAE) criteria.
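    A rough sketch of this comparison on simulated claim counts is shown below, assuming the statsmodels package is available: a standard Poisson regression and a zero-inflated Poisson model (with a constant inflation term) are fitted and compared by RMSE and MAE of the predicted claim frequency. The covariate, coefficients and zero-inflation rate are invented for illustration and are not the study's data.

```python
# Sketch: Poisson vs. zero-inflated Poisson on simulated claim counts,
# compared by RMSE/MAE of the predicted expected number of claims.
import numpy as np
from statsmodels.discrete.discrete_model import Poisson
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(11)
n = 5000
age = rng.uniform(18, 70, n)
X = np.column_stack([np.ones(n), (age - 40.0) / 10.0])   # intercept + scaled age

# Simulate zero-inflated counts: structural zeros plus a Poisson part
lam = np.exp(0.2 + 0.15 * X[:, 1])
structural_zero = rng.random(n) < 0.35
y = np.where(structural_zero, 0, rng.poisson(lam))

poisson_fit = Poisson(y, X).fit(disp=False)
zip_fit = ZeroInflatedPoisson(y, X).fit(disp=False, maxiter=200)  # constant inflation

for name, res in [("Poisson", poisson_fit), ("Zero-inflated Poisson", zip_fit)]:
    mu = res.predict(X)                        # expected number of claims per policy
    rmse = np.sqrt(np.mean((y - mu) ** 2))
    mae = np.mean(np.abs(y - mu))
    print(f"{name}: RMSE {rmse:.3f}, MAE {mae:.3f}")
```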

  3. System Advisor Model: Flat Plate Photovoltaic Performance Modeling Validation Report

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, Janine [National Renewable Energy Lab. (NREL), Golden, CO (United States); Whitmore, Jonathan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Kaffine, Leah [National Renewable Energy Lab. (NREL), Golden, CO (United States); Blair, Nate [National Renewable Energy Lab. (NREL), Golden, CO (United States); Dobos, Aron P. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-12-01

    The System Advisor Model (SAM) is a free software tool that performs detailed analysis of both system performance and system financing for a variety of renewable energy technologies. This report provides detailed validation of the SAM flat plate photovoltaic performance model by comparing SAM-modeled PV system generation data to actual measured production data for nine PV systems ranging from 75 kW to greater than 25 MW in size. The results show strong agreement between SAM predictions and field data, with annualized prediction error below 3% for all fixed tilt cases and below 8% for all one axis tracked cases. The analysis concludes that snow cover and system outages are the primary sources of disagreement, and other deviations resulting from seasonal biases in the irradiation models and one axis tracking issues are discussed in detail.

  4. Genomic prediction of complex human traits: relatedness, trait architecture and predictive meta-models

    Science.gov (United States)

    Spiliopoulou, Athina; Nagy, Reka; Bermingham, Mairead L.; Huffman, Jennifer E.; Hayward, Caroline; Vitart, Veronique; Rudan, Igor; Campbell, Harry; Wright, Alan F.; Wilson, James F.; Pong-Wong, Ricardo; Agakov, Felix; Navarro, Pau; Haley, Chris S.

    2015-01-01

    We explore the prediction of individuals' phenotypes for complex traits using genomic data. We compare several widely used prediction models, including Ridge Regression, LASSO and Elastic Nets estimated from cohort data, and polygenic risk scores constructed using published summary statistics from genome-wide association meta-analyses (GWAMA). We evaluate the interplay between relatedness, trait architecture and optimal marker density, by predicting height, body mass index (BMI) and high-density lipoprotein level (HDL) in two data cohorts, originating from Croatia and Scotland. We empirically demonstrate that dense models are better when all genetic effects are small (height and BMI) and target individuals are related to the training samples, while sparse models predict better in unrelated individuals and when some effects have moderate size (HDL). For HDL sparse models achieved good across-cohort prediction, performing similarly to the GWAMA risk score and to models trained within the same cohort, which indicates that, for predicting traits with moderately sized effects, large sample sizes and familial structure become less important, though still potentially useful. Finally, we propose a novel ensemble of whole-genome predictors with GWAMA risk scores and demonstrate that the resulting meta-model achieves higher prediction accuracy than either model on its own. We conclude that although current genomic predictors are not accurate enough for diagnostic purposes, performance can be improved without requiring access to large-scale individual-level data. Our methodologically simple meta-model is a means of performing predictive meta-analysis for optimizing genomic predictions and can be easily extended to incorporate multiple population-level summary statistics or other domain knowledge. PMID:25918167
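    A much-simplified sketch of the meta-model idea follows: dense (ridge) and sparse (lasso) whole-genome predictors plus an external risk score are combined in a second-stage linear model and compared by predictive correlation. The genotypes, effects and "GWAMA-like" score are simulated; a real application would use out-of-sample (e.g. out-of-fold) first-stage predictions so the meta-model weights are not optimistically biased.

```python
# Hedged sketch of an ensemble meta-model over whole-genome predictors.
import numpy as np
from sklearn.linear_model import Ridge, Lasso, LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n, p = 2000, 500
X = rng.normal(size=(n, p))                               # standardised genotypes (stand-in)
beta = np.zeros(p)
beta[:20] = rng.normal(0, 0.3, 20)                        # a few moderate effects
y = X @ beta + rng.normal(0, 1, n)                        # phenotype, e.g. an HDL-like trait
risk_score = X @ (beta + rng.normal(0, 0.05, p))          # noisy external score (GWAMA stand-in)

X_tr, X_te, y_tr, y_te, rs_tr, rs_te = train_test_split(X, y, risk_score, random_state=2)

ridge = Ridge(alpha=100.0).fit(X_tr, y_tr)                # dense predictor
lasso = Lasso(alpha=0.05).fit(X_tr, y_tr)                 # sparse predictor

# Second-stage meta-model on the individual predictors' outputs
Z_tr = np.column_stack([ridge.predict(X_tr), lasso.predict(X_tr), rs_tr])
Z_te = np.column_stack([ridge.predict(X_te), lasso.predict(X_te), rs_te])
meta = LinearRegression().fit(Z_tr, y_tr)

for name, pred in [("ridge", ridge.predict(X_te)), ("lasso", lasso.predict(X_te)),
                   ("meta-model", meta.predict(Z_te))]:
    print(f"{name}: r = {np.corrcoef(pred, y_te)[0, 1]:.2f}")
```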

  5. Predictive modeling: potential application in prevention services.

    Science.gov (United States)

    Wilson, Moira L; Tumen, Sarah; Ota, Rissa; Simmers, Anthony G

    2015-05-01

    In 2012, the New Zealand Government announced a proposal to introduce predictive risk models (PRMs) to help professionals identify and assess children at risk of abuse or neglect as part of a preventive early intervention strategy, subject to further feasibility study and trialing. The purpose of this study is to examine technical feasibility and predictive validity of the proposal, focusing on a PRM that would draw on population-wide linked administrative data to identify newborn children who are at high priority for intensive preventive services. Data analysis was conducted in 2013 based on data collected in 2000-2012. A PRM was developed using data for children born in 2010 and externally validated for children born in 2007, examining outcomes to age 5 years. Performance of the PRM in predicting administratively recorded substantiations of maltreatment was good compared to the performance of other tools reviewed in the literature, both overall, and for indigenous Māori children. Some, but not all, of the children who go on to have recorded substantiations of maltreatment could be identified early using PRMs. PRMs should be considered as a potential complement to, rather than a replacement for, professional judgment. Trials are needed to establish whether risks can be mitigated and PRMs can make a positive contribution to frontline practice, engagement in preventive services, and outcomes for children. Deciding whether to proceed to trial requires balancing a range of considerations, including ethical and privacy risks and the risk of compounding surveillance bias. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.

  6. Accurate torque-speed performance prediction for brushless dc motors

    Science.gov (United States)

    Gipper, Patrick D.

    Desirable characteristics of the brushless dc motor (BLDCM) have resulted in its application for electrohydrostatic (EH) and electromechanical (EM) actuation systems. Effective application of the BLDCM, however, requires accurate prediction of performance. The minimum necessary performance characteristics are motor torque versus speed, peak and average supply current, and efficiency. BLDCM nonlinear simulation software specifically adapted for torque-speed prediction is presented. The capability of the software to quickly and accurately predict performance has been verified on fractional to integral HP motor sizes, and is presented. Additionally, the capability of torque-speed prediction with commutation angle advance is demonstrated.

  7. Predicting Students' Performance in the Senior Secondary ...

    African Journals Online (AJOL)

    cce

    …the use of z-test, correlation analysis and multiple regression. The findings revealed… [Remainder of record is page residue from the source article, including a fragment of Table 2: Credit Performance in SSC Examinations in Sampled Schools, giving yearly credit pass percentages in English language, Mathematics, Physics, Chemistry and Biology.]

  8. Challenges of student selection: Predicting academic performance ...

    African Journals Online (AJOL)

    Finding accurate predictors of tertiary academic performance, specifically for disadvantaged students, is essential because of budget constraints and the need of the labour market to address employment equity. Increased retention, throughput and decreased dropout rates are vital. When making admission decisions, the

  9. Goal Setting and Expectancy Theory Predictions of Effort and Performance.

    Science.gov (United States)

    Dossett, Dennis L.; Luce, Helen E.

    Neither expectancy (VIE) theory nor goal setting alone is an effective determinant of individual effort and task performance. To test the combined ability of VIE and goal setting to predict effort and performance, 44 real estate agents and their managers completed questionnaires. Quarterly income goals predicted managers' ratings of agents' effort,…

  10. Prediction Model for Gastric Cancer Incidence in Korean Population.

    Science.gov (United States)

    Eom, Bang Wool; Joo, Jungnam; Kim, Sohee; Shin, Aesun; Yang, Hye-Ryung; Park, Junghyun; Choi, Il Ju; Kim, Young-Woo; Kim, Jeongseon; Nam, Byung-Ho

    2015-01-01

    Predicting high risk groups for gastric cancer and motivating these groups to receive regular checkups is required for the early detection of gastric cancer. The aim of this study was to develop a prediction model for gastric cancer incidence based on a large population-based cohort in Korea. Based on the National Health Insurance Corporation data, we analyzed 10 major risk factors for gastric cancer. The Cox proportional hazards model was used to develop gender-specific prediction models for gastric cancer development, and the performance of the developed model in terms of discrimination and calibration was also validated using an independent cohort. Discrimination ability was evaluated using Harrell's C-statistics, and the calibration was evaluated using a calibration plot and slope. During a median of 11.4 years of follow-up, 19,465 (1.4%) and 5,579 (0.7%) newly developed gastric cancer cases were observed among 1,372,424 men and 804,077 women, respectively. The prediction models included age, BMI, family history, meal regularity, salt preference, alcohol consumption, smoking and physical activity for men, and age, BMI, family history, salt preference, alcohol consumption, and smoking for women. This prediction model showed good accuracy and predictability in both the development and validation cohorts (C-statistics: 0.764 for men, 0.706 for women). In this study, a prediction model for gastric cancer incidence was developed that displayed a good performance.
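    The sketch below mirrors the modelling machinery on simulated data, assuming the lifelines package is available: a Cox proportional hazards model is fitted to censored follow-up data and discrimination is summarised by Harrell's C-statistic. The risk factors, effect sizes and follow-up scheme are invented and far smaller than the national cohort.

```python
# Cox proportional hazards sketch with Harrell's C on simulated cohort data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(9)
n = 20_000
df = pd.DataFrame({
    "age": rng.uniform(30, 80, n),
    "bmi": rng.normal(24, 3, n),
    "smoking": rng.integers(0, 2, n),
    "family_history": rng.binomial(1, 0.1, n),
})
# Simulate exponential event times whose hazard rises with age, smoking and family history
hazard = 1e-3 * np.exp(0.04 * (df["age"] - 50) + 0.5 * df["smoking"]
                       + 0.6 * df["family_history"])
event_time = rng.exponential(1.0 / hazard.to_numpy())
follow_up = rng.uniform(5, 12, n)                  # years of observation (censoring)
df["duration"] = np.minimum(event_time, follow_up)
df["event"] = (event_time <= follow_up).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
print(f"Harrell's C-statistic: {cph.concordance_index_:.3f}")
```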

  12. Predicting sales performance: Strengthening the personality – job performance linkage

    NARCIS (Netherlands)

    T.B. Sitser (Thomas)

    2014-01-01

    Many organizations worldwide use personality measures to select applicants for sales jobs or to assess incumbent sales employees. In the present dissertation, consisting of four independent studies, five approaches to strengthen the personality-sales performance

  13. Prediction of Gas Injection Performance for Heterogeneous Reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Blunt, Martin J.; Orr, Jr., Franklin M.

    1999-12-20

    This report describes research carried out in the Department of Petroleum Engineering at Stanford University from September 1998 - September 1998 under the third year of a three-year Department of Energy (DOE) grant on the ''Prediction of Gas Injection Performance for Heterogeneous Reservoirs''. The research effort is an integrated study of the factors affecting gas injection, from the pore scale to the field scale, and involves theoretical analysis, laboratory experiments and numerical simulation. The research is divided into four main areas: (1) Pore scale modeling of three-phase flow in porous media; (2) Laboratory experiments and analysis of factors influencing gas injection performance at the core scale with an emphasis on the fundamentals of three-phase flow; (3) Benchmark simulations of gas injection at the field scale; and (4) Development of streamline-based reservoir simulator.

  14. A Global Model for Bankruptcy Prediction.

    Science.gov (United States)

    Alaminos, David; Del Castillo, Agustín; Fernández, Manuel Ángel

    2016-01-01

    The recent world financial crisis has increased the number of bankruptcies in numerous countries and has resulted in a new area of research which responds to the need to predict this phenomenon, not only at the level of individual countries, but also at a global level, offering explanations of the common characteristics shared by the affected companies. Nevertheless, few studies focus on the prediction of bankruptcies globally. In order to compensate for this lack of empirical literature, this study has used a methodological framework of logistic regression to construct predictive bankruptcy models for Asia, Europe and America, and other global models for the whole world. The objective is to construct a global model with a high capacity for predicting bankruptcy in any region of the world. The results obtained have allowed us to confirm the superiority of the global model in comparison to regional models over periods of up to three years prior to bankruptcy.
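    As an illustration of the basic modelling machinery only (not the study's variables or data), the sketch below fits a logistic-regression bankruptcy model to simulated financial ratios and reports hold-out discrimination; a "global" model in the paper's sense would simply pool firms from all regions in the training sample.

```python
# Logistic-regression bankruptcy sketch on simulated financial ratios.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(21)
n = 8000
liquidity = rng.normal(1.5, 0.5, n)         # current ratio (stand-in)
leverage = rng.normal(0.5, 0.2, n)          # debt / assets
profitability = rng.normal(0.05, 0.08, n)   # return on assets
X = np.column_stack([liquidity, leverage, profitability])

# Bankruptcy probability rises with leverage, falls with liquidity and profitability
logit = -1.0 - 1.2 * (liquidity - 1.5) + 3.0 * (leverage - 0.5) - 8.0 * profitability
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))     # 1 = bankrupt

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=21, stratify=y)
clf = LogisticRegression().fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]
print(f"hold-out AUC: {roc_auc_score(y_te, proba):.2f}, "
      f"accuracy: {clf.score(X_te, y_te):.2f}")
```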

  15. Alpine Skiing Recommendation Tool and Performance Prediction

    Directory of Open Access Journals (Sweden)

    Camille Brousseau

    2018-02-01

    Full Text Available Selecting appropriate skis remains a difficult task for many customers due to the lack of information provided on the bending and torsional stiffnesses of these products. This work investigates how these mechanical properties influence on-snow ski performance and how an individual skier's profile is related to his or her preferred mechanical properties. To do so, twelve skis were manufactured to exhibit large variations in stiffness. Twenty-three skiers provided on-snow feedback and skier profiles through a questionnaire. Simple and multivariable linear correlation analyses were carried out between the skier profile data, their evaluations of the skis and the stiffnesses of the skis. Strong relationships were found between the properties of the skis and some performance criteria, and between the profile of the skiers and the properties of their favourite skis. With further testing, these relationships could be used to design personalized recommendation tools or to guide the design of custom skis.

  16. Design and Performance Analysis of Incremental Networked Predictive Control Systems.

    Science.gov (United States)

    Pang, Zhong-Hua; Liu, Guo-Ping; Zhou, Donghua

    2016-06-01

    This paper is concerned with the design and performance analysis of networked control systems with network-induced delay, packet disorder, and packet dropout. Based on the incremental form of the plant input-output model and an incremental error feedback control strategy, an incremental networked predictive control (INPC) scheme is proposed to actively compensate for the round-trip time delay resulting from the above communication constraints. The output tracking performance and closed-loop stability of the resulting INPC system are considered for two cases: 1) plant-model match case and 2) plant-model mismatch case. For the former case, the INPC system can achieve the same output tracking performance and closed-loop stability as those of the corresponding local control system. For the latter case, a sufficient condition for the stability of the closed-loop INPC system is derived using the switched system theory. Furthermore, for both cases, the INPC system can achieve a zero steady-state output tracking error for step commands. Finally, both numerical simulations and practical experiments on an Internet-based servo motor system illustrate the effectiveness of the proposed method.

  17. Using the 2 x 2 Framework of Achievement Goals to Predict Achievement Emotions and Academic Performance

    Science.gov (United States)

    Putwain, David W.; Sander, Paul; Larkin, Derek

    2013-01-01

    Previous work has established how achievement emotions are related to the trichotomous model of achievement goals, and how they predict academic performance. In our study we examine relations using an additional, mastery-avoidance goal, and whether outcome-focused emotions are predicted by mastery as well as performance goals. Results showed that…

  18. Firm Sustainability Performance Index Modeling

    Directory of Open Access Journals (Sweden)

    Che Wan Jasimah Bt Wan Mohamed Radzi

    2015-12-01

    Full Text Available The main objective of this paper is to develop a model for a firm sustainability performance index by applying both classical and Bayesian structural equation modeling (parametric and semi-parametric modeling). Both techniques are applied to research data collected through a survey of the food manufacturing industries of China, Taiwan, and Malaysia. For estimating the firm sustainability performance index we consider three main indicators: knowledge management, organizational learning, and business strategy. Based on both the Bayesian and the classical methodology, we confirm that knowledge management and business strategy have a significant impact on the firm sustainability performance index.

  19. Predicting students' intention to use stimulants for academic performance enhancement.

    Science.gov (United States)

    Ponnet, Koen; Wouters, Edwin; Walrave, Michel; Heirman, Wannes; Van Hal, Guido

    2015-02-01

    The non-medical use of stimulants for academic performance enhancement is becoming a more common practice among college and university students. The objective of this study is to gain a better understanding of students' intention to use stimulant medication for the purpose of enhancing their academic performance. Based on an extended model of Ajzen's theory of planned behavior, we examined the predictive value of attitude, subjective norm, perceived behavioral control, psychological distress, procrastination, substance use, and alcohol use on students' intention to use stimulants to improve their academic performance. The sample consisted of 3,589 Flemish university and college students (mean age: 21.59, SD: 4.09), who participated anonymously in an online survey conducted in March and April 2013. Structural equation modeling was used to investigate the relationships among the study variables. Our results indicate that subjective norm is the strongest predictor of students' intention to use stimulant medication, followed by attitude and perceived behavioral control. To a lesser extent, procrastinating tendencies, psychological distress, and substance abuse contribute to students' intention. Conclusions/Importance: Based on these findings, we provide several recommendations on how to curtail students' intention to use stimulant medication for the purpose of improving their academic performance. In addition, we urge researchers to identify other psychological variables that might be related to students' intention.

  20. Fingerprint verification prediction model in hand dermatitis.

    Science.gov (United States)

    Lee, Chew K; Chang, Choong C; Johor, Asmah; Othman, Puwira; Baba, Roshidah

    2015-07-01

    Hand dermatitis associated fingerprint changes is a significant problem and affects fingerprint verification processes. This study was done to develop a clinically useful prediction model for fingerprint verification in patients with hand dermatitis. A case-control study involving 100 patients with hand dermatitis was conducted. All patients verified their thumbprints against their identity card. Registered fingerprints were randomized into a model derivation and a model validation group. The predictive model was derived using multiple logistic regression. Validation was done using the goodness-of-fit test. The fingerprint verification prediction model consists of a major criterion (fingerprint dystrophy area of ≥ 25%) and two minor criteria (long horizontal lines and long vertical lines). The presence of the major criterion predicts that verification will almost always fail, while the presence of both minor criteria and the presence of one minor criterion predict a high and a low risk of verification failure, respectively. When none of the criteria are met, the fingerprint almost always passes verification. The area under the receiver operating characteristic curve was 0.937, and the goodness-of-fit test showed agreement between the observed and expected numbers (P = 0.26). The derived fingerprint verification failure prediction model is validated and highly discriminatory in predicting the risk of fingerprint verification failure in patients with hand dermatitis. © 2014 The International Society of Dermatology.
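
    The major/minor criteria reported in this abstract translate naturally into a simple risk rule. The sketch below mirrors the thresholds quoted above; the function name and qualitative labels are assumptions for illustration, not the authors' scoring instrument.

```python
# Illustrative sketch of applying the reported major/minor criteria as a risk rule.
def verification_failure_risk(dystrophy_area_pct, long_horizontal_lines, long_vertical_lines):
    """Return a qualitative risk category for fingerprint verification failure."""
    if dystrophy_area_pct >= 25:          # major criterion
        return "almost always fails"
    minor = int(long_horizontal_lines) + int(long_vertical_lines)
    if minor == 2:
        return "high risk of failure"
    if minor == 1:
        return "low risk of failure"
    return "almost always passes"

print(verification_failure_risk(30, False, False))  # -> almost always fails
print(verification_failure_risk(10, True, True))    # -> high risk of failure
```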

  1. Critical review of glass performance modeling

    International Nuclear Information System (INIS)

    Bourcier, W.L.

    1994-07-01

    Borosilicate glass is to be used for permanent disposal of high-level nuclear waste in a geologic repository. Mechanistic chemical models are used to predict the rate at which radionuclides will be released from the glass under repository conditions. The most successful and useful of these models link reaction path geochemical modeling programs with a glass dissolution rate law that is consistent with transition state theory. These models have been used to simulate several types of short-term laboratory tests of glass dissolution and to predict the long-term performance of the glass in a repository. Although mechanistically based, the current models are limited by a lack of unambiguous experimental support for some of their assumptions. The most severe problem of this type is the lack of an existing validated mechanism that controls long-term glass dissolution rates. Current models can be improved by performing carefully designed experiments and using the experimental results to validate the rate-controlling mechanisms implicit in the models. These models should be supported with long-term experiments to be used for model validation. The mechanistic basis of the models should be explored by using modern molecular simulations such as molecular orbital and molecular dynamics to investigate both the glass structure and its dissolution process
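
    The transition-state-theory rate law at the heart of such glass dissolution models can be sketched in a few lines. The affinity-based form and every parameter value below are generic, hypothetical illustrations rather than the calibrated rate law of any particular glass model.

```python
# Sketch of a transition-state-theory style dissolution rate law:
#   rate = k * S * a_H+^n * (1 - (Q/K)**sigma)
# where Q/K is the saturation state of the rate-limiting phase.
def dissolution_rate(k, surface_area, aH, n, Q, K, sigma=1.0):
    """Glass dissolution rate (mass/time); all parameters are illustrative."""
    affinity_term = 1.0 - (Q / K) ** sigma
    return k * surface_area * aH ** n * affinity_term

# Far from equilibrium (Q << K) the rate approaches the forward rate;
# near saturation (Q -> K) it tends to zero, which is why the long-term
# rate-controlling mechanism matters so much for repository predictions.
print(dissolution_rate(k=1e-6, surface_area=10.0, aH=1e-9, n=-0.4, Q=1e-4, K=1e-3))
```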

  2. Critical review of glass performance modeling

    Energy Technology Data Exchange (ETDEWEB)

    Bourcier, W.L. [Lawrence Livermore National Lab., CA (United States)

    1994-07-01

    Borosilicate glass is to be used for permanent disposal of high-level nuclear waste in a geologic repository. Mechanistic chemical models are used to predict the rate at which radionuclides will be released from the glass under repository conditions. The most successful and useful of these models link reaction path geochemical modeling programs with a glass dissolution rate law that is consistent with transition state theory. These models have been used to simulate several types of short-term laboratory tests of glass dissolution and to predict the long-term performance of the glass in a repository. Although mechanistically based, the current models are limited by a lack of unambiguous experimental support for some of their assumptions. The most severe problem of this type is the lack of an existing validated mechanism that controls long-term glass dissolution rates. Current models can be improved by performing carefully designed experiments and using the experimental results to validate the rate-controlling mechanisms implicit in the models. These models should be supported with long-term experiments to be used for model validation. The mechanistic basis of the models should be explored by using modern molecular simulations such as molecular orbital and molecular dynamics to investigate both the glass structure and its dissolution process.

  3. Predicting human performance differences on multiple interface alternatives: KLM, GOMS and CogTool are unreliable

    NARCIS (Netherlands)

    Jorritsma, Wiard; Haga, Peter-Jan; Cnossen, Fokie; Dierckx, Rudi; Oudkerk, Matthijs; van Ooijen, Peter

    2015-01-01

    Cognitive modeling tools, such as KLM, GOMS and CogTool, can be used to predict human performance on interface designs before they are implemented and without the need for user testing. The model predictions can inform interface design, because they allow designers to quantitatively compare multiple…

  4. Predictive Model of Systemic Toxicity (SOT)

    Science.gov (United States)

    In an effort to ensure chemical safety in light of regulatory advances away from reliance on animal testing, USEPA and L’Oréal have collaborated to develop a quantitative systemic toxicity prediction model. Prediction of human systemic toxicity has proved difficult and remains a ...

  5. Testicular Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing testicular cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  6. Pancreatic Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing pancreatic cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  7. Colorectal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  8. Prostate Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing prostate cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  9. Bladder Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing bladder cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  10. Esophageal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing esophageal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  11. Cervical Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing cervical cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  12. Breast Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing breast cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  13. Lung Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing lung cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  14. Liver Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing liver cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  15. Ovarian Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing ovarian cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  16. Posterior Predictive Model Checking in Bayesian Networks

    Science.gov (United States)

    Crawford, Aaron

    2014-01-01

    This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex…

  17. Cold-Blooded Attention: Finger Temperature Predicts Attentional Performance

    Science.gov (United States)

    Vergara, Rodrigo C.; Moënne-Loccoz, Cristóbal; Maldonado, Pedro E.

    2017-01-01

    Thermal stress has been shown to increase the chances of unsafe behavior during industrial and driving performances due to reductions in mental and attentional resources. Nonetheless, establishing appropriate safety standards regarding environmental temperature has been a major problem, as modulations are also affected by the task type, complexity, workload, duration, and previous experience with the task. To bypass this attentional and thermoregulatory problem, we focused on the body rather than environmental temperature. Specifically, we measured tympanic, forehead, finger and environmental temperatures accompanied by a battery of attentional tasks. We considered a 10 min baseline period wherein subjects were instructed to sit and relax, followed by three attentional tasks: a continuous performance task (CPT), a flanker task (FT) and a counting task (CT). Using multiple linear regression models, we evaluated which variable(s) were the best predictors of performance. The results showed a decrement in finger temperature due to instruction and task engagement that was absent when the subject was instructed to relax. No changes were observed in tympanic or forehead temperatures, while the environmental temperature remained almost constant for each subject. Specifically, the magnitude of the change in finger temperature was the best predictor of performance in all three attentional tasks. The results presented here suggest that finger temperature can be used as a predictor of alertness, as it predicted performance in attentional tasks better than environmental temperature. These findings strongly support that peripheral temperature can be used as a tool to prevent unsafe behaviors and accidents. PMID:28955215

  18. Cold-Blooded Attention: Finger Temperature Predicts Attentional Performance

    Directory of Open Access Journals (Sweden)

    Rodrigo C. Vergara

    2017-09-01

    Full Text Available Thermal stress has been shown to increase the chances of unsafe behavior during industrial and driving performances due to reductions in mental and attentional resources. Nonetheless, establishing appropriate safety standards regarding environmental temperature has been a major problem, as modulations are also affected by the task type, complexity, workload, duration, and previous experience with the task. To bypass this attentional and thermoregulatory problem, we focused on the body rather than environmental temperature. Specifically, we measured tympanic, forehead, finger and environmental temperatures accompanied by a battery of attentional tasks. We considered a 10 min baseline period wherein subjects were instructed to sit and relax, followed by three attentional tasks: a continuous performance task (CPT), a flanker task (FT) and a counting task (CT). Using multiple linear regression models, we evaluated which variable(s) were the best predictors of performance. The results showed a decrement in finger temperature due to instruction and task engagement that was absent when the subject was instructed to relax. No changes were observed in tympanic or forehead temperatures, while the environmental temperature remained almost constant for each subject. Specifically, the magnitude of the change in finger temperature was the best predictor of performance in all three attentional tasks. The results presented here suggest that finger temperature can be used as a predictor of alertness, as it predicted performance in attentional tasks better than environmental temperature. These findings strongly support that peripheral temperature can be used as a tool to prevent unsafe behaviors and accidents.

  19. Prediction of hourly solar radiation with multi-model framework

    International Nuclear Information System (INIS)

    Wu, Ji; Chan, Chee Keong

    2013-01-01

    Highlights: • A novel approach to predict solar radiation through the use of clustering paradigms. • Development of prediction models based on the intrinsic pattern observed in each cluster. • Prediction based on proper clustering and selection of the model for the current period provides better results than other methods. • Experiments were conducted on actual solar radiation data obtained from a weather station in Singapore. - Abstract: In this paper, a novel multi-model prediction framework for the prediction of solar radiation is proposed. The framework starts from the assumption that there are several patterns embedded in the solar radiation series. To extract the underlying patterns, the solar radiation series is first segmented into smaller subsequences, and the subsequences are further grouped into different clusters. For each cluster, an appropriate prediction model is trained. A procedure for pattern identification is then developed to identify the pattern that fits the current period. Based on this pattern, the corresponding prediction model is applied to obtain the prediction value. The prediction results of the proposed framework are then compared to those of other techniques. It is shown that the proposed framework provides superior performance as compared to others
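
    The cluster-then-predict idea described here can be sketched compactly. The window length, cluster count, clustering algorithm, and per-cluster regressor below are assumptions chosen for illustration; the paper's actual models are not reproduced.

```python
# Minimal sketch: segment a series into subsequences, cluster them, and train
# one regressor per cluster; predict with the model of the matching cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
series = np.sin(np.linspace(0, 60, 2000)) + 0.1 * rng.normal(size=2000)

W = 24  # subsequence (window) length
X = np.array([series[i:i + W] for i in range(len(series) - W)])
y = series[W:]                      # next value after each window

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
models = {c: LinearRegression().fit(X[km.labels_ == c], y[km.labels_ == c])
          for c in range(km.n_clusters)}

# Prediction: identify the pattern of the current window, then apply the
# regressor trained for that cluster.
window = series[-W:].reshape(1, -1)
cluster = km.predict(window)[0]
print("predicted next value:", float(models[cluster].predict(window)[0]))
```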

  20. A statistical study of the performance of the Hakamada-Akasofu-Fry version 2 numerical model in predicting solar shock arrival times at Earth during different phases of solar cycle 23

    Directory of Open Access Journals (Sweden)

    S. M. P. McKenna-Lawlor

    2012-02-01

    Full Text Available The performance of the Hakamada-Akasofu-Fry version 2 (HAFv.2) numerical model, which provides predictions of solar shock arrival times at Earth, was subjected to a statistical study to investigate those solar/interplanetary circumstances under which the model performed well/poorly during key phases (rise/maximum/decay) of solar cycle 23. In addition to analyzing elements of the overall data set (584 selected events) associated with particular cycle phases, subsets were formed such that those events making up a particular subset showed common characteristics. The statistical significance of the results obtained using the various sets/subsets was generally very low and these results were not significant as compared with the hit by chance rate (50%). This implies a low level of confidence in the predictions of the model with no compelling result encouraging its use. However, the data suggested that the success rates of HAFv.2 were higher when the background solar wind speed at the time of shock initiation was relatively fast. Thus, in scenarios where the background solar wind speed is elevated and the calculated success rate significantly exceeds the rate by chance, the forecasts could provide potential value to the customer. With the composite statistics available for solar cycle 23, the calculated success rate at high solar wind speed, although clearly above 50%, was indicative rather than conclusive. The RMS error estimated for shock arrival times for every cycle phase and for the composite sample was in each case significantly better than would be expected for a random data set. Also, the parameter "Probability of Detection, yes" (PODy), which presents the proportion of Yes observations that were correctly forecast (i.e. the ratio between the shocks correctly predicted and all the shocks observed), yielded values for the rise/maximum/decay phases of the cycle and using the composite sample of 0.85, 0.64, 0.79 and 0.77, respectively. The statistical
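
    For reference, the skill scores quoted above follow from a simple contingency table of forecasts versus observations. The sketch below uses the standard definitions; the counts are hypothetical and not taken from the study.

```python
# Contingency-table skill scores of the kind used in shock-arrival verification.
def pod_yes(hits, misses):
    """Probability of Detection (yes): fraction of observed shocks correctly forecast."""
    return hits / (hits + misses)

def success_rate(hits, misses, false_alarms, correct_nulls):
    """Fraction of all forecasts that were correct."""
    total = hits + misses + false_alarms + correct_nulls
    return (hits + correct_nulls) / total

hits, misses, false_alarms, correct_nulls = 77, 23, 30, 70  # hypothetical counts
print("PODy:", pod_yes(hits, misses))                        # -> 0.77
print("success rate:", success_rate(hits, misses, false_alarms, correct_nulls))
```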

  1. Performance analysis of tracked panel according to predicted global radiation

    International Nuclear Information System (INIS)

    Chang, T.P.

    2009-01-01

    In this paper, the performance of a south-facing single-axis tracked panel was analyzed according to global radiation predicted by an empirical model. Mathematical expressions appropriate for a single-axis tracking system were derived to calculate the radiation on it. Instantaneous increments of solar energy collected by the tracked panel relative to a fixed panel are illustrated. The validity of the empirical model for the Taiwan area is also examined against actual irradiation data observed in Taipei. The results are summarized as follows: the gains made by the tracked panel relative to a fixed panel are between 20.0% and 33.9% for four specified days of the year, between 20.9% and 33.2% for the four seasons, and 27.6% over the entire year. For latitudes below 65 deg., the yearly optimal tilt angle of a fixed panel is close to 0.8 times the latitude, and the irradiation ratio of the tracked panel to the fixed panel is about 1.3; both values are smaller than the corresponding values calculated from extraterrestrial radiation, suggesting that the installation angle should be adjusted toward a flatter angle and that the gain of the tracked panel will be reduced when it operates in cloudy climates or polluted environments. Although the captured radiation increases with the maximal rotation angle of the panel, the benefit in the global radiation case is still not as good as that in the extraterrestrial radiation case. The observed irradiation data are much lower than the data predicted by the empirical model; however, the trend of the fitting curve to the observed data is broadly in agreement with that of the predicted data. The yearly gain is 14.3% when a tracked panel is employed throughout the year.

  2. Predicting and Modeling RNA Architecture

    Science.gov (United States)

    Westhof, Eric; Masquida, Benoît; Jossinet, Fabrice

    2011-01-01

    SUMMARY A general approach for modeling the architecture of large and structured RNA molecules is described. The method exploits the modularity and the hierarchical folding of RNA architecture that is viewed as the assembly of preformed double-stranded helices defined by Watson-Crick base pairs and RNA modules maintained by non-Watson-Crick base pairs. Despite the extensive molecular neutrality observed in RNA structures, specificity in RNA folding is achieved through global constraints like lengths of helices, coaxiality of helical stacks, and structures adopted at the junctions of helices. The Assemble integrated suite of computer tools allows for sequence and structure analysis as well as interactive modeling by homology or ab initio assembly with possibilities for fitting within electronic density maps. The local key role of non-Watson-Crick pairs guides RNA architecture formation and offers metrics for assessing the accuracy of three-dimensional models in a more useful way than usual root mean square deviation (RMSD) values. PMID:20504963

  3. Mechanistic-Empirical Pavement Design Guide Flexible Pavement Performance Prediction Models Volume III Field Guide - Calibration and User's Guide for the Mechanistic-Empirical Pavement Design Guide

    Science.gov (United States)

    2007-08-01

    The objective of this research study was to develop performance characteristics or variables (e.g., ride quality, rutting, : fatigue cracking, transverse cracking) of flexible pavements in Montana, and to use these characteristics in the : implementa...

  4. Multiple Steps Prediction with Nonlinear ARX Models

    OpenAIRE

    Zhang, Qinghua; Ljung, Lennart

    2007-01-01

    NLARX (NonLinear AutoRegressive with eXogenous inputs) models are frequently used in black-box nonlinear system identification. Though it is easy to make one step ahead prediction with such models, multiple steps prediction is far from trivial. The main difficulty is that in general there is no easy way to compute the mathematical expectation of an output conditioned by past measurements. An optimal solution would require intensive numerical computations related to nonlinear filtering. The pur...
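
    A common, and as this abstract notes suboptimal, workaround is simply to iterate the one-step-ahead model, feeding predictions back in place of measurements. The sketch below illustrates that idea under stated assumptions: the function f is a stand-in nonlinear predictor, not an identified model, and the lag orders are arbitrary.

```python
# Iterated multi-step prediction with a one-step-ahead NARX-type model.
import numpy as np

def f(y_lags, u_lags):
    # Stand-in nonlinear one-step-ahead predictor (illustrative only).
    return 0.7 * y_lags[0] - 0.1 * y_lags[1] ** 2 + 0.5 * u_lags[0]

def predict_k_steps(y_hist, u_future, k, na=2, nb=1):
    """Iterate the one-step model k steps, feeding predictions back as outputs."""
    y = list(y_hist)
    for step in range(k):
        y_lags = y[-1:-na - 1:-1]          # most recent na outputs, newest first
        u_lags = u_future[step:step + nb]  # known or assumed future inputs
        y.append(f(y_lags, u_lags))
    return np.array(y[-k:])

print(predict_k_steps(y_hist=[0.2, 0.4], u_future=[1.0] * 5, k=5))
```

    Note that this plug-in iteration replaces the conditional expectation with point predictions, which is exactly the approximation whose limitations the abstract discusses.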

  5. Predictability of extreme values in geophysical models

    Directory of Open Access Journals (Sweden)

    A. E. Sterk

    2012-09-01

    Full Text Available Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical models. We study whether finite-time Lyapunov exponents are larger or smaller for initial conditions leading to extremes. General statements on whether extreme values are better or less predictable are not possible: the predictability of extreme values depends on the observable, the attractor of the system, and the prediction lead time.

  6. Model complexity control for hydrologic prediction

    Science.gov (United States)

    Schoups, G.; van de Giesen, N. C.; Savenije, H. H. G.

    2008-12-01

    A common concern in hydrologic modeling is overparameterization of complex models given limited and noisy data. This leads to problems of parameter nonuniqueness and equifinality, which may negatively affect prediction uncertainties. A systematic way of controlling model complexity is therefore needed. We compare three model complexity control methods for hydrologic prediction, namely, cross validation (CV), Akaike's information criterion (AIC), and structural risk minimization (SRM). Results show that simulation of water flow using non-physically-based models (polynomials in this case) leads to increasingly better calibration fits as the model complexity (polynomial order) increases. However, prediction uncertainty worsens for complex non-physically-based models because of overfitting of noisy data. Incorporation of physically based constraints into the model (e.g., storage-discharge relationship) effectively bounds prediction uncertainty, even as the number of parameters increases. The conclusion is that overparameterization and equifinality do not lead to a continued increase in prediction uncertainty, as long as models are constrained by such physical principles. Complexity control of hydrologic models reduces parameter equifinality and identifies the simplest model that adequately explains the data, thereby providing a means of hydrologic generalization and classification. SRM is a promising technique for this purpose, as it (1) provides analytic upper bounds on prediction uncertainty, hence avoiding the computational burden of CV, and (2) extends the applicability of classic methods such as AIC to finite data. The main hurdle in applying SRM is the need for an a priori estimation of the complexity of the hydrologic model, as measured by its Vapnik-Chervonenkis (VC) dimension. Further research is needed in this area.
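
    One of the complexity-control criteria mentioned above (AIC) can be illustrated on the same kind of polynomial example the abstract uses. The synthetic data, noise level, and Gaussian form of the AIC below are assumptions for illustration, not the study's setup.

```python
# Sketch: AIC applied to polynomial fits of increasing order.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * x) + 0.2 * rng.normal(size=50)

def aic(y_obs, y_fit, k):
    """Gaussian AIC: n*log(RSS/n) + 2k, where k is the number of fitted parameters."""
    n = len(y_obs)
    rss = np.sum((y_obs - y_fit) ** 2)
    return n * np.log(rss / n) + 2 * k

for order in range(1, 10):
    coeffs = np.polyfit(x, y, order)
    y_fit = np.polyval(coeffs, x)
    print(order, round(aic(y, y_fit, order + 1), 1))
# The calibration fit keeps improving with order, but AIC penalizes the extra
# parameters and typically bottoms out at a moderate polynomial order.
```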

  7. Enhancing Flood Prediction Reliability Using Bayesian Model Averaging

    Science.gov (United States)

    Liu, Z.; Merwade, V.

    2017-12-01

    Uncertainty analysis is an indispensable part of modeling the hydrology and hydrodynamics of non-idealized environmental systems. Compared to relying on predictions from a single model simulation, using an ensemble of predictions that considers uncertainty from different sources is more reliable. In this study, Bayesian model averaging (BMA) is applied to the Black River watershed in Arkansas and Missouri by combining multi-model simulations to obtain reliable deterministic water stage and probabilistic inundation extent predictions. The simulation ensemble is generated from 81 LISFLOOD-FP subgrid model configurations that include uncertainty from channel shape, channel width, channel roughness and discharge. Model simulation outputs are trained with observed water stage data during one flood event, and BMA prediction ability is validated for another flood event. Results from this study indicate that BMA does not always outperform all members in the ensemble, but it provides relatively robust deterministic flood stage predictions across the basin. Station-based BMA (BMA_S) water stage prediction performs better than global-based BMA (BMA_G) prediction, which is superior to the ensemble mean prediction. Additionally, the high-frequency flood inundation extent (probability greater than 60%) in the BMA_G probabilistic map is more accurate than the probabilistic flood inundation extent based on equal weights.
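
    The core of a BMA combination is a likelihood-weighted average of ensemble members. The sketch below is a deliberately simplified proxy (fixed error variance, no EM weight estimation) with invented numbers; it is not the study's BMA implementation.

```python
# Minimal sketch of a Bayesian-model-averaging style combination of ensemble members.
import numpy as np

def bma_weights(train_preds, train_obs, sigma=0.5):
    """Weights proportional to each member's Gaussian likelihood on training data."""
    ll = np.array([np.sum(-0.5 * ((train_obs - p) / sigma) ** 2) for p in train_preds])
    w = np.exp(ll - ll.max())           # subtract max for numerical stability
    return w / w.sum()

train_obs = np.array([2.0, 2.5, 3.1, 3.6])          # observed stages (hypothetical)
train_preds = [np.array([1.9, 2.6, 3.0, 3.8]),      # member 1
               np.array([2.4, 2.9, 3.6, 4.2]),      # member 2
               np.array([2.0, 2.4, 3.2, 3.5])]      # member 3

w = bma_weights(train_preds, train_obs)
new_preds = np.array([4.1, 4.6, 3.9])               # members' forecasts for a new time
print("weights:", np.round(w, 3), "BMA forecast:", float(w @ new_preds))
```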

  8. Estimation of the upper bound of predictive performance for alternative models that use in vivo reference data (OpenTox USA 2017)

    Science.gov (United States)

    The number of chemicals with limited toxicological information for chemical safety decision-making has accelerated alternative model development, which often are evaluated via referencing animal toxicology studies. In vivo studies are generally considered the standard for hazard ...

  9. Performance predictions improve prospective memory and influence retrieval experience.

    Science.gov (United States)

    Meier, Beat; von Wartburg, Philipp; Matter, Sibylle; Rothen, Nicolas; Reber, Rolf

    2011-03-01

    In retrospective memory, performance predictions have been found to enhance performance on subsequent memory tests. In prospective memory, the influence of metacognitive judgments on performance has not been investigated systematically. In the present study, 140 undergraduate students performed a complex short-term memory task that included a prospective memory task. Half of them gave performance predictions after the prospective memory task instructions. In addition, the specificity of the prospective memory task was manipulated by instructing participants either to perform an action when a word that belongs to the category of musical instruments was presented or to respond when the word "trumpet" was presented. The results showed that performance predictions enhanced performance, but only for the categorical task. Additional analyses of retrieval experience showed that performance predictions lead to an increase in search experiences while cue specificity was accompanied by an increase in pop up experiences. The results indicate that performance predictions can improve prospective performance and thus may be a valuable strategy for assisting prospective memory. (PsycINFO Database Record (c) 2011 APA, all rights reserved).

  10. Quantifying predictive accuracy in survival models.

    Science.gov (United States)

    Lirette, Seth T; Aban, Inmaculada

    2017-12-01

    For time-to-event outcomes in medical research, survival models are the most appropriate to use. Unlike logistic regression models, quantifying the predictive accuracy of these models is not a trivial task. We present the classes of concordance (C) statistics and R² statistics often used to assess the predictive ability of these models. The discussion focuses on Harrell's C, Kent and O'Quigley's R², and Royston and Sauerbrei's R². We present similarities and differences between the statistics, discuss the software options from the most widely used statistical analysis packages, and give a practical example using the Worcester Heart Attack Study dataset.
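
    For readers unfamiliar with Harrell's C, a bare-bones version (ignoring tied event times, which production implementations handle more carefully) can be written directly from its pairwise definition. The data below are invented for illustration.

```python
# Sketch of Harrell's concordance (C) statistic for right-censored survival data.
# A pair is comparable if the subject with the shorter time experienced the event;
# it is concordant if that subject also has the higher predicted risk.
import itertools

def harrells_c(times, events, risk_scores):
    concordant = comparable = 0.0
    for i, j in itertools.combinations(range(len(times)), 2):
        # order so that subject a has the shorter observed time
        a, b = (i, j) if times[i] < times[j] else (j, i)
        if times[a] == times[b] or events[a] == 0:
            continue                    # tied times or censored shorter time: skip
        comparable += 1
        if risk_scores[a] > risk_scores[b]:
            concordant += 1
        elif risk_scores[a] == risk_scores[b]:
            concordant += 0.5
    return concordant / comparable

times  = [5, 8, 12, 3, 9]
events = [1, 0, 1, 1, 1]                # 1 = event observed, 0 = censored
risk   = [0.9, 0.3, 0.2, 0.95, 0.4]     # higher = predicted to fail sooner
print(round(harrells_c(times, events, risk), 3))
```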

  11. Predictive power of nuclear-mass models

    Directory of Open Access Journals (Sweden)

    Yu. A. Litvinov

    2013-12-01

    Full Text Available Ten different theoretical models are tested for their predictive power in the description of nuclear masses. Two sets of experimental masses are used for the test: the older set of 2003 and the newer one of 2011. The predictive power is studied in two regions of nuclei: the global region (Z, N ≥ 8) and the heavy-nuclei region (Z ≥ 82, N ≥ 126). No clear correlation is found between the predictive power of a model and the accuracy of its description of the masses.

  12. Return Predictability, Model Uncertainty, and Robust Investment

    DEFF Research Database (Denmark)

    Lukas, Manuel

    Stock return predictability is subject to great uncertainty. In this paper we use the model confidence set approach to quantify uncertainty about expected utility from investment, accounting for potential return predictability. For monthly US data and six representative return prediction models, we find that confidence sets are very wide, change significantly with the predictor variables, and frequently include expected utilities for which the investor prefers not to invest. The latter motivates a robust investment strategy maximizing the minimal element of the confidence set. The robust investor allocates a much lower share of wealth to stocks compared to a standard investor.

  13. The prediction of the hydrodynamic performance of tidal current turbines

    International Nuclear Information System (INIS)

    Xiao, B Y; Zhou, L J; Xiao, Y X; Wang, Z W

    2013-01-01

    Nowadays tidal current energy is considered to be one of the most promising alternative green energy resources, and tidal current turbines are used for power generation. Prediction of the open water performance of tidal turbines is important because it can inform the installation and array layout of tidal current turbines. This paper presents numerical computations of tidal current turbines using a numerical model constructed to simulate an isolated turbine. It aims to study, by means of numerical simulation, how hydro-environmental factors influence the installation of marine current turbines. Such factors include the free-stream velocity magnitude, the seabed and the inflow direction of the velocity. The results of the open water performance prediction show that the power output and efficiency of a marine current turbine vary across different marine environments. The velocity distribution should be characterized clearly and a suitable installation depth and direction chosen to ensure the most effective strategy for energy capture before installing the marine current turbine. The findings of this paper are expected to be beneficial in developing tidal current turbines and arrays in the future
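
    The back-of-the-envelope relation underlying such siting studies is that captured power scales with the cube of the free-stream velocity. The rotor diameter, power coefficient, and water density below are illustrative assumptions, not parameters from the paper.

```python
# Cubic dependence of turbine power on free-stream velocity (illustrative values).
import math

def tidal_power_kw(velocity_ms, rotor_diameter_m=18.0, cp=0.40, rho=1025.0):
    """Hydrodynamic power captured by a rotor with power coefficient cp, in kW."""
    area = math.pi * (rotor_diameter_m / 2) ** 2
    return 0.5 * rho * area * velocity_ms ** 3 * cp / 1000.0

for v in (1.5, 2.0, 2.5, 3.0):
    print(f"{v} m/s -> {tidal_power_kw(v):.0f} kW")
```

    The cubic dependence is why the velocity distribution, installation depth and inflow direction dominate the energy-capture strategy discussed above.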

  14. Evaluation of performance of Predictive Models for Deoxynivalenol in Wheat

    NARCIS (Netherlands)

    Fels, van der H.J.

    2014-01-01

    The aim of this study was to evaluate the performance of two predictive models for deoxynivalenol contamination of wheat at harvest in the Netherlands, including the use of weather forecast data and external model validation. Data were collected in a different year and from different wheat fields

  15. Spatial Economics Model Predicting Transport Volume

    Directory of Open Access Journals (Sweden)

    Lu Bo

    2016-10-01

    Full Text Available It is extremely important to predict logistics requirements in a scientific and rational way. However, in recent years the improvement in prediction methods has not been significant, and traditional statistical prediction methods suffer from low precision and poor interpretability, so they can neither guarantee the generalization ability of the prediction model theoretically nor explain the model effectively. Therefore, in combination with theories from spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, this study identifies the leading industries that generate large volumes of cargo and predicts the static logistics generation of Zhuanghe and its hinterland. By integrating the various factors that affect regional logistics requirements, the study establishes a logistics requirements potential model based on spatial economic principles, expanding logistics requirements prediction from purely statistical principles to the new area of spatial and regional economics.

  16. Predicting Student Performance in a Collaborative Learning Environment

    Science.gov (United States)

    Olsen, Jennifer K.; Aleven, Vincent; Rummel, Nikol

    2015-01-01

    Student models for adaptive systems may not model collaborative learning optimally. Past research has either focused on modeling individual learning or for collaboration, has focused on group dynamics or group processes without predicting learning. In the current paper, we adjust the Additive Factors Model (AFM), a standard logistic regression…

  17. Accuracy assessment of landslide prediction models

    International Nuclear Information System (INIS)

    Othman, A N; Mohd, W M N W; Noraini, S

    2014-01-01

    The increasing population and expansion of settlements over hilly areas have greatly increased the impact of natural disasters such as landslides. Therefore, it is important to develop models which can accurately predict landslide hazard zones. Over the years, various techniques and models have been developed to predict landslide hazard zones. The aim of this paper is to assess the accuracy of the landslide prediction models developed by the authors. The methodology involved the selection of the study area, data acquisition, data processing, model development and data analysis. The development of these models is based on nine different landslide-inducing parameters, i.e. slope, land use, lithology, soil properties, geomorphology, flow accumulation, aspect, proximity to river and proximity to road. Rank sum, rating, pairwise comparison and AHP techniques are used to determine the weights for each of the parameters used. Four (4) different models which consider different parameter combinations are developed by the authors. Results obtained are compared to landslide history, and the accuracies for Model 1, Model 2, Model 3 and Model 4 are 66.7%, 66.7%, 60% and 22.9%, respectively. From the results, rank sum, rating and pairwise comparison can be useful techniques to predict landslide hazard zones
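
    One of the weighting techniques named above, AHP, derives parameter weights from the principal eigenvector of a pairwise-comparison matrix. The 3x3 matrix below (slope, land use, lithology) is a hypothetical example, not the matrix used by the authors.

```python
# Sketch of AHP weights from a pairwise-comparison matrix via its principal eigenvector.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],      # slope vs (slope, land use, lithology)
              [1/3, 1.0, 2.0],      # land use
              [1/5, 1/2, 1.0]])     # lithology

eigvals, eigvecs = np.linalg.eig(A)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()
print("weights:", np.round(weights, 3))

# Consistency check (random index RI = 0.58 for a 3x3 matrix).
lam_max = np.max(np.real(eigvals))
ci = (lam_max - len(A)) / (len(A) - 1)
print("consistency ratio:", round(ci / 0.58, 3))
```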

  18. New Approaches for Channel Prediction Based on Sinusoidal Modeling

    Directory of Open Access Journals (Sweden)

    Ekman Torbjörn

    2007-01-01

    Full Text Available Long-range channel prediction is considered to be one of the most important enabling technologies for future wireless communication systems. The prediction of Rayleigh fading channels is studied in the framework of sinusoidal modeling in this paper. A stochastic sinusoidal model to represent a Rayleigh fading channel is proposed. Three different predictors based on the statistical sinusoidal model are proposed. These methods outperform the standard linear predictor (LP) in Monte Carlo simulations, but underperform with real measurement data, probably due to nonstationary model parameters. To mitigate these modeling errors, a joint moving average and sinusoidal (JMAS) prediction model and the associated joint least-squares (LS) predictor are proposed. It combines the sinusoidal model with an LP to handle unmodeled dynamics in the signal. The joint LS predictor outperforms all the other sinusoidal LMMSE predictors in suburban environments, but still performs slightly worse than the standard LP in urban environments.

  19. MODELING SUPPLY CHAIN PERFORMANCE VARIABLES

    Directory of Open Access Journals (Sweden)

    Ashish Agarwal

    2005-01-01

    Full Text Available In order to understand the dynamic behavior of the variables that can play a major role in performance improvement in a supply chain, a System Dynamics-based model is proposed. The model provides an effective framework for analyzing the different variables affecting supply chain performance. A causal relationship among these variables has been identified. Variables emanating from performance measures such as gaps in customer satisfaction, cost minimization, lead-time reduction, service level improvement and quality improvement have been identified as goal-seeking loops. The proposed System Dynamics-based model analyzes the effect of the dynamic behavior of these variables on the performance of a case supply chain in the auto business over a period of 10 years.

  20. Model Predictive Control of Three Phase Inverter for PV Systems

    OpenAIRE

    Irtaza M. Syed; Kaamran Raahemifar

    2015-01-01

    This paper presents a model predictive control (MPC) of a utility interactive three phase inverter (TPI) for a photovoltaic (PV) system at commercial level. The proposed model uses phase locked loop (PLL) to synchronize the TPI with the power electric grid (PEG) and performs MPC control in a dq reference frame. TPI model consists of a boost converter (BC), maximum power point tracking (MPPT) control, and a three-leg voltage source inverter (VSI). The operational model of ...

  1. AN EFFICIENT PATIENT INFLOW PREDICTION MODEL FOR HOSPITAL RESOURCE MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Kottalanka Srikanth

    2017-07-01

    Full Text Available There has been increasing demand for improved service provisioning in hospital resource management. Hospital industries work under strict budget constraints while at the same time assuring quality care. To achieve quality care under budget constraints an efficient prediction model is required. Recently, various time series based prediction models have been proposed to manage hospital resources such as ambulance monitoring, emergency care and so on. These models are not efficient as they do not consider the nature of the scenario, such as climate conditions. To address this, artificial intelligence is adopted. The issue with existing prediction models is that training suffers from local optima, which induces overhead and affects the accuracy of the prediction. To overcome the local minima problem, this work presents a patient inflow prediction model that adopts a resilient backpropagation neural network. Experiments are conducted to evaluate the performance of the proposed model in terms of RMSE and MAPE. The outcomes show that the proposed model reduces RMSE and MAPE compared with an existing backpropagation based artificial neural network. Overall, the proposed prediction model improves the accuracy of prediction, which aids in improving the quality of health care management.
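
    The two error metrics cited above are defined as follows; the patient counts in the example are hypothetical.

```python
# RMSE and MAPE, the evaluation metrics referenced in the abstract.
import numpy as np

def rmse(actual, predicted):
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))

def mape(actual, predicted):
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.mean(np.abs((actual - predicted) / actual)) * 100)

actual_inflow    = [120, 135, 150, 160, 145]   # hypothetical daily patient counts
predicted_inflow = [118, 140, 147, 166, 150]
print("RMSE:", round(rmse(actual_inflow, predicted_inflow), 2))
print("MAPE (%):", round(mape(actual_inflow, predicted_inflow), 2))
```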

  2. Predictive validation of an influenza spread model.

    Directory of Open Access Journals (Sweden)

    Ayaz Hyder

    Full Text Available BACKGROUND: Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. METHODS AND FINDINGS: We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998-1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability and depended on the method of forecasting-static or dynamic. CONCLUSIONS: Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve

  3. Predictive Validation of an Influenza Spread Model

    Science.gov (United States)

    Hyder, Ayaz; Buckeridge, David L.; Leung, Brian

    2013-01-01

    Background Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. Methods and Findings We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998–1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability and depended on the method of forecasting-static or dynamic. Conclusions Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive

  4. Air Conditioner Compressor Performance Model

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Ning; Xie, YuLong; Huang, Zhenyu

    2008-09-05

    During the past three years, the Western Electricity Coordinating Council (WECC) Load Modeling Task Force (LMTF) has led the effort to develop the new modeling approach. As part of this effort, the Bonneville Power Administration (BPA), Southern California Edison (SCE), and Electric Power Research Institute (EPRI) Solutions tested 27 residential air-conditioning units to assess their response to delayed voltage recovery transients. After completing these tests, different modeling approaches were proposed, among them a performance modeling approach that proved to be one of the three favored for its simplicity and ability to recreate different SVR events satisfactorily. Funded by the California Energy Commission (CEC) under its load modeling project, researchers at Pacific Northwest National Laboratory (PNNL) led the follow-on task to analyze the motor testing data and derive the parameters needed to develop a performance model for the single-phase air-conditioning (SPAC) unit. To derive the performance model, PNNL researchers first used the motor voltage and frequency ramping test data to obtain the real (P) and reactive (Q) power versus voltage (V) and frequency (f) curves. Then, curve fitting was used to develop the P-V, Q-V, P-f, and Q-f relationships for the motor running and stalling states. The resulting performance model ignores the dynamic response of the air-conditioning motor. Because the inertia of the air-conditioning motor is very small (H<0.05), the motor moves from one steady state to another in a few cycles, so the performance model is a fair representation of the motor behavior in both running and stalling states.
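
    The curve-fitting step described above reduces, in essence, to fitting static P-V and Q-V polynomials per motor state. The sketch below uses invented sample points and a quadratic fit purely for illustration; it is not the BPA/SCE/EPRI test data or the calibrated WECC model.

```python
# Fit static P-V and Q-V relationships for the running state (illustrative data).
import numpy as np

v_running = np.array([0.95, 1.00, 1.05, 1.10])   # per-unit terminal voltage
p_running = np.array([0.98, 1.00, 1.02, 1.05])   # per-unit real power (hypothetical)
q_running = np.array([0.30, 0.32, 0.35, 0.40])   # per-unit reactive power (hypothetical)

p_coeffs = np.polyfit(v_running, p_running, 2)   # quadratic P-V fit
q_coeffs = np.polyfit(v_running, q_running, 2)   # quadratic Q-V fit

def running_pq(v):
    """Static performance model: P and Q as functions of terminal voltage."""
    return np.polyval(p_coeffs, v), np.polyval(q_coeffs, v)

print(running_pq(1.02))
```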

  5. Validation of Fatigue Modeling Predictions in Aviation Operations

    Science.gov (United States)

    Gregory, Kevin; Martinez, Siera; Flynn-Evans, Erin

    2017-01-01

    Bio-mathematical fatigue models that predict levels of alertness and performance are one potential tool for use within integrated fatigue risk management approaches. A number of models have been developed that provide predictions based on acute and chronic sleep loss, circadian desynchronization, and sleep inertia. Some are publicly available and gaining traction in settings such as commercial aviation as a means of evaluating flight crew schedules for potential fatigue-related risks. Yet, most models have not been rigorously evaluated and independently validated for the operations to which they are being applied and many users are not fully aware of the limitations in which model results should be interpreted and applied.

  6. Predictive models for acute kidney injury following cardiac surgery.

    Science.gov (United States)

    Demirjian, Sevag; Schold, Jesse D; Navia, Jose; Mastracci, Tara M; Paganini, Emil P; Yared, Jean-Pierre; Bashour, Charles A

    2012-03-01

    Accurate prediction of cardiac surgery-associated acute kidney injury (AKI) would improve clinical decision making and facilitate timely diagnosis and treatment. The aim of the study was to develop predictive models for cardiac surgery-associated AKI using presurgical and combined pre- and intrasurgical variables. Prospective observational cohort of 25,898 patients who underwent cardiac surgery at Cleveland Clinic in 2000-2008. Presurgical and combined pre- and intrasurgical variables were used to develop predictive models. Outcomes were dialysis therapy and a composite of doubling of serum creatinine level or dialysis therapy within 2 weeks (or discharge if sooner) after cardiac surgery. Incidences of dialysis therapy and the composite of doubling of serum creatinine level or dialysis therapy were 1.7% and 4.3%, respectively. Kidney function parameters were strong independent predictors in all 4 models. Surgical complexity, reflected by the type of and history of previous cardiac surgery, was a robust predictor in models based on presurgical variables. However, the inclusion of intrasurgical variables accounted for all explained variance by procedure-related information. Models predictive of dialysis therapy showed good calibration and superb discrimination; a combined (pre- and intrasurgical) model performed better than the presurgical model alone (C statistics, 0.910 and 0.875, respectively). Models predictive of the composite end point also had excellent discrimination with both presurgical and combined (pre- and intrasurgical) variables (C statistics, 0.797 and 0.825, respectively). However, the presurgical model predictive of the composite end point showed suboptimal calibration. Validation of the predictive models in other cohorts is required before wide-scale application. We developed and internally validated 4 new models that accurately predict cardiac surgery-associated AKI. These models are based on readily available clinical information and can be used for patient counseling, clinical

  7. Predicting expatriate job performance for selection purposes: A quantitative review

    NARCIS (Netherlands)

    H.T. van der Molen (Henk); M.Ph. Born (Marise); M.E. Willemsen (Madde)

    2005-01-01

    textabstractThis article meta-analytically reviews empirical studies on the prediction of expatriate job performance. Using 30 primary studies (total N=4,046), it was found that predictive validities of the Big Five were similar to Big Five validities reported for domestic employees. Extraversion,

  8. Predicting Expatriate Job Performance for Selection Purposes: A Quantitative Review

    NARCIS (Netherlands)

    S.T. Mol (Stefan); M.Ph. Born (Marise); M.E. Willemsen (Madde); H.T. van der Molen (Henk)

    2005-01-01

    textabstractThis article meta-analytically reviews empirical studies on the prediction of expatriate job performance. Using 30 primary studies (total N=4046), it was found that predictive validities of the big five were similar to big five validities reported for domestic employees (Barrick & Mount,

  9. Mastery and Performance Goals Predict Epistemic and Relational Conflict Regulation

    Science.gov (United States)

    Darnon, Celine; Muller, Dominique; Schrager, Sheree M.; Pannuzzo, Nelly; Butera, Fabrizio

    2006-01-01

    The present research examines whether mastery and performance goals predict different ways of reacting to a sociocognitive conflict with another person over materials to be learned, an issue not yet addressed by the achievement goal literature. Results from 2 studies showed that mastery goals predicted epistemic conflict regulation (a conflict…

  10. Probabilistic Modeling and Visualization for Bankruptcy Prediction

    DEFF Research Database (Denmark)

    Antunes, Francisco; Ribeiro, Bernardete; Pereira, Francisco Camara

    2017-01-01

    In accounting and finance domains, bankruptcy prediction is of great utility for all of the economic stakeholders. The challenge of accurate assessment of business failure prediction, specially under scenarios of financial crisis, is known to be complicated. Although there have been many successful…). Using real-world bankruptcy data, an in-depth analysis is conducted showing that, in addition to a probabilistic interpretation, the GP can effectively improve the bankruptcy prediction performance with high accuracy when compared to the other approaches. We additionally generate a complete graphical visualization to improve our understanding of the different attained performances, effectively compiling all the conducted experiments in a meaningful way. We complete our study with an entropy-based analysis that highlights the uncertainty handling properties provided by the GP, crucial for prediction tasks...

  11. Prediction of Student Performance Through Pretesting in Food and Nutrition

    Science.gov (United States)

    Carruth, Betty Ruth; Lamb, Mina W.

    1971-01-01

    Attempts to develop an objective pretest for identifying students' levels of knowledge in food and nutrition prior to class instruction and for predicting student performance on the final examination. (Editor/MU)

  12. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models and the framework of statistical energy analysis (SEA), as well as more elaborate principles such as wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy based prediction models are discussed and critically reviewed. Special attention is placed on underlying basic assumptions, such as diffuse fields, high modal overlap, resonant field being dominant, etc., and the consequences of these in terms of limitations in the theory and in the practical use of the models.

  13. Wireless model predictive control: Application to water-level system

    Directory of Open Access Journals (Sweden)

    Ramdane Hedjar

    2016-04-01

    Full Text Available This article deals with wireless model predictive control of a water-level control system. The objective of the model predictive control algorithm is to constrain the control signal inside saturation limits and maintain the water level around the desired level. Linear modeling of any nonlinear plant leads to parameter uncertainties and non-modeled dynamics in the linearized mathematical model. These uncertainties induce a steady-state error in the output response of the water level. To eliminate this steady-state error and increase the robustness of the control algorithm, an integral action is included in the closed loop. To control the water-level system remotely, the communication between the controller and the process is performed over a radio channel. To validate the proposed scheme, simulation and real-time implementation of the algorithm have been conducted, and the results show the effectiveness of wireless model predictive control with integral action.
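
    A minimal sketch of model predictive control with input saturation and an added integral state, in the spirit of the scheme described above; the first-order tank model, horizon, weights and limits are illustrative assumptions, not the paper's plant or tuning.

    ```python
    # Illustrative MPC with input saturation and an augmented integral state (not the paper's plant).
    import numpy as np
    import cvxpy as cp

    # Assumed first-order tank model x[k+1] = a*x[k] + b*u[k]
    a, b, dt = 0.98, 0.05, 1.0
    r = 1.0            # desired level
    N = 20             # prediction horizon
    u_max = 1.0        # saturation limit

    # Augmented state z = [level; integral of tracking error]
    A = np.array([[a, 0.0],
                  [-dt, 1.0]])
    B = np.array([[b], [0.0]])
    Q = np.diag([10.0, 1.0])   # penalize level error and accumulated error
    R = 0.1

    def mpc_step(z0):
        z = cp.Variable((2, N + 1))
        u = cp.Variable((1, N))
        cost, cons = 0, [z[:, 0] == z0]
        for k in range(N):
            e = z[:, k] - np.array([r, 0.0])
            cost += cp.quad_form(e, Q) + R * cp.square(u[0, k])
            cons += [z[:, k + 1] == A @ z[:, k] + B @ u[:, k] + np.array([0.0, dt * r]),
                     cp.abs(u[0, k]) <= u_max]
        cp.Problem(cp.Minimize(cost), cons).solve()
        return float(u.value[0, 0])

    # Receding horizon: apply only the first move, then re-solve.
    z = np.array([0.0, 0.0])
    for _ in range(5):
        u0 = mpc_step(z)
        z = A @ z + B.flatten() * u0 + np.array([0.0, dt * r])
        print(f"u = {u0:+.3f}, level = {z[0]:.3f}")
    ```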

  14. Evaluation of wave runup predictions from numerical and parametric models

    Science.gov (United States)

    Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.

    2014-01-01

    Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
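
    The parameterized runup model referred to here is commonly given in the Stockdon et al. (2006) form; the sketch below implements that widely quoted form, with coefficients recalled from the literature that should be verified against the original reference before use.

    ```python
    # Empirical runup parameterization in the commonly cited Stockdon et al. (2006) form.
    # Coefficients are quoted from memory and should be checked against the paper.
    import math

    G = 9.81  # gravitational acceleration, m/s^2

    def runup_2pct(H0, T0, beta_f):
        """2% exceedance runup (m) from deep-water wave height H0 (m),
        peak period T0 (s), and foreshore beach slope beta_f (-)."""
        L0 = G * T0 ** 2 / (2.0 * math.pi)   # deep-water wavelength
        hl = math.sqrt(H0 * L0)
        setup = 0.35 * beta_f * hl           # wave setup
        s_inc = 0.75 * beta_f * hl           # incident-band swash
        s_ig = 0.06 * hl                     # infragravity swash
        swash = math.sqrt(s_inc ** 2 + s_ig ** 2)
        return 1.1 * (setup + swash / 2.0)

    # Example: 3 m, 12 s waves on a 0.05 foreshore slope.
    print(f"R2% ≈ {runup_2pct(3.0, 12.0, 0.05):.2f} m")
    ```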

  15. Using micro saint to predict performance in a nuclear power plant control room

    International Nuclear Information System (INIS)

    Lawless, M.T.; Laughery, K.R.; Persenky, J.J.

    1995-09-01

    The United States Nuclear Regulatory Commission (NRC) requires a technical basis for regulatory actions. In the area of human factors, one possible technical basis is human performance modeling technology including task network modeling. This study assessed the feasibility and validity of task network modeling to predict the performance of control room crews. Task network models were built that matched the experimental conditions of a study on computerized procedures that was conducted at North Carolina State University. The data from the "paper procedures" conditions were used to calibrate the task network models. Then, the models were manipulated to reflect expected changes when computerized procedures were used. These models' predictions were then compared to the experimental data from the "computerized" conditions of the North Carolina State University study. Analyses indicated that the models predicted some subsets of the data well, but not all. Implications for the use of task network modeling are discussed

  16. Generating Performance Models for Irregular Applications

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Ryan D.; Tallent, Nathan R.; Vishnu, Abhinav; Kerbyson, Darren J.; Hoisie, Adolfy

    2017-05-30

    Many applications have irregular behavior --- non-uniform input data, input-dependent solvers, irregular memory accesses, unbiased branches --- that cannot be captured using today's automated performance modeling techniques. We describe new hierarchical critical path analyses for the Palm model generation tool. To create a model's structure, we capture tasks along representative MPI critical paths. We create a histogram of critical tasks with parameterized task arguments and instance counts. To model each task, we identify hot instruction-level sub-paths and model each sub-path based on data flow, instruction scheduling, and data locality. We describe application models that generate accurate predictions for strong scaling when varying CPU speed, cache speed, memory speed, and architecture. We present results for the Sweep3D neutron transport benchmark; Page Rank on multiple graphs; Support Vector Machine with pruning; and PFLOTRAN's reactive flow/transport solver with domain-induced load imbalance.

  17. Prediction Models for Dynamic Demand Response

    Energy Technology Data Exchange (ETDEWEB)

    Aman, Saima; Frincu, Marc; Chelmis, Charalampos; Noor, Muhammad; Simmhan, Yogesh; Prasanna, Viktor K.

    2015-11-02

    As Smart Grids move closer to dynamic curtailment programs, Demand Response (DR) events will become necessary not only on fixed time intervals and weekdays predetermined by static policies, but also during changing decision periods and weekends to react to real-time demand signals. Unique challenges arise in this context vis-a-vis demand prediction and curtailment estimation and the transformation of such tasks into an automated, efficient dynamic demand response (D2R) process. While existing work has concentrated on increasing the accuracy of prediction models for DR, there is a lack of studies for prediction models for D2R, which we address in this paper. Our first contribution is the formal definition of D2R, and the description of its challenges and requirements. Our second contribution is a feasibility analysis of very-short-term prediction of electricity consumption for D2R over a diverse, large-scale dataset that includes both small residential customers and large buildings. Our third, and major, contribution is a set of insights into the predictability of electricity consumption in the context of D2R. Specifically, we focus on prediction models that can operate at a very small data granularity (here 15-min intervals), for both weekdays and weekends - all conditions that characterize scenarios for D2R. We find that short-term time series and simple averaging models used by Independent Service Operators and utilities achieve superior prediction accuracy. We also observe that workdays are more predictable than weekends and holidays. Also, smaller customers have large variation in consumption and are less predictable than larger buildings. Key implications of our findings are that better models are required for small customers and for non-workdays, both of which are critical for D2R. Also, prediction models require just a few days' worth of data, indicating that small amounts of
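
    A minimal sketch of the simple averaging baseline the abstract finds effective: each 15-min slot is predicted as the mean of the same slot over recent days of the same type (workday or non-workday). The series layout, column handling and parameter choices are assumptions for illustration.

    ```python
    # Simple averaging baseline for 15-min consumption forecasting (illustrative layout).
    import pandas as pd

    def averaging_forecast(history: pd.Series, target_day: pd.Timestamp,
                           n_days: int = 5) -> pd.Series:
        """history: 15-min kWh series indexed by timestamp; returns a forecast
        for every 15-min slot of target_day."""
        is_workday = target_day.dayofweek < 5
        past = history[history.index.normalize() < target_day.normalize()]
        # keep only past days of the same type (workday / non-workday)
        past = past[(past.index.dayofweek < 5) == is_workday]
        recent_days = sorted(past.index.normalize().unique())[-n_days:]
        past = past[past.index.normalize().isin(recent_days)]
        # average by time-of-day slot
        profile = past.groupby(past.index.time).mean()
        slots = pd.date_range(target_day.normalize(), periods=96, freq="15min")
        return pd.Series([profile.get(t.time()) for t in slots], index=slots)

    # usage (hypothetical series): averaging_forecast(load_series, pd.Timestamp("2015-07-01"))
    ```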

  18. Are animal models predictive for humans?

    Directory of Open Access Journals (Sweden)

    Greek Ray

    2009-01-01

    Full Text Available Abstract It is one of the central aims of the philosophy of science to elucidate the meanings of scientific terms and also to think critically about their application. The focus of this essay is the scientific term predict and whether there is credible evidence that animal models, especially in toxicology and pathophysiology, can be used to predict human outcomes. Whether animals can be used to predict human response to drugs and other chemicals is apparently a contentious issue. However, when one empirically analyzes animal models using scientific tools, they fall far short of being able to predict human responses. This is not surprising considering what we have learned from fields such as evolutionary and developmental biology, gene regulation and expression, epigenetics, complexity theory, and comparative genomics.

  19. Predictive Performance Tuning of OpenACC Accelerated Applications

    KAUST Repository

    Siddiqui, Shahzeb

    2014-05-04

    Graphics Processing Units (GPUs) are gradually becoming mainstream in supercomputing as their capabilities to significantly accelerate a large spectrum of scientific applications have been clearly identified and proven. Moreover, with the introduction of high level programming models such as OpenACC [1] and OpenMP 4.0 [2], these devices are becoming more accessible and practical to use by a larger scientific community. However, performance optimization of OpenACC accelerated applications usually requires an in-depth knowledge of the hardware and software specifications. We suggest a prediction-based performance tuning mechanism [3] to quickly tune OpenACC parameters for a given application to dynamically adapt to the execution environment on a given system. This approach is applied to a finite difference kernel to tune the OpenACC gang and vector clauses for mapping the compute kernels into the underlying accelerator architecture. Our experiments show a significant performance improvement against the default compiler parameters and a faster tuning by an order of magnitude compared to the brute force search tuning.

  20. Model predictive controller design of hydrocracker reactors

    OpenAIRE

    GÖKÇE, Dila

    2014-01-01

    This study summarizes the design of a Model Predictive Controller (MPC) in Tüpraş, İzmit Refinery Hydrocracker Unit Reactors. Hydrocracking process, in which heavy vacuum gasoil is converted into lighter and valuable products at high temperature and pressure is described briefly. Controller design description, identification and modeling studies are examined and the model variables are presented. WABT (Weighted Average Bed Temperature) equalization and conversion increase are simulate...

  1. Prediction of Rowing Ergometer Performance from Functional Anaerobic Power, Strength and Anthropometric Components

    Directory of Open Access Journals (Sweden)

    Akça Firat

    2014-07-01

    Full Text Available The aim of this research was to develop different regression models to predict 2000 m rowing ergometer performance with the use of anthropometric, anaerobic and strength variables and to determine how precisely the prediction models constituted by different variables predict performance, when conducted together in the same equation or individually. 38 male collegiate rowers (20.17 ± 1.22 years participated in this study. Anthropometric, strength, 2000 m maximal rowing ergometer and rowing anaerobic power tests were applied. Multiple linear regression procedures were employed in SPSS 16 to constitute five different regression formulas using a different group of variables. The reliability of the regression models was expressed by R2 and the standard error of estimate (SEE. Relationships of all parameters with performance were investigated through Pearson correlation coefficients. The prediction model using a combination of anaerobic, strength and anthropometric variables was found to be the most reliable equation to predict 2000 m rowing ergometer performance (R2 = 0.92, SEE= 3.11 s. Besides, the equation that used rowing anaerobic and strength test results also provided a reliable prediction (R2 = 0.85, SEE= 4.27 s. As a conclusion, it seems clear that physiological determinants which are affected by anaerobic energy pathways should also get involved in the processes and models used for performance prediction and talent identification in rowing.
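
    A minimal sketch of how such a regression equation can be fitted and summarized with R² and the standard error of estimate (SEE); the predictors and data below are synthetic placeholders, not the study's measurements.

    ```python
    # Multiple linear regression with R^2 and SEE on synthetic stand-in data.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    n = 38
    X = rng.normal(size=(n, 3))   # e.g. anaerobic power, strength, anthropometry (placeholders)
    time_2000m = 420 - 8 * X[:, 0] - 5 * X[:, 1] - 3 * X[:, 2] + rng.normal(scale=3, size=n)

    model = LinearRegression().fit(X, time_2000m)
    pred = model.predict(X)

    r2 = model.score(X, time_2000m)
    p = X.shape[1]
    see = np.sqrt(np.sum((time_2000m - pred) ** 2) / (n - p - 1))  # standard error of estimate
    print(f"R^2 = {r2:.2f}, SEE = {see:.2f} s")
    ```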

  2. Application of Machine Learning Algorithms for the Query Performance Prediction

    Directory of Open Access Journals (Sweden)

    MILICEVIC, M.

    2015-08-01

    Full Text Available This paper analyzes the relationship between the system load/throughput and the query response time in a real Online transaction processing (OLTP system environment. Although OLTP systems are characterized by short transactions, which normally entail high availability and consistent short response times, the need for operational reporting may jeopardize these objectives. We suggest a new approach to performance prediction for concurrent database workloads, based on the system state vector which consists of 36 attributes. There is no bias to the importance of certain attributes, but the machine learning methods are used to determine which attributes better describe the behavior of the particular database server and how to model that system. During the learning phase, the system's profile is created using multiple reference queries, which are selected to represent frequent business processes. The possibility of the accurate response time prediction may be a foundation for automated decision-making for database (DB query scheduling. Possible applications of the proposed method include adaptive resource allocation, quality of service (QoS management or real-time dynamic query scheduling (e.g. estimation of the optimal moment for a complex query execution.
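
    As a sketch of the general approach, the example below learns a mapping from a 36-attribute system-state vector to query response time and ranks the attributes by importance; the data and the model choice (a random forest) are illustrative assumptions, not the paper's method.

    ```python
    # Response-time prediction from a synthetic 36-attribute system-state vector.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(7)
    X = rng.uniform(size=(2000, 36))   # stand-in for load/throughput attributes
    y = 0.2 + 3.0 * X[:, 0] + 1.5 * X[:, 4] ** 2 + rng.normal(scale=0.1, size=2000)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=7)
    rf = RandomForestRegressor(n_estimators=200, random_state=7).fit(X_tr, y_tr)

    print("MAE (s):", mean_absolute_error(y_te, rf.predict(X_te)))
    # Attributes that best describe this server's behavior:
    top = np.argsort(rf.feature_importances_)[::-1][:5]
    print("most informative attributes:", top)
    ```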

  3. Retrosynthetic Reaction Prediction Using Neural Sequence-to-Sequence Models.

    Science.gov (United States)

    Liu, Bowen; Ramsundar, Bharath; Kawthekar, Prasad; Shi, Jade; Gomes, Joseph; Luu Nguyen, Quang; Ho, Stephen; Sloane, Jack; Wender, Paul; Pande, Vijay

    2017-10-25

    We describe a fully data driven model that learns to perform a retrosynthetic reaction prediction task, which is treated as a sequence-to-sequence mapping problem. The end-to-end trained model has an encoder-decoder architecture that consists of two recurrent neural networks, which has previously shown great success in solving other sequence-to-sequence prediction tasks such as machine translation. The model is trained on 50,000 experimental reaction examples from the United States patent literature, which span 10 broad reaction types that are commonly used by medicinal chemists. We find that our model performs comparably with a rule-based expert system baseline model, and also overcomes certain limitations associated with rule-based expert systems and with any machine learning approach that contains a rule-based expert system component. Our model provides an important first step toward solving the challenging problem of computational retrosynthetic analysis.

  4. Pulsatile fluidic pump demonstration and predictive model application

    International Nuclear Information System (INIS)

    Morgan, J.G.; Holland, W.D.

    1986-04-01

    Pulsatile fluidic pumps were developed as a remotely controlled method of transferring or mixing feed solutions. A test in the Integrated Equipment Test facility demonstrated the performance of a critically safe geometry pump suitable for use in a 0.1-ton/d heavy metal (HM) fuel reprocessing plant. A predictive model was developed to calculate output flows under a wide range of external system conditions. Predictive and experimental flow rates are compared for both submerged and unsubmerged fluidic pump cases

  5. Using Pareto points for model identification in predictive toxicology

    Science.gov (United States)

    2013-01-01

    Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design or food protection to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature associated with the best practice of model generation and data integration but management and automated identification of relevant models from available collections of models is still an open problem. Currently, the decision on which model should be used for a new chemical compound is left to users. This paper intends to initiate the discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show a great potential for automated model identification methods in predictive toxicology. PMID:23517649
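
    A minimal sketch of Pareto-based model identification: given each candidate model's scores on two criteria to be maximized, keep the non-dominated set. The criteria, model names and scores are illustrative, not those used in the paper.

    ```python
    # Non-dominated (Pareto) filtering of candidate models; scores are made up.
    def pareto_front(models):
        """models: list of (name, score_a, score_b); returns non-dominated entries."""
        front = []
        for name, a, b in models:
            dominated = any(a2 >= a and b2 >= b and (a2 > a or b2 > b)
                            for _, a2, b2 in models)
            if not dominated:
                front.append((name, a, b))
        return front

    candidates = [("QSAR-1", 0.81, 0.60), ("QSAR-2", 0.74, 0.78),
                  ("kNN", 0.70, 0.72), ("ensemble", 0.83, 0.55)]
    print(pareto_front(candidates))
    ```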

  6. An efficient approach to understanding and predicting the effects of multiple task characteristics on performance.

    Science.gov (United States)

    Richardson, Miles

    2017-04-01

    In ergonomics there is often a need to identify and predict the separate effects of multiple factors on performance. A cost-effective fractional factorial approach to understanding the relationship between task characteristics and task performance is presented. The method has been shown to provide sufficient independent variability to reveal and predict the effects of task characteristics on performance in two domains. The five steps outlined are: selection of performance measure, task characteristic identification, task design for user trials, data collection, regression model development and task characteristic analysis. The approach can be used for furthering knowledge of task performance, theoretical understanding, experimental control and prediction of task performance. Practitioner Summary: A cost-effective method to identify and predict the separate effects of multiple factors on performance is presented. The five steps allow a better understanding of task factors during the design process.

  7. Multi-Model Ensemble Wake Vortex Prediction

    Science.gov (United States)

    Koerner, Stephan; Holzaepfel, Frank; Ahmad, Nash'at N.

    2015-01-01

    Several multi-model ensemble methods are investigated for predicting wake vortex transport and decay. This study is a joint effort between National Aeronautics and Space Administration and Deutsches Zentrum fuer Luft- und Raumfahrt to develop a multi-model ensemble capability using their wake models. An overview of different multi-model ensemble methods and their feasibility for wake applications is presented. The methods include Reliability Ensemble Averaging, Bayesian Model Averaging, and Monte Carlo Simulations. The methodologies are evaluated using data from wake vortex field experiments.
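
    One simple ensemble scheme in the spirit of the methods listed is an error-weighted average of the member models' predictions; the sketch below uses inverse-variance weights, with made-up member predictions and error statistics.

    ```python
    # Error-weighted multi-model ensemble (illustrative numbers only).
    import numpy as np

    members = {"model_A": 2.10, "model_B": 1.85, "model_C": 2.40}  # member predictions
    rmse = {"model_A": 0.30, "model_B": 0.20, "model_C": 0.45}     # historical RMSE per model

    weights = {m: 1.0 / rmse[m] ** 2 for m in members}             # inverse-variance weighting
    total = sum(weights.values())
    ensemble = sum(weights[m] * members[m] for m in members) / total
    spread = np.sqrt(sum(weights[m] * (members[m] - ensemble) ** 2 for m in members) / total)

    print(f"ensemble prediction: {ensemble:.2f} ± {spread:.2f}")
    ```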

  8. The predictive performance of a pharmacokinetic model for manually adjusted infusion of liquid sevofluorane for use with the Anesthetic-Conserving Device (AnaConDa): a clinical study.

    Science.gov (United States)

    Belda, Javier F; Soro, Marina; Badenes, Rafael; Meiser, Andreas; García, María Luisa; Aguilar, Gerardo; Martí, Francisco J

    2008-04-01

    The Anesthetic-Conserving Device (AnaConDa) can be used to administer inhaled anesthetics using an intensive care unit (ICU) ventilator. We evaluated the predictive performance of a simple manually adjusted pump infusion scheme, for infusion of liquid sevoflurane to the AnaConDa. We studied 50 ICU patients who received sevoflurane via the AnaConDa. They were randomly divided into three groups. A 6-h infusion of liquid anesthetic was adjusted according to the infusion scheme to a target end-tidal sevoflurane concentration of 1% (Group 1%, n = 15) and 1.5% (Group 1.5%, n = 15). The initial rate was adjusted to reach the target concentration in 10 min and then the infusion was reduced to the first hour maintenance rate and readjusted once each hour afterwards. The actual concentrations were measured in the breathing circuit and compared with the target values. In the third group (n = 20) we used the model to increase and decrease the target concentration (±0.3%) for 3 h and evaluated the actual change in concentration achieved. The ability of the infusion scheme to provide the target concentration was quantified by calculating the performance error (PE). Infusion scheme performance was evaluated in terms of accuracy (median absolute PE, MDAPE) and bias (median PE, MDPE). Performance parameters (mean ± SD, %) for the 1% group, the 1.5% group, and the 0.3% concentration-increase and 0.3% concentration-decrease conditions, respectively, were: MDAPE 5.3 ± 5.5, 2.6 ± 4.0, 5.0 ± 5.6, 5.5 ± 5.4; MDPE -5.3 ± 5.5, -2.3 ± 4.1, -0.1 ± 7.1, 0.2 ± 5.4. No significant differences were found between means of all performance parameters when the 1% and 1.5% groups were compared. The simplified pharmacokinetic model for manually adjusted infusion of liquid sevoflurane showed excellent 6-h predictive performance when the AnaConDa was used to deliver sevoflurane to ICU patients.
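
    The performance-error statistics quoted above follow the usual definitions: PE = (measured − target)/target × 100, with MDPE (bias) and MDAPE (accuracy) the median PE and median |PE|. A minimal sketch with illustrative numbers:

    ```python
    # Performance error (PE), MDPE and MDAPE; the measurements are illustrative.
    import numpy as np

    target = 1.0                                   # target end-tidal sevoflurane (%)
    measured = np.array([0.93, 0.96, 0.95, 0.97, 0.94, 0.96])

    pe = (measured - target) / target * 100.0      # performance error, %
    mdpe = np.median(pe)                           # bias
    mdape = np.median(np.abs(pe))                  # accuracy

    print(f"MDPE = {mdpe:.1f}%, MDAPE = {mdape:.1f}%")
    ```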

  9. Thermodynamic modeling of activity coefficient and prediction of solubility: Part 1. Predictive models.

    Science.gov (United States)

    Mirmehrabi, Mahmoud; Rohani, Sohrab; Perry, Luisa

    2006-04-01

    A new activity coefficient model was developed from excess Gibbs free energy in the form G^ex = c·A^a·x_1^b…x_n^b. The constants of the proposed model were considered to be functions of solute and solvent dielectric constants, Hildebrand solubility parameters and specific volumes of solute and solvent molecules. The proposed model obeys the Gibbs-Duhem condition for activity coefficient models. To generalize the model and make it a purely predictive model without any adjustable parameters, its constants were found using the experimental activity coefficients and physical properties of 20 vapor-liquid systems. The predictive capability of the proposed model was tested by calculating the activity coefficients of 41 binary vapor-liquid equilibrium systems and showed good agreement with the experimental data in comparison with two other predictive models, the UNIFAC and Hildebrand models. The only data used for the prediction of activity coefficients were dielectric constants, Hildebrand solubility parameters, and specific volumes of the solute and solvent molecules. Furthermore, the proposed model was used to predict the activity coefficient of an organic compound, stearic acid, whose physical properties were available in methanol and 2-butanone. The predicted activity coefficient along with the thermal properties of stearic acid were used to calculate the solubility of stearic acid in these two solvents and resulted in a better agreement with the experimental data compared to the UNIFAC and Hildebrand predictive models.

  10. PREDICTIVE CAPACITY OF ARCH FAMILY MODELS

    Directory of Open Access Journals (Sweden)

    Raphael Silveira Amaro

    2016-03-01

    Full Text Available In the last decades, a remarkable number of models, variants from the Autoregressive Conditional Heteroscedastic family, have been developed and empirically tested, making the process of choosing a particular model extremely complex. This research aims to compare the predictive capacity of five conditional heteroskedasticity models, considering eight different statistical probability distributions, using the Model Confidence Set procedure. The financial series used refer to the log-return series of the Bovespa index and the Dow Jones Industrial Index in the period between 27 October 2008 and 30 December 2014. The empirical evidence showed that, in general, the competing models are highly homogeneous in their predictions, either for a stock market of a developed country or for a stock market of a developing country. An equivalent result can be inferred for the statistical probability distributions that were used.
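
    As one representative ARCH-family member, the sketch below fits a GARCH(1,1) with Student-t errors using the Python arch package and produces variance forecasts; the simulated returns stand in for the Bovespa and Dow Jones log-return series, and the Model Confidence Set comparison itself is not shown.

    ```python
    # Fit one ARCH-family specification (GARCH(1,1), Student-t errors) on simulated returns.
    import numpy as np
    from arch import arch_model

    rng = np.random.default_rng(42)
    returns = rng.standard_t(df=6, size=1500)   # stand-in for daily log-returns (%)

    am = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1, dist="t")
    res = am.fit(disp="off")
    print(res.summary())

    # 5-step-ahead conditional variance forecast
    fc = res.forecast(horizon=5)
    print(fc.variance.iloc[-1])
    ```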

  11. A revised prediction model for natural conception.

    Science.gov (United States)

    Bensdorp, Alexandra J; van der Steeg, Jan Willem; Steures, Pieternel; Habbema, J Dik F; Hompes, Peter G A; Bossuyt, Patrick M M; van der Veen, Fulco; Mol, Ben W J; Eijkemans, Marinus J C

    2017-06-01

    One of the aims in reproductive medicine is to differentiate between couples that have favourable chances of conceiving naturally and those that do not. Since the development of the prediction model of Hunault, characteristics of the subfertile population have changed. The objective of this analysis was to assess whether additional predictors can refine the Hunault model and extend its applicability. Consecutive subfertile couples with unexplained and mild male subfertility presenting in fertility clinics were asked to participate in a prospective cohort study. We constructed a multivariable prediction model with the predictors from the Hunault model and new potential predictors. The primary outcome, natural conception leading to an ongoing pregnancy, was observed in 1053 women of the 5184 included couples (20%). All predictors of the Hunault model were selected into the revised model plus an additional seven (woman's body mass index, cycle length, basal FSH levels, tubal status, history of previous pregnancies in the current relationship (ongoing pregnancies after natural conception, fertility treatment or miscarriages), semen volume, and semen morphology). Predictions from the revised model seem to concur better with observed pregnancy rates compared with the Hunault model; c-statistic of 0.71 (95% CI 0.69 to 0.73) compared with 0.59 (95% CI 0.57 to 0.61). Copyright © 2017. Published by Elsevier Ltd.
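
    A minimal sketch of the model form and the reported discrimination measure: a logistic regression whose c-statistic (for a binary outcome, equivalent to the area under the ROC curve) is computed on held-out data. The predictors and data are synthetic placeholders, not the study cohort.

    ```python
    # Logistic regression with c-statistic on synthetic stand-in data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(3)
    n = 5000
    X = rng.normal(size=(n, 6))   # e.g. age, duration of subfertility, BMI, ... (placeholders)
    logit = -1.4 + 0.6 * X[:, 0] - 0.4 * X[:, 1] + 0.3 * X[:, 2]
    y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)  # ongoing pregnancy yes/no

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=3)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    c_stat = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"c-statistic: {c_stat:.2f}")
    ```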

  12. Analyzing Log Files to Predict Students' Problem Solving Performance in a Computer-Based Physics Tutor

    Science.gov (United States)

    Lee, Young-Jin

    2015-01-01

    This study investigates whether information saved in the log files of a computer-based tutor can be used to predict the problem solving performance of students. The log files of a computer-based physics tutoring environment called Andes Physics Tutor was analyzed to build a logistic regression model that predicted success and failure of students'…

  13. Individualized prediction of perineural invasion in colorectal cancer: development and validation of a radiomics prediction model.

    Science.gov (United States)

    Huang, Yanqi; He, Lan; Dong, Di; Yang, Caiyun; Liang, Cuishan; Chen, Xin; Ma, Zelan; Huang, Xiaomei; Yao, Su; Liang, Changhong; Tian, Jie; Liu, Zaiyi

    2018-02-01

    To develop and validate a radiomics prediction model for individualized prediction of perineural invasion (PNI) in colorectal cancer (CRC). After computed tomography (CT) radiomics features extraction, a radiomics signature was constructed in derivation cohort (346 CRC patients). A prediction model was developed to integrate the radiomics signature and clinical candidate predictors [age, sex, tumor location, and carcinoembryonic antigen (CEA) level]. Apparent prediction performance was assessed. After internal validation, independent temporal validation (separate from the cohort used to build the model) was then conducted in 217 CRC patients. The final model was converted to an easy-to-use nomogram. The developed radiomics nomogram that integrated the radiomics signature and CEA level showed good calibration and discrimination performance [Harrell's concordance index (c-index): 0.817; 95% confidence interval (95% CI): 0.811-0.823]. Application of the nomogram in validation cohort gave a comparable calibration and discrimination (c-index: 0.803; 95% CI: 0.794-0.812). Integrating the radiomics signature and CEA level into a radiomics prediction model enables easy and effective risk assessment of PNI in CRC. This stratification of patients according to their PNI status may provide a basis for individualized auxiliary treatment.

  14. Improving Saliency Models by Predicting Human Fixation Patches

    KAUST Repository

    Dubey, Rachit

    2015-04-16

    There is growing interest in studying the Human Visual System (HVS) to supplement and improve the performance of computer vision tasks. A major challenge for current visual saliency models is predicting saliency in cluttered scenes (i.e. high false positive rate). In this paper, we propose a fixation patch detector that predicts image patches that contain human fixations with high probability. Our proposed model detects sparse fixation patches with an accuracy of 84 % and eliminates non-fixation patches with an accuracy of 84 % demonstrating that low-level image features can indeed be used to short-list and identify human fixation patches. We then show how these detected fixation patches can be used as saliency priors for popular saliency models, thus, reducing false positives while maintaining true positives. Extensive experimental results show that our proposed approach allows state-of-the-art saliency methods to achieve better prediction performance on benchmark datasets.

  15. Predicting Academic Performance Based on Students' Blog and Microblog Posts

    NARCIS (Netherlands)

    Dascalu, Mihai; Popescu, Elvira; Becheru, Alexandru; Crossley, Scott; Trausan-Matu, Stefan

    2016-01-01

    This study investigates the degree to which textual complexity indices applied on students’ online contributions, corroborated with a longitudinal analysis performed on their weekly posts, predict academic performance. The source of student writing consists of blog and microblog posts, created in

  16. Evaluation of models in performance assessment

    International Nuclear Information System (INIS)

    Dormuth, K.W.

    1993-01-01

    The reliability of models used for performance assessment for high-level waste repositories is a key factor in making decisions regarding the management of high-level waste. Model reliability may be viewed as a measure of the confidence that regulators and others have in the use of these models to provide information for decision making. The degree of reliability required for the models will increase as implementation of disposal proceeds and decisions become increasingly important to safety. Evaluation of the models by using observations of real systems provides information that assists the assessment analysts and reviewers in establishing confidence in the conclusions reached in the assessment. A continuing process of model calibration, evaluation, and refinement should lead to increasing reliability of models as implementation proceeds. However, uncertainty in the model predictions cannot be eliminated, so decisions will always be made under some uncertainty. Examples from the Canadian program illustrate the process of model evaluation using observations of real systems and its relationship to performance assessment. 21 refs., 2 figs

  17. Maximum likelihood Bayesian model averaging and its predictive analysis for groundwater reactive transport models

    Science.gov (United States)

    Curtis, Gary P.; Lu, Dan; Ye, Ming

    2015-01-01

    While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. This study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. These reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Limitations of applying MLBMA to the
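
    The core averaging step of (ML)BMA can be sketched as follows: posterior model weights derived from a model-selection criterion (a BIC-like value is used here as one common choice), then a weighted predictive mean and a variance that adds between-model spread. The criterion values and per-model predictions are illustrative.

    ```python
    # Model-averaged prediction from criterion-based weights (illustrative numbers).
    import numpy as np

    bic = np.array([112.4, 110.1, 115.8])   # one criterion value per alternative model
    means = np.array([2.3, 2.8, 1.9])       # each model's predicted concentration
    variances = np.array([0.10, 0.15, 0.08])  # each model's predictive variance

    delta = bic - bic.min()
    weights = np.exp(-0.5 * delta)
    weights /= weights.sum()                # posterior model probabilities

    bma_mean = np.sum(weights * means)
    # within-model variance plus between-model spread
    bma_var = np.sum(weights * (variances + (means - bma_mean) ** 2))

    print(f"weights: {np.round(weights, 3)}")
    print(f"BMA mean = {bma_mean:.2f}, BMA variance = {bma_var:.2f}")
    ```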

  18. Determining the prediction limits of models and classifiers with applications for disruption prediction in JET

    Science.gov (United States)

    Murari, A.; Peluso, E.; Vega, J.; Gelfusa, M.; Lungaroni, M.; Gaudio, P.; Martínez, F. J.; Contributors, JET

    2017-01-01

    Understanding the many aspects of tokamak physics requires the development of quite sophisticated models. Moreover, in the operation of the devices, prediction of the future evolution of discharges can be of crucial importance, particularly in the case of the prediction of disruptions, which can cause serious damage to various parts of the machine. The determination of the limits of predictability is therefore an important issue for modelling, classifying and forecasting. In all these cases, once a certain level of performance has been reached, the question typically arises as to whether all the information available in the data has been exploited, or whether there are still margins for improvement of the tools being developed. In this paper, an information-theoretic approach is proposed to address this issue. The excellent properties of the developed indicator, called the prediction factor (PF), have been proved with the help of a series of numerical tests. Its application to some typical behaviour relating to macroscopic instabilities in tokamaks has shown very positive results. The prediction factor has also been used to assess the performance of disruption predictors running in real time in the JET system, including the one systematically deployed in the feedback loop for mitigation purposes. The main conclusion is that the most advanced predictors basically exploit all the information contained in the locked mode signal on which they are based. Therefore, qualitative improvements in disruption prediction performance in JET would need the processing of additional signals, probably profiles.

  19. Data Quality Enhanced Prediction Model for Massive Plant Data

    Energy Technology Data Exchange (ETDEWEB)

    Park, Moon-Ghu [Nuclear Engr. Sejong Univ., Seoul (Korea, Republic of); Kang, Seong-Ki [Monitoring and Diagnosis, Suwon (Korea, Republic of); Shin, Hajin [Saint Paul Preparatory Seoul, Seoul (Korea, Republic of)

    2016-10-15

    This paper introduces an integrated approach to signal preconditioning and model prediction based mainly on kernel functions. The performance and benefits of the methods are demonstrated by a case study with measurement data from a power plant and transient data from its components. The developed methods will be applied as part of a platform for monitoring massive (big) data, where human experts cannot detect fault behaviors because the measurement sets are too large. Recent extensive efforts to implement on-line monitoring have shown that a major surprise in modeling for the prediction of process variables was the extent of data quality problems in measurement data, especially for data-driven modeling. Bad training data will be learned as normal behavior and can significantly degrade prediction performance. For this reason, the quantity and quality of measurement data in the modeling phase need special care: bad-quality data must be removed from the training sets so that they are not treated as normal system behavior. This paper presents an integrated structure of a supervisory system for monitoring the performance of plants or sensors. The quality of the data-driven model is improved with a bilateral kernel filter for preprocessing of the noisy data. The prediction module is also based on kernel regression, sharing the same basis as the noise filter. The model structure is optimized by a grouping process with a nonlinear Hoeffding correlation function.
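
    A minimal sketch of the two kernel-based pieces described above, under simplifying assumptions: a bilateral-style filter for preconditioning a noisy 1-D signal, followed by Nadaraya-Watson kernel regression. Bandwidths and data are illustrative, and the grouping/Hoeffding-correlation step is not shown.

    ```python
    # Bilateral-style preconditioning filter plus Nadaraya-Watson kernel regression.
    import numpy as np

    def bilateral_filter_1d(y, sigma_t=3.0, sigma_v=0.5, half_width=10):
        """Smooth y while preserving genuine transients (value-dependent weights)."""
        out = np.empty_like(y, dtype=float)
        for i in range(len(y)):
            lo, hi = max(0, i - half_width), min(len(y), i + half_width + 1)
            idx = np.arange(lo, hi)
            w = np.exp(-0.5 * ((idx - i) / sigma_t) ** 2) \
                * np.exp(-0.5 * ((y[idx] - y[i]) / sigma_v) ** 2)
            out[i] = np.sum(w * y[idx]) / np.sum(w)
        return out

    def kernel_regression(x_train, y_train, x_query, bandwidth=0.2):
        """Nadaraya-Watson estimate of y at x_query using a Gaussian kernel."""
        w = np.exp(-0.5 * ((x_query[:, None] - x_train[None, :]) / bandwidth) ** 2)
        return (w @ y_train) / w.sum(axis=1)

    t = np.linspace(0, 10, 400)
    signal = np.sin(t) + np.random.default_rng(0).normal(scale=0.3, size=t.size)
    clean = bilateral_filter_1d(signal)
    pred = kernel_regression(t, clean, np.array([2.0, 5.0, 8.0]))
    print(np.round(pred, 2))
    ```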

  20. Data Quality Enhanced Prediction Model for Massive Plant Data

    International Nuclear Information System (INIS)

    Park, Moon-Ghu; Kang, Seong-Ki; Shin, Hajin

    2016-01-01

    This paper introduces an integrated approach to signal preconditioning and model prediction based mainly on kernel functions. The performance and benefits of the methods are demonstrated by a case study with measurement data from a power plant and transient data from its components. The developed methods will be applied as part of a platform for monitoring massive (big) data, where human experts cannot detect fault behaviors because the measurement sets are too large. Recent extensive efforts to implement on-line monitoring have shown that a major surprise in modeling for the prediction of process variables was the extent of data quality problems in measurement data, especially for data-driven modeling. Bad training data will be learned as normal behavior and can significantly degrade prediction performance. For this reason, the quantity and quality of measurement data in the modeling phase need special care: bad-quality data must be removed from the training sets so that they are not treated as normal system behavior. This paper presents an integrated structure of a supervisory system for monitoring the performance of plants or sensors. The quality of the data-driven model is improved with a bilateral kernel filter for preprocessing of the noisy data. The prediction module is also based on kernel regression, sharing the same basis as the noise filter. The model structure is optimized by a grouping process with a nonlinear Hoeffding correlation function.

  1. New Temperature-based Models for Predicting Global Solar Radiation

    International Nuclear Information System (INIS)

    Hassan, Gasser E.; Youssef, M. Elsayed; Mohamed, Zahraa E.; Ali, Mohamed A.; Hanafy, Ahmed A.

    2016-01-01

    Highlights: • New temperature-based models for estimating solar radiation are investigated. • The models are validated against 20-years measured data of global solar radiation. • The new temperature-based model shows the best performance for coastal sites. • The new temperature-based model is more accurate than the sunshine-based models. • The new model is highly applicable with weather temperature forecast techniques. - Abstract: This study presents new ambient-temperature-based models for estimating global solar radiation as alternatives to the widely used sunshine-based models owing to the unavailability of sunshine data at all locations around the world. Seventeen new temperature-based models are established, validated and compared with other three models proposed in the literature (the Annandale, Allen and Goodin models) to estimate the monthly average daily global solar radiation on a horizontal surface. These models are developed using a 20-year measured dataset of global solar radiation for the case study location (Lat. 30°51′N and long. 29°34′E), and then, the general formulae of the newly suggested models are examined for ten different locations around Egypt. Moreover, the local formulae for the models are established and validated for two coastal locations where the general formulae give inaccurate predictions. Mostly common statistical errors are utilized to evaluate the performance of these models and identify the most accurate model. The obtained results show that the local formula for the most accurate new model provides good predictions for global solar radiation at different locations, especially at coastal sites. Moreover, the local and general formulas of the most accurate temperature-based model also perform better than the two most accurate sunshine-based models from the literature. The quick and accurate estimations of the global solar radiation using this approach can be employed in the design and evaluation of performance for
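
    The abstract does not give the functional forms of the seventeen new models, so the sketch below shows the classic Hargreaves-Samani-type temperature-based estimate that the cited Annandale and Allen models modify; the coefficient value is a rule-of-thumb assumption.

    ```python
    # Hargreaves-Samani-type temperature-based estimate: Rs = k_rs * sqrt(Tmax - Tmin) * Ra.
    # k_rs (~0.16 inland, ~0.19 coastal) is a rule-of-thumb value, not the paper's fit.
    import math

    def temperature_based_solar(tmax_c, tmin_c, ra_mj_m2, k_rs=0.19):
        """Estimate daily global solar radiation (MJ/m^2/day) from the daily
        temperature range and extraterrestrial radiation Ra."""
        return k_rs * math.sqrt(max(tmax_c - tmin_c, 0.0)) * ra_mj_m2

    # Example: coastal site, 9 degC daily range, Ra = 35 MJ/m^2/day
    print(f"Rs ≈ {temperature_based_solar(30.0, 21.0, 35.0):.1f} MJ/m^2/day")
    ```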

  2. Modelling language evolution: Examples and predictions

    Science.gov (United States)

    Gong, Tao; Shuai, Lan; Zhang, Menghan

    2014-06-01

    We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolutions of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines.

  3. The prediction of swimming performance in competition from behavioral information.

    Science.gov (United States)

    Rushall, B S; Leet, D

    1979-06-01

    The swimming performances of the Canadian Team at the 1976 Olympic Games were categorized as being improved or worse than previous best times in the events contested. The two groups had been previously assessed on the Psychological Inventories for Competitive Swimmers. A stepwise multiple-discriminant analysis of the inventory responses revealed that 13 test questions produced a perfect discrimination of group membership. The resultant discriminant functions for predicting performance classification were applied to the test responses of 157 swimmers at the 1977 Canadian Winter National Swimming Championships. Using the same performance classification criteria the accuracy of prediction was not better than chance in three of four sex by performance classifications. This yielded a failure to locate a set of behavioral factors which determine swimming performance improvements in elite competitive circumstances. The possibility of sets of factors which do not discriminate between performances in similar environments or between similar groups of swimmers was raised.

  4. Individualized performance prediction during total sleep deprivation: accounting for trait vulnerability to sleep loss.

    Science.gov (United States)

    Ramakrishnan, Sridhar; Laxminarayan, Srinivas; Thorsley, David; Wesensten, Nancy J; Balkin, Thomas J; Reifman, Jaques

    2012-01-01

    Individual differences in vulnerability to sleep loss can be considerable, and thus, recent efforts have focused on developing individualized models for predicting the effects of sleep loss on performance. Individualized models constructed using a Bayesian formulation, which combines an individual's available performance data with a priori performance predictions from a group-average model, typically need at least 40 h of individual data before showing significant improvement over the group-average model predictions. Here, we improve upon the basic Bayesian formulation for developing individualized models by observing that individuals may be classified into three sleep-loss phenotypes: resilient, average, and vulnerable. For each phenotype, we developed a phenotype-specific group-average model and used these models to identify each individual's phenotype. We then used the phenotype-specific models within the Bayesian formulation to make individualized predictions. Results on psychomotor vigilance test data from 48 individuals indicated that, on average, ∼85% of individual phenotypes were accurately identified within 30 h of wakefulness. The percentage improvement of the proposed approach in 10-h-ahead predictions was 16% for resilient subjects and 6% for vulnerable subjects. The trade-off for these improvements was a slight decrease in prediction accuracy for average subjects.

  5. Bayesian Predictive Models for Rayleigh Wind Speed

    DEFF Research Database (Denmark)

    Shahirinia, Amir; Hajizadeh, Amin; Yu, David C

    2017-01-01

    predictive model of the wind speed aggregates the non-homogeneous distributions into a single continuous distribution. Therefore, the result is able to capture the variation among the probability distributions of the wind speeds at the turbines’ locations in a wind farm. More specifically, instead of using...... a wind speed distribution whose parameters are known or estimated, the parameters are considered as random whose variations are according to probability distributions. The Bayesian predictive model for a Rayleigh which only has a single model scale parameter has been proposed. Also closed-form posterior......One of the major challenges with the increase in wind power generation is the uncertain nature of wind speed. So far the uncertainty about wind speed has been presented through probability distributions. Also the existing models that consider the uncertainty of the wind speed primarily view...
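
    A minimal sketch of a Bayesian predictive model for Rayleigh-distributed wind speed: with an inverse-gamma prior on the squared scale parameter the posterior is available in closed form, and the predictive distribution can be sampled by Monte Carlo. The prior hyperparameters and wind data are illustrative, not the paper's.

    ```python
    # Conjugate Bayesian update for the Rayleigh scale and a sampled posterior predictive.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    wind = rng.rayleigh(scale=7.0, size=200)      # observed wind speeds (m/s)

    # Prior sigma^2 ~ InvGamma(a0, b0); conjugate update with S = sum(v_i^2)
    a0, b0 = 2.0, 50.0
    a_post = a0 + wind.size
    b_post = b0 + 0.5 * np.sum(wind ** 2)

    # Posterior predictive by simulation: draw sigma^2, then a new wind speed
    sigma2 = stats.invgamma.rvs(a_post, scale=b_post, size=10_000, random_state=5)
    v_new = rng.rayleigh(scale=np.sqrt(sigma2))

    print(f"posterior mean of sigma^2: {b_post / (a_post - 1):.1f} (m/s)^2")
    print(f"predictive mean wind: {v_new.mean():.2f} m/s")
    ```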

  6. Prediction of Gas Injection Performance for Heterogeneous Reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Blunt, Michael J.; Orr, Franklin M.

    1999-05-26

    This report describes research carried out in the Department of Petroleum Engineering at Stanford University from September 1996 - September 1997 under the first year of a three-year Department of Energy grant on the Prediction of Gas Injection Performance for Heterogeneous Reservoirs. The research effort is an integrated study of the factors affecting gas injection, from the pore scale to the field scale, and involves theoretical analysis, laboratory experiments and numerical simulation. The original proposal described research in four main areas: (1) Pore scale modeling of three phase flow in porous media; (2) Laboratory experiments and analysis of factors influencing gas injection performance at the core scale with an emphasis on the fundamentals of three phase flow; (3) Benchmark simulations of gas injection at the field scale; and (4) Development of streamline-based reservoir simulator. Each stage of the research is planned to provide input and insight into the next stage, such that at the end we should have an integrated understanding of the key factors affecting field scale displacements.

  7. PREDICTING THERMAL PERFORMANCE OF ROOFING SYSTEMS IN SURABAYA

    Directory of Open Access Journals (Sweden)

    MINTOROGO Danny Santoso

    2015-07-01

    Full Text Available Traditional roofing systems in developing countries like Indonesia are still dominated by roofs pitched at 30°, 45°, and steeper angles; the most widely used roofing cover materials are traditional clay roof tiles, followed by modern concrete roof tiles and ceramic roof tiles. In the 1990s, shop houses were widely built with flat concrete roofs. Green roofs and roof ponds are rarely built to address sustainable environmental issues. Tests of various roof systems in Surabaya were carried out to observe their thermal performance, and mathematical models from three references were also applied for comparison with the tested projects. Calculated with the equation of Kabre et al., the 30° pitched concrete-roof-tile, 30° clay-roof-tile, and 45° pitched concrete-roof-tile roofs admit the worst thermal heat flux into the room, respectively. In contrast, the bare-soil concrete roof and the roof pond system admit the least heat flux. Based on predicted calculations without insulation or a cross-ventilated attic space, the roof pond and the bare-soil concrete (greenery) roof are the appropriate roof systems for Surabaya's climate, while the least recommended roofs are the 30° or 45° pitched concrete-roof-tile systems.

  8. Prediction of Gas Injection Performance for Heterogeneous Reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Blunt, Martin J.; Orr, Franklin M.

    1999-05-17

    This report describes research carried out in the Department of Petroleum Engineering at Stanford University from September 1997 - September 1998 under the second year of a three-year grant from the Department of Energy on the "Prediction of Gas Injection Performance for Heterogeneous Reservoirs." The research effort is an integrated study of the factors affecting gas injection, from the pore scale to the field scale, and involves theoretical analysis, laboratory experiments, and numerical simulation. The original proposal described research in four areas: (1) Pore scale modeling of three phase flow in porous media; (2) Laboratory experiments and analysis of factors influencing gas injection performance at the core scale with an emphasis on the fundamentals of three phase flow; (3) Benchmark simulations of gas injection at the field scale; and (4) Development of streamline-based reservoir simulator. Each stage of the research is planned to provide input and insight into the next stage, such that at the end we should have an integrated understanding of the key factors affecting field scale displacements.

  9. Comparison of two ordinal prediction models

    DEFF Research Database (Denmark)

    Kattan, Michael W; Gerds, Thomas A

    2015-01-01

    system (i.e. old or new), such as the level of evidence for one or more factors included in the system or the general opinions of expert clinicians. However, given the major objective of estimating prognosis on an ordinal scale, we argue that the rival staging system candidates should be compared...... on their ability to predict outcome. We sought to outline an algorithm that would compare two rival ordinal systems on their predictive ability. RESULTS: We devised an algorithm based largely on the concordance index, which is appropriate for comparing two models in their ability to rank observations. We...... demonstrate our algorithm with a prostate cancer staging system example. CONCLUSION: We have provided an algorithm for selecting the preferred staging system based on prognostic accuracy. It appears to be useful for the purpose of selecting between two ordinal prediction models....

  10. Modelling of physical properties - databases, uncertainties and predictive power

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    Physical and thermodynamic properties in the form of raw data or estimated values for pure compounds and mixtures are important pre-requisites for performing tasks such as process design, simulation and optimization; computer aided molecular/mixture (product) design; and product-process analysis...... in the estimated/predicted property values, how to assess the quality and reliability of the estimated/predicted property values? The paper will review a class of models for prediction of physical and thermodynamic properties of organic chemicals and their mixtures based on the combined group contribution – atom

  11. Predictive analytics can support the ACO model.

    Science.gov (United States)

    Bradley, Paul

    2012-04-01

    Predictive analytics can be used to rapidly spot hard-to-identify opportunities to better manage care--a key tool in accountable care. When considering analytics models, healthcare providers should: Make value-based care a priority and act on information from analytics models. Create a road map that includes achievable steps, rather than major endeavors. Set long-term expectations and recognize that the effectiveness of an analytics program takes time, unlike revenue cycle initiatives that may show a quick return.

  12. Predicting Power Outages Using Multi-Model Ensemble Forecasts

    Science.gov (United States)

    Cerrai, D.; Anagnostou, E. N.; Yang, J.; Astitha, M.

    2017-12-01

    Every year, power outages affect millions of people in the United States, with adverse effects on the economy and everyday life. An Outage Prediction Model (OPM) has been developed at the University of Connecticut to help utilities restore outages quickly and limit their consequences for the population. The OPM, operational since 2015, combines several non-parametric machine learning (ML) models that use historical storm simulations and high-resolution weather forecasts, satellite remote sensing data, and infrastructure and land cover data to predict the number and spatial distribution of power outages. This study presents a new methodology for improving outage model performance by combining weather- and soil-related variables from three different weather models (WRF 3.7, WRF 3.8, and RAMS/ICLAMS). First, we evaluate each model variable by comparing historical weather analyses with station data or reanalysis over the entire storm data set. Each variable of the new outage model version is then extracted from the weather model that performs best for that variable, and sensitivity tests are performed to identify the most efficient variable combination for outage prediction. Although the final set of variables is drawn from different weather models, this multi-weather-forcing, multi-statistical-model ensemble outperforms the currently operational OPM version based on a single weather forcing (WRF 3.7), because each model component is closest to the actual atmospheric state.

  13. Predictive modeling in homogeneous catalysis: a tutorial

    NARCIS (Netherlands)

    Maldonado, A.G.; Rothenberg, G.

    2010-01-01

    Predictive modeling has become a practical research tool in homogeneous catalysis. It can help to pinpoint ‘good regions’ in the catalyst space, narrowing the search for the optimal catalyst for a given reaction. Just like any other new idea, in silico catalyst optimization is accepted by some

  14. Feedback model predictive control by randomized algorithms

    NARCIS (Netherlands)

    Batina, Ivo; Stoorvogel, Antonie Arij; Weiland, Siep

    2001-01-01

    In this paper we present a further development of an algorithm for stochastic disturbance rejection in model predictive control with input constraints based on randomized algorithms. The algorithm presented in our work can solve the problem of stochastic disturbance rejection approximately but with

  15. A Robustly Stabilizing Model Predictive Control Algorithm

    Science.gov (United States)

    Ackmece, A. Behcet; Carson, John M., III

    2007-01-01

    A model predictive control (MPC) algorithm that differs from prior MPC algorithms has been developed for controlling an uncertain nonlinear system. This algorithm guarantees the resolvability of an associated finite-horizon optimal-control problem in a receding-horizon implementation.

  16. Hierarchical Model Predictive Control for Resource Distribution

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob

    2010-01-01

    This paper deals with hierarchichal model predictive control (MPC) of distributed systems. A three level hierachical approach is proposed, consisting of a high level MPC controller, a second level of so-called aggregators, controlled by an online MPC-like algorithm, and a lower level of autonomous...

  17. Mathematical model for dissolved oxygen prediction in Cirata ...

    African Journals Online (AJOL)

    Cirata reservoir is one of the reservoirs that suffer from eutrophication, indicated by the rapid growth of water hyacinth and mass fish deaths resulting from a lack of oxygen. This paper presents the implementation and performance of a mathematical model to predict the concentration of dissolved oxygen in Cirata Reservoir, West ...

  18. Active diagnosis of hybrid systems - A model predictive approach

    DEFF Research Database (Denmark)

    Tabatabaeipour, Seyed Mojtaba; Ravn, Anders P.; Izadi-Zamanabadi, Roozbeh

    2009-01-01

    outputs constrained by tolerable performance requirements. As in standard model predictive control, the first element of the optimal input is applied to the system and the whole procedure is repeated until the fault is detected by a passive diagnoser. It is demonstrated how the generated excitation signal...

  19. Model Predictive Control for Dynamic Unreliable Resource Allocation

    National Research Council Canada - National Science Library

    Castanon, David

    2002-01-01

    .... The approximation is used in a model predictive control (MPC) algorithm. For single resource problems, the MPC algorithm completes over 98 percent of the task value completed by an optimal dynamic programming algorithm in over 1,000 randomly generated problems. On average, it achieves 99.5 percent of the optimal performance while requiring over 6 orders of magnitude less computation.

  20. Stochastic models for predicting pitting corrosion damage of HLRW containers

    International Nuclear Information System (INIS)

    Henshall, G.A.

    1991-10-01

    Stochastic models for predicting aqueous pitting corrosion damage of high-level radioactive-waste containers are described. These models could be used to predict the time required for the first pit to penetrate a container and the increase in the number of breaches at later times, both of which would be useful in the repository system performance analysis. Monte Carlo implementations of the stochastic models are described, and predictions of induction time, survival probability and pit depth distributions are presented. These results suggest that the pit nucleation probability decreases with exposure time and that pit growth may be a stochastic process. The advantages and disadvantages of the stochastic approach, methods for modeling the effects of environment, and plans for future work are discussed
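
    A generic Monte Carlo sketch of this kind of stochastic pitting model, under assumed rates and dimensions: pits nucleate at random times with a probability that decays with exposure, each pit grows as a power law of time since nucleation, and the first-penetration time is the earliest time any pit reaches the wall thickness. All rates, exponents and dimensions are illustrative assumptions, not the report's parameters.

    ```python
    # Monte Carlo estimate of first-penetration time for a generic stochastic pitting model.
    import numpy as np

    rng = np.random.default_rng(11)
    wall = 10.0          # assumed container wall thickness (mm)
    horizon = 10_000.0   # simulated exposure time (yr)

    def first_penetration_time(n_sites=500):
        # early-weighted nucleation times: fewer new pits at later exposure times
        t_nuc = horizon * rng.beta(0.5, 2.0, size=n_sites)
        nucleates = rng.uniform(size=n_sites) < 0.05           # most sites never pit
        k = rng.lognormal(mean=-1.0, sigma=0.5, size=n_sites)  # growth coefficient
        n_exp = 0.4                                            # growth exponent
        # solve depth = k * (t - t_nuc)**n_exp = wall for t
        t_pen = t_nuc + (wall / k) ** (1.0 / n_exp)
        t_pen[~nucleates] = np.inf
        return t_pen.min()

    times = np.array([first_penetration_time() for _ in range(1000)])
    print(f"median first-penetration time: {np.median(times[np.isfinite(times)]):.0f} yr")
    print(f"probability of no breach within horizon: {(times > horizon).mean():.2f}")
    ```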