WorldWideScience

Sample records for modeling work predicts

  1. Predicting Freeway Work Zone Delays and Costs with a Hybrid Machine-Learning Model

    Directory of Open Access Journals (Sweden)

    Bo Du

    2017-01-01

    A hybrid machine-learning model, integrating an artificial neural network (ANN) and a support vector machine (SVM) model, is developed to predict spatiotemporal delays, subject to road geometry, number of lane closures, and work zone duration in different periods of a day and on different days of the week. The model is user friendly, requiring minimal input from users, so that the delays caused by a work zone at any location on a New Jersey freeway can be predicted. To this end, large amounts of data from different sources were collected to establish the relationship between the model inputs and outputs. A comparative analysis was conducted, and the results indicate that the proposed model outperforms others in terms of the lowest root mean square error (RMSE). The proposed hybrid model can be used to calculate contractor penalties for cost overruns as well as incentive reward schedules in the case of early work completion. Additionally, it can assist work zone planners in determining the best start and end times of a work zone for developing and evaluating traffic mitigation and management plans.
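
    A minimal sketch of the kind of ANN/SVM hybrid described above, built with scikit-learn on synthetic data and scored by RMSE; the features, data and simple averaging of the two learners are illustrative assumptions, not the paper's actual inputs or combination scheme.

        import numpy as np
        from sklearn.datasets import make_regression
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.neural_network import MLPRegressor
        from sklearn.svm import SVR
        from sklearn.metrics import mean_squared_error

        # Synthetic stand-in for work zone features (geometry, lane closures, duration, period)
        X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        ann = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0))
        svm = make_pipeline(StandardScaler(), SVR(C=10.0))

        preds = []
        for model in (ann, svm):
            model.fit(X_tr, y_tr)
            preds.append(model.predict(X_te))
        hybrid = np.mean(preds, axis=0)                   # simple average of the two learners

        rmse = np.sqrt(mean_squared_error(y_te, hybrid))  # comparison metric named in the abstract
        print(f"hybrid RMSE: {rmse:.2f}")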

  2. Return to Work After Lumbar Microdiscectomy - Personalizing Approach Through Predictive Modeling.

    Science.gov (United States)

    Papić, Monika; Brdar, Sanja; Papić, Vladimir; Lončar-Turukalo, Tatjana

    2016-01-01

    Lumbar disc herniation (LDH) is the most common disease among the working population requiring surgical intervention. This study aims to predict the return to work after operative treatment of LDH based on an observational study including 153 patients. The classification problem was approached using decision trees (DT), support vector machines (SVM) and a multilayer perceptron (MLP) combined with the RELIEF algorithm for feature selection. MLP provided the best recall of 0.86 for the class of patients not returning to work, which, combined with the selected features, enables early identification of and personalized, targeted interventions towards subjects at risk of prolonged disability. The predictive modeling indicated the most decisive risk factors in prolonged work absence: psychosocial factors, mobility of the spine and structural changes of the facet joints, and professional factors including standing, sitting and microclimate.

  3. Prediction of Critical Power and W' in Hypoxia: Application to Work-Balance Modelling.

    Science.gov (United States)

    Townsend, Nathan E; Nichols, David S; Skiba, Philip F; Racinais, Sebastien; Périard, Julien D

    2017-01-01

    Purpose: Develop a prediction equation for critical power (CP) and work above CP (W') in hypoxia for use in the work-balance ([Formula: see text]) model. Methods: Nine trained male cyclists completed cycling time trials (TT; 12, 7, and 3 min) to determine CP and W' at five altitudes (250, 1,250, 2,250, 3,250, and 4,250 m). Least squares regression was used to predict CP and W' at altitude. A high-intensity intermittent test (HIIT) was performed at 250 and 2,250 m. Actual and predicted CP and W' were used to compute W' during HIIT using differential ([Formula: see text]) and integral ([Formula: see text]) forms of the [Formula: see text] model. Results: CP decreased at altitude. Conclusion: The prediction equations for CP and W' developed in this study are suitable for use with the [Formula: see text] model in acute hypoxia. This enables the application of [Formula: see text] modelling to training prescription and competition analysis at altitude.
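
    The differential form of the work-balance model referred to above can be sketched as follows; this is a generic Skiba-style linear-recovery update, not necessarily the exact formulation used in the study, and the CP and W' values in the example are illustrative.

        def w_bal_differential(power, cp, w_prime, dt=1.0):
            """Track W' balance (J) for a power series (W) sampled every dt seconds."""
            balance, wb = [], float(w_prime)
            for p in power:
                if p >= cp:
                    wb -= (p - cp) * dt                              # expend W' above CP
                else:
                    wb += (w_prime - wb) * (cp - p) / w_prime * dt   # recover below CP
                wb = min(wb, w_prime)
                balance.append(wb)
            return balance

        # Example: 3 min at 350 W, then 2 min at 150 W (CP = 250 W, W' = 20 kJ assumed)
        series = [350] * 180 + [150] * 120
        print(round(w_bal_differential(series, cp=250, w_prime=20000)[-1]))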

  4. Predicting Sustainable Work Behavior

    DEFF Research Database (Denmark)

    Hald, Kim Sundtoft

    2013-01-01

    Sustainable work behavior is an important issue for operations managers – it has implications for most outcomes of OM. This research explores the antecedents of sustainable work behavior. It revisits and extends the sociotechnical model developed by Brown et al. (2000) on predicting safe behavior...

  5. Predicting employees' well-being using work-family conflict and job strain models.

    Science.gov (United States)

    Karimi, Leila; Karimi, Hamidreza; Nouri, Aboulghassem

    2011-04-01

    The present study examined the effects of two models of work–family conflict (WFC) and job-strain on the job-related and context-free well-being of employees. The participants of the study consisted of Iranian employees from a variety of organizations. The effects of three dimensions of the job-strain model and six forms of WFC on affective well-being were assessed. The results of hierarchical multiple regression analysis revealed that the number of working hours, strain-based work interfering with family life (WIF) along with job characteristic variables (i.e. supervisory support, job demands and job control) all make a significant contribution to the prediction of job-related well-being. On the other hand, strain-based WIF and family interfering with work (FIW) significantly predicted context-free well-being. Implications are drawn and recommendations made regarding future research and interventions in the workplace.

  6. Cross-national validation of prognostic models predicting sickness absence and the added value of work environment variables.

    Science.gov (United States)

    Roelen, Corné A M; Stapelfeldt, Christina M; Heymans, Martijn W; van Rhenen, Willem; Labriola, Merete; Nielsen, Claus V; Bültmann, Ute; Jensen, Chris

    2015-06-01

    To validate Dutch prognostic models including age, self-rated health and prior sickness absence (SA) for ability to predict high SA in Danish eldercare. The added value of work environment variables to the models' risk discrimination was also investigated. 2,562 municipal eldercare workers (95% women) participated in the Working in Eldercare Survey. Predictor variables were measured by questionnaire at baseline in 2005. Prognostic models were validated for predictions of high (≥30) SA days and high (≥3) SA episodes retrieved from employer records during 1-year follow-up. The accuracy of predictions was assessed by calibration graphs, and the ability of the models to discriminate between high- and low-risk workers was investigated by ROC analysis. The added value of work environment variables was measured with the Integrated Discrimination Improvement (IDI). 1,930 workers had complete data for analysis. The models underestimated the risk of high SA in eldercare workers and the SA episodes model had to be re-calibrated to the Danish data. Discrimination was practically useful for the re-calibrated SA episodes model, but not the SA days model. Physical workload improved the SA days model (IDI = 0.40; 95% CI 0.19-0.60) and psychosocial work factors, particularly the quality of leadership (IDI = 0.70; 95% CI 0.53-0.86), improved the SA episodes model. The prognostic model predicting high SA days showed poor performance even after physical workload was added. The prognostic model predicting high SA episodes could be used to identify high-risk workers, especially when psychosocial work factors are added as predictor variables.
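
    The Integrated Discrimination Improvement used above to quantify the added value of work environment variables compares the mean predicted risks of two models in cases and non-cases; a hedged sketch with illustrative inputs:

        import numpy as np

        def idi(p_new, p_old, y):
            """IDI = discrimination slope of the extended model minus that of the base model."""
            y = np.asarray(y, dtype=bool)
            p_new, p_old = np.asarray(p_new, float), np.asarray(p_old, float)
            return ((p_new[y].mean() - p_old[y].mean())          # improvement in cases
                    - (p_new[~y].mean() - p_old[~y].mean()))     # minus change in non-cases

        # Illustrative predicted probabilities of high sickness absence (not study data)
        y = [1, 1, 0, 0, 0, 1]
        p_base = [0.60, 0.55, 0.40, 0.35, 0.50, 0.45]
        p_plus_work_env = [0.70, 0.65, 0.35, 0.30, 0.45, 0.55]
        print(round(idi(p_plus_work_env, p_base, y), 3))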

  7. Aespoe Pillar Stability Experiment. Summary of preparatory work and predictive modelling

    International Nuclear Information System (INIS)

    Andersson, J. Christer

    2004-11-01

    The Aespoe Pillar Stability Experiment, APSE, is a large-scale rock mechanics experiment for research on the spalling process and the possibility of modelling it numerically. The experiment can be summarized in three objectives: Demonstrate the current capability to predict spalling in a fractured rock mass; Demonstrate the effect of backfill (confining pressure) on the rock mass response; and Compare 2D and 3D mechanical and thermal predictive capabilities. This report is a summary of the work that has been performed in the experiment prior to the heating of the rock mass. The major activities that have been performed and are discussed herein are: 1) The geology of the experiment drift in general and the experiment volume in particular. 2) The design process of the experiment and the thoughts behind some of the important decisions. 3) The monitoring programme and the supporting constructions for the instruments. 4) The numerical modelling, the approaches taken and a summary of the predictions. At the end of the report there is a comparison of the results from the different models, as well as a comparison of the time needed to build, run and make changes to the different models.

  8. Work-principle model for predicting toxic fumes of nonideal explosives

    Energy Technology Data Exchange (ETDEWEB)

    Wieland, Michael S. [National Institute of Occupational Safety and Health, Pittsburgh Research Center, P.O. Box 18070, Pittsburgh, PA 15236-0070 (United States)

    2004-08-01

    The work-principle from thermodynamics was used to formulate a model for predicting toxic fumes from mining explosives in underground chamber tests, where rapid turbulent combustion within the surrounding air noticeably changes the resulting concentrations. Two model constants were required to help characterize the reaction zone undergoing rapid chemical transformations in conjunction with heat transfer and work output: a stoichiometry mixing fraction and a reaction-quenching temperature. Rudimentary theory with an unsteady uniform concentration gradient was taken to characterize the combustion zone, yielding 75% for the mixing fraction. Four quenching temperature trends were resolved and compared to test results of ammonium nitrate compositions with different fuel-oil percentages (ANFO). The quenching temperature 2345 K was the optimum choice for fitting the two major components of fume toxicity: carbon monoxide (CO) and total nitrogen oxides (NOₓ). The resulting two-constant model was used to generate comparisons for test results of ANFO compositions with additives. Though respectable fits were usually found, charge formulations which reacted weakly could not be resolved numerically. The work-principle model yields toxic concentrations for a range of charge formulations, making it a useful tool for investigating the potential hazard of released fumes and reducing the risk of unwanted incidents. (Abstract Copyright [2004], Wiley Periodicals, Inc.)

  9. Applying mathematical models to predict resident physician performance and alertness on traditional and novel work schedules.

    Science.gov (United States)

    Klerman, Elizabeth B; Beckett, Scott A; Landrigan, Christopher P

    2016-09-13

    In 2011 the U.S. Accreditation Council for Graduate Medical Education began limiting first year resident physicians (interns) to shifts of ≤16 consecutive hours. Controversy persists regarding the effectiveness of this policy for reducing errors and accidents while promoting education and patient care. Using a mathematical model of the effects of circadian rhythms and length of time awake on objective performance and subjective alertness, we quantitatively compared predictions for traditional intern schedules to those that limit work to ≤16 consecutive hours. We simulated two traditional schedules and three novel schedules using the mathematical model. The traditional schedules had extended duration work shifts (≥24 h) with overnight work every second shift (including every third night; Q3) or every third shift (including every fourth night; Q4); the novel schedules had two different cross-cover (XC) night team schedules (XC-V1 and XC-V2) and a Rapid Cycle Rotation (RCR) schedule. Predicted objective performance and subjective alertness for each work shift were computed for each individual's schedule within a team and then combined for the team as a whole. Our primary outcome was the amount of time within a work shift during which a team's model-predicted objective performance and subjective alertness were lower than that expected after 16 or 24 h of continuous wake in an otherwise rested individual. The model predicted fewer hours with poor performance and alertness, especially during night-time work hours, for all three novel schedules than for either the traditional Q3 or Q4 schedules. Three proposed schedules that eliminate extended shifts may improve performance and alertness compared with traditional Q3 or Q4 schedules. Predicted times of worse performance and alertness were at night, which is also a time when supervision of trainees is lower. Mathematical modeling provides a quantitative comparison approach with potential to aid
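
    A model of the kind referred to above combines a homeostatic sleep-pressure process with a circadian oscillation; the sketch below is a generic two-process illustration with assumed time constants, amplitude and phase, not the specific model applied in the study.

        import math

        TAU_WAKE, TAU_SLEEP, AMP = 18.2, 4.2, 0.12   # assumed time constants (h) and circadian amplitude

        def simulate_alertness(schedule, dt=0.25, s0=0.5, t0=0.0):
            """schedule: list of (duration_h, awake_bool); returns a list of (clock_time_h, alertness)."""
            s, t, trace = s0, t0, []
            for duration, awake in schedule:
                for _ in range(int(duration / dt)):
                    if awake:
                        s += (1.0 - s) * dt / TAU_WAKE   # sleep pressure builds while awake
                    else:
                        s -= s * dt / TAU_SLEEP          # and dissipates during sleep
                    c = AMP * math.cos(2 * math.pi * (t - 18.0) / 24.0)  # circadian drive (assumed phase)
                    trace.append((t % 24.0, (1.0 - s) + c))              # higher value = more alert
                    t += dt
            return trace

        # A night of sleep starting at midnight, then a 24 h extended shift
        trace = simulate_alertness([(8.0, False), (24.0, True)])
        print(min(trace, key=lambda point: point[1]))    # clock time of lowest predicted alertness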

  10. [Application of predictive model to estimate concentrations of chemical substances in the work environment].

    Science.gov (United States)

    Kupczewska-Dobecka, Małgorzata; Czerczak, Sławomir; Jakubowski, Marek; Maciaszek, Piotr; Janasik, Beata

    2010-01-01

    Based on the Estimation and Assessment of Substance Exposure (EASE) predictive model implemented into the European Union System for the Evaluation of Substances (EUSES 2.1), the exposure to three chosen organic solvents (toluene, ethyl acetate and acetone) was estimated and compared with the results of measurements in workplaces. Prior to validation, the EASE model was pretested using three exposure scenarios. The scenarios differed in the decision tree of pattern of use. Five substances were chosen for the test: 1,4-dioxane, tert-methyl-butyl ether, diethylamine, 1,1,1-trichloroethane and bisphenol A. After testing the EASE model, the next step was validation by estimating the exposure level and comparing it with the results of measurements in the workplace. We used the results of measurements of toluene, ethyl acetate and acetone concentrations in the work environment of a paint and lacquer factory, a shoe factory and a refinery. Three types of exposure scenarios, adaptable to the description of working conditions, were chosen to estimate inhalation exposure. Comparison of the calculated exposure to toluene, ethyl acetate and acetone with measurements in workplaces showed that the model predictions are comparable with the measurement results. Only for low concentration ranges were the measured concentrations higher than those predicted. EASE is a clear, consistent system, which can be successfully used as an additional component of inhalation exposure estimation. If measurement data are available, they should be preferred to values estimated from models. In addition to inhalation exposure estimation, the EASE model makes it possible not only to assess exposure-related risk but also to predict workers' dermal exposure.

  11. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO project called "Intelligent wind power prediction systems" (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines the possibilities w.r.t. different numerical weather predictions actually available to the project.

  12. The Level of Quality of Work Life to Predict Work Alienation

    Science.gov (United States)

    Erdem, Mustafa

    2014-01-01

    The current research aims to determine the level of elementary school teachers' quality of work life (QWL) to predict work alienation. The study was designed using the relational survey model. The research population consisted of 1096 teachers employed at 25 elementary schools within the city of Van in the academic year 2010-2011, and 346…

  13. Prediction of ttt curves of cold working tool steels using support vector machine model

    Science.gov (United States)

    Pillai, Nandakumar; Karthikeyan, R., Dr.

    2018-04-01

    Cold working tool steels are high carbon steels with metallic alloy additions which impart higher hardenability, abrasion resistance and less distortion in quenching. The microstructural changes occurring in tool steel during heat treatment are of great importance, as the final properties of the steel depend upon the changes that occur during the process. In order to obtain the desired performance, the alloy constituents and their ratio play a vital role, as the steel transformation itself is complex in nature and depends very much upon time and temperature. A proper treatment can deliver satisfactory results, while a process deviation can completely spoil them. Knowing the time temperature transformation (TTT) behaviour of the phases is therefore critical; it varies for each steel type depending upon its constituents and their proportion range. To obtain adequate post heat treatment properties, the percentage of retained austenite should be low and the metallic carbides obtained should be fine in nature. A support vector machine is a computational model which can learn from observed data and use it to predict or solve problems using a mathematical model. A back propagation feedback network will be created and trained for further solutions. The points on the TTT curves for the known transformations are used to plot the curves for different materials. These data will be used to train the model to predict TTT curves for other steels having similar alloying constituents but with different proportion ranges. The proposed methodology can be used for prediction of TTT curves for cold working steels and for prediction of phases for different heat treatment methods.
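
    A hedged sketch of the fitting step described above: digitised (temperature, time) points from known TTT curves are used to train a support vector regressor, which is then queried at new temperatures. The data points below are made-up placeholders, not measured transformation times.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVR

        temps = np.array([750, 700, 650, 600, 550, 500, 450], dtype=float).reshape(-1, 1)  # deg C
        log_time = np.log10([900, 300, 80, 40, 60, 150, 600])   # log10(s), illustrative C-shaped curve

        model = make_pipeline(StandardScaler(), SVR(C=100.0, epsilon=0.01))
        model.fit(temps, log_time)

        query = np.array([[625.0], [575.0], [525.0]])
        print(10 ** model.predict(query))                        # predicted transformation-start times (s)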

  14. Cross-National Validation of Prognostic Models Predicting Sickness Absence and the Added Value of Work Environment Variables

    NARCIS (Netherlands)

    Roelen, Corne A. M.; Stapelfeldt, Christina M.; Heymans, Martijn W.; van Rhenen, Willem; Labriola, Merete; Nielsen, Claus V.; Bultmann, Ute; Jensen, Chris

    Purpose To validate Dutch prognostic models including age, self-rated health and prior sickness absence (SA) for ability to predict high SA in Danish eldercare. The added value of work environment variables to the models' risk discrimination was also investigated. Methods 2,562 municipal eldercare

  15. Prediction of work piece geometry in electrochemical cavity sinking

    Energy Technology Data Exchange (ETDEWEB)

    Riggs, J B; Muller, R H; Tobias, C W

    1981-01-01

    A computer-implemented model for predicting ECM work piece geometry has been developed and experimentally verified with a commercial ECM machine for cavity sinking in copper and 302 stainless steel with 2N KNO₃ electrolyte. Constant tool piece feed rates of 7-10 × 10⁻⁴ cm/s and applied voltages of 11-25 V were used. The model predicts the dependence of work piece geometry on operating conditions and on the electrochemical and physical properties of the metal-electrolyte pair. Comparison of eight equilibrium and six unsteady state experimental cavity profiles in copper showed satisfactory agreement with predictions, as did five equilibrium profiles for cavity sinking in 302 stainless steel.

  16. Prediction of work metabolism from heart rate measurements in forest work: some practical methodological issues.

    Science.gov (United States)

    Dubé, Philippe-Antoine; Imbeau, Daniel; Dubeau, Denise; Auger, Isabelle; Leone, Mario

    2015-01-01

    Individual heart rate (HR) to workload relationships were determined using 93 submaximal step-tests administered to 26 healthy participants attending physical activities in a university training centre (laboratory study) and 41 experienced forest workers (field study). Predicted maximum aerobic capacity (MAC) was compared to measured MAC from a maximal treadmill test (laboratory study) to test the effect of two age-predicted maximum HR equations (220-age and 207-0.7 × age) and two clothing insulation levels (0.4 and 0.91 clo) during the step-test. Work metabolism (WM) estimated from forest work HR was compared against concurrent work V̇O2 measurements while taking into account the HR thermal component. Results show that MAC and WM can be accurately predicted from work HR measurements and the simple regression models developed in this study (1% group mean prediction bias and up to 25% expected prediction bias for a single individual). Clothing insulation had no impact on predicted MAC or on the age-predicted maximum HR equations. Practitioner summary: This study sheds light on four practical methodological issues faced by practitioners regarding the use of the HR methodology to assess WM in actual work environments. More specifically, the effects of wearing work clothes and of using two different maximum HR prediction equations on the ability of a submaximal step-test to assess MAC are examined, as well as the accuracy of using an individual's step-test HR to workload relationship to predict WM from HR data collected during actual work in the presence of thermal stress.

  17. Inverse and Predictive Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Syracuse, Ellen Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-27

    The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple – one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions – to the complex – multidimensional models that are constrained by several types of data and result in more accurate predictions. While team members typically build models of geophysical characteristics of the Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics over an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  18. Study on the Influence of the Work Hardening Models Constitutive Parameters Identification in the Springback Prediction

    International Nuclear Information System (INIS)

    Oliveira, M.C.; Menezes, L. F.; Alves, J.L.; Chaparro, B.M.

    2005-01-01

    The main goal of this work is to determine the influence of the work hardening model on the numerical prediction of springback. This study is performed according to the specifications of the first phase of 'Benchmark 3' of the Numisheet'2005 Conference: the 'Channel Draw'. Several work hardening constitutive models are used in order to allow a better description of the different material mechanical behaviours. Two are classical pure isotropic hardening models, described by a power law (Swift) or a Voce-type saturation equation. These two models were also combined with a non-linear (Lemaitre and Chaboche) kinematic hardening rule. The final one is the Teodosiu microstructural hardening model. The study is performed for two steels commonly used in the automotive industry: a mild (DC06) and a dual phase (DP600) steel. The mechanical characterization, as well as the constitutive parameter identification for each work hardening model, was performed by LPMTM, based on an appropriate set of experimental data such as uniaxial tensile tests, monotonic and Bauschinger simple shear tests and orthogonal strain path tests, all at various orientations with respect to the rolling direction. All the simulations were carried out with CEMUC's in-house code DD3IMP (contraction of 'Deep Drawing 3-D IMPlicit code').
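
    For reference, the two classical isotropic hardening laws named above can be written as short functions; the parameter values are illustrative placeholders, not the identified DC06 or DP600 coefficients.

        import numpy as np

        def swift(eps_p, K=500.0, eps0=0.01, n=0.2):
            """Swift power law: flow stress (MPa) as a function of equivalent plastic strain."""
            return K * (eps0 + eps_p) ** n

        def voce(eps_p, sigma0=150.0, sigma_sat=400.0, c_y=10.0):
            """Voce saturation law: flow stress (MPa) as a function of equivalent plastic strain."""
            return sigma_sat - (sigma_sat - sigma0) * np.exp(-c_y * eps_p)

        eps = np.linspace(0.0, 0.3, 7)
        print(np.round(swift(eps), 1))
        print(np.round(voce(eps), 1))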

  19. Cross-National Validation of Prognostic Models Predicting Sickness Absence and the Added Value of Work Environment Variables

    NARCIS (Netherlands)

    Roelen, C.A.M.; Stapelfeldt, C.M.; Heijmans, M.W.; van Rhenen, W.; Labriola, M.; Nielsen, C.V.; Bultmann, U.; Jensen, C.

    2015-01-01

    Purpose To validate Dutch prognostic models including age, self-rated health and prior sickness absence (SA) for ability to predict high SA in Danish eldercare. The added value of work environment variables to the models’ risk discrimination was also investigated. Methods 2,562 municipal eldercare

  20. The Job Demands-Resources model as predictor of work identity and work engagement: A comparative analysis

    Directory of Open Access Journals (Sweden)

    Roslyn De Braine

    2011-05-01

    Research purpose: This study explored possible differences in the Job Demands-Resources model (JD-R) as predictor of overall work engagement, dedication only and work-based identity, through comparative predictive analyses. Motivation for the study: This study may shed light on the dedication component of work engagement. Currently no literature indicates that the JD-R model has been used to predict work-based identity. Research design: A census-based survey was conducted amongst a target population of 23134 employees that yielded a sample of 2429 (a response rate of about 10.5%). The Job Demands-Resources scale (JDRS) was used to measure job demands and job resources. A work-based identity scale was developed for this study. Work engagement was studied with the Utrecht Work Engagement Scale (UWES). Factor and reliability analyses were conducted on the scales and general multiple regression models were used in the predictive analyses. Main findings: The JD-R model yielded a greater amount of variance in dedication than in work engagement. It, however, yielded the greatest amount of variance in work-based identity, with job resources being its strongest predictor. Practical/managerial implications: Identification and work engagement levels can be improved by managing job resources and demands. Contribution/value-add: This study builds on the literature of the JD-R model by showing that it can be used to predict work-based identity.

  1. A new, accurate predictive model for incident hypertension

    DEFF Research Database (Denmark)

    Völzke, Henry; Fung, Glenn; Ittermann, Till

    2013-01-01

    Data mining represents an alternative approach to identify new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures.

  2. Unreachable Setpoints in Model Predictive Control

    DEFF Research Database (Denmark)

    Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp

    2008-01-01

    In this work, a new model predictive controller is developed that handles unreachable setpoints better than traditional model predictive control methods. The new controller induces an interesting fast/slow asymmetry in the tracking response of the system. Nominal asymptotic stability of the optimal steady state is established for terminal constraint model predictive control (MPC). The region of attraction is the steerable set. Existing analysis methods for closed-loop properties of MPC are not applicable to this new formulation, and a new analysis method is developed. It is shown how to extend...

  3. Predictive Modeling of a Paradigm Mechanical Cooling Tower Model: II. Optimal Best-Estimate Results with Reduced Predicted Uncertainties

    Directory of Open Access Journals (Sweden)

    Ruixian Fang

    2016-09-01

    This work uses the adjoint sensitivity model of the counter-flow cooling tower derived in the accompanying PART I to obtain the expressions and relative numerical rankings of the sensitivities, to all model parameters, of the following model responses: (i) outlet air temperature; (ii) outlet water temperature; (iii) outlet water mass flow rate; and (iv) air outlet relative humidity. These sensitivities are subsequently used within the “predictive modeling for coupled multi-physics systems” (PM_CMPS) methodology to obtain explicit formulas for the predicted optimal nominal values for the model responses and parameters, along with reduced predicted standard deviations for the predicted model parameters and responses. These explicit formulas embody the assimilation of experimental data and the “calibration” of the model’s parameters. The results presented in this work demonstrate that the PM_CMPS methodology reduces the predicted standard deviations to values that are smaller than either the computed or the experimentally measured ones, even for responses (e.g., the outlet water flow rate) for which no measurements are available. These improvements stem from the global characteristics of the PM_CMPS methodology, which combines all of the available information simultaneously in phase-space, as opposed to combining it sequentially, as in current data assimilation procedures.

  4. Perceived versus used workplace flexibility in Singapore: predicting work-family fit.

    Science.gov (United States)

    Jones, Blake L; Scoville, D Phillip; Hill, E Jeffrey; Childs, Geniel; Leishman, Joan M; Nally, Kathryn S

    2008-10-01

    This study examined the relationship of 2 types of workplace flexibility to work-family fit and work, personal, and marriage-family outcomes using data (N = 1,601) representative of employed persons in Singapore. We hypothesized that perceived and used workplace flexibility would be positively related to the study variables. Results derived from structural equation modeling revealed that perceived flexibility predicted work-family fit; however, used flexibility did not. Work-family fit related positively to each work, personal, and marriage-family outcome; however, workplace flexibility only predicted work and personal outcomes. Findings suggest work-family fit may be an important facilitating factor in the interface between work and family life, relating directly to marital satisfaction and satisfaction in other family relationships. Implications of these findings are discussed. Copyright 2008 APA, all rights reserved.

  5. Frequency weighted model predictive control of wind turbine

    DEFF Research Database (Denmark)

    Klauco, Martin; Poulsen, Niels Kjølstad; Mirzaei, Mahmood

    2013-01-01

    This work is focused on applying frequency weighted model predictive control (FMPC) to a three-blade horizontal axis wind turbine (HAWT). A wind turbine is a very complex, non-linear system influenced by stochastic wind speed variation. The reduced dynamics considered in this work are the rotational degree of freedom of the rotor and the tower fore-aft movement. The MPC design is based on a receding horizon policy and a linearised model of the wind turbine. Due to the change of dynamics according to wind speed, several linearisation points must be considered and the control design adjusted accordingly. Results obtained with the frequency weighted model predictive controller are presented, and a statistical comparison between frequency weighted MPC, standard MPC and a baseline PI controller is shown as well.
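
    A minimal receding-horizon MPC step on a linearised state-space model, posed as a constrained quadratic program with CVXPY; the system matrices, weights and actuator limit are illustrative assumptions and no frequency weighting is included.

        import numpy as np
        import cvxpy as cp

        A = np.array([[1.0, 0.1], [0.0, 0.95]])     # assumed discrete-time linearisation
        B = np.array([[0.0], [0.05]])
        Q, R, N = np.diag([10.0, 1.0]), np.array([[0.1]]), 20

        x0 = np.array([1.0, 0.0])                   # current state estimate
        x = cp.Variable((2, N + 1))
        u = cp.Variable((1, N))

        cost, constraints = 0, [x[:, 0] == x0]
        for k in range(N):
            cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
            constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                            cp.abs(u[:, k]) <= 1.0]              # actuator limit
        cp.Problem(cp.Minimize(cost), constraints).solve()

        print("first control move:", u.value[:, 0])  # apply this move, then re-solve at the next sample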

  6. Healthy work revisited: do changes in time strain predict well-being?

    Science.gov (United States)

    Moen, Phyllis; Kelly, Erin L; Lam, Jack

    2013-04-01

    Building on Karasek and Theorell (R. Karasek & T. Theorell, 1990, Healthy work: Stress, productivity, and the reconstruction of working life, New York, NY: Basic Books), we theorized and tested the relationship between time strain (work-time demands and control) and seven self-reported health outcomes. We drew on survey data from 550 employees fielded before and 6 months after the implementation of an organizational intervention, the Results Only Work Environment (ROWE), in a white-collar organization. Cross-sectional (wave 1) models showed psychological time demands and time control measures were related to health outcomes in expected directions. The ROWE intervention did not predict changes in psychological time demands by wave 2, but did predict increased time control (a sense of time adequacy and schedule control). Statistical models revealed increases in psychological time demands and time adequacy predicted changes in positive (energy, mastery, psychological well-being, self-assessed health) and negative (emotional exhaustion, somatic symptoms, psychological distress) outcomes in expected directions, net of job and home demands and covariates. This study demonstrates the value of including time strain in investigations of the health effects of job conditions. Results encourage longitudinal models of change in psychological time demands as well as time control, along with the development and testing of interventions aimed at reducing time strain in different populations of workers.

  7. Serotonergic modulation of spatial working memory: predictions from a computational network model

    Directory of Open Access Journals (Sweden)

    Maria eCano-Colino

    2013-09-01

    Serotonin (5-HT) receptors of types 1A and 2A are massively expressed in prefrontal cortex (PFC) neurons, an area associated with cognitive function. Hence, 5-HT could be effective in modulating prefrontal-dependent cognitive functions, such as spatial working memory (SWM). However, a direct association between 5-HT and SWM has proved elusive in psycho-pharmacological studies. Recently, a computational network model of the PFC microcircuit was used to explore the relationship between 5-HT and SWM (Cano-Colino et al., 2013). This study found that both excessive and insufficient 5-HT levels lead to impaired SWM performance in the network, and it concluded that analyzing behavioral responses based on confidence reports could facilitate the experimental identification of SWM behavioral effects of 5-HT neuromodulation. Such analyses may have confounds based on our limited understanding of metacognitive processes. Here, we extend these results by deriving three additional predictions from the model that do not rely on confidence reports. Firstly, only excessive levels of 5-HT should result in SWM deficits that increase with delay duration. Secondly, an excessive 5-HT baseline concentration makes the network vulnerable to distractors at distances that were robust to distraction in control conditions, while the network still ignores distractors efficiently for low 5-HT levels that impair SWM. Finally, 5-HT modulates neuronal memory fields in neurophysiological experiments: neurons should be better tuned to the cued stimulus than to the behavioral report for excessive 5-HT levels, while the reverse should happen for low 5-HT concentrations. In all our simulations, agonists of 5-HT1A receptors and antagonists of 5-HT2A receptors produced behavioral and physiological effects in line with global 5-HT level increases. Our model makes specific predictions to be tested experimentally and to advance our understanding of the neural basis of SWM and its neuromodulation.

  8. Modelling fatigue and the use of fatigue models in work settings.

    Science.gov (United States)

    Dawson, Drew; Ian Noy, Y; Härmä, Mikko; Akerstedt, Torbjorn; Belenky, Gregory

    2011-03-01

    In recent years, theoretical models of the sleep and circadian system developed in laboratory settings have been adapted to predict fatigue and, by inference, performance. This is typically done using the timing of prior sleep and waking or working hours as the primary input and the time course of the predicted variables as the primary output. The aim of these models is to provide employers, unions and regulators with quantitative information on the likely average level of fatigue, or risk, associated with a given pattern of work and sleep with the goal of better managing the risk of fatigue-related errors and accidents/incidents. The first part of this review summarises the variables known to influence workplace fatigue and draws attention to the considerable variability attributable to individual and task variables not included in current models. The second part reviews the current fatigue models described in the scientific and technical literature and classifies them according to whether they predict fatigue directly by using the timing of prior sleep and wake (one-step models) or indirectly by using work schedules to infer an average sleep-wake pattern that is then used to predict fatigue (two-step models). The third part of the review looks at the current use of fatigue models in field settings by organizations and regulators. Given their limitations it is suggested that the current generation of models may be appropriate for use as one element in a fatigue risk management system. The final section of the review looks at the future of these models and recommends a standardised approach for their use as an element of the 'defenses-in-depth' approach to fatigue risk management. Copyright © 2010 Elsevier Ltd. All rights reserved.

  9. Working Memory Load Strengthens Reward Prediction Errors.

    Science.gov (United States)

    Collins, Anne G E; Ciullo, Brittany; Frank, Michael J; Badre, David

    2017-04-19

    Reinforcement learning (RL) in simple instrumental tasks is usually modeled as a monolithic process in which reward prediction errors (RPEs) are used to update expected values of choice options. This modeling ignores the different contributions of different memory and decision-making systems thought to contribute even to simple learning. In an fMRI experiment, we investigated how working memory (WM) and incremental RL processes interact to guide human learning. WM load was manipulated by varying the number of stimuli to be learned across blocks. Behavioral results and computational modeling confirmed that learning was best explained as a mixture of two mechanisms: a fast, capacity-limited, and delay-sensitive WM process together with slower RL. Model-based analysis of fMRI data showed that striatum and lateral prefrontal cortex were sensitive to RPE, as shown previously, but, critically, these signals were reduced when the learning problem was within capacity of WM. The degree of this neural interaction related to individual differences in the use of WM to guide behavioral learning. These results indicate that the two systems do not process information independently, but rather interact during learning. SIGNIFICANCE STATEMENT Reinforcement learning (RL) theory has been remarkably productive at improving our understanding of instrumental learning as well as dopaminergic and striatal network function across many mammalian species. However, this neural network is only one contributor to human learning and other mechanisms such as prefrontal cortex working memory also play a key role. Our results also show that these other players interact with the dopaminergic RL system, interfering with its key computation of reward prediction errors. Copyright © 2017 the authors 0270-6474/17/374332-11$15.00/0.
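
    In the spirit of the mixture described above, a toy simulation can combine a slow delta-rule learner (driven by reward prediction errors) with a capacity-limited, one-shot working-memory store; the mixing rule, decay rate and other parameters below are simplified assumptions, not the fitted model from the study.

        import numpy as np

        rng = np.random.default_rng(0)
        n_stim, n_act, alpha, beta, capacity = 6, 3, 0.1, 8.0, 3
        correct = rng.integers(n_act, size=n_stim)          # stimulus -> rewarded action
        Q = np.full((n_stim, n_act), 1.0 / n_act)           # slow, incremental RL values
        WM = np.full((n_stim, n_act), 1.0 / n_act)          # fast, decaying WM store
        w = min(1.0, capacity / n_stim)                     # WM weight shrinks with set size

        def softmax(v):
            e = np.exp(beta * (v - v.max()))
            return e / e.sum()

        for trial in range(300):
            s = rng.integers(n_stim)
            policy = w * softmax(WM[s]) + (1 - w) * softmax(Q[s])
            a = rng.choice(n_act, p=policy)
            r = float(a == correct[s])
            Q[s, a] += alpha * (r - Q[s, a])                # reward prediction error update
            WM[s] = 1.0 / n_act                             # one-shot overwrite of the WM entry
            WM[s, a] = r
            WM += 0.05 * (1.0 / n_act - WM)                 # WM decays toward uniform over trials

        print(np.round(Q, 2))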

  10. Prediction of pipeline corrosion rate based on grey Markov models

    International Nuclear Information System (INIS)

    Chen Yonghong; Zhang Dafa; Peng Guichu; Wang Yuemin

    2009-01-01

    Based on a model combining a grey model and a Markov model, the prediction of the corrosion rate of nuclear power pipelines was studied. Work was done to improve the grey model, and an optimized unbiased grey model was obtained. This new model was used to predict the tendency of the corrosion rate, and the Markov model was used to predict the residual errors. In order to improve the prediction precision, a rolling operation method was used in these prediction processes. The results indicate that the improvement to the grey model is effective, that the prediction precision of the new model combining the optimized unbiased grey model and the Markov model is better, and that the use of the rolling operation method may improve the prediction precision further. (authors)
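
    The grey part of such a scheme is typically a GM(1,1) model; a minimal sketch (without the unbiased optimization or the Markov residual correction described above) on a made-up corrosion-rate series:

        import numpy as np

        def gm11_forecast(x0, n_ahead=1):
            """Plain GM(1,1): fit on series x0 and return n_ahead out-of-sample forecasts."""
            x0 = np.asarray(x0, dtype=float)
            x1 = np.cumsum(x0)                                   # accumulated generating operation
            z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values
            B = np.column_stack([-z1, np.ones_like(z1)])
            a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # developing coefficient, grey input
            k = np.arange(len(x0) + n_ahead)
            x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # time response function
            return np.diff(x1_hat, prepend=0.0)[len(x0):]        # restore by inverse accumulation

        rates = [0.42, 0.45, 0.47, 0.50, 0.53]                   # illustrative corrosion rates
        print(gm11_forecast(rates, n_ahead=2))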

  11. Models for predicting compressive strength and water absorption of ...

    African Journals Online (AJOL)

    This work presents a mathematical model for predicting the compressive strength and water absorption of laterite-quarry dust cement block using augmented Scheffe's simplex lattice design. The statistical models developed can predict the mix proportion that will yield the desired property. The models were tested for lack of ...

  12. AN EFFICIENT PATIENT INFLOW PREDICTION MODEL FOR HOSPITAL RESOURCE MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Kottalanka Srikanth

    2017-07-01

    There has been increasing demand for improving service provisioning in hospital resource management. Hospital industries work with strict budget constraints while at the same time assuring quality care. To achieve quality care under budget constraints, an efficient prediction model is required. Recently, various time series based prediction models have been proposed to manage hospital resources such as ambulance monitoring, emergency care and so on. These models are not efficient, as they do not consider the nature of the scenario, such as climate conditions. To address this, artificial intelligence is adopted. The issue with existing prediction models is that the training suffers from local optima error. This induces overhead and affects the prediction accuracy. To overcome the local minima error, this work presents a patient inflow prediction model adopting a resilient backpropagation neural network. Experiments are conducted to evaluate the performance of the proposed model in terms of RMSE and MAPE. The outcome shows the proposed model reduces RMSE and MAPE compared with the existing backpropagation based artificial neural network. The overall outcomes show the proposed prediction model improves the accuracy of prediction, which aids in improving the quality of health care management.
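
    The two evaluation metrics named above are straightforward to compute; a self-contained sketch with illustrative patient-inflow numbers:

        import numpy as np

        def rmse(y_true, y_pred):
            y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
            return np.sqrt(np.mean((y_true - y_pred) ** 2))

        def mape(y_true, y_pred):
            y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
            return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

        actual = [120, 135, 150, 142]          # illustrative daily patient inflow
        forecast = [118, 140, 147, 150]
        print(rmse(actual, forecast), mape(actual, forecast))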

  13. Two stage neural network modelling for robust model predictive control.

    Science.gov (United States)

    Patan, Krzysztof

    2018-01-01

    The paper proposes a novel robust model predictive control scheme realized by means of artificial neural networks. The neural networks are used twofold: to design the so-called fundamental model of a plant and to catch uncertainty associated with the plant model. In order to simplify the optimization process carried out within the framework of predictive control an instantaneous linearization is applied which renders it possible to define the optimization problem in the form of constrained quadratic programming. Stability of the proposed control system is also investigated by showing that a cost function is monotonically decreasing with respect to time. Derived robust model predictive control is tested and validated on the example of a pneumatic servomechanism working at different operating regimes. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  14. Multi-model analysis in hydrological prediction

    Science.gov (United States)

    Lanthier, M.; Arsenault, R.; Brissette, F.

    2017-12-01

    Hydrologic modelling, by nature, is a simplification of the real-world hydrologic system. Therefore, ensemble hydrological predictions thus obtained do not present the full range of possible streamflow outcomes, thereby producing ensembles which demonstrate errors in variance such as under-dispersion. Past studies show that lumped models used in prediction mode can return satisfactory results, especially when there is not enough information available on the watershed to run a distributed model. But all lumped models greatly simplify the complex processes of the hydrologic cycle. To generate more spread in the hydrologic ensemble predictions, multi-model ensembles have been considered. In this study, the aim is to propose and analyse a method that gives an ensemble streamflow prediction that properly represents the forecast probabilities with reduced ensemble bias. To achieve this, three simple lumped models are used to generate an ensemble. These will also be combined using multi-model averaging techniques, which generally generate a more accurate hydrograph than the best of the individual models in simulation mode. This new predictive combined hydrograph is added to the ensemble, thus creating a large ensemble which may improve the variability while also improving the ensemble mean bias. The quality of the predictions is then assessed on different periods: 2 weeks, 1 month, 3 months and 6 months using a PIT Histogram of the percentiles of the real observation volumes with respect to the volumes of the ensemble members. Initially, the models were run using historical weather data to generate synthetic flows. This worked for individual models, but not for the multi-model and for the large ensemble. Consequently, by performing data assimilation at each prediction period and thus adjusting the initial states of the models, the PIT Histogram could be constructed using the observed flows while allowing the use of the multi-model predictions. The under-dispersion has been

  15. Comparative Analysis of Predictive Models of Pain Level from Work-Related Musculoskeletal Disorders among Sewing Machine Operators in the Garments Industry

    Directory of Open Access Journals (Sweden)

    Carlos Ignacio P. Luga

    2017-02-01

    The Philippine garments industry has been experiencing a roller-coaster ride during the past decades, with much competition from its Asian neighbors, especially in the wake of the ASEAN 2015 Integration. One of the areas in the industry which can be looked into and possibly improved is the concern of Work-related Musculoskeletal Disorders (WMSDs). The literature has shown that pain from WMSDs among sewing machine operators in this industry is very prevalent and that its effects on the same operators have been very costly. After identifying the risk factors which may cause pain from WMSDs, this study generated three models to predict the said pain level. These models were analyzed and compared, and the best model was identified as making the most accurate prediction of pain level. This predictive model would be helpful for the management of garment firms since, first, the risk factors have been identified and hence can be used as bases for proposed improvements. Second, the prediction of each operator’s pain level would allow management to better assess its employees in terms of their sewing capacity vis-à-vis the company’s production plans.

  16. PVT characterization and viscosity modeling and prediction of crude oils

    DEFF Research Database (Denmark)

    Cisneros, Eduardo Salvador P.; Dalberg, Anders; Stenby, Erling Halfdan

    2004-01-01

    In previous works, the general, one-parameter friction theory (f-theory) models have been applied to the accurate viscosity modeling of reservoir fluids. As a base, the f-theory approach requires a compositional characterization procedure for the application of an equation of state (EOS), in most... pressure, is also presented. The combination of the mass characterization scheme presented in this work and the f-theory can also deliver accurate viscosity modeling results. Additionally, depending on how extensive the compositional characterization is, the approach presented in this work may also deliver accurate viscosity predictions. The modeling approach presented in this work can deliver accurate viscosity and density modeling and prediction results over wide ranges of reservoir conditions, including the compositional changes induced by recovery processes such as gas injection.

  17. Predicting work Performance through selection interview ratings and Psychological assessment

    Directory of Open Access Journals (Sweden)

    Liziwe Nzama

    2008-11-01

    The aim of the study was to establish whether selection interviews used in conjunction with psychological assessments of personality traits and cognitive functioning contribute to predicting work performance. The sample consisted of 102 managers who were appointed recently in a retail organisation. The independent variables were selection interview ratings obtained on the basis of structured competency-based interview schedules by interviewing panels, five broad dimensions of personality defined by the Five Factor Model as measured by the 15 Factor Questionnaire (15FQ+), and cognitive processing variables (current level of work, potential level of work, and 12 processing competencies) measured by the Cognitive Process Profile (CPP). Work performance was measured through annual performance ratings that focused on measurable outputs of performance objectives. Only two predictor variables correlated statistically significantly with the criterion variable, namely interview ratings (r = 0.31) and CPP Verbal Abstraction (r = 0.34). Following multiple regression, only these variables contributed significantly to predicting work performance, but only 17.8% of the variance of the criterion was accounted for.

  18. Predictive modeling of coupled multi-physics systems: I. Theory

    International Nuclear Information System (INIS)

    Cacuci, Dan Gabriel

    2014-01-01

    Highlights: • We developed “predictive modeling of coupled multi-physics systems (PMCMPS)”. • PMCMPS reduces predicted uncertainties in predicted model responses and parameters. • PMCMPS treats efficiently very large coupled systems. - Abstract: This work presents an innovative mathematical methodology for “predictive modeling of coupled multi-physics systems (PMCMPS).” This methodology takes into account fully the coupling terms between the systems but requires only the computational resources that would be needed to perform predictive modeling on each system separately. The PMCMPS methodology uses the maximum entropy principle to construct an optimal approximation of the unknown a priori distribution based on a priori known mean values and uncertainties characterizing the parameters and responses for both multi-physics models. This “maximum entropy”-approximate a priori distribution is combined, using Bayes’ theorem, with the “likelihood” provided by the multi-physics simulation models. Subsequently, the posterior distribution thus obtained is evaluated using the saddle-point method to obtain analytical expressions for the optimally predicted values for the multi-physics models parameters and responses along with corresponding reduced uncertainties. Notably, the predictive modeling methodology for the coupled systems is constructed such that the systems can be considered sequentially rather than simultaneously, while preserving exactly the same results as if the systems were treated simultaneously. Consequently, very large coupled systems, which could perhaps exceed available computational resources if treated simultaneously, can be treated with the PMCMPS methodology presented in this work sequentially and without any loss of generality or information, requiring just the resources that would be needed if the systems were treated sequentially.
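
    The flavour of such a best-estimate update can be illustrated with a generic linear-Gaussian assimilation step, in which combining computed and measured responses yields calibrated parameters and a posterior covariance smaller than the prior; this is only an illustration of the idea, not the PM_CMPS formulas themselves, and all numbers are assumed.

        import numpy as np

        alpha_prior = np.array([1.0, 2.0])                 # prior parameter estimates (assumed)
        C_alpha = np.diag([0.2 ** 2, 0.3 ** 2])            # prior parameter covariance
        S = np.array([[0.8, 0.5]])                         # response sensitivities d(response)/d(parameter)

        r_computed = S @ alpha_prior                       # computed response and its uncertainty
        C_r_comp = S @ C_alpha @ S.T
        r_measured, C_r_meas = np.array([2.3]), np.array([[0.15 ** 2]])

        K = C_alpha @ S.T @ np.linalg.inv(C_r_comp + C_r_meas)
        alpha_best = alpha_prior + K @ (r_measured - r_computed)   # calibrated ("best-estimate") parameters
        C_alpha_best = C_alpha - K @ S @ C_alpha                   # reduced predicted uncertainties

        print(alpha_best, np.sqrt(np.diag(C_alpha_best)))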

  19. Driver's mental workload prediction model based on physiological indices.

    Science.gov (United States)

    Yan, Shengyuan; Tran, Cong Chi; Wei, Yingying; Habiyaremye, Jean Luc

    2017-09-15

    Developing an early warning model to predict the driver's mental workload (MWL) is critical and helpful, especially for new or less experienced drivers. The present study aims to investigate the correlation between new drivers' MWL and their work performance, regarding the number of errors. Additionally, the group method of data handling is used to establish the driver's MWL predictive model based on subjective rating (NASA task load index [NASA-TLX]) and six physiological indices. The results indicate that the NASA-TLX and the number of errors are positively correlated, and the predictive model demonstrates its validity with an R² value of 0.745. The proposed model is expected to provide new drivers with a reference value for their MWL based on the physiological indices, and driving lesson plans can be proposed to sustain an appropriate MWL as well as improve the driver's work performance.

  20. Predicting medical specialists' working (long) hours: Testing a contemporary career model

    NARCIS (Netherlands)

    Pas, B.R.; Eisinga, R.N.; Doorewaard, J.A.C.M.

    2016-01-01

    With the feminization (in numbers) of several professions, changing gender role prescriptions regarding parenthood and increased attention to work-life balance, career theorists recently addressed the need for a more contemporary career model taking a work-home perspective. In this study, we

  1. A general phenomenological model for work function

    Science.gov (United States)

    Brodie, I.; Chou, S. H.; Yuan, H.

    2014-07-01

    A general phenomenological model is presented for obtaining the zero Kelvin work function of any crystal facet of metals and semiconductors, both clean and covered with a monolayer of electropositive atoms. It utilizes the known physical structure of the crystal and the Fermi energy of the two-dimensional electron gas assumed to form on the surface. A key parameter is the number of electrons donated to the surface electron gas per surface lattice site or adsorbed atom, which is taken to be an integer. Initially this is found by trial and later justified by examining the state of the valence electrons of the relevant atoms. In the case of adsorbed monolayers of electropositive atoms a satisfactory justification could not always be found, particularly for cesium, but a trial value always predicted work functions close to the experimental values. The model can also predict the variation of work function with temperature for clean crystal facets. The model is applied to various crystal faces of tungsten, aluminium, silver, and select metal oxides, and most demonstrate good fits compared to available experimental values.

  2. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  3. On the role of passion for work in burnout: a process model.

    Science.gov (United States)

    Vallerand, Robert J; Paquet, Yvan; Philippe, Frederick L; Charest, Julie

    2010-02-01

    The purpose of the present research was to test a model on the role of passion for work in professional burnout. This model posits that obsessive passion produces conflict between work and other life activities because the person cannot let go of the work activity. Conversely, harmonious passion is expected to prevent conflict while positively contributing to work satisfaction. Finally, conflict is expected to contribute to burnout, whereas work satisfaction should prevent its occurrence. This model was tested in 2 studies with nurses in 2 cultures. Using a cross-sectional design, Study 1 (n=97) provided support for the model with nurses from France. In Study 2 (n=258), a prospective design was used to further test the model with nurses from the Province of Quebec over a 6-month period. Results provided support for the model. Specifically, harmonious passion predicted an increase in work satisfaction and a decrease in conflict. Conversely, obsessive passion predicted an increase in conflict. In turn, work satisfaction and conflict predicted, respectively, decreases and increases in burnout over time. The results have important implications for theory and research on passion as well as burnout.

  4. Hidden Semi-Markov Models for Predictive Maintenance

    Directory of Open Access Journals (Sweden)

    Francesco Cartella

    2015-01-01

    Full Text Available Realistic predictive maintenance approaches are essential for condition monitoring and predictive maintenance of industrial machines. In this work, we propose Hidden Semi-Markov Models (HSMMs) with (i) no constraints on the state duration density function and (ii) being applied to continuous or discrete observation. To deal with such a type of HSMM, we also propose modifications to the learning, inference, and prediction algorithms. Finally, automatic model selection has been made possible using the Akaike Information Criterion. This paper describes the theoretical formalization of the model as well as several experiments performed on simulated and real data with the aim of methodology validation. In all performed experiments, the model is able to correctly estimate the current state and to effectively predict the time to a predefined event with a low overall average absolute error. As a consequence, its applicability to real world settings can be beneficial, especially where in real time the Remaining Useful Lifetime (RUL) of the machine is calculated.
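
    The learning, inference, and prediction algorithms for the duration-explicit HSMMs are not given in this record. As a loose, hedged stand-in, the sketch below performs Akaike-Information-Criterion model selection over ordinary Gaussian hidden Markov models using the hmmlearn library (which does not implement HSMMs); the observation sequence and the parameter count are illustrative assumptions only.

        # Hedged sketch: AIC-based selection of the number of hidden states for a
        # Gaussian HMM, used here only as a stand-in for the record's HSMMs
        # (explicit state-duration densities are not modeled).
        import numpy as np
        from hmmlearn.hmm import GaussianHMM

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 2))          # placeholder observation sequence

        def n_params(k, d):
            # start probs + transitions + means + full covariances (illustrative count)
            return (k - 1) + k * (k - 1) + k * d + k * d * (d + 1) // 2

        best = None
        for k in range(2, 6):
            model = GaussianHMM(n_components=k, covariance_type="full",
                                n_iter=200, random_state=0).fit(X)
            aic = 2 * n_params(k, X.shape[1]) - 2 * model.score(X)   # score() = log-likelihood
            if best is None or aic < best[0]:
                best = (aic, k, model)

        print("selected number of states:", best[1])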

  5. A predictive model for dimensional errors in fused deposition modeling

    DEFF Research Database (Denmark)

    Stolfi, A.

    2015-01-01

    This work concerns the effect of deposition angle (a) and layer thickness (L) on the dimensional performance of FDM parts using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole a range from 0° to 177° at 3° steps and two values of L (0.254 mm, 0.330 mm) was produced by comparing predicted values with external face-to-face measurements. After removing outliers, the results show that the developed two-parameter model can serve as a tool for modeling the FDM dimensional behavior in a wide range of deposition angles.

  6. Inhibition, Updating Working Memory, and Shifting Predict Reading Disability Symptoms in a Hybrid Model: Project KIDS.

    Science.gov (United States)

    Daucourt, Mia C; Schatschneider, Christopher; Connor, Carol M; Al Otaiba, Stephanie; Hart, Sara A

    2018-01-01

    Recent achievement research suggests that executive function (EF), a set of regulatory processes that control both thought and action necessary for goal-directed behavior, is related to typical and atypical reading performance. This project examines the relation of EF, as measured by its components, Inhibition, Updating Working Memory, and Shifting, with a hybrid model of reading disability (RD). Our sample included 420 children who participated in a broader intervention project when they were in KG-third grade (age M = 6.63 years, SD = 1.04 years, range = 4.79-10.40 years). At the time their EF was assessed, using a parent-report Behavior Rating Inventory of Executive Function (BRIEF), they had a mean age of 13.21 years (SD = 1.54 years; range = 10.47-16.63 years). The hybrid model of RD was operationalized as a composite consisting of four symptoms, and set so that any child could have any one, any two, any three, any four, or none of the symptoms included in the hybrid model. The four symptoms include low word reading achievement, unexpected low word reading achievement, poorer reading comprehension compared to listening comprehension, and dual-discrepancy response-to-intervention, requiring both low achievement and low growth in word reading. The results of our multilevel ordinal logistic regression analyses showed a significant relation between all three components of EF (Inhibition, Updating Working Memory, and Shifting) and the hybrid model of RD, and that the strength of EF's predictive power for RD classification was the highest when RD was modeled as having at least one or more symptoms. Importantly, the chances of being classified as having RD increased as EF performance worsened and decreased as EF performance improved. The question of whether any one EF component would emerge as a superior predictor was also examined and results showed that Inhibition, Updating Working Memory, and Shifting were equally valuable as predictors of the hybrid model of RD.

  7. Bootstrap prediction and Bayesian prediction under misspecified models

    OpenAIRE

    Fushiki, Tadayoshi

    2005-01-01

    We consider a statistical prediction problem under misspecified models. In a sense, Bayesian prediction is an optimal prediction method when an assumed model is true. Bootstrap prediction is obtained by applying Breiman's `bagging' method to a plug-in prediction. Bootstrap prediction can be considered to be an approximation to the Bayesian prediction under the assumption that the model is true. However, in applications, there are frequently deviations from the assumed model. In this paper, bo...
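
    As a minimal, hedged illustration of bootstrap ("bagged") prediction in the sense sketched above, the snippet below averages a plug-in predictor, here ordinary least squares on synthetic data, over bootstrap resamples of the training set.

        # Minimal sketch: bootstrap ("bagged") prediction as the average of a
        # plug-in predictor over bootstrap resamples of the training data.
        import numpy as np

        rng = np.random.default_rng(1)
        X = rng.normal(size=(100, 3))
        y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.3, size=100)
        x_new = np.array([0.2, 0.1, -0.4])

        def plug_in_predict(Xb, yb, x):
            beta, *_ = np.linalg.lstsq(Xb, yb, rcond=None)   # plug-in (least-squares) estimate
            return x @ beta

        B = 500
        preds = [plug_in_predict(X[idx], y[idx], x_new)
                 for idx in (rng.integers(0, len(y), size=len(y)) for _ in range(B))]

        print("plug-in prediction  :", plug_in_predict(X, y, x_new))
        print("bootstrap prediction:", np.mean(preds))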

  8. A grey NGM(1,1, k) self-memory coupling prediction model for energy consumption prediction.

    Science.gov (United States)

    Guo, Xiaojun; Liu, Sifeng; Wu, Lifeng; Tang, Lingling

    2014-01-01

    Energy consumption prediction is an important issue for governments, energy sector investors, and other related corporations. Although there are several prediction techniques, selection of the most appropriate technique is of vital importance. As for the approximate nonhomogeneous exponential data sequence often emerging in the energy system, a novel grey NGM(1,1, k) self-memory coupling prediction model is put forward in order to improve the predictive performance. It achieves organic integration of the self-memory principle of dynamic system and grey NGM(1,1, k) model. The traditional grey model's weakness of being sensitive to the initial value can be overcome by the self-memory principle. In this study, total energy, coal, and electricity consumption of China is adopted for demonstration by using the proposed coupling prediction technique. The results show the superiority of the NGM(1,1, k) self-memory coupling prediction model when compared with the results from the literature. Its excellent prediction performance lies in that the proposed coupling model can take full advantage of the systematic multitime historical data and catch the stochastic fluctuation tendency. This work also makes a significant contribution to the enrichment of grey prediction theory and the extension of its application span.
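
    The NGM(1,1, k) self-memory coupling model itself is not specified in this record. As a hedged reference point only, the sketch below implements the classical GM(1,1) grey model on an illustrative series; the NGM(1,1, k) and self-memory extensions would modify the whitening equation and the treatment of the initial condition.

        # Hedged sketch: classical GM(1,1) grey prediction model (not the
        # NGM(1,1,k) self-memory coupling variant described in the record).
        import numpy as np

        x0 = np.array([2.87, 3.28, 3.34, 3.39, 3.68, 3.83])     # illustrative consumption series
        x1 = np.cumsum(x0)                                       # accumulated generating operation (AGO)
        z1 = 0.5 * (x1[1:] + x1[:-1])                            # background values

        B = np.column_stack((-z1, np.ones_like(z1)))
        (a, b), *_ = np.linalg.lstsq(B, x0[1:], rcond=None)      # development coefficient a, grey input b

        k = np.arange(len(x0) + 3)                               # fitted range plus a 3-step forecast
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a        # whitening-equation solution
        x0_hat = np.concatenate(([x1_hat[0]], np.diff(x1_hat)))  # inverse AGO
        print(np.round(x0_hat, 3))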

  9. Predictive modeling of liquid-sodium thermal–hydraulics experiments and computations

    International Nuclear Information System (INIS)

    Arslan, Erkan; Cacuci, Dan G.

    2014-01-01

    Highlights: • We applied the predictive modeling method of Cacuci and Ionescu-Bujor (2010). • We assimilated data from sodium flow experiments. • We used computational fluid dynamics simulations of sodium experiments. • The predictive modeling method greatly reduced uncertainties in predicted results. - Abstract: This work applies the predictive modeling procedure formulated by Cacuci and Ionescu-Bujor (2010) to assimilate data from liquid-sodium thermal–hydraulics experiments in order to reduce systematically the uncertainties in the predictions of computational fluid dynamics (CFD) simulations. The predicted CFD-results for the best-estimate model parameters and results describing sodium-flow velocities and temperature distributions are shown to be significantly more precise than the original computations and experiments, in that the predicted uncertainties for the best-estimate results and model parameters are significantly smaller than both the originally computed and the experimental uncertainties

  10. Comparative Evaluation of Some Crop Yield Prediction Models ...

    African Journals Online (AJOL)

    A computer program was adopted from the work of Hill et al. (1982) to calibrate and test three of the existing yield prediction models using tropical cowpea yield-weather data. The models tested were Hanks Model (first and second versions), Stewart Model (first and second versions) and Hall-Butcher Model. Three sets of ...

  11. Predicting absenteeism: screening for work ability or burnout.

    Science.gov (United States)

    Schouteten, R

    2017-01-01

    In determining the predictors of occupational health problems, two factors can be distinguished: personal (work ability) factors and work-related factors (burnout, job characteristics). However, these risk factors are hardly ever combined and it is not clear whether burnout or work ability best predicts absenteeism. To relate measures of work ability, burnout and job characteristics to absenteeism as the indicators of occupational health problems. Survey data on work ability, burnout and job characteristics from a Dutch university were related to the absenteeism data from the university's occupational health and safety database in the year following the survey study. The survey contained the Work Ability Index (WAI), Utrecht Burnout Scale (UBOS) and seven job characteristics from the Questionnaire on Experience and Evaluation of Work (QEEW). There were 242 employees in the study group. Logistic regression analyses revealed that job characteristics did not predict absenteeism. Exceptional absenteeism was most consistently predicted by the WAI dimensions 'employees' own prognosis of work ability in two years from now' and 'mental resources/vitality' and the burnout dimension 'emotional exhaustion'. Other significant predictors of exceptional absenteeism frequency included estimated work impairment due to diseases (WAI) and feelings of depersonalization or emotional distance from the work (burnout). Absenteeism among university personnel was best predicted by a combination of work ability and burnout. As a result, measures to prevent absenteeism and health problems may best be aimed at improving an individual's work ability and/or preventing the occurrence of burnout.
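
    A hedged sketch of the kind of regression reported above: logistic regression of an exceptional-absenteeism indicator on work-ability and burnout scores with scikit-learn. The variable names and data are synthetic stand-ins, not the study's measures.

        # Hedged sketch: logistic regression of exceptional absenteeism on
        # work-ability and burnout dimensions (synthetic stand-in data).
        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(2)
        n = 242
        df = pd.DataFrame({
            "wai_prognosis": rng.normal(0, 1, n),   # own prognosis of work ability
            "wai_vitality":  rng.normal(0, 1, n),   # mental resources / vitality
            "emotional_exh": rng.normal(0, 1, n),   # burnout: emotional exhaustion
        })
        logit = -1.0 - 0.8 * df["wai_prognosis"] - 0.6 * df["wai_vitality"] + 0.9 * df["emotional_exh"]
        y = rng.random(n) < 1 / (1 + np.exp(-logit))              # synthetic absenteeism indicator

        model = LogisticRegression().fit(df, y)
        print(dict(zip(df.columns, np.round(np.exp(model.coef_[0]), 2))))  # odds ratios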

  12. Modeling the prediction of business intelligence system effectiveness.

    Science.gov (United States)

    Weng, Sung-Shun; Yang, Ming-Hsien; Koo, Tian-Lih; Hsiao, Pei-I

    2016-01-01

    Although business intelligence (BI) technologies are continually evolving, the capability to apply BI technologies has become an indispensable resource for enterprises running in today's complex, uncertain and dynamic business environment. This study performed pioneering work by constructing models and rules for the prediction of business intelligence system effectiveness (BISE) in relation to the implementation of BI solutions. For enterprises, effectively managing critical attributes that determine BISE to develop prediction models with a set of rules for self-evaluation of the effectiveness of BI solutions is necessary to improve BI implementation and ensure its success. The main study findings identified the critical prediction indicators of BISE that are important to forecasting BI performance and highlighted five classification and prediction rules of BISE derived from decision tree structures, as well as a refined regression prediction model with four critical prediction indicators constructed by logistic regression analysis that can enable enterprises to improve BISE while effectively managing BI solution implementation and catering to academics to whom theory is important.

  13. Prediction of resource volumes at untested locations using simple local prediction models

    Science.gov (United States)

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2006-01-01

    This paper shows how local spatial nonparametric prediction models can be applied to estimate volumes of recoverable gas resources at individual undrilled sites, at multiple sites on a regional scale, and to compute confidence bounds for regional volumes based on the distribution of those estimates. An approach that combines cross-validation, the jackknife, and bootstrap procedures is used to accomplish this task. Simulation experiments show that cross-validation can be applied beneficially to select an appropriate prediction model. The cross-validation procedure worked well for a wide range of different states of nature and levels of information. Jackknife procedures are used to compute individual prediction estimation errors at undrilled locations. The jackknife replicates also are used with a bootstrap resampling procedure to compute confidence bounds for the total volume. The method was applied to data (partitioned into a training set and target set) from the Devonian Antrim Shale continuous-type gas play in the Michigan Basin in Otsego County, Michigan. The analysis showed that the model estimate of total recoverable volumes at prediction sites is within 4 percent of the total observed volume. The model predictions also provide frequency distributions of the cell volumes at the production unit scale. Such distributions are the basis for subsequent economic analyses.
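
    A hedged, schematic version of the workflow described above: local (here k-nearest-neighbour) prediction of volumes at untested cells, leave-one-out (jackknife) prediction errors at the drilled sites, and bootstrap resampling of those errors to bound the regional total. All data and the choice of predictor are illustrative assumptions.

        # Hedged, schematic sketch: local prediction at untested cells, jackknife
        # prediction errors at drilled sites, bootstrap bounds on the regional total.
        import numpy as np

        rng = np.random.default_rng(3)
        xy = rng.uniform(0, 10, size=(80, 2));  v = rng.lognormal(0, 0.5, 80)   # drilled sites
        xy_target = rng.uniform(0, 10, size=(40, 2))                            # undrilled cells

        def knn_predict(xy_tr, v_tr, xy_q, k=5):
            d = np.linalg.norm(xy_q[:, None, :] - xy_tr[None, :, :], axis=2)
            return v_tr[np.argsort(d, axis=1)[:, :k]].mean(axis=1)

        total_hat = knn_predict(xy, v, xy_target).sum()

        # Jackknife (leave-one-out) prediction errors at the drilled sites.
        errors = np.array([v[i] - knn_predict(np.delete(xy, i, 0), np.delete(v, i), xy[i:i+1])[0]
                           for i in range(len(v))])

        # Bootstrap the errors to put bounds on the total over the 40 target cells.
        totals = total_hat + np.array([rng.choice(errors, size=len(xy_target), replace=True).sum()
                                       for _ in range(2000)])
        lo, hi = np.percentile(totals, [2.5, 97.5])
        print(f"predicted total: {total_hat:.1f}   95% bounds: ({lo:.1f}, {hi:.1f})")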

  14. Predictive Modelling and Time: An Experiment in Temporal Archaeological Predictive Models

    OpenAIRE

    David Ebert

    2006-01-01

    One of the most common criticisms of archaeological predictive modelling is that it fails to account for temporal or functional differences in sites. However, a practical solution to temporal or functional predictive modelling has proven to be elusive. This article discusses temporal predictive modelling, focusing on the difficulties of employing temporal variables, then introduces and tests a simple methodology for the implementation of temporal modelling. The temporal models thus created ar...

  15. Prediction Models for Dynamic Demand Response

    Energy Technology Data Exchange (ETDEWEB)

    Aman, Saima; Frincu, Marc; Chelmis, Charalampos; Noor, Muhammad; Simmhan, Yogesh; Prasanna, Viktor K.

    2015-11-02

    As Smart Grids move closer to dynamic curtailment programs, Demand Response (DR) events will become necessary not only on fixed time intervals and weekdays predetermined by static policies, but also during changing decision periods and weekends to react to real-time demand signals. Unique challenges arise in this context vis-a-vis demand prediction and curtailment estimation and the transformation of such tasks into an automated, efficient dynamic demand response (D2R) process. While existing work has concentrated on increasing the accuracy of prediction models for DR, there is a lack of studies for prediction models for D2R, which we address in this paper. Our first contribution is the formal definition of D2R, and the description of its challenges and requirements. Our second contribution is a feasibility analysis of very-short-term prediction of electricity consumption for D2R over a diverse, large-scale dataset that includes both small residential customers and large buildings. Our third and major contribution is a set of insights into the predictability of electricity consumption in the context of D2R. Specifically, we focus on prediction models that can operate at a very small data granularity (here 15-min intervals), for both weekdays and weekends - all conditions that characterize scenarios for D2R. We find that short-term time series and simple averaging models used by Independent Service Operators and utilities achieve superior prediction accuracy. We also observe that workdays are more predictable than weekends and holidays. Also, smaller customers have large variation in consumption and are less predictable than larger buildings. Key implications of our findings are that better models are required for small customers and for non-workdays, both of which are critical for D2R. Also, prediction models require just a few days’ worth of data indicating that small amounts of
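
    A hedged sketch of the kind of simple averaging baseline the study found competitive: each 15-min interval is predicted by the mean of the same time-of-week slot over the preceding weeks. The series is synthetic and the error metric is only one of several that could be used.

        # Hedged sketch: time-of-week averaging baseline for 15-min consumption
        # data, in the spirit of the simple averaging models discussed above.
        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(4)
        idx = pd.date_range("2015-01-05", periods=96 * 7 * 5, freq="15min")   # 5 weeks of 15-min slots
        load = 10 + 3 * np.sin(2 * np.pi * idx.hour / 24) + rng.normal(0, 0.5, len(idx))
        s = pd.Series(load, index=idx)

        train, test = s.iloc[: 96 * 7 * 4], s.iloc[96 * 7 * 4:]               # last week held out
        slot = train.index.dayofweek * 96 + train.index.hour * 4 + train.index.minute // 15
        profile = train.groupby(slot).mean()                                   # mean per time-of-week slot

        test_slot = test.index.dayofweek * 96 + test.index.hour * 4 + test.index.minute // 15
        pred = profile.loc[test_slot].to_numpy()
        print("MAPE (%):", round(100 * np.mean(np.abs(pred - test.to_numpy()) / test.to_numpy()), 2))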

  16. Sweat loss prediction using a multi-model approach.

    Science.gov (United States)

    Xu, Xiaojiang; Santee, William R

    2011-07-01

    A new multi-model approach (MMA) for sweat loss prediction is proposed to improve prediction accuracy. MMA was computed as the average of sweat loss predicted by two existing thermoregulation models: i.e., the rational model SCENARIO and the empirical model Heat Strain Decision Aid (HSDA). Three independent physiological datasets, a total of 44 trials, were used to compare predictions by MMA, SCENARIO, and HSDA. The observed sweat losses were collected under different combinations of uniform ensembles, environmental conditions (15-40°C, RH 25-75%), and exercise intensities (250-600 W). Root mean square deviation (RMSD), residual plots, and paired t tests were used to compare predictions with observations. Overall, MMA reduced RMSD by 30-39% in comparison with either SCENARIO or HSDA, and increased the prediction accuracy to 66% from 34% or 55%. Of the MMA predictions, 70% fell within the range of mean observed value ± SD, while only 43% of SCENARIO and 50% of HSDA predictions fell within the same range. Paired t tests showed that differences between observations and MMA predictions were not significant, but differences between observations and SCENARIO or HSDA predictions were significantly different for two datasets. Thus, MMA predicted sweat loss more accurately than either of the two single models for the three datasets used. Future work will be to evaluate MMA using additional physiological data to expand the scope of populations and conditions.
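
    A minimal numeric sketch of the multi-model approach as described: MMA is the mean of the two single-model predictions, compared against observations by RMSD. The numbers are illustrative, not the study's data.

        # Minimal sketch: multi-model average (MMA) of two sweat-loss predictions
        # and RMSD comparison against observations (illustrative numbers only).
        import numpy as np

        observed = np.array([410., 520., 650., 300., 480.])   # illustrative observations
        scenario = np.array([450., 500., 700., 340., 520.])   # rational-model predictions
        hsda     = np.array([380., 560., 600., 270., 450.])   # empirical-model predictions
        mma = (scenario + hsda) / 2.0                          # multi-model approach

        rmsd = lambda pred: np.sqrt(np.mean((pred - observed) ** 2))
        for name, pred in [("SCENARIO", scenario), ("HSDA", hsda), ("MMA", mma)]:
            print(f"{name:8s} RMSD = {rmsd(pred):.1f}")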

  17. Predicting FLDs Using a Multiscale Modeling Scheme

    Science.gov (United States)

    Wu, Z.; Loy, C.; Wang, E.; Hegadekatte, V.

    2017-09-01

    The measurement of a single forming limit diagram (FLD) requires significant resources and is time consuming. We have developed a multiscale modeling scheme to predict FLDs using a combination of limited laboratory testing, crystal plasticity (VPSC) modeling, and dual sequential-stage finite element (ABAQUS/Explicit) modeling with the Marciniak-Kuczynski (M-K) criterion to determine the limit strain. We have established a means to work around existing limitations in ABAQUS/Explicit by using an anisotropic yield locus (e.g., BBC2008) in combination with the M-K criterion. We further apply a VPSC model to reduce the number of laboratory tests required to characterize the anisotropic yield locus. In the present work, we show that the predicted FLD is in excellent agreement with the measured FLD for AA5182 in the O temper. Instead of 13 different tests as for a traditional FLD determination within Novelis, our technique uses just four measurements: tensile properties in three orientations; plane strain tension; biaxial bulge; and the sheet crystallographic texture. The turnaround time is consequently far less than for the traditional laboratory measurement of the FLD.

  18. A Grey NGM(1,1, k) Self-Memory Coupling Prediction Model for Energy Consumption Prediction

    Science.gov (United States)

    Guo, Xiaojun; Liu, Sifeng; Wu, Lifeng; Tang, Lingling

    2014-01-01

    Energy consumption prediction is an important issue for governments, energy sector investors, and other related corporations. Although there are several prediction techniques, selection of the most appropriate technique is of vital importance. As for the approximate nonhomogeneous exponential data sequence often emerging in the energy system, a novel grey NGM(1,1, k) self-memory coupling prediction model is put forward in order to improve the predictive performance. It achieves organic integration of the self-memory principle of dynamic system and grey NGM(1,1, k) model. The traditional grey model's weakness of being sensitive to the initial value can be overcome by the self-memory principle. In this study, total energy, coal, and electricity consumption of China is adopted for demonstration by using the proposed coupling prediction technique. The results show the superiority of the NGM(1,1, k) self-memory coupling prediction model when compared with the results from the literature. Its excellent prediction performance lies in that the proposed coupling model can take full advantage of the systematic multitime historical data and catch the stochastic fluctuation tendency. This work also makes a significant contribution to the enrichment of grey prediction theory and the extension of its application span. PMID:25054174

  19. Inhibition, Updating Working Memory, and Shifting Predict Reading Disability Symptoms in a Hybrid Model: Project KIDS

    Directory of Open Access Journals (Sweden)

    Mia C. Daucourt

    2018-03-01

    Full Text Available Recent achievement research suggests that executive function (EF), a set of regulatory processes that control both thought and action necessary for goal-directed behavior, is related to typical and atypical reading performance. This project examines the relation of EF, as measured by its components, Inhibition, Updating Working Memory, and Shifting, with a hybrid model of reading disability (RD). Our sample included 420 children who participated in a broader intervention project when they were in KG-third grade (age M = 6.63 years, SD = 1.04 years, range = 4.79–10.40 years). At the time their EF was assessed, using a parent-report Behavior Rating Inventory of Executive Function (BRIEF), they had a mean age of 13.21 years (SD = 1.54 years; range = 10.47–16.63 years). The hybrid model of RD was operationalized as a composite consisting of four symptoms, and set so that any child could have any one, any two, any three, any four, or none of the symptoms included in the hybrid model. The four symptoms include low word reading achievement, unexpected low word reading achievement, poorer reading comprehension compared to listening comprehension, and dual-discrepancy response-to-intervention, requiring both low achievement and low growth in word reading. The results of our multilevel ordinal logistic regression analyses showed a significant relation between all three components of EF (Inhibition, Updating Working Memory, and Shifting) and the hybrid model of RD, and that the strength of EF’s predictive power for RD classification was the highest when RD was modeled as having at least one or more symptoms. Importantly, the chances of being classified as having RD increased as EF performance worsened and decreased as EF performance improved. The question of whether any one EF component would emerge as a superior predictor was also examined and results showed that Inhibition, Updating Working Memory, and Shifting were equally valuable as predictors of the

  20. Predictive Models of Work-Related Musculoskeletal Disorders (WMSDs) Among Sewing Machine Operators in the Garments Industry

    Directory of Open Access Journals (Sweden)

    Carlos Ignacio P. Lugay

    2015-02-01

    Full Text Available The Philippine garments industry has been a driving force in the country’s economy, with apparel manufacturing firms catering to the local and global markets and providing employment opportunities for skilled Filipinos. Tight competition from neighboring Asian countries, however, has made it difficult for the industry to flourish, especially in the wake of the Association of Southeast Asian Nations (ASEAN) 2015 Integration. To assist the industry, this research examined one of the more common problems among sewing machine operators, termed as Work-related Musculoskeletal Disorders (WMSDs). These disorders are reflected in the frequency and severity of the pain experienced by the sewers while accomplishing their tasks. The causes of these disorders were identified and were correlated with the frequency and severity of pain in various body areas of the operator. To forecast pain from WMSDs among the operators, mathematical models were developed to predict the combined frequency and severity of the pain from WMSDs. Loss time or “unofficial breaktimes” due to pain from WMSDs was likewise forecasted to determine its effects on the firm’s production capacity. Both these predictive models were developed in order to assist garment companies in anticipating better the effects of WMSDs and loss time in their operations. Moreover, ergonomic interventions were suggested to minimize pain from WMSDs, with the expectation of increased productivity of the operators and improved quality of their outputs.

  1. Interpreting Disruption Prediction Models to Improve Plasma Control

    Science.gov (United States)

    Parsons, Matthew

    2017-10-01

    In order for the tokamak to be a feasible design for a fusion reactor, it is necessary to minimize damage to the machine caused by plasma disruptions. Accurately predicting disruptions is a critical capability for triggering any mitigative actions, and a modest amount of attention has been given to efforts that employ machine learning techniques to make these predictions. By monitoring diagnostic signals during a discharge, such predictive models look for signs that the plasma is about to disrupt. Typically these predictive models are interpreted simply to give a `yes' or `no' response as to whether a disruption is approaching. However, it is possible to extract further information from these models to indicate which input signals are more strongly correlated with the plasma approaching a disruption. If highly accurate predictive models can be developed, this information could be used in plasma control schemes to make better decisions about disruption avoidance. This work was supported by a Grant from the 2016-2017 Fulbright U.S. Student Program, administered by the Franco-American Fulbright Commission in France.
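
    A hedged sketch of one way to extract the signal-level information discussed above: permutation importance of each diagnostic input for a trained disruption classifier, using scikit-learn on synthetic data. The signal names and the classifier are assumptions, not the tools of the cited work.

        # Hedged sketch: ranking diagnostic signals by permutation importance for
        # a disruption classifier (synthetic data; illustrative of the idea only).
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.inspection import permutation_importance

        rng = np.random.default_rng(5)
        n = 2000
        signals = {"locked_mode": rng.normal(size=n), "internal_inductance": rng.normal(size=n),
                   "radiated_fraction": rng.normal(size=n), "density": rng.normal(size=n)}
        X = np.column_stack(list(signals.values()))
        y = (1.5 * signals["locked_mode"] + 0.8 * signals["radiated_fraction"]
             + rng.normal(size=n)) > 1.0                      # synthetic "disruptive" label

        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
        imp = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
        for name, score in sorted(zip(signals, imp.importances_mean), key=lambda t: -t[1]):
            print(f"{name:20s} {score:.3f}")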

  2. Working memory predicts children's analogical reasoning.

    Science.gov (United States)

    Simms, Nina K; Frausel, Rebecca R; Richland, Lindsey E

    2018-02-01

    Analogical reasoning is the cognitive skill of drawing relationships between representations, often between prior knowledge and new representations, that allows for bootstrapping cognitive and language development. Analogical reasoning proficiency develops substantially during childhood, although the mechanisms underlying this development have been debated, with developing cognitive resources as one proposed mechanism. We explored the role of executive function (EF) in supporting children's analogical reasoning development, with the goal of determining whether predicted aspects of EF were related to analogical development at the level of individual differences. We assessed 5- to 11-year-old children's working memory, inhibitory control, and cognitive flexibility using measures from the National Institutes of Health Toolbox Cognition battery. Individual differences in children's working memory best predicted performance on an analogical mapping task, even when controlling for age, suggesting a fundamental interrelationship between analogical reasoning and working memory development. These findings underscore the need to consider cognitive capacities in comprehensive theories of children's reasoning development.

  3. Interests, Work Values, and Occupations: Predicting Work Outcomes with the WorkKeys Fit Assessment

    Science.gov (United States)

    Swaney, Kyle B.; Allen, Jeff; Casillas, Alex; Hanson, Mary Ann; Robbins, Steven B.

    2012-01-01

    This study examined whether a measure of person-environment (P-E) fit predicted worker ratings of work attitudes and supervisor ratings of performance. After combining extant data elements and expert ratings of interest and work value characteristics for each occupation in the O*NET system, the authors generated a "Fit Index"--involving profile…

  4. Modeling of Complex Life Cycle Prediction Based on Cell Division

    Directory of Open Access Journals (Sweden)

    Fucheng Zhang

    2017-01-01

    Full Text Available Effective fault diagnosis and reasonable life expectancy are of great significance and practical engineering value for the safety, reliability, and maintenance cost of equipment and the working environment. At present, equipment life prediction methods include prediction based on condition monitoring, combined forecasting models, and data-driven approaches. Most of them require large amounts of data to address the problem. For this issue, we propose learning from the mechanism of cell division in organisms. By studying the complex multifactor correlation life model, we have established a life prediction model of moderate complexity. In this paper, we model life prediction on the basis of cell division. Experiments show that our model can effectively simulate the state of cell division. Using this model as a reference, we intend to apply it to complex equipment life prediction.

  5. A Grey NGM(1,1,k) Self-Memory Coupling Prediction Model for Energy Consumption Prediction

    Directory of Open Access Journals (Sweden)

    Xiaojun Guo

    2014-01-01

    Full Text Available Energy consumption prediction is an important issue for governments, energy sector investors, and other related corporations. Although there are several prediction techniques, selection of the most appropriate technique is of vital importance. As for the approximate nonhomogeneous exponential data sequence often emerging in the energy system, a novel grey NGM(1,1,k) self-memory coupling prediction model is put forward in order to improve the predictive performance. It achieves organic integration of the self-memory principle of dynamic system and grey NGM(1,1,k) model. The traditional grey model’s weakness of being sensitive to the initial value can be overcome by the self-memory principle. In this study, total energy, coal, and electricity consumption of China is adopted for demonstration by using the proposed coupling prediction technique. The results show the superiority of the NGM(1,1,k) self-memory coupling prediction model when compared with the results from the literature. Its excellent prediction performance lies in that the proposed coupling model can take full advantage of the systematic multitime historical data and catch the stochastic fluctuation tendency. This work also makes a significant contribution to the enrichment of grey prediction theory and the extension of its application span.

  6. Predictive power of theoretical modelling of the nuclear mean field: examples of improving predictive capacities

    Science.gov (United States)

    Dedes, I.; Dudek, J.

    2018-03-01

    We examine the effects of the parametric correlations on the predictive capacities of the theoretical modelling keeping in mind the nuclear structure applications. The main purpose of this work is to illustrate the method of establishing the presence and determining the form of parametric correlations within a model as well as an algorithm of elimination by substitution (see text) of parametric correlations. We examine the effects of the elimination of the parametric correlations on the stabilisation of the model predictions further and further away from the fitting zone. It follows that the choice of the physics case and the selection of the associated model are of secondary importance in this case. Under these circumstances we give priority to the relative simplicity of the underlying mathematical algorithm, provided the model is realistic. Following such criteria, we focus specifically on an important but relatively simple case of doubly magic spherical nuclei. To profit from the algorithmic simplicity we chose working with the phenomenological spherically symmetric Woods–Saxon mean-field. We employ two variants of the underlying Hamiltonian, the traditional one involving both the central and the spin orbit potential in the Woods–Saxon form and the more advanced version with the self-consistent density-dependent spin–orbit interaction. We compare the effects of eliminating of various types of correlations and discuss the improvement of the quality of predictions (‘predictive power’) under realistic parameter adjustment conditions.
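
    A hedged sketch of the first step described above, detecting parametric correlations: fit a model and inspect the parameter correlation matrix implied by the fit covariance; parameters with near-unit correlation are candidates for elimination by substitution. The exponential model and data are generic stand-ins, not the Woods–Saxon Hamiltonian.

        # Hedged sketch: detecting parametric correlations from the covariance
        # matrix of a least-squares fit (generic model as a stand-in).
        import numpy as np
        from scipy.optimize import curve_fit

        def model(x, a, b, c):
            return a * np.exp(-b * x) + c

        rng = np.random.default_rng(6)
        x = np.linspace(0, 1.0, 40)                       # narrow "fitting zone"
        y = model(x, 2.0, 1.5, 0.3) + rng.normal(0, 0.02, x.size)

        popt, pcov = curve_fit(model, x, y, p0=[1, 1, 0])
        corr = pcov / np.sqrt(np.outer(np.diag(pcov), np.diag(pcov)))   # correlation matrix
        print(np.round(corr, 2))
        # |corr| close to 1 flags a parametric correlation; one parameter can then
        # be expressed through the other ("elimination by substitution") and the
        # reduced model refitted before extrapolating away from the fitting zone.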

  7. Construction Worker Fatigue Prediction Model Based on System Dynamic

    Directory of Open Access Journals (Sweden)

    Wahyu Adi Tri Joko

    2017-01-01

    Full Text Available Construction accidents can be caused by internal and external factors such as worker fatigue and an unsafe project environment. Tight construction project schedules force construction workers to work overtime for long periods, and this situation leads to worker fatigue. This paper proposes a model to predict construction worker fatigue based on system dynamics (SD). System dynamics is used to represent the correlation among internal and external factors and to simulate the level of worker fatigue. To validate the model, 93 construction workers who worked on a high-rise building construction project were used as a case study. The results show that excessive workload, working elevation, and age are the main factors leading to construction worker fatigue. Simulation results also show that these factors can increase the worker fatigue level by 21.2% compared to the normal condition. Besides predicting worker fatigue levels, this model can also be used as an early warning system to prevent construction worker accidents.

  8. [A Structural Equation Model on Family Strength of Married Working Women].

    Science.gov (United States)

    Hong, Yeong Seon; Han, Kuem Sun

    2015-12-01

    The purpose of this study was to identify the effect of predictive factors related to family strength and develop a structural equation model that explains family strength among married working women. A hypothesized model was developed based on literature reviews and predictors of family strength by Yoo. The constructed model comprised eight pathways. Two exogenous variables included in this model were ego-resilience and family support. Three endogenous variables included in this model were functional couple communication, family stress and family strength. Data were collected using a self-report questionnaire from 319 married working women who were 30 to 40 years of age and lived in cities of Chungnam province in Korea. Data were analyzed with PASW/WIN 18.0 and AMOS 18.0 programs. Family support had a positive direct, indirect and total effect on family strength. Family stress had a negative direct, indirect and total effect on family strength. Functional couple communication had a positive direct and total effect on family strength. These predictive variables explained 61.8% of family strength. The results of the study show a structural equation model for family strength of married working women and indicate that the predictive factors for family strength are family support, family stress, and functional couple communication. To improve the family strength of married working women, the results of this study suggest nursing approaches and mediation programs to improve family support and functional couple communication and to reduce family stress.

  9. Work and Non-Work Physical Activity Predict Real-Time Smoking Level and Urges in Young Adults.

    Science.gov (United States)

    Nadell, Melanie J; Mermelstein, Robin J; Hedeker, Donald; Marquez, David X

    2015-07-01

    Physical activity (PA) and smoking are inversely related. However, evidence suggests that some types of PA, namely work-related PA, may show an opposite effect. Despite growing knowledge, there remains a paucity of studies examining the context of these behaviors in naturalistic settings or in young adults, a high-risk group for escalation. Participants were 188 young adults (mean age = 21.32; 53.2% female; 91% current smokers) who participated in an electronic diary week to assess daily smoking and urges and a PA recall to examine daily PA. PA was coded into non-work-related and work-related activity to examine differential effects. We considered both participants' weekly average PA and their daily deviations from their average. Mixed-effects regression models revealed that higher weekly average non-work PA was associated with lower smoking level and urges. Daily deviations in non-work PA did not predict urges; however, increased daily non-work PA relative to participants' weekly average was associated with lower smoking for females but higher levels for males. Regarding work PA, only higher weekly average work PA was associated with higher smoking level for both genders; work PA did not predict urges. Results extend previous literature by documenting differential associations between non-work and work PA and young adult smoking and suggest that young adults engaged in work PA should be considered a high-risk group for escalation. Findings provide theoretical and clinical implications for the use of PA in intervention and highlight the necessity of considering PA as a multidimensional construct when examining its links to health behavior.

  10. Model Predictive Control of Mineral Column Flotation Process

    Directory of Open Access Journals (Sweden)

    Yahui Tian

    2018-06-01

    Full Text Available Column flotation is an efficient method commonly used in the mineral industry to separate useful minerals from ores of low grade and complex mineral composition. Its main purpose is to achieve maximum recovery while ensuring desired product grade. This work addresses a model predictive control design for a mineral column flotation process modeled by a set of nonlinear coupled heterodirectional hyperbolic partial differential equations (PDEs) and ordinary differential equations (ODEs), which accounts for the interconnection of well-stirred regions represented by continuous stirred tank reactors (CSTRs) and transport systems given by heterodirectional hyperbolic PDEs, with these two regions combined through the PDEs’ boundaries. The model predictive control considers both optimality of the process operations and naturally present input and state/output constraints. For the discrete controller design, spatially varying steady-state profiles are obtained by linearizing the coupled ODE–PDE model, and then the discrete system is obtained by using the Cayley–Tustin time discretization transformation without any spatial discretization and/or without model reduction. The model predictive controller is designed by solving an optimization problem with input and state/output constraints as well as input disturbance to minimize the objective function, which leads to an online-solvable finite constrained quadratic regulator problem. Finally, the controller performance to keep the output at the steady state within the constraint range is demonstrated by simulation studies, and it is concluded that the optimal control scheme presented in this work makes this flotation process more efficient.
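
    The controller above is built on a Cayley–Tustin discretization of a coupled ODE–PDE model, which is beyond a short sketch. As a hedged, generic illustration of the final step only, the snippet below solves one finite-horizon constrained MPC problem for a small discrete-time linear system with cvxpy; the matrices, horizon, and bounds are arbitrary placeholders.

        # Hedged sketch: generic finite-horizon constrained MPC step for a small
        # discrete-time linear system (placeholder matrices, cvxpy).
        import numpy as np
        import cvxpy as cp

        A = np.array([[1.0, 0.1], [0.0, 0.9]])   # placeholder plant
        B = np.array([[0.0], [0.1]])
        x0 = np.array([1.0, 0.0])
        N, Q, R = 20, np.eye(2), 0.1 * np.eye(1)

        x = cp.Variable((2, N + 1))
        u = cp.Variable((1, N))
        cost, cons = 0, [x[:, 0] == x0]
        for k in range(N):
            cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
            cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                     cp.abs(u[:, k]) <= 0.5,                    # input constraint
                     x[:, k + 1] >= -2, x[:, k + 1] <= 2]       # state/output constraints
        cost += cp.quad_form(x[:, N], Q)

        cp.Problem(cp.Minimize(cost), cons).solve()
        print("first control move:", u.value[:, 0])   # applied in receding-horizon fashion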

  11. Determinants of work ability and its predictive value for disability.

    Science.gov (United States)

    Alavinia, S M; de Boer, A G E M; van Duivenbooden, J C; Frings-Dresen, M H W; Burdorf, A

    2009-01-01

    Maintaining the ability of workers to cope with physical and psychosocial demands at work becomes increasingly important in prolonging working life. To analyse the effects of work-related factors and individual characteristics on work ability and to determine the predictive value of work ability on receiving a work-related disability pension. A longitudinal study was conducted among 850 construction workers aged 40 years and older, with average follow-up period of 23 months. Disability was defined as receiving a disability pension, granted to workers unable to continue working in their regular job. Work ability was assessed using the work ability index (WAI). Associations between work-related factors and individual characteristics with work ability at baseline were evaluated using linear regression analysis, and Cox regression analysis was used to evaluate the predictive value of work ability for disability. Work-related factors were associated with a lower work ability at baseline, but had little prognostic value for disability during follow-up. The hazard ratios for disability among workers with a moderate and poor work ability at baseline were 8 and 32, respectively. All separate scales in the WAI had predictive power for future disability with the highest influence of current work ability in relation to job demands and lowest influence of diseases diagnosed by a physician. A moderate or poor work ability was highly predictive for receiving a disability pension. Preventive measures should facilitate a good balance between work performance and health in order to prevent quitting labour participation.

  12. Working Memory and Auditory Imagery Predict Sensorimotor Synchronization with Expressively Timed Music.

    Science.gov (United States)

    Colley, Ian D; Keller, Peter E; Halpern, Andrea R

    2017-08-11

    Sensorimotor synchronization (SMS) is prevalent and readily studied in musical settings, as most people are able to perceive and synchronize with a beat (e.g. by finger tapping). We took an individual differences approach to understanding SMS to real music characterized by expressive timing (i.e. fluctuating beat regularity). Given the dynamic nature of SMS, we hypothesized that individual differences in working memory and auditory imagery, both fluid cognitive processes, would predict SMS at two levels: 1) mean absolute asynchrony (a measure of synchronization error), and 2) anticipatory timing (i.e. predicting, rather than reacting to beat intervals). In Experiment 1, participants completed two working memory tasks, four auditory imagery tasks, and an SMS-tapping task. Hierarchical regression models were used to predict SMS performance, with results showing dissociations among imagery types in relation to mean absolute asynchrony, and evidence of a role for working memory in anticipatory timing. In Experiment 2, a new sample of participants completed an expressive timing perception task to examine the role of imagery in perception without action. Results suggest that imagery vividness is important for perceiving, and control is important for synchronizing with, irregular but ecologically valid musical time series. Working memory is implicated in synchronizing by anticipating events in the series.

  13. The Job Demands-Resources model as predictor of work identity and work engagement: A comparative analysis

    OpenAIRE

    Roslyn De Braine; Gert Roodt

    2011-01-01

    Orientation: Research shows that engaged employees experience high levels of energy and strong identification with their work, hence this study’s focus on work identity and dedication. Research purpose: This study explored possible differences in the Job Demands-Resources model (JD-R) as predictor of overall work engagement, dedication only and work-based identity, through comparative predictive analyses. Motivation for the study: This study may shed light on the dedication component o...

  14. Standardizing the performance evaluation of short-term wind prediction models

    DEFF Research Database (Denmark)

    Madsen, Henrik; Pinson, Pierre; Kariniotakis, G.

    2005-01-01

    Short-term wind power prediction is a primary requirement for efficient large-scale integration of wind generation in power systems and electricity markets. The choice of an appropriate prediction model among the numerous available models is not trivial, and has to be based on an objective evaluation of model performance. This paper proposes a standardized protocol for the evaluation of short-term wind-power prediction systems. A number of reference prediction models are also described, and their use for performance comparison is analysed. The use of the protocol is demonstrated using results from both on-shore and off-shore wind farms. The work was developed in the frame of the Anemos project (EU R&D project) where the protocol has been used to evaluate more than 10 prediction systems.
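
    A hedged sketch in the spirit of such a protocol: a candidate forecast is evaluated against the persistence reference model using capacity-normalized MAE and RMSE. The series is synthetic, and the actual protocol defines additional reference models, metrics, and horizons.

        # Hedged sketch: persistence reference model and capacity-normalized error
        # metrics for a short-term wind power forecast (synthetic data only).
        import numpy as np

        rng = np.random.default_rng(7)
        capacity = 10.0                                   # MW, installed capacity (placeholder)
        power = np.clip(5 + np.cumsum(rng.normal(0, 0.4, 500)), 0, capacity)

        horizon = 6                                       # look-ahead steps
        actual = power[horizon:]
        persistence = power[:-horizon]                    # reference: last measured value
        candidate = persistence + rng.normal(0, 0.3, len(persistence))   # stand-in candidate model

        def nmae(p):  return np.mean(np.abs(p - actual)) / capacity
        def nrmse(p): return np.sqrt(np.mean((p - actual) ** 2)) / capacity

        for name, p in [("persistence", persistence), ("candidate", candidate)]:
            print(f"{name:12s} NMAE={nmae(p):.3f}  NRMSE={nrmse(p):.3f}")
        # Improvement over the reference = 1 - NRMSE_candidate / NRMSE_persistence.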

  15. Modelling bankruptcy prediction models in Slovak companies

    Directory of Open Access Journals (Sweden)

    Kovacova Maria

    2017-01-01

    Full Text Available Intensive research from academics and practitioners has addressed models for bankruptcy prediction and credit risk management. In spite of numerous studies focusing on forecasting bankruptcy using traditional statistical techniques (e.g. discriminant analysis and logistic regression) and early artificial intelligence models (e.g. artificial neural networks), there is a trend towards transition to machine learning models (support vector machines, bagging, boosting, and random forest) to predict bankruptcy one year prior to the event. Comparing the performance of this unconventional approach with results obtained by discriminant analysis, logistic regression, and neural network applications, it has been found that bagging, boosting, and random forest models outperform the other techniques, and that prediction accuracy in the testing sample improves when additional variables are included. On the other side, the prediction accuracy of old and well-known bankruptcy prediction models is quite high. Therefore, we aim to analyse these somewhat older models on a dataset of Slovak companies to validate their prediction ability in specific conditions. Furthermore, these models will be modelled according to new trends by calculating the influence of the elimination of selected variables on the overall prediction ability of these models.

  16. Workplace Bullying and Work Engagement: A Self-Determination Model.

    Science.gov (United States)

    Goodboy, Alan K; Martin, Matthew M; Bolkan, San

    2017-06-01

    This study modeled motivational mechanisms that explain the negative effects of workplace bullying on work engagement. Guided by self-determination theory, workplace bullying was predicted to decrease worker engagement indirectly, due to the denial of employees' basic psychological needs and their intrinsic motivation to work. From a sample of 243 full-time employees, serial multiple mediation models revealed that the indirect relationships between workplace bullying and work engagement (i.e., vigor, dedication, absorption) were serially mediated by basic psychological needs and intrinsic motivation to work. In support of self-determination theory, this study revealed that workplace bullying indirectly disengages employees from their work by denying them of their autonomy and relatedness needs and thwarting their motivation to perform work in a fulfilling way.

  17. Developing predictive models for return to work using the Military Power, Performance and Prevention (MP3) musculoskeletal injury risk algorithm: a study protocol for an injury risk assessment programme.

    Science.gov (United States)

    Rhon, Daniel I; Teyhen, Deydre S; Shaffer, Scott W; Goffar, Stephen L; Kiesel, Kyle; Plisky, Phil P

    2018-02-01

    Musculoskeletal injuries are a primary source of disability in the US Military, and low back pain and lower extremity injuries account for over 44% of limited work days annually. History of prior musculoskeletal injury increases the risk for future injury. This study aims to determine the risk of injury after returning to work from a previous injury. The objective is to identify criteria that can help predict likelihood for future injury or re-injury. There will be 480 active duty soldiers recruited from across four medical centres. These will be patients who have sustained a musculoskeletal injury in the lower extremity or lumbar/thoracic spine, and have now been cleared to return back to work without any limitations. Subjects will undergo a battery of physical performance tests and fill out sociodemographic surveys. They will be followed for a year to identify any musculoskeletal injuries that occur. Prediction algorithms will be derived using regression analysis from performance and sociodemographic variables found to be significantly different between injured and non-injured subjects. Due to the high rates of injuries, injury prevention and prediction initiatives are growing. This is the first study looking at predicting re-injury rates after an initial musculoskeletal injury. In addition, multivariate prediction models appear to have more value than models based on only one variable. This approach aims to validate a multivariate model used in healthy non-injured individuals to help improve variables that best predict the ability to return to work with lower risk of injury, after a recent musculoskeletal injury. NCT02776930.

  18. A new crack growth model for life prediction under random loading

    International Nuclear Information System (INIS)

    Lee, Ouk Sub; Chen, Zhi Wei

    1999-01-01

    The load interaction effect in variable amplitude fatigue test is a very important issue for correctly predicting fatigue life. Some prediction methods for retardation are reviewed and the problems discussed. The so-called 'under-load' effect is also of importance for a prediction model to work properly under random load spectrum. A new model that is simple in form but combines overload plastic zone and residual stress considerations together with Elber's closure concept is proposed to fully take account of the load-interaction effects including both over-load and under-load effects. Applying this new model to complex load sequences is explored here. Simulations of tests show the improvement of the new model over other models. The best prediction (most closely resembling the test curve) is given by the newly proposed Chen-Lee model.
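
    The Chen-Lee model's equations are not given in this record. As a hedged reminder of the ingredients it names, Elber's closure concept replaces the nominal stress-intensity range in a Paris-type growth law by an effective range,

        \frac{da}{dN} = C\,\bigl(\Delta K_{\mathrm{eff}}\bigr)^{m}, \qquad \Delta K_{\mathrm{eff}} = K_{\max} - K_{\mathrm{op}},

    where the opening level K_op, and hence retardation after overloads or acceleration after under-loads, is modulated in the model above by the overload plastic zone and residual stress; the precise form of that modulation is specific to the Chen-Lee formulation.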

  19. An interference model of visual working memory.

    Science.gov (United States)

    Oberauer, Klaus; Lin, Hsuan-Yu

    2017-01-01

    The article introduces an interference model of working memory for information in a continuous similarity space, such as the features of visual objects. The model incorporates the following assumptions: (a) Probability of retrieval is determined by the relative activation of each retrieval candidate at the time of retrieval; (b) activation comes from 3 sources in memory: cue-based retrieval using context cues, context-independent memory for relevant contents, and noise; (c) 1 memory object and its context can be held in the focus of attention, where it is represented with higher precision, and partly shielded against interference. The model was fit to data from 4 continuous-reproduction experiments testing working memory for colors or orientations. The experiments involved variations of set size, kind of context cues, precueing, and retro-cueing of the to-be-tested item. The interference model fit the data better than 2 competing models, the Slot-Averaging model and the Variable-Precision resource model. The interference model also fared well in comparison to several new models incorporating alternative theoretical assumptions. The experiments confirm 3 novel predictions of the interference model: (a) Nontargets intrude in recall to the extent that they are close to the target in context space; (b) similarity between target and nontarget features improves recall, and (c) precueing, but not retro-cueing, the target substantially reduces the set-size effect. The success of the interference model shows that working memory for continuous visual information works according to the same principles as working memory for more discrete (e.g., verbal) contents. Data and model codes are available at https://osf.io/wgqd5/.
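
    A toy, hedged sketch of assumption (a) above: the probability of retrieving each candidate is its activation relative to all candidates, with activation summed from cue-based, context-independent, and noise components. The activation values are arbitrary illustrations, not fitted model parameters.

        # Toy sketch of the interference model's retrieval rule: probability of
        # recalling each candidate is its activation relative to all candidates.
        import numpy as np

        rng = np.random.default_rng(8)
        n_items = 4                                   # memory set size
        cue_based = np.array([1.8, 0.6, 0.4, 0.3])    # activation from context cues (item 0 is cued)
        context_free = np.full(n_items, 0.5)          # context-independent activation of all contents
        noise = rng.uniform(0.0, 0.2, n_items)        # background noise

        activation = cue_based + context_free + noise
        p_retrieve = activation / activation.sum()    # relative-activation (Luce-style) choice rule
        print(np.round(p_retrieve, 3))                # the cued target is most likely recalled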

  20. Ionosphere monitoring and forecast activities within the IAG working group "Ionosphere Prediction"

    Science.gov (United States)

    Hoque, Mainul; Garcia-Rigo, Alberto; Erdogan, Eren; Cueto Santamaría, Marta; Jakowski, Norbert; Berdermann, Jens; Hernandez-Pajares, Manuel; Schmidt, Michael; Wilken, Volker

    2017-04-01

    Ionospheric disturbances can affect technologies in space and on Earth disrupting satellite and airline operations, communications networks, navigation systems. As the world becomes ever more dependent on these technologies, ionospheric disturbances as part of space weather pose an increasing risk to the economic vitality and national security. Therefore, having the knowledge of ionospheric state in advance during space weather events is becoming more and more important. To promote scientific cooperation we recently formed a Working Group (WG) called "Ionosphere Predictions" within the International Association of Geodesy (IAG) under Sub-Commission 4.3 "Atmosphere Remote Sensing" of the Commission 4 "Positioning and Applications". The general objective of the WG is to promote the development of ionosphere prediction algorithm/models based on the dependence of ionospheric characteristics on solar and magnetic conditions combining data from different sensors to improve the spatial and temporal resolution and sensitivity taking advantage of different sounding geometries and latency. Our presented work enables the possibility to compare total electron content (TEC) prediction approaches/results from different centers contributing to this WG such as German Aerospace Center (DLR), Universitat Politècnica de Catalunya (UPC), Technische Universität München (TUM) and GMV. DLR developed a model-assisted TEC forecast algorithm taking benefit from actual trends of the TEC behavior at each grid point. Since during perturbations, characterized by large TEC fluctuations or ionization fronts, this approach may fail, the trend information is merged with the current background model which provides a stable climatological TEC behavior. The presented solution is a first step to regularly provide forecasted TEC services via SWACI/IMPC by DLR. UPC forecast model is based on applying linear regression to a temporal window of TEC maps in the Discrete Cosine Transform (DCT) domain

  1. Robust Model Predictive Control of a Wind Turbine

    DEFF Research Database (Denmark)

    Mirzaei, Mahmood; Poulsen, Niels Kjølstad; Niemann, Hans Henrik

    2012-01-01

    In this work the problem of robust model predictive control (robust MPC) of a wind turbine in the full load region is considered. A minimax robust MPC approach is used to tackle the problem. Nonlinear dynamics of the wind turbine are derived by combining blade element momentum (BEM) theory and first principle modeling of the turbine flexible structure. Thereafter the nonlinear model is linearized using Taylor series expansion around system operating points. Operating points are determined by effective wind speed and an extended Kalman filter (EKF) is employed to estimate this. In addition ... of the uncertain system is employed and a norm-bounded uncertainty model is used to formulate a minimax model predictive control. The resulting optimization problem is simplified by semidefinite relaxation and the controller obtained is applied on a full complexity, high fidelity wind turbine model. Finally ...

  2. In silico modeling to predict drug-induced phospholipidosis

    International Nuclear Information System (INIS)

    Choi, Sydney S.; Kim, Jae S.; Valerio, Luis G.; Sadrieh, Nakissa

    2013-01-01

    Drug-induced phospholipidosis (DIPL) is a preclinical finding during pharmaceutical drug development that has implications on the course of drug development and regulatory safety review. A principal characteristic of drugs inducing DIPL is known to be a cationic amphiphilic structure. This provides evidence for a structure-based explanation and opportunity to analyze properties and structures of drugs with the histopathologic findings for DIPL. In previous work from the FDA, in silico quantitative structure–activity relationship (QSAR) modeling using machine learning approaches has shown promise with a large dataset of drugs but included unconfirmed data as well. In this study, we report the construction and validation of a battery of complementary in silico QSAR models using the FDA's updated database on phospholipidosis, new algorithms and predictive technologies, and in particular, we address high performance with a high-confidence dataset. The results of our modeling for DIPL include rigorous external validation tests showing 80–81% concordance. Furthermore, the predictive performance characteristics include models with high sensitivity and specificity, in most cases ≥ 80%, leading to desired high negative and positive predictivity. These models are intended to be utilized for regulatory toxicology applied science needs in screening new drugs for DIPL. - Highlights: • New in silico models for predicting drug-induced phospholipidosis (DIPL) are described. • The training set data in the models is derived from the FDA's phospholipidosis database. • We find excellent predictivity values of the models based on external validation. • The models can support drug screening and regulatory decision-making on DIPL

  3. Model-free prediction and regression a transformation-based approach to inference

    CERN Document Server

    Politis, Dimitris N

    2015-01-01

    The Model-Free Prediction Principle expounded upon in this monograph is based on the simple notion of transforming a complex dataset to one that is easier to work with, e.g., i.i.d. or Gaussian. As such, it restores the emphasis on observable quantities, i.e., current and future data, as opposed to unobservable model parameters and estimates thereof, and yields optimal predictors in diverse settings such as regression and time series. Furthermore, the Model-Free Bootstrap takes us beyond point prediction in order to construct frequentist prediction intervals without resort to unrealistic assumptions such as normality. Prediction has been traditionally approached via a model-based paradigm, i.e., (a) fit a model to the data at hand, and (b) use the fitted model to extrapolate/predict future data. Due to both mathematical and computational constraints, 20th century statistical practice focused mostly on parametric models. Fortunately, with the advent of widely accessible powerful computing in the late 1970s, co...

  4. Predictive modelling using neuroimaging data in the presence of confounds.

    Science.gov (United States)

    Rao, Anil; Monteiro, Joao M; Mourao-Miranda, Janaina

    2017-04-15

    When training predictive models from neuroimaging data, we typically have available non-imaging variables such as age and gender that affect the imaging data but which we may be uninterested in from a clinical perspective. Such variables are commonly referred to as 'confounds'. In this work, we firstly give a working definition for confound in the context of training predictive models from samples of neuroimaging data. We define a confound as a variable which affects the imaging data and has an association with the target variable in the sample that differs from that in the population-of-interest, i.e., the population over which we intend to apply the estimated predictive model. The focus of this paper is the scenario in which the confound and target variable are independent in the population-of-interest, but the training sample is biased due to a sample association between the target and confound. We then discuss standard approaches for dealing with confounds in predictive modelling such as image adjustment and including the confound as a predictor, before deriving and motivating an Instance Weighting scheme that attempts to account for confounds by focusing model training so that it is optimal for the population-of-interest. We evaluate the standard approaches and Instance Weighting in two regression problems with neuroimaging data in which we train models in the presence of confounding, and predict samples that are representative of the population-of-interest. For comparison, these models are also evaluated when there is no confounding present. In the first experiment we predict the MMSE score using structural MRI from the ADNI database with gender as the confound, while in the second we predict age using structural MRI from the IXI database with acquisition site as the confound. Considered over both datasets we find that none of the methods for dealing with confounding gives more accurate predictions than a baseline model which ignores confounding, although
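
    To make the idea of instance weighting concrete, the following sketch (not the authors' implementation or their weighting derivation) reweights training samples so that a binary confound appears independent of a binned target before fitting a ridge regression; the inverse-frequency weighting scheme, the synthetic data, and all variable names are illustrative assumptions.

```python
# Minimal sketch of instance weighting to reduce a sample-specific
# confound-target association before training (illustrative only).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n = 500
c = rng.integers(0, 2, n)                 # binary confound (e.g., site)
y = 0.8 * c + rng.normal(0, 1, n)         # target associated with c in the sample
X = np.outer(y, [1.0, 0.5]) + np.outer(c, [2.0, -1.0]) + rng.normal(0, 1, (n, 2))

# Weight each sample by p(c) / p(c | y-bin) so that, after weighting,
# the confound looks independent of the (binned) target.
y_bin = (y > np.median(y)).astype(int)
w = np.empty(n)
for cb in (0, 1):
    for yb in (0, 1):
        idx = (c == cb) & (y_bin == yb)
        p_c = np.mean(c == cb)
        p_c_given_y = idx.sum() / np.sum(y_bin == yb)
        w[idx] = p_c / p_c_given_y

model = Ridge(alpha=1.0).fit(X, y, sample_weight=w)
print(model.coef_)   # coefficients fitted as if c and y were unrelated
```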

  5. Stochastic models for predicting pitting corrosion damage of HLRW containers

    International Nuclear Information System (INIS)

    Henshall, G.A.

    1991-10-01

    Stochastic models for predicting aqueous pitting corrosion damage of high-level radioactive-waste containers are described. These models could be used to predict the time required for the first pit to penetrate a container and the increase in the number of breaches at later times, both of which would be useful in the repository system performance analysis. Monte Carlo implementations of the stochastic models are described, and predictions of induction time, survival probability and pit depth distributions are presented. These results suggest that the pit nucleation probability decreases with exposure time and that pit growth may be a stochastic process. The advantages and disadvantages of the stochastic approach, methods for modeling the effects of environment, and plans for future work are discussed
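
    A toy Monte Carlo version of such a stochastic pitting model is sketched below; the exponentially decaying nucleation probability, the random growth increments, and the wall thickness are illustrative assumptions, not the parameters used in the report.

```python
# Toy Monte Carlo pitting model: pits nucleate with a probability that
# decays with exposure time and then grow by random increments; we record
# the time at which the deepest pit first penetrates the container wall.
# All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(42)
n_containers = 1000
n_steps = 500            # time steps (e.g., years)
wall = 10.0              # wall thickness in mm
p0, tau = 0.05, 100.0    # initial nucleation probability and decay constant

penetration_times = np.full(n_containers, np.inf)
for k in range(n_containers):
    depths = []                       # depths of active pits on this container
    for t in range(n_steps):
        if rng.random() < p0 * np.exp(-t / tau):
            depths.append(0.0)        # a new pit nucleates
        # each existing pit grows by a non-negative random increment
        depths = [d + rng.exponential(0.05) for d in depths]
        if depths and max(depths) >= wall:
            penetration_times[k] = t
            break

survived = np.isinf(penetration_times)
print("survival probability:", survived.mean())
print("median induction time of breached containers:",
      np.median(penetration_times[~survived]))
```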

  6. Predicting on-site environmental impacts of municipal engineering works

    International Nuclear Information System (INIS)

    Gangolells, Marta; Casals, Miquel; Forcada, Núria; Macarulla, Marcel

    2014-01-01

    The research findings fill a gap in the body of knowledge by presenting an effective way to evaluate the significance of on-site environmental impacts of municipal engineering works prior to the construction stage. First, 42 on-site environmental impacts of municipal engineering works were identified by means of a process-oriented approach. Then, 46 indicators and their corresponding significance limits were determined on the basis of a statistical analysis of 25 new-build and remodelling municipal engineering projects. In order to ensure the objectivity of the assessment process, direct and indirect indicators were always based on quantitative data from the municipal engineering project documents. Finally, two case studies were analysed and found to illustrate the practical use of the proposed model. The model highlights the significant environmental impacts of a particular municipal engineering project prior to the construction stage. Consequently, preventive actions can be planned and implemented during on-site activities. The results of the model also allow a comparison of proposed municipal engineering projects and alternatives with respect to the overall on-site environmental impact and the absolute importance of a particular environmental aspect. These findings are useful within the framework of the environmental impact assessment process, as they help to improve the identification and evaluation of on-site environmental aspects of municipal engineering works. The findings may also be of use to construction companies that are willing to implement an environmental management system or simply wish to improve on-site environmental performance in municipal engineering projects. -- Highlights: • We present a model to predict the environmental impacts of municipal engineering works. • It highlights significant on-site environmental impacts prior to the construction stage. • Findings are useful within the environmental impact assessment process. • They also

  7. Predicting on-site environmental impacts of municipal engineering works

    Energy Technology Data Exchange (ETDEWEB)

    Gangolells, Marta, E-mail: marta.gangolells@upc.edu; Casals, Miquel, E-mail: miquel.casals@upc.edu; Forcada, Núria, E-mail: nuria.forcada@upc.edu; Macarulla, Marcel, E-mail: marcel.macarulla@upc.edu

    2014-01-15

    The research findings fill a gap in the body of knowledge by presenting an effective way to evaluate the significance of on-site environmental impacts of municipal engineering works prior to the construction stage. First, 42 on-site environmental impacts of municipal engineering works were identified by means of a process-oriented approach. Then, 46 indicators and their corresponding significance limits were determined on the basis of a statistical analysis of 25 new-build and remodelling municipal engineering projects. In order to ensure the objectivity of the assessment process, direct and indirect indicators were always based on quantitative data from the municipal engineering project documents. Finally, two case studies were analysed and found to illustrate the practical use of the proposed model. The model highlights the significant environmental impacts of a particular municipal engineering project prior to the construction stage. Consequently, preventive actions can be planned and implemented during on-site activities. The results of the model also allow a comparison of proposed municipal engineering projects and alternatives with respect to the overall on-site environmental impact and the absolute importance of a particular environmental aspect. These findings are useful within the framework of the environmental impact assessment process, as they help to improve the identification and evaluation of on-site environmental aspects of municipal engineering works. The findings may also be of use to construction companies that are willing to implement an environmental management system or simply wish to improve on-site environmental performance in municipal engineering projects. -- Highlights: • We present a model to predict the environmental impacts of municipal engineering works. • It highlights significant on-site environmental impacts prior to the construction stage. • Findings are useful within the environmental impact assessment process. • They also

  8. Predicting flow at work: investigating the activities and job characteristics that predict flow states at work.

    Science.gov (United States)

    Nielsen, Karina; Cleal, Bryan

    2010-04-01

    Flow (a state of consciousness where people become totally immersed in an activity and enjoy it intensely) has been identified as a desirable state with positive effects for employee well-being and innovation at work. Flow has been studied using both questionnaires and Experience Sampling Method (ESM). In this study, we used a newly developed 9-item flow scale in an ESM study combined with a questionnaire to examine the predictors of flow at two levels: the activities (brainstorming, planning, problem solving and evaluation) associated with transient flow states and the more stable job characteristics (role clarity, influence and cognitive demands). Participants were 58 line managers from two companies in Denmark; a private accountancy firm and a public elder care organization. We found that line managers in elder care experienced flow more often than accountancy line managers, and activities such as planning, problem solving, and evaluation predicted transient flow states. The more stable job characteristics included in this study were not, however, found to predict flow at work. Copyright 2010 APA, all rights reserved.

  9. Development of a prototype system for prediction of the group error at the maintenance work

    International Nuclear Information System (INIS)

    Yoshino, Kenji; Hirotsu, Yuuko

    2001-01-01

    This paper describes the development and performance evaluation of a prototype system for predicting group errors in maintenance work. The results so far are as follows. (1) When a user inputs the presence and grade of the feature factors of the maintenance work to be assessed, together with the organization, the organizational factors and the group PSFs, the system predicts the targeted maintenance group error through a prediction model consisting of seven stages. (2) By utilizing the information in a prediction result database, the system can be used not only for predicting maintenance group errors but also for various safety activities, such as KYT (Kiken Yochi Training, i.e., hazard prediction training) and TBM (Tool Box Meetings). (3) The system predicts cooperation errors at the highest rate, followed by detection errors; decision-making errors, transfer errors and state cognitive errors are predicted at almost the same rate. (4) Even if the user has no specialized knowledge of human factors or prior experience, provided the features of the maintenance work, such as its enforcement conditions and the organization, are known, the system can predict maintenance group errors, which were formerly difficult to anticipate logically and systematically, within about 15 minutes of working time. (author)

  10. Numerical Modelling and Prediction of Erosion Induced by Hydrodynamic Cavitation

    Science.gov (United States)

    Peters, A.; Lantermann, U.; el Moctar, O.

    2015-12-01

    The present work aims to predict cavitation erosion using a numerical flow solver together with a newly developed erosion model. The erosion model is based on the hypothesis that collapses of single cavitation bubbles near solid boundaries form high-velocity microjets, which cause sonic impacts with high pressure amplitudes that damage the surface. The erosion model uses information from a numerical Euler-Euler flow simulation to predict erosion-sensitive areas and assess the erosion aggressiveness of the flow. The obtained numerical results were compared to experimental results from tests of an axisymmetric nozzle.

  11. Robust Output Model Predictive Control of an Unstable Rijke Tube

    Directory of Open Access Journals (Sweden)

    Fabian Jarmolowitz

    2012-01-01

    This work investigates the active control of an unstable Rijke tube using robust output model predictive control (RMPC). A polytopic linear system with constraints is assumed as the internal model to account for uncertainties. For guaranteed stability, a linear state feedback controller is designed using linear matrix inequalities and used within a feedback formulation of the model predictive controller. For state estimation, a robust gain-scheduled observer is developed. It is shown that the proposed RMPC ensures robust stability under constraints over the considered operating range.

  12. A new, accurate predictive model for incident hypertension.

    Science.gov (United States)

    Völzke, Henry; Fung, Glenn; Ittermann, Till; Yu, Shipeng; Baumeister, Sebastian E; Dörr, Marcus; Lieb, Wolfgang; Völker, Uwe; Linneberg, Allan; Jørgensen, Torben; Felix, Stephan B; Rettig, Rainer; Rao, Bharat; Kroemer, Heyo K

    2013-11-01

    Data mining represents an alternative approach to identify new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures. The primary study population consisted of 1605 normotensive individuals aged 20-79 years with 5-year follow-up from the population-based Study of Health in Pomerania (SHIP). The initial set was randomly split into a training and a testing set. We used a probabilistic graphical model applying a Bayesian network to create a predictive model for incident hypertension and compared the predictive performance with the established Framingham risk score for hypertension. Finally, the model was validated in 2887 participants from INTER99, a Danish community-based intervention study. In the training set of SHIP data, the Bayesian network used a small subset of relevant baseline features including age, mean arterial pressure, rs16998073, serum glucose and urinary albumin concentrations. Furthermore, we detected relevant interactions between age and serum glucose as well as between rs16998073 and urinary albumin concentrations [area under the receiver operating characteristic curve (AUC) 0.76]. The model was confirmed in the SHIP validation set (AUC 0.78) and externally replicated in INTER99 (AUC 0.77). Compared to the established Framingham risk score for hypertension, the predictive performance of the new model was similar in the SHIP validation set and moderately better in INTER99. Data mining procedures identified a predictive model for incident hypertension, which included innovative and easy-to-measure variables. The findings promise great applicability in screening settings and clinical practice.

  13. How personal resources predict work engagement and self-rated performance among construction workers: a social cognitive perspective.

    Science.gov (United States)

    Lorente, Laura; Salanova, Marisa; Martínez, Isabel M; Vera, María

    2014-06-01

    Traditionally, research on psychosocial factors in the construction industry has focused mainly on the negative aspects of health and on outcomes such as occupational accidents. This study, however, focuses on the specific relationships among the different positive psychosocial factors shared by construction workers that could be responsible for occupational well-being and outcomes such as performance. The main objective of this study was to test whether personal resources predict self-rated job performance through job resources and work engagement. Following the predictions of Bandura's Social Cognitive Theory and the motivational process of the Job Demands-Resources Model, we expect that the relationship between personal resources and performance will be fully mediated by job resources and work engagement. The sample consists of 228 construction workers. Structural equation modelling supports the research model. Personal resources (i.e. self-efficacy, mental and emotional competences) play a predicting role in the perception of job resources (i.e. job control and supervisor social support), which in turn leads to work engagement and self-rated performance. This study emphasises the crucial role that personal resources play in determining how people perceive job resources by determining the levels of work engagement and, hence, their self-rated job performance. Theoretical and practical implications are discussed. © 2014 International Union of Psychological Science.

  14. Predictive modeling of complications.

    Science.gov (United States)

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.

  15. MDOT Pavement Management System : Prediction Models and Feedback System

    Science.gov (United States)

    2000-10-01

    As a primary component of a Pavement Management System (PMS), prediction models are crucial for one or more of the following analyses: : maintenance planning, budgeting, life-cycle analysis, multi-year optimization of maintenance works program, and a...

  16. Predictive modeling of coupled multi-physics systems: II. Illustrative application to reactor physics

    International Nuclear Information System (INIS)

    Cacuci, Dan Gabriel; Badea, Madalina Corina

    2014-01-01

    Highlights: • We applied the PMCMPS methodology to a paradigm neutron diffusion model. • We underscore the main steps in applying PMCMPS to treat very large coupled systems. • PMCMPS reduces the uncertainties in the optimally predicted responses and model parameters. • PMCMPS is for sequentially treating coupled systems that cannot be treated simultaneously. - Abstract: This work presents paradigm applications to reactor physics of the innovative mathematical methodology for “predictive modeling of coupled multi-physics systems (PMCMPS)” developed by Cacuci (2014). This methodology enables the assimilation of experimental and computational information and computes optimally predicted responses and model parameters with reduced predicted uncertainties, taking fully into account the coupling terms between the multi-physics systems, but using only the computational resources that would be needed to perform predictive modeling on each system separately. The paradigm examples presented in this work are based on a simple neutron diffusion model, chosen so as to enable closed-form solutions with clear physical interpretations. These paradigm examples also illustrate the computational efficiency of the PMCMPS, which enables the assimilation of additional experimental information, with a minimal increase in computational resources, to reduce the uncertainties in predicted responses and best-estimate values for uncertain model parameters, thus illustrating how very large systems can be treated without loss of information in a sequential rather than simultaneous manner

  17. Predictive value of the DASH tool for predicting return to work of injured workers with musculoskeletal disorders of the upper extremity.

    Science.gov (United States)

    Armijo-Olivo, Susan; Woodhouse, Linda J; Steenstra, Ivan A; Gross, Douglas P

    2016-12-01

    To determine whether the Disabilities of the Arm, Shoulder, and Hand (DASH) tool added to the predictive ability of established prognostic factors, including patient demographic and clinical outcomes, to predict return to work (RTW) in injured workers with musculoskeletal (MSK) disorders of the upper extremity. A retrospective cohort study using a population-based database from the Workers' Compensation Board of Alberta (WCB-Alberta) that focused on claimants with upper extremity injuries was used. Besides the DASH, potential predictors included demographic, occupational, clinical and health usage variables. Outcome was receipt of compensation benefits after 3 months. To identify RTW predictors, a purposeful logistic modelling strategy was used. A series of receiver operating curve analyses were performed to determine which model provided the best discriminative ability. The sample included 3036 claimants with upper extremity injuries. The final model for predicting RTW included the total DASH score in addition to other established predictors. The area under the curve for this model was 0.77, which is interpreted as fair discrimination. This model was statistically significantly different than the model of established predictors alone (pmodels (p=0.34). The DASH tool together with other established predictors significantly helped predict RTW after 3 months in participants with upper extremity MSK disorders. An appealing result for clinicians and busy researchers is that DASH item 23 has equal predictive ability to the total DASH score. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
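
    The modelling strategy described above can be illustrated with a short sketch: fit logistic regressions for 3-month benefit status with and without the DASH score and compare discrimination via the area under the ROC curve. The synthetic data, predictor names and coefficients below are assumptions for the example, not the WCB-Alberta data or the study's fitted model.

```python
# Sketch of comparing nested logistic models (established predictors vs.
# established predictors + DASH) by ROC AUC; all data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 3000
age = rng.normal(45, 10, n)
prior_claims = rng.poisson(1.0, n)
dash = np.clip(rng.normal(40, 20, n), 0, 100)        # DASH total score, 0-100
logit = -3 + 0.02 * age + 0.3 * prior_claims + 0.03 * dash
on_benefits_3m = rng.random(n) < 1 / (1 + np.exp(-logit))

X_base = np.column_stack([age, prior_claims])
X_full = np.column_stack([age, prior_claims, dash])
Xb_tr, Xb_te, Xf_tr, Xf_te, y_tr, y_te = train_test_split(
    X_base, X_full, on_benefits_3m, test_size=0.3, random_state=0)

base = LogisticRegression(max_iter=1000).fit(Xb_tr, y_tr)
full = LogisticRegression(max_iter=1000).fit(Xf_tr, y_tr)
auc_base = roc_auc_score(y_te, base.predict_proba(Xb_te)[:, 1])
auc_full = roc_auc_score(y_te, full.predict_proba(Xf_te)[:, 1])
print(f"AUC established predictors: {auc_base:.2f}, with DASH added: {auc_full:.2f}")
```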

  18. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    International Nuclear Information System (INIS)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-01-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model both the mean and the variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but rather a significant step forward for the field. This is, first, because such models better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variation as a result of the intermittent character of gas dispersal. Second, estimating the predictive variance allows the model quality to be evaluated in terms of the data likelihood. This offers a solution to the problem of ground-truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta-parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
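
    As a minimal illustration of using the predictive variance for model evaluation, the sketch below fits a Gaussian process to simulated gas-concentration samples and scores held-out measurements by their log-likelihood under the predictive mean and variance; the kernel choice, the simulated field, and the train/test split are assumptions, not the algorithms compared in the paper.

```python
# Sketch: predictive mean/variance from a Gaussian process and evaluation
# of held-out measurements by their predictive log-likelihood.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
X = rng.uniform(0, 10, (80, 2))                    # measurement locations (x, y)
conc = np.exp(-((X[:, 0] - 5) ** 2 + (X[:, 1] - 5) ** 2) / 8) \
       + rng.normal(0, 0.05, 80)                   # noisy concentration field

X_train, X_test = X[:60], X[60:]
y_train, y_test = conc[:60], conc[60:]

gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(0.01),
                              normalize_y=True).fit(X_train, y_train)
mean, std = gp.predict(X_test, return_std=True)

# Data likelihood of held-out samples under the predictive distribution:
log_lik = norm.logpdf(y_test, loc=mean, scale=std).sum()
print("held-out predictive log-likelihood:", round(log_lik, 2))
```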

  19. Predicting Forearm Physical Exposures During Computer Work Using Self-Reports, Software-Recorded Computer Usage Patterns, and Anthropometric and Workstation Measurements.

    Science.gov (United States)

    Huysmans, Maaike A; Eijckelhof, Belinda H W; Garza, Jennifer L Bruno; Coenen, Pieter; Blatter, Birgitte M; Johnson, Peter W; van Dieën, Jaap H; van der Beek, Allard J; Dennerlein, Jack T

    2017-12-15

    Alternative techniques to assess physical exposures, such as prediction models, could facilitate more efficient epidemiological assessments in future large cohort studies examining physical exposures in relation to work-related musculoskeletal symptoms. The aim of this study was to evaluate two types of models that predict arm-wrist-hand physical exposures (i.e. muscle activity, wrist postures and kinematics, and keyboard and mouse forces) during computer use, which only differed with respect to the candidate predicting variables; (i) a full set of predicting variables, including self-reported factors, software-recorded computer usage patterns, and worksite measurements of anthropometrics and workstation set-up (full models); and (ii) a practical set of predicting variables, only including the self-reported factors and software-recorded computer usage patterns, that are relatively easy to assess (practical models). Prediction models were build using data from a field study among 117 office workers who were symptom-free at the time of measurement. Arm-wrist-hand physical exposures were measured for approximately two hours while workers performed their own computer work. Each worker's anthropometry and workstation set-up were measured by an experimenter, computer usage patterns were recorded using software and self-reported factors (including individual factors, job characteristics, computer work behaviours, psychosocial factors, workstation set-up characteristics, and leisure-time activities) were collected by an online questionnaire. We determined the predictive quality of the models in terms of R2 and root mean squared (RMS) values and exposure classification agreement to low-, medium-, and high-exposure categories (in the practical model only). The full models had R2 values that ranged from 0.16 to 0.80, whereas for the practical models values ranged from 0.05 to 0.43. Interquartile ranges were not that different for the two models, indicating that only for some

  20. Coupling between cracking and permeability, a model for structure service life prediction

    International Nuclear Information System (INIS)

    Lasne, M.; Gerard, B.; Breysse, D.

    1993-01-01

    Many authors have chosen permeability coefficients (permeation, diffusion) as a reference for material durability and for structure service life prediction. When designing engineered barriers for radioactive waste storage, these macroscopic parameters are essential. In order to work with a predictive model of the evolution of transfer properties in a porous medium (concrete, mortar, rock), we introduce a 'micro-macro' hierarchical model of permeability whose inputs are the total porosity and the pore size distribution. In spite of the simplicity of the model (requiring very little CPU time), comparative studies show predictive results for sound cement pastes, mortars and concretes. In connection with this work, we apply a model of damage due to hydration processes at early ages to a container, as a preliminary study for the definitive storage of low-level radioactive waste (LLW). The inputs are the geometry, the cement properties and damage measurements of the concrete. This model takes into account the mechanical consequences of concrete maturation (volumetric variations during cement hydration can damage the structures). Some local microcracking can appear and affect the long-term durability. Following this work, we introduce our research programme for concrete cracking analysis. An experimental campaign is designed in order to determine the damage-cracking-porosity-permeability coupling. (authors). 12 figs., 16 refs

  1. Multiband Prediction Model for Financial Time Series with Multivariate Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    Md. Rabiul Islam

    2012-01-01

    This paper presents a subband approach to financial time series prediction. Multivariate empirical mode decomposition (MEMD) is employed for a joint multiband representation of multichannel financial time series. An autoregressive moving average (ARMA) model is used to predict each individual subband of a time series, and all the predicted subband signals are then summed to obtain the overall prediction. The ARMA model works better for stationary signals; with the multiband representation, each subband becomes a band-limited (narrow-band) signal and hence better prediction is achieved. The performance of the proposed MEMD-ARMA model is compared with classical EMD, the discrete wavelet transform (DWT), and a full-band ARMA model in terms of the signal-to-noise ratio (SNR) and mean square error (MSE) between the original and predicted time series. The simulation results show that the MEMD-ARMA-based method performs better than the other methods.
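
    The subband-forecast-and-sum idea can be sketched as follows; for brevity a simple moving-average band split stands in for MEMD, and the ARMA/ARIMA orders and synthetic series are arbitrary, so this is only a structural illustration of the approach, not the paper's method.

```python
# Structural sketch of subband forecasting: split a series into a slow
# and a fast component (a crude stand-in for MEMD), forecast each with
# an ARMA-type model, and sum the subband forecasts.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
t = np.arange(400)
price = 100 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 30) + rng.normal(0, 0.5, 400)
series = pd.Series(price)

# Crude two-band decomposition: slow (trend) band + fast (residual) band.
slow = series.rolling(window=20, min_periods=1).mean()
fast = series - slow

horizon = 10
forecast = np.zeros(horizon)
for band, order in [(slow, (1, 1, 0)), (fast, (2, 0, 1))]:
    fit = ARIMA(band, order=order).fit()
    forecast += fit.forecast(steps=horizon).to_numpy()

print(forecast)     # summed subband forecasts for the next 10 steps
```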

  2. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
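
    A simplified stand-in for such a sampling model is shown below: hourly wind speeds are drawn from a Weibull distribution (a common assumption for wind-speed statistics, not necessarily the distribution used in the report) and converted to power with a generic turbine power curve whose parameters are also illustrative.

```python
# Sketch: draw uncorrelated hourly wind-speed samples from a fitted
# statistical distribution and convert them to power with a simple
# power curve. Distribution and turbine parameters are illustrative.
import numpy as np
from scipy.stats import weibull_min

# Pretend these are historical hourly wind speeds (m/s) at the site.
rng = np.random.default_rng(5)
historical = weibull_min.rvs(2.0, scale=7.0, size=5000, random_state=rng)

# Fit a Weibull distribution to the record (location fixed at zero).
shape, loc, scale = weibull_min.fit(historical, floc=0)

# Simulate one year of hourly speeds and map them through a power curve.
speeds = weibull_min.rvs(shape, loc=loc, scale=scale, size=8760, random_state=rng)

def power_kw(v, cut_in=3.0, rated_v=12.0, cut_out=25.0, rated_kw=2000.0):
    """Generic turbine power curve: cubic ramp between cut-in and rated speed."""
    return np.where(v < cut_in, 0.0,
           np.where(v < rated_v, rated_kw * ((v - cut_in) / (rated_v - cut_in)) ** 3,
           np.where(v < cut_out, rated_kw, 0.0)))

print("mean simulated power (kW):", power_kw(speeds).mean())
```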

  3. Delayed Recall and Working Memory MMSE Domains Predict Delirium following Cardiac Surgery.

    Science.gov (United States)

    Price, Catherine C; Garvan, Cynthia; Hizel, Loren P; Lopez, Marcos G; Billings, Frederic T

    2017-01-01

    Reduced preoperative cognition is a risk factor for postoperative delirium. The significance for type of preoperative cognitive deficit, however, has yet to be explored and could provide important insights into mechanisms and prediction of delirium. Our goal was to determine if certain cognitive domains from the general cognitive screener, the Mini-Mental State Exam (MMSE), predict delirium after cardiac surgery. Patients completed a preoperative MMSE prior to undergoing elective cardiac surgery. Following surgery, delirium was assessed throughout ICU stay using the Confusion Assessment Method for ICU delirium and the Richmond Agitation and Sedation Scale. Cardiac surgery patients who developed delirium (n = 137) had lower total MMSE scores than patients who did not develop delirium (n = 457). In particular, orientation to place, working memory, delayed recall, and language domain scores were lower. Of these, only the working memory and delayed recall domains predicted delirium in a regression model adjusting for history of chronic obstructive pulmonary disease, age, sex, and duration of cardiopulmonary bypass. For each word not recalled on the three-word delayed recall assessment, the odds of delirium increased by 50%. For each item missed on the working memory index, the odds of delirium increased by 36%. Of the patients who developed delirium, 47% had a primary impairment in memory, 21% in working memory, and 33% in both domains. The area under the receiver operating characteristics curve using only the working memory and delayed recall domains was 0.75, compared to 0.76 for total MMSE score. Delirium risk is greater for individuals with reduced MMSE scores on the delayed recall and working memory domains. Research should address why patients with memory and executive vulnerabilities are more prone to postoperative delirium than those with other cognitive limitations.
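
    The reported odds increases map directly onto logistic-regression coefficients: an odds ratio of 1.5 per missed recall word corresponds to a coefficient of ln(1.5) ≈ 0.41, and 1.36 per missed working-memory item to ln(1.36) ≈ 0.31. The short snippet below only shows that arithmetic (assuming no interaction between the two domains); it is not a re-analysis of the study data.

```python
# Converting the quoted odds ratios to logistic-regression coefficients
# and combining them multiplicatively (assumes no interaction term).
import numpy as np

or_recall, or_wm = 1.50, 1.36            # odds ratios per item missed
beta_recall, beta_wm = np.log([or_recall, or_wm])
print(beta_recall, beta_wm)              # ~0.405 and ~0.307

# e.g., missing 2 recall words and 1 working-memory item multiplies the
# odds of delirium by:
print(or_recall ** 2 * or_wm)            # ~3.06
```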

  4. Archaeological predictive model set.

    Science.gov (United States)

    2015-03-01

    This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to : develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...

  5. The Prediction of Consumer Buying Intentions: A Comparative Study of the Predictive Efficacy of Two Attitudinal Models. Faculty Working Paper No. 234.

    Science.gov (United States)

    Bhagat, Rabi S.; And Others

    The role of attitudes in the conduct of buyer behavior is examined in the context of two competitive models of attitude structure and attitude-behavior relationship. Specifically, the objectives of the study were to compare the Fishbein and Sheth models on the criteria of predictive as well as cross validities. Data on both the models were…

  6. Development and performance evaluation of a prototype system for prediction of the group error at the maintenance work

    International Nuclear Information System (INIS)

    Yoshino, Kenji; Hirotsu, Yuko

    2000-01-01

    With the aim of further reducing errors at nuclear power plants, the authors developed and systematized an error prediction causal model that predicts group errors during maintenance work. The prototype system has the following features. (1) When a user inputs the presence and grade of the 'feature factors of the maintenance work' to be assessed, the 'organization and organizational factors', and the 'group PSFs (Performance Shaping Factors)', the system predicts the targeted maintenance group error through a prediction model consisting of seven stages. (2) By utilizing the information in a prediction result database, the system can be used not only for predicting maintenance group errors but also for various safety activities, such as KYT (hazard prediction training) and TBM (Tool Box Meetings). (3) The system predicts 'cooperation errors' at the highest rate, followed by 'detection errors'; 'decision-making errors', 'transfer errors' and 'state cognitive errors' are predicted at almost the same rate. (4) Even if the user has neither knowledge of human factors nor relevant experience, provided the features of the maintenance work, such as its enforcement conditions and the organization, are known, anyone can use the system to predict the occurrence and extent of maintenance group errors, which were formerly difficult to anticipate logically and systematically, within about 15 minutes of working time. (author)

  7. Statistical modelling of networked human-automation performance using working memory capacity.

    Science.gov (United States)

    Ahmed, Nisar; de Visser, Ewart; Shaw, Tyler; Mohamed-Ameen, Amira; Campbell, Mark; Parasuraman, Raja

    2014-01-01

    This study examines the challenging problem of modelling the interaction between individual attentional limitations and decision-making performance in networked human-automation system tasks. Analysis of real experimental data from a task involving networked supervision of multiple unmanned aerial vehicles by human participants shows that both task load and network message quality affect performance, but that these effects are modulated by individual differences in working memory (WM) capacity. These insights were used to assess three statistical approaches for modelling and making predictions with real experimental networked supervisory performance data: classical linear regression, non-parametric Gaussian processes and probabilistic Bayesian networks. It is shown that each of these approaches can help designers of networked human-automated systems cope with various uncertainties in order to accommodate future users by linking expected operating conditions and performance from real experimental data to observable cognitive traits like WM capacity. Practitioner Summary: Working memory (WM) capacity helps account for inter-individual variability in operator performance in networked unmanned aerial vehicle supervisory tasks. This is useful for reliable performance prediction near experimental conditions via linear models; robust statistical prediction beyond experimental conditions via Gaussian process models and probabilistic inference about unknown task conditions/WM capacities via Bayesian network models.

  8. An Assessment of the Model of Concentration Addition for Predicting the Estrogenic Activity of Chemical Mixtures in Wastewater Treatment Works Effluents

    Science.gov (United States)

    Thorpe, Karen L.; Gross-Sorokin, Melanie; Johnson, Ian; Brighty, Geoff; Tyler, Charles R.

    2006-01-01

    The effects of simple mixtures of chemicals, with similar mechanisms of action, can be predicted using the concentration addition model (CA). The ability of this model to predict the estrogenic effects of more complex mixtures such as effluent discharges, however, has yet to be established. Effluents from 43 U.K. wastewater treatment works were analyzed for the presence of the principal estrogenic chemical contaminants, estradiol, estrone, ethinylestradiol, and nonylphenol. The measured concentrations were used to predict the estrogenic activity of each effluent, employing the model of CA, based on the relative potencies of the individual chemicals in an in vitro recombinant yeast estrogen screen (rYES) and a short-term (14-day) in vivo rainbow trout vitellogenin induction assay. Based on the measured concentrations of the four chemicals in the effluents and their relative potencies in each assay, the calculated in vitro and in vivo responses compared well and ranged between 3.5 and 87 ng/L of estradiol equivalents (E2 EQ) for the different effluents. In the rYES, however, the measured E2 EQ concentrations in the effluents ranged between 0.65 and 43 ng E2 EQ/L, and they varied against those predicted by the CA model. Deviations in the estimation of the estrogenic potency of the effluents by the CA model, compared with the measured responses in the rYES, are likely to have resulted from inaccuracies associated with the measurement of the chemicals in the extracts derived from the complex effluents. Such deviations could also result as a consequence of interactions between chemicals present in the extracts that disrupted the activation of the estrogen response elements in the rYES. E2 EQ concentrations derived from the vitellogenic response in fathead minnows exposed to a series of effluent dilutions were highly comparable with the E2 EQ concentrations derived from assessments of the estrogenic potency of these dilutions in the rYES. Together these data support the
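
    The concentration addition calculation itself is a simple potency-weighted sum; the sketch below computes estradiol equivalents from measured concentrations and relative potencies. All numerical values are placeholders for illustration, not the study's measured effluent data or assay-derived potencies.

```python
# Concentration addition as a potency-weighted sum: E2 EQ = sum(c_i * RP_i),
# with relative potencies (RP) expressed against estradiol. All numbers
# below are placeholders, not measured effluent data.
relative_potency = {          # potency relative to estradiol (E2 = 1.0)
    "estradiol": 1.0,
    "estrone": 0.3,           # assumed illustrative value
    "ethinylestradiol": 10.0, # assumed illustrative value
    "nonylphenol": 1e-4,      # assumed illustrative value
}
measured_ng_per_l = {         # hypothetical effluent concentrations (ng/L)
    "estradiol": 2.0,
    "estrone": 15.0,
    "ethinylestradiol": 0.5,
    "nonylphenol": 800.0,
}

e2_eq = sum(measured_ng_per_l[c] * relative_potency[c] for c in relative_potency)
print(f"predicted estrogenic activity: {e2_eq:.1f} ng/L E2 EQ")
```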

  9. Nonlinear model predictive control theory and algorithms

    CERN Document Server

    Grüne, Lars

    2017-01-01

    This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...
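
    To give a flavour of the receding-horizon idea the book formalises, the sketch below solves a finite-horizon optimal control problem for a toy nonlinear system at every step and applies only the first input. It uses a generic nonlinear optimiser rather than the tailored algorithms discussed in the book, and the plant model, horizon, weights and bounds are arbitrary assumptions.

```python
# Toy receding-horizon (NMPC-style) loop: at each step, optimize a
# finite-horizon input sequence for a simple nonlinear system and apply
# only the first input. Model, horizon and weights are illustrative.
import numpy as np
from scipy.optimize import minimize

def step(x, u):
    """Discrete-time nonlinear plant (toy pendulum-like example)."""
    return np.array([x[0] + 0.1 * x[1],
                     x[1] + 0.1 * (-np.sin(x[0]) + u)])

def horizon_cost(u_seq, x0, N):
    x, cost = x0, 0.0
    for k in range(N):
        x = step(x, u_seq[k])
        cost += x @ x + 0.1 * u_seq[k] ** 2      # state + input penalty
    return cost

N = 10                       # prediction horizon
x = np.array([1.0, 0.0])     # initial state
u_guess = np.zeros(N)
for t in range(30):          # closed-loop simulation
    res = minimize(horizon_cost, u_guess, args=(x, N),
                   bounds=[(-2.0, 2.0)] * N, method="L-BFGS-B")
    u_apply = res.x[0]       # apply only the first optimized input
    x = step(x, u_apply)
    u_guess = np.roll(res.x, -1)     # warm start for the next step
print("final state:", x)
```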

  10. A Comparative Study of Spectral Auroral Intensity Predictions From Multiple Electron Transport Models

    Science.gov (United States)

    Grubbs, Guy; Michell, Robert; Samara, Marilia; Hampton, Donald; Hecht, James; Solomon, Stanley; Jahn, Jorg-Micha

    2018-01-01

    It is important to routinely examine and update models used to predict auroral emissions resulting from precipitating electrons in Earth's magnetotail. These models are commonly used to invert spectral auroral ground-based images to infer characteristics about incident electron populations when in situ measurements are unavailable. In this work, we examine and compare auroral emission intensities predicted by three commonly used electron transport models using varying electron population characteristics. We then compare model predictions to same-volume in situ electron measurements and ground-based imaging to qualitatively examine modeling prediction error. Initial comparisons showed differences in predictions by the GLobal airglOW (GLOW) model and the other transport models examined. Chemical reaction rates and radiative rates in GLOW were updated using recent publications, and predictions showed better agreement with the other models and the same-volume data, stressing that these rates are important to consider when modeling auroral processes. Predictions by each model exhibit similar behavior for varying atmospheric constants, energies, and energy fluxes. Same-volume electron data and images are highly correlated with predictions by each model, showing that these models can be used to accurately derive electron characteristics and ionospheric parameters based solely on multispectral optical imaging data.

  11. Detailed physical properties prediction of pure methyl esters for biodiesel combustion modeling

    International Nuclear Information System (INIS)

    An, H.; Yang, W.M.; Maghbouli, A.; Chou, S.K.; Chua, K.J.

    2013-01-01

    Highlights: ► Group contribution methods from molecular level have been used for the prediction. ► Complete prediction of the physical properties for 5 methyl esters has been done. ► The predicted results can be very useful for biodiesel combustion modeling. ► Various models have been compared and the best model has been identified. ► Predicted properties are over large temperature ranges with excellent accuracies. -- Abstract: In order to accurately simulate the fuel spray, atomization, combustion and emission formation processes of a diesel engine fueled with biodiesel, adequate knowledge of biodiesel’s physical properties is desired. The objective of this work is to do a detailed physical properties prediction for the five major methyl esters of biodiesel for combustion modeling. The physical properties considered in this study are: normal boiling point, critical properties, vapor pressure, and latent heat of vaporization, liquid density, liquid viscosity, liquid thermal conductivity, gas diffusion coefficients and surface tension. For each physical property, the best prediction model has been identified, and very good agreements have been obtained between the predicted results and the published data where available. The calculated results can be used as key references for biodiesel combustion modeling.

  12. Deep Recurrent Model for Server Load and Performance Prediction in Data Center

    Directory of Open Access Journals (Sweden)

    Zheng Huang

    2017-01-01

    Recurrent neural networks (RNNs) have been widely applied to many sequential tagging tasks, such as natural language processing (NLP) and time series analysis, and have been shown to work well in those areas. In this paper, we propose using an RNN with long short-term memory (LSTM) units for server load and performance prediction. Classical methods for performance prediction focus on building a relation between performance and the time domain, which requires many unrealistic hypotheses. Our model is instead built on events (user requests), which are the root cause of server performance. We predict server performance using an RNN-LSTM by analyzing data-center server logs containing users' access sequences. Previous work on workload prediction could not generate detailed simulated workloads, which are useful for testing the working condition of servers; our method provides a new way to reproduce user request sequences using the RNN-LSTM. Experimental results show that our models perform well in generating load and predicting performance on a data set logged from an online service. We ran experiments with an nginx web server and a mysql database server, and our methods can easily be applied to other servers in a data center.
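
    A compact sketch of an LSTM-based load predictor in the same spirit (not the authors' architecture) is shown below; the window length, the synthetic per-interval request counts, and the network size and training settings are assumptions.

```python
# Sketch: predict the next interval's request load from a sliding window
# of past per-interval request counts using an LSTM. Architecture and
# hyperparameters are illustrative, not the paper's.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(2)
load = 100 + 30 * np.sin(np.arange(2000) / 50) + rng.normal(0, 5, 2000)

window = 24
X = np.stack([load[i:i + window] for i in range(len(load) - window)])[..., None]
y = load[window:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)

next_load = model.predict(load[-window:].reshape(1, window, 1), verbose=0)[0, 0]
print("predicted next-interval load:", next_load)
```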

  13. On Practical tuning of Model Uncertainty in Wind Turbine Model Predictive Control

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Hovgaard, Tobias

    2015-01-01

    Model predictive control (MPC) has in previous works been applied on wind turbines with promising results. These results apply linear MPC, i.e., linear models linearized at different operational points depending on the wind speed. The linearized models are derived from a nonlinear first principles...... model of a wind turbine. In this paper, we investigate the impact of this approach on the performance of a wind turbine. In particular, we focus on the most non-linear operational ranges of a wind turbine. The MPC controller is designed for, tested, and evaluated at an industrial high fidelity wind...

  14. Ultra-Short-Term Wind Power Prediction Using a Hybrid Model

    Science.gov (United States)

    Mohammed, E.; Wang, S.; Yu, J.

    2017-05-01

    This paper aims to develop and apply a hybrid model of two data-analytical methods, multiple linear regression and least squares (MLR&LS), for ultra-short-term wind power prediction (WPP), taking Northeast China electricity demand as an example. The data were obtained from historical wind power records from an offshore region and from a wind farm of the wind power plant in the area. The WPP is achieved in two stages: first, the ratios of wind power are forecasted using the proposed hybrid method, and these ratios are then transformed to obtain the forecasted values. The hybrid model combines persistence methods, MLR and LS. The proposed method includes two prediction types, multi-point prediction and single-point prediction. The WPP is tested against different models, such as the autoregressive moving average (ARMA), autoregressive integrated moving average (ARIMA) and artificial neural network (ANN) models. By comparing the results of these models, the validity of the proposed hybrid model is confirmed in terms of error and correlation coefficient, showing that the proposed method works effectively. Additionally, forecasting errors were computed and compared to improve understanding of how to depict highly variable WPP and the correlations between actual and predicted wind power.

  15. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    In medical statistics, many alternative strategies are available for building a prediction model based on training data. Prediction models are routinely compared by means of their prediction performance in independent validation data. If only one data set is available for training and validation,...

  16. Daily spillover from family to work: A test of the work-home resources model.

    Science.gov (United States)

    Du, Danyang; Derks, Daantje; Bakker, Arnold B

    2018-04-01

    The present study examines a mediated moderation model of the day-level effects of family hassles and family-work spillover (affect and cognition) on the relationship between job resources and employees' flourishing at work. Based on the work-home resources model, the authors hypothesized that demands from one domain (family) induce repetitive thoughts or negative feelings about those problems, so that individuals are not able to function optimally and to make full use of contextual resources in the other domain (work). Multilevel analyses of 108 Chinese working parents' 366 daily surveys revealed that the relationship between morning job resources and afternoon flourishing was significantly positive when previous day family hassles were low; the relationship became nonsignificant when previous day family hassles were high. In addition, as predicted, daily rumination also attenuated the relationship between morning job resources and afternoon flourishing, whereas daily affect did not. Finally, the moderating effect of previous day family hassles was mediated by daily rumination. The findings contribute to spillover theories by revealing the roles of affective and cognitive spillover from family to work. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  17. A predictive model for the behavior of radionuclides in lake systems

    International Nuclear Information System (INIS)

    Monte, L.

    1993-01-01

    This paper describes a predictive model for the behavior of 137Cs in lacustrine systems. The model was tested by comparing its predictions to contamination data collected in various lakes in Europe and North America. The migration of 137Cs from catchment basin and from bottom sediments to lake water was discussed in detail; these two factors influence the time behavior of contamination in lake water. The contributions to the levels of radionuclide concentrations in water, due to the above factors, generally increase in the long run. The uncertainty of the model, used as a generic tool for prediction of the levels of contamination in lake water, was evaluated. Data sets of water contamination analyzed in the present work suggest that the model uncertainty, at a 68% confidence level, is a factor 1.9

  18. A statistical model for predicting muscle performance

    Science.gov (United States)

    Byerly, Diane Leslie De Caix

    The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th order Autoregressive (AR) model and statistical regression analysis. It was determined that an AR derived parameter, the mean average magnitude of AR poles, significantly correlated with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-space flight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned enhancing astronaut performance and safety. Potential ground-based, medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing safety guidelines in the industry for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing
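
    The AR-pole feature described above can be computed with standard tools; the sketch below fits a 5th-order autoregressive model to a synthetic SEMG-like signal and returns the mean magnitude of the AR poles. The signal generation is an assumption standing in for real Lordex exercise recordings.

```python
# Sketch: fit a 5th-order autoregressive model to a (synthetic) SEMG-like
# signal and compute the mean magnitude of the AR poles, the feature the
# study correlated with repetitions-to-failure.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(11)
n = 2000
semg = np.zeros(n)
for i in range(2, n):                       # narrow-band noise as a stand-in
    semg[i] = 1.6 * semg[i - 1] - 0.9 * semg[i - 2] + rng.normal(0, 1)

fit = AutoReg(semg, lags=5).fit()
a = fit.params[1:]                          # AR coefficients a1..a5 (skip intercept)

# Poles are the roots of z^5 - a1*z^4 - ... - a5 = 0.
poles = np.roots(np.concatenate(([1.0], -a)))
print("mean AR pole magnitude:", np.abs(poles).mean())
```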

  19. Utilization of Model Predictive Control to Balance Power Absorption Against Load Accumulation

    Energy Technology Data Exchange (ETDEWEB)

    Abbas, Nikhar [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Tom, Nathan M [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-06-03

    Wave energy converter (WEC) control strategies have been primarily focused on maximizing power absorption. The use of model predictive control strategies allows for a finite-horizon, multiterm objective function to be solved. This work utilizes a multiterm objective function to maximize power absorption while minimizing the structural loads on the WEC system. Furthermore, a Kalman filter and autoregressive model were used to estimate and forecast the wave exciting force and predict the future dynamics of the WEC. The WEC's power-take-off time-averaged power and structural loads under a perfect forecast assumption in irregular waves were compared against results obtained from the Kalman filter and autoregressive model to evaluate model predictive control performance.

  20. Usability Prediction & Ranking of SDLC Models Using Fuzzy Hierarchical Usability Model

    Science.gov (United States)

    Gupta, Deepak; Ahlawat, Anil K.; Sagar, Kalpna

    2017-06-01

    Evaluation of software quality is an important aspect for controlling and managing the software. By such evaluation, improvements in software process can be made. The software quality is significantly dependent on software usability. Many researchers have proposed numbers of usability models. Each model considers a set of usability factors but do not cover all the usability aspects. Practical implementation of these models is still missing, as there is a lack of precise definition of usability. Also, it is very difficult to integrate these models into current software engineering practices. In order to overcome these challenges, this paper aims to define the term `usability' using the proposed hierarchical usability model with its detailed taxonomy. The taxonomy considers generic evaluation criteria for identifying the quality components, which brings together factors, attributes and characteristics defined in various HCI and software models. For the first time, the usability model is also implemented to predict more accurate usability values. The proposed system is named as fuzzy hierarchical usability model that can be easily integrated into the current software engineering practices. In order to validate the work, a dataset of six software development life cycle models is created and employed. These models are ranked according to their predicted usability values. This research also focuses on the detailed comparison of proposed model with the existing usability models.

  1. Spherical and cylindrical cavity expansion models based prediction of penetration depths of concrete targets.

    Directory of Open Access Journals (Sweden)

    Xiaochao Jin

    Cavity expansion theory is the most widely used approach for predicting the depth of penetration of concrete targets. The main purpose of this work is to clarify the differences between the spherical and cylindrical cavity expansion models and their scope of application in predicting the penetration depths of concrete targets. The factors that influence the dynamic cavity expansion process of concrete materials were first examined. Based on numerical results, the relationship between expansion pressure and velocity was established. The parameters in Forrestal's formula were then fitted to provide a convenient and effective prediction of the penetration depth. Results showed that both the spherical and cylindrical cavity expansion models can accurately predict the depth of penetration when the initial velocity is lower than 800 m/s. However, the prediction accuracy decreases with increasing initial velocity and projectile diameter. Based on our results, it can be concluded that when the initial velocity is higher than the critical velocity, the cylindrical cavity expansion model performs better than the spherical cavity expansion model in predicting the penetration depth, while when the initial velocity is lower than the critical velocity the conclusion is reversed. This work provides a basic principle for selecting the spherical or cylindrical cavity expansion model to predict the penetration depth of concrete targets.

  2. Determinants of work ability and its predictive value for disability

    NARCIS (Netherlands)

    Alavinia, S. M.; de Boer, A. G. E. M.; Van Duivenbooden, J. C.; Frings-Dresen, M. H. W.; Burdorf, A.

    2009-01-01

    Background Maintaining the ability of workers to cope with physical and psychosocial demands at work becomes increasingly important in prolonging working life. Aims To analyse the effects of work-related factors and individual characteristics on work ability and to determine the predictive value of

  3. What Is the Predictive Value of Animal Models for Vaccine Efficacy in Humans? Consideration of Strategies to Improve the Value of Animal Models.

    Science.gov (United States)

    Herati, Ramin Sedaghat; Wherry, E John

    2018-04-02

    Animal models are an essential feature of the vaccine design toolkit. Although animal models have been invaluable in delineating the mechanisms of immune function, their precision in predicting how well specific vaccines work in humans is often suboptimal. There are, of course, many obvious species differences that may limit animal models from predicting all details of how a vaccine works in humans. However, careful consideration of which animal models may have limitations should also allow more accurate interpretations of animal model data and more accurate predictions of what is to be expected in clinical trials. In this article, we examine some of the considerations that might be relevant to cross-species extrapolation of vaccine-related immune responses for the prediction of how vaccines will perform in humans. Copyright © 2018 Cold Spring Harbor Laboratory Press; all rights reserved.

  4. Hybrid Prediction Model of the Temperature Field of a Motorized Spindle

    Directory of Open Access Journals (Sweden)

    Lixiu Zhang

    2017-10-01

    The thermal characteristics of a motorized spindle are the main determinants of its performance and influence the machining accuracy of computer numerical control machine tools. It is important to accurately predict the thermal field of a motorized spindle during its operation in order to improve its thermal characteristics. This paper proposes a model to predict the temperature field of a high-speed, high-precision motorized spindle under different working conditions using a finite element model and test data. The finite element model considers the influence of the parameters of the cooling system and the lubrication system, and that of environmental conditions, on the coefficient of heat transfer, based on test data for the surface temperature of the motorized spindle. A genetic algorithm is used to optimize the coefficient of heat transfer of the spindle, and its temperature field is predicted using a three-dimensional model that employs this optimal coefficient. A prediction model of the temperature field of the 170MD30 motorized spindle is created and simulation data for the temperature field are compared with the test data. The results show that when the speed of the spindle is 10,000 rpm, the relative mean prediction error is 1.5%, and when its speed is 15,000 rpm, the prediction error is 3.6%. Therefore, the proposed prediction model can predict the temperature field of the motorized spindle with high accuracy.

  5. Gas Emission Prediction Model of Coal Mine Based on CSBP Algorithm

    Directory of Open Access Journals (Sweden)

    Xiong Yan

    2016-01-01

    In view of the nonlinear characteristics of gas emission in a coal working face, a prediction method is proposed based on a BP neural network optimized by the cuckoo search algorithm (CSBP). In the CSBP algorithm, cuckoo search is adopted to optimize the weight and threshold parameters of the BP network and to obtain the globally optimal solution. Furthermore, the twelve main factors affecting gas emission in the coal working face are taken as the input vector of the CSBP algorithm and the gas emission as the output vector, and the prediction model of the BP neural network with optimal parameters is then established. The results show that the CSBP algorithm has better generalization ability and higher prediction accuracy, and can be used effectively in the prediction of coal mine gas emission.
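
    As an illustration of the CSBP idea (not the authors' implementation), the sketch below encodes the weights and thresholds of a small one-hidden-layer BP-style network as a single vector and tunes them with a simplified cuckoo search using Lévy flights; the network size, search settings and the synthetic twelve-factor data are assumptions made only for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: 12 affecting factors -> gas emission rate (synthetic, not mine data).
X = rng.normal(size=(80, 12))
y = np.tanh(X @ rng.normal(size=12)) + 0.1 * rng.normal(size=80)

N_HID = 6
DIM = 12 * N_HID + N_HID + N_HID + 1  # W1, b1, W2, b2 flattened into one vector

def unpack(v):
    i = 0
    W1 = v[i:i + 12 * N_HID].reshape(12, N_HID); i += 12 * N_HID
    b1 = v[i:i + N_HID]; i += N_HID
    W2 = v[i:i + N_HID]; i += N_HID
    b2 = v[i]
    return W1, b1, W2, b2

def mse(v):
    W1, b1, W2, b2 = unpack(v)
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return np.mean((pred - y) ** 2)

def levy(size, beta=1.5):
    # Mantegna's algorithm for Levy-distributed step lengths.
    from math import gamma, sin, pi
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

# Simplified cuckoo search over the flattened weight vector.
n_nests, pa, n_iter = 20, 0.25, 300
nests = rng.normal(scale=0.5, size=(n_nests, DIM))
fit = np.array([mse(n) for n in nests])

for _ in range(n_iter):
    best = nests[fit.argmin()].copy()
    for i in range(n_nests):
        # Levy-flight move around the current nest, biased toward the best nest.
        step = 0.01 * levy(DIM) * (nests[i] - best)
        cand = nests[i] + step * rng.normal(size=DIM)
        f = mse(cand)
        if f < fit[i]:
            nests[i], fit[i] = cand, f
    # Abandon a fraction pa of the worst nests and rebuild them randomly.
    n_bad = int(pa * n_nests)
    worst = np.argsort(fit)[-n_bad:]
    nests[worst] = rng.normal(scale=0.5, size=(n_bad, DIM))
    fit[worst] = [mse(n) for n in nests[worst]]

print("best MSE after cuckoo search:", fit.min())
```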

  6. A state-based probabilistic model for tumor respiratory motion prediction

    International Nuclear Information System (INIS)

    Kalet, Alan; Sandison, George; Schmitz, Ruth; Wu Huanmei

    2010-01-01

    This work proposes a new probabilistic mathematical model for predicting tumor motion and position based on a finite state representation using the natural breathing states of exhale, inhale and end of exhale. Tumor motion was broken down into linear breathing states and sequences of states. Breathing state sequences and the observables representing those sequences were analyzed using a hidden Markov model (HMM) to predict the future sequences and new observables. Velocities and other parameters were clustered using a k-means clustering algorithm to associate each state with a set of observables such that a prediction of state also enables a prediction of tumor velocity. A time average model with predictions based on average past state lengths was also computed. State sequences which are known a priori to fit the data were fed into the HMM algorithm to set a theoretical limit of the predictive power of the model. The effectiveness of the presented probabilistic model has been evaluated for gated radiation therapy based on previously tracked tumor motion in four lung cancer patients. Positional prediction accuracy is compared with actual position in terms of the overall RMS errors. Various system delays, ranging from 33 to 1000 ms, were tested. Previous studies have shown duty cycles for latencies of 33 and 200 ms at around 90% and 80%, respectively, for linear, no prediction, Kalman filter and ANN methods as averaged over multiple patients. At 1000 ms, the previously reported duty cycles range from approximately 62% (ANN) down to 34% (no prediction). Average duty cycle for the HMM method was found to be 100% and 91 ± 3% for 33 and 200 ms latency and around 40% for 1000 ms latency in three out of four breathing motion traces. RMS errors were found to be lower than linear and no prediction methods at latencies of 1000 ms. The results show that for system latencies longer than 400 ms, the time average HMM prediction outperforms linear, no prediction, and the more
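
    A minimal sketch of the state-based idea, assuming synthetic breathing-like data and the third-party hmmlearn package rather than the patients' tracked traces: a Gaussian HMM is fitted to position/velocity observations, the breathing-state sequence is decoded, and the learned transition matrix and state means give a one-step-ahead prediction.

```python
import numpy as np
from hmmlearn import hmm   # third-party package: pip install hmmlearn

rng = np.random.default_rng(1)

# Synthetic breathing-like trace: position (mm) sampled at ~30 Hz (not patient data).
t = np.arange(0, 60, 1 / 30)
pos = 10 * np.sin(2 * np.pi * t / 4) + 0.3 * rng.normal(size=t.size)
obs = np.column_stack([pos, np.gradient(pos)])   # position and velocity features

# Three hidden states, loosely matching inhale / exhale / end-of-exhale.
model = hmm.GaussianHMM(n_components=3, covariance_type="full", n_iter=200, random_state=1)
model.fit(obs)

states = model.predict(obs)            # decoded breathing-state sequence
current = states[-1]

# One-step-ahead prediction: most likely next state from the transition matrix;
# the predicted observation is that state's learned mean (position, velocity).
next_state = int(np.argmax(model.transmat_[current]))
predicted_pos, predicted_vel = model.means_[next_state]
print("current state:", current, "-> predicted state:", next_state)
print("predicted position %.2f mm, velocity %.2f mm/sample" % (predicted_pos, predicted_vel))
```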

  7. Assessment and prediction of air quality using fuzzy logic and autoregressive models

    Science.gov (United States)

    Carbajal-Hernández, José Juan; Sánchez-Fernández, Luis P.; Carrasco-Ochoa, Jesús A.; Martínez-Trinidad, José Fco.

    2012-12-01

    In recent years, artificial intelligence methods have been used for the treatment of environmental problems. This work presents two models for the assessment and prediction of air quality. First, we develop a new computational model for air quality assessment in order to evaluate toxic compounds that can harm sensitive people in urban areas, affecting their normal activities. In this model we propose a Sigma operator to statistically assess air quality parameters using their historical data and to determine their negative impact on air quality based on toxicity limits, frequency averages and deviations of toxicological tests. We also introduce a fuzzy inference system that classifies the parameters through a reasoning process and integrates them into an air quality index describing the pollution level in five stages: excellent, good, regular, bad and danger. The second model predicts air quality concentrations using an autoregressive model, providing a predicted air quality index based on the fuzzy inference system previously developed. Using data from the Mexico City Atmospheric Monitoring System, we compare our indices with those developed by environmental agencies and with similar models. The results show that our models are an appropriate tool for assessing site pollution and for providing guidance to improve contingency actions in urban areas.
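
    A minimal sketch of the fuzzy classification step, with illustrative toxicity limits, membership shapes and class scores that are assumptions of this example rather than the paper's calibrated values: each pollutant is normalized by its limit, the worst ratio is mapped to fuzzy memberships in the five classes, and a weighted defuzzification yields an index and a label.

```python
import numpy as np

# Triangular membership function.
def tri(x, a, b, c):
    return np.clip(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0, 1.0)

# Illustrative toxicity limits (same units as the measurements below).
LIMITS = {"O3": 0.11, "CO": 11.0, "NO2": 0.21, "SO2": 0.13, "PM10": 120.0}

# Five air-quality classes defined on the ratio concentration / limit.
CLASSES = ["excellent", "good", "regular", "bad", "danger"]
CENTERS = [0.0, 0.5, 1.0, 1.5, 2.0]          # class prototypes on the normalized axis
SCORES  = [100, 75, 50, 25, 0]               # index value associated with each class

def air_quality_index(measurements):
    # Normalize each pollutant by its limit and keep the worst (largest) ratio,
    # so a single critical pollutant drives the index.
    ratio = max(measurements[p] / LIMITS[p] for p in measurements)
    memberships = np.array([tri(ratio, c - 0.5, c, c + 0.5) for c in CENTERS])
    if memberships.sum() == 0:               # beyond the last class: danger
        return 0.0, "danger"
    index = float(memberships @ SCORES / memberships.sum())   # weighted defuzzification
    label = CLASSES[int(np.argmax(memberships))]
    return index, label

print(air_quality_index({"O3": 0.09, "CO": 6.0, "NO2": 0.15, "SO2": 0.05, "PM10": 80.0}))
```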

  8. Unscented Kalman Filter-Trained Neural Networks for Slip Model Prediction

    Science.gov (United States)

    Li, Zhencai; Wang, Yang; Liu, Zhen

    2016-01-01

    The purpose of this work is to investigate accurate trajectory tracking control of a wheeled mobile robot (WMR) based on slip model prediction. A nonholonomic WMR is prone to increased slippage risk (such as longitudinal and lateral wheel slip) when traveling on unstructured outdoor terrain. In order to control a WMR stably and accurately under the effect of slippage, an unscented Kalman filter and neural networks (NNs) are applied to estimate the slip model in real time. This method exploits the model-approximating capabilities of a nonlinear state-space NN, and the unscented Kalman filter is used to train the NN's weights online. The slip parameters can be estimated and used to predict the time series of deviation velocity, which can be used to compensate the control inputs of the WMR. The results of numerical simulation show that the desired trajectory tracking control can be performed by predicting the nonlinear slip model. PMID:27467703

  9. Adaptation to shift work: physiologically based modeling of the effects of lighting and shifts' start time.

    Directory of Open Access Journals (Sweden)

    Svetlana Postnova

    Shift work has become an integral part of our life, with almost 20% of the population in developed countries involved in different shift schedules. However, the atypical work times, especially night shifts, are associated with reduced quality and quantity of sleep, which leads to increased sleepiness often culminating in accidents. It has been demonstrated that shift workers' sleepiness can be improved by proper scheduling of light exposure and by optimizing shift timing. Here, an integrated physiologically based model of sleep-wake cycles is used to predict adaptation to shift work in different light conditions and for different shift start times for a schedule of four consecutive days of work. The integrated model combines a model of the ascending arousal system in the brain that controls the sleep-wake switch and a human circadian pacemaker model. To validate the application of the integrated model and demonstrate its utility, its dynamics are adjusted to achieve a fit to published experimental results showing adaptation of night shift workers (n = 8) in conditions of either bright or regular lighting. Further, the model is used to predict the shift workers' adaptation to the same shift schedule, but for conditions not considered in the experiment. The model demonstrates that the intensity of shift light can be reduced fourfold from that used in the experiment and still produce good adaptation to night work. The model predicts that sleepiness of the workers during night shifts under either bright or regular lighting can be significantly improved by starting the shift earlier in the night, e.g., at 21:00 instead of 00:00. Finally, the study predicts that people of the same chronotype, i.e. with identical sleep times in normal conditions, can have drastically different responses to shift work depending on their intrinsic circadian and homeostatic parameters.

  12. Development and validation of a prediction algorithm for the onset of common mental disorders in a working population.

    Science.gov (United States)

    Fernandez, Ana; Salvador-Carulla, Luis; Choi, Isabella; Calvo, Rafael; Harvey, Samuel B; Glozier, Nicholas

    2018-01-01

    Common mental disorders are the most common reason for long-term sickness absence in most developed countries. Prediction algorithms for the onset of common mental disorders may help target indicated work-based prevention interventions. We aimed to develop and validate a risk algorithm to predict the onset of common mental disorders at 12 months in a working population. We conducted a secondary analysis of the Household, Income and Labour Dynamics in Australia Survey, a longitudinal, nationally representative household panel in Australia. Data from the 6189 working participants who did not meet the criteria for a common mental disorder at baseline were non-randomly split into training and validation databases, based on state of residence. Common mental disorders were assessed with the mental component score of the 36-Item Short Form Health Survey questionnaire (score ⩽45). Risk algorithms were constructed following the recommendations of the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) statement. Different risk factors were identified among women and men for the final risk algorithms. In the training data, the model for women had a C-index of 0.73 and an effect size (Hedges' g) of 0.91. In men, the C-index was 0.76 and the effect size was 1.06. In the validation data, the C-index was 0.66 for women and 0.73 for men, with positive predictive values of 0.28 and 0.26, respectively. Conclusion: It is possible to develop an algorithm with good discrimination for the onset of common mental disorders among working men, identifying overall and modifiable risks. Such models have the potential to change the way that prevention of common mental disorders at the workplace is conducted, but different models may be required for women.
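
    A minimal sketch of how such a risk algorithm can be built and reported, assuming synthetic stand-in predictors rather than the HILDA survey items: a logistic regression is fitted on a training split and evaluated on a held-out validation split with the C-index (ROC AUC) and a positive predictive value.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

# Synthetic stand-in for baseline predictors (e.g., job strain, support, prior symptoms)
# and 12-month onset of a common mental disorder (SF-36 MCS <= 45).
n = 3000
X = rng.normal(size=(n, 6))
logit = -2.0 + 0.8 * X[:, 0] - 0.6 * X[:, 1] + 0.4 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# The paper's non-random split by state of residence is emulated by a simple index split.
train, valid = slice(0, 2000), slice(2000, None)

model = LogisticRegression(max_iter=1000).fit(X[train], y[train])
risk = model.predict_proba(X[valid])[:, 1]

c_index = roc_auc_score(y[valid], risk)          # C-index for a binary outcome
ppv = y[valid][risk > 0.5].mean() if (risk > 0.5).any() else float("nan")
print(f"C-index: {c_index:.2f}, PPV at 0.5 threshold: {ppv:.2f}")
```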

  13. Evaluating the reliability of predictions made using environmental transfer models

    International Nuclear Information System (INIS)

    1989-01-01

    The development and application of mathematical models for predicting the consequences of releases of radionuclides into the environment from normal operations in the nuclear fuel cycle and in hypothetical accident conditions has increased dramatically in the last two decades. This Safety Practice publication has been prepared to provide guidance on the available methods for evaluating the reliability of environmental transfer model predictions. It provides a practical introduction to the subject, with particular emphasis given to worked examples in the text. It is intended to supplement existing IAEA publications on environmental assessment methodology. 60 refs, 17 figs, 12 tabs

  14. Qualitative and quantitative guidelines for the comparison of environmental model predictions

    International Nuclear Information System (INIS)

    Scott, M.

    1995-03-01

    agricultural produce, as well as estimates of whole body concentrations. The observed data for the quantities of interest were typically summarised in the form of a 95% confidence interval for the mean, where the underlying distribution of the observations was assumed to be log-normal. Within the VAMP project, two sets of model predictions were provided, but in this work, only the final predictions have been used. The model predictions have not been assumed to have log-normal distributions. It is of interest to note that there was a considerable reduction in the range of the final set of model predictions compared to the initial set. Predictions were provided from 11 different models and some of these have been used in this analysis. Unfortunately, there were no estimates given of the uncertainties on the model predictions

  15. Do workaholism and work engagement predict employee well-being and performance in opposite directions?

    Science.gov (United States)

    Shimazu, Akihito; Schaufeli, Wilmar B; Kubota, Kazumi; Kawakami, Norito

    2012-01-01

    This study investigated the distinctiveness between workaholism and work engagement by examining their longitudinal relationships (measurement interval=7 months) with well-being and performance in a sample of 1,967 Japanese employees from various occupations. Based on a previous cross-sectional study (Shimazu & Schaufeli, 2009), we expected that workaholism predicts future unwell-being (i.e., high ill-health and low life satisfaction) and poor job performance, whereas work engagement predicts future well-being (i.e., low ill-health and high life satisfaction) and superior job performance. T1-T2 changes in ill-health, life satisfaction and job performance were measured as residual scores that were then included in the structural equation model. Results showed that workaholism and work engagement were weakly and positively related to each other. In addition, workaholism was related to an increase in ill-health and to a decrease in life satisfaction. In contrast, work engagement was related to a decrease in ill-health and to increases in both life satisfaction and job performance. These findings suggest that workaholism and work engagement are two different kinds of concepts that are oppositely related to well-being and performance.

  16. Robust multi-model predictive control of multi-zone thermal plate system

    Directory of Open Access Journals (Sweden)

    Poom Jatunitanon

    2018-02-01

    A modern controller was designed using the mathematical model of a multi-zone thermal plate system. An important requirement for this type of controller is that it must be able to maintain the temperature set-point of each thermal zone. The mathematical model used in the design was determined through a system identification process. The results showed that when the operating condition changes, the performance of the controller may degrade as a result of system parameter uncertainties. This paper proposes a weighting technique that combines the robust model predictive controllers designed for each operating condition into a single robust multi-model predictive controller. Simulation and experimental results showed that the proposed method outperformed conventional multi-model predictive control in the rise time of the transient response when used in a system designed to work over a wide range of operating conditions.

  17. Criteria for prediction of plastic instabilities for hot working processes. (Part I: Theoretical review)

    International Nuclear Information System (INIS)

    Al Omar, A.; Prado, J. M.

    2010-01-01

    Hot working processes often induce high levels of deformation at high strain rates and impose very complex multiaxial loading. These processes are essentially limited by the onset and development of plastic instabilities, which may be the direct cause of rapid crack propagation and possible final rupture. The complexity of the deformation modes and the simultaneous intervention of several parameters have led many researchers to develop various criteria, with different approaches, to predict the occurrence of defects and to optimize process control parameters. The aim of the present paper is to summarize the general characteristics of some instability criteria widely used in the literature for the prediction of plastic instabilities during hot working. It was considered appropriate to divide the work into two parts: Part I presents the phenomenological criteria for the prediction of plastic instabilities, based on descriptive observation of the microscopic phenomena of deformation (strain hardening and strain rate sensitivity), and discusses the continuum criteria based on the principle of maximum rate of entropy production of irreversible thermodynamics applied to the continuum mechanics of large plastic flow. This part also provides a bibliographical discussion among several authors with regard to the physical foundations of the dynamic materials model. In Part II, a comparative study is carried out to characterize flow instability during hot working of a medium-carbon microalloyed steel using phenomenological and continuum criteria. (Author) 83 refs.

  18. Ground Motion Prediction Models for Caucasus Region

    Science.gov (United States)

    Jorjiashvili, Nato; Godoladze, Tea; Tvaradze, Nino; Tumanova, Nino

    2016-04-01

    Ground motion prediction models (GMPMs) relate ground motion intensity measures to variables describing earthquake source, path, and site effects. Estimation of expected ground motion is fundamental to earthquake hazard assessment. The most commonly used parameters for attenuation relations are peak ground acceleration and spectral acceleration, because these parameters provide useful information for seismic hazard assessment. Development of the Georgian Digital Seismic Network started in 2003. In this study, new GMP models are obtained based on new data from the Georgian seismic network and from neighboring countries. The models are estimated in the classical statistical way, by regression analysis. Site ground conditions are additionally considered, because the same earthquake recorded at the same distance may cause different damage depending on ground conditions. Empirical ground-motion prediction models require adjustment to make them appropriate for site-specific scenarios; however, the process of making such adjustments remains a challenge. This work presents a holistic framework for the development of a peak ground acceleration (PGA) or spectral acceleration (SA) GMPE that is easily adjustable to different seismological conditions and does not suffer from the practical problems associated with adjustments in the response spectral domain.
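
    A minimal sketch of fitting a PGA prediction equation by regression, assuming a synthetic catalogue and a generic functional form (magnitude, distance and a simple site-class term) rather than the coefficients or records of the Georgian model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic catalogue: magnitude, hypocentral distance (km), site class (0 rock / 1 soil).
n = 400
M = rng.uniform(4.0, 7.0, n)
R = rng.uniform(5.0, 150.0, n)
S = rng.integers(0, 2, n)

# "True" model used only to generate data, with log-normal scatter.
h = 8.0
ln_pga = -1.5 + 1.1 * M - 1.8 * np.log(np.sqrt(R**2 + h**2)) + 0.4 * S + 0.5 * rng.normal(size=n)

# Design matrix for ln(PGA) = c0 + c1*M + c2*ln(sqrt(R^2 + h^2)) + c3*S  (h fixed here).
A = np.column_stack([np.ones(n), M, np.log(np.sqrt(R**2 + h**2)), S])
coef, *_ = np.linalg.lstsq(A, ln_pga, rcond=None)
sigma = np.std(ln_pga - A @ coef)                 # aleatory scatter of the fitted model

print("c0..c3 =", np.round(coef, 3), " sigma =", round(float(sigma), 3))

# Median PGA prediction for a M6.0 event at 30 km on a soil site.
x = np.array([1.0, 6.0, np.log(np.sqrt(30.0**2 + h**2)), 1.0])
print("median PGA (g):", float(np.exp(x @ coef)))
```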

  19. Modeling the Temporal Nature of Human Behavior for Demographics Prediction

    DEFF Research Database (Denmark)

    Felbo, Bjarke; Sundsøy, Pål; Pentland, Alex

    2017-01-01

    Mobile phone metadata is increasingly used for humanitarian purposes in developing countries, as traditional data is scarce. Basic demographic information is, however, often absent from mobile phone datasets, limiting their operational impact. For these reasons, there has been a growing interest in predicting demographic information from mobile phone metadata. Previous work focused on creating increasingly advanced features to be modeled with standard machine learning algorithms. We here instead model the raw mobile phone metadata directly using deep learning, exploiting the temporal nature of the data [...] on both age and gender prediction using only the temporal modality in mobile metadata. We finally validate our method on low-activity users and evaluate the modeling assumptions.

  20. Medication Reconciliation: Work Domain Ontology, prototype development, and a predictive model.

    Science.gov (United States)

    Markowitz, Eliz; Bernstam, Elmer V; Herskovic, Jorge; Zhang, Jiajie; Shneiderman, Ben; Plaisant, Catherine; Johnson, Todd R

    2011-01-01

    Medication errors can result from administration inaccuracies at any point of care and are a major cause for concern. To develop a successful Medication Reconciliation (MR) tool, we believe it necessary to build a Work Domain Ontology (WDO) for the MR process. A WDO defines the explicit, abstract, implementation-independent description of the task by separating the task from work context, application technology, and cognitive architecture. We developed a prototype based upon the WDO and designed to adhere to standard principles of interface design. The prototype was compared to Legacy Health System's and Pre-Admission Medication List Builder MR tools via a Keystroke-Level Model analysis for three MR tasks. The analysis found the prototype requires the fewest mental operations, completes tasks in the fewest steps, and completes tasks in the least amount of time. Accordingly, we believe that developing a MR tool, based upon the WDO and user interface guidelines, improves user efficiency and reduces cognitive load.

  1. Accurate and dynamic predictive model for better prediction in medicine and healthcare.

    Science.gov (United States)

    Alanazi, H O; Abdullah, A H; Qureshi, K N; Ismail, A S

    2018-05-01

    Information and communication technologies (ICTs) have introduced new integrated operations and methods in all fields of life. The health sector has also adopted new technologies to improve its systems and provide better services to customers. Predictive models in health care have likewise benefited from new technologies for predicting different disease outcomes. However, existing predictive models still suffer from limitations in predictive performance. To improve predictive performance, this paper proposes a predictive model that classifies disease predictions into different categories. The model is evaluated on traumatic brain injury (TBI) datasets. TBI is a serious disease worldwide and needs attention due to its severe impact on human life. The proposed model improves predictive performance for TBI. The TBI dataset was developed, and its features approved, by neurologists. The experimental results show that the proposed model achieves good results in terms of accuracy, sensitivity, and specificity.

  2. Formability prediction for AHSS materials using damage models

    Science.gov (United States)

    Amaral, R.; Santos, Abel D.; José, César de Sá; Miranda, Sara

    2017-05-01

    Advanced high strength steels (AHSS) are seeing increased use, mostly due to lightweight design in the automobile industry and strict regulations on safety and greenhouse gas emissions. However, the use of these materials, characterized by a high strength-to-weight ratio, stiffness and high work hardening at early stages of plastic deformation, has imposed many challenges on the sheet metal industry, mainly their low formability and different behaviour compared to traditional steels. This can represent a demanding task, both to obtain a successful component and when using numerical simulation to predict material behaviour and its fracture limits. Although numerical prediction of critical strains in sheet metal forming processes is still very often based on classic forming limit diagrams, alternative approaches can use damage models, which are based on stress states to predict failure during the forming process and can be classified as empirical, physics-based and phenomenological models. In the present paper, a comparative analysis of different ductile damage models is carried out in order to numerically evaluate two isotropic coupled damage models, proposed by Johnson-Cook and Gurson-Tvergaard-Needleman (GTN), corresponding to the first two groups of the previous classification. Finite element analysis is used with these damage mechanics approaches and the obtained results are compared with experimental Nakajima tests, making it possible to evaluate and validate their ability to predict damage and formability limits.

  4. Model for prediction of strip temperature in hot strip steel mill

    International Nuclear Information System (INIS)

    Panjkovic, Vladimir

    2007-01-01

    Proper functioning of set-up models in a hot strip steel mill requires reliable prediction of strip temperature. Temperature prediction is particularly important for accurate calculation of rolling force because of the strong dependence of yield stress and strip microstructure on temperature. A comprehensive model was developed to replace an obsolete model in the Western Port hot strip mill of BlueScope Steel. The new model predicts the strip temperature evolution from the roughing mill exit to the finishing mill exit. It takes into account the radiative and convective heat losses, forced flow boiling and film boiling of water at the strip surface, deformation heat in the roll gap, frictional sliding heat, heat of scale formation and the heat transfer between strip and work rolls through an oxide layer. The significance of phase transformation was also investigated. The model was tested with plant measurements and benchmarked against other models in the literature, and its performance was very good.

  6. Medium- and Long-term Prediction of LOD Change with the Leap-step Autoregressive Model

    Science.gov (United States)

    Liu, Q. B.; Wang, Q. J.; Lei, M. F.

    2015-09-01

    It is known that the accuracy of medium- and long-term prediction of length-of-day (LOD) changes based on the combined least-squares and autoregressive (LS+AR) model decreases gradually. The leap-step autoregressive (LSAR) model is more accurate and stable in medium- and long-term prediction, and it is therefore used to forecast LOD changes in this work. The LOD series from EOP 08 C04, provided by the IERS (International Earth Rotation and Reference Systems Service), is then used to compare the effectiveness of the LSAR and traditional AR methods. The predicted series resulting from the two models show that the prediction accuracy of the LSAR model is better than that of the AR model in medium- and long-term prediction.

  7. Do Work Characteristics Predict Health Deterioration Among Employees with Chronic Diseases?

    Science.gov (United States)

    de Wind, Astrid; Boot, Cécile R L; Sewdas, Ranu; Scharn, Micky; van den Heuvel, Swenne G; van der Beek, Allard J

    2017-06-29

    Purpose In our ageing workforce, the increasing numbers of employees with chronic diseases are encouraged to prolong their working lives. It is important to prevent health deterioration in this vulnerable group. This study aims to investigate whether work characteristics predict health deterioration over a 3-year period among employees with (1) chronic diseases, and, more specifically, (2) musculoskeletal and psychological disorders. Methods The study population consisted of 5600 employees aged 45-64 years with a chronic disease, who participated in the Dutch Study on Transitions in Employment, Ability and Motivation (STREAM). Information on work characteristics was derived from the baseline questionnaire. Health deterioration was defined as a decrease in general health (SF-12) between baseline and follow-up (1-3 years). Crude and adjusted logistic regression analyses were performed to investigate prediction of health deterioration by work characteristics. Subgroup analyses were performed for employees with musculoskeletal and psychological disorders. Results At follow-up, 19.2% of the employees reported health deterioration (N = 1075). Higher social support from colleagues or the supervisor predicted health deterioration in the crude analyses in the total group and in the groups with either musculoskeletal or psychological disorders (ORs 1.11-1.42). This effect was no longer found in the adjusted analyses. The other work characteristics did not predict health deterioration in any group. Conclusions This study did not support our hypothesis that work characteristics predict health deterioration among employees with chronic diseases. As our study population had succeeded in continuing employment to age 45 and beyond, it was probably a relatively healthy selection of employees.

  8. Bridge Deterioration Prediction Model Based On Hybrid Markov-System Dynamic

    Directory of Open Access Journals (Sweden)

    Widodo Soetjipto Jojok

    2017-01-01

    Instances of sudden bridge failure tend to increase in Indonesia. To mitigate this condition, Indonesia's Bridge Management System (I-BMS) has been applied to continuously monitor the condition of bridges. However, I-BMS only implements visual inspection for maintenance prioritization of bridge structure components rather than of the bridge structural system. This paper proposes a new bridge failure prediction model based on a hybrid Markov-System Dynamic (MSD) approach. System dynamics is used to represent the correlation among bridge structure components, while a Markov chain is used to calculate the temporal probability of bridge failure. Data on around 235 bridges in Indonesia were collected from the Directorate of Bridge of the Ministry of Public Works and Housing to calculate the transition probabilities of the model. To validate the model, a medium-span concrete bridge was used as a case study. The result shows that the proposed model can accurately predict the bridge condition. Besides predicting the probability of bridge failure, the model can also be used as an early warning system for bridge monitoring activity.
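
    A minimal sketch of the Markov part of the model, with an illustrative five-state yearly transition matrix (not the probabilities estimated from the 235 Indonesian bridges): the condition-state distribution is propagated year by year and the probability of reaching the absorbing failure state is read off.

```python
import numpy as np

# Illustrative yearly transition matrix over five condition states
# (0 = good ... 4 = failed); rows sum to 1, failure is absorbing.
P = np.array([
    [0.90, 0.10, 0.00, 0.00, 0.00],
    [0.00, 0.85, 0.15, 0.00, 0.00],
    [0.00, 0.00, 0.80, 0.18, 0.02],
    [0.00, 0.00, 0.00, 0.75, 0.25],
    [0.00, 0.00, 0.00, 0.00, 1.00],
])

state = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # a bridge starting in good condition

for year in range(1, 31):
    state = state @ P                          # temporal probability propagation
    if year % 5 == 0:
        print(f"year {year:2d}: P(failure) = {state[-1]:.3f}")
```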

  9. PNN-based Rockburst Prediction Model and Its Applications

    Directory of Open Access Journals (Sweden)

    Yu Zhou

    2017-07-01

    Rock burst is one of the main engineering geological problems that significantly threaten the safety of construction. Prediction of rock burst is always an important issue concerning the safety of workers and equipment in tunnels. In this paper, a novel PNN-based rock burst prediction model is proposed to determine whether rock burst will happen in underground rock projects and how intense it will be. The probabilistic neural network (PNN) is developed based on Bayesian criteria of multivariate pattern classification. Because PNN has the advantages of low training complexity, high stability, quick convergence, and simple construction, it can be applied well to the prediction of rock burst. Some main controlling factors, such as the rock's maximum tangential stress, uniaxial compressive strength, uniaxial tensile strength, and elastic energy index, are chosen as the characteristic vector of the PNN. The PNN model is obtained by training on data sets of rock burst samples from underground rock projects at home and abroad. Other samples are tested with the model, and the testing results agree with the practical records. At the same time, two real-world applications are used to verify the proposed method. The prediction results are the same as those of existing methods and match what actually happened on site, which verifies the effectiveness and applicability of the proposed work.
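
    A minimal sketch of a PNN as a Gaussian Parzen-window classifier on the four control factors named above, using made-up training samples rather than the real rock burst records; the smoothing parameter and class labels are assumptions of the example.

```python
import numpy as np

# Feature vector: [max tangential stress, UCS, tensile strength, elastic energy index].
# Made-up training samples for three intensity classes: 0 none, 1 moderate, 2 strong.
X_train = np.array([
    [30,  90, 6, 2.0], [35, 100, 7, 2.2], [40, 110, 8, 2.5],    # class 0
    [60, 120, 8, 3.5], [65, 130, 9, 4.0], [70, 125, 8, 4.2],    # class 1
    [95, 150, 9, 6.0], [100, 160, 10, 6.5], [110, 155, 9, 7.0], # class 2
], dtype=float)
y_train = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])

# Standardize features so a single smoothing parameter is meaningful.
mu, sd = X_train.mean(axis=0), X_train.std(axis=0)
Z_train = (X_train - mu) / sd

def pnn_predict(x, sigma=0.5):
    z = (np.asarray(x, dtype=float) - mu) / sd
    # Pattern layer: Gaussian kernel between the query and every training sample.
    k = np.exp(-np.sum((Z_train - z) ** 2, axis=1) / (2 * sigma ** 2))
    # Summation layer: average kernel activation per class; output layer: argmax.
    scores = np.array([k[y_train == c].mean() for c in np.unique(y_train)])
    return int(np.argmax(scores)), scores / scores.sum()

label, probs = pnn_predict([75, 135, 9, 4.5])
print("predicted intensity class:", label, "posterior:", np.round(probs, 3))
```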

  10. Text Comprehension Mediates Morphological Awareness, Syntactic Processing, and Working Memory in Predicting Chinese Written Composition Performance

    Science.gov (United States)

    Guan, Connie Qun; Ye, Feifei; Wagner, Richard K.; Meng, Wanjin; Leong, Che Kan

    2014-01-01

    The goal of the present study was to test opposing views about four issues concerning predictors of individual differences in Chinese written composition: (a) Whether morphological awareness, syntactic processing, and working memory represent distinct and measureable constructs in Chinese or are just manifestations of general language ability; (b) whether they are important predictors of Chinese written composition, and if so, the relative magnitudes and independence of their predictive relations; (c) whether observed predictive relations are mediated by text comprehension; and (d) whether these relations vary or are developmentally invariant across three years of writing development. Based on analyses of the performance of students in grades 4 (n = 246), 5 (n = 242) and 6 (n = 261), the results supported morphological awareness, syntactic processing, and working memory as distinct yet correlated abilities that made independent contributions to predicting Chinese written composition, with working memory as the strongest predictor. However, predictive relations were mediated by text comprehension. The final model accounted for approximately 75 percent of the variance in Chinese written composition. The results were largely developmentally invariant across the three grades from which participants were drawn. PMID:25530630

  11. Wake-Model Effects on Induced Drag Prediction of Staggered Boxwings

    Directory of Open Access Journals (Sweden)

    Julian Schirra

    2018-01-01

    For staggered boxwings, predictions of induced drag that rely on common potential-flow methods can be of limited accuracy. For example, linear, freestream-fixed wake models cannot resolve effects related to wake deflection and roll-up, which can have significant effects on the induced drag prediction of these systems. The present work investigates the principal impact of wake modelling on the accuracy of induced drag prediction for boxwings with stagger. The study compares induced drag predictions of a higher-order potential-flow method that uses fixed and relaxed wake models, and of an Euler-flow method. Positive-staggered systems at positive angles of attack are found to be particularly prone to higher-order wake effects due to vertical contraction of the wake trajectories, which results in smaller effective height-to-span ratios than with negative stagger and thus closer interactions between trailing wakes and lifting surfaces. Therefore, when trying to predict the induced drag of positive-staggered boxwings, only a potential-flow method with a fully relaxed wake model will provide a degree of accuracy that rivals that of an Euler method while being computationally significantly more efficient.

  12. Medium- and Long-term Prediction of LOD Change by the Leap-step Autoregressive Model

    Science.gov (United States)

    Wang, Qijie

    2015-08-01

    The accuracy of medium- and long-term prediction of length-of-day (LOD) change based on the combined least-squares and autoregressive (LS+AR) model deteriorates gradually. The leap-step autoregressive (LSAR) model can significantly reduce the edge effect of the observation sequence and, in particular, greatly improves the resolution of the signals' low-frequency components; it can therefore improve prediction efficiency. In this work, LSAR is used to forecast the LOD change. The LOD series from EOP 08 C04, provided by the IERS, is modeled by both the LSAR and AR models, and the results of the two models are analyzed and compared. When the prediction length is between 10 and 30 days, the accuracy improvement is less than 10%. When the prediction length exceeds 30 days, the accuracy improves noticeably, with a maximum improvement of around 19%. The results show that the LSAR model has higher prediction accuracy and stability in medium- and long-term prediction.
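
    A minimal sketch of the leap-step idea, assuming a synthetic LOD-like series (not EOP 08 C04), an AR order of 10 and a leap step of 5: an ordinary AR model is fitted on the full series, while the LSAR variant fits a separate AR model on each interleaved subsequence and forecasts them independently.

```python
import numpy as np

def fit_ar(x, p):
    # Least-squares fit of an AR(p) model x_t = a1*x_{t-1} + ... + ap*x_{t-p} + c.
    rows = [x[i - p:i][::-1] for i in range(p, len(x))]
    A = np.column_stack([np.array(rows), np.ones(len(rows))])
    coef, *_ = np.linalg.lstsq(A, x[p:], rcond=None)
    return coef

def ar_forecast(x, coef, steps):
    p = len(coef) - 1
    hist = list(x[-p:])
    out = []
    for _ in range(steps):
        nxt = float(np.dot(coef[:-1], hist[::-1][:p]) + coef[-1])
        out.append(nxt)
        hist.append(nxt)
    return np.array(out)

def lsar_forecast(x, p, k, steps):
    # Leap-step AR: model each of the k interleaved subsequences separately.
    # (For simplicity this assumes len(x) is a multiple of k so the future steps
    # line up with the training phases.)
    preds = np.empty(steps)
    for phase in range(k):
        sub = x[phase::k]
        coef = fit_ar(sub, p)
        n_sub = len(preds[phase::k])
        if n_sub > 0:
            preds[phase::k] = ar_forecast(sub, coef, n_sub)
    return preds

# Synthetic LOD-like series: slow oscillations plus noise (not real EOP 08 C04 data).
rng = np.random.default_rng(5)
t = np.arange(2000)
x = 0.5 * np.sin(2 * np.pi * t / 365.25) + 0.2 * np.sin(2 * np.pi * t / 27.3) + 0.02 * rng.normal(size=t.size)

print("AR(10) 90-day forecast head:", np.round(ar_forecast(x, fit_ar(x, 10), 90)[:3], 3))
print("LSAR(10, leap 5) 90-day forecast head:", np.round(lsar_forecast(x, 10, 5, 90)[:3], 3))
```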

  13. Deception and Cognitive Load: Expanding our Horizon with a Working Memory Model

    Directory of Open Access Journals (Sweden)

    Siegfried Ludwig Sporer

    2016-04-01

    Recently, studies on deception and its detection have increased dramatically. Many of these studies rely on the cognitive load approach as the sole explanatory principle to understand deception. These studies have been exclusively on lies about negative actions (usually lies of suspects of [mock] crimes). Instead, we need to re-focus more generally on the cognitive processes involved in generating both lies and truths, not just on manipulations of cognitive load. Using Baddeley's (2000, 2007, 2012) working memory model, which integrates verbal and visual processes in working memory with retrieval from long-term memory and control of action, not only verbal content cues but also nonverbal, paraverbal and linguistic cues can be investigated within a single framework. The proposed model considers long-term semantic, episodic and autobiographical memory and their connections with working memory and action. It also incorporates ironic processes of mental control (Wegner, 1994, 2009), the role of scripts and schemata, and retrieval cues and retrieval processes. Specific predictions of the model are outlined and support from selected studies is presented. The model is applicable to different types of reports, particularly lies and truths about complex events, and to different modes of production (oral, hand-written, typed). Predictions regarding several moderator variables and methods to investigate them are proposed.

  14. A consensus approach for estimating the predictive accuracy of dynamic models in biology.

    Science.gov (United States)

    Villaverde, Alejandro F; Bongard, Sophia; Mauch, Klaus; Müller, Dirk; Balsa-Canto, Eva; Schmid, Joachim; Banga, Julio R

    2015-04-01

    Mathematical models that predict the complex dynamic behaviour of cellular networks are fundamental in systems biology, and provide an important basis for biomedical and biotechnological applications. However, obtaining reliable predictions from large-scale dynamic models is commonly a challenging task due to lack of identifiability. The present work addresses this challenge by presenting a methodology for obtaining high-confidence predictions from dynamic models using time-series data. First, to preserve the complex behaviour of the network while reducing the number of estimated parameters, model parameters are combined in sets of meta-parameters, which are obtained from correlations between biochemical reaction rates and between concentrations of the chemical species. Next, an ensemble of models with different parameterizations is constructed and calibrated. Finally, the ensemble is used for assessing the reliability of model predictions by defining a measure of convergence of model outputs (consensus) that is used as an indicator of confidence. We report results of computational tests carried out on a metabolic model of Chinese Hamster Ovary (CHO) cells, which are used for recombinant protein production. Using noisy simulated data, we find that the aggregated ensemble predictions are on average more accurate than the predictions of individual ensemble models. Furthermore, ensemble predictions with high consensus are statistically more accurate than ensemble predictions with large variance. The procedure provides quantitative estimates of the confidence in model predictions and enables the analysis of sufficiently complex networks as required for practical applications. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
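
    A minimal sketch of the consensus idea on a generic two-state dynamic model (a stand-in for the calibrated CHO model): an ensemble of parameterizations is simulated, the predictions are aggregated, and the spread across ensemble members is turned into a simple consensus/confidence score.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(6)

# Generic two-state dynamic model dx/dt = f(x, p); stands in for a calibrated ensemble member.
def f(t, x, k1, k2):
    s, p = x
    return [-k1 * s, k1 * s - k2 * p]

# Ensemble of parameterizations (e.g., drawn around calibrated meta-parameters).
ensemble = [(rng.normal(0.8, 0.1), rng.normal(0.3, 0.05)) for _ in range(50)]

t_eval = np.linspace(0, 10, 101)
trajs = []
for k1, k2 in ensemble:
    sol = solve_ivp(f, (0, 10), [1.0, 0.0], t_eval=t_eval, args=(k1, k2))
    trajs.append(sol.y[1])            # predicted product concentration over time
trajs = np.array(trajs)

mean_pred = trajs.mean(axis=0)        # aggregated ensemble prediction
spread = trajs.std(axis=0)            # low spread = high consensus = high confidence

consensus = 1.0 / (1.0 + spread / (np.abs(mean_pred) + 1e-9))
print("prediction at t=5: %.3f, consensus score: %.2f" % (mean_pred[50], consensus[50]))
```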

  15. Modelling of unsteady airfoil aerodynamics for the prediction of blade standstill vibrations

    OpenAIRE

    Skrzypinski, Witold Robert; Gaunaa, Mac; Sørensen, Niels N.; Zahle, Frederik

    2012-01-01

    In the present work, CFD simulations of the DU96-W-180 airfoil at 26 and 24 deg. angles of attack were performed. 2D RANS and 3D DES computations with non-moving and prescribed motion airfoil suspensions were carried out. The openings of the lift coefficient loops predicted by CFD were different than those predicted by engineering models. The average lift slope of the loops from the 3D CFD had opposite sign than the one from 2D CFD. Trying to model the 3D behaviour with the engineering models...

  16. A theoretical model for predicting the Peak Cutting Force of conical picks

    Directory of Open Access Journals (Sweden)

    Gao Kuidong

    2014-01-01

    In order to predict the peak cutting force (PCF) of a conical pick in the rock cutting process, a theoretical model is established based on elastic fracture mechanics theory. A vertical fracture model of the rock cutting fragment is also established based on the maximum tensile stress criterion. The relation between the vertical fracture angle and the associated parameters (the cutting parameter and the ratio B of rock compressive strength to tensile strength) is obtained by a numerical analysis method and polynomial regression, and the correctness of the rock vertical fracture model is verified through experiments. The linear regression coefficient between the predicted and experimental PCF is 0.81, and a significance level of less than 0.05 shows that the model for predicting the PCF is correct and reliable. A comparative analysis between the PCF obtained from this model and the Evans model reveals that the result of this prediction model is more reliable and accurate. The results of this work could provide some guidance for studying the rock cutting theory of conical picks and designing the cutting mechanism.

  17. The predictive mind and the experience of visual art work.

    Science.gov (United States)

    Kesner, Ladislav

    2014-01-01

    Among the main challenges of the predictive brain/mind concept is how to link prediction at the neural level to prediction at the cognitive-psychological level and finding conceptually robust and empirically verifiable ways to harness this theoretical framework toward explaining higher-order mental and cognitive phenomena, including the subjective experience of aesthetic and symbolic forms. Building on the tentative prediction error account of visual art, this article extends the application of the predictive coding framework to the visual arts. It does so by linking this theoretical discussion to a subjective, phenomenological account of how a work of art is experienced. In order to engage more deeply with a work of art, viewers must be able to tune or adapt their prediction mechanism to recognize art as a specific class of objects whose ontological nature defies predictability, and they must be able to sustain a productive flow of predictions from low-level sensory, recognitional to abstract semantic, conceptual, and affective inferences. The affective component of the process of predictive error optimization that occurs when a viewer enters into dialog with a painting is constituted both by activating the affective affordances within the image and by the affective consequences of prediction error minimization itself. The predictive coding framework also has implications for the problem of the culturality of vision. A person's mindset, which determines what top-down expectations and predictions are generated, is co-constituted by culture-relative skills and knowledge, which form hyperpriors that operate in the perception of art.

  19. Long-term orbit prediction for Tiangong-1 spacecraft using the mean atmosphere model

    Science.gov (United States)

    Tang, Jingshi; Liu, Lin; Cheng, Haowen; Hu, Songjie; Duan, Jianfeng

    2015-03-01

    China is planning to complete its first space station by 2020. For long-term management and maintenance, the orbit of the space station needs to be predicted over long periods of time. Since the space station is expected to work in a low-Earth orbit, the error in the a priori atmosphere model contributes significantly to the rapid growth of the predicted orbit error. When the orbit is predicted 20 days ahead, the error in the a priori atmosphere model, if not properly corrected, could induce a semi-major axis error of up to a few kilometers and an overall position error of several thousand kilometers. In this work, we use a mean atmosphere model averaged from NRLMSISE00. The a priori reference mean density can be corrected during the orbit determination. For the long-term orbit prediction, we use a sufficiently long period of observations and obtain a series of diurnal mean densities. This series contains the recent variation of the atmosphere density and can be analyzed for various periodic components. After being properly fitted, the mean density can be predicted and then applied in the orbit prediction. Here we carry out the test with China's Tiangong-1 spacecraft at an altitude of about 340 km and show that this method is simple and flexible. The densities predicted with this approach can serve in long-term orbit prediction. In several 20-day prediction tests, most predicted orbits show semi-major axis errors better than 700 m and overall position errors better than 400 km.
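
    A minimal sketch of the fit-and-extrapolate step for the diurnal mean density series, using synthetic data and assumed periods (e.g., roughly the 27-day solar rotation) rather than the actual Tiangong-1 values: a linear trend plus harmonics is fitted by least squares and extended 20 days beyond the data span.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic series of diurnal mean densities (kg/m^3) over 120 days (not flight data).
days = np.arange(120.0)
true = 3e-12 * (1 + 0.15 * np.sin(2 * np.pi * days / 27.0) + 0.001 * days)
rho = true * (1 + 0.03 * rng.normal(size=days.size))

# Least-squares fit: linear trend + harmonics at assumed periods (days).
periods = [27.0, 13.5]
cols = [np.ones_like(days), days]
for P in periods:
    cols += [np.sin(2 * np.pi * days / P), np.cos(2 * np.pi * days / P)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, rho, rcond=None)

# Extrapolate the fitted mean density 20 days beyond the data span.
future = np.arange(120.0, 140.0)
cols_f = [np.ones_like(future), future]
for P in periods:
    cols_f += [np.sin(2 * np.pi * future / P), np.cos(2 * np.pi * future / P)]
rho_pred = np.column_stack(cols_f) @ coef

print("predicted mean density on day 139: %.3e kg/m^3" % rho_pred[-1])
```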

  20. Integrated modeling of second phase precipitation in cold-worked 316 stainless steels under irradiation

    International Nuclear Information System (INIS)

    Mamivand, Mahmood; Yang, Ying; Busby, Jeremy T.; Morgan, Dane

    2017-01-01

    The current work combines the Cluster Dynamics (CD) technique and CALPHAD-based precipitation modeling to address the second phase precipitation in cold-worked (CW) 316 stainless steels (SS) under irradiation at 300–400 °C. CD provides the radiation enhanced diffusion and dislocation evolution as inputs for the precipitation model. The CALPHAD-based precipitation model treats the nucleation, growth and coarsening of precipitation processes based on classical nucleation theory and evolution equations, and simulates the composition, size and size distribution of precipitate phases. We benchmark the model against available experimental data at fast reactor conditions (9.4 × 10⁻⁷ dpa/s and 390 °C) and then use the model to predict the phase instability of CW 316 SS under light water reactor (LWR) extended life conditions (7 × 10⁻⁸ dpa/s and 275 °C). The model accurately predicts the γ' (Ni₃Si) precipitation evolution under fast reactor conditions and that the formation of this phase is dominated by radiation enhanced segregation. The model also predicts a carbide volume fraction that agrees well with available experimental data from a PWR reactor but is much higher than the volume fraction observed in fast reactors. We propose that radiation enhanced dissolution and/or carbon depletion at sinks that occurs at high flux could be the main sources of this inconsistency. The integrated model predicts ~1.2% volume fraction for carbide and ~3.0% volume fraction for γ' for typical CW 316 SS (with 0.054 wt% carbon) under LWR extended life conditions. Finally, this work provides valuable insights into the magnitudes and mechanisms of precipitation in irradiated CW 316 SS for nuclear applications.

  1. New phenomenological and differential model for hot working of metallic polycrystalline materials

    International Nuclear Information System (INIS)

    Castellanos, J.; Munoz, J.; Gutierrez, V.; Rieiro, I.; Ruano, O. A.; Carsi, M.

    2012-01-01

    This paper presents a new phenomenological and differential model (i.e., one based on differential equations) to predict the flow stress of a metallic polycrystalline material under hot working. The model, called MCC, depends on six parameters and uses two internal variables to consider the strain hardening, dynamic recovery and dynamic recrystallization processes that occur under hot working. The experimental validation of the MCC model has been carried out by means of stress-strain curves from torsion tests at high temperature (900 °C to 1200 °C) and moderately high strain rates (0.005 s⁻¹ to 5 s⁻¹) in a high nitrogen steel. The results reveal very good agreement between experimental and predicted stresses. Furthermore, the Garofalo a-parameter and the strain to reach 50 % of recrystallized volume fraction have been employed as a control check, as a first step toward the physical interpretation of the variables and parameters of the MCC model. (Author) 26 refs.

  2. Linear and nonlinear models for predicting fish bioconcentration factors for pesticides.

    Science.gov (United States)

    Yuan, Jintao; Xie, Chun; Zhang, Ting; Sun, Jinfang; Yuan, Xuejie; Yu, Shuling; Zhang, Yingbiao; Cao, Yunyuan; Yu, Xingchen; Yang, Xuan; Yao, Wu

    2016-08-01

    This work is devoted to the application of multiple linear regression (MLR), multilayer perceptron neural network (MLP NN) and projection pursuit regression (PPR) to quantitative structure-property relationship analysis of bioconcentration factors (BCFs) of pesticides tested on Bluegill (Lepomis macrochirus). Molecular descriptors of a total of 107 pesticides were calculated with the DRAGON Software and selected by the enhanced replacement method. Based on the selected DRAGON descriptors, a linear model was built using MLR, and nonlinear models were developed using MLP NN and PPR. The robustness of the obtained models was assessed by cross-validation and external validation using a test set. Outliers were also examined and deleted to improve predictive power. Comparative results revealed that PPR achieved the most accurate predictions. This study offers useful models and information for BCF prediction, risk assessment, and pesticide formulation. Copyright © 2016 Elsevier Ltd. All rights reserved.
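
    As a rough illustration of the type of model comparison described above (not the authors' actual pipeline), the sketch below fits a multiple linear regression and a small multilayer perceptron to a synthetic descriptor matrix and compares them by cross-validated RMSE. The descriptor matrix, the log BCF values and the model settings are placeholders, and projection pursuit regression is omitted because scikit-learn does not provide it.

```python
# Hedged sketch: compare a linear (MLR) and a nonlinear (MLP) QSPR model by
# cross-validation. Descriptors and log BCF values are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(107, 6))      # 107 pesticides x 6 selected descriptors (placeholder)
y = 1.5 * X[:, 0] - 0.8 * X[:, 1] ** 2 + rng.normal(scale=0.3, size=107)  # synthetic log BCF

models = {
    "MLR": LinearRegression(),
    "MLP": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)),
}
cv = KFold(n_splits=5, shuffle=True, random_state=0)
for name, model in models.items():
    rmse = -cross_val_score(model, X, y, cv=cv, scoring="neg_root_mean_squared_error")
    print(f"{name}: CV RMSE = {rmse.mean():.3f} +/- {rmse.std():.3f}")
```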

  3. Utilization of Model Predictive Control to Balance Power Absorption Against Load Accumulation: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Abbas, Nikhar; Tom, Nathan

    2017-09-01

    Wave energy converter (WEC) control strategies have been primarily focused on maximizing power absorption. The use of model predictive control strategies allows for a finite-horizon, multiterm objective function to be solved. This work utilizes a multiterm objective function to maximize power absorption while minimizing the structural loads on the WEC system. Furthermore, a Kalman filter and autoregressive model were used to estimate and forecast the wave exciting force and predict the future dynamics of the WEC. The WEC's power-take-off time-averaged power and structural loads under a perfect forecast assumption in irregular waves were compared against results obtained from the Kalman filter and autoregressive model to evaluate model predictive control performance.
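
    As a hedged sketch of one ingredient of the scheme described above, the snippet below fits an autoregressive model to a synthetic wave excitation force record by least squares and rolls it forward over a short horizon, which is the kind of forecast a model predictive controller would consume. The AR order, the signal and the horizon are illustrative assumptions, not values from the report.

```python
# Hedged sketch: fit an AR(p) model to a (synthetic) wave excitation force record
# and forecast a short horizon ahead, as an MPC wave forecaster might.
import numpy as np

def fit_ar(x, p):
    """Least-squares AR(p) coefficients for series x."""
    rows = [x[i - p:i][::-1] for i in range(p, len(x))]
    A, b = np.array(rows), x[p:]
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs

def forecast_ar(x, coeffs, steps):
    """Iteratively roll the AR model forward by `steps` samples."""
    hist = list(x[-len(coeffs):])
    out = []
    for _ in range(steps):
        nxt = float(np.dot(coeffs, hist[::-1][:len(coeffs)]))
        out.append(nxt)
        hist.append(nxt)
    return np.array(out)

t = np.arange(0, 200, 0.25)
force = np.sin(0.9 * t) + 0.4 * np.sin(1.7 * t) + 0.05 * np.random.default_rng(1).normal(size=t.size)
coeffs = fit_ar(force, p=20)
print(forecast_ar(force, coeffs, steps=8))   # wave-force forecast over the MPC horizon
```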

  4. A predictive coding account of bistable perception - a model-based fMRI study.

    Science.gov (United States)

    Weilnhammer, Veith; Stuke, Heiner; Hesselmann, Guido; Sterzer, Philipp; Schmack, Katharina

    2017-05-01

    In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we recurred to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI-experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison with established models of bistable perception based on mutual inhibition and adaptation, noise or a combination of adaptation and noise was used for the validation of the predictive coding model against the established models. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception. Taken together, our current work

  5. A predictive coding account of bistable perception - a model-based fMRI study.

    Directory of Open Access Journals (Sweden)

    Veith Weilnhammer

    2017-05-01

    Full Text Available In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we recurred to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI-experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison with established models of bistable perception based on mutual inhibition and adaptation, noise or a combination of adaptation and noise was used for the validation of the predictive coding model against the established models. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception. Taken together

  6. Hierarchical spatial models for predicting pygmy rabbit distribution and relative abundance

    Science.gov (United States)

    Wilson, T.L.; Odei, J.B.; Hooten, M.B.; Edwards, T.C.

    2010-01-01

    Conservationists routinely use species distribution models to plan conservation, restoration and development actions, while ecologists use them to infer process from pattern. These models tend to work well for common or easily observable species, but are of limited utility for rare and cryptic species. This may be because honest accounting of known observation bias and spatial autocorrelation is rarely included, thereby limiting statistical inference of resulting distribution maps. We specified and implemented a spatially explicit Bayesian hierarchical model for a cryptic mammal species (pygmy rabbit Brachylagus idahoensis). Our approach used two levels of indirect sign that are naturally hierarchical (burrows and faecal pellets) to build a model that allows for inference on regression coefficients as well as spatially explicit model parameters. We also produced maps of rabbit distribution (occupied burrows) and relative abundance (number of burrows expected to be occupied by pygmy rabbits). The model demonstrated statistically rigorous spatial prediction by including spatial autocorrelation and measurement uncertainty. We demonstrated flexibility of our modelling framework by depicting probabilistic distribution predictions using different assumptions of pygmy rabbit habitat requirements. Spatial representations of the variance of posterior predictive distributions were obtained to evaluate heterogeneity in model fit across the spatial domain. Leave-one-out cross-validation was conducted to evaluate the overall model fit. Synthesis and applications. Our method draws on the strengths of previous work, thereby bridging and extending two active areas of ecological research: species distribution models and multi-state occupancy modelling. Our framework can be extended to encompass both larger extents and other species for which direct estimation of abundance is difficult. © 2010 The Authors. Journal compilation © 2010 British Ecological Society.

  7. Resource allocation models of auditory working memory.

    Science.gov (United States)

    Joseph, Sabine; Teki, Sundeep; Kumar, Sukhbinder; Husain, Masud; Griffiths, Timothy D

    2016-06-01

    Auditory working memory (WM) is the cognitive faculty that allows us to actively hold and manipulate sounds in mind over short periods of time. We develop here a particular perspective on WM for non-verbal, auditory objects as well as for time based on the consideration of possible parallels to visual WM. In vision, there has been a vigorous debate on whether WM capacity is limited to a fixed number of items or whether it represents a limited resource that can be allocated flexibly across items. Resource allocation models predict that the precision with which an item is represented decreases as a function of total number of items maintained in WM because a limited resource is shared among stored objects. We consider here auditory work on sequentially presented objects of different pitch as well as time intervals from the perspective of dynamic resource allocation. We consider whether the working memory resource might be determined by perceptual features such as pitch or timbre, or bound objects comprising multiple features, and we speculate on brain substrates for these behavioural models. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Job stress models for predicting burnout syndrome: a review.

    Science.gov (United States)

    Chirico, Francesco

    2016-01-01

    In Europe, Council Directive 89/391 on the improvement of workers' safety and health has emphasized the importance of addressing all occupational risk factors, and hence also psychosocial and organizational risk factors. Nevertheless, the construct of "work-related stress" elaborated by EU-OSHA does not fully correspond to "psychosocial" risk, which is a broader category comprising various and distinct psychosocial risk factors. The term "burnout", without any binding definition, tries to integrate both the symptoms and the causes of the burnout process. In Europe, the most important methods developed for work-related stress risk assessment are based on Cox's transactional model of job stress. Nevertheless, there are more specific models for predicting burnout syndrome. This literature review provides an overview of job burnout, highlighting the most important models of job burnout, such as the Job Strain, the Effort/Reward Imbalance and the Job Demands-Resources models. The difference between these models and Cox's model of job stress is explored.

  9. Model predictive control of the solid oxide fuel cell stack temperature with models based on experimental data

    Science.gov (United States)

    Pohjoranta, Antti; Halinen, Matias; Pennanen, Jari; Kiviaho, Jari

    2015-03-01

    Generalized predictive control (GPC) is applied to control the maximum temperature in a solid oxide fuel cell (SOFC) stack and the temperature difference over the stack. GPC is a model predictive control method and the models utilized in this work are ARX-type (autoregressive with extra input), multiple input-multiple output, polynomial models that were identified from experimental data obtained from experiments with a complete SOFC system. The proposed control is evaluated by simulation with various input-output combinations, with and without constraints. A comparison with conventional proportional-integral-derivative (PID) control is also made. It is shown that if only the stack maximum temperature is controlled, a standard PID controller can be used to obtain output performance comparable to that obtained with the significantly more complex model predictive controller. However, in order to control the temperature difference over the stack, both the stack minimum and the maximum temperature need to be controlled and this cannot be done with a single PID controller. In such a case the model predictive controller provides a feasible and effective solution.
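
    To make the type of internal model concrete, the sketch below identifies a single-input single-output ARX model from synthetic data by least squares; GPC would then optimize the future input sequence against predictions from such a model. The model orders, the input signal and the "true" system used to generate the test data are illustrative assumptions.

```python
# Hedged sketch: identify a SISO ARX(na, nb) model y(t) = sum a_i y(t-i) + sum b_j u(t-j)
# by least squares, the type of polynomial model used as the internal model in GPC.
import numpy as np

def identify_arx(u, y, na=2, nb=2):
    rows, targets = [], []
    start = max(na, nb)
    for t in range(start, len(y)):
        past_y = [y[t - i] for i in range(1, na + 1)]
        past_u = [u[t - j] for j in range(1, nb + 1)]
        rows.append(past_y + past_u)
        targets.append(y[t])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta[:na], theta[na:]            # (a coefficients, b coefficients)

rng = np.random.default_rng(0)
u = rng.normal(size=500)                      # placeholder input perturbation
y = np.zeros(500)
for t in range(2, 500):                       # "true" system used only to generate test data
    y[t] = 0.7 * y[t - 1] - 0.1 * y[t - 2] + 0.5 * u[t - 1] + 0.2 * u[t - 2] + 0.01 * rng.normal()
a, b = identify_arx(u, y)
print("a:", a, "b:", b)                       # should be close to [0.7, -0.1] and [0.5, 0.2]
```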

  10. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the "philosophy" behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical

  11. Predictive value and construct validity of the work functioning screener-healthcare (WFS-H)

    Science.gov (United States)

    Boezeman, Edwin J.; Nieuwenhuijsen, Karen; Sluiter, Judith K.

    2016-01-01

    Objectives: To test the predictive value and convergent construct validity of a 6-item work functioning screener (WFS-H). Methods: Healthcare workers (249 nurses) completed a questionnaire containing the work functioning screener (WFS-H) and a work functioning instrument (NWFQ) measuring the following: cognitive aspects of task execution and general incidents, avoidance behavior, conflicts and irritation with colleagues, impaired contact with patients and their family, and level of energy and motivation. Productivity and mental health were also measured. Negative and positive predictive values, AUC values, and sensitivity and specificity were calculated to examine the predictive value of the screener. Correlation analysis was used to examine the construct validity. Results: The screener had good predictive value, since the results showed that a negative screener score is a strong indicator of work functioning not hindered by mental health problems (negative predictive values: 94%-98%; positive predictive values: 21%-36%; AUC: .64-.82; sensitivity: 42%-76%; and specificity 85%-87%). The screener has good construct validity due to moderate, but significant (p < 0.05), correlations. Conclusions: The screener has good predictive value and good construct validity. Its score offers occupational health professionals a helpful preliminary insight into the work functioning of healthcare workers. PMID:27010085

  12. Predictive value and construct validity of the work functioning screener-healthcare (WFS-H).

    Science.gov (United States)

    Boezeman, Edwin J; Nieuwenhuijsen, Karen; Sluiter, Judith K

    2016-05-25

    To test the predictive value and convergent construct validity of a 6-item work functioning screener (WFS-H). Healthcare workers (249 nurses) completed a questionnaire containing the work functioning screener (WFS-H) and a work functioning instrument (NWFQ) measuring the following: cognitive aspects of task execution and general incidents, avoidance behavior, conflicts and irritation with colleagues, impaired contact with patients and their family, and level of energy and motivation. Productivity and mental health were also measured. Negative and positive predictive values, AUC values, and sensitivity and specificity were calculated to examine the predictive value of the screener. Correlation analysis was used to examine the construct validity. The screener had good predictive value, since the results showed that a negative screener score is a strong indicator of work functioning not hindered by mental health problems (negative predictive values: 94%-98%; positive predictive values: 21%-36%; AUC: .64-.82; sensitivity: 42%-76%; and specificity 85%-87%). The screener has good construct validity due to moderate, but significant (p < 0.05), correlations. In conclusion, the screener has good predictive value and good construct validity. Its score offers occupational health professionals a helpful preliminary insight into the work functioning of healthcare workers.

  13. Distinct work-related, clinical and psychological factors predict return to work following treatment in four different cancer types.

    Science.gov (United States)

    Cooper, Alethea F; Hankins, Matthew; Rixon, Lorna; Eaton, Emma; Grunfeld, Elizabeth A

    2013-03-01

    Many factors influence return to work (RTW) following cancer treatment. However, specific factors affecting RTW across different cancer types are unclear. This study examined the role of clinical, sociodemographic, work and psychological factors in RTW following treatment for breast, gynaecological, head and neck, and urological cancer. A 12-month prospective questionnaire study was conducted with 290 patients. Cox regression analyses were conducted to calculate hazard ratios (HR) for time to RTW. Between 89% and 94% of cancer survivors returned to work. Breast cancer survivors took the longest to return (median 30 weeks), and urology cancer survivors returned the soonest (median 5 weeks). Earlier return among breast cancer survivors was predicted by a greater sense of control over their cancer at work (HR 1.2; 95% CI: 1.09-1.37) and by full-time work (HR 2.1; CI: 1.24-3.4). Predictive of a longer return among gynaecological cancer survivors was a belief that cancer treatment may impair ability to work (HR 0.75; CI: 0.62-0.91). Among urological cancer survivors, constipation was predictive of a longer RTW (HR 0.99; CI: 0.97-1.00), whereas undertaking flexible working was predictive of returning sooner (HR 1.70; CI: 1.07-2.7). Head and neck cancer survivors who perceived greater negative consequences of their cancer took longer to return (HR 0.27; CI: 0.11-0.68). Those reporting better physical functioning returned sooner (HR 1.04; CI: 1.01-1.08). A different profile of predictive factors emerged for the four cancer types. In addition to optimal symptom management and workplace adaptations, the findings suggest that eliciting and challenging specific cancer and treatment-related perceptions may facilitate RTW. Copyright © 2012 John Wiley & Sons, Ltd.
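
    For readers unfamiliar with the hazard-ratio machinery used above, the sketch below fits a Cox proportional hazards model for time to return to work with the lifelines package. The column names and the data frame are hypothetical stand-ins, not the study's dataset.

```python
# Hedged sketch: Cox proportional hazards model for time to return to work (weeks).
# The dataframe columns are hypothetical stand-ins for the study's predictors.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 290
df = pd.DataFrame({
    "weeks_to_rtw": rng.exponential(scale=20, size=n).round(1),  # follow-up time
    "returned": rng.integers(0, 2, size=n),                      # 1 = returned to work, 0 = censored
    "full_time": rng.integers(0, 2, size=n),
    "perceived_control": rng.normal(size=n),
    "physical_functioning": rng.normal(size=n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="weeks_to_rtw", event_col="returned")
cph.print_summary()          # hazard ratios (exp(coef)) per predictor, analogous to the study's HRs
```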

  14. Predictive Modeling in Race Walking

    Directory of Open Access Journals (Sweden)

    Krzysztof Wiktorowicz

    2015-01-01

    Full Text Available This paper presents the use of linear and nonlinear multivariable models as tools to support the training process of race walkers. These models are calculated using data collected from race walkers’ training events and they are used to predict the result over a 3 km race based on training loads. The material consists of 122 training plans for 21 athletes. In order to choose the best model, the leave-one-out cross-validation method is used. The main contribution of the paper is to propose nonlinear modifications of linear models in order to achieve a smaller prediction error. It is shown that the best model is a modified LASSO regression with quadratic terms in the nonlinear part. This model has the smallest prediction error and a simplified structure obtained by eliminating some of the predictors.
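
    A minimal sketch of the modelling idea, assuming synthetic training-load data: a LASSO regression augmented with quadratic terms, evaluated by leave-one-out cross-validation as in the paper's model selection. The feature names, penalty value and response are placeholders.

```python
# Hedged sketch: LASSO with quadratic terms, evaluated by leave-one-out cross-validation.
# The training-load features and 3 km results are synthetic placeholders.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(122, 5))                          # 122 training plans x 5 load variables
y = 12.0 - 0.4 * X[:, 0] + 0.1 * X[:, 1] ** 2 + rng.normal(scale=0.2, size=122)  # 3 km time [min]

model = make_pipeline(
    PolynomialFeatures(degree=2, include_bias=False),  # adds the quadratic terms
    StandardScaler(),
    Lasso(alpha=0.05, max_iter=50000),                 # sparsity drops some predictors
)
scores = cross_val_score(model, X, y, cv=LeaveOneOut(), scoring="neg_mean_absolute_error")
print("LOOCV mean absolute error:", -scores.mean())
```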

  15. [Psychosocial factors at work and cardiovascular diseases: contribution of the Effort-Reward Imbalance model].

    Science.gov (United States)

    Niedhammer, I; Siegrist, J

    1998-11-01

    The effect of psychosocial factors at work on health, especially cardiovascular health, has given rise to growing concern in occupational epidemiology over the last few years. Two theoretical models, Karasek's model and the Effort-Reward Imbalance model, have been developed to evaluate psychosocial factors at work within specific conceptual frameworks in an attempt to take into account the serious methodological difficulties inherent in the evaluation of such factors. Karasek's model, the most widely used model, measures three factors: psychological demands, decision latitude and social support at work. Many studies have shown the predictive effects of these factors on cardiovascular diseases independently of well-known cardiovascular risk factors. The more recent Effort-Reward Imbalance model takes into account the role of individual coping characteristics, which was neglected in Karasek's model. The Effort-Reward Imbalance model focuses on the reciprocity of exchange in occupational life, where high-cost/low-gain conditions are considered particularly stressful. Three dimensions of rewards are distinguished: money, esteem and gratifications in terms of promotion prospects and job security. Some studies already indicate that high-effort/low-reward conditions are predictive of cardiovascular diseases.

  16. Adding propensity scores to pure prediction models fails to improve predictive performance

    Directory of Open Access Journals (Sweden)

    Amy S. Nowacki

    2013-08-01

    Full Text Available Background. Propensity score usage seems to be growing in popularity, leading researchers to question the possible role of propensity scores in prediction modeling, despite the lack of a theoretical rationale. It is suspected that such requests are due to the lack of differentiation regarding the goals of predictive modeling versus causal inference modeling. Therefore, the purpose of this study is to formally examine the effect of propensity scores on predictive performance. Our hypothesis is that a multivariable regression model that adjusts for all covariates will perform as well as or better than those models utilizing propensity scores with respect to model discrimination and calibration. Methods. The most commonly encountered statistical scenarios for medical prediction (logistic and proportional hazards regression) were used to investigate this research question. Random cross-validation was performed 500 times to correct for optimism. The multivariable regression models adjusting for all covariates were compared with models that included adjustment for or weighting with the propensity scores. The methods were compared based on three predictive performance measures: (1) concordance indices; (2) Brier scores; and (3) calibration curves. Results. Multivariable models adjusting for all covariates had the highest average concordance index, the lowest average Brier score, and the best calibration. Propensity score adjustment and inverse probability weighting models without adjustment for all covariates performed worse than full models and failed to improve predictive performance with full covariate adjustment. Conclusion. Propensity score techniques did not improve prediction performance measures beyond multivariable adjustment. Propensity scores are not recommended if the analytical goal is pure prediction modeling.
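
    The comparison the study formalizes can be sketched as follows: a logistic model adjusting for all covariates versus a model that replaces the covariates with an estimated propensity score, scored by concordance (AUC) and Brier score on synthetic data. This is an illustrative toy, not the authors' simulation design.

```python
# Hedged sketch: compare a full-covariate outcome model with a model that uses
# a treatment propensity score, using AUC and Brier score on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))
treat = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))                  # treatment depends on X
logit = 0.8 * treat + X[:, 1] - 0.5 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))                        # binary outcome

X_tr, X_te, t_tr, t_te, y_tr, y_te = train_test_split(X, treat, y, random_state=0)

# Model 1: adjust for all covariates plus treatment
full = LogisticRegression(max_iter=1000).fit(np.column_stack([X_tr, t_tr]), y_tr)
p_full = full.predict_proba(np.column_stack([X_te, t_te]))[:, 1]

# Model 2: replace covariates with an estimated propensity score
ps_model = LogisticRegression(max_iter=1000).fit(X_tr, t_tr)
ps_tr, ps_te = ps_model.predict_proba(X_tr)[:, 1], ps_model.predict_proba(X_te)[:, 1]
ps_only = LogisticRegression(max_iter=1000).fit(np.column_stack([ps_tr, t_tr]), y_tr)
p_ps = ps_only.predict_proba(np.column_stack([ps_te, t_te]))[:, 1]

for name, p in [("full covariates", p_full), ("propensity score", p_ps)]:
    print(f"{name}: AUC={roc_auc_score(y_te, p):.3f}, Brier={brier_score_loss(y_te, p):.3f}")
```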

  17. Model-free and model-based reward prediction errors in EEG.

    Science.gov (United States)

    Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy

    2018-05-24

    Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain. Copyright © 2018 Elsevier Inc. All rights reserved.

  18. Modelling of unsteady airfoil aerodynamics for the prediction of blade standstill vibrations

    DEFF Research Database (Denmark)

    Skrzypinski, Witold Robert; Gaunaa, Mac; Sørensen, Niels N.

    2012-01-01

    In the present work, CFD simulations of the DU96-W-180 airfoil at 26 and 24 deg. angles of attack were performed. 2D RANS and 3D DES computations with non-moving and prescribed motion airfoil suspensions were carried out. The openings of the lift coefficient loops predicted by CFD were different ... than those predicted by engineering models. The average lift slope of the loops from the 3D CFD had the opposite sign to the one from 2D CFD. Trying to model the 3D behaviour with the engineering models proved difficult. The disagreement between the 2D CFD, 3D CFD and the engineering models indicates...

  19. A model predictive controller for the water level of nuclear steam generators

    International Nuclear Information System (INIS)

    Na, Man Gyun

    2001-01-01

    In this work, the model predictive control method was applied to a linear model and a nonlinear model of steam generators. The parameters of a linear steam generator model differ considerably with power level. The model predictive controller was designed for the linear steam generator model at a fixed power level. The controller designed at that fixed power level showed good performance at other power levels when only the input-weighting factor was changed. As the input-weighting factor increases, the relative stability generally increases as well. The steam generator has some nonlinear characteristics. Therefore, the proposed algorithm was also implemented for a nonlinear model of the nuclear steam generator to verify its real performance, and it again showed good performance. (author)

  20. Risk prediction model for knee pain in the Nottingham community: a Bayesian modelling approach.

    Science.gov (United States)

    Fernandes, G S; Bhattacharya, A; McWilliams, D F; Ingham, S L; Doherty, M; Zhang, W

    2017-03-20

    Twenty-five percent of the British population over the age of 50 years experiences knee pain. Knee pain can limit physical ability and cause distress and bears significant socioeconomic costs. The objectives of this study were to develop and validate the first risk prediction model for incident knee pain in the Nottingham community and validate this internally within the Nottingham cohort and externally within the Osteoarthritis Initiative (OAI) cohort. A total of 1822 participants from the Nottingham community who were at risk for knee pain were followed for 12 years. Of this cohort, two-thirds (n = 1203) were used to develop the risk prediction model, and one-third (n = 619) were used to validate the model. Incident knee pain was defined as pain on most days for at least 1 month in the past 12 months. Predictors were age, sex, body mass index, pain elsewhere, prior knee injury and knee alignment. A Bayesian logistic regression model was used to determine the probability of an OR >1. The Hosmer-Lemeshow χ² statistic (HLS) was used for calibration, and ROC curve analysis was used for discrimination. The OAI cohort from the United States was also used to examine the performance of the model. A risk prediction model for knee pain incidence was developed using a Bayesian approach. The model had good calibration, with an HLS of 7.17 (p = 0.52) and moderate discriminative ability (ROC 0.70) in the community. Individual scenarios are given using the model. However, the model had poor calibration (HLS 5866.28, p < 0.001) in the external OAI cohort. We have developed and validated the first risk prediction model for knee pain, regardless of underlying structural changes of knee osteoarthritis, in the community using a Bayesian modelling approach. The model appears to work well in a community-based population but not in individuals with a higher risk for knee osteoarthritis, and it may provide a convenient tool for use in primary care to predict the risk of knee pain in the general population.
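
    A minimal sketch of a Bayesian logistic regression of the kind described above, written with the PyMC library on synthetic data; the predictors, priors and the posterior summary are assumptions for illustration and do not reproduce the study's model.

```python
# Hedged sketch: Bayesian logistic regression for incident knee pain with PyMC.
# Predictors and data are synthetic placeholders, not the Nottingham cohort.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 4))                       # e.g. age, BMI, prior injury, pain elsewhere (standardized)
true_beta = np.array([0.6, 0.4, 0.3, 0.2])
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ true_beta - 1.0))))

with pm.Model():
    alpha = pm.Normal("alpha", 0.0, 2.0)
    beta = pm.Normal("beta", 0.0, 1.0, shape=4)
    p = pm.math.sigmoid(alpha + pm.math.dot(X, beta))
    pm.Bernoulli("knee_pain", p=p, observed=y)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=0, progressbar=False)

# Posterior probability that each odds ratio exceeds 1 (OR > 1), analogous to the paper's criterion
post_beta = idata.posterior["beta"].values.reshape(-1, 4)
print((post_beta > 0).mean(axis=0))
```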

  1. The string prediction models as an invariants of time series in forex market

    OpenAIRE

    Richard Pincak; Marian Repasan

    2011-01-01

    In this paper we apply a new approach of string theory to the real financial market. It is a direct extension and application of the work [1] to the prediction of prices. The models are constructed around the idea of prediction models based on string invariants (PMBSI). The performance of PMBSI is compared to support vector machines (SVM) and artificial neural networks (ANN) on an artificial and a financial time series. A brief overview of the results and analysis is given. The first model is ...

  2. Nonlinear chaotic model for predicting storm surges

    Directory of Open Access Journals (Sweden)

    M. Siek

    2010-09-01

    Full Text Available This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by the adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables. We implemented the univariate and multivariate chaotic models with direct and multi-steps prediction techniques and optimized these models using an exhaustive search method. The built models were tested for predicting storm surge dynamics for different stormy conditions in the North Sea, and are compared to neural network models. The results show that the chaotic models can generally provide reliable and accurate short-term storm surge predictions.
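
    The core mechanism, delay-coordinate embedding followed by prediction from the nearest dynamical neighbours, can be sketched as below. The embedding dimension, delay, neighbour count and the synthetic "surge" series are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch: local-model prediction in a reconstructed phase space.
# Embed the series with delay coordinates, find nearest neighbours of the
# current state, and average their one-step-ahead evolution.
import numpy as np

def embed(x, dim, tau):
    """Delay-coordinate embedding: rows are [x(t), x(t-tau), ..., x(t-(dim-1)*tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[(dim - 1 - k) * tau : (dim - 1 - k) * tau + n] for k in range(dim)])

def local_predict(x, dim=3, tau=2, k=5):
    vectors = embed(x, dim, tau)
    query = vectors[-1]                       # current state
    candidates = vectors[:-1]                 # states whose successor is known
    dists = np.linalg.norm(candidates - query, axis=1)
    nn = np.argsort(dists)[:k]                # k dynamical neighbours
    successors = x[(dim - 1) * tau + nn + 1]  # their one-step-ahead observations
    return successors.mean()

t = np.linspace(0, 60, 1200)
surge = np.sin(t) + 0.5 * np.sin(2.3 * t)     # synthetic stand-in for a surge record
print("one-step prediction:", local_predict(surge))
```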

  3. Extracting falsifiable predictions from sloppy models.

    Science.gov (United States)

    Gutenkunst, Ryan N; Casey, Fergal P; Waterfall, Joshua J; Myers, Christopher R; Sethna, James P

    2007-12-01

    Successful predictions are among the most compelling validations of any model. Extracting falsifiable predictions from nonlinear multiparameter models is complicated by the fact that such models are commonly sloppy, possessing sensitivities to different parameter combinations that range over many decades. Here we discuss how sloppiness affects the sorts of data that best constrain model predictions, makes linear uncertainty approximations dangerous, and introduces computational difficulties in Monte-Carlo uncertainty analysis. We also present a useful test problem and suggest refinements to the standards by which models are communicated.

  4. A Comparison of Freeway Work Zone Capacity Prediction Models

    NARCIS (Netherlands)

    Zheng, N.; Hegyi, A.; Hoogendoorn, S.P.; Van Zuylen, H.J.; Peters, D.

    2011-01-01

    To keep freeway networks in good condition, road works such as maintenance and reconstruction are carried out regularly. The resulting work zones, including the related traffic management measures, change the traffic capacity of the infrastructure, which determines the travel time for

  5. Bayesian Calibration, Validation and Uncertainty Quantification for Predictive Modelling of Tumour Growth: A Tutorial.

    Science.gov (United States)

    Collis, Joe; Connor, Anthony J; Paczkowski, Marcin; Kannan, Pavitra; Pitt-Francis, Joe; Byrne, Helen M; Hubbard, Matthew E

    2017-04-01

    In this work, we present a pedagogical tumour growth example, in which we apply calibration and validation techniques to an uncertain, Gompertzian model of tumour spheroid growth. The key contribution of this article is the discussion and application of these methods (that are not commonly employed in the field of cancer modelling) in the context of a simple model, whose deterministic analogue is widely known within the community. In the course of the example, we calibrate the model against experimental data that are subject to measurement errors, and then validate the resulting uncertain model predictions. We then analyse the sensitivity of the model predictions to the underlying measurement model. Finally, we propose an elementary learning approach for tuning a threshold parameter in the validation procedure in order to maximize predictive accuracy of our validated model.
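
    As a down-scaled sketch of the deterministic core of such a calibration, the snippet below fits a Gompertz spheroid-growth curve to noisy synthetic volume measurements with SciPy; the tutorial's Bayesian calibration would replace this least-squares point estimate with posterior sampling and explicit uncertainty quantification. Parameter values and noise level are assumptions.

```python
# Hedged sketch: calibrate a Gompertz tumour-spheroid growth curve
# V(t) = K * exp(log(V0/K) * exp(-a*t)) against noisy synthetic measurements.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, v0, k, a):
    return k * np.exp(np.log(v0 / k) * np.exp(-a * t))

rng = np.random.default_rng(0)
t_obs = np.linspace(0, 20, 15)                               # days
v_true = gompertz(t_obs, v0=0.05, k=2.0, a=0.3)              # mm^3, illustrative parameters
v_obs = v_true * (1 + 0.05 * rng.normal(size=t_obs.size))    # multiplicative measurement error

popt, pcov = curve_fit(gompertz, t_obs, v_obs, p0=[0.1, 1.0, 0.1])
print("calibrated (V0, K, a):", popt)
print("parameter std devs:", np.sqrt(np.diag(pcov)))
```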

  6. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models 1: repeating earthquakes

    Science.gov (United States)

    Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki

    2012-01-01

    The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events is predicted quite well by fixed slip and fixed recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.

  7. Accelerating the Global Nested Air Quality Prediction Modeling System (GNAQPMS) model on Intel Xeon Phi processors

    OpenAIRE

    Wang, Hui; Chen, Huansheng; Wu, Qizhong; Lin, Junming; Chen, Xueshun; Xie, Xinwei; Wang, Rongrong; Tang, Xiao; Wang, Zifa

    2017-01-01

    The GNAQPMS model is the global version of the Nested Air Quality Prediction Modelling System (NAQPMS), which is a multi-scale chemical transport model used for air quality forecast and atmospheric environmental research. In this study, we present our work of porting and optimizing the GNAQPMS model on the second generation Intel Xeon Phi processor codename “Knights Landing” (KNL). Compared with the first generation Xeon Phi coprocessor, KNL introduced many new hardware features such as a boo...

  8. EFFICIENT PREDICTIVE MODELLING FOR ARCHAEOLOGICAL RESEARCH

    OpenAIRE

    Balla, A.; Pavlogeorgatos, G.; Tsiafakis, D.; Pavlidis, G.

    2014-01-01

    The study presents a general methodology for designing, developing and implementing predictive modelling for identifying areas of archaeological interest. The methodology is based on documented archaeological data and geographical factors, geospatial analysis and predictive modelling, and has been applied to the identification of possible Macedonian tombs’ locations in Northern Greece. The model was tested extensively and the results were validated using a commonly used predictive gain, which...

  9. Deception and Cognitive Load: Expanding Our Horizon with a Working Memory Model.

    Science.gov (United States)

    Sporer, Siegfried L

    2016-01-01

    Recently, studies on deception and its detection have increased dramatically. Many of these studies rely on the "cognitive load approach" as the sole explanatory principle to understand deception. These studies have been exclusively on lies about negative actions (usually lies of suspects of [mock] crimes). Instead, we need to re-focus more generally on the cognitive processes involved in generating both lies and truths, not just on manipulations of cognitive load. Using Baddeley's (2000, 2007, 2012) working memory model, which integrates verbal and visual processes in working memory with retrieval from long-term memory and control of action, not only verbal content cues but also nonverbal, paraverbal, and linguistic cues can be investigated within a single framework. The proposed model considers long-term semantic, episodic and autobiographical memory and their connections with working memory and action. It also incorporates ironic processes of mental control (Wegner, 1994, 2009), the role of scripts and schemata and retrieval cues and retrieval processes. Specific predictions of the model are outlined and support from selective studies is presented. The model is applicable to different types of reports, particularly about lies and truths about complex events, and to different modes of production (oral, hand-written, typed). Predictions regarding several moderator variables and methods to investigate them are proposed.

  10. Spatial Economics Model Predicting Transport Volume

    Directory of Open Access Journals (Sweden)

    Lu Bo

    2016-10-01

    Full Text Available It is extremely important to predict logistics requirements in a scientific and rational way. However, in recent years improvements to prediction methods have been limited, and traditional statistical prediction methods suffer from low precision and poor interpretability: they can neither guarantee the generalization ability of the prediction model theoretically nor explain the models effectively. Therefore, combining the theories of spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, the study identifies the leading industries that generate large cargo volumes, and further predicts the static logistics generation of Zhuanghe and its hinterland. By integrating various factors that can affect regional logistics requirements, this study established a logistics requirements potential model based on spatial economic principles, and expanded logistics requirements prediction from purely statistical principles to the area of spatial and regional economics.

  11. Long-term orbit prediction for China's Tiangong-1 spacecraft based on mean atmosphere model

    Science.gov (United States)

    Tang, Jingshi; Liu, Lin; Miao, Manqian

    Tiangong-1 is China's test module for its future space station. It has gone through three successful rendezvous and dockings with Shenzhou spacecraft from 2011 to 2013. For the long-term management and maintenance, the orbit sometimes needs to be predicted for a long period of time. As Tiangong-1 works in a low-Earth orbit with an altitude of about 300-400 km, the error in the a priori atmosphere model contributes significantly to the rapid increase of the predicted orbit error. When the orbit is predicted for 10-20 days, the error in the a priori atmosphere model, if not properly corrected, could induce a semi-major axis error of up to a few kilometers and an overall position error of up to several thousand kilometers, respectively. In this work, we use a mean atmosphere model averaged from NRLMSIS00. The a priori reference mean density can be corrected during precise orbit determination (POD). For applications in the long-term orbit prediction, the observations are first accumulated. With a sufficiently long period of observations, we are able to obtain a series of diurnal mean densities. This series captures the recent variation of the atmospheric density and can be analyzed for its various periodic components. After being properly fitted, the mean density can be predicted and then applied in the orbit prediction. We show that the densities predicted with this approach can serve to increase the accuracy of the predicted orbit. In several 20-day prediction tests, most predicted orbits show semi-major axis errors better than 700 m and overall position errors better than 600 km.
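
    The "fit and extrapolate the mean density" step can be sketched as a least-squares fit of a trend plus a few periodic terms to a daily mean-density series, followed by extrapolation over the prediction span. The periods, units and the synthetic series below are placeholders, not the actual NRLMSIS-derived densities.

```python
# Hedged sketch: fit a daily mean-density series with a linear trend plus periodic
# terms (e.g. a ~27-day solar-rotation-like component) and extrapolate it.
import numpy as np

days = np.arange(120.0)                                  # 120 days of "observed" diurnal mean density
rng = np.random.default_rng(0)
rho = (3.0 + 0.002 * days                                # slow trend (placeholder units)
       + 0.3 * np.sin(2 * np.pi * days / 27.0 + 0.5)     # solar-rotation-like modulation
       + 0.05 * rng.normal(size=days.size))

periods = [27.0, 13.5]                                   # assumed periodic components (days)
def design(t):
    cols = [np.ones_like(t), t]
    for P in periods:
        cols += [np.sin(2 * np.pi * t / P), np.cos(2 * np.pi * t / P)]
    return np.column_stack(cols)

coef, *_ = np.linalg.lstsq(design(days), rho, rcond=None)
future = np.arange(120.0, 140.0)                         # 20-day prediction span
rho_pred = design(future) @ coef                         # predicted mean density for orbit prediction
print(rho_pred[:5])
```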

  12. Adaptive Neuro-Fuzzy Inference System Models for Force Prediction of a Mechatronic Flexible Structure

    DEFF Research Database (Denmark)

    Achiche, S.; Shlechtingen, M.; Raison, M.

    2016-01-01

    This paper presents the results obtained from a research work investigating the performance of different Adaptive Neuro-Fuzzy Inference System (ANFIS) models developed to predict excitation forces on a dynamically loaded flexible structure. For this purpose, a flexible structure is equipped ... obtained from applying a random excitation force on the flexible structure. The performance of the developed models is evaluated by analyzing the prediction capabilities based on a normalized prediction error. The frequency domain is considered to analyze the similarity of the frequencies in the predicted ... of the sampling frequency and sensor location on the model performance is investigated. The results obtained in this paper show that ANFIS models can be used to set up reliable force predictors for dynamically loaded flexible structures, when a certain degree of inaccuracy is accepted. Furthermore, the comparison...

  13. A model to predict stream water temperature across the conterminous USA

    Science.gov (United States)

    Catalina Segura; Peter Caldwell; Ge Sun; Steve McNulty; Yang Zhang

    2014-01-01

    Stream water temperature (ts) is a critical water quality parameter for aquatic ecosystems. However, ts records are sparse or nonexistent in many river systems. In this work, we present an empirical model to predict ts at the site scale across the USA. The model, derived using data from 171 reference sites selected from the Geospatial Attributes of Gages for Evaluating...

  14. Prediction of multi-wake problems using an improved Jensen wake model

    DEFF Research Database (Denmark)

    Tian, Linlin; Zhu, Wei Jun; Shen, Wen Zhong

    2017-01-01

    The improved analytical wake model, named the 2D_k Jensen model (proposed to overcome some shortcomings of the classical Jensen wake model), is applied and validated in this work for wind turbine multi-wake predictions. Different from the original Jensen model, this newly developed 2D_k Jensen ... model uses a cosine shape instead of the top-hat shape for the velocity deficit in the wake, and treats the wake decay rate as a variable related to the ambient turbulence as well as the rotor-generated turbulence. Coupled with four different multi-wake combination models, the 2D_k Jensen model ... is assessed through (1) simulating the interaction of two wakes under full-wake and partial-wake conditions and (2) predicting the power production in the Horns Rev wind farm for different wake sectors around two different wind directions. Through comparisons with field measurements, results from Large Eddy
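
    A minimal sketch of the wake-deficit idea, assuming illustrative constants: a Jensen-type expanding wake whose centreline deficit follows the classical Jensen expression, combined with a cosine radial distribution instead of the top-hat shape. This is not the authors' exact 2D_k formulation (in particular, the wake decay rate is fixed here rather than tied to turbulence intensity).

```python
# Hedged sketch: a Jensen-type wake deficit with a cosine radial profile instead of
# the top-hat shape. Constants (Ct, wake decay rate k, rotor diameter D) are illustrative.
import numpy as np

def wake_deficit(x, r, D=80.0, Ct=0.8, k=0.075):
    """Fractional velocity deficit at downstream distance x and radial offset r."""
    rw = D / 2 + k * x                                           # linearly expanding wake radius
    centre = (1 - np.sqrt(1 - Ct)) / (1 + 2 * k * x / D) ** 2    # centreline deficit (classical Jensen)
    shape = np.where(np.abs(r) <= rw,
                     0.5 * (1 + np.cos(np.pi * r / rw)),         # cosine radial distribution
                     0.0)
    return centre * shape

# deficit felt by a rotor 5 diameters downstream, offset 20 m from the wake axis
print(wake_deficit(x=5 * 80.0, r=20.0))
```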

  15. A new constitutive model for prediction of impact rates response of polypropylene

    Directory of Open Access Journals (Sweden)

    Buckley C.P.

    2012-08-01

    Full Text Available This paper proposes a new constitutive model for predicting the impact rates response of polypropylene. Impact rates, as used here, refer to strain rates greater than 1000 s⁻¹. The model is a physically based, three-dimensional constitutive model which incorporates the contributions of the amorphous, crystalline, pseudo-amorphous and entanglement networks to the constitutive response of polypropylene. The model mathematics is based on the well-known Glass-Rubber model originally developed for glassy polymers, but the arguments have herein been extended to semi-crystalline polymers. In order to predict the impact rates behaviour of polypropylene, the model exploits the well-known framework of multi-process yielding of polymers. This work argues that two dominant viscoelastic relaxation processes – the alpha- and beta-processes – can be associated with the yield responses of polypropylene observed in low-rate-dominated and impact-rate-dominated loading regimes. Compression test data on polypropylene have been used to validate the model. The study found that the model predicts the experimentally observed nonlinear rate-dependent impact response of polypropylene quite well.

  16. Error analysis of short term wind power prediction models

    International Nuclear Information System (INIS)

    De Giorgi, Maria Grazia; Ficarella, Antonio; Tarantino, Marco

    2011-01-01

    The integration of wind farms in power networks has become an important problem. This is because the electricity produced cannot be stored, owing to the high cost of storage, and electricity production must follow market demand. Short- to long-range wind forecasting over different periods of time is becoming an important process for the management of wind farms. Time series modelling of wind speeds is based upon the valid assumption that all the causative factors are implicitly accounted for in the sequence of occurrence of the process itself. Hence time series modelling is equivalent to physical modelling. Auto Regressive Moving Average (ARMA) models, which perform a linear mapping between inputs and outputs, and Artificial Neural Networks (ANNs) and Adaptive Neuro-Fuzzy Inference Systems (ANFIS), which perform a non-linear mapping, provide a robust approach to wind power prediction. In this work, these models are developed in order to forecast the power production of a wind farm with three wind turbines, using real load data and comparing different prediction time periods. This comparative analysis considers, for the first time, various forecasting methods and time horizons, together with a detailed performance analysis focused on the normalised mean error and its statistical distribution, in order to identify forecasting methods whose errors fall within a narrower distribution and which are therefore less likely to produce large prediction errors. (author)
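
    A hedged sketch of the linear branch of this comparison: an ARMA-type model fitted with statsmodels to a synthetic per-unit power series, with a short-horizon forecast and a normalised mean error. The model order, data and normalisation basis are placeholders.

```python
# Hedged sketch: ARMA-type forecast of wind power with statsmodels and a
# normalised mean error over the forecast horizon. Data and orders are placeholders.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
n = 400
noise = rng.normal(scale=0.1, size=n)
power = np.empty(n)
power[0] = 0.5
for t in range(1, n):                      # synthetic, autocorrelated power series (per-unit)
    power[t] = 0.5 + 0.85 * (power[t - 1] - 0.5) + noise[t]

train, test = power[:-6], power[-6:]
res = ARIMA(train, order=(2, 0, 1)).fit()  # ARMA(2,1) on the per-unit power signal
forecast = res.forecast(steps=6)

rated_power = 1.0                          # normalisation basis (per-unit here)
nme = np.mean(np.abs(forecast - test)) / rated_power
print("6-step forecast:", np.round(forecast, 3), "normalised mean error:", round(nme, 3))
```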

  17. Error analysis of short term wind power prediction models

    Energy Technology Data Exchange (ETDEWEB)

    De Giorgi, Maria Grazia; Ficarella, Antonio; Tarantino, Marco [Dipartimento di Ingegneria dell' Innovazione, Universita del Salento, Via per Monteroni, 73100 Lecce (Italy)

    2011-04-15

    The integration of wind farms in power networks has become an important problem. This is because the electricity produced cannot be stored, owing to the high cost of storage, and electricity production must follow market demand. Short- to long-range wind forecasting over different periods of time is becoming an important process for the management of wind farms. Time series modelling of wind speeds is based upon the valid assumption that all the causative factors are implicitly accounted for in the sequence of occurrence of the process itself. Hence time series modelling is equivalent to physical modelling. Auto Regressive Moving Average (ARMA) models, which perform a linear mapping between inputs and outputs, and Artificial Neural Networks (ANNs) and Adaptive Neuro-Fuzzy Inference Systems (ANFIS), which perform a non-linear mapping, provide a robust approach to wind power prediction. In this work, these models are developed in order to forecast the power production of a wind farm with three wind turbines, using real load data and comparing different prediction time periods. This comparative analysis considers, for the first time, various forecasting methods and time horizons, together with a detailed performance analysis focused on the normalised mean error and its statistical distribution, in order to identify forecasting methods whose errors fall within a narrower distribution and which are therefore less likely to produce large prediction errors. (author)

  18. Work characteristics predict the development of multi-site musculoskeletal pain

    NARCIS (Netherlands)

    Oakman, J.; Wind, A. de; Heuvel, S.G. van den; Beek, A.J. van der

    2017-01-01

    Purpose. Musculoskeletal pain in more than one body region is common and a barrier to sustaining employment. We aimed to examine whether work characteristics predict the development of multi-site pain (MSP), and to determine differences in work-related predictors between age groups. Methods. This

  19. Do psychosocial work conditions predict risk of disability pensioning? An analysis of register-based outcomes using pooled data on 40,554 observations.

    Science.gov (United States)

    Clausen, Thomas; Burr, Hermann; Borg, Vilhelm

    2014-06-01

    To investigate whether high psychosocial job demands (quantitative demands and work pace) and low psychosocial job resources (influence at work and quality of leadership) predicted risk of disability pensioning among employees in four occupational groups--employees working with customers, employees working with clients, office workers and manual workers--in line with the propositions of the Job Demands-Resources (JD-R) model. Survey data from 40,554 individuals were linked to the DREAM register containing information on payments of disability pension. Using multi-adjusted Cox regression, observations were followed in the DREAM register to assess risk of disability pensioning. Average follow-up time was 5.9 years (SD=3.0). Low levels of influence at work predicted an increased risk of disability pensioning, and medium levels of quantitative demands predicted a decreased risk of disability pensioning in the study population. We found significant interaction effects between job demands and job resources, as the combination of low quality of leadership and high job demands predicted the highest rate of disability pensioning. Further analyses showed some, but no statistically significant, differences between the four occupational groups in the associations between job demands, job resources and risk of disability pensioning. The study showed that psychosocial job demands and job resources predicted risk of disability pensioning. The direction of some of the observed associations countered the expectations of the JD-R model, and the findings of the present study therefore imply that associations between job demands, job resources and adverse labour market outcomes are more complex than conceptualised in the JD-R model. © 2014 the Nordic Societies of Public Health.

  20. Work characteristics predict the development of multi-site musculoskeletal pain.

    Science.gov (United States)

    Oakman, Jodi; de Wind, Astrid; van den Heuvel, Swenne G; van der Beek, Allard J

    2017-10-01

    Musculoskeletal pain in more than one body region is common and a barrier to sustaining employment. We aimed to examine whether work characteristics predict the development of multi-site pain (MSP), and to determine differences in work-related predictors between age groups. This study is based on 5136 employees from the Study on Transitions in Employment, Ability and Motivation (STREAM) who reported no MSP at baseline. Measures included physical, emotional, mental, and psychological job demands, social support and autonomy. Predictors of MSP were studied by logistic regression analyses. Univariate and multivariate analyses with age stratification (45-49, 50-54, 55-59, and 60-64 years) were done to explore differences between age groups. All work characteristics with the exception of autonomy were predictive of the development of MSP, with odds ratios varying from 1.21 (95% CI 1.04-1.40) for mental job demands to 1.63 (95% CI 1.43-1.86) for physical job demands. No clear pattern of age-related differences in the predictors of MSP emerged, with the exception of social support, which was predictive of MSP developing in all age groups except for the age group 60-64 years. Adverse physical and psychosocial work characteristics are associated with MSP. Organisations need to comprehensively assess work environments to ensure that all relevant workplace hazards, physical and psychosocial, are identified and then controlled for across all age groups.

  1. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    Full Text Available We propose a weather prediction model in this article based on a neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the “fuzzy rule-based neural network”, which simulates sequential relations among fuzzy sets using an artificial neural network; and the second part is the “neural fuzzy inference system”, which is based on the first part but can learn new fuzzy rules from the previous ones according to the algorithm we proposed. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. The need for accurate weather prediction is apparent when considering its benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the “accurate” prediction results meaningless, and the numerical prediction model is often complex and time-consuming. By applying this model to a precipitation prediction problem, we obtain more accurate precipitation predictions with simpler methods than the complex numerical forecasting model, which occupies large computational resources, is time-consuming and has a low predictive accuracy rate. Accordingly, we achieve more accurate precipitation predictions than traditional artificial neural networks, which have low predictive accuracy.

  2. Incorporating uncertainty in predictive species distribution modelling.

    Science.gov (United States)

    Beale, Colin M; Lennon, Jack J

    2012-01-19

    Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.

  3. Combining multiple models to generate consensus: Application to radiation-induced pneumonitis prediction

    Energy Technology Data Exchange (ETDEWEB)

    Das, Shiva K.; Chen Shifeng; Deasy, Joseph O.; Zhou Sumin; Yin Fangfang; Marks, Lawrence B. [Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27710 (United States); Department of Radiation Oncology, Washington University School of Medicine, St. Louis, Missouri 63110 (United States); Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27710 (United States); Department of Radiation Oncology, University of North Carolina School of Medicine, Chapel Hill, North Carolina 27599 (United States)

    2008-11-15

    The fusion of predictions from disparate models has been used in several fields to obtain a more realistic and robust estimate of the ''ground truth'' by allowing the models to reinforce each other when consensus exists, or, conversely, negate each other when there is no consensus. Fusion has been shown to be most effective when the models have some complementary strengths arising from different approaches. In this work, we fuse the results from four common but methodologically different nonlinear multivariate models (Decision Trees, Neural Networks, Support Vector Machines, Self-Organizing Maps) that were trained to predict radiation-induced pneumonitis risk on a database of 219 lung cancer patients treated with radiotherapy (34 with Grade 2+ postradiotherapy pneumonitis). Each model independently incorporated a small number of features from the available set of dose and nondose patient variables to predict pneumonitis; no two models had all features in common. Fusion was achieved by simple averaging of the predictions for each patient from all four models. Since a model's prediction for a patient can be dependent on the patient training set used to build the model, the average of several different predictions from each model was used in the fusion (predictions were made by repeatedly testing each patient with a model built from different cross-validation training sets that excluded the patient being tested). The area under the receiver operating characteristics curve for the fused cross-validated results was 0.79, with lower variance than the individual component models. From the fusion, five features were extracted as the consensus among all four models in predicting radiation pneumonitis. Arranged in order of importance, the features are (1) chemotherapy; (2) equivalent uniform dose (EUD) for exponent a=1.2 to 3; (3) EUD for a=0.5 to 1.2, lung volume receiving >20-30 Gy; (4) female sex; and (5) squamous cell histology. To facilitate
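
    The consensus-by-averaging idea can be illustrated as follows. This sketch uses three common scikit-learn classifiers rather than the study's exact four models or clinical features, and simply averages their out-of-fold predicted probabilities before scoring by AUC.

```python
# Fusion by averaging cross-validated risk estimates from different classifiers.
# Data and model choices are illustrative stand-ins, not the study's dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=219, n_features=6,
                           weights=[0.85, 0.15], random_state=0)

models = [
    DecisionTreeClassifier(max_depth=3, random_state=0),
    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
    SVC(probability=True, random_state=0),
]

# Out-of-fold predicted probabilities for each model, then a simple average.
probs = [cross_val_predict(m, X, y, cv=5, method="predict_proba")[:, 1]
         for m in models]
fused = np.mean(probs, axis=0)

for name, p in zip(["tree", "mlp", "svm", "fused"], probs + [fused]):
    print(name, round(roc_auc_score(y, p), 3))
```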

  4. Predictive user modeling with actionable attributes

    NARCIS (Netherlands)

    Zliobaite, I.; Pechenizkiy, M.

    2013-01-01

    Different machine learning techniques have been proposed and used for modeling individual and group user needs, interests and preferences. In the traditional predictive modeling instances are described by observable variables, called attributes. The goal is to learn a model for predicting the target

  5. A systematic investigation of computation models for predicting Adverse Drug Reactions (ADRs).

    Science.gov (United States)

    Kuang, Qifan; Wang, MinQi; Li, Rong; Dong, YongCheng; Li, Yizhou; Li, Menglong

    2014-01-01

    Early and accurate identification of adverse drug reactions (ADRs) is critically important for drug development and clinical safety. Computer-aided prediction of ADRs has attracted increasing attention in recent years, and many computational models have been proposed. However, because of the lack of systematic analysis and comparison of the different computational models, there remain limitations in designing more effective algorithms and selecting more useful features. There is therefore an urgent need to review and analyze previous computation models to obtain general conclusions that can provide useful guidance to construct more effective computational models to predict ADRs. In the current study, the main work is to compare and analyze the performance of existing computational methods to predict ADRs, by implementing and evaluating additional algorithms that have been earlier used for predicting drug targets. Our results indicated that topological and intrinsic features were complementary to an extent and the Jaccard coefficient had an important and general effect on the prediction of drug-ADR associations. By comparing the structure of each algorithm, final formulas of these algorithms were all converted to linear model in form, based on this finding we propose a new algorithm called the general weighted profile method and it yielded the best overall performance among the algorithms investigated in this paper. Several meaningful conclusions and useful findings regarding the prediction of ADRs are provided for selecting optimal features and algorithms.
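
    One possible reading of such a similarity-weighted profile scorer is sketched below; the binary drug-ADR matrix and the exact weighting scheme are illustrative assumptions, not the authors' implementation.

```python
# Jaccard-weighted profile scoring: a drug's score for an ADR is the
# similarity-weighted average of that ADR across the other drugs (toy data).
import numpy as np

A = np.array([  # rows: drugs, columns: ADRs (1 = known association)
    [1, 0, 1, 0, 1],
    [1, 1, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [1, 0, 1, 1, 0],
], dtype=float)

def jaccard(u, v):
    inter = np.minimum(u, v).sum()
    union = np.maximum(u, v).sum()
    return inter / union if union else 0.0

def weighted_profile_scores(A, i):
    """Score every ADR for drug i from the other drugs' profiles."""
    weights = np.array([jaccard(A[i], A[j]) if j != i else 0.0
                        for j in range(A.shape[0])])
    if weights.sum() == 0:
        return np.zeros(A.shape[1])
    return weights @ A / weights.sum()

print(weighted_profile_scores(A, 0))  # candidate ADR scores for drug 0
```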

  6. RNA secondary structure prediction with pseudoknots: Contribution of algorithm versus energy model.

    Science.gov (United States)

    Jabbari, Hosna; Wark, Ian; Montemagno, Carlo

    2018-01-01

    RNA is a biopolymer with various applications inside the cell and in biotechnology. Structure of an RNA molecule mainly determines its function and is essential to guide nanostructure design. Since experimental structure determination is time-consuming and expensive, accurate computational prediction of RNA structure is of great importance. Prediction of RNA secondary structure is relatively simpler than its tertiary structure and provides information about its tertiary structure, therefore, RNA secondary structure prediction has received attention in the past decades. Numerous methods with different folding approaches have been developed for RNA secondary structure prediction. While methods for prediction of RNA pseudoknot-free structure (structures with no crossing base pairs) have greatly improved in terms of their accuracy, methods for prediction of RNA pseudoknotted secondary structure (structures with crossing base pairs) still have room for improvement. A long-standing question for improving the prediction accuracy of RNA pseudoknotted secondary structure is whether to focus on the prediction algorithm or the underlying energy model, as there is a trade-off on computational cost of the prediction algorithm versus the generality of the method. The aim of this work is to argue when comparing different methods for RNA pseudoknotted structure prediction, the combination of algorithm and energy model should be considered and a method should not be considered superior or inferior to others if they do not use the same scoring model. We demonstrate that while the folding approach is important in structure prediction, it is not the only important factor in prediction accuracy of a given method as the underlying energy model is also as of great value. Therefore we encourage researchers to pay particular attention in comparing methods with different energy models.

  7. Fretting wear damage of steam generator tubes and its prediction modeling

    International Nuclear Information System (INIS)

    Che Honglong; Lei Mingkai

    2013-01-01

    The steam generator is the key equipment for energy transfer in a nuclear power plant. Because the high-temperature, high-pressure fluid flows at high speed, the steam generator tubes are excited and vibrate, leading to severe fretting wear on the tubes and sometimes even to tube cracking. This paper introduces typical fretting wear cases, the results of corresponding simulated wear experiments, and the damage mechanism, which combines mechanical wear and erosion-corrosion. The work-rate model can give a reasonable life prediction for the steam generator tubes, and this predictive model has been used in nuclear power plant safety assessment. (authors)

  8. Goal orientation and work role performance: predicting adaptive and proactive work role performance through self-leadership strategies.

    Science.gov (United States)

    Marques-Quinteiro, Pedro; Curral, Luís Alberto

    2012-01-01

    This article explores the relationship between goal orientation, self-leadership dimensions, and adaptive and proactive work role performances. The authors hypothesize that learning orientation, in contrast to performance orientation, positively predicts proactive and adaptive work role performances and that this relationship is mediated by self-leadership behavior-focused strategies. Self-leadership natural reward strategies and thought pattern strategies are posited to moderate this relationship. Workers (N = 108) from a software company participated in this study. As expected, learning orientation did predict adaptive and proactive work role performance. Moreover, in the relationship between learning orientation and proactive work role performance through self-leadership behavior-focused strategies, a moderated mediation effect was found for self-leadership natural reward and thought pattern strategies. Finally, the results and their implications are discussed and future research directions are proposed.

  9. Prediction of tensile curves, at 673 K, of cold-worked and stress-relieved zircaloy-4 from creep data

    International Nuclear Information System (INIS)

    Povolo, F.; Buenos Aires Univ. Nacional; Marzocca, A.J.

    1986-01-01

    A constitutive creep equation, based on jog-drag cell-formation, is used to predict tensile curves from creep data obtained in the same material. The predicted tensile curves are compared with actual stress versus plastic strain data, obtained in both cold-worked and stress-relieved specimens. Finally, it is shown that the general features of the tensile curves, at low strain rates, are described by the creep model. (orig.)

  10. Hybrid model predictive control applied to switching control of burner load for a compact marine boiler design

    DEFF Research Database (Denmark)

    Solberg, Brian; Andersen, Palle; Maciejowski, Jan

    2008-01-01

    This paper discusses the application of hybrid model predictive control to control switching between different burner modes in a novel compact marine boiler design. A further purpose of the present work is to point out problems with finite horizon model predictive control applied to systems for w...

  11. A global high-resolution model experiment on the predictability of the atmosphere

    Science.gov (United States)

    Judt, F.

    2016-12-01

    Forecasting high-impact weather phenomena is one of the most important aspects of numerical weather prediction (NWP). Over the last couple of years, a tremendous increase in computing power has facilitated the advent of global convection-resolving NWP models, which allow for the seamless prediction of weather from local to planetary scales. Unfortunately, the predictability of specific meteorological phenomena in these models is not very well known. This raises questions about which forecast problems are potentially tractable, and what is the value of global convection-resolving model predictions for the end user. To address this issue, we use the Yellowstone supercomputer to conduct a global high-resolution predictability experiment with the recently developed Model for Prediction Across Scales (MPAS). The computing power of Yellowstone enables the model to run at a globally uniform resolution of 4 km with 55 vertical levels (>2 billion grid cells). These simulations, which require 3 million core-hours for the entire experiment, allow for the explicit treatment of organized deep moist convection (i.e., thunderstorm systems). Resolving organized deep moist convection alleviates grave limitations of previous predictability studies, which either used high-resolution limited-area models or global simulations with coarser grids and cumulus parameterization. By computing the error growth characteristics in a set of "identical twin" model runs, the experiment will clarify the intrinsic predictability limits of atmospheric phenomena on a wide range of scales, from severe thunderstorms to global-scale wind patterns that affect the distribution of tropical rainfall. Although a major task by itself, this study is intended to be exploratory work for a future predictability experiment going beyond of what has so far been feasible. We hope to use CISL's new Cheyenne supercomputer to conduct a similar predictability experiments on a global mesh with 1-2 km resolution. This

  12. Preliminary results from a four-working space, double-acting piston, Stirling engine controls model

    Science.gov (United States)

    Daniele, C. J.; Lorenzo, C. F.

    1980-01-01

    A four working space, double acting piston, Stirling engine simulation is being developed for controls studies. The development method is to construct two simulations, one for detailed fluid behavior, and a second model with simple fluid behaviour but containing the four working space aspects and engine inertias, validate these models separately, then upgrade the four working space model by incorporating the detailed fluid behaviour model for all four working spaces. The single working space (SWS) model contains the detailed fluid dynamics. It has seven control volumes in which continuity, energy, and pressure loss effects are simulated. Comparison of the SWS model with experimental data shows reasonable agreement in net power versus speed characteristics for various mean pressure levels in the working space. The four working space (FWS) model was built to observe the behaviour of the whole engine. The drive dynamics and vehicle inertia effects are simulated. To reduce calculation time, only three volumes are used in each working space and the gas temperatures are fixed (no energy equation). Comparison of the FWS model's predicted power with experimental data shows reasonable agreement. Since all four working spaces are simulated, the unique capabilities of the model are exercised to look at working fluid supply transients, short circuit transients, and piston ring leakage effects.

  13. Predicting The Exit Time Of Employees In An Organization Using Statistical Model

    Directory of Open Access Journals (Sweden)

    Ahmed Al Kuwaiti

    2015-08-01

    Full Text Available Employees are considered an asset to any organization, and each organization provides a better and more flexible working environment to retain its best and most resourceful workforce. As such, continuous efforts are made to avoid or delay the exit/withdrawal of employees from the organization. Human resource managers face the challenge of predicting the exit time of employees, and no precise model exists at present in the literature. This study was conducted to predict the probability of an employee's exit from an organization using an appropriate statistical model. Accordingly, the authors designed a model using the Additive Weibull distribution to predict the expected exit time of an employee in an organization. In addition, a shock model approach was also executed to check how well the Additive Weibull distribution suits an organization. The analytical results showed that as the inter-arrival time increases, the expected time for employees to exit also increases. The study concluded that the Additive Weibull distribution can be considered an alternative to the shock model approach for predicting the exit time of employees in an organization.
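
    As a numerical illustration (not the paper's fitted model), the expected exit time under an Additive Weibull survival function S(t) = exp(-(a·t^b + c·t^d)) can be obtained by integrating S(t) over t ≥ 0; all parameter values below are made up.

```python
# Expected exit time under an assumed Additive Weibull survival function.
import numpy as np
from scipy.integrate import quad

a, b, c, d = 0.05, 1.4, 0.01, 0.8   # hypothetical shape/scale parameters

def survival(t):
    # S(t) = exp(-(a*t**b + c*t**d)), the Additive Weibull survival function
    return np.exp(-(a * t**b + c * t**d))

expected_exit_time, _ = quad(survival, 0, np.inf)   # E[T] = integral of S(t)
print(f"Expected time to exit: {expected_exit_time:.1f} (same units as t)")
```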

  14. MJO prediction skill of the subseasonal-to-seasonal (S2S) prediction models

    Science.gov (United States)

    Son, S. W.; Lim, Y.; Kim, D.

    2017-12-01

    The Madden-Julian Oscillation (MJO), the dominant mode of tropical intraseasonal variability, provides the primary source of tropical and extratropical predictability on subseasonal to seasonal timescales. To better understand its predictability, this study conducts quantitative evaluation of MJO prediction skill in the state-of-the-art operational models participating in the subseasonal-to-seasonal (S2S) prediction project. Based on a bivariate correlation coefficient of 0.5, the S2S models exhibit MJO prediction skill ranging from 12 to 36 days. These prediction skills are affected by both the MJO amplitude and phase errors, the latter becoming more important with forecast lead times. Consistent with previous studies, the MJO events with stronger initial amplitude are typically better predicted. However, essentially no sensitivity to the initial MJO phase is observed. Overall MJO prediction skill and its inter-model spread are further related with the model mean biases in moisture fields and longwave cloud-radiation feedbacks. In most models, a dry bias quickly builds up in the deep tropics, especially across the Maritime Continent, weakening the horizontal moisture gradient. This likely dampens the organization and propagation of the MJO. Most S2S models also underestimate the longwave cloud-radiation feedbacks in the tropics, which may affect the maintenance of the MJO convective envelope. In general, the models with a smaller bias in horizontal moisture gradient and longwave cloud-radiation feedbacks show a higher MJO prediction skill, suggesting that improving those processes would enhance MJO prediction skill.
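
    A brief sketch of the bivariate correlation skill measure commonly applied to RMM1/RMM2 forecasts is given below; the observed and forecast series are random placeholders, and treating 0.5 as the skill threshold follows the abstract.

```python
# Bivariate correlation between observed and forecast RMM indices (toy data).
import numpy as np

rng = np.random.default_rng(1)
obs_rmm1, obs_rmm2 = rng.standard_normal(90), rng.standard_normal(90)
fc_rmm1 = obs_rmm1 + 0.5 * rng.standard_normal(90)   # imperfect forecast
fc_rmm2 = obs_rmm2 + 0.5 * rng.standard_normal(90)

def bivariate_correlation(a1, a2, f1, f2):
    num = np.sum(a1 * f1 + a2 * f2)
    den = np.sqrt(np.sum(a1**2 + a2**2)) * np.sqrt(np.sum(f1**2 + f2**2))
    return num / den

print(round(bivariate_correlation(obs_rmm1, obs_rmm2, fc_rmm1, fc_rmm2), 3))
# Skill is often reported as the longest lead time at which this value stays >= 0.5.
```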

  15. Evaluating Predictive Uncertainty of Hyporheic Exchange Modelling

    Science.gov (United States)

    Chow, R.; Bennett, J.; Dugge, J.; Wöhling, T.; Nowak, W.

    2017-12-01

    Hyporheic exchange is the interaction of water between rivers and groundwater, and is difficult to predict. One of the largest contributions to predictive uncertainty for hyporheic fluxes has been attributed to the representation of heterogeneous subsurface properties. This research aims to evaluate which aspect of the subsurface representation - the spatial distribution of hydrofacies or the model for local-scale (within-facies) heterogeneity - most influences the predictive uncertainty. Also, we seek to identify data types that help reduce this uncertainty best. For this investigation, we conduct a modelling study of the Steinlach River meander, in Southwest Germany. The Steinlach River meander is an experimental site established in 2010 to monitor hyporheic exchange at the meander scale. We use HydroGeoSphere, a fully integrated surface water-groundwater model, to model hyporheic exchange and to assess the predictive uncertainty of hyporheic exchange transit times (HETT). A highly parameterized complex model is built and treated as `virtual reality', which is in turn modelled with simpler subsurface parameterization schemes (Figure). Then, we conduct Monte-Carlo simulations with these models to estimate the predictive uncertainty. Results indicate that: uncertainty in HETT is relatively small for early times and increases with transit times; uncertainty from local-scale heterogeneity is negligible compared to uncertainty in the hydrofacies distribution; introducing more data to a poor model structure may reduce predictive variance, but does not reduce predictive bias; and hydraulic head observations alone cannot constrain the uncertainty of HETT, whereas an estimate of hyporheic exchange flux proves to be more effective at reducing this uncertainty. [Figure: approach for evaluating predictive model uncertainty - a conceptual model is first developed from the field investigations, and a complex model (`virtual reality') is then developed based on that conceptual model.]

  16. Comparison of prognostic models to predict the occurrence of colorectal cancer in asymptomatic individuals

    DEFF Research Database (Denmark)

    Smith, Todd; Muller, David C; Moons, Karel G M

    2018-01-01

    in the European Prospective Investigation into Cancer and Nutrition (EPIC) and the UK Biobank. The performance of the models to predict the occurrence of colorectal cancer within 5 or 10 years after study enrolment was assessed by discrimination (C-statistic) and calibration (plots of observed vs predicted......-based colorectal screening programmes. Future work should both evaluate this potential, through modelling and impact studies, and ascertain if further enhancement in their performance can be obtained....

  17. Increasing work-time influence: consequences for flexibility, variability, regularity and predictability.

    Science.gov (United States)

    Nabe-Nielsen, Kirsten; Garde, Anne Helene; Aust, Birgit; Diderichsen, Finn

    2012-01-01

    This quasi-experimental study investigated how an intervention aiming at increasing eldercare workers' influence on their working hours affected the flexibility, variability, regularity and predictability of the working hours. We used baseline (n = 296) and follow-up (n = 274) questionnaire data and interviews with intervention-group participants (n = 32). The work units in the intervention group designed their own intervention comprising either implementation of computerised self-scheduling (subgroup A), collection of information about the employees' work-time preferences by questionnaires (subgroup B), or discussion of working hours (subgroup C). Only computerised self-scheduling changed the working hours and the way they were planned. These changes implied more flexible but less regular working hours and an experience of less predictability and less continuity in the care of clients and in the co-operation with colleagues. In subgroup B and C, the participants ended up discussing the potential consequences of more work-time influence without actually implementing any changes. Employee work-time influence may buffer the adverse effects of shift work. However, our intervention study suggested that while increasing the individual flexibility, increasing work-time influence may also result in decreased regularity of the working hours and less continuity in the care of clients and co-operation with colleagues.

  18. Modeling, robust and distributed model predictive control for freeway networks

    NARCIS (Netherlands)

    Liu, S.

    2016-01-01

    In Model Predictive Control (MPC) for traffic networks, traffic models are crucial since they are used as prediction models for determining the optimal control actions. In order to reduce the computational complexity of MPC for traffic networks, macroscopic traffic models are often used instead of

  19. Staying Power of Churn Prediction Models

    NARCIS (Netherlands)

    Risselada, Hans; Verhoef, Peter C.; Bijmolt, Tammo H. A.

    In this paper, we study the staying power of various churn prediction models. Staying power is defined as the predictive performance of a model in a number of periods after the estimation period. We examine two methods, logit models and classification trees, both with and without applying a bagging

  20. Stability of a neural predictive controller scheme on a neural model

    DEFF Research Database (Denmark)

    Luther, Jim Benjamin; Sørensen, Paul Haase

    2009-01-01

    In previous works presenting various forms of neural-network-based predictive controllers, the main emphasis has been on the implementation aspects, i.e. the development of a robust optimization algorithm for the controller, which will be able to perform in real time. However, the stability issue has not been addressed specifically for these controllers. On the other hand, a number of results concerning the stability of receding horizon controllers on a nonlinear system exist. In this paper we present a proof of stability for a predictive controller controlling a neural network model. The resulting controller is tested on a nonlinear pneumatic servo system.

  1. Developing and implementing the use of predictive models for estimating water quality at Great Lakes beaches

    Science.gov (United States)

    Francy, Donna S.; Brady, Amie M.G.; Carvin, Rebecca B.; Corsi, Steven R.; Fuller, Lori M.; Harrison, John H.; Hayhurst, Brett A.; Lant, Jeremiah; Nevers, Meredith B.; Terrio, Paul J.; Zimmerman, Tammy M.

    2013-01-01

    Predictive models have been used at beaches to improve the timeliness and accuracy of recreational water-quality assessments over the most common current approach to water-quality monitoring, which relies on culturing fecal-indicator bacteria such as Escherichia coli (E. coli). Beach-specific predictive models use environmental and water-quality variables that are easily and quickly measured as surrogates to estimate concentrations of fecal-indicator bacteria or to provide the probability that a State recreational water-quality standard will be exceeded. When predictive models are used for beach closure or advisory decisions, they are referred to as “nowcasts.” During the recreational seasons of 2010-12, the U.S. Geological Survey (USGS), in cooperation with 23 local and State agencies, worked to improve existing nowcasts at 4 beaches, validate predictive models at another 38 beaches, and collect data for predictive-model development at 7 beaches throughout the Great Lakes. This report summarizes efforts to collect data and develop predictive models by multiple agencies and to compile existing information on the beaches and beach-monitoring programs into one comprehensive report. Local agencies measured E. coli concentrations and variables expected to affect E. coli concentrations such as wave height, turbidity, water temperature, and numbers of birds at the time of sampling. In addition to these field measurements, equipment was installed by the USGS or local agencies at or near several beaches to collect water-quality and meteorological measurements in near real time, including nearshore buoys, weather stations, and tributary staff gages and monitors. The USGS worked with local agencies to retrieve data from existing sources either manually or by use of tools designed specifically to compile and process data for predictive-model development. Predictive models were developed by use of linear regression and (or) partial least squares techniques for 42 beaches
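
    A nowcast-style regression of this kind can be sketched as below; the variable names, data, and exceedance threshold are illustrative assumptions rather than any agency's operational model.

```python
# Illustrative nowcast regression: predict log10 E. coli from easily measured
# surrogates, then check against an exceedance threshold (all values invented).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n = 120
turbidity = rng.uniform(1, 50, n)          # NTU
wave_height = rng.uniform(0.0, 1.5, n)     # m
rainfall_48h = rng.uniform(0, 40, n)       # mm
log_ecoli = (1.0 + 0.02 * turbidity + 0.6 * wave_height
             + 0.015 * rainfall_48h + rng.normal(0, 0.3, n))

X = np.column_stack([turbidity, wave_height, rainfall_48h])
model = LinearRegression().fit(X, log_ecoli)

# Nowcast for today's (hypothetical) conditions; 235 CFU/100 mL is used here
# purely as an illustrative exceedance threshold.
today = np.array([[30.0, 0.8, 12.0]])
pred = model.predict(today)[0]
print(f"Predicted log10 E. coli: {pred:.2f}, exceeds threshold: {pred > np.log10(235)}")
```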

  2. Noninvasive work of breathing improves prediction of post-extubation outcome.

    Science.gov (United States)

    Banner, Michael J; Euliano, Neil R; Martin, A Daniel; Al-Rawas, Nawar; Layon, A Joseph; Gabrielli, Andrea

    2012-02-01

    We hypothesized that non-invasively determined work of breathing per minute (WOB(N)/min) (esophageal balloon not required) may be useful for predicting extubation outcome, i.e., appropriate work of breathing values may be associated with extubation success, while inappropriately increased values may be associated with failure. Adult candidates for extubation were divided into a training set (n = 38) to determine threshold values of indices for assessing extubation and a prospective validation set (n = 59) to determine the predictive power of the threshold values for patients successfully extubated and those who failed extubation. All were evaluated for extubation during a spontaneous breathing trial (5 cmH(2)O pressure support ventilation, 5 cmH(2)O positive end expiratory pressure) using routine clinical practice standards. WOB(N)/min data were blinded to attending physicians. Area under the receiver operating characteristic curves (AUC), sensitivity, specificity, and positive and negative predictive values of all extubation indices were determined. AUC for WOB(N)/min was 0.96, significantly greater than that of the other indices. WOB(N)/min had a specificity of 0.83, the highest sensitivity at 0.96, positive predictive value at 0.84, and negative predictive value at 0.96 compared to all indices. For 95% of those successfully extubated, WOB(N)/min was ≤10 J/min. WOB(N)/min had the greatest overall predictive accuracy for extubation compared to traditional indices. WOB(N)/min warrants consideration for use in a complementary manner with spontaneous breathing pattern data for predicting extubation outcome.
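
    The evaluation of a threshold index such as WOB(N)/min ≤ 10 J/min can be illustrated as follows; the measurements and outcomes are fabricated solely to show how sensitivity, specificity, predictive values and AUC are computed.

```python
# Scoring a threshold-based extubation index against outcome (toy data).
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

wob = np.array([6.2, 8.1, 9.5, 7.4, 12.3, 14.8, 9.9, 11.2, 5.8, 13.5])  # J/min
success = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 0])    # 1 = extubation success

predicted_success = (wob <= 10).astype(int)            # hypothetical cutpoint
tn, fp, fn, tp = confusion_matrix(success, predicted_success).ravel()

print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("PPV:", tp / (tp + fp), "NPV:", tn / (tn + fn))
# Lower work of breathing should indicate success, so score with -wob.
print("AUC:", roc_auc_score(success, -wob))
```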

  3. Recurrent and Dynamic Models for Predicting Streaming Video Quality of Experience.

    Science.gov (United States)

    Bampis, Christos G; Li, Zhi; Katsavounidis, Ioannis; Bovik, Alan C

    2018-07-01

    Streaming video services represent a very large fraction of global bandwidth consumption. Due to the exploding demands of mobile video streaming services, coupled with limited bandwidth availability, video streams are often transmitted through unreliable, low-bandwidth networks. This unavoidably leads to two types of major streaming-related impairments: compression artifacts and/or rebuffering events. In streaming video applications, the end-user is a human observer; hence being able to predict the subjective Quality of Experience (QoE) associated with streamed videos could lead to the creation of perceptually optimized resource allocation strategies driving higher quality video streaming services. We propose a variety of recurrent dynamic neural networks that conduct continuous-time subjective QoE prediction. By formulating the problem as one of time-series forecasting, we train a variety of recurrent neural networks and non-linear autoregressive models to predict QoE using several recently developed subjective QoE databases. These models combine multiple, diverse neural network inputs, such as predicted video quality scores, rebuffering measurements, and data related to memory and its effects on human behavioral responses, using them to predict QoE on video streams impaired by both compression artifacts and rebuffering events. Instead of finding a single time-series prediction model, we propose and evaluate ways of aggregating different models into a forecasting ensemble that delivers improved results with reduced forecasting variance. We also deploy appropriate new evaluation metrics for comparing time-series predictions in streaming applications. Our experimental results demonstrate improved prediction performance that approaches human performance. An implementation of this work can be found at https://github.com/christosbampis/NARX_QoE_release.

  4. Modeling and prediction of human word search behavior in interactive machine translation

    Science.gov (United States)

    Ji, Duo; Yu, Bai; Ma, Bin; Ye, Na

    2017-12-01

    As a computer-aided translation method, interactive machine translation reduces the repetitive and mechanical operations of manual translation through a variety of techniques, improving translation efficiency, and plays an important role in practical translation work. In this paper, we take users' frequent word searches during the translation process as the research object and recast this behavior as a translation-selection problem under the current translation. The paper presents a prediction model for word-search behavior that jointly uses an alignment model, a translation model, and a language model. It achieves highly accurate prediction of word-search behavior and reduces the switching between mouse and keyboard operations in the users' translation process.

  5. Violence at Work Predicts Health-Related Absence from the Labor Market. A Follow-up Study

    DEFF Research Database (Denmark)

    Friis, Karina; Lasgaard, Mathias Kamp

    Background and Aims: Exposure to workplace violence is one of the most serious threats to employee safety. Even so, only few longitudinal studies have investigated whether workplace violence increases the risk of health-related absence from work. The aim of the present study was to examine whether...... physical violence at work increases the risk of health-related absence from work and is associated with a greater risk of health-related absence from work in certain subgroups defined by gender, age, and educational level. Method: The study draws on data from a health and morbidity survey from 2006 merged...... with register data for the period from 2006 to 2015 (n = 14,250). Logistic regression models were used to examine violence at work as a predictor of health-related absence from work. Results: Workplace violence predicted health-related absence from work several years after the assault. In the 10-year follow...

  6. Research on the Prediction Model of CPU Utilization Based on ARIMA-BP Neural Network

    Directory of Open Access Journals (Sweden)

    Wang Jina

    2016-01-01

    Full Text Available Dynamic deployment of virtual machines is one of the current research focuses in cloud computing. Traditional methods mainly act after service performance has already degraded and therefore lag behind. To solve this problem, a new prediction model based on CPU utilization is constructed in this paper. The CPU-utilization predictions serve as a reference for the VM dynamic deployment process, allowing deployment to be completed before service performance degrades. This not only ensures the quality of service but also improves server performance and resource utilization. The new CPU-utilization prediction method based on the ARIMA-BP neural network comprises four parts: preprocessing the collected data, building the ARIMA-BP neural network prediction model, correcting the nonlinear residuals of the time series with the BP prediction algorithm, and obtaining the prediction results by analyzing the above data comprehensively.
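
    A minimal hybrid sketch in the spirit of ARIMA-BP is shown below: an ARIMA model is fitted to a synthetic CPU-utilization series and a small neural network learns a correction from the ARIMA residuals. The model order, network size, and data are assumptions for illustration only.

```python
# Hybrid linear + neural residual correction on a synthetic CPU series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
t = np.arange(300)
cpu = 50 + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 2, t.size)  # % utilization
train, test = cpu[:288], cpu[288:]

arima = ARIMA(train, order=(2, 0, 1)).fit()
linear_forecast = arima.forecast(steps=len(test))

# BP-style correction: learn residual at time t from the p previous residuals.
resid = arima.resid
p = 6
X = np.array([resid[i - p:i] for i in range(p, len(resid))])
y = resid[p:]
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0).fit(X, y)

# One-step-ahead hybrid forecast: ARIMA forecast plus predicted residual.
correction = mlp.predict(resid[-p:].reshape(1, -1))[0]
hybrid_first_step = linear_forecast[0] + correction
print(round(hybrid_first_step, 2), "vs actual", round(test[0], 2))
```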

  7. Can you please put it out? Predicting non-smokers' assertiveness intentions at work.

    Science.gov (United States)

    Aspropoulos, Eleftherios; Lazuras, Lambros; Rodafinos, Angelos; Eiser, J Richard

    2010-04-01

    The present study aimed to identify the psychosocial predictors of non-smoker employee intentions to ask smokers not to smoke at work. The predictive effects of past behaviour, anticipated regret, social norms, attitudinal, outcome expectancy and behavioural control beliefs were investigated in relation to the Attitudes-Social influence-self-Efficacy (ASE) model. Data were collected from Greek non-smoker employees (n=137, mean age=33.5, SD=10.5, 54.7% female) in 15 companies. The main outcome measure was assertiveness intention. Data on participants' past smoking, age, gender and on current smoking policy in the company were also collected. The majority of employees (77.4%) reported being annoyed by exposure to passive smoking at work, but only 37% reported having asked a smoker colleague not to smoke in the last 30 days. Regression analysis showed that the strongest predictor of non-smokers' assertiveness intentions was how often they believed that other non-smokers were assertive. Perceived control over being assertive, annoyance with secondhand smoke (SHS) exposure at work and past assertive behaviour also significantly predicted assertiveness intentions. Assertiveness by non-smoker employees seems to be guided mainly by normative and behavioural control beliefs, annoyance with SHS exposure at work, and past behaviour. Interventions to promote assertiveness in non-smokers might benefit from efficacy training combined with conveying the messages that the majority of other non-smokers are frequently annoyed by exposure to SHS, and that nearly half of all non-smokers are assertive towards smokers.

  8. Children's Verbal Working Memory: Role of Processing Complexity in Predicting Spoken Sentence Comprehension

    Science.gov (United States)

    Magimairaj, Beula M.; Montgomery, James W.

    2012-01-01

    Purpose: This study investigated the role of processing complexity of verbal working memory tasks in predicting spoken sentence comprehension in typically developing children. Of interest was whether simple and more complex working memory tasks have similar or different power in predicting sentence comprehension. Method: Sixty-five children (6- to…

  9. Genomic prediction of complex human traits: relatedness, trait architecture and predictive meta-models

    Science.gov (United States)

    Spiliopoulou, Athina; Nagy, Reka; Bermingham, Mairead L.; Huffman, Jennifer E.; Hayward, Caroline; Vitart, Veronique; Rudan, Igor; Campbell, Harry; Wright, Alan F.; Wilson, James F.; Pong-Wong, Ricardo; Agakov, Felix; Navarro, Pau; Haley, Chris S.

    2015-01-01

    We explore the prediction of individuals' phenotypes for complex traits using genomic data. We compare several widely used prediction models, including Ridge Regression, LASSO and Elastic Nets estimated from cohort data, and polygenic risk scores constructed using published summary statistics from genome-wide association meta-analyses (GWAMA). We evaluate the interplay between relatedness, trait architecture and optimal marker density, by predicting height, body mass index (BMI) and high-density lipoprotein level (HDL) in two data cohorts, originating from Croatia and Scotland. We empirically demonstrate that dense models are better when all genetic effects are small (height and BMI) and target individuals are related to the training samples, while sparse models predict better in unrelated individuals and when some effects have moderate size (HDL). For HDL sparse models achieved good across-cohort prediction, performing similarly to the GWAMA risk score and to models trained within the same cohort, which indicates that, for predicting traits with moderately sized effects, large sample sizes and familial structure become less important, though still potentially useful. Finally, we propose a novel ensemble of whole-genome predictors with GWAMA risk scores and demonstrate that the resulting meta-model achieves higher prediction accuracy than either model on its own. We conclude that although current genomic predictors are not accurate enough for diagnostic purposes, performance can be improved without requiring access to large-scale individual-level data. Our methodologically simple meta-model is a means of performing predictive meta-analysis for optimizing genomic predictions and can be easily extended to incorporate multiple population-level summary statistics or other domain knowledge. PMID:25918167
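
    The meta-model idea can be sketched with a stacked ensemble of dense and sparse penalized regressions; the simulated marker matrix, effect sizes, and estimator choices below are stand-ins, not the study's cohorts or exact predictors.

```python
# Stacked ensemble of dense (Ridge) and sparse (LASSO, Elastic Net) predictors
# on a simulated genotype matrix with a polygenic trait.
import numpy as np
from sklearn.linear_model import RidgeCV, LassoCV, ElasticNetCV, LinearRegression
from sklearn.ensemble import StackingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)
n_samples, n_markers = 600, 1000
X = rng.binomial(2, 0.3, size=(n_samples, n_markers)).astype(float)  # genotype dosages
beta = np.zeros(n_markers)
beta[rng.choice(n_markers, 50, replace=False)] = rng.normal(0, 0.3, 50)
y = X @ beta + rng.normal(0, 1.0, n_samples)     # polygenic trait + noise

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

meta = StackingRegressor(
    estimators=[("ridge", RidgeCV()),
                ("lasso", LassoCV(cv=5, random_state=0)),
                ("enet", ElasticNetCV(cv=5, random_state=0))],
    final_estimator=LinearRegression(),
)
meta.fit(X_tr, y_tr)
print("meta-model R^2:", round(r2_score(y_te, meta.predict(X_te)), 3))
```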

  10. Impact of implementation choices on quantitative predictions of cell-based computational models

    Science.gov (United States)

    Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.

    2017-09-01

    'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.

  11. The Identification of a Threshold of Long Work Hours for Predicting Elevated Risks of Adverse Health Outcomes.

    Science.gov (United States)

    Conway, Sadie H; Pompeii, Lisa A; Gimeno Ruiz de Porras, David; Follis, Jack L; Roberts, Robert E

    2017-07-15

    Working long hours has been associated with adverse health outcomes. However, a definition of long work hours relative to adverse health risk has not been established. Repeated measures of work hours among approximately 2,000 participants from the Panel Study of Income Dynamics (1986-2011), conducted in the United States, were retrospectively analyzed to derive statistically optimized cutpoints of long work hours that best predicted three health outcomes. Work-hours cutpoints were assessed for model fit, calibration, and discrimination separately for the outcomes of poor self-reported general health, incident cardiovascular disease, and incident cancer. For each outcome, the work-hours threshold that best predicted increased risk was 52 hours per week or more for a minimum of 10 years. Workers exposed at this level had a higher risk of poor self-reported general health (relative risk (RR) = 1.28; 95% confidence interval (CI): 1.06, 1.53), cardiovascular disease (RR = 1.42; 95% CI: 1.24, 1.63), and cancer (RR = 1.62; 95% CI: 1.22, 2.17) compared with those working 35-51 hours per week for the same duration. This study provides the first health risk-based definition of long work hours. Further examination of the predictive power of this cutpoint on other health outcomes and in other study populations is needed. © The Author 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
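
    A worked example of the relative-risk arithmetic behind such a threshold analysis is given below; the 2x2 counts are invented purely to illustrate the calculation of an RR and its 95% confidence interval.

```python
# Relative risk with 95% CI from a 2x2 exposure/outcome table (counts invented).
import numpy as np

# exposed = worked >= 52 h/week for >= 10 years; outcome = poor general health
a, b = 64, 436     # exposed: with outcome, without outcome
c, d = 110, 990    # unexposed (35-51 h/week): with outcome, without outcome

risk_exposed = a / (a + b)
risk_unexposed = c / (c + d)
rr = risk_exposed / risk_unexposed

# Standard error of ln(RR) and Wald-type 95% confidence interval.
se_log_rr = np.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
ci_low, ci_high = np.exp(np.log(rr) + np.array([-1.96, 1.96]) * se_log_rr)
print(f"RR = {rr:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```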

  12. Accuracy assessment of landslide prediction models

    International Nuclear Information System (INIS)

    Othman, A N; Mohd, W M N W; Noraini, S

    2014-01-01

    The increasing population and expansion of settlements over hilly areas have greatly increased the impact of natural disasters such as landslides. It is therefore important to develop models which can accurately predict landslide hazard zones. Over the years, various techniques and models have been developed for this purpose. The aim of this paper is to assess the accuracy of the landslide prediction models developed by the authors. The methodology involved selection of the study area, data acquisition, data processing, model development and data analysis. The models are based on nine different landslide-inducing parameters, i.e. slope, land use, lithology, soil properties, geomorphology, flow accumulation, aspect, proximity to river and proximity to road. Rank sum, rating, pairwise comparison and AHP techniques are used to determine the weights for each of the parameters. Four (4) different models, which consider different parameter combinations, are developed by the authors. Results obtained are compared to the landslide history, and the accuracies for Model 1, Model 2, Model 3 and Model 4 are 66.7%, 66.7%, 60% and 22.9%, respectively. From the results, rank sum, rating and pairwise comparison can be useful techniques to predict landslide hazard zones
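
    The pairwise-comparison (AHP) weighting step can be sketched as follows; the 3x3 judgement matrix is hypothetical and covers only three of the nine parameters.

```python
# AHP-style weights from the principal eigenvector of a reciprocal
# pairwise-comparison matrix (judgements are made up).
import numpy as np

M = np.array([
    [1.0, 3.0, 5.0],     # slope vs land use, slope vs lithology
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(M)
principal = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()
print("weights (slope, land use, lithology):", np.round(weights, 3))

# Consistency ratio (CR < 0.1 is conventionally acceptable); RI = 0.58 for n = 3.
n, RI = 3, 0.58
CI = (eigvals.real[principal] - n) / (n - 1)
print("consistency ratio:", round(CI / RI, 3))
```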

  13. Performance of Reynolds Averaged Navier-Stokes Models in Predicting Separated Flows: Study of the Hump Flow Model Problem

    Science.gov (United States)

    Cappelli, Daniele; Mansour, Nagi N.

    2012-01-01

    Separation can be seen in most aerodynamic flows, but accurate prediction of separated flows is still a challenging problem for computational fluid dynamics (CFD) tools. The behavior of several Reynolds Averaged Navier-Stokes (RANS) models in predicting the separated flow over a wall-mounted hump is studied. The strengths and weaknesses of the most popular RANS models (Spalart-Allmaras, k-epsilon, k-omega, k-omega-SST) are evaluated using the open source software OpenFOAM. The hump flow modeled in this work has been documented in the 2004 CFD Validation Workshop on Synthetic Jets and Turbulent Separation Control. Only the baseline case is treated; the slot flow control cases are not considered in this paper. Particular attention is given to predicting the size of the recirculation bubble, the position of the reattachment point, and the velocity profiles downstream of the hump.

  14. A systematic investigation of computation models for predicting Adverse Drug Reactions (ADRs).

    Directory of Open Access Journals (Sweden)

    Qifan Kuang

    Full Text Available Early and accurate identification of adverse drug reactions (ADRs) is critically important for drug development and clinical safety. Computer-aided prediction of ADRs has attracted increasing attention in recent years, and many computational models have been proposed. However, because of the lack of systematic analysis and comparison of the different computational models, there remain limitations in designing more effective algorithms and selecting more useful features. There is therefore an urgent need to review and analyze previous computation models to obtain general conclusions that can provide useful guidance to construct more effective computational models to predict ADRs. In the current study, the main work is to compare and analyze the performance of existing computational methods to predict ADRs, by implementing and evaluating additional algorithms that have been earlier used for predicting drug targets. Our results indicated that topological and intrinsic features were complementary to an extent and the Jaccard coefficient had an important and general effect on the prediction of drug-ADR associations. By comparing the structure of each algorithm, final formulas of these algorithms were all converted to linear model in form, based on this finding we propose a new algorithm called the general weighted profile method and it yielded the best overall performance among the algorithms investigated in this paper. Several meaningful conclusions and useful findings regarding the prediction of ADRs are provided for selecting optimal features and algorithms.

  15. Mental models accurately predict emotion transitions.

    Science.gov (United States)

    Thornton, Mark A; Tamir, Diana I

    2017-06-06

    Successful social interactions depend on people's ability to predict others' future actions and emotions. People possess many mechanisms for perceiving others' current emotional states, but how might they use this information to predict others' future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others' emotional dynamics. People could then use these mental models of emotion transitions to predict others' future emotions from currently observable emotions. To test this hypothesis, studies 1-3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants' ratings of emotion transitions predicted others' experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation-valence, social impact, rationality, and human mind-inform participants' mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants' accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.
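
    The underlying computation of an empirical emotion-transition matrix can be illustrated with a toy sequence, as below; the states, data, and comparison step are invented for illustration and are not the study's experience-sampling datasets.

```python
# Empirical transition probabilities between emotion states from a toy sequence.
import numpy as np

states = ["calm", "happy", "anxious", "sad"]
idx = {s: i for i, s in enumerate(states)}
sequence = ["calm", "happy", "happy", "anxious", "sad", "anxious",
            "calm", "calm", "happy", "anxious", "anxious", "sad"]

counts = np.zeros((len(states), len(states)))
for a, b in zip(sequence[:-1], sequence[1:]):
    counts[idx[a], idx[b]] += 1

transition_probs = counts / counts.sum(axis=1, keepdims=True)
print(np.round(transition_probs, 2))
# Perceivers' rated transition likelihoods could be compared against these rows,
# e.g. with a rank correlation, to quantify mental-model accuracy.
```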

  16. Mental models accurately predict emotion transitions

    Science.gov (United States)

    Thornton, Mark A.; Tamir, Diana I.

    2017-01-01

    Successful social interactions depend on people’s ability to predict others’ future actions and emotions. People possess many mechanisms for perceiving others’ current emotional states, but how might they use this information to predict others’ future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others’ emotional dynamics. People could then use these mental models of emotion transitions to predict others’ future emotions from currently observable emotions. To test this hypothesis, studies 1–3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants’ ratings of emotion transitions predicted others’ experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation—valence, social impact, rationality, and human mind—inform participants’ mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants’ accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone. PMID:28533373

  17. Poisson Mixture Regression Models for Heart Disease Prediction.

    Science.gov (United States)

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criteria value. Furthermore, a Zero Inflated Poisson Mixture Regression model turned out to be the best model for heart prediction over all models as it both clusters individuals into high or low risk category and predicts rate to heart disease componentwise given clusters available. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using Poisson mixture regression model.

  18. Poisson Mixture Regression Models for Heart Disease Prediction

    Science.gov (United States)

    Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criteria value. Furthermore, a Zero Inflated Poisson Mixture Regression model turned out to be the best model for heart prediction over all models as it both clusters individuals into high or low risk category and predicts rate to heart disease componentwise given clusters available. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using Poisson mixture regression model. PMID:27999611

  19. An Extended Assessment of Fluid Flow Models for the Prediction of Two-Dimensional Steady-State Airfoil Aerodynamics

    Directory of Open Access Journals (Sweden)

    José F. Herbert-Acero

    2015-01-01

    Full Text Available This work presents the analysis, application, and comparison of thirteen fluid flow models in the prediction of two-dimensional airfoil aerodynamics, considering laminar and turbulent subsonic inflow conditions. Diverse sensitivity analyses of different free parameters (e.g., the domain topology and its discretization, the flow model, and the solution method together with its convergence mechanisms) revealed important effects on the simulations’ outcomes. The NACA 4412 airfoil was considered throughout the work and the computational predictions were compared with experiments conducted under a wide range of Reynolds numbers (7e5≤Re≤9e6) and angles-of-attack (-10°≤α≤20°). Improvements both in modeling accuracy and processing time were achieved by considering the RS LP-S and the Transition SST turbulence models, and by considering finite volume-based solution methods with preconditioned systems, respectively. The RS LP-S model provided the best lift force predictions due to the adequate modeling of the micro and macro anisotropic turbulence at the airfoil’s surface and at the nearby flow field, which in turn allowed the adequate prediction of stall conditions. The Transition-SST model provided the best drag force predictions due to adequate modeling of the laminar-to-turbulent flow transition and the surface shear stresses. Conclusions, recommendations, and a comprehensive research agenda are presented based on validated computational results.

  20. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Full Text Available Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with the problem of lower accuracy for the prediction which causes costly maintenance. Although many researchers have developed some performance prediction models, the accuracy of prediction has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Then three models including the multivariate nonlinear regression (MNLR) model, artificial neural network (ANN) model, and Markov Chain (MC) model are tested and compared using a set of actual pavement survey data taken on interstate highway with varying design features, traffic, and climate data. It is found that MNLR model needs further recalibration, while the ANN model needs more data for training the network. MC model seems a good tool for pavement performance prediction when the data is limited, but it is based on visual inspections and not explicitly related to quantitative physical parameters. This paper then suggests that the further direction for developing the performance prediction model is incorporating the advantages and disadvantages of different models to obtain better accuracy.

  1. Structural maturation and brain activity predict future working memory capacity during childhood development.

    Science.gov (United States)

    Ullman, Henrik; Almeida, Rita; Klingberg, Torkel

    2014-01-29

    Human working memory capacity develops during childhood and is a strong predictor of future academic performance, in particular, achievements in mathematics and reading. Predicting working memory development is important for the early identification of children at risk for poor cognitive and academic development. Here we show that structural and functional magnetic resonance imaging data explain variance in children's working memory capacity 2 years later, which was unique variance in addition to that predicted using cognitive tests. While current working memory capacity correlated with frontoparietal cortical activity, the future capacity could be inferred from structure and activity in basal ganglia and thalamus. This gives a novel insight into the neural mechanisms of childhood development and supports the idea that neuroimaging can have a unique role in predicting children's cognitive development.

  2. Addressing issues associated with evaluating prediction models for survival endpoints based on the concordance statistic.

    Science.gov (United States)

    Wang, Ming; Long, Qi

    2016-09-01

    Prediction models for disease risk and prognosis play an important role in biomedical research, and evaluating their predictive accuracy in the presence of censored data is of substantial interest. The standard concordance (c) statistic has been extended to provide a summary measure of predictive accuracy for survival models. Motivated by a prostate cancer study, we address several issues associated with evaluating survival prediction models based on c-statistic with a focus on estimators using the technique of inverse probability of censoring weighting (IPCW). Compared to the existing work, we provide complete results on the asymptotic properties of the IPCW estimators under the assumption of coarsening at random (CAR), and propose a sensitivity analysis under the mechanism of noncoarsening at random (NCAR). In addition, we extend the IPCW approach as well as the sensitivity analysis to high-dimensional settings. The predictive accuracy of prediction models for cancer recurrence after prostatectomy is assessed by applying the proposed approaches. We find that the estimated predictive accuracy for the models in consideration is sensitive to NCAR assumption, and thus identify the best predictive model. Finally, we further evaluate the performance of the proposed methods in both settings of low-dimensional and high-dimensional data under CAR and NCAR through simulations. © 2016, The International Biometric Society.
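    For orientation, the following sketch computes a plain (unweighted) Harrell-type concordance statistic over comparable pairs of censored data; the IPCW estimators studied in this record additionally reweight pairs by the inverse probability of censoring, which is not shown here.

```python
import numpy as np

def concordance(time, event, risk_score):
    """Fraction of comparable pairs ordered correctly by the risk score.
    A pair (i, j) is comparable when subject i had an event and time[i] < time[j]."""
    conc, ties, comparable = 0.0, 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if event[i] == 1 and time[i] < time[j]:
                comparable += 1
                if risk_score[i] > risk_score[j]:
                    conc += 1
                elif risk_score[i] == risk_score[j]:
                    ties += 1
    return (conc + 0.5 * ties) / comparable

time = np.array([5.0, 8.0, 12.0, 3.0, 9.0])
event = np.array([1, 0, 1, 1, 0])            # 1 = event observed, 0 = censored
risk = np.array([2.1, 0.5, 0.3, 3.0, 0.8])   # higher score = higher predicted risk
print("c-statistic:", round(concordance(time, event, risk), 3))
```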

  3. Conceptual Software Reliability Prediction Models for Nuclear Power Plant Safety Systems

    International Nuclear Information System (INIS)

    Johnson, G.; Lawrence, D.; Yu, H.

    2000-01-01

    of the individual hardware/software components. Existing modeling techniques--such as fault tree analyses or reliability block diagrams--can probably be adapted to bridge the gaps between the reliability of the hardware components, the individual software elements, and the overall digital system. This project builds upon previous work to survey and rank potential measurement methods which could be used to measure software product reliability. This survey and ranking identified candidate measures for use in predicting the reliability of digital computer-based control and protection systems for nuclear power plants. Additionally, information gleaned from the study can be used to supplement existing review methods during an assessment of software-based digital systems

  4. Theoretical model for cavitation erosion prediction in centrifugal pump impeller

    International Nuclear Information System (INIS)

    Rayan, M.A.; Mahgob, M.M.; Mostafa, N.H.

    1990-01-01

    Cavitation is known to have great effects on pump hydraulic and mechanical characteristics. These effects are mainly described by deviation in pump performance, increasing vibration and noise level as well as erosion of blade and casing materials. In the present work, only the hydrodynamic aspect of cavitation was considered. The efforts were directed toward the study of cavitation inception, cavity mechanics and material erosion in order to clarify the macrohydrodynamic aspects of cavitation erosive wear in real machines. As a result of this study, it was found that cavitation damage can be predicted from model data. The obtained theoretical results show good agreement with the experimental results obtained in this investigation and with results of some other investigations. The application of the findings of this work will help the design engineer in predicting the erosion rate, according to the different operating conditions. (author)

  5. Development and validation of a prediction model for long-term sickness absence based on occupational health survey variables.

    Science.gov (United States)

    Roelen, Corné; Thorsen, Sannie; Heymans, Martijn; Twisk, Jos; Bültmann, Ute; Bjørner, Jakob

    2018-01-01

    The purpose of this study is to develop and validate a prediction model for identifying employees at increased risk of long-term sickness absence (LTSA), by using variables commonly measured in occupational health surveys. Based on the literature, 15 predictor variables were retrieved from the DAnish National working Environment Survey (DANES) and included in a model predicting incident LTSA (≥4 consecutive weeks) during 1-year follow-up in a sample of 4000 DANES participants. The 15-predictor model was reduced by backward stepwise statistical techniques and then validated in a sample of 2524 DANES participants, not included in the development sample. Identification of employees at increased LTSA risk was investigated by receiver operating characteristic (ROC) analysis; the area-under-the-ROC-curve (AUC) reflected discrimination between employees with and without LTSA during follow-up. The 15-predictor model was reduced to a 9-predictor model including age, gender, education, self-rated health, mental health, prior LTSA, work ability, emotional job demands, and recognition by the management. Discrimination by the 9-predictor model was significant (AUC = 0.68; 95% CI 0.61-0.76), but not practically useful. A prediction model based on occupational health survey variables identified employees with an increased LTSA risk, but should be further developed into a practically useful tool to predict the risk of LTSA in the general working population. Implications for rehabilitation Long-term sickness absence risk predictions would enable healthcare providers to refer high-risk employees to rehabilitation programs aimed at preventing or reducing work disability. A prediction model based on health survey variables discriminates between employees at high and low risk of long-term sickness absence, but discrimination was not practically useful. Health survey variables provide insufficient information to determine long-term sickness absence risk profiles. There is a need for
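    A schematic version of the development/validation workflow described above, using placeholder data rather than the DANES variables: fit a logistic model on a development sample and quantify discrimination with the area under the ROC curve on a separate validation sample.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 4000
X = rng.normal(size=(n, 9))                    # standardized placeholder predictors
logit = 0.8 * X[:, 0] - 0.6 * X[:, 1] + 0.3 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # incident LTSA during follow-up

X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.4, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"validation AUC: {auc:.2f}")            # the study above reports AUC = 0.68
```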

  6. Real-time prediction of respiratory motion based on a local dynamic model in an augmented space.

    Science.gov (United States)

    Hong, S-M; Jung, B-H; Ruan, D

    2011-03-21

    Motion-adaptive radiotherapy aims to deliver ablative radiation dose to the tumor target with minimal normal tissue exposure, by accounting for real-time target movement. In practice, prediction is usually necessary to compensate for system latency induced by measurement, communication and control. This work focuses on predicting respiratory motion, which is most dominant for thoracic and abdominal tumors. We develop and investigate the use of a local dynamic model in an augmented space, motivated by the observation that respiratory movement exhibits a locally circular pattern in a plane augmented with a delayed axis. By including the angular velocity as part of the system state, the proposed dynamic model effectively captures the natural evolution of respiratory motion. The first-order extended Kalman filter is used to propagate and update the state estimate. The target location is predicted by evaluating the local dynamic model equations at the required prediction length. This method is complementary to existing work in that (1) the local circular motion model characterizes 'turning', overcoming the limitation of linear motion models; (2) it uses a natural state representation including the local angular velocity and updates the state estimate systematically, offering explicit physical interpretations; (3) it relies on a parametric model and is much less data-satiate than the typical adaptive semiparametric or nonparametric method. We tested the performance of the proposed method with ten RPM traces, using the normalized root mean squared difference between the predicted value and the retrospective observation as the error metric. Its performance was compared with predictors based on the linear model, the interacting multiple linear models and the kernel density estimator for various combinations of prediction lengths and observation rates. The local dynamic model based approach provides the best performance for short to medium prediction lengths under relatively

  7. Support for the Logical Execution Time Model on a Time-predictable Multicore Processor

    DEFF Research Database (Denmark)

    Kluge, Florian; Schoeberl, Martin; Ungerer, Theo

    2016-01-01

    The logical execution time (LET) model increases the compositionality of real-time task sets. Removal or addition of tasks does not influence the communication behavior of other tasks. In this work, we extend a multicore operating system running on a time-predictable multicore processor to support...... the LET model. For communication between tasks we use message passing on a time-predictable network-on-chip to avoid the bottleneck of shared memory. We report our experiences and present results on the costs in terms of memory and execution time....

  8. Comparison of the Predictive Performance and Interpretability of Random Forest and Linear Models on Benchmark Data Sets.

    Science.gov (United States)

    Marchese Robinson, Richard L; Palczewska, Anna; Palczewski, Jan; Kidley, Nathan

    2017-08-28

    The ability to interpret the predictions made by quantitative structure-activity relationships (QSARs) offers a number of advantages. While QSARs built using nonlinear modeling approaches, such as the popular Random Forest algorithm, might sometimes be more predictive than those built using linear modeling approaches, their predictions have been perceived as difficult to interpret. However, a growing number of approaches have been proposed for interpreting nonlinear QSAR models in general and Random Forest in particular. In the current work, we compare the performance of Random Forest to those of two widely used linear modeling approaches: linear Support Vector Machines (SVMs) (or Support Vector Regression (SVR)) and partial least-squares (PLS). We compare their performance in terms of their predictivity as well as the chemical interpretability of the predictions using novel scoring schemes for assessing heat map images of substructural contributions. We critically assess different approaches for interpreting Random Forest models as well as for obtaining predictions from the forest. We assess the models on a large number of widely employed public-domain benchmark data sets corresponding to regression and binary classification problems of relevance to hit identification and toxicology. We conclude that Random Forest typically yields comparable or possibly better predictive performance than the linear modeling approaches and that its predictions may also be interpreted in a chemically and biologically meaningful way. In contrast to earlier work looking at interpretation of nonlinear QSAR models, we directly compare two methodologically distinct approaches for interpreting Random Forest models. The approaches for interpreting Random Forest assessed in our article were implemented using open-source programs that we have made available to the community. These programs are the rfFC package ( https://r-forge.r-project.org/R/?group_id=1725 ) for the R statistical
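    The comparison described above can be sketched with scikit-learn on synthetic regression data (not the benchmark sets used in the paper), pitting Random Forest against linear SVR and PLS regression under cross-validated R².

```python
from sklearn.cross_decomposition import PLSRegression
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVR

# Synthetic stand-in for a QSAR regression data set.
X, y = make_regression(n_samples=300, n_features=50, n_informative=10,
                       noise=10.0, random_state=0)

models = {
    "Random Forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "Linear SVR": LinearSVR(C=1.0, max_iter=10000),
    "PLS (5 comps)": PLSRegression(n_components=5),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name:14s} mean cross-validated R^2 = {scores.mean():.2f}")
```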

  9. High Working Memory Capacity Predicts Less Retrieval Induced Forgetting

    NARCIS (Netherlands)

    Mall, Jonathan T.; Morey, Candice C.

    2013-01-01

    Background : Working Memory Capacity (WMC) is thought to be related to executive control and focused memory search abilities. These two hypotheses make contrasting predictions regarding the effects of retrieval on forgetting. Executive control during memory retrieval is believed to lead to retrieval

  10. Reproducing tailing in breakthrough curves: Are statistical models equally representative and predictive?

    Science.gov (United States)

    Pedretti, Daniele; Bianchi, Marco

    2018-03-01

    Breakthrough curves (BTCs) observed during tracer tests in highly heterogeneous aquifers display strong tailing. Power laws are popular models for both the empirical fitting of these curves, and the prediction of transport using upscaling models based on best-fitted estimated parameters (e.g. the power law slope or exponent). The predictive capacity of power law based upscaling models can be however questioned due to the difficulties to link model parameters with the aquifers' physical properties. This work analyzes two aspects that can limit the use of power laws as effective predictive tools: (a) the implication of statistical subsampling, which often renders power laws undistinguishable from other heavily tailed distributions, such as the logarithmic (LOG); (b) the difficulties to reconcile fitting parameters obtained from models with different formulations, such as the presence of a late-time cutoff in the power law model. Two rigorous and systematic stochastic analyses, one based on benchmark distributions and the other on BTCs obtained from transport simulations, are considered. It is found that a power law model without cutoff (PL) results in best-fitted exponents (αPL) falling in the range of typical experimental values reported in the literature (1.5 tailing becomes heavier. Strong fluctuations occur when the number of samples is limited, due to the effects of subsampling. On the other hand, when the power law model embeds a cutoff (PLCO), the best-fitted exponent (αCO) is insensitive to the degree of tailing and to the effects of subsampling and tends to a constant αCO ≈ 1. In the PLCO model, the cutoff rate (λ) is the parameter that fully reproduces the persistence of the tailing and is shown to be inversely correlated to the LOG scale parameter (i.e. with the skewness of the distribution). The theoretical results are consistent with the fitting analysis of a tracer test performed during the MADE-5 experiment. It is shown that a simple
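    The subsampling effect discussed above can be illustrated with a small experiment on a synthetic heavy-tailed sample (not BTC data): the continuous power-law maximum-likelihood exponent and a lognormal fit are computed for subsamples of increasing size.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# numpy's pareto(a) + 1 is a classical Pareto with density exponent a + 1 (here 2.5).
x_full = rng.pareto(1.5, 20000) + 1.0

for n in (200, 2000, 20000):                          # effect of subsample size
    x = rng.choice(x_full, size=n, replace=False)
    xmin = x.min()
    alpha_hat = 1.0 + n / np.sum(np.log(x / xmin))    # continuous power-law MLE
    sigma, loc, scale = stats.lognorm.fit(x, floc=0)  # lognormal MLE (sigma = shape)
    print(f"n={n:6d}  alpha_hat={alpha_hat:.2f}  lognormal sigma={sigma:.2f}")
```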

  11. The Conservation of Resources Model Applied to Work-Family Conflict and Strain.

    Science.gov (United States)

    Grandey, Alicia A.; Cropanzano, Russell

    1999-01-01

    Using time-lagged research design and path analysis, findings from 132 college faculty supported the conservation of resources model, which predicts that, as chronic work and family stressors drain resources, dissatisfaction and life distress increase and health declines. Self-esteem was not a moderating variable. (SK)

  12. Do Work Characteristics Predict Health Deterioration Among Employees with Chronic Diseases?

    NARCIS (Netherlands)

    Wind, A. de; Boot, C.R.L.; Sewdas, R.; Scharn, M.; Heuvel, S.G. van den; Beek, A.J. van der

    2017-01-01

    Purpose In our ageing workforce, the increasing numbers of employees with chronic diseases are encouraged to prolong their working lives. It is important to prevent health deterioration in this vulnerable group. This study aims to investigate whether work characteristics predict health deterioration

  13. Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?

    Science.gov (United States)

    Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander

    2016-01-01

    Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random-effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.

  14. Risk terrain modeling predicts child maltreatment.

    Science.gov (United States)

    Daley, Dyann; Bachmann, Michael; Bachmann, Brittany A; Pedigo, Christian; Bui, Minh-Thuy; Coffman, Jamye

    2016-12-01

    As indicated by research on the long-term effects of adverse childhood experiences (ACEs), maltreatment has far-reaching consequences for affected children. Effective prevention measures have been elusive, partly due to difficulty in identifying vulnerable children before they are harmed. This study employs Risk Terrain Modeling (RTM), an analysis of the cumulative effect of environmental factors thought to be conducive for child maltreatment, to create a highly accurate prediction model for future substantiated child maltreatment cases in the City of Fort Worth, Texas. The model is superior to commonly used hotspot predictions and more beneficial in aiding prevention efforts in a number of ways: 1) it identifies the highest risk areas for future instances of child maltreatment with improved precision and accuracy; 2) it aids the prioritization of risk-mitigating efforts by informing about the relative importance of the most significant contributing risk factors; 3) since predictions are modeled as a function of easily obtainable data, practitioners do not have to undergo the difficult process of obtaining official child maltreatment data to apply it; 4) the inclusion of a multitude of environmental risk factors creates a more robust model with higher predictive validity; and, 5) the model does not rely on a retrospective examination of past instances of child maltreatment, but adapts predictions to changing environmental conditions. The present study introduces and examines the predictive power of this new tool to aid prevention efforts seeking to improve the safety, health, and wellbeing of vulnerable children. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  15. Economic decision making and the application of nonparametric prediction models

    Science.gov (United States)

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2008-01-01

    Sustained increases in energy prices have focused attention on gas resources in low-permeability shale or in coals that were previously considered economically marginal. Daily well deliverability is often relatively small, although the estimates of the total volumes of recoverable resources in these settings are often large. Planning and development decisions for extraction of such resources must be areawide because profitable extraction requires optimization of scale economies to minimize costs and reduce risk. For an individual firm, the decision to enter such plays depends on reconnaissance-level estimates of regional recoverable resources and on cost estimates to develop untested areas. This paper shows how simple nonparametric local regression models, used to predict technically recoverable resources at untested sites, can be combined with economic models to compute regional-scale cost functions. The context of the worked example is the Devonian Antrim-shale gas play in the Michigan basin. One finding relates to selection of the resource prediction model to be used with economic models. Models chosen because they can best predict aggregate volume over larger areas (many hundreds of sites) smooth out granularity in the distribution of predicted volumes at individual sites. This loss of detail affects the representation of economic cost functions and may affect economic decisions. Second, because some analysts consider unconventional resources to be ubiquitous, the selection and order of specific drilling sites may, in practice, be determined arbitrarily by extraneous factors. The analysis shows a 15-20% gain in gas volume when these simple models are applied to order drilling prospects strategically rather than to choose drilling locations randomly. Copyright © 2008 Society of Petroleum Engineers.

  16. Case studies in archaeological predictive modelling

    NARCIS (Netherlands)

    Verhagen, Jacobus Wilhelmus Hermanus Philippus

    2007-01-01

    In this thesis, a collection of papers is put together dealing with various quantitative aspects of predictive modelling and archaeological prospection. Among the issues covered are the effects of survey bias on the archaeological data used for predictive modelling, and the complexities of testing

  17. Construction Worker Fatigue Prediction Model Based on System Dynamic

    OpenAIRE

    Wahyu Adi Tri Joko; Ayu Ratnawinanda Lila

    2017-01-01

    Construction accidents can be caused by internal and external factors such as worker fatigue and an unsafe project environment. Tight construction project schedules force construction workers to work overtime over long periods, and this situation leads to worker fatigue. This paper proposes a model to predict construction worker fatigue based on system dynamics (SD). System dynamics is used to represent the correlations among internal and external factors and to simulate the level of worker fatigue. To validate...

  18. Mathematical models to predict rheological parameters of lateritic hydromixtures

    Directory of Open Access Journals (Sweden)

    Gabriel Hernández-Ramírez

    2017-10-01

    Full Text Available The objective of the present work was to establish mathematical models for predicting the rheological parameters of lateritic pulp at solids concentrations from 35% to 48%, preheated hydromixture temperatures above 82 °C, and mineral numbers between 3 and 16. Four samples of lateritic pulp taken at different process locations were used in the study. The results show that, under the conditions of this study, the plastic properties of the lateritic pulp conform to the Herschel-Bulkley model for real plastics. They also show that, for current operating conditions and even for new situations, UPD mathematical models predict the rheological parameters better than least-squares mathematical models.
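    The Herschel-Bulkley model referred to above, tau = tau0 + K·gamma_dot^n, can be fitted to flow-curve data by non-linear least squares; the sketch below uses assumed synthetic rheometry data rather than the lateritic pulp measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def herschel_bulkley(gamma_dot, tau0, K, n):
    """Shear stress (Pa) vs shear rate (1/s): tau = tau0 + K * gamma_dot**n."""
    return tau0 + K * gamma_dot**n

gamma_dot = np.linspace(1, 300, 30)                      # shear rates, 1/s
tau_true = herschel_bulkley(gamma_dot, 12.0, 0.8, 0.55)  # assumed "true" parameters
tau_meas = tau_true + np.random.default_rng(3).normal(0, 0.5, gamma_dot.size)

(tau0, K, n), _ = curve_fit(herschel_bulkley, gamma_dot, tau_meas, p0=[1.0, 1.0, 1.0])
print(f"tau0 = {tau0:.1f} Pa, K = {K:.2f} Pa.s^n, n = {n:.2f}")  # n < 1: shear-thinning
```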

  19. Predicting College Women's Career Plans: Instrumentality, Work, and Family

    Science.gov (United States)

    Savela, Alexandra E.; O'Brien, Karen M.

    2016-01-01

    This study examined how college women's instrumentality and expectations about combining work and family predicted early career development variables. Specifically, 177 undergraduate women completed measures of instrumentality (i.e., traits such as ambition, assertiveness, and risk taking), willingness to compromise career for family, anticipated…

  20. Development of a modified equilibrium model for biomass pilot-scale fluidized bed gasifier performance predictions

    International Nuclear Information System (INIS)

    Rodriguez-Alejandro, David A.; Nam, Hyungseok; Maglinao, Amado L.; Capareda, Sergio C.; Aguilera-Alvarado, Alberto F.

    2016-01-01

    The objective of this work is to develop a thermodynamic model considering non-stoichiometric restrictions. The model was validated against experimental work using a bench-scale fluidized bed gasifier with wood chips, dairy manure, and sorghum, and was then used in a further parametric study to predict the performance of a pilot-scale fluidized biomass gasifier. Gibbs free energy minimization was applied to the modified equilibrium model, considering heat loss to the surroundings, carbon efficiency, and two non-equilibrium factors based on empirical correlations with ER and gasification temperature. The model was in good agreement with the experimental data, with RMS < 4 for the produced gas. The parametric study covered 0.01 < ER < 0.99 and 500 °C < T < 900 °C to predict syngas concentrations and the corresponding LHV (lower heating value) for the optimization. Tar from WC gasification contained more aromatics than tar from manure gasification. A wood gasification tar simulation was produced to predict the amount of tar at specific conditions. The operating conditions for the highest-quality syngas were reconciled experimentally with three biomass wastes using a fluidized bed gasifier. The thermodynamic model was used to predict the gasification performance at conditions beyond the actual operation. - Highlights: • Syngas from experimental gasification was used to create a non-equilibrium model. • Different types of biomass (HTS, DM, and WC) were used for gasification modelling. • Different tar compositions were identified with a simulation of tar yields. • The optimum operating conditions were found through the developed model.

  1. Job demands-resources predicting burnout and work engagement among Belgian home health care nurses: A cross-sectional study.

    Science.gov (United States)

    Vander Elst, Tinne; Cavents, Carolien; Daneels, Katrien; Johannik, Kristien; Baillien, Elfi; Van den Broeck, Anja; Godderis, Lode

    A better knowledge of the job aspects that may predict home health care nurses' burnout and work engagement is important in view of stress prevention and health promotion. The Job Demands-Resources model predicts that job demands and resources relate to burnout and work engagement but has not previously been tested in the specific context of home health care nursing. The present study offers a comprehensive test of the Job-Demands Resources model in home health care nursing. We investigate the main and interaction effects of distinctive job demands (workload, emotional demands and aggression) and resources (autonomy, social support and learning opportunities) on burnout and work engagement. Analyses were conducted using cross-sectional data from 675 Belgian home health care nurses, who participated in a voluntary and anonymous survey. The results show that workload and emotional demands were positively associated with burnout, whereas aggression was unrelated to burnout. All job resources were associated with higher levels of work engagement and lower levels of burnout. In addition, social support buffered the positive relationship between workload and burnout. Home health care organizations should invest in dealing with workload and emotional demands and stimulating the job resources under study to reduce the risk of burnout and increase their nurses' work engagement. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Prediction of the working parameters of a wood waste gasifier through an equilibrium model

    Energy Technology Data Exchange (ETDEWEB)

    Altafini, Carlos R.; Baretto, Ronaldo M. [Caxias do Sul Univ., Dept. of Mechanical Engineering, Caxias do Sul, RS (Brazil); Wander, Paulo R. [Caxias do Sul Univ., Dept. of Mechanical Engineering, Caxias do Sul, RS (Brazil); Federal Univ. of Rio Grande do Sul State (UFRGS), Mechanical Engineering Postgraduation Program (PROMEC), RS (Brazil)

    2003-10-01

    This paper deals with the computational simulation of a wood waste (sawdust) gasifier using an equilibrium model based on minimization of the Gibbs free energy. The gasifier has been tested with Pinus Elliotis sawdust, an exotic species widely cultivated in the south of Brazil. The biomass used in the tests had a moisture content of nearly 10% (wt% on a wet basis), and the average composition of the gas produced (without tar) is compared with the results of the equilibrium models used. Sensitivity studies were carried out to verify the influence of the sawdust moisture content on the fuel gas composition and on its heating value. More complex models were developed to reproduce the studied gasifier with better accuracy. Although the equilibrium models do not represent the reactions that occur at relatively high temperatures (≈800 °C) very well, they can be useful for showing tendencies in the working parameter variations of a gasifier. (Author)

  3. Phototherapy of the newborn: a predictive model for the outcome.

    Science.gov (United States)

    Ossamu Osaku, Nelson; Silverio Lopes, Heitor

    2005-01-01

    Jaundice is one of the most common problems of the newborn. In most cases, jaundice is considered a transient physiological situation, but sometimes it can lead to death or serious injuries in survivors. For decades, phototherapy has been used as the main method for prevention and treatment of hyperbilirubinaemia of the newborn. This work aims at finding a predictive model for the decrement of blood bilirubin following conventional phototherapy. Data from 90 patients were collected and used in a multiple regression analysis. A rigorous statistical analysis was done in order to guarantee a correct and valid model. The obtained model was able to explain 78% of the variation of the dependent variable. We found that it is possible to predict the total serum bilirubin of a patient under phototherapy from the birth weight, the bilirubin level at the beginning of treatment, the duration of exposure, and the irradiance. In addition, it is possible to infer the time necessary for a given decrement of bilirubin under approximately constant irradiance.
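    A schematic version of the modelling approach (ordinary multiple linear regression); the predictors mirror those named above, but the data and coefficients are synthetic placeholders, not the 90-patient dataset.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 90
birth_weight = rng.normal(3.2, 0.5, n)     # kg
tsb_initial = rng.normal(15.0, 3.0, n)     # bilirubin at start of phototherapy, mg/dL
duration = rng.uniform(6, 48, n)           # hours of exposure
irradiance = rng.uniform(8, 12, n)         # uW/cm^2/nm
# Assumed generating relation for the bilirubin decrement (placeholder coefficients).
decrement = (0.5 + 0.15 * tsb_initial + 0.05 * duration + 0.2 * irradiance
             - 0.3 * birth_weight + rng.normal(0, 0.8, n))

X = sm.add_constant(np.column_stack([birth_weight, tsb_initial, duration, irradiance]))
fit = sm.OLS(decrement, X).fit()
print(fit.rsquared)   # proportion of variance explained (0.78 in the study above)
print(fit.params)     # intercept and regression coefficients
```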

  4. TACD: a transportable ant colony discrimination model for corporate bankruptcy prediction

    Science.gov (United States)

    Lalbakhsh, Pooia; Chen, Yi-Ping Phoebe

    2017-05-01

    This paper presents a transportable ant colony discrimination strategy (TACD) to predict corporate bankruptcy, a topic of vital importance that is attracting increasing interest in the field of economics. The proposed algorithm uses financial ratios to build a binary prediction model for companies with the two statuses of bankrupt and non-bankrupt. The algorithm takes advantage of an improved version of continuous ant colony optimisation (CACO) at the core, which is used to create an accurate, simple and understandable linear model for discrimination. This also enables the algorithm to work with continuous values, leading to more efficient learning and adaption by avoiding data discretisation. We conduct a comprehensive performance evaluation on three real-world data sets under a stratified cross-validation strategy. In three different scenarios, TACD is compared with 11 other bankruptcy prediction strategies. We also discuss the efficiency of the attribute selection methods used in the experiments. In addition to its simplicity and understandability, statistical significance tests prove the efficiency of TACD against the other prediction algorithms in both measures of AUC and accuracy.

  5. Fingerprint verification prediction model in hand dermatitis.

    Science.gov (United States)

    Lee, Chew K; Chang, Choong C; Johor, Asmah; Othman, Puwira; Baba, Roshidah

    2015-07-01

    Hand dermatitis associated fingerprint changes is a significant problem and affects fingerprint verification processes. This study was done to develop a clinically useful prediction model for fingerprint verification in patients with hand dermatitis. A case-control study involving 100 patients with hand dermatitis. All patients verified their thumbprints against their identity card. Registered fingerprints were randomized into a model derivation and model validation group. Predictive model was derived using multiple logistic regression. Validation was done using the goodness-of-fit test. The fingerprint verification prediction model consists of a major criterion (fingerprint dystrophy area of ≥ 25%) and two minor criteria (long horizontal lines and long vertical lines). The presence of the major criterion predicts it will almost always fail verification, while presence of both minor criteria and presence of one minor criterion predict high and low risk of fingerprint verification failure, respectively. When none of the criteria are met, the fingerprint almost always passes the verification. The area under the receiver operating characteristic curve was 0.937, and the goodness-of-fit test showed agreement between the observed and expected number (P = 0.26). The derived fingerprint verification failure prediction model is validated and highly discriminatory in predicting risk of fingerprint verification in patients with hand dermatitis. © 2014 The International Society of Dermatology.

  6. A Critical Plane-energy Model for Multiaxial Fatigue Life Prediction of Homogeneous and Heterogeneous Materials

    Science.gov (United States)

    Wei, Haoyang

    A new critical plane-energy model is proposed in this thesis for multiaxial fatigue life prediction of homogeneous and heterogeneous materials. A brief review of existing methods, especially critical plane-based and energy-based methods, is given first. Special focus is on one critical plane approach which has been shown to work for both brittle and ductile metals. The key idea is to automatically change the critical plane orientation with respect to different materials and stress states. One potential drawback of the developed model is that it needs an empirical calibration parameter for non-proportional multiaxial loadings, since only the strain terms are used and the out-of-phase hardening cannot be considered. An energy-based model using the critical plane concept is therefore proposed, with the help of the Mroz-Garud hardening rule, to explicitly include the effect of non-proportional hardening under cyclic fatigue loadings. Thus, the empirical calibration for non-proportional loading is not needed, since the out-of-phase hardening is naturally included in the stress calculation. The model predictions are compared with experimental data from the open literature, and it is shown that the proposed model works for both proportional and non-proportional loadings without the empirical calibration. Next, the model is extended to the fatigue analysis of heterogeneous materials by integrating it with the finite element method. Fatigue crack initiation in a representative volume of heterogeneous materials is analyzed using the developed critical plane-energy model, with special focus on the microstructure effect on the multiaxial fatigue life predictions. Several conclusions are drawn and future work is outlined based on the study.

  7. PREDICTIVE MODELS FOR SUPPORT OF INCIDENT MANAGEMENT PROCESS IN IT SERVICE MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Martin SARNOVSKY

    2018-03-01

    Full Text Available ABSTRACT The work presented in this paper is focused on creating predictive models that help in the process of incident resolution and the implementation of IT infrastructure changes, to increase the overall support of IT management. Our main objective was to build the predictive models using machine learning algorithms and the CRISP-DM methodology. We used the incident and related-changes database obtained from the IT environment of the Rabobank Group, which contained information about the processing of incidents during the incident management process. We decided to investigate the dependencies between the incident observation on a particular infrastructure component and the actual source of the incident, as well as the dependency between the incidents and related changes in the infrastructure. We used Random Forest and Gradient Boosting Machine classifiers for identification of the incident source as well as for prediction of the possible impact of the observed incident. Both types of models were tested on a testing set and evaluated using defined metrics.

  8. [Study on the ARIMA model application to predict echinococcosis cases in China].

    Science.gov (United States)

    En-Li, Tan; Zheng-Feng, Wang; Wen-Ce, Zhou; Shi-Zhu, Li; Yan, Lu; Lin, Ai; Yu-Chun, Cai; Xue-Jiao, Teng; Shun-Xian, Zhang; Zhi-Sheng, Dang; Chun-Li, Yang; Jia-Xu, Chen; Wei, Hu; Xiao-Nong, Zhou; Li-Guang, Tian

    2018-02-26

    To predict the monthly reported echinococcosis cases in China with the autoregressive integrated moving average (ARIMA) model, so as to provide a reference for prevention and control of echinococcosis. SPSS 24.0 software was used to construct the ARIMA models based on the time series of monthly reported echinococcosis cases from 2007 to 2015 and from 2007 to 2014, respectively, and the accuracies of the two ARIMA models were compared. The model based on the data of the monthly reported cases of echinococcosis in China from 2007 to 2015 was ARIMA(1, 0, 0)(1, 1, 0)12, the relative error between reported and predicted cases was -13.97%, AR(1) = 0.367 (t = 3.816, P ARIMA(1, 0, 0)(1, 0, 1)12, the relative error between reported and predicted cases was 0.56%, AR(1) = 0.413 (t = 4.244, P ARIMA models as for the same infectious diseases. It needs to be further verified that the more data are accumulated, the shorter the prediction time is, and the smaller the average relative error is. The establishment and prediction of an ARIMA model is a dynamic process that needs to be adjusted and optimized continuously according to the accumulated data; meanwhile, full consideration should be given to the intensity of the work related to reported infectious diseases (such as disease censuses and special investigations).
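    A seasonal ARIMA of the same form as the reported ARIMA(1,0,0)(1,1,0)12 can be fitted as sketched below; the example uses statsmodels on a simulated monthly series rather than SPSS on the actual case counts.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(5)
months = pd.date_range("2007-01", periods=108, freq="MS")        # 2007-2015, monthly
seasonal_mean = 50 + 15 * np.sin(2 * np.pi * months.month / 12)  # assumed seasonality
cases = pd.Series(rng.poisson(seasonal_mean), index=months)

model = SARIMAX(cases, order=(1, 0, 0), seasonal_order=(1, 1, 0, 12))
result = model.fit(disp=False)
forecast = result.forecast(steps=12)   # predicted monthly cases for the next year
print(forecast.round(1))
```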

  9. Finding Furfural Hydrogenation Catalysts via Predictive Modelling.

    Science.gov (United States)

    Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi

    2010-09-10

    We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes were synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (k(H):k(D)=1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R(2)=0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model's predictions, demonstrating the validity and value of predictive modelling in catalyst optimization.

  10. Model Predictive Control for Smart Energy Systems

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus

    pumps, heat tanks, electrical vehicle battery charging/discharging, wind farms, power plants). 2.Embed forecasting methodologies for the weather (e.g. temperature, solar radiation), the electricity consumption, and the electricity price in a predictive control system. 3.Develop optimization algorithms....... Chapter 3 introduces Model Predictive Control (MPC) including state estimation, filtering and prediction for linear models. Chapter 4 simulates the models from Chapter 2 with the certainty equivalent MPC from Chapter 3. An economic MPC minimizes the costs of consumption based on real electricity prices...... that determined the flexibility of the units. A predictive control system easily handles constraints, e.g. limitations in power consumption, and predicts the future behavior of a unit by integrating predictions of electricity prices, consumption, and weather variables. The simulations demonstrate the expected...
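    In the spirit of the economic MPC described above, a toy sketch (assumed prices, demand and storage limits, formulated with the cvxpy modelling library) minimizes electricity cost over a 24-hour prediction horizon subject to simple storage dynamics and power constraints.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(6)
N = 24                                                  # prediction horizon (hours)
price = 0.2 + 0.1 * np.sin(np.arange(N) / 24 * 2 * np.pi) + 0.02 * rng.random(N)
demand = 1.0 + 0.5 * rng.random(N)                      # energy to deliver each hour (kWh)

u = cp.Variable(N)                                      # energy bought from the grid (kWh)
x = cp.Variable(N + 1)                                  # storage level (kWh)

constraints = [x[0] == 2.0, x >= 0.0, x <= 5.0, u >= 0.0, u <= 3.0]
for k in range(N):
    constraints.append(x[k + 1] == x[k] + u[k] - demand[k])   # simple storage dynamics

problem = cp.Problem(cp.Minimize(price @ u), constraints)     # economic objective: cost
problem.solve()
print("optimal cost :", round(problem.value, 3))
print("purchase plan:", np.round(u.value, 2))
```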

  11. PARAMO: a PARAllel predictive MOdeling platform for healthcare analytic research using electronic health records.

    Science.gov (United States)

    Ng, Kenney; Ghoting, Amol; Steinhubl, Steven R; Stewart, Walter F; Malin, Bradley; Sun, Jimeng

    2014-04-01

    Healthcare analytics research increasingly involves the construction of predictive models for disease targets across varying patient cohorts using electronic health records (EHRs). To facilitate this process, it is critical to support a pipeline of tasks: (1) cohort construction, (2) feature construction, (3) cross-validation, (4) feature selection, and (5) classification. To develop an appropriate model, it is necessary to compare and refine models derived from a diversity of cohorts, patient-specific features, and statistical frameworks. The goal of this work is to develop and evaluate a predictive modeling platform that can be used to simplify and expedite this process for health data. To support this goal, we developed a PARAllel predictive MOdeling (PARAMO) platform which (1) constructs a dependency graph of tasks from specifications of predictive modeling pipelines, (2) schedules the tasks in a topological ordering of the graph, and (3) executes those tasks in parallel. We implemented this platform using Map-Reduce to enable independent tasks to run in parallel in a cluster computing environment. Different task scheduling preferences are also supported. We assess the performance of PARAMO on various workloads using three datasets derived from the EHR systems in place at Geisinger Health System and Vanderbilt University Medical Center and an anonymous longitudinal claims database. We demonstrate significant gains in computational efficiency against a standard approach. In particular, PARAMO can build 800 different models on a 300,000 patient data set in 3 hours in parallel compared to 9 days if running sequentially. This work demonstrates that an efficient parallel predictive modeling platform can be developed for EHR data. This platform can facilitate large-scale modeling endeavors and speed-up the research workflow and reuse of health information. This platform is only a first step and provides the foundation for our ultimate goal of building analytic pipelines
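    The scheduling idea, independent of the Map-Reduce implementation, can be sketched with a task dependency graph, a topological ordering and a thread pool; the pipeline stages below are simplified placeholders.

```python
from concurrent.futures import ThreadPoolExecutor
from graphlib import TopologicalSorter   # Python 3.9+

def run(task):
    print(f"running {task}")
    return task

# Pipeline tasks mapped to their prerequisites (simplified placeholder stages).
graph = {
    "cohort_construction": set(),
    "demographic_features": {"cohort_construction"},
    "lab_features": {"cohort_construction"},
    "cross_validation": {"demographic_features", "lab_features"},
    "classification": {"cross_validation"},
}

sorter = TopologicalSorter(graph)
sorter.prepare()
with ThreadPoolExecutor(max_workers=4) as pool:
    while sorter.is_active():
        ready = list(sorter.get_ready())           # tasks whose dependencies are all done
        for task, _ in zip(ready, pool.map(run, ready)):
            sorter.done(task)                      # mark finished, unlocking successors
```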

  12. ECONOMIC AND MATHEMATICAL MODEL OF PREDICTION OF DEVIATION IN MOSCOW SUBURBAN RAILWAY COMPLEX

    Directory of Open Access Journals (Sweden)

    Dmitry I. Valdman

    2013-01-01

    Full Text Available The article deals with the theoretical aspects of mathematical modeling and forecasting. It also describes a mathematical model for forecasting the number of incidents as a function of the number of different types of planned works on the same subject in service facilities, and a validation of the model performed by substituting the data and comparing the values predicted by the model with the actual values for the same periods.

  13. Comparison of simplified models in the prediction of two phase flow in pipelines

    Science.gov (United States)

    Jerez-Carrizales, M.; Jaramillo, J. E.; Fuentes, D.

    2014-06-01

    Prediction of two-phase flow in pipelines is a common task in engineering. It is a complex phenomenon, and many models have been developed to find an approximate solution to the problem. Some older models, such as the Hagedorn & Brown (HB) model, have been highlighted by many authors as giving very good performance, and many modifications have been applied to this method to improve its predictions. In this work, two simplified models based on empiricism (HB, and Mukherjee and Brill, MB) are considered. A mechanistic model (AN), which is based on the physics of the phenomenon but still requires correlations known as closure relations, is also used. In addition, a steady-state, flow-pattern-dependent drift flux model (HK model) is implemented. The implementation of these methods was tested using data published in the scientific literature for vertical upward flows. Furthermore, the predictive performance of the four models is compared against a well from Campo Escuela Colorado. The differences among the four models are smaller than their differences with the experimental data from that well.

  14. Limited Sampling Strategy for Accurate Prediction of Pharmacokinetics of Saroglitazar: A 3-point Linear Regression Model Development and Successful Prediction of Human Exposure.

    Science.gov (United States)

    Joshi, Shuchi N; Srinivas, Nuggehally R; Parmar, Deven V

    2018-03-01

    Our aim was to develop and validate the extrapolative performance of a regression model using a limited sampling strategy for accurate estimation of the area under the plasma concentration versus time curve for saroglitazar. Healthy subject pharmacokinetic data from a well-powered food-effect study (fasted vs fed treatments; n = 50) was used in this work. The first 25 subjects' serial plasma concentration data up to 72 hours and corresponding AUC0-t (ie, 72 hours) from the fasting group comprised a training dataset to develop the limited sampling model. The internal datasets for prediction included the remaining 25 subjects from the fasting group and all 50 subjects from the fed condition of the same study. The external datasets included pharmacokinetic data for saroglitazar from previous single-dose clinical studies. Limited sampling models were composed of 1-, 2-, and 3-concentration-time points' correlation with AUC0-t of saroglitazar. Only models with regression coefficients (R2) > 0.90 were screened for further evaluation. The best R2 model was validated for its utility based on mean prediction error, mean absolute prediction error, and root mean square error. Both correlations between predicted and observed AUC0-t of saroglitazar and verification of precision and bias using Bland-Altman plot were carried out. None of the evaluated 1- and 2-concentration-time points models achieved R2 > 0.90. Among the various 3-concentration-time points models, only 4 equations passed the predefined criterion of R2 > 0.90. Limited sampling models with time points 0.5, 2, and 8 hours (R2 = 0.9323) and 0.75, 2, and 8 hours (R2 = 0.9375) were validated. Mean prediction error, mean absolute prediction error, and root mean square error were prediction of saroglitazar. The same models, when applied to the AUC0-t prediction of saroglitazar sulfoxide, showed mean prediction error, mean absolute prediction error, and root mean square error model predicts the exposure of
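    The limited sampling strategy can be sketched as follows with synthetic concentration-time profiles (not the saroglitazar data): the AUC computed from the full profile is regressed on the concentrations at three sampling times, and the model R² is screened against the 0.90 criterion used above.

```python
import numpy as np
from scipy.integrate import trapezoid
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
t = np.array([0.25, 0.5, 0.75, 1, 2, 4, 8, 12, 24, 48, 72])  # sampling times (h)
n_subj = 25
# One-compartment-like synthetic profiles with between-subject variability (assumed).
ka = rng.lognormal(np.log(1.5), 0.3, n_subj)     # absorption rate constants (1/h)
ke = rng.lognormal(np.log(0.15), 0.3, n_subj)    # elimination rate constants (1/h)
scale = rng.lognormal(np.log(10.0), 0.25, n_subj)
conc = scale[:, None] * (np.exp(-ke[:, None] * t) - np.exp(-ka[:, None] * t))

auc = trapezoid(conc, t, axis=1)                 # AUC(0-72 h) by the trapezoidal rule
X = conc[:, [1, 4, 6]]                           # concentrations at 0.5, 2 and 8 h
model = LinearRegression().fit(X, auc)           # 3-point limited sampling model
print("R^2 =", round(model.score(X, auc), 3))    # screen against the R^2 > 0.90 criterion
```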

  15. How Sensitive Are Transdermal Transport Predictions by Microscopic Stratum Corneum Models to Geometric and Transport Parameter Input?

    Science.gov (United States)

    Wen, Jessica; Koo, Soh Myoung; Lape, Nancy

    2018-02-01

    While predictive models of transdermal transport have the potential to reduce human and animal testing, microscopic stratum corneum (SC) model output is highly dependent on idealized SC geometry, transport pathway (transcellular vs. intercellular), and penetrant transport parameters (e.g., compound diffusivity in lipids). Most microscopic models are limited to a simple rectangular brick-and-mortar SC geometry and do not account for variability across delivery sites, hydration levels, and populations. In addition, these models rely on transport parameters obtained from pure theory, parameter fitting to match in vivo experiments, and time-intensive diffusion experiments for each compound. In this work, we develop a microscopic finite element model that allows us to probe model sensitivity to variations in geometry, transport pathway, and hydration level. Given the dearth of experimentally-validated transport data and the wide range in theoretically-predicted transport parameters, we examine the model's response to a variety of transport parameters reported in the literature. Results show that model predictions are strongly dependent on all aforementioned variations, resulting in order-of-magnitude differences in lag times and permeabilities for distinct structure, hydration, and parameter combinations. This work demonstrates that universally predictive models cannot fully succeed without employing experimentally verified transport parameters and individualized SC structures. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  16. Gene prediction using the Self-Organizing Map: automatic generation of multiple gene models.

    Science.gov (United States)

    Mahony, Shaun; McInerney, James O; Smith, Terry J; Golden, Aaron

    2004-03-05

    Many current gene prediction methods use only one model to represent protein-coding regions in a genome, and so are less likely to predict the location of genes that have an atypical sequence composition. It is likely that future improvements in gene finding will involve the development of methods that can adequately deal with intra-genomic compositional variation. This work explores a new approach to gene-prediction, based on the Self-Organizing Map, which has the ability to automatically identify multiple gene models within a genome. The current implementation, named RescueNet, uses relative synonymous codon usage as the indicator of protein-coding potential. While its raw accuracy rate can be less than other methods, RescueNet consistently identifies some genes that other methods do not, and should therefore be of interest to gene-prediction software developers and genome annotation teams alike. RescueNet is recommended for use in conjunction with, or as a complement to, other gene prediction methods.

  17. Factors predicting work outcome in Japanese patients with schizophrenia: role of multiple functioning levels.

    Science.gov (United States)

    Sumiyoshi, Chika; Harvey, Philip D; Takaki, Manabu; Okahisa, Yuko; Sato, Taku; Sora, Ichiro; Nuechterlein, Keith H; Subotnik, Kenneth L; Sumiyoshi, Tomiki

    2015-09-01

    Functional outcomes in individuals with schizophrenia suggest recovery of cognitive, everyday, and social functioning. Specifically improvement of work status is considered to be most important for their independent living and self-efficacy. The main purposes of the present study were 1) to identify which outcome factors predict occupational functioning, quantified as work hours, and 2) to provide cut-offs on the scales for those factors to attain better work status. Forty-five Japanese patients with schizophrenia and 111 healthy controls entered the study. Cognition, capacity for everyday activities, and social functioning were assessed by the Japanese versions of the MATRICS Cognitive Consensus Battery (MCCB), the UCSD Performance-based Skills Assessment-Brief (UPSA-B), and the Social Functioning Scale Individuals' version modified for the MATRICS-PASS (Modified SFS for PASS), respectively. Potential factors for work outcome were estimated by multiple linear regression analyses (predicting work hours directly) and a multiple logistic regression analyses (predicting dichotomized work status based on work hours). ROC curve analyses were performed to determine cut-off points for differentiating between the better- and poor work status. The results showed that a cognitive component, comprising visual/verbal learning and emotional management, and a social functioning component, comprising independent living and vocational functioning, were potential factors for predicting work hours/status. Cut-off points obtained in ROC analyses indicated that 60-70% achievements on the measures of those factors were expected to maintain the better work status. Our findings suggest that improvement on specific aspects of cognitive and social functioning are important for work outcome in patients with schizophrenia.

  18. Prediction skill of rainstorm events over India in the TIGGE weather prediction models

    Science.gov (United States)

    Karuna Sagar, S.; Rajeevan, M.; Vijaya Bhaskara Rao, S.; Mitra, A. K.

    2017-12-01

    Extreme rainfall events pose a serious threat of severe floods in many countries worldwide. Therefore, advance prediction of their occurrence and spatial distribution is essential. In this paper, an analysis has been made to assess the skill of numerical weather prediction models in predicting rainstorms over India. Using a gridded daily rainfall data set and objective criteria, 15 rainstorms were identified during the monsoon season (June to September). The analysis was made using three TIGGE (THe Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble) models. The models considered are the European Centre for Medium-Range Weather Forecasts (ECMWF), the National Centre for Environmental Prediction (NCEP) and the UK Met Office (UKMO). Verification of the TIGGE models for 43 observed rainstorm days from 15 rainstorm events has been made for the period 2007-2015. The comparison reveals that rainstorm events are predictable up to 5 days in advance, however with a bias in spatial distribution and intensity. Statistical parameters such as the mean error (ME) or bias, the root mean square error (RMSE) and the correlation coefficient (CC) have been computed over the rainstorm region using the multi-model ensemble (MME) mean. The study reveals that the spread is large in ECMWF and UKMO, followed by the NCEP model. Though the ensemble spread is quite small in NCEP, the ensemble member averages are not well predicted. The rank histograms suggest that the forecasts under-predict. The modified Contiguous Rain Area (CRA) technique was used to verify the spatial as well as the quantitative skill of the TIGGE models. Overall, the contribution from the displacement and pattern errors to the total RMSE is found to be larger in magnitude. The volume error increases from the 24-h forecast to the 48-h forecast in all three models.
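    The grid-point verification statistics mentioned above (mean error or bias, RMSE and correlation coefficient) reduce to a few lines of array arithmetic; the forecast and observation fields below are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(8)
obs = rng.gamma(2.0, 10.0, size=(50, 50))          # observed rainfall (mm)
fcst = obs + rng.normal(2.0, 8.0, size=obs.shape)  # a biased, noisy "forecast"

mean_error = np.mean(fcst - obs)                   # bias (ME)
rmse = np.sqrt(np.mean((fcst - obs) ** 2))         # root mean square error
cc = np.corrcoef(fcst.ravel(), obs.ravel())[0, 1]  # correlation coefficient
print(f"ME={mean_error:.2f} mm  RMSE={rmse:.2f} mm  CC={cc:.2f}")
```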

  19. Video Quality Prediction Models Based on Video Content Dynamics for H.264 Video over UMTS Networks

    Directory of Open Access Journals (Sweden)

    Asiya Khan

    2010-01-01

    Full Text Available The aim of this paper is to present video quality prediction models for objective, non-intrusive prediction of H.264 encoded video for all content types, combining parameters in both the physical and application layers over Universal Mobile Telecommunication System (UMTS) networks. In order to characterize the Quality of Service (QoS) level, a learning model based on an Adaptive Neural Fuzzy Inference System (ANFIS) and a second model based on non-linear regression analysis are proposed to predict the video quality in terms of the Mean Opinion Score (MOS). The objective of the paper is two-fold: first, to find the impact of QoS parameters on end-to-end video quality for H.264 encoded video; second, to develop learning models based on ANFIS and non-linear regression analysis to predict video quality over UMTS networks by considering the impact of radio link loss models. The loss models considered are 2-state Markov models. Both models are trained with a combination of physical and application layer parameters and validated with an unseen dataset. Preliminary results show that good prediction accuracy was obtained from both models. This work should help in the development of a reference-free video prediction model and QoS control methods for video over UMTS networks.

  20. Predicting climate-induced range shifts: model differences and model reliability.

    Science.gov (United States)

    Joshua J. Lawler; Denis White; Ronald P. Neilson; Andrew R. Blaustein

    2006-01-01

    Predicted changes in the global climate are likely to cause large shifts in the geographic ranges of many plant and animal species. To date, predictions of future range shifts have relied on a variety of modeling approaches with different levels of model accuracy. Using a common data set, we investigated the potential implications of alternative modeling approaches for...

  1. The development of U. S. soil erosion prediction and modeling

    Directory of Open Access Journals (Sweden)

    John M. Laflen

    2013-09-01

    Full Text Available Soil erosion prediction technology began over 70 years ago when Austin Zingg published a relationship between soil erosion (by water) and land slope and length, followed shortly by a relationship by Dwight Smith that expanded this equation to include conservation practices. But, it was nearly 20 years before this work's expansion resulted in the Universal Soil Loss Equation (USLE), perhaps the foremost achievement in soil erosion prediction in the last century. The USLE has increased in application and complexity, and its usefulness and limitations have led to the development of additional technologies and new science in soil erosion research and prediction. Main among these new technologies is the Water Erosion Prediction Project (WEPP) model, which has helped to overcome many of the shortcomings of the USLE, and increased the scale over which erosion by water can be predicted. Areas of application of erosion prediction include almost all land types: urban, rural, cropland, forests, rangeland, and construction sites. Specialty applications of WEPP include prediction of radioactive material movement with soils at a superfund cleanup site, and near real-time daily estimation of soil erosion for the entire state of Iowa.
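
    For reference, the USLE estimates long-term average annual soil loss as a product of empirical factors; the standard statement of the equation is sketched below (factor symbols follow common usage and are not quoted from this article).

```latex
% Universal Soil Loss Equation (USLE)
A = R \cdot K \cdot LS \cdot C \cdot P
% A  : average annual soil loss (e.g. t ha^{-1} yr^{-1})
% R  : rainfall--runoff erosivity factor
% K  : soil erodibility factor
% LS : slope length and steepness factor
% C  : cover--management factor
% P  : support practice factor
```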

  2. Assessment of predictive dermal exposure to chemicals in the work environment

    Directory of Open Access Journals (Sweden)

    Agnieszka Jankowska

    2017-08-01

    Full Text Available Assessment of dermal exposure to chemicals in the work environment is problematic, mainly as a result of the lack of measurement data on occupational exposure to chemicals. Due to the common prevalence of occupational skin exposure and its health consequences, it is necessary to look for efficient solutions that allow reliable exposure assessment. The aim of the study is to present predictive models used to assess non-measured dermal exposure, as well as to acquaint Polish users with the principles of how the selected model functions. This paper presents examples of models that assist the employer in the assessment of occupational exposure associated with skin contact with chemicals, developed in European Union (EU) countries as well as in countries outside the EU. Based on the literature data, the dermal exposure models EASE (Estimation and Assessment of Substance Exposure), COSHH Essentials (Control of Substances Hazardous to Health Regulations), DREAM (Dermal Exposure Assessment Method), Stoffenmanager, ECETOC TRA (European Centre for Ecotoxicology and Toxicology of Chemicals Targeted Risk Assessment), MEASE (Metal's EASE), PHED (Pesticide Handlers Exposure Database), DERM (Dermal Exposure Ranking Method) and RISKOFDERM (Risk Assessment of Occupational Dermal Exposure to Chemicals) are briefly described. Moreover, the characteristics of RISKOFDERM, guidelines for its use and information on input and output data are further detailed. The problem of full work-shift dermal exposure assessment is described. An example of exposure assessment using RISKOFDERM and an evaluation of its effectiveness to date are also presented. When no measurements are available, RISKOFDERM allows dermal exposure assessment and thus can improve the risk assessment quality and the effectiveness of dermal risk management. Med Pr 2017;68(4):557–569

  3. Model predictive control classical, robust and stochastic

    CERN Document Server

    Kouvaritakis, Basil

    2016-01-01

    For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered and the state of the art in computationally tractable methods based on uncertainty tubes presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...

  4. Does trait affectivity predict work-to-family conflict and enrichment beyond job characteristics?

    Science.gov (United States)

    Tement, Sara; Korunka, Christian

    2013-01-01

    The present study examines whether negative and positive affectivity (NA and PA, respectively) predict different forms of work-to-family conflict (WFC-time, WFC-strain, WFC-behavior) and enrichment (WFE-development, WFE-affect, WFE-capital) beyond job characteristics (workload, autonomy, variety, workplace support). Furthermore, interactions between job characteristics and trait affectivity while predicting WFC and WFE were examined. Using a large sample of Slovenian employees (N = 738), NA and PA were found to explain variance in WFC as well as in WFE above and beyond job characteristics. More precisely, NA significantly predicted WFC, whereas PA significantly predicted WFE. In addition, several interactive effects were found to predict forms of WFC and WFE. These results highlight the importance of trait affectivity in work-family research. They provide further support for the crucial impact of job characteristics as well.

  5. A Bayesian Performance Prediction Model for Mathematics Education: A Prototypical Approach for Effective Group Composition

    Science.gov (United States)

    Bekele, Rahel; McPherson, Maggie

    2011-01-01

    This research work presents a Bayesian Performance Prediction Model that was created in order to determine the strength of personality traits in predicting the level of mathematics performance of high school students in Addis Ababa. It is an automated tool that can be used to collect information from students for the purpose of effective group…

  6. Model predictive Controller for Mobile Robot

    OpenAIRE

    Alireza Rezaee

    2017-01-01

    This paper proposes a Model Predictive Controller (MPC) for control of a P2AT mobile robot. MPC refers to a group of controllers that employ an explicit model of the process to predict its future behavior over an extended prediction horizon. The design of an MPC is formulated as an optimal control problem. This problem is then treated as a linear quadratic regulator (LQR) problem and solved by making use of the Riccati equation. To show the effectiveness of the proposed method this controller is...
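
    The LQR-via-Riccati step can be sketched as follows; the double-integrator dynamics and weights are illustrative stand-ins rather than the P2AT robot model used in the paper.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Generic discrete-time LQR gain via the discrete algebraic Riccati equation.
# The double-integrator dynamics below are an illustrative stand-in, not the
# P2AT robot model used in the paper.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
B = np.array([[0.5 * dt**2], [dt]])     # input: acceleration
Q = np.diag([10.0, 1.0])                # state weighting
R = np.array([[0.1]])                   # input weighting

P = solve_discrete_are(A, B, Q, R)                  # Riccati solution
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # optimal feedback gain

# Closed-loop simulation from an initial offset
x = np.array([[1.0], [0.0]])
for _ in range(50):
    u = -K @ x
    x = A @ x + B @ u
print("final state:", x.ravel())
```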

  7. Model Predictive Control of a Wave Energy Converter with Discrete Fluid Power Power Take-Off System

    DEFF Research Database (Denmark)

    Hansen, Anders Hedegaard; Asmussen, Magnus Færing; Bech, Michael Møller

    2018-01-01

    Wave power extraction algorithms for wave energy converters are normally designed without taking system losses into account, leading to suboptimal power extraction. In the current work, a model predictive power extraction algorithm is designed for a discretized power take-off system. It is shown how...... the quantized nature of a discrete fluid power system may be included in a new model predictive control algorithm, leading to a significant increase in the harvested power. A detailed investigation of the influence of the prediction horizon and the time step is reported. Furthermore, it is shown how...

  8. Deep Predictive Models in Interactive Music

    OpenAIRE

    Martin, Charles P.; Ellefsen, Kai Olav; Torresen, Jim

    2018-01-01

    Automatic music generation is a compelling task where much recent progress has been made with deep learning models. In this paper, we ask how these models can be integrated into interactive music systems; how can they encourage or enhance the music making of human users? Musical performance requires prediction to operate instruments, and perform in groups. We argue that predictive models could help interactive systems to understand their temporal context, and ensemble behaviour. Deep learning...

  9. Risk prediction model: Statistical and artificial neural network approach

    Science.gov (United States)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand suitable approaches to the development and validation of a risk prediction model. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was carried out. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, the artificial neural network approach to developing prediction models was more accurate than the statistical approach. However, only limited published literature currently discusses which approach is more accurate for risk prediction model development.
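
    A minimal sketch of the comparison the review describes (a statistical baseline versus a small artificial neural network) is given below; the synthetic data, features and network architecture are assumptions made for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# Illustrative comparison of a statistical model (logistic regression) and a
# small artificial neural network on synthetic risk data.
X, y = make_classification(n_samples=1000, n_features=12, n_informative=6,
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=42)

logit = LogisticRegression(max_iter=1000).fit(X_train, y_train)
ann = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000,
                    random_state=42).fit(X_train, y_train)

for name, model in [("logistic regression", logit), ("neural network", ann)]:
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```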

  10. Working Memory in Written Composition: An Evaluation of the 1996 Model

    Directory of Open Access Journals (Sweden)

    Ronald T. Kellogg

    2013-10-01

    Full Text Available A model of how working memory, as conceived by Baddeley (1986), supports the planning of ideas, translating ideas into written sentences, and reviewing the ideas and text already produced was proposed by Kellogg (1996). A progress report based on research from the past 17 years shows strong support for the core assumption that planning, translating, and reviewing are all dependent on the central executive. Similarly, the translation of ideas into a sentence does in fact also require verbal working memory, but the claim that editing makes no demands on the phonological loop is tenuous. As predicted by the model, planning also engages the visuo-spatial sketchpad. However, it turns out to do so only in planning with concrete concepts that elicit mental imagery. Abstract concepts do not require visuo-spatial resources, a point not anticipated by the original model. Moreover, it is unclear to what extent planning involves spatial as opposed to visual working memory. Contrary to Baddeley’s original model, these are now known to be independent stores of working memory; the specific role of the spatial store in writing is uncertain based on the existing literature. The implications of this body of research for the instruction of writing are considered in the final section of the paper.

  11. When high working memory capacity is and is not beneficial for predicting nonlinear processes.

    Science.gov (United States)

    Fischer, Helen; Holt, Daniel V

    2017-04-01

    Predicting the development of dynamic processes is vital in many areas of life. Previous findings are inconclusive as to whether higher working memory capacity (WMC) is always associated with using more accurate prediction strategies, or whether higher WMC can also be associated with using overly complex strategies that do not improve accuracy. In this study, participants predicted a range of systematically varied nonlinear processes based on exponential functions where prediction accuracy could or could not be enhanced using well-calibrated rules. Results indicate that higher WMC participants seem to rely more on well-calibrated strategies, leading to more accurate predictions for processes with highly nonlinear trajectories in the prediction region. Predictions of lower WMC participants, in contrast, point toward an increased use of simple exemplar-based prediction strategies, which perform just as well as more complex strategies when the prediction region is approximately linear. These results imply that with respect to predicting dynamic processes, working memory capacity limits are not generally a strength or a weakness, but that this depends on the process to be predicted.

  12. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.

  13. Predictive models of moth development

    Science.gov (United States)

    Degree-day models link ambient temperature to insect life-stages, making such models valuable tools in integrated pest management. These models increase management efficacy by predicting pest phenology. In Wisconsin, the top insect pest of cranberry production is the cranberry fruitworm, Acrobasis v...
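
    A degree-day model of this kind reduces to a simple accumulation of heat units above a base temperature; the sketch below uses the common averaging method, and the base temperature and daily temperatures are illustrative assumptions, not values from a published cranberry fruitworm model.

```python
# Minimal degree-day accumulation sketch.
def growing_degree_days(t_min, t_max, base=10.0):
    """Accumulate degree-days from daily min/max temperatures (deg C)."""
    total = 0.0
    for lo, hi in zip(t_min, t_max):
        mean = (lo + hi) / 2.0
        total += max(mean - base, 0.0)   # simple averaging method
    return total

# One week of synthetic daily temperatures
t_min = [8, 9, 11, 12, 10, 13, 14]
t_max = [18, 20, 23, 25, 21, 26, 27]
print(f"accumulated degree-days: {growing_degree_days(t_min, t_max):.1f}")
```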

  14. Model Prediction Control For Water Management Using Adaptive Prediction Accuracy

    NARCIS (Netherlands)

    Tian, X.; Negenborn, R.R.; Van Overloop, P.J.A.T.M.; Mostert, E.

    2014-01-01

    In the field of operational water management, Model Predictive Control (MPC) has gained popularity owing to its versatility and flexibility. The MPC controller, which takes predictions, time delay and uncertainties into account, can be designed for multi-objective management problems and for

  15. Testing and intercomparison of model predictions of radionuclide migration from a hypothetical area source

    International Nuclear Information System (INIS)

    O'Brien, R.S.; Yu, C.; Zeevaert, T.; Olyslaegers, G.; Amado, V.; Setlow, L.W.; Waggitt, P.W.

    2008-01-01

    This work was carried out as part of the International Atomic Energy Agency's EMRAS program. One aim of the work was to develop scenarios for testing computer models designed for simulating radionuclide migration in the environment, and to use these scenarios for testing the models and comparing predictions from different models. This paper presents the results of the development and testing of a hypothetical area source of NORM waste/residue using two complex computer models and one screening model. There are significant differences between the complex models in the methods used to model groundwater flow. The hypothetical source was used because of its relative simplicity and because of difficulties encountered in finding comprehensive, well-validated data sets for real sites. The source consisted of a simple repository of uniform thickness, with 1 Bq g⁻¹ of uranium-238 (²³⁸U) (in secular equilibrium with its decay products) distributed uniformly throughout the waste. This approximates real situations such as engineered repositories, waste rock piles, tailings piles and landfills. Specification of the site also included the physical layout, vertical stratigraphic details, soil type for each layer of material, precipitation and runoff details, groundwater flow parameters, and meteorological data. Calculations were carried out with and without a cover layer of clean soil above the waste, for people working and living at different locations relative to the waste. The predictions of the two complex models showed several differences which need more detailed examination. The scenario is available for testing by other modelers. It can also be used as a planning tool for remediation work or for repository design, by changing the scenario parameters and running the models for a range of different inputs. Further development will include applying models to real scenarios and integrating environmental impact assessment methods with the safety assessment tools currently

  16. Predicting change in symptoms of depression during the transition to university: the roles of BDNF and working memory capacity.

    Science.gov (United States)

    LeMoult, Joelle; Carver, Charles S; Johnson, Sheri L; Joormann, Jutta

    2015-03-01

    Studies on depression risk emphasize the importance of both cognitive and genetic vulnerability factors. The present study has provided the first examination of whether working memory capacity, the BDNF Val66Met polymorphism, and their interaction predict changes in symptoms of depression during the transition to university. Early in the semester, students completed a self-report measure of depressive symptoms and a modified version of the reading span task to assess working memory capacity in the presence of both neutral and negative distractors. Whole blood was genotyped for the BDNF Val66Met polymorphism. Students returned at the end of the semester to complete additional self-report questionnaires. Neither working memory capacity nor the BDNF Val66Met polymorphism predicted change in depressive symptoms either independently or in interaction with self-reported semester difficulty. The BDNF Val66Met polymorphism, however, moderated the association between working memory capacity and symptom change. Among met carriers, lower working memory capacity in the presence of negative, but not neutral, distractors was associated with increased symptoms of depression over the semester. For the val/val group, working memory capacity did not predict symptom change. These findings contribute directly to biological and cognitive models of depression and highlight the importance of examining Gene × Cognition interactions when investigating risk for depression.

  17. Predicting water main failures using Bayesian model averaging and survival modelling approach

    International Nuclear Information System (INIS)

    Kabir, Golam; Tesfamariam, Solomon; Sadiq, Rehan

    2015-01-01

    To develop an effective preventive or proactive repair and replacement action plan, water utilities often rely on water main failure prediction models. However, in predicting the failure of water mains, uncertainty is inherent regardless of the quality and quantity of data used in the model. To improve the understanding of water main failure, a Bayesian framework is developed for predicting the failure of water mains considering uncertainties. In this study, the Bayesian model averaging (BMA) method is used to identify the influential pipe-dependent and time-dependent covariates while accounting for model uncertainties, whereas a Bayesian Weibull proportional hazard model (BWPHM) is applied to develop survival curves and to predict the failure rates of water mains. To validate the proposed framework, it is applied to predict the failure of cast iron (CI) and ductile iron (DI) pipes in the water distribution network of the City of Calgary, Alberta, Canada. Results indicate that the predicted 95% uncertainty bounds of the proposed BWPHMs effectively capture the observed breaks for both CI and DI water mains. Moreover, the proposed BWPHMs perform better than the Cox proportional hazard model (Cox-PHM), owing to the use of a Weibull distribution for the baseline hazard function and the consideration of model uncertainties. - Highlights: • Prioritize rehabilitation and replacement (R/R) strategies for water mains. • Consider uncertainties in the failure prediction. • Improve the prediction capability of water main failure models. • Identify the influential and appropriate covariates for different models. • Determine the effects of the covariates on failure
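
    For orientation, a Weibull proportional hazard model of the kind applied here typically takes the generic form below; the notation is standard and not quoted from the paper.

```latex
% Weibull proportional hazard model (generic form)
h(t \mid \mathbf{x}) = h_0(t)\,\exp\!\bigl(\boldsymbol{\beta}^{\top}\mathbf{x}\bigr),
\qquad
h_0(t) = \frac{k}{\lambda}\left(\frac{t}{\lambda}\right)^{k-1}
% h_0(t) : Weibull baseline hazard with shape k and scale \lambda
% \mathbf{x} : pipe-dependent and time-dependent covariates
% \boldsymbol{\beta} : regression coefficients
```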

  18. Do burnout and work engagement predict depressive symptoms and life satisfaction? A three-wave seven-year prospective study.

    Science.gov (United States)

    Hakanen, Jari J; Schaufeli, Wilmar B

    2012-12-10

    Burnout and work engagement have been viewed as opposite, yet distinct states of employee well-being. We investigated whether work-related indicators of well-being (i.e. burnout and work engagement) spill-over and generalize to context-free well-being (i.e. depressive symptoms and life satisfaction). More specifically, we examined the causal direction: does burnout/work engagement lead to depressive symptoms/life satisfaction, or the other way around? Three surveys were conducted. In 2003, 71% of all Finnish dentists were surveyed (n=3255), and the response rate of the 3-year follow-up was 84% (n=2555). The second follow-up was conducted four years later with a response rate of 86% (n=1964). Structural equation modeling was used to investigate the cross-lagged associations between the study variables across time. Burnout predicted depressive symptoms and life dissatisfaction from T1 to T2 and from T2 to T3. Conversely, work engagement had a negative effect on depressive symptoms and a positive effect on life satisfaction, both from T1 to T2 and from T2 to T3, even after adjusting for the impact of burnout at every occasion. The study was conducted among one occupational group, which limits its generalizability. Work-related well-being predicts general wellbeing in the long-term. For example, burnout predicts depressive symptoms and not vice versa. In addition, burnout and work engagement are not direct opposites. Instead, both have unique, incremental impacts on life satisfaction and depressive symptoms. Copyright © 2012 Elsevier B.V. All rights reserved.

  19. Factors predicting work outcome in Japanese patients with schizophrenia: role of multiple functioning levels

    Directory of Open Access Journals (Sweden)

    Chika Sumiyoshi

    2015-09-01

    Full Text Available Functional outcomes in individuals with schizophrenia suggest recovery of cognitive, everyday, and social functioning. Specifically, improvement of work status is considered to be most important for their independent living and self-efficacy. The main purposes of the present study were 1) to identify which outcome factors predict occupational functioning, quantified as work hours, and 2) to provide cut-offs on the scales for those factors to attain better work status. Forty-five Japanese patients with schizophrenia and 111 healthy controls entered the study. Cognition, capacity for everyday activities, and social functioning were assessed by the Japanese versions of the MATRICS Consensus Cognitive Battery (MCCB), the UCSD Performance-based Skills Assessment-Brief (UPSA-B), and the Social Functioning Scale Individuals' version modified for the MATRICS-PASS (Modified SFS for PASS), respectively. Potential factors for work outcome were estimated by multiple linear regression analyses (predicting work hours directly) and a multiple logistic regression analysis (predicting dichotomized work status based on work hours). ROC curve analyses were performed to determine cut-off points for differentiating between better and poorer work status. The results showed that a cognitive component, comprising visual/verbal learning and emotional management, and a social functioning component, comprising independent living and vocational functioning, were potential factors for predicting work hours/status. Cut-off points obtained in the ROC analyses indicated that 60–70% achievement on the measures of those factors was expected to maintain the better work status. Our findings suggest that improvements in specific aspects of cognitive and social functioning are important for work outcome in patients with schizophrenia.

  20. Patient Similarity in Prediction Models Based on Health Data: A Scoping Review

    Science.gov (United States)

    Sharafoddini, Anis; Dubin, Joel A

    2017-01-01

    data, wavelet transform and term frequency-inverse document frequency methods were employed to extract predictors. Selecting predictors with potential to highlight special cases and defining new patient similarity metrics were among the gaps identified in the existing literature that provide starting points for future work. Patient status prediction models based on patient similarity and health data offer exciting potential for personalizing and ultimately improving health care, leading to better patient outcomes. PMID:28258046

  1. Testing the predictive power of nuclear mass models

    International Nuclear Information System (INIS)

    Mendoza-Temis, J.; Morales, I.; Barea, J.; Frank, A.; Hirsch, J.G.; Vieyra, J.C. Lopez; Van Isacker, P.; Velazquez, V.

    2008-01-01

    A number of tests are introduced which probe the ability of nuclear mass models to extrapolate. Three models are analyzed in detail: the liquid drop model, the liquid drop model plus empirical shell corrections and the Duflo-Zuker mass formula. If predicted nuclei are close to the fitted ones, average errors in predicted and fitted masses are similar. However, the challenge of predicting nuclear masses in a region stabilized by shell effects (e.g., the lead region) is far more difficult. The Duflo-Zuker mass formula emerges as a powerful predictive tool

  2. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...

  3. Longitudinal Associations between Maternal Work Stress, Negative Work-Family Spillover, and Depressive Symptoms.

    Science.gov (United States)

    Goodman, W Benjamin; Crouter, Ann C

    2009-07-01

    The current study examined associations over an 18-month period between maternal work stressors, negative work-family spillover, and depressive symptoms in a sample of 414 employed mothers with young children living in six predominantly nonmetropolitan counties in the Eastern United States. Results from a one-group mediation model revealed that a less flexible work environment and greater work pressure predicted higher levels of depressive symptoms, and further, that these associations were mediated by perceptions of negative work-family spillover. Additionally, results from a two-group mediation model suggested that work pressure predicted greater perceptions of spillover only for mothers employed full-time. Findings suggest the need for policies that reduce levels of work stress and help mothers manage their work and family responsibilities.

  4. Foundation Settlement Prediction Based on a Novel NGM Model

    Directory of Open Access Journals (Sweden)

    Peng-Yu Chen

    2014-01-01

    Full Text Available Prediction of foundation or subgrade settlement is very important during engineering construction. Given that many settlement-time sequences exhibit a nonhomogeneous index trend, a novel grey forecasting model called the NGM(1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM(1,1,k,c) model has the property of white exponential law coincidence and can predict a pure nonhomogeneous index sequence precisely. We used two case studies to verify the predictive effect of the NGM(1,1,k,c) model for settlement prediction. The results show that this model can achieve excellent prediction accuracy; thus, the model is quite suitable for simulation and prediction of approximately nonhomogeneous index sequences and has excellent application value in settlement prediction.
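
    The grey-model family the paper builds on can be illustrated with the basic GM(1,1) recursion below; this is the simpler ancestor of the NGM(1,1,k,c) model, shown only to make the mechanics concrete, and the settlement values are synthetic.

```python
import numpy as np

def gm11_predict(x0, n_ahead=3):
    """Basic GM(1,1) grey forecast, the simpler ancestor of NGM(1,1,k,c).

    x0 : 1-D sequence of positive observations (e.g. settlements in mm)
    Returns fitted values for the observed points plus n_ahead forecasts.
    """
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                 # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                      # background values
    B = np.column_stack((-z1, np.ones(len(z1))))
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # development/grey input
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # whitenization solution
    return np.diff(x1_hat, prepend=0.0)                # back to original scale

# Synthetic settlement-time sequence (mm), purely illustrative
settlement = [12.1, 14.8, 17.0, 19.4, 21.5, 23.9]
print(np.round(gm11_predict(settlement, n_ahead=2), 2))
```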

  5. Evaluation of Turbulence Models Through Predictions of a Simple 3D Boundary Layer.

    Science.gov (United States)

    Jammalamadaka, A.

    2005-11-01

    Although a number of popular turbulence models are now commonly used to predict complex 3D flows, in particular for industrial applications, very limited full evaluation of their performance has been carried out using thoroughly documented experiments. One such experiment is that of Bruns, Fernholz and Monkewitz (JFM, vol. 393; 1999) in a boundary layer on the wall of an S-shaped duct, where the wall shear stress was measured accurately and independently in the original work and more recently with oil-film interferometry by Reudi et al. (Exp Fluids vol. 35; 2003). Results from various models including k-ɛ, Spalart-Allmaras, k-φ, Menter's SST, and RSM are compared with the experimental results to gain a better understanding of the strengths and limitations of the various models. In addition to the various pressure distributions along the S-duct and the shear stress development on the test surface, the normal stresses are compared for all the models, with some surprising results regarding the difficulty of predicting even such a simple 3D turbulent flow. Comparisons of other Reynolds stresses with models that predict them directly also reveal interesting results. In general, the predictions of the models agree more with each other than with the experiment, suggesting that they suffer from common shortcomings. Also, the deviations of the predictions from the experiment grow to significant levels just beyond the development of the cross-over transverse velocity profile.

  6. Electrostatic ion thrusters - towards predictive modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)

    2014-02-15

    The development of electrostatic ion thrusters so far has mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This resulted in considerable effort regarding prototype design, construction and testing and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and space craft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules a new quality in the description of electrostatic thrusters can be reached. These open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  7. Predictive validation of an influenza spread model.

    Directory of Open Access Journals (Sweden)

    Ayaz Hyder

    Full Text Available BACKGROUND: Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. METHODS AND FINDINGS: We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998-1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability and depended on the method of forecasting (static or dynamic). CONCLUSIONS: Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve

  8. Predictive Validation of an Influenza Spread Model

    Science.gov (United States)

    Hyder, Ayaz; Buckeridge, David L.; Leung, Brian

    2013-01-01

    Background Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. Methods and Findings We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998–1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability and depended on the method of forecasting-static or dynamic. Conclusions Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive

  9. Integrating geophysics and hydrology for reducing the uncertainty of groundwater model predictions and improved prediction performance

    DEFF Research Database (Denmark)

    Christensen, Nikolaj Kruse; Christensen, Steen; Ferre, Ty

    the integration of geophysical data in the construction of a groundwater model increases the prediction performance. We suggest that modelers should perform a hydrogeophysical “test-bench” analysis of the likely value of geophysics data for improving groundwater model prediction performance before actually...... and the resulting predictions can be compared with predictions from the ‘true’ model. By performing this analysis we expect to give the modeler insight into how the uncertainty of model-based prediction can be reduced.......A major purpose of groundwater modeling is to help decision-makers in efforts to manage the natural environment. Increasingly, it is recognized that both the predictions of interest and their associated uncertainties should be quantified to support robust decision making. In particular, decision...

  10. Model Predictive Control for Linear Complementarity and Extended Linear Complementarity Systems

    Directory of Open Access Journals (Sweden)

    Bambang Riyanto

    2005-11-01

    Full Text Available In this paper, we propose a model predictive control method for linear complementarity and extended linear complementarity systems by formulating the optimization along the prediction horizon as a mixed integer quadratic program. Such systems contain interaction between continuous dynamics and discrete event systems and can therefore be categorized as hybrid systems. As linear complementarity and extended linear complementarity systems find applications in different research areas, such as impact mechanical systems, traffic control and process control, this work will also contribute to the development of control design methods for those areas, as shown by three given examples.
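
    In the spirit of that formulation, a generic MPC problem for a linear complementarity system can be written as the mixed integer quadratic program sketched below; the symbols are chosen for illustration and are not taken from the paper.

```latex
% Generic MPC formulation for a linear complementarity system over horizon N
\min_{u_k,\,x_k,\,w_k,\,z_k,\,\delta_k}\;
  \sum_{k=0}^{N-1}\bigl(x_k^{\top} Q x_k + u_k^{\top} R u_k\bigr)
\quad \text{s.t.}\quad
x_{k+1} = A x_k + B u_k + D w_k,
\qquad
z_k = E x_k + F u_k + G w_k + g,
% complementarity 0 <= w_k \perp z_k >= 0 encoded with binaries and big-M,
% which turns the horizon optimization into an MIQP:
0 \le w_k \le M \delta_k,
\qquad
0 \le z_k \le M (\mathbf{1}-\delta_k),
\qquad
\delta_k \in \{0,1\}^{m}.
```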

  11. NOx PREDICTION FOR FBC BOILERS USING EMPIRICAL MODELS

    Directory of Open Access Journals (Sweden)

    Jiří Štefanica

    2014-02-01

    Full Text Available Reliable prediction of NOx emissions can provide useful information for boiler design and fuel selection. Recently used kinetic prediction models for FBC boilers are overly complex and require large computing capacity. Even so, there are many uncertainties in the case of FBC boilers. An empirical modeling approach for NOx prediction has been used exclusively for PCC boilers. No reference is available for modifying this method for FBC conditions. This paper presents possible advantages of empirical modeling based prediction of NOx emissions for FBC boilers, together with a discussion of its limitations. Empirical models are reviewed, and are applied to operation data from FBC boilers used for combusting Czech lignite coal or coal-biomass mixtures. Modifications to the model are proposed in accordance with theoretical knowledge and prediction accuracy.

  12. Stock management in hospital pharmacy using chance-constrained model predictive control.

    Science.gov (United States)

    Jurado, I; Maestre, J M; Velarde, P; Ocampo-Martinez, C; Fernández, I; Tejera, B Isla; Prado, J R Del

    2016-05-01

    One of the most important problems in the pharmacy department of a hospital is stock management. The clinical need for drugs must be satisfied with a limited workforce while minimizing the use of economic resources. The complexity of the problem resides in the random nature of the drug demand and the multiple constraints that must be taken into account in every decision. In this article, chance-constrained model predictive control is proposed to deal with this problem. The flexibility of model predictive control allows the different objectives and constraints involved in the problem to be taken into account explicitly, while the use of chance constraints provides a trade-off between conservativeness and efficiency. The proposed solution is assessed with a view to its implementation in two Spanish hospitals. Copyright © 2015 Elsevier Ltd. All rights reserved.
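
    The role of the chance constraints can be illustrated with a simple quantile reformulation; the normal-demand assumption, the numbers and the horizon below are illustrative only and are not taken from the hospitals studied.

```python
from scipy.stats import norm

# Chance constraint P(stock >= demand) >= 1 - eps turned into a deterministic
# bound, assuming (for illustration only) normally distributed daily demand.
def minimum_stock(mean_demand, std_demand, eps=0.05, horizon_days=7):
    """Smallest stock level covering the horizon with probability 1 - eps."""
    mu = mean_demand * horizon_days
    sigma = std_demand * (horizon_days ** 0.5)   # independent daily demands
    return mu + norm.ppf(1.0 - eps) * sigma      # quantile reformulation

# Hypothetical drug: mean 40 units/day, standard deviation 12 units/day
print(f"reorder up to {minimum_stock(40, 12):.0f} units")
```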

  13. Nudging and predictability in regional climate modelling: investigation in a nested quasi-geostrophic model

    Science.gov (United States)

    Omrani, Hiba; Drobinski, Philippe; Dubos, Thomas

    2010-05-01

    In this work, we consider the effect of indiscriminate and spectral nudging on the large and small scales of an idealized model simulation. The model is a two-layer quasi-geostrophic model on the beta-plane, driven at its boundaries by the "global" version with periodic boundary conditions. This setup mimics the configuration used for regional climate modelling. The effect of large-scale nudging is studied by using the "perfect model" approach. Two sets of experiments are performed: (1) the effect of nudging is investigated with a "global" high-resolution two-layer quasi-geostrophic model driven by a low-resolution two-layer quasi-geostrophic model; (2) similar simulations are conducted with the two-layer quasi-geostrophic Limited Area Model (LAM), where the size of the LAM domain comes into play in addition to the first set of simulations. The study shows that the indiscriminate nudging time that minimizes the error at both the large and small scales is close to the predictability time. For spectral nudging, the optimum nudging time should tend to zero, since the best large-scale dynamics is supposed to be given by the driving fields. However, because the driving large-scale fields are generally provided at a much lower frequency than the model time step (e.g., 6-hourly analyses), with a basic interpolation between the fields, the optimum nudging time differs from zero, although it remains smaller than the predictability time.
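
    Both nudging variants add a Newtonian relaxation term of the standard form below to the model tendencies; the notation is generic and used here only for orientation.

```latex
% Newtonian relaxation (nudging) toward driving large-scale fields
\frac{\partial X}{\partial t} = M(X) - \frac{X - X_{\mathrm{LS}}}{\tau}
% M(X)    : model tendency
% X_{LS}  : driving large-scale field (all scales for indiscriminate nudging,
%           only the largest scales for spectral nudging)
% \tau    : nudging (relaxation) time
```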

  14. Real-time prediction models for output power and efficiency of grid-connected solar photovoltaic systems

    International Nuclear Information System (INIS)

    Su, Yan; Chan, Lai-Cheong; Shu, Lianjie; Tsui, Kwok-Leung

    2012-01-01

    Highlights: ► We develop online prediction models for solar photovoltaic system performance. ► The proposed prediction models are simple but reasonably accurate. ► The maximum monthly average minutely efficiency varies between 10.81% and 12.63%. ► The average efficiency tends to be slightly higher in winter months. - Abstract: This paper develops new real-time prediction models for the output power and energy efficiency of solar photovoltaic (PV) systems. These models were validated using measured data of a grid-connected solar PV system in Macau. Both time frames based on yearly average and monthly average are considered. It is shown that the prediction model for the yearly/monthly average of the minutely output power fits the measured data very well, with a high value of R². The online prediction model for system efficiency is based on the ratio of the predicted output power to the predicted solar irradiance. This ratio model is shown to fit the intermediate phase (9 am to 4 pm) very well, but it is not accurate for the growth and decay phases, where the system efficiency is near zero. However, it can still serve a useful purpose for practitioners, as most PV systems work in the most efficient manner over this period. It is shown that the maximum monthly average minutely efficiency varies over a small range, from 10.81% to 12.63%, in different months, with slightly higher efficiency in winter months.
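
    In essence, the ratio model amounts to the expression below; the array-area term is an assumption added here to make the efficiency dimensionless, since the record does not state the exact normalization used.

```latex
% Ratio-based efficiency prediction (notation assumed for illustration)
\hat{\eta}(t) = \frac{\hat{P}_{\mathrm{out}}(t)}{\hat{G}(t)\,A}
% \hat{P}_{out}(t) : predicted output power at minute t (W)
% \hat{G}(t)       : predicted solar irradiance (W m^{-2})
% A                : array area (m^2), assumed for dimensional consistency
```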

  15. Testing of Models for Predicting the Behaviour of Radionuclides in Freshwater Systems and Coastal Areas. Report of the Aquatic Working Group of EMRAS Theme 1

    International Nuclear Information System (INIS)

    2012-01-01

    During the last decades a number of projects have been launched to validate models for predicting the behaviour of radioactive substances in the environment. Such projects took advantage of the great deal of experimental data gathered following the accidental introduction of radionuclides into the environment (the accident at the Chernobyl power plant being the most obvious example) to assess the contamination levels of components of the ecosystem and of the human food chain. These projects stimulated intensive efforts to improve the reliability of the models aimed at predicting the migration of ¹³⁷Cs in lakes and of ¹³⁷Cs and ⁹⁰Sr in rivers. However, there are few examples of similar extensive model validation studies for other aquatic systems, such as coastal waters, or for other long-lived radionuclides of potential radiological importance for freshwater systems. The validation of models for predicting the behaviour of radionuclides in the freshwater environment and coastal areas was the object of the EMRAS working group on testing of models for predicting the behaviour of radionuclides in freshwater systems and coastal areas. Five scenarios were considered: (1) Wash-off of ⁹⁰Sr and ¹³⁷Cs deposits from the Pripyat floodplain (Ukraine). Modellers were asked to predict the time-dependent water contamination of the Pripyat River following the inundation of the heavily contaminated river floodplain after the Chernobyl accident. Available input data were the deposits of radionuclides in the floodplain, the time-dependent contamination of the water entering the floodplain, the water fluxes and several other morphological, meteorological and hydrological data. Concentrations of radionuclides in the water of the Pripyat River downstream of the floodplain were supplied to assess the performance of the models. (2) Radionuclide discharge from the Dnieper River (Ukraine) into its estuary in the Black Sea. Modellers were asked to predict the time

  16. Climate Modeling and Causal Identification for Sea Ice Predictability

    Energy Technology Data Exchange (ETDEWEB)

    Hunke, Elizabeth Clare [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Urrego Blanco, Jorge Rolando [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Urban, Nathan Mark [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-02-12

    This project aims to better understand causes of ongoing changes in the Arctic climate system, particularly as decreasing sea ice trends have been observed in recent decades and are expected to continue in the future. As part of the Sea Ice Prediction Network, a multi-agency effort to improve sea ice prediction products on seasonal-to-interannual time scales, our team is studying the sensitivity of sea ice to a collection of physical processes and feedback mechanisms in the coupled climate system. During 2017 we completed a set of climate model simulations using the fully coupled ACME-HiLAT model. The simulations consisted of experiments in which cloud, sea ice, and air-ocean turbulent exchange parameters previously identified as important for driving output uncertainty in climate models were perturbed to account for parameter uncertainty in simulated climate variables. We conducted a sensitivity study of these parameters, which built upon our previous study of standalone simulations (Urrego-Blanco et al., 2016, 2017). Using the results from the ensemble of coupled simulations, we are examining robust relationships between climate variables that emerge across the experiments. We are also using causal discovery techniques to identify interaction pathways among climate variables, which can help identify physical mechanisms and provide guidance in predictability studies. This work further builds on and leverages the large ensemble of standalone sea ice simulations produced in our previous w14_seaice project.

  17. Finding Furfural Hydrogenation Catalysts via Predictive Modelling

    Science.gov (United States)

    Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi

    2010-01-01

    We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes was synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (kH:kD=1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R²=0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model’s predictions, demonstrating the validity and value of predictive modelling in catalyst optimization. PMID:23193388

  18. Alcator C-Mod predictive modeling

    International Nuclear Information System (INIS)

    Pankin, Alexei; Bateman, Glenn; Kritz, Arnold; Greenwald, Martin; Snipes, Joseph; Fredian, Thomas

    2001-01-01

    Predictive simulations for the Alcator C-mod tokamak [I. Hutchinson et al., Phys. Plasmas 1, 1511 (1994)] are carried out using the BALDUR integrated modeling code [C. E. Singer et al., Comput. Phys. Commun. 49, 275 (1988)]. The results are obtained for temperature and density profiles using the Multi-Mode transport model [G. Bateman et al., Phys. Plasmas 5, 1793 (1998)] as well as the mixed-Bohm/gyro-Bohm transport model [M. Erba et al., Plasma Phys. Controlled Fusion 39, 261 (1997)]. The simulated discharges are characterized by very high plasma density in both low and high modes of confinement. The predicted profiles for each of the transport models match the experimental data about equally well in spite of the fact that the two models have different dimensionless scalings. Average relative rms deviations are less than 8% for the electron density profiles and 16% for the electron and ion temperature profiles

  19. Clinical Predictive Modeling Development and Deployment through FHIR Web Services.

    Science.gov (United States)

    Khalilia, Mohammed; Choi, Myung; Henderson, Amelia; Iyengar, Sneha; Braunstein, Mark; Sun, Jimeng

    2015-01-01

    Clinical predictive modeling involves two challenging tasks: model development and model deployment. In this paper we demonstrate a software architecture for developing and deploying clinical predictive models using web services via the Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standard. The services enable model development using electronic health records (EHRs) stored in OMOP CDM databases and model deployment for scoring individual patients through FHIR resources. The MIMIC2 ICU dataset and a synthetic outpatient dataset were transformed into OMOP CDM databases for predictive model development. The resulting predictive models are deployed as FHIR resources, which receive requests containing patient information, perform prediction with the deployed predictive model and respond with prediction scores. To assess the practicality of this approach we evaluated the response and prediction time of the FHIR modeling web services. We found the system to be reasonably fast, with a total response time of one second per patient prediction.
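
    A minimal scoring-service sketch in the same spirit is shown below; the endpoint path, payload fields and model file are hypothetical, and no actual FHIR resource handling or OMOP access is implemented.

```python
from flask import Flask, jsonify, request
import joblib

# Minimal sketch of a model-scoring web service. The route, the feature
# names and the serialized model file are assumptions for illustration.
app = Flask(__name__)
model = joblib.load("risk_model.joblib")          # hypothetical trained model
FEATURES = ["age", "heart_rate", "creatinine"]    # hypothetical predictors

@app.route("/score", methods=["POST"])
def score():
    payload = request.get_json()
    x = [[payload[name] for name in FEATURES]]    # order features for the model
    risk = float(model.predict_proba(x)[0][1])
    return jsonify({"risk_score": risk})

if __name__ == "__main__":
    app.run(port=5000)
```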

  20. Real-time learning of predictive recognition categories that chunk sequences of items stored in working memory

    Directory of Open Access Journals (Sweden)

    Stephen eGrossberg

    2014-10-01

    Full Text Available How are sequences of events that are temporarily stored in a cognitive working memory unitized, or chunked, through learning? Such sequential learning is needed by the brain in order to enable language, spatial understanding, and motor skills to develop. In particular, how does the brain learn categories, or list chunks, that become selectively tuned to different temporal sequences of items in lists of variable length as they are stored in working memory, and how does this learning process occur in real time? The present article introduces a neural model that simulates learning of such list chunks. In this model, sequences of items are temporarily stored in an Item-and-Order, or competitive queuing, working memory before learning categorizes them using a categorization network, called a Masking Field, which is a self-similar, multiple-scale, recurrent on-center off-surround network that can weigh the evidence for variable-length sequences of items as they are stored in the working memory through time. A Masking Field hereby activates the learned list chunks that represent the most predictive item groupings at any time, while suppressing less predictive chunks. In a network with a given number of input items, all possible ordered sets of these item sequences, up to a fixed length, can be learned with unsupervised or supervised learning. The self-similar multiple-scale properties of Masking Fields interacting with an Item-and-Order working memory provide a natural explanation of George Miller's Magical Number Seven and Nelson Cowan's Magical Number Four. The article explains why linguistic, spatial, and action event sequences may all be stored by Item-and-Order working memories that obey similar design principles, and thus how the current results may apply across modalities. Item-and-Order properties may readily be extended to Item-Order-Rank working memories in which the same item can be stored in multiple list positions, or ranks, as in the list

  1. Combining quantitative trait loci analysis with physiological models to predict genotype-specific transpiration rates.

    Science.gov (United States)

    Reuning, Gretchen A; Bauerle, William L; Mullen, Jack L; McKay, John K

    2015-04-01

    Transpiration is controlled by evaporative demand and stomatal conductance (gs), and there can be substantial genetic variation in gs. A key parameter in empirical models of transpiration is minimum stomatal conductance (g0), a trait that can be measured and has a large effect on gs and transpiration. In Arabidopsis thaliana, g0 exhibits both environmental and genetic variation, and quantitative trait loci (QTL) have been mapped. We used this information to create a genetically parameterized empirical model to predict the transpiration of genotypes. For the parental lines, this worked well. However, in a recombinant inbred population, the predictions proved less accurate. When based only upon their genotype at a single g0 QTL, genotypes were less distinct than our model predicted. Follow-up experiments indicated that both genotype-by-environment interaction and polygenic inheritance complicate the incorporation of genetic effects into physiological models. The use of ecophysiological or 'crop' models for predicting transpiration of novel genetic lines will benefit from incorporating further knowledge of the genetic control and degree of independence of core traits/parameters underlying gs variation. © 2014 John Wiley & Sons Ltd.
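
    Empirical stomatal conductance models with a g0 intercept are commonly of the Ball-Berry type sketched below; the record does not name the exact formulation used in the study, so this equation is only indicative.

```latex
% Ball-Berry-type empirical stomatal conductance model (indicative form)
g_s = g_0 + g_1 \, \frac{A_n \, h_s}{C_s}
% g_s : stomatal conductance          g_0 : minimum (residual) conductance
% g_1 : empirical slope               A_n : net assimilation rate
% h_s : relative humidity at the leaf surface
% C_s : CO2 concentration at the leaf surface
```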

  2. Predicting Preschoolers' Attachment Security from Fathers' Involvement, Internal Working Models, and Use of Social Support

    Science.gov (United States)

    Newland, Lisa A.; Coyl, Diana D.; Freeman, Harry

    2008-01-01

    Associations between preschoolers' attachment security, fathers' involvement (i.e. parenting behaviors and consistency) and fathering context (i.e. fathers' internal working models (IWMs) and use of social support) were examined in a subsample of 102 fathers, taken from a larger sample of 235 culturally diverse US families. The authors predicted…

  3. Advanced Online Survival Analysis Tool for Predictive Modelling in Clinical Data Science.

    Science.gov (United States)

    Montes-Torres, Julio; Subirats, José Luis; Ribelles, Nuria; Urda, Daniel; Franco, Leonardo; Alba, Emilio; Jerez, José Manuel

    2016-01-01

    One of the prevailing applications of machine learning is the use of predictive modelling in clinical survival analysis. In this work, we present our view of the current situation of computer tools for survival analysis, stressing the need of transferring the latest results in the field of machine learning to biomedical researchers. We propose a web based software for survival analysis called OSA (Online Survival Analysis), which has been developed as an open access and user friendly option to obtain discrete time, predictive survival models at individual level using machine learning techniques, and to perform standard survival analysis. OSA employs an Artificial Neural Network (ANN) based method to produce the predictive survival models. Additionally, the software can easily generate survival and hazard curves with multiple options to personalise the plots, obtain contingency tables from the uploaded data to perform different tests, and fit a Cox regression model from a number of predictor variables. In the Materials and Methods section, we depict the general architecture of the application and introduce the mathematical background of each of the implemented methods. The study concludes with examples of use showing the results obtained with public datasets.

  4. The Effects of Work Values, Work-Value Congruence and Work Centrality on Organizational Citizenship Behavior

    OpenAIRE

    Başak Uçanok

    2008-01-01

    The aim of this study is to test the "work values" inventory developed by Tevruz and Turgut and to utilize the concept in a model, which aims to create a greater understanding of the work experience. In the study multiple effects of work values, work-value congruence and work centrality on organizational citizenship behavior are examined. In this respect, it is hypothesized that work values and work-value congruence predict organizational citizenship behavior through work...

  5. Simulation work of fatigue life prediction of rubber automotive components

    International Nuclear Information System (INIS)

    Samad, M S A; Ali, Aidy

    2010-01-01

    The usage of rubbers has always been so important, especially in automotive industries. Rubbers have a hyper elastic behaviour which is the ability to withstand very large strain without failure. The normal applications for rubbers are used for shock absorption, sound isolation and mounting. In this study, the predictions of fatigue life of an engine mount of rubber automotive components were presented. The finite element analysis was performed to predict the critical part and the strain output were incorporated into fatigue model for prediction. The predicted result shows agreement in term of failure location of rubber mount.

  6. Predictive Modelling of Heavy Metals in Urban Lakes

    OpenAIRE

    Lindström, Martin

    2000-01-01

    Heavy metals are well-known environmental pollutants. In this thesis predictive models for heavy metals in urban lakes are discussed and new models presented. The base of predictive modelling is empirical data from field investigations of many ecosystems covering a wide range of ecosystem characteristics. Predictive models focus on the variabilities among lakes and processes controlling the major metal fluxes. Sediment and water data for this study were collected from ten small lakes in the ...

  7. New models for predicting thermophysical properties of ionic liquid mixtures.

    Science.gov (United States)

    Huang, Ying; Zhang, Xiangping; Zhao, Yongsheng; Zeng, Shaojuan; Dong, Haifeng; Zhang, Suojiang

    2015-10-28

    Potential applications of ILs require the knowledge of the physicochemical properties of ionic liquid (IL) mixtures. In this work, a series of semi-empirical models were developed to predict the density, surface tension, heat capacity and thermal conductivity of IL mixtures. Each semi-empirical model only contains one new characteristic parameter, which can be determined using one experimental data point. In addition, as another effective tool, artificial neural network (ANN) models were also established. The two kinds of models were verified by a total of 2304 experimental data points for binary mixtures of ILs and molecular compounds. The overall average absolute deviations (AARDs) of both the semi-empirical and ANN models are less than 2%. Compared to previously reported models, these new semi-empirical models require fewer adjustable parameters and can be applied in a wider range of applications.

  8. Interactions between implicit and explicit cognition and working memory capacity in the prediction of alcohol use in at-risk adolescents

    NARCIS (Netherlands)

    Thush, C.; Wiers, R.W.H.J.; Ames, S.L.; Grenard, J.L.; Sussman, S.Y.; Stacy, A.W.

    2008-01-01

    Dual process models of addiction suggest that the influence of alcohol-related cognition might be dependent on the level of executive functioning. This study investigated if the interaction between implicit and explicit alcohol-related cognitions and working memory capacity predicted alcohol use

  9. Can We Predict Patient Wait Time?

    Science.gov (United States)

    Pianykh, Oleg S; Rosenthal, Daniel I

    2015-10-01

    The importance of patient wait-time management and predictability can hardly be overestimated: For most hospitals, it is the patient queues that drive and define every bit of clinical workflow. The objective of this work was to study the predictability of patient wait time and identify its most influential predictors. To solve this problem, we developed a comprehensive list of 25 wait-related parameters, suggested in earlier work and observed in our own experiments. All parameters were chosen as derivable from a typical Hospital Information System dataset. The parameters were fed into several time-predicting models, and the best parameter subsets, discovered through exhaustive model search, were applied to a large sample of actual patient wait data. We were able to discover the most efficient wait-time prediction factors and models, such as the line-size models introduced in this work. Moreover, these models proved to be equally accurate and computationally efficient. Finally, the selected models were implemented in our patient waiting areas, displaying predicted wait times on the monitors located at the front desks. The limitations of these models are also discussed. Optimal regression models based on wait-line sizes can provide accurate and efficient predictions for patient wait time. Copyright © 2015 American College of Radiology. Published by Elsevier Inc. All rights reserved.
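
    The exact "line-size" predictors are not listed in the record; the sketch below assumes a plain linear regression on hypothetical queue-derived features (current line size, hour of day, staffed rooms) fitted to synthetic data, purely to illustrate the modelling setup.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 2000
    queue_size = rng.integers(0, 20, n)       # patients already waiting (line size)
    hour = rng.integers(7, 19, n)             # hour of day
    staffed_rooms = rng.integers(1, 5, n)
    # synthetic ground truth: wait grows with the line, shrinks with capacity
    wait_min = 5 + 4.0 * queue_size / staffed_rooms + 0.5 * (hour - 7) + rng.normal(0, 5, n)

    X = np.column_stack([queue_size, hour, staffed_rooms])
    X_tr, X_te, y_tr, y_te = train_test_split(X, wait_min, random_state=0)

    model = LinearRegression().fit(X_tr, y_tr)
    print("MAE (min):", round(mean_absolute_error(y_te, model.predict(X_te)), 1))
    ```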

  10. Stage-specific predictive models for breast cancer survivability.

    Science.gov (United States)

    Kate, Rohit J; Nadig, Ramya

    2017-01-01

    Survivability rates vary widely among various stages of breast cancer. Although machine learning models built in past to predict breast cancer survivability were given stage as one of the features, they were not trained or evaluated separately for each stage. To investigate whether there are differences in performance of machine learning models trained and evaluated across different stages for predicting breast cancer survivability. Using three different machine learning methods we built models to predict breast cancer survivability separately for each stage and compared them with the traditional joint models built for all the stages. We also evaluated the models separately for each stage and together for all the stages. Our results show that the most suitable model to predict survivability for a specific stage is the model trained for that particular stage. In our experiments, using additional examples of other stages during training did not help, in fact, it made it worse in some cases. The most important features for predicting survivability were also found to be different for different stages. By evaluating the models separately on different stages we found that the performance widely varied across them. We also demonstrate that evaluating predictive models for survivability on all the stages together, as was done in the past, is misleading because it overestimates performance. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
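
    A minimal sketch of the comparison described above — one joint model with stage as a feature versus separate models trained and evaluated per stage — using synthetic data and logistic regression as a stand-in for the unspecified machine learning methods.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    X, y = make_classification(n_samples=3000, n_features=12, random_state=1)
    stage = rng.integers(1, 5, size=len(y))     # hypothetical cancer stages 1-4

    # Joint model: stage is just another feature
    X_joint = np.column_stack([X, stage])
    joint_auc = cross_val_score(LogisticRegression(max_iter=1000), X_joint, y,
                                scoring="roc_auc", cv=5).mean()
    print(f"joint model AUC: {joint_auc:.3f}")

    # Stage-specific models: train and evaluate within each stage separately
    for s in range(1, 5):
        mask = stage == s
        auc = cross_val_score(LogisticRegression(max_iter=1000), X[mask], y[mask],
                              scoring="roc_auc", cv=5).mean()
        print(f"stage {s} model AUC: {auc:.3f}")
    ```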

  11. The Relationship between Ethical Culture and Unethical Behavior in Work Groups: Testing the Corporate Ethical Virtues Model

    NARCIS (Netherlands)

    S.P. Kaptein (Muel)

    2008-01-01

    The Corporate Ethical Virtues Model, which is a model for measuring the ethical culture of organizations, has not been tested on its predictive validity. This study tests the relationship between this model and observed unethical behavior in work groups. The sample consists of 301 triads

  12. Impact of modellers' decisions on hydrological a priori predictions

    Science.gov (United States)

    Holländer, H. M.; Bormann, H.; Blume, T.; Buytaert, W.; Chirico, G. B.; Exbrayat, J.-F.; Gustafsson, D.; Hölzel, H.; Krauße, T.; Kraft, P.; Stoll, S.; Blöschl, G.; Flühler, H.

    2014-06-01

    In practice, the catchment hydrologist is often confronted with the task of predicting discharge without having the needed records for calibration. Here, we report the discharge predictions of 10 modellers - using the model of their choice - for the man-made Chicken Creek catchment (6 ha, northeast Germany, Gerwin et al., 2009b) and we analyse how well they improved their prediction in three steps based on adding information prior to each following step. The modellers predicted the catchment's hydrological response in its initial phase without having access to the observed records. They used conceptually different physically based models and their modelling experience differed largely. Hence, they encountered two problems: (i) to simulate discharge for an ungauged catchment and (ii) using models that were developed for catchments, which are not in a state of landscape transformation. The prediction exercise was organized in three steps: (1) for the first prediction the modellers received a basic data set describing the catchment to a degree somewhat more complete than usually available for a priori predictions of ungauged catchments; they did not obtain information on stream flow, soil moisture, nor groundwater response and had therefore to guess the initial conditions; (2) before the second prediction they inspected the catchment on-site and discussed their first prediction attempt; (3) for their third prediction they were offered additional data by charging them pro forma with the costs for obtaining this additional information. Holländer et al. (2009) discussed the range of predictions obtained in step (1). Here, we detail the modeller's assumptions and decisions in accounting for the various processes. We document the prediction progress as well as the learning process resulting from the availability of added information. For the second and third steps, the progress in prediction quality is evaluated in relation to individual modelling experience and costs of

  13. A multivariate model for predicting segmental body composition.

    Science.gov (United States)

    Tian, Simiao; Mioche, Laurence; Denis, Jean-Baptiste; Morio, Béatrice

    2013-12-01

    The aims of the present study were to propose a multivariate model for predicting simultaneously body, trunk and appendicular fat and lean masses from easily measured variables and to compare its predictive capacity with that of the available univariate models that predict body fat percentage (BF%). The dual-energy X-ray absorptiometry (DXA) dataset (52% men and 48% women) with White, Black and Hispanic ethnicities (1999-2004, National Health and Nutrition Examination Survey) was randomly divided into three sub-datasets: a training dataset (TRD), a test dataset (TED); a validation dataset (VAD), comprising 3835, 1917 and 1917 subjects. For each sex, several multivariate prediction models were fitted from the TRD using age, weight, height and possibly waist circumference. The most accurate model was selected from the TED and then applied to the VAD and a French DXA dataset (French DB) (526 men and 529 women) to assess the prediction accuracy in comparison with that of five published univariate models, for which adjusted formulas were re-estimated using the TRD. Waist circumference was found to improve the prediction accuracy, especially in men. For BF%, the standard error of prediction (SEP) values were 3.26 (3.75) % for men and 3.47 (3.95)% for women in the VAD (French DB), as good as those of the adjusted univariate models. Moreover, the SEP values for the prediction of body and appendicular lean masses ranged from 1.39 to 2.75 kg for both the sexes. The prediction accuracy was best for age < 65 years, BMI < 30 kg/m2 and the Hispanic ethnicity. The application of our multivariate model to large populations could be useful to address various public health issues.
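
    A minimal sketch of the multivariate idea — predicting several body-composition outcomes simultaneously from age, weight, height, and waist circumference — using multi-output linear regression on synthetic data. The coefficients and outcomes below are illustrative assumptions, not those of the NHANES-derived model.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split

    # Hypothetical predictors: age, weight, height, waist circumference
    rng = np.random.default_rng(2)
    n = 1000
    age = rng.uniform(20, 80, n)
    weight = rng.uniform(50, 120, n)
    height = rng.uniform(150, 200, n)
    waist = rng.uniform(60, 130, n)
    X = np.column_stack([age, weight, height, waist])

    # Hypothetical outcomes: body fat %, trunk lean mass, appendicular lean mass
    Y = np.column_stack([
        10 + 0.1 * age + 0.4 * waist - 0.15 * height + rng.normal(0, 3, n),
        5 + 0.25 * weight - 0.05 * age + rng.normal(0, 2, n),
        3 + 0.20 * weight + 0.05 * height - 0.04 * age + rng.normal(0, 2, n),
    ])

    X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)
    model = LinearRegression().fit(X_tr, Y_tr)      # fits all outcomes jointly
    sep = np.sqrt(((model.predict(X_te) - Y_te) ** 2).mean(axis=0))
    print("standard error of prediction per outcome:", sep.round(2))
    ```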

  14. Predicting return to work for lower back pain patients receiving worker's compensation.

    Science.gov (United States)

    Lancourt, J; Kettelhut, M

    1992-06-01

    The results of a prospective study of 134 patients with lower back pain suggest that nonorganic factors are better predictors of return to work than organic findings. Patients who returned to work had fewer job, personal, or family related problems. There were no significant differences between patients who returned to work and those who did not when comparing myelograms, computed tomographic scans, or roentgenographs. The only significant difference in physical organic findings was for muscle atrophy. Patients who did not return to work had a statistically higher incidence rate of muscle atrophy. Length of time off from work was significantly related to outcome, but when patients were categorized according to time off the job, different factors predicted failure to return for patients off work for less than 6 months and patients off for more than 6 months. For patients off for less than 6 months, important predictors were a high Oswestry score, history of leg pain, family relocation, short tenure on the job, verbal magnification of pain, reports of moderate to severe pain on superficial palpation, and positive reaction to a "sham" sciatic tension test. None of these was a significant predictor for the group off for more than 6 months. For the group off work for more than 6 months, previous injuries, and stability of family living arrangements were among the significant predictors not significant for the group off less than 6 months. Using 21 factors selected from a larger group of 92 factors, three statistically significant (P less than or equal to 0.001) predictive measures were developed. These measures predicted return to work for the total sample, and for the two subgroups (off more than, or less than 6 months) more accurately than did the total set of 92 factors.

  15. Predictability in the Epidemic-Type Aftershock Sequence model of interacting triggered seismicity

    Science.gov (United States)

    Helmstetter, AgnèS.; Sornette, Didier

    2003-10-01

    As part of an effort to develop a systematic methodology for earthquake forecasting, we use a simple model of seismicity on the basis of interacting events which may trigger a cascade of earthquakes, known as the Epidemic-Type Aftershock Sequence model (ETAS). The ETAS model is constructed on a bare (unrenormalized) Omori law, the Gutenberg-Richter law, and the idea that large events trigger more numerous aftershocks. For simplicity, we do not use the information on the spatial location of earthquakes and work only in the time domain. We demonstrate the essential role played by the cascade of triggered seismicity in controlling the rate of aftershock decay as well as the overall level of seismicity in the presence of a constant external seismicity source. We offer an analytical approach to account for the yet unobserved triggered seismicity adapted to the problem of forecasting future seismic rates at varying horizons from the present. Tests presented on synthetic catalogs validate strongly the importance of taking into account all the cascades of still unobserved triggered events in order to predict correctly the future level of seismicity beyond a few minutes. We find a strong predictability if one accepts to predict only a small fraction of the large-magnitude targets. Specifically, we find a prediction gain (defined as the ratio of the fraction of predicted events over the fraction of time in alarms) equal to 21 for a fraction of alarm of 1%, a target magnitude M ≥ 6, an update time of 0.5 days between two predictions, and for realistic parameters of the ETAS model. However, the probability gains degrade fast when one attempts to predict a larger fraction of the targets. This is because a significant fraction of events remain uncorrelated from past seismicity. This delineates the fundamental limits underlying forecasting skills, stemming from an intrinsic stochastic component in these interacting triggered seismicity models. Quantitatively, the fundamental
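
    The record names the ETAS ingredients (a bare Omori law, the Gutenberg-Richter law, and aftershock productivity that grows with magnitude); a standard temporal form of the conditional intensity consistent with that description, lambda(t) = mu + sum over past events of K * 10^(alpha*(Mi - M0)) / (t - ti + c)^p, is sketched below. The parameter values are illustrative, not those calibrated in the paper.

    ```python
    import numpy as np

    def etas_intensity(t, event_times, event_mags, mu=0.2, K=0.02,
                       alpha=0.8, c=0.01, p=1.1, m0=3.0):
        """Conditional intensity of a temporal ETAS model:
        lambda(t) = mu + sum_i K * 10**(alpha*(M_i - m0)) / (t - t_i + c)**p,
        summed over past events (t_i < t). Units: events per day."""
        past = event_times < t
        dt = t - event_times[past]
        productivity = K * 10.0 ** (alpha * (event_mags[past] - m0))
        return mu + np.sum(productivity / (dt + c) ** p)

    # Illustrative catalog: occurrence times (days) and magnitudes of past events
    times = np.array([0.0, 1.2, 1.3, 5.0])
    mags = np.array([5.5, 4.0, 3.5, 6.2])
    for t in (1.0, 2.0, 5.1, 10.0):
        print(f"t = {t:5.1f} d  lambda = {etas_intensity(t, times, mags):.3f} events/day")
    ```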

  16. SU-E-T-479: Development and Validation of Analytical Models Predicting Secondary Neutron Radiation in Proton Therapy Applications

    International Nuclear Information System (INIS)

    Farah, J; Bonfrate, A; Donadille, L; Martinetti, F; Trompier, F; Clairand, I; De Olivera, A; Delacroix, S; Herault, J; Piau, S; Vabre, I

    2014-01-01

    Purpose: Test and validation of analytical models predicting leakage neutron exposure in passively scattered proton therapy. Methods: Taking inspiration from the literature, this work attempts to build an analytical model predicting neutron ambient dose equivalents, H*(10), within the local 75 MeV ocular proton therapy facility. MC simulations were first used to model H*(10) in the beam axis plane while considering a closed final collimator and pristine Bragg peak delivery. Next, the MC-based analytical model was tested against simulation results and experimental measurements. The model was also extended in the vertical direction to enable a full 3D mapping of H*(10) inside the treatment room. Finally, the work focused on upgrading the literature model to clinically relevant configurations considering modulated beams, open collimators, patient-induced neutron fluctuations, etc. Results: The MC-based analytical model efficiently reproduced simulated H*(10) values with a maximum difference below 10%. In addition, it succeeded in predicting measured H*(10) values with differences <40%. The highest differences were registered at the closest and farthest positions from isocenter, where the analytical model failed to faithfully reproduce the high neutron fluence and energy variations. The differences remain, however, acceptable taking into account the high measurement/simulation uncertainties and the end use of this model, i.e. radiation protection. Moreover, the model was successfully (differences < 20% on simulations and < 45% on measurements) extended to predict neutrons in the vertical direction with respect to the beam line, as patients are in the upright seated position during ocular treatments. Accounting for the impact of beam modulation, collimation and the presence of a patient in the beam path is far more challenging, and conversion coefficients are currently being defined to predict stray neutrons in clinically representative treatment configurations. Conclusion

  17. Predictive validity of the Work Ability Index and its individual items in the general population.

    Science.gov (United States)

    Lundin, Andreas; Leijon, Ola; Vaez, Marjan; Hallgren, Mats; Torgén, Margareta

    2017-06-01

    This study assesses the predictive ability of the full Work Ability Index (WAI) as well as its individual items in the general population. The Work, Health and Retirement Study (WHRS) is a stratified random national sample of 25-75-year-olds living in Sweden in 2000 that received a postal questionnaire ( n = 6637, response rate = 53%). Current and subsequent sickness absence was obtained from registers. The ability of the WAI to predict long-term sickness absence (LTSA; ⩾ 90 consecutive days) during a period of four years was analysed by logistic regression, from which the Area Under the Receiver Operating Characteristic curve (AUC) was computed. There were 313 incident LTSA cases among 1786 employed individuals. The full WAI had acceptable ability to predict LTSA during the 4-year follow-up (AUC = 0.79; 95% CI 0.76 to 0.82). Individual items were less stable in their predictive ability. However, three of the individual items: current work ability compared with lifetime best, estimated work impairment due to diseases, and number of diagnosed current diseases, exceeded AUC > 0.70. Excluding the WAI item on number of days on sickness absence did not result in an inferior predictive ability of the WAI. The full WAI has acceptable predictive validity, and is superior to its individual items. For public health surveys, three items may be suitable proxies of the full WAI; current work ability compared with lifetime best, estimated work impairment due to diseases, and number of current diseases diagnosed by a physician.

  18. Hybrid Corporate Performance Prediction Model Considering Technical Capability

    Directory of Open Access Journals (Sweden)

    Joonhyuck Lee

    2016-07-01

    Full Text Available Many studies have tried to predict corporate performance and stock prices to enhance investment profitability using qualitative approaches such as the Delphi method. However, developments in data processing technology and machine-learning algorithms have resulted in efforts to develop quantitative prediction models in various managerial subject areas. We propose a quantitative corporate performance prediction model that applies the support vector regression (SVR algorithm to solve the problem of the overfitting of training data and can be applied to regression problems. The proposed model optimizes the SVR training parameters based on the training data, using the genetic algorithm to achieve sustainable predictability in changeable markets and managerial environments. Technology-intensive companies represent an increasing share of the total economy. The performance and stock prices of these companies are affected by their financial standing and their technological capabilities. Therefore, we apply both financial indicators and technical indicators to establish the proposed prediction model. Here, we use time series data, including financial, patent, and corporate performance information of 44 electronic and IT companies. Then, we predict the performance of these companies as an empirical verification of the prediction performance of the proposed model.
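
    A minimal sketch of the GA-tuned SVR idea on synthetic data: a small evolutionary loop searches over (C, gamma, epsilon) using cross-validated error as the fitness. The search ranges, population size, and genetic operators are assumptions, not the paper's settings.

    ```python
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVR

    X, y = make_regression(n_samples=300, n_features=8, noise=10.0, random_state=0)
    rng = np.random.default_rng(0)

    def fitness(params):
        C, gamma, eps = params
        model = SVR(C=C, gamma=gamma, epsilon=eps)
        # negative MSE, so higher is better
        return cross_val_score(model, X, y, cv=3,
                               scoring="neg_mean_squared_error").mean()

    def random_individual():
        # log-uniform sampling over plausible ranges (assumed, not from the paper)
        return np.array([10 ** rng.uniform(-1, 3),    # C
                         10 ** rng.uniform(-4, 0),    # gamma
                         10 ** rng.uniform(-2, 1)])   # epsilon

    pop = [random_individual() for _ in range(12)]
    for generation in range(10):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:4]                          # truncation selection
        children = []
        while len(children) < len(pop) - len(parents):
            a, b = rng.choice(len(parents), 2, replace=False)
            child = (parents[a] + parents[b]) / 2      # arithmetic crossover
            child *= 10 ** rng.normal(0, 0.1, 3)       # multiplicative mutation
            children.append(child)
        pop = parents + children

    best = max(pop, key=fitness)
    print("best (C, gamma, epsilon):", np.round(best, 4), " cv score:", round(fitness(best), 2))
    ```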

  19. Model Predictive Control of a Wave Energy Converter with Discrete Fluid Power Power Take-Off System

    Directory of Open Access Journals (Sweden)

    Anders Hedegaard Hansen

    2018-03-01

    Full Text Available Wave power extraction algorithms for wave energy converters are normally designed without taking system losses into account, leading to suboptimal power extraction. In the current work, a model predictive power extraction algorithm is designed for a discretized power take-off system. It is shown how the quantized nature of a discrete fluid power system may be included in a new model predictive control algorithm, leading to a significant increase in the harvested power. A detailed investigation of the influence of the prediction horizon and the time step is reported. Furthermore, it is shown how the inclusion of a loss model may increase the energy output. Based on the presented results, it is concluded that power extraction algorithms based on model predictive control principles are both feasible and favorable for use in a discrete fluid power power take-off system for point absorber wave energy converters.
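
    A minimal sketch of receding-horizon control with a quantized input set: at each step, all discrete PTO force sequences over a short horizon are enumerated on a simple point-absorber model and the sequence maximizing harvested energy is chosen. The dynamics, force levels, and horizon are illustrative assumptions; the paper's loss model is not included.

    ```python
    import itertools

    import numpy as np

    # Simple point-absorber model (illustrative parameters, not those of the paper)
    m, k, b, dt = 2.0e5, 5.0e5, 4.0e4, 0.1                 # mass, stiffness, damping, step
    FORCE_LEVELS = np.array([-2e5, -1e5, 0.0, 1e5, 2e5])   # discrete PTO force levels (N)
    HORIZON = 4

    def step(x, v, f_exc, f_pto):
        """Semi-implicit Euler step of the float dynamics."""
        a = (f_exc - k * x - b * v + f_pto) / m
        v_new = v + a * dt
        return x + v_new * dt, v_new

    def best_first_force(x, v, f_exc_pred):
        """Enumerate every discrete force sequence over the horizon and return the
        first force of the sequence that maximizes energy absorbed by the PTO."""
        best_energy, best_u0 = -np.inf, 0.0
        for seq in itertools.product(FORCE_LEVELS, repeat=HORIZON):
            xs, vs, energy = x, v, 0.0
            for f_pto, f_exc in zip(seq, f_exc_pred):
                energy += -f_pto * vs * dt            # power delivered to the PTO
                xs, vs = step(xs, vs, f_exc, f_pto)
            if energy > best_energy:
                best_energy, best_u0 = energy, seq[0]
        return best_u0

    # Receding-horizon simulation against a regular wave excitation force
    t = np.arange(0, 20, dt)
    f_exc = 3e5 * np.sin(2 * np.pi * t / 8.0)
    x, v, harvested = 0.0, 0.0, 0.0
    for i in range(len(t) - HORIZON):
        u = best_first_force(x, v, f_exc[i:i + HORIZON])
        harvested += -u * v * dt
        x, v = step(x, v, f_exc[i], u)
    print(f"energy harvested over {t[-1]:.0f} s: {harvested / 1e3:.1f} kJ")
    ```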

  20. A Binaural Grouping Model for Predicting Speech Intelligibility in Multitalker Environments

    Directory of Open Access Journals (Sweden)

    Jing Mi

    2016-09-01

    Full Text Available Spatially separating speech maskers from target speech often leads to a large intelligibility improvement. Modeling this phenomenon has long been of interest to binaural-hearing researchers for uncovering brain mechanisms and for improving signal-processing algorithms in hearing-assistive devices. Much of the previous binaural modeling work focused on the unmasking enabled by binaural cues at the periphery, and little quantitative modeling has been directed toward the grouping or source-separation benefits of binaural processing. In this article, we propose a binaural model that focuses on grouping, specifically on the selection of time-frequency units that are dominated by signals from the direction of the target. The proposed model uses Equalization-Cancellation (EC) processing with a binary decision rule to estimate a time-frequency binary mask. EC processing is carried out to cancel the target signal and the energy change between the EC input and output is used as a feature that reflects target dominance in each time-frequency unit. The processing in the proposed model requires little computational resources and is straightforward to implement. In combination with the Coherence-based Speech Intelligibility Index, the model is applied to predict the speech intelligibility data measured by Marrone et al. The predicted speech reception threshold matches the pattern of the measured data well, even though the predicted intelligibility improvements relative to the colocated condition are larger than some of the measured data, which may reflect the lack of internal noise in this initial version of the model.

  1. A Binaural Grouping Model for Predicting Speech Intelligibility in Multitalker Environments.

    Science.gov (United States)

    Mi, Jing; Colburn, H Steven

    2016-10-03

    Spatially separating speech maskers from target speech often leads to a large intelligibility improvement. Modeling this phenomenon has long been of interest to binaural-hearing researchers for uncovering brain mechanisms and for improving signal-processing algorithms in hearing-assistive devices. Much of the previous binaural modeling work focused on the unmasking enabled by binaural cues at the periphery, and little quantitative modeling has been directed toward the grouping or source-separation benefits of binaural processing. In this article, we propose a binaural model that focuses on grouping, specifically on the selection of time-frequency units that are dominated by signals from the direction of the target. The proposed model uses Equalization-Cancellation (EC) processing with a binary decision rule to estimate a time-frequency binary mask. EC processing is carried out to cancel the target signal and the energy change between the EC input and output is used as a feature that reflects target dominance in each time-frequency unit. The processing in the proposed model requires little computational resources and is straightforward to implement. In combination with the Coherence-based Speech Intelligibility Index, the model is applied to predict the speech intelligibility data measured by Marrone et al. The predicted speech reception threshold matches the pattern of the measured data well, even though the predicted intelligibility improvements relative to the colocated condition are larger than some of the measured data, which may reflect the lack of internal noise in this initial version of the model. © The Author(s) 2016.
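
    A heavily simplified sketch of the EC-based mask idea: with the target assumed frontal (zero ITD/ILD), equalization is trivial and cancellation reduces to subtracting the two ear signals; time-frequency units where cancellation removes most of the energy are labelled target-dominated. A broadband STFT replaces the auditory filterbank of the actual model, and the threshold and toy signals are arbitrary assumptions.

    ```python
    import numpy as np

    def ec_binary_mask(left, right, frame=512, hop=256, threshold=0.5):
        """Simplified Equalization-Cancellation mask (target assumed at 0 ITD/ILD):
        cancel the target by subtracting the equalized ears; T-F units where
        cancellation removes most of the energy are marked target-dominated."""
        win = np.hanning(frame)
        n_frames = 1 + (len(left) - frame) // hop
        mask = np.zeros((frame // 2 + 1, n_frames))
        for i in range(n_frames):
            l = np.fft.rfft(win * left[i * hop:i * hop + frame])
            r = np.fft.rfft(win * right[i * hop:i * hop + frame])
            e_in = np.abs(l) ** 2 + np.abs(r) ** 2          # energy at the EC input
            e_out = np.abs(l - r) ** 2                       # energy after cancellation
            # a large relative energy drop => unit dominated by the (frontal) target
            mask[:, i] = (e_out / (e_in + 1e-12)) < threshold
        return mask

    # Toy example: target present in both ears, masker only in the right ear
    sr = 16000
    t = np.arange(sr) / sr
    target = np.sin(2 * np.pi * 500 * t)
    masker = 0.5 * np.random.default_rng(0).standard_normal(sr)
    mask = ec_binary_mask(target, target + masker)
    print("fraction of units labelled target-dominated:", mask.mean().round(2))
    ```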

  2. Predicting Factors of Worker Behavior for Proper Working Posture Based on Planed Behavior Theory

    Directory of Open Access Journals (Sweden)

    E Mohammadi Zeydi

    2008-12-01

    Introduction & Objective: Injuries resulting from ignoring proper working posture, especially in employees who sit at the workplace for much of their working hours, are costly and create significant pain and discomfort. Reducing these injuries is most effectively accomplished through the application of ergonomic design principles. Sometimes, however, barriers (technical and economic) preclude ergonomic improvement and, consequently, some organizations rely on the use of proper sitting techniques and the maintenance of proper working posture as a major control strategy during the workday. The problem, however, is that these practices are performed inconsistently and managers have a difficult time motivating use of these techniques. The main aim of this study was to understand the factors driving proper working posture among employees. Materials & Methods: This study used the theory of planned behavior to predict upright working posture maintenance among 222 assembly, machinery and printing line employees at a Qazvin Alborz industrial town manufacturing organization. Structural equation modeling and explanatory and confirmatory factor analysis were employed to analyze relationships among constructs. Results: Results revealed that attitude (p< 0.05, β= 0.53) and intention (p< 0.05, β= 0.46) were the strongest predictors of proper working posture maintenance behavior. Perceived behavioral control, to a lesser degree, was also an important influence on intention (p< 0.05, β= 0.34) and behavior (p< 0.05, β= 0.28). Subjective norms did not surface as effective direct predictors of upright working posture maintenance, but did affect behavior and intent via mediating factors (attitudes, subjective norms and perceived behavioral control). Finally, the TPB was supported as an effective model explaining upright working posture maintenance, and has potential application for many other safety-related behaviors. Conclusion: The results of this study emphasize the importance of considering factors such as

  3. Predicting chick body mass by artificial intelligence-based models

    Directory of Open Access Journals (Sweden)

    Patricia Ferreira Ponciano Ferraz

    2014-07-01

    Full Text Available The objective of this work was to develop, validate, and compare 190 artificial intelligence-based models for predicting the body mass of chicks from 2 to 21 days of age subjected to different durations and intensities of thermal challenge. The experiment was conducted inside four climate-controlled wind tunnels using 210 chicks. A database containing 840 datasets (from 2- to 21-day-old chicks) - with the variables dry-bulb air temperature, duration of thermal stress (days), chick age (days), and the daily body mass of chicks - was used for network training, validation, and tests of models based on artificial neural networks (ANNs) and neuro-fuzzy networks (NFNs). The ANNs were most accurate in predicting the body mass of chicks from 2 to 21 days of age after they were subjected to the input variables, and they showed an R² of 0.9993 and a standard error of 4.62 g. The ANNs enable the simulation of different scenarios, which can assist in managerial decision-making, and they can be embedded in heating control systems.
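
    A minimal sketch of the ANN setup with the three inputs named in the record (dry-bulb temperature, duration of thermal stress, chick age) regressing daily body mass. The data here are synthetic and the network size is arbitrary, so the reported R² and error are not reproduced.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Hypothetical dataset with the three inputs named in the record
    rng = np.random.default_rng(3)
    n = 840
    temp = rng.uniform(20, 36, n)              # dry-bulb air temperature (C)
    stress_days = rng.integers(0, 8, n)        # duration of thermal stress (days)
    age = rng.integers(2, 22, n)               # chick age (days)
    # synthetic growth curve with a penalty for heat stress (illustrative only)
    mass = 40 + 28 * age - 1.5 * stress_days * np.maximum(temp - 30, 0) + rng.normal(0, 15, n)

    X = np.column_stack([temp, stress_days, age])
    X_tr, X_te, y_tr, y_te = train_test_split(X, mass, random_state=0)

    ann = make_pipeline(StandardScaler(),
                        MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
    ann.fit(X_tr, y_tr)
    rmse = np.sqrt(np.mean((ann.predict(X_te) - y_te) ** 2))
    print(f"test RMSE: {rmse:.1f} g")
    ```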

  4. Dynamic Simulation of Human Gait Model With Predictive Capability.

    Science.gov (United States)

    Sun, Jinming; Wu, Shaoli; Voglewede, Philip A

    2018-03-01

    In this paper, it is proposed that the central nervous system (CNS) controls human gait using a predictive control approach in conjunction with classical feedback control instead of exclusive classical feedback control theory that controls based on past error. To validate this proposition, a dynamic model of human gait is developed using a novel predictive approach to investigate the principles of the CNS. The model developed includes two parts: a plant model that represents the dynamics of human gait and a controller that represents the CNS. The plant model is a seven-segment, six-joint model that has nine degrees-of-freedom (DOF). The plant model is validated using data collected from able-bodied human subjects. The proposed controller utilizes model predictive control (MPC). MPC uses an internal model to predict the output in advance, compare the predicted output to the reference, and optimize the control input so that the predicted error is minimal. To decrease the complexity of the model, two joints are controlled using a proportional-derivative (PD) controller. The developed predictive human gait model is validated by simulating able-bodied human gait. The simulation results show that the developed model is able to simulate the kinematic output close to experimental data.

  5. Estimating confidence intervals in predicted responses for oscillatory biological models.

    Science.gov (United States)

    St John, Peter C; Doyle, Francis J

    2013-07-29

    show that a model's dynamic characteristics follow directly from experimental data and model structure, relaxing assumptions on the particular parameters chosen. Ultimately, this work highlights the importance of continued collection of high-resolution data on gene and protein activity levels, as they allow the development of predictive mathematical models.

  6. Massive Predictive Modeling using Oracle R Enterprise

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...
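
    The talk targets Oracle R Enterprise's embedded R execution, which is not reproduced here; as a language-neutral illustration of the "model per entity" idea it describes, the sketch below builds one small regression per customer with pandas and scikit-learn. The entity names and data are hypothetical.

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    # Toy data: one time series of monthly spend per customer (entity)
    rng = np.random.default_rng(4)
    rows = []
    for customer in range(1, 6):
        trend = rng.uniform(-5, 5)
        for month in range(24):
            rows.append({"customer": customer, "month": month,
                         "spend": 100 + trend * month + rng.normal(0, 10)})
    df = pd.DataFrame(rows)

    # Build one model per entity ("group apply"), then predict the next month
    models = {}
    for customer, grp in df.groupby("customer"):
        m = LinearRegression().fit(grp[["month"]], grp["spend"])
        models[customer] = m
        next_month = pd.DataFrame({"month": [24]})
        print(f"customer {customer}: predicted month-24 spend = {m.predict(next_month)[0]:.1f}")
    ```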

  7. Action Prediction Allows Hypothesis Testing via Internal Forward Models at 6 Months of Age

    Directory of Open Access Journals (Sweden)

    Gustaf Gredebäck

    2018-03-01

    Full Text Available We propose that action prediction provides a cornerstone in a learning process known as internal forward models. According to this suggestion, infants’ predictions (looking to the mouth of someone moving a spoon upward) will moments later be validated or proven false (the spoon was in fact directed toward a bowl), information that is directly perceived as the distance between the predicted and actual goal. Using an individual difference approach we demonstrate that action prediction correlates with the tendency to react with surprise when social interactions are not acted out as expected (action evaluation). This association is demonstrated across tasks and in a large sample (n = 118) at 6 months of age. These results provide the first indication that infants might rely on internal forward models to structure their social world. Additional analysis, consistent with prior work and assumptions from embodied cognition, demonstrates that the latency of infants’ action predictions correlates with the infant’s own manual proficiency.

  8. Predictive factors of work disability in rheumatoid arthritis: a systematic literature review.

    NARCIS (Netherlands)

    Croon, de E.M.; Sluiter, J.K.; Nijssen, TF; Dijkmans, B.A.C.; Lankhorst, G.J.; Frings-Dresen, MH

    2004-01-01

    BACKGROUND: Work disability-a common outcome of rheumatoid arthritis (RA)-is a societal (for example, financial costs) and individual problem (for example, loss of status, income, social support, and distraction from pain and distress). Until now, factors that predict work disability in RA have not

  9. Prediction of residential radon exposure of the whole Swiss population: comparison of model-based predictions with measurement-based predictions.

    Science.gov (United States)

    Hauri, D D; Huss, A; Zimmermann, F; Kuehni, C E; Röösli, M

    2013-10-01

    Radon plays an important role in human exposure to natural sources of ionizing radiation. The aim of this article is to compare two approaches to estimate mean radon exposure in the Swiss population: model-based predictions at individual level and measurement-based predictions based on measurements aggregated at municipality level. A nationwide model was used to predict radon levels in each household and for each individual based on the corresponding tectonic unit, building age, building type, soil texture, degree of urbanization, and floor. Measurement-based predictions were carried out within a health impact assessment on residential radon and lung cancer. Mean measured radon levels were corrected for the average floor distribution and weighted with population size of each municipality. Model-based predictions yielded a mean radon exposure of the Swiss population of 84.1 Bq/m³. Measurement-based predictions yielded an average exposure of 78 Bq/m³. This study demonstrates that the model- and the measurement-based predictions provided similar results. The advantage of the measurement-based approach is its simplicity, which is sufficient for assessing exposure distribution in a population. The model-based approach allows predicting radon levels at specific sites, which is needed in an epidemiological study, and the results do not depend on how the measurement sites have been selected. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  10. A burnout prediction model based around char morphology

    Energy Technology Data Exchange (ETDEWEB)

    T. Wu; E. Lester; M. Cloke [University of Nottingham, Nottingham (United Kingdom). Nottingham Energy and Fuel Centre

    2005-07-01

    Poor burnout in a coal-fired power plant has marked penalties in the form of reduced energy efficiency and elevated waste material that cannot be utilized. The prediction of coal combustion behaviour in a furnace is of great significance in providing valuable information not only for process optimization but also for coal buyers in the international market. Coal combustion models have been developed that can make predictions about burnout behaviour and burnout potential. Most of these kinetic models require standard parameters such as volatile content, particle size and assumed char porosity in order to make a burnout prediction. This paper presents a new model called the Char Burnout Model (ChB) that also uses detailed information about char morphology in its prediction. The model can use data input from one of two sources, both derived from image analysis techniques: the first from individual analysis and characterization of real char types using an automated program, the second from predicted char types based on data collected during the automated image analysis of coal particles. Modelling results were compared with a different carbon burnout kinetic model and burnout data from re-firing the chars in a drop tube furnace operating at 1300°C and 5% oxygen across several residence times. An improved agreement between the ChB model and DTF experimental data proved that the inclusion of char morphology in combustion models can improve model predictions. 27 refs., 4 figs., 4 tabs.

  11. Clinical Prediction Models for Cardiovascular Disease: Tufts Predictive Analytics and Comparative Effectiveness Clinical Prediction Model Database.

    Science.gov (United States)

    Wessler, Benjamin S; Lai Yh, Lana; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S; Kent, David M

    2015-07-01

    Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease, there are numerous CPMs available although the extent of this literature is not well described. We conducted a systematic review for articles containing CPMs for cardiovascular disease published between January 1990 and May 2012. Cardiovascular disease includes coronary heart disease, heart failure, arrhythmias, stroke, venous thromboembolism, and peripheral vascular disease. We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year is increasing steadily over time. Seven hundred seventeen (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions, including 215 CPMs for patients with coronary artery disease, 168 CPMs for population samples, and 79 models for patients with heart failure. There are 77 distinct index/outcome pairings. Of the de novo models in this database, 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. There is an abundance of CPMs available for a wide assortment of cardiovascular disease conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models and the actual and potential clinical impact of this body of literature is poorly understood. © 2015 American Heart Association, Inc.

  12. A burnout prediction model based around char morphology

    Energy Technology Data Exchange (ETDEWEB)

    Tao Wu; Edward Lester; Michael Cloke [University of Nottingham, Nottingham (United Kingdom). School of Chemical, Environmental and Mining Engineering

    2006-05-15

    Several combustion models have been developed that can make predictions about coal burnout and burnout potential. Most of these kinetic models require standard parameters such as volatile content and particle size to make a burnout prediction. This article presents a new model called the char burnout (ChB) model, which also uses detailed information about char morphology in its prediction. The input data to the model is based on information derived from two different image analysis techniques. One technique generates characterization data from real char samples, and the other predicts char types based on characterization data from image analysis of coal particles. The pyrolyzed chars in this study were created in a drop tube furnace operating at 1300°C, 200 ms, and 1% oxygen. Modeling results were compared with a different carbon burnout kinetic model as well as the actual burnout data from refiring the same chars in a drop tube furnace operating at 1300°C, 5% oxygen, and residence times of 200, 400, and 600 ms. A good agreement between ChB model and experimental data indicates that the inclusion of char morphology in combustion models could well improve model predictions. 38 refs., 5 figs., 6 tabs.

  13. Comparative Study of Bankruptcy Prediction Models

    Directory of Open Access Journals (Sweden)

    Isye Arieshanti

    2013-09-01

    Full Text Available Early indication of bankruptcy is important for a company. If a company is aware of its potential bankruptcy, it can take preventive action to anticipate it. In order to detect the potential for bankruptcy, a company can utilize a bankruptcy prediction model. The prediction model can be built using machine learning methods. However, the choice of machine learning method should be made carefully, because the suitability of a model depends on the specific problem. Therefore, in this paper we perform a comparative study of several machine learning methods for bankruptcy prediction. According to the comparative study of the performance of several models based on machine learning methods (k-NN, fuzzy k-NN, SVM, Bagging Nearest Neighbour SVM, Multilayer Perceptron (MLP), Hybrid of MLP + Multiple Linear Regression), the fuzzy k-NN method achieves the best performance with accuracy 77.5%

  14. [A Model for Predicting Career Satisfaction of Nurses Experiencing Rotation].

    Science.gov (United States)

    Shin, Sook; Yu, Mi

    2017-08-01

    This study aimed to present and test a structural model for describing and predicting the factors affecting subjective career satisfaction of nurses experiencing rotation and to develop human resources management strategies for promoting their career satisfaction related to rotation. In this cross-sectional study, we recruited 233 nurses by convenience sampling who had over 1 year of career experience and who had experienced rotation at least once at G university hospital. Data were collected from August to September in 2016 using self-reported questionnaires. The exogenous variables consisted of rotation perception and rotation stress. Endogenous variables consisted of career growth opportunity, work engagement, and subjective career satisfaction. A hypothetical model was tested by asymptotically distribution-free estimates, and model goodness of fit was examined using absolute and incremental fit measures. The final model was approved and had suitable fit. We found that subjective career satisfaction was directly affected by rotation stress (β=.20, p=.019) and work engagement (β=.58), and indirectly through career growth opportunity and work engagement. However, there was no total effect of rotation stress on subjective career satisfaction (β=-.09, p=.270). Career growth opportunity directly and indirectly affected subjective career satisfaction (β=.29). The results of this study suggest that it is necessary to establish systematic and planned criteria for rotation so that nurses can grow and develop through sustained work and become satisfied with their career. © 2017 Korean Society of Nursing Science

  15. Risk predictive modelling for diabetes and cardiovascular disease.

    Science.gov (United States)

    Kengne, Andre Pascal; Masconi, Katya; Mbanya, Vivian Nchanchou; Lekoubou, Alain; Echouffo-Tcheugui, Justin Basile; Matsha, Tandi E

    2014-02-01

    Absolute risk models or clinical prediction models have been incorporated in guidelines, and are increasingly advocated as tools to assist risk stratification and guide prevention and treatment decisions relating to common health conditions such as cardiovascular disease (CVD) and diabetes mellitus. We have reviewed the historical development and principles of prediction research, including their statistical underpinning, as well as implications for routine practice, with a focus on predictive modelling for CVD and diabetes. Predictive modelling for CVD risk, which has developed over the last five decades, has been largely influenced by the Framingham Heart Study investigators, while it was only ∼20 years ago that similar efforts were started in the field of diabetes. Identification of predictive factors is an important preliminary step which provides the knowledge base on potential predictors to be tested for inclusion during the statistical derivation of the final model. The derived models must then be tested both on the development sample (internal validation) and on other populations in different settings (external validation). Updating procedures (e.g. recalibration) should be used to improve the performance of models that fail the tests of external validation. Ultimately, the effect of introducing validated models in routine practice on the process and outcomes of care, as well as its cost-effectiveness, should be tested in impact studies before wide dissemination of models beyond the research context. Several prediction models have been developed for CVD or diabetes, but very few have been externally validated or tested in impact studies, and their comparative performance has yet to be fully assessed. A shift of focus from developing new CVD or diabetes prediction models to validating the existing ones will improve their adoption in routine practice.
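
    A minimal sketch of the updating step mentioned above (recalibration): the original model's linear predictor is kept fixed and only a calibration intercept and slope are refitted in the external population. The coefficients, predictors, and data below are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(6)

    # "Original" model developed elsewhere: assumed published coefficients
    beta0, beta = -4.0, np.array([0.03, 0.6, 0.8])      # intercept; age, smoker, diabetes

    # New (external) population where baseline risk differs from the derivation cohort
    n = 5000
    X = np.column_stack([rng.uniform(30, 80, n), rng.integers(0, 2, n), rng.integers(0, 2, n)])
    true_lp = -3.0 + X @ np.array([0.03, 0.6, 0.8])     # different intercept -> miscalibration
    y = rng.uniform(size=n) < 1 / (1 + np.exp(-true_lp))

    # Recalibration: keep the original linear predictor, refit intercept and slope only
    lp_old = beta0 + X @ beta
    recal = LogisticRegression().fit(lp_old.reshape(-1, 1), y)
    print("calibration slope:", recal.coef_[0][0].round(2),
          " updated intercept:", recal.intercept_[0].round(2))
    ```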

  16. Impact of different satellite soil moisture products on the predictions of a continuous distributed hydrological model

    Science.gov (United States)

    Laiolo, P.; Gabellani, S.; Campo, L.; Silvestro, F.; Delogu, F.; Rudari, R.; Pulvirenti, L.; Boni, G.; Fascetti, F.; Pierdicca, N.; Crapolicchio, R.; Hasenauer, S.; Puca, S.

    2016-06-01

    The reliable estimation of hydrological variables in space and time is of fundamental importance in operational hydrology to improve flood predictions and the description of the hydrological cycle. Nowadays, remotely sensed data offer a chance to improve hydrological models, especially in environments with scarce ground-based data. The aim of this work is to update the state variables of a physically based, distributed and continuous hydrological model using four different satellite-derived datasets (three soil moisture products and a land surface temperature measurement) and one soil moisture analysis, in order to evaluate, even with a non-optimal technique, the impact on the hydrological cycle. The experiments were carried out for a small catchment in the northern part of Italy for the period July 2012-June 2013. The products were pre-processed according to their own characteristics and then assimilated into the model using a simple nudging technique. The benefits for the model's discharge predictions were tested against observations. The analysis showed a general improvement of the model discharge predictions, even with a simple assimilation technique, for all the assimilation experiments; the Nash-Sutcliffe model efficiency coefficient increased from 0.6 (relative to the model without assimilation) to 0.7, and errors on discharge were reduced by up to 10%. An added value to the model was found in the rainy season (autumn): all the assimilation experiments reduced the errors by up to 20%. This demonstrates that the discharge prediction of a distributed hydrological model, which works at fine-scale resolution in a small basin, can be improved with the assimilation of coarse-scale satellite-derived data.
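
    A minimal sketch of the nudging (Newtonian relaxation) update used for the assimilation: the model state is pulled toward each satellite observation by a fixed fraction of the innovation. The toy soil-moisture dynamics and the gain value are assumptions.

    ```python
    import numpy as np

    def nudge(model_state, observation, gain):
        """Simple nudging (Newtonian relaxation): pull the model state toward the
        observation by a fraction 'gain' of the innovation (obs - model)."""
        return model_state + gain * (observation - model_state)

    # Toy example: daily soil-moisture state of a model grid cell vs. satellite retrievals
    rng = np.random.default_rng(5)
    truth = 0.30 + 0.05 * np.sin(np.arange(60) / 9.0)
    model = np.empty(60)
    model[0] = 0.20                      # biased initial condition
    for k in range(1, 60):
        model[k] = model[k - 1] + (truth[k] - truth[k - 1]) * 0.7   # imperfect dynamics
        obs = truth[k] + rng.normal(0, 0.02)                        # noisy satellite value
        model[k] = nudge(model[k], obs, gain=0.3)

    print("mean absolute error after nudging:", np.abs(model - truth).mean().round(3))
    ```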

  17. Comparative Analysis of Soft Computing Models in Prediction of Bending Rigidity of Cotton Woven Fabrics

    Science.gov (United States)

    Guruprasad, R.; Behera, B. K.

    2015-10-01

    Quantitative prediction of fabric mechanical properties is an essential requirement for design engineering of textile and apparel products. In this work, the possibility of prediction of bending rigidity of cotton woven fabrics has been explored with the application of Artificial Neural Network (ANN) and two hybrid methodologies, namely Neuro-genetic modeling and Adaptive Neuro-Fuzzy Inference System (ANFIS) modeling. For this purpose, a set of cotton woven grey fabrics was desized, scoured and relaxed. The fabrics were then conditioned and tested for bending properties. With the database thus created, a neural network model was first developed using back propagation as the learning algorithm. The second model was developed by applying a hybrid learning strategy, in which genetic algorithm was first used as a learning algorithm to optimize the number of neurons and connection weights of the neural network. The Genetic algorithm optimized network structure was further allowed to learn using back propagation algorithm. In the third model, an ANFIS modeling approach was attempted to map the input-output data. The prediction performances of the models were compared and a sensitivity analysis was reported. The results show that the prediction by neuro-genetic and ANFIS models were better in comparison with that of back propagation neural network model.

  18. Model-based uncertainty in species range prediction

    DEFF Research Database (Denmark)

    Pearson, R. G.; Thuiller, Wilfried; Bastos Araujo, Miguel

    2006-01-01

    Aim Many attempts to predict the potential range of species rely on environmental niche (or 'bioclimate envelope') modelling, yet the effects of using different niche-based methodologies require further investigation. Here we investigate the impact that the choice of model can have on predictions...

  19. Survival prediction model for postoperative hepatocellular carcinoma patients.

    Science.gov (United States)

    Ren, Zhihui; He, Shasha; Fan, Xiaotang; He, Fangping; Sang, Wei; Bao, Yongxing; Ren, Weixin; Zhao, Jinming; Ji, Xuewen; Wen, Hao

    2017-09-01

    This study aimed to establish a predictive index (PI) model of the 5-year survival rate for patients with hepatocellular carcinoma (HCC) after radical resection and to evaluate its prediction sensitivity, specificity, and accuracy. Patients who underwent HCC surgical resection were enrolled and randomly divided into a prediction model group (101 patients) and a model evaluation group (100 patients). A Cox regression model was used for univariate and multivariate survival analysis. A PI model was established based on the multivariate analysis, and a receiver operating characteristic (ROC) curve was drawn accordingly. The area under the ROC curve (AUROC) and the PI cutoff value were identified. Multiple Cox regression analysis of the prediction model group showed that neutrophil-to-lymphocyte ratio (NLR), histological grade (HG), microvascular invasion (MVI), positive resection margin (PRM), number of tumors (NT), and postoperative transcatheter arterial chemoembolization (TACE) treatment were the independent predictors of the 5-year survival rate for HCC patients. The model was PI = 0.377 × NLR + 0.554 × HG + 0.927 × PRM + 0.778 × MVI + 0.740 × NT - 0.831 × TACE. In the prediction model group, the AUROC was 0.832 and the PI cutoff value was 3.38. The sensitivity, specificity, and accuracy were 78.0%, 80%, and 79.2%, respectively. In the model evaluation group, the AUROC was 0.822, and the PI cutoff value corresponded well to that of the prediction model group, with sensitivity, specificity, and accuracy of 85.0%, 83.3%, and 84.0%, respectively. The PI model can quantify the mortality risk of hepatitis B-related HCC with high sensitivity, specificity, and accuracy.
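
    The PI formula and the 3.38 cutoff are given explicitly in the record, so they can be evaluated directly; the coding of each covariate (binary vs. graded) and the example patient values below are assumptions.

    ```python
    def prognostic_index(nlr, hg, prm, mvi, nt, tace):
        """PI as reported in the record; higher values indicate higher mortality risk.
        The coding of each covariate (binary vs. graded) is assumed here."""
        return (0.377 * nlr + 0.554 * hg + 0.927 * prm
                + 0.778 * mvi + 0.740 * nt - 0.831 * tace)

    CUTOFF = 3.38  # reported cutoff separating higher- from lower-risk patients

    # Illustrative patient: NLR of 3, high histological grade, clear margin,
    # microvascular invasion present, single tumour, postoperative TACE given
    pi = prognostic_index(nlr=3.0, hg=1, prm=0, mvi=1, nt=0, tace=1)
    print(f"PI = {pi:.2f} -> {'higher' if pi > CUTOFF else 'lower'} risk group")
    ```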

  20. Methodology for Designing Models Predicting Success of Infertility Treatment

    OpenAIRE

    Alireza Zarinara; Mohammad Mahdi Akhondi; Hojjat Zeraati; Koorsh Kamali; Kazem Mohammad

    2016-01-01

    Abstract Background: Prediction models for infertility treatment success have been presented for the past 25 years. There are scientific principles for designing and applying prediction models, which are also used to predict the success rate of infertility treatment. The purpose of this study is to provide basic principles for designing models that predict infertility treatment success. Materials and Methods: In this paper, the principles for developing predictive models are explained and...

  1. Predictive uncertainty reduction in coupled neutron-kinetics/thermal hydraulics modeling of the BWR-TT2 benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Badea, Aurelian F., E-mail: aurelian.badea@kit.edu [Karlsruhe Institute of Technology, Vincenz-Prießnitz-Str. 3, 76131 Karlsruhe (Germany); Cacuci, Dan G. [Center for Nuclear Science and Energy/Dept. of ME, University of South Carolina, 300 Main Street, Columbia, SC 29208 (United States)

    2017-03-15

    Highlights: • BWR Turbine Trip 2 (BWR-TT2) benchmark. • Substantial (up to 50%) reduction of uncertainties in the predicted transient power. • 6660 uncertain model parameters were calibrated. - Abstract: By applying a comprehensive predictive modeling methodology, this work demonstrates a substantial (up to 50%) reduction of uncertainties in the predicted total transient power in the BWR Turbine Trip 2 (BWR-TT2) benchmark while calibrating the numerical simulation of this benchmark, comprising 6090 macroscopic cross sections, and 570 thermal-hydraulics parameters involved in modeling the phase-slip correlation, transient outlet pressure, and total mass flow. The BWR-TT2 benchmark is based on an experiment that was carried out in 1977 in the NPP Peach Bottom 2, involving the closure of the turbine stop valve which caused a pressure wave that propagated with attenuation into the reactor core. The condensation of the steam in the reactor core caused by the pressure increase led to a positive reactivity insertion. The subsequent rise of power was limited by the feedback and the insertion of the control rods. The BWR-TT2 benchmark was modeled with the three-dimensional reactor physics code system DYN3D, by coupling neutron kinetics with two-phase thermal-hydraulics. All 6660 DYN3D model parameters were calibrated by applying a predictive modeling methodology that combines experimental and computational information to produce optimally predicted best-estimate results with reduced predicted uncertainties. Simultaneously, the predictive modeling methodology yields optimally predicted values for the BWR total transient power while reducing significantly the accompanying predicted standard deviations.

  2. Predictive uncertainty reduction in coupled neutron-kinetics/thermal hydraulics modeling of the BWR-TT2 benchmark

    International Nuclear Information System (INIS)

    Badea, Aurelian F.; Cacuci, Dan G.

    2017-01-01

    Highlights: • BWR Turbine Trip 2 (BWR-TT2) benchmark. • Substantial (up to 50%) reduction of uncertainties in the predicted transient power. • 6660 uncertain model parameters were calibrated. - Abstract: By applying a comprehensive predictive modeling methodology, this work demonstrates a substantial (up to 50%) reduction of uncertainties in the predicted total transient power in the BWR Turbine Trip 2 (BWR-TT2) benchmark while calibrating the numerical simulation of this benchmark, comprising 6090 macroscopic cross sections, and 570 thermal-hydraulics parameters involved in modeling the phase-slip correlation, transient outlet pressure, and total mass flow. The BWR-TT2 benchmark is based on an experiment that was carried out in 1977 in the NPP Peach Bottom 2, involving the closure of the turbine stop valve which caused a pressure wave that propagated with attenuation into the reactor core. The condensation of the steam in the reactor core caused by the pressure increase led to a positive reactivity insertion. The subsequent rise of power was limited by the feedback and the insertion of the control rods. The BWR-TT2 benchmark was modeled with the three-dimensional reactor physics code system DYN3D, by coupling neutron kinetics with two-phase thermal-hydraulics. All 6660 DYN3D model parameters were calibrated by applying a predictive modeling methodology that combines experimental and computational information to produce optimally predicted best-estimate results with reduced predicted uncertainties. Simultaneously, the predictive modeling methodology yields optimally predicted values for the BWR total transient power while reducing significantly the accompanying predicted standard deviations.

  3. Feature-Based and String-Based Models for Predicting RNA-Protein Interaction

    Directory of Open Access Journals (Sweden)

    Donald Adjeroh

    2018-03-01

    Full Text Available In this work, we study two approaches for the problem of RNA-Protein Interaction (RPI). In the first approach, we use a feature-based technique by combining extracted features from both sequences and secondary structures. The feature-based approach enhanced the prediction accuracy as it included much more available information about the RNA-protein pairs. In the second approach, we apply search algorithms and data structures to extract effective string patterns for prediction of RPI, using both sequence information (protein and RNA sequences) and structure information (protein and RNA secondary structures). This led to different string-based models for predicting interacting RNA-protein pairs. We show results that demonstrate the effectiveness of the proposed approaches, including comparative results against leading state-of-the-art methods.
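
    As a hedged illustration of the feature-based approach only (not the authors' pipeline, and with synthetic sequences and labels), one can encode each RNA-protein pair by concatenating k-mer composition vectors and train an off-the-shelf classifier:

        # Hypothetical sketch of a feature-based RPI classifier: k-mer composition
        # features for an RNA and a protein sequence are concatenated per pair.
        # Sequences, labels, and feature choices are illustrative assumptions.
        from itertools import product

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def kmer_features(seq, alphabet, k):
            """Normalized k-mer composition vector for one sequence."""
            kmers = ["".join(p) for p in product(alphabet, repeat=k)]
            counts = np.array([seq.count(km) for km in kmers], dtype=float)
            return counts / max(counts.sum(), 1.0)

        RNA_ALPHA, PROT_ALPHA = "ACGU", "ACDEFGHIKLMNPQRSTVWY"

        def pair_features(rna_seq, prot_seq):
            """Concatenate RNA dinucleotide and protein amino-acid composition."""
            return np.concatenate([kmer_features(rna_seq, RNA_ALPHA, k=2),
                                   kmer_features(prot_seq, PROT_ALPHA, k=1)])

        # Synthetic example pairs (placeholders, not real interaction data).
        pairs = [("ACGUACGUGGC", "MKTAYIAKQR"), ("GGGCCCAUU", "LLVFAGDDK"),
                 ("AUAUAUGCGC", "MKKLLPTRES"), ("CCGGAAUUCC", "GAVLIPFYWS")]
        labels = [1, 0, 1, 0]  # 1 = interacting, 0 = non-interacting (synthetic)

        X = np.vstack([pair_features(r, p) for r, p in pairs])
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
        print(clf.predict(X))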

  4. Visuospatial Working Memory Capacity Predicts Physiological Arousal in a Narrative Task.

    Science.gov (United States)

    Smithson, Lisa; Nicoladis, Elena

    2016-06-01

    Physiological arousal that occurs during narrative production is thought to reflect emotional processing and cognitive effort (Bar-Haim et al. in Dev Psychobiol 44:238-249, 2004). The purpose of this study was to determine whether individual differences in visuospatial working memory and/or verbal working memory capacity predict physiological arousal in a narrative task. Visuospatial working memory was a significant predictor of skin conductance level (SCL); verbal working memory was not. When visuospatial working memory interference was imposed, visuospatial working memory was no longer a significant predictor of SCL. Visuospatial interference also resulted in a significant reduction in SCL. Furthermore, listener ratings of narrative quality were contingent upon the visuospatial working memory resources of the narrator. Potential implications for educators and clinical practitioners are discussed.

  5. Self-Determination and Meaningful Work: Exploring Socioeconomic Constraints

    OpenAIRE

    Allan, Blake A.; Autin, Kelsey L.; Duffy, Ryan D.

    2016-01-01

    This study examined a model of meaningful work among a diverse sample of working adults. From the perspectives of Self-Determination Theory and the Psychology of Working Framework, we tested a structural model with social class and work volition predicting SDT motivation variables, which in turn predicted meaningful work. Partially supporting hypotheses, work volition was positively related to internal regulation and negatively related to amotivation, whereas social class was positively relat...

  6. A Work Psychological Model that Works: Expanding the Job Demands-Resources Model

    NARCIS (Netherlands)

    Xanthopoulou, D.

    2007-01-01

    The main purpose of the current thesis was to test and expand the recently developed Job Demands-Resources (JD-R) model. The advantage of this model is that it recognizes the uniqueness of each work environment, which has its own specific job demands and job resources. According to the JD-R model,

  7. Nonlinear Model Predictive Control of a Cable-Robot-Based Motion Simulator

    DEFF Research Database (Denmark)

    Katliar, Mikhail; Fischer, Joerg; Frison, Gianluca

    2017-01-01

    In this paper we present the implementation of a model-predictive controller (MPC) for real-time control of a cable-robot-based motion simulator. The controller computes control inputs such that a desired acceleration and angular velocity at a defined point in the simulator's cabin are tracked while...... satisfying constraints imposed by the working space and allowed cable forces of the robot. In order to fully use the simulator capabilities, we propose an approach that includes the motion platform actuation in the MPC model. The tracking performance and computation time of the algorithm are investigated....

  8. A Simplified Micromechanical Modeling Approach to Predict the Tensile Flow Curve Behavior of Dual-Phase Steels

    Science.gov (United States)

    Nanda, Tarun; Kumar, B. Ravi; Singh, Vishal

    2017-11-01

    Micromechanical modeling is used to predict a material's tensile flow curve behavior based on microstructural characteristics. This research develops a simplified micromechanical modeling approach for predicting the flow curve behavior of dual-phase steels. The existing literature reports on two broad approaches for determining the tensile flow curve of these steels. The modeling approach developed in this work attempts to overcome specific limitations of the existing two approaches. This approach combines a dislocation-based strain-hardening method with the rule of mixtures. In the first step of modeling, the `dislocation-based strain-hardening method' was employed to predict the tensile behavior of the individual ferrite and martensite phases. In the second step, the individual flow curves were combined using the `rule of mixtures' to obtain the composite dual-phase flow behavior. To check the accuracy of the proposed model, four distinct dual-phase microstructures comprising different ferrite grain sizes, martensite fractions, and carbon contents in martensite were produced by annealing experiments. The true stress-strain curves for the various microstructures were predicted with the newly developed micromechanical model. The results of the micromechanical model matched closely with those of actual tensile tests. Thus, this micromechanical modeling approach can be used to predict and optimize the tensile flow behavior of dual-phase steels.
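
    A hedged numerical sketch of the two-step idea described above: generic Voce-type (dislocation-motivated) hardening laws for each phase, combined by the rule of mixtures. All parameter values are invented for illustration, not the calibrated constants of this work.

        # Illustrative two-step flow-curve construction: individual-phase hardening
        # laws combined by the rule of mixtures. Parameters are placeholders.
        import numpy as np

        def voce_flow_curve(strain, sigma0, delta_sigma, eps_c):
            """Generic Voce-type saturating hardening law for one phase (MPa)."""
            return sigma0 + delta_sigma * (1.0 - np.exp(-strain / eps_c))

        strain = np.linspace(0.0, 0.15, 151)             # true plastic strain
        sigma_ferrite = voce_flow_curve(strain, 300.0, 250.0, 0.05)
        sigma_martensite = voce_flow_curve(strain, 1100.0, 400.0, 0.02)

        f_martensite = 0.30                               # martensite volume fraction
        sigma_dp = (f_martensite * sigma_martensite +
                    (1.0 - f_martensite) * sigma_ferrite)  # rule of mixtures

        print(f"Predicted DP flow stress at 10% strain: "
              f"{np.interp(0.10, strain, sigma_dp):.0f} MPa")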

  9. I. WORKING MEMORY CAPACITY IN CONTEXT: MODELING DYNAMIC PROCESSES OF BEHAVIOR, MEMORY, AND DEVELOPMENT.

    Science.gov (United States)

    Simmering, Vanessa R

    2016-09-01

    Working memory is a vital cognitive skill that underlies a broad range of behaviors. Higher cognitive functions are reliably predicted by working memory measures from two domains: children's performance on complex span tasks, and infants' performance in looking paradigms. Despite the similar predictive power across these research areas, theories of working memory development have not connected these different task types and developmental periods. The current project takes a first step toward bridging this gap by presenting a process-oriented theory, focusing on two tasks designed to assess visual working memory capacity in infants (the change-preference task) versus children and adults (the change detection task). Previous studies have shown inconsistent results, with capacity estimates increasing from one to four items during infancy, but only two to three items during early childhood. A probable source of this discrepancy is the different task structures used with each age group, but prior theories were not sufficiently specific to explain how performance relates across tasks. The current theory focuses on cognitive dynamics, that is, how memory representations are formed, maintained, and used within specific task contexts over development. This theory was formalized in a computational model to generate three predictions: 1) capacity estimates in the change-preference task should continue to increase beyond infancy; 2) capacity estimates should be higher in the change-preference versus change detection task when tested within individuals; and 3) performance should correlate across tasks because both rely on the same underlying memory system. I also tested a fourth prediction, that development across tasks could be explained through increasing real-time stability, realized computationally as strengthening connectivity within the model. Results confirmed these predictions, supporting the cognitive dynamics account of performance and developmental changes in real

  10. Validation of models using Chernobyl fallout data from the Central Bohemia region of the Czech Republic. Scenario CB. First report of the VAMP Multiple Pathways Assessment Working Group. Part of the IAEA/CEC Co-ordinated Research Programme on the Validation of Environmental Model Predictions (VAMP)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-04-01

    The VAMP Multiple Pathways Assessment Working Group is an international forum for the testing and comparison of model predictions. The emphasis is on evaluating transfer from the environment to humans via all pathways that are relevant in the environment being considered. This document is the first report of the Group and contains the results of the first test exercise on the validation of multiple pathways assessment models using Chernobyl fallout data obtained from the Central Bohemia (CB) region of the Czech Republic (Scenario CB). The report includes the following three appendixes: Documentation and evaluation of model validation data used in scenario CB (3 papers), Description of models used in scenario CB (1 paper), Individual evaluations of model predictions for scenario CB (13 papers). A separate abstract was prepared for each paper. Refs, figs and tabs.

  11. Validation of models using Chernobyl fallout data from the Central Bohemia region of the Czech Republic. Scenario CB. First report of the VAMP Multiple Pathways Assessment Working Group. Part of the IAEA/CEC Co-ordinated Research Programme on the Validation of Environmental Model Predictions (VAMP)

    International Nuclear Information System (INIS)

    1995-04-01

    The VAMP Multiple Pathways Assessment Working Group is an international forum for the testing and comparison of model predictions. The emphasis is on evaluating transfer from the environment to humans via all pathways that are relevant in the environment being considered. This document is the first report of the Group and contains the results of the first test exercise on the validation of multiple pathways assessment models using Chernobyl fallout data obtained from the Central Bohemia (CB) region of the Czech Republic (Scenario CB). The report includes the following three appendixes: Documentation and evaluation of model validation data used in scenario CB (3 papers), Description of models used in scenario CB (1 paper), Individual evaluations of model predictions for scenario CB (13 papers). A separate abstract was prepared for each paper. Refs, figs and tabs

  12. Prediction of Proper Temperatures for the Hot Stamping Process Based on the Kinetics Models

    Science.gov (United States)

    Samadian, P.; Parsa, M. H.; Mirzadeh, H.

    2015-02-01

    Nowadays, the application of kinetics models for predicting microstructures of steels subjected to thermo-mechanical treatments has increased to minimize direct experimentation, which is costly and time-consuming. In the current work, the final microstructures of AISI 4140 steel sheets after the hot stamping process were predicted using the Kirkaldy and Li kinetics models combined with new thermodynamically based models in order to determine the appropriate process temperatures. In this way, the effect of deformation during hot stamping on the Ae3, Acm, and Ae1 temperatures was considered, and then the equilibrium volume fractions of phases at different temperatures were calculated. Moreover, the ferrite transformation rate equations of the Kirkaldy and Li models were modified by a term proposed by Åkerström to consider the influence of plastic deformation. Results showed that the modified Kirkaldy model is satisfactory for the determination of appropriate austenitization temperatures for the hot stamping process of AISI 4140 steel sheets because its microstructure predictions agree well with the experimental observations.

  13. Predictive Modeling of Fast-Curing Thermosets in Nozzle-Based Extrusion

    Science.gov (United States)

    Xie, Jingjin; Randolph, Robert; Simmons, Gary; Hull, Patrick V.; Mazzeo, Aaron D.

    2017-01-01

    This work presents an approach to modeling the dynamic spreading and curing behavior of thermosets in nozzle-based extrusions. Thermosets cover a wide range of materials, some of which permit low-temperature processing with subsequent high-temperature and high-strength working properties. Extruding thermosets may overcome the limited working temperatures and strengths of conventional thermoplastic materials used in additive manufacturing. This project aims to produce technology for the fabrication of thermoset-based structures leveraging advances made in nozzle-based extrusion, such as fused deposition modeling (FDM), material jetting, and direct writing. Understanding the synergistic interactions between spreading and fast curing of extruded thermosetting materials will provide essential insights for applications that require accurate dimensional controls, such as additive manufacturing [1], [2] and centrifugal coating/forming [3]. Two types of thermally curing thermosets -- one being a soft silicone (Ecoflex 0050) and the other being a toughened epoxy (G/Flex) -- served as the test materials in this work to obtain models for cure kinetics and viscosity. The developed models align with extensive measurements made with differential scanning calorimetry (DSC) and rheology. DSC monitors the change in the heat of reaction, which reflects the rate and degree of cure at different crosslinking stages. Rheology measures the change in complex viscosity, shear moduli, yield stress, and other properties dictated by chemical composition. By combining DSC and rheological measurements, it is possible to establish a set of models profiling the cure kinetics and chemorheology without prior knowledge of chemical composition, which is usually necessary for sophisticated mechanistic modeling. In this work, we conducted both isothermal and dynamic measurements with both DSC and rheology. With the developed models, numerical simulations yielded predictions of diameter and height of
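
    The kind of coupled cure-kinetics/chemorheology model described here can be sketched with a generic autocatalytic (Prout-Tompkins-type) rate law and a Castro-Macosko-style viscosity rise toward the gel point; the functional forms, constants, and temperature below are assumptions for illustration, not the DSC/rheology fits obtained for Ecoflex 0050 or G/Flex.

        # Hedged sketch: isothermal cure kinetics coupled with a chemorheological
        # viscosity model. All constants are illustrative assumptions.
        import numpy as np
        from scipy.integrate import solve_ivp

        R = 8.314           # J/(mol K)
        T = 353.15          # assumed isothermal cure temperature, K

        def cure_rate(t, alpha, A=2.0e7, Ea=60e3, m=0.5, n=1.5):
            """d(alpha)/dt = A exp(-Ea/RT) alpha^m (1-alpha)^n (generic form)."""
            a = np.clip(alpha, 1e-6, 1.0 - 1e-9)
            return A * np.exp(-Ea / (R * T)) * a**m * (1.0 - a)**n

        def viscosity(alpha, eta0=10.0, alpha_gel=0.7, a=1.5, b=1.0):
            """Castro-Macosko-type viscosity rise toward the gel point (Pa s)."""
            a_ = np.minimum(alpha, alpha_gel - 1e-3)
            return eta0 * (alpha_gel / (alpha_gel - a_)) ** (a + b * a_)

        sol = solve_ivp(cure_rate, (0.0, 600.0), [1e-3],
                        t_eval=[0, 30, 60, 120, 300, 600])
        for t_i, a_i in zip(sol.t, sol.y[0]):
            print(f"t = {t_i:5.0f} s  alpha = {a_i:5.3f}  eta = {viscosity(a_i):12.1f} Pa s")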

  14. Recent developments in predictive uncertainty assessment based on the model conditional processor approach

    Directory of Open Access Journals (Sweden)

    G. Coccia

    2011-10-01

    Full Text Available The work aims at discussing the role of predictive uncertainty in flood forecasting and flood emergency management, its relevance to improve the decision making process and the techniques to be used for its assessment.

    Real-time flood forecasting requires taking predictive uncertainty into account for a number of reasons. Deterministic hydrological/hydraulic forecasts give useful information about real future events, but their predictions cannot, as is usually done in practice, be taken and used as real future occurrences; rather, they should be used as pseudo-measurements of future occurrences in order to reduce the uncertainty of decision makers. Predictive Uncertainty (PU) is in fact defined as the probability of occurrence of a future value of a predictand (such as water level, discharge or water volume) conditional upon prior observations and knowledge, as well as on all the information we can obtain on that specific future value from model forecasts. When dealing with commensurable quantities, as in the case of floods, PU must be quantified in terms of a probability distribution function which will be used by the emergency managers in their decision process in order to improve the quality and reliability of their decisions.

    After introducing the concept of PU, the presently available processors are introduced and discussed in terms of their benefits and limitations. In this work the Model Conditional Processor (MCP) has been extended to allow the use of two joint Truncated Normal Distributions (TNDs), in order to improve adaptation to low and high flows.

    The paper concludes by showing the results of the application of the MCP on two case studies, the Po river in Italy and the Baron Fork river, OK, USA. In the Po river case the data provided by the Civil Protection of the Emilia Romagna region have been used to implement an operational example, where the predicted variable is the observed water level. In the Baron Fork River

  15. A new Predictive Model for Relativistic Electrons in Outer Radiation Belt

    Science.gov (United States)

    Chen, Y.

    2017-12-01

    Relativistic electrons trapped in the Earth's outer radiation belt present a highly hazardous radiation environment for spaceborne electronics. These energetic electrons, with kinetic energies up to several megaelectron-volt (MeV), manifest a highly dynamic and event-specific nature due to the delicate interplay of competing transport, acceleration and loss processes. Therefore, developing a forecasting capability for outer belt MeV electrons has long been a critical and challenging task for the space weather community. Recently, the vital roles of electron resonance with waves (including such as chorus and electromagnetic ion cyclotron) have been widely recognized; however, it is still difficult for current diffusion radiation belt models to reproduce the behavior of MeV electrons during individual geomagnetic storms, mainly because of the large uncertainties existing in input parameters. In this work, we expanded our previous cross-energy cross-pitch-angle coherence study and developed a new predictive model for MeV electrons over a wide range of L-shells inside the outer radiation belt. This new model uses NOAA POES observations from low-Earth-orbits (LEOs) as inputs to provide high-fidelity nowcast (multiple hour prediction) and forecast (> 1 day prediction) of the energization of MeV electrons as well as the evolving MeV electron distributions afterwards during storms. Performance of the predictive model is quantified by long-term in situ data from Van Allen Probes and LANL GEO satellites. This study adds new science significance to an existing LEO space infrastructure, and provides reliable and powerful tools to the whole space community.

  16. An Integrated Model to Predict Corporate Failure of Listed Companies in Sri Lanka

    Directory of Open Access Journals (Sweden)

    Nisansala Wijekoon

    2015-07-01

    Full Text Available The primary objective of this study is to develop an integrated model to predict corporate failure of listed companies in Sri Lanka. Logistic regression analysis was applied to a data set of 70 matched pairs of failed and non-failed companies listed on the Colombo Stock Exchange (CSE) in Sri Lanka over the period 2002 to 2010. A total of fifteen financial ratios and eight corporate governance variables were used as predictor variables of corporate failure. Analysis of the statistical testing results indicated that the model containing both corporate governance variables and financial ratios improved the prediction accuracy to 88.57 per cent one year prior to failure. Furthermore, the predictive accuracy of this model in all three years prior to failure is above 80 per cent. Hence the model is robust in obtaining accurate results for up to three years prior to failure. It was further found that two financial ratios, working capital to total assets and cash flow from operating activities to total assets, and two corporate governance variables, outside director ratio and company audit committee, have more explanatory power for predicting corporate failure. Therefore, the model developed in this study can assist investors, managers, shareholders, financial institutions, auditors and regulatory agents in Sri Lanka in forecasting corporate failure of listed companies.
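
    A minimal sketch of a logistic failure model of this kind, using the four predictors singled out in the abstract; the data below are synthetic placeholders rather than the CSE matched-pairs sample.

        # Illustrative logistic regression failure model on synthetic data.
        # Predictors follow the abstract: working capital / total assets, operating
        # cash flow / total assets, outside director ratio, audit committee (0/1).
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import accuracy_score

        rng = np.random.default_rng(0)
        n = 140  # synthetic matched-pairs sample (70 failed, 70 non-failed)
        failed = np.repeat([1, 0], n // 2)
        X = np.column_stack([
            rng.normal(-0.05, 0.15, n) + 0.15 * (1 - failed),  # WC / total assets
            rng.normal(0.00, 0.10, n) + 0.10 * (1 - failed),   # CFO / total assets
            rng.uniform(0.1, 0.6, n) + 0.10 * (1 - failed),    # outside director ratio
            rng.integers(0, 2, n),                             # audit committee (0/1)
        ])

        model = LogisticRegression().fit(X, failed)
        print("in-sample accuracy:",
              round(accuracy_score(failed, model.predict(X)), 3))
        print("coefficients:", np.round(model.coef_[0], 2))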

  17. Evaluation of a new CNRM-CM6 model version for seasonal climate predictions

    Science.gov (United States)

    Volpi, Danila; Ardilouze, Constantin; Batté, Lauriane; Dorel, Laurant; Guérémy, Jean-François; Déqué, Michel

    2017-04-01

    This work presents the quality assessment of a new version of the Météo-France coupled climate prediction system, which has been developed in the EU COPERNICUS Climate Change Services framework to carry out seasonal forecasts. The system is based on the CNRM-CM6 model, with Arpege-Surfex 6.2.2 as the atmosphere/land component and Nemo 3.2 as the ocean component, which directly embeds the sea-ice component Gelato 6.0. In order to have a robust diagnostic, the experiment is composed of 60 ensemble members generated with stochastic dynamic perturbations. The experiment has been performed over a 37-year re-forecast period from 1979 to 2015, with two start dates per year, on May 1st and November 1st. The evaluation of the predictive skill of the model is shown from two perspectives: on the one hand, the ability of the model to faithfully respond to positive or negative ENSO, NAO and QBO events, independently of the predictability of these events. Such an assessment is carried out through a composite analysis, and shows that the model succeeds in reproducing the main patterns for 2-meter temperature, precipitation and geopotential height at 500 hPa during the winter season. On the other hand, the model predictive skill of the same events (positive and negative ENSO, NAO and QBO) is evaluated.

  18. Parameter estimation techniques and uncertainty in ground water flow model predictions

    International Nuclear Information System (INIS)

    Zimmerman, D.A.; Davis, P.A.

    1990-01-01

    Quantification of uncertainty in predictions of nuclear waste repository performance is a requirement of Nuclear Regulatory Commission regulations governing the licensing of proposed geologic repositories for high-level radioactive waste disposal. One of the major uncertainties in these predictions is in estimating the ground-water travel time of radionuclides migrating from the repository to the accessible environment. The cause of much of this uncertainty has been attributed to a lack of knowledge about the hydrogeologic properties that control the movement of radionuclides through the aquifers. A major reason for this lack of knowledge is the paucity of data that is typically available for characterizing complex ground-water flow systems. Because of this, considerable effort has been put into developing parameter estimation techniques that infer property values in regions where no measurements exist. Currently, no single technique has been shown to be superior or even consistently conservative with respect to predictions of ground-water travel time. This work was undertaken to compare a number of parameter estimation techniques and to evaluate how differences in the parameter estimates and the estimation errors are reflected in the behavior of the flow model predictions. That is, we wished to determine to what degree uncertainties in flow model predictions may be affected simply by the choice of parameter estimation technique used. 3 refs., 2 figs

  19. Modelling Cooperative Work at a Medical Department

    DEFF Research Database (Denmark)

    Christensen, Lars Rune; Hildebrandt, Thomas

    2017-01-01

    Based on ethnographic fieldwork, and the modelling of work processes at a medical department, this paper considers some of the opportunities and challenges involved in working with models in a complex work setting. The paper introduces a flexible modelling tool to CSCW, called the DCR Portal......, and considers how it may be used to model complex work settings collaboratively. Further, the paper discusses how models created with the DCR portal may potentially play a key role in making a cooperative work ensemble appreciate, discuss and coordinate key interdependencies inherent to their cooperative work...

  20. Who will have Sustainable Employment After a Back Injury? The Development of a Clinical Prediction Model in a Cohort of Injured Workers.

    Science.gov (United States)

    Shearer, Heather M; Côté, Pierre; Boyle, Eleanor; Hayden, Jill A; Frank, John; Johnson, William G

    2017-09-01

    Purpose Our objective was to develop a clinical prediction model to identify workers with sustainable employment following an episode of work-related low back pain (LBP). Methods We used data from a cohort study of injured workers with incident LBP claims in the USA to predict employment patterns 1 and 6 months following a workers' compensation claim. We developed three sequential models to determine the contribution of three domains of variables: (1) basic demographic/clinical variables; (2) health-related variables; and (3) work-related factors. Multivariable logistic regression was used to develop the predictive models. We constructed receiver operator curves and used the c-index to measure predictive accuracy. Results Seventy-nine percent and 77 % of workers had sustainable employment at 1 and 6 months, respectively. Sustainable employment at 1 month was predicted by initial back pain intensity, mental health-related quality of life, claim litigation and employer type (c-index = 0.77). At 6 months, sustainable employment was predicted by physical and mental health-related quality of life, claim litigation and employer type (c-index = 0.77). Adding health-related and work-related variables to models improved predictive accuracy by 8.5 and 10 % at 1 and 6 months respectively. Conclusion We developed clinically-relevant models to predict sustainable employment in injured workers who made a workers' compensation claim for LBP. Inquiring about back pain intensity, physical and mental health-related quality of life, claim litigation and employer type may be beneficial in developing programs of care. Our models need to be validated in other populations.
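
    For illustration only (synthetic stand-ins for the cohort variables, not the workers' compensation data), the c-index of such a multivariable logistic model is simply the area under the ROC curve of its predicted probabilities:

        # Sketch: multivariable logistic prediction of sustainable employment and
        # its c-index (ROC AUC). Data are synthetic stand-ins for cohort variables.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(1)
        n = 500
        pain = rng.uniform(0, 10, n)          # initial back pain intensity
        mental_qol = rng.normal(50, 10, n)    # mental health-related quality of life
        litigation = rng.integers(0, 2, n)    # claim litigation (yes/no)
        employer = rng.integers(0, 2, n)      # employer type (two categories)

        # Synthetic outcome loosely following the reported directions of effect.
        logit = -0.25 * pain + 0.05 * mental_qol - 1.0 * litigation + 0.5 * employer
        sustainable = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

        X = np.column_stack([pain, mental_qol, litigation, employer])
        model = LogisticRegression().fit(X, sustainable)
        print("c-index:",
              round(roc_auc_score(sustainable, model.predict_proba(X)[:, 1]), 2))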

  1. Predicting future conflict between team-members with parameter-free models of social networks

    Science.gov (United States)

    Rovira-Asenjo, Núria; Gumí, Tània; Sales-Pardo, Marta; Guimerà, Roger

    2013-06-01

    Despite the well-documented benefits of working in teams, teamwork also results in communication, coordination and management costs, and may lead to personal conflict between team members. In a context where teams play an increasingly important role, it is of major importance to understand conflict and to develop diagnostic tools to avert it. Here, we investigate empirically whether it is possible to quantitatively predict future conflict in small teams using parameter-free models of social network structure. We analyze data of conflict appearance and resolution between 86 team members in 16 small teams, all working in a real project for nine consecutive months. We find that group-based models of complex networks successfully anticipate conflict in small teams whereas micro-based models of structural balance, which have been traditionally used to model conflict, do not.

  2. Improved model predictive control of resistive wall modes by error field estimator in EXTRAP T2R

    Science.gov (United States)

    Setiadi, A. C.; Brunsell, P. R.; Frassinetti, L.

    2016-12-01

    Many implementations of a model-based approach for toroidal plasma have shown better control performance compared to the conventional type of feedback controller. One prerequisite of model-based control is the availability of a control oriented model. This model can be obtained empirically through a systematic procedure called system identification. Such a model is used in this work to design a model predictive controller to stabilize multiple resistive wall modes in EXTRAP T2R reversed-field pinch. Model predictive control is an advanced control method that can optimize the future behaviour of a system. Furthermore, this paper will discuss an additional use of the empirical model which is to estimate the error field in EXTRAP T2R. Two potential methods are discussed that can estimate the error field. The error field estimator is then combined with the model predictive control and yields better radial magnetic field suppression.
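
    A heavily simplified scalar sketch of the general pipeline this abstract describes: identify a control-oriented model from data by least squares (system identification), treat the identified constant term as an error-field-like disturbance estimate, and compute a receding-horizon control move. The dynamics, weights, and horizon are invented; this is not the EXTRAP T2R controller.

        # Generic sketch: least-squares identification of a scalar model followed
        # by finite-horizon unconstrained MPC. Plant and parameters are assumed.
        import numpy as np

        rng = np.random.default_rng(0)

        # "True" plant, used only to generate identification data (assumed).
        a_true, b_true, d_true = 0.9, 0.5, 0.2   # d_true mimics a constant error field
        def plant(y, u):
            return a_true * y + b_true * u + d_true + 0.01 * rng.standard_normal()

        # System identification: fit y[k+1] = a y[k] + b u[k] + d by least squares.
        y, rows, Y_next = 0.0, [], []
        for _ in range(200):
            u = rng.uniform(-1, 1)
            y_next = plant(y, u)
            rows.append([y, u, 1.0])
            Y_next.append(y_next)
            y = y_next
        a, b, d = np.linalg.lstsq(np.array(rows), np.array(Y_next), rcond=None)[0]

        # MPC: minimize sum(y^2 + rho u^2) over horizon N with the identified model.
        N, rho = 10, 0.1
        def mpc_control(y0):
            # Predicted outputs over the horizon: y = F*y0 + G*u + H (H holds d).
            F = np.array([a**(i + 1) for i in range(N)])
            G = np.array([[a**(i - j) * b if j <= i else 0.0 for j in range(N)]
                          for i in range(N)])
            H = np.array([sum(a**k for k in range(i + 1)) for i in range(N)]) * d
            u = np.linalg.solve(G.T @ G + rho * np.eye(N), -G.T @ (F * y0 + H))
            return u[0]  # receding horizon: apply only the first move

        y = 1.0  # initial perturbation
        for _ in range(20):
            y = plant(y, mpc_control(y))
        print("estimated (a, b, d):", np.round([a, b, d], 3), " final |y|:", round(abs(y), 3))

    In this sketch the identified constant term plays a role analogous to the error field estimate discussed in the abstract: once it is estimated, the receding-horizon optimizer compensates for it.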

  3. QSAR models for prediction of chromatographic behavior of homologous Fab variants.

    Science.gov (United States)

    Robinson, Julie R; Karkov, Hanne S; Woo, James A; Krogh, Berit O; Cramer, Steven M

    2017-06-01

    While quantitative structure activity relationship (QSAR) models have been employed successfully for the prediction of small model protein chromatographic behavior, there have been few reports to date on the use of this methodology for larger, more complex proteins. Recently our group generated focused libraries of antibody Fab fragment variants with different combinations of surface hydrophobicities and electrostatic potentials, and demonstrated that the unique selectivities of multimodal resins can be exploited to separate these Fab variants. In this work, results from linear salt gradient experiments with these Fabs were employed to develop QSAR models for six chromatographic systems, including multimodal (Capto MMC, Nuvia cPrime, and two novel ligand prototypes), hydrophobic interaction chromatography (HIC; Capto Phenyl), and cation exchange (CEX; CM Sepharose FF) resins. The models utilized newly developed "local descriptors" to quantify changes around point mutations in the Fab libraries as well as novel cluster descriptors recently introduced by our group. Subsequent rounds of feature selection and linearized machine learning algorithms were used to generate robust, well-validated models with high training set correlations (R2 > 0.70) that were well suited for predicting elution salt concentrations in the various systems. The developed models then were used to predict the retention of a deamidated Fab and isotype variants, with varying success. The results represent the first successful utilization of QSAR for the prediction of chromatographic behavior of complex proteins such as Fab fragments in multimodal chromatographic systems. The framework presented here can be employed to facilitate process development for the purification of biological products from product-related impurities by in silico screening of resin alternatives. Biotechnol. Bioeng. 2017;114: 1231-1240. © 2016 Wiley Periodicals, Inc.
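
    The generic QSAR loop named here (descriptors, feature selection, a linearized model, validation by correlation) can be illustrated as follows; the descriptor matrix and retention data are synthetic, and SelectKBest/Ridge merely stand in for the feature-selection and machine-learning steps used by the authors.

        # Generic QSAR-style sketch: descriptor matrix -> feature selection ->
        # linear model predicting elution salt concentration. Synthetic data only.
        import numpy as np
        from sklearn.feature_selection import SelectKBest, f_regression
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(2)
        n_variants, n_descriptors = 40, 60
        X = rng.standard_normal((n_variants, n_descriptors))  # surface descriptors
        true_w = np.zeros(n_descriptors)
        true_w[:5] = [0.8, -0.6, 0.5, 0.4, -0.3]               # few informative ones
        elution_salt = X @ true_w + 0.1 * rng.standard_normal(n_variants)

        qsar = make_pipeline(SelectKBest(f_regression, k=10), Ridge(alpha=1.0))
        r2 = cross_val_score(qsar, X, elution_salt, cv=5, scoring="r2")
        print("cross-validated R^2:", round(float(r2.mean()), 2))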

  4. Modeling and Control of CSTR using Model based Neural Network Predictive Control

    OpenAIRE

    Shrivastava, Piyush

    2012-01-01

    This paper presents a predictive control strategy based on a neural network model of the plant, applied to a Continuous Stirred Tank Reactor (CSTR). This system is a highly nonlinear process; therefore, a nonlinear predictive method, e.g., neural network predictive control, can be a better match to govern the system dynamics. In the paper, the NN model and the way in which it can be used to predict the behavior of the CSTR process over a certain prediction horizon are described, and some commen...

  5. Consensus models to predict endocrine disruption for all ...

    Science.gov (United States)

    Humans are potentially exposed to tens of thousands of man-made chemicals in the environment. It is well known that some environmental chemicals mimic natural hormones and thus have the potential to be endocrine disruptors. Most of these environmental chemicals have never been tested for their ability to disrupt the endocrine system, in particular, their ability to interact with the estrogen receptor. EPA needs tools to prioritize thousands of chemicals, for instance in the Endocrine Disruptor Screening Program (EDSP). Collaborative Estrogen Receptor Activity Prediction Project (CERAPP) was intended to be a demonstration of the use of predictive computational models on HTS data including ToxCast and Tox21 assays to prioritize a large chemical universe of 32464 unique structures for one specific molecular target – the estrogen receptor. CERAPP combined multiple computational models for prediction of estrogen receptor activity, and used the predicted results to build a unique consensus model. Models were developed in collaboration between 17 groups in the U.S. and Europe and applied to predict the common set of chemicals. Structure-based techniques such as docking and several QSAR modeling approaches were employed, mostly using a common training set of 1677 compounds provided by U.S. EPA, to build a total of 42 classification models and 8 regression models for binding, agonist and antagonist activity. All predictions were evaluated on ToxCast data and on an exte

  6. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA) as well as more elaborated...... principles as, e.g., wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy based prediction models are discussed and critically reviewed. Special attention is placed...... on underlying basic assumptions, such as diffuse fields, high modal overlap, resonant field being dominant, etc., and the consequences of these in terms of limitations in the theory and in the practical use of the models....

  7. Modelling of the transfer of radiocaesium from deposition to lake ecosystems. Report of the VAMP aquatic working group. Part of the IAEA/CEC co-ordinated research programme on the validation of environmental model predictions (VAMP)

    International Nuclear Information System (INIS)

    2000-03-01

    The environmental impact of releases of radionuclides from nuclear installations can be predicted using assessment models. For such assessments information on their reliability must be provided. Ideally models should be developed and tested using actual data on the transfer of the nuclides which are site specific for the environment being modelled. In the past, generic data have often been taken from environmental contamination that resulted from the fallout from the nuclear weapons testing in the 1950s and 1960s or from laboratory experiments. However, it has always been recognized that there may be differences in the physico-chemical form of the radionuclides from these sources as compared to those that could be released from nuclear installations. Furthermore, weapons fallout was spread over time; it did not provide a single pulse which is generally used in testing models that predict time dependence. On the other hand, the Chernobyl accident resulted in a single pulse, which was detected and measured in a variety of environments throughout Europe. The acquisition of these new data sets justified the establishment of an international programme aimed at collating data from different IAEA Member States and at co-ordinating work on new model testing studies. The IAEA established a Co-ordinated Research Programme (CRP) on 'Validation of Environmental Model Predictions' (VAMP). The principal objectives of the VAMP Co-ordinated Research Programme were: (a) To facilitate the validation of assessment models for radionuclide transfer in the terrestrial, aquatic and urban environments. It is envisaged that this will be achieved by acquiring suitable sets of environmental data from the results of the national research and monitoring programmes established following the Chernobyl release. (b) To guide, if necessary, environmental research and monitoring efforts to acquire data for the validation of models used to assess the most significant radiological exposure pathways

  8. Comparison of Simple Versus Performance-Based Fall Prediction Models

    Directory of Open Access Journals (Sweden)

    Shekhar K. Gadkaree BS

    2015-05-01

    Full Text Available Objective: To compare the predictive ability of standard falls prediction models based on physical performance assessments with more parsimonious prediction models based on self-reported data. Design: We developed a series of fall prediction models progressing in complexity and compared area under the receiver operating characteristic curve (AUC) across models. Setting: National Health and Aging Trends Study (NHATS), which surveyed a nationally representative sample of Medicare enrollees (age ≥65) at baseline (Round 1: 2011-2012) and 1-year follow-up (Round 2: 2012-2013). Participants: In all, 6,056 community-dwelling individuals participated in Rounds 1 and 2 of NHATS. Measurements: Primary outcomes were 1-year incidence of “any fall” and “recurrent falls.” Prediction models were compared and validated in development and validation sets, respectively. Results: A prediction model that included demographic information, self-reported problems with balance and coordination, and previous fall history was the most parsimonious model that optimized AUC for both any fall (AUC = 0.69, 95% confidence interval [CI] = [0.67, 0.71]) and recurrent falls (AUC = 0.77, 95% CI = [0.74, 0.79]) in the development set. Physical performance testing provided a marginal additional predictive value. Conclusion: A simple clinical prediction model that does not include physical performance testing could facilitate routine, widespread falls risk screening in the ambulatory care setting.
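
    The comparison logic (fit nested models of increasing complexity and compare held-out AUC) can be sketched with synthetic stand-ins for the NHATS variables:

        # Sketch: compare a parsimonious self-report model with one that adds a
        # physical performance score, using held-out AUC. Synthetic data only.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(3)
        n = 2000
        age = rng.uniform(65, 90, n)
        balance_problem = rng.integers(0, 2, n)
        prior_fall = rng.integers(0, 2, n)
        performance = rng.normal(0, 1, n)     # e.g., a gait/balance test score

        logit = (0.03 * (age - 75) + 0.8 * balance_problem
                 + 1.0 * prior_fall + 0.2 * performance)
        fell = (rng.random(n) < 1 / (1 + np.exp(-logit + 1.5))).astype(int)

        X_simple = np.column_stack([age, balance_problem, prior_fall])
        X_full = np.column_stack([X_simple, performance])
        idx_train, idx_test = train_test_split(np.arange(n), test_size=0.3,
                                               random_state=0)

        for name, X in [("self-report only", X_simple), ("plus performance", X_full)]:
            model = LogisticRegression().fit(X[idx_train], fell[idx_train])
            auc = roc_auc_score(fell[idx_test], model.predict_proba(X[idx_test])[:, 1])
            print(f"{name}: AUC = {auc:.2f}")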

  9. Preclinical models used for immunogenicity prediction of therapeutic proteins.

    Science.gov (United States)

    Brinks, Vera; Weinbuch, Daniel; Baker, Matthew; Dean, Yann; Stas, Philippe; Kostense, Stefan; Rup, Bonita; Jiskoot, Wim

    2013-07-01

    All therapeutic proteins are potentially immunogenic. Antibodies formed against these drugs can decrease efficacy, leading to drastically increased therapeutic costs and in rare cases to serious and sometimes life threatening side-effects. Many efforts are therefore undertaken to develop therapeutic proteins with minimal immunogenicity. For this, immunogenicity prediction of candidate drugs during early drug development is essential. Several in silico, in vitro and in vivo models are used to predict immunogenicity of drug leads, to modify potentially immunogenic properties and to continue development of drug candidates with expected low immunogenicity. Despite the extensive use of these predictive models, their actual predictive value varies. Important reasons for this uncertainty are the limited/insufficient knowledge on the immune mechanisms underlying immunogenicity of therapeutic proteins, the fact that different predictive models explore different components of the immune system and the lack of an integrated clinical validation. In this review, we discuss the predictive models in use, summarize aspects of immunogenicity that these models predict and explore the merits and the limitations of each of the models.

  10. Validation of model predictions of pore-scale fluid distributions during two-phase flow

    Science.gov (United States)

    Bultreys, Tom; Lin, Qingyang; Gao, Ying; Raeini, Ali Q.; AlRatrout, Ahmed; Bijeljic, Branko; Blunt, Martin J.

    2018-05-01

    Pore-scale two-phase flow modeling is an important technology to study a rock's relative permeability behavior. To investigate if these models are predictive, the calculated pore-scale fluid distributions which determine the relative permeability need to be validated. In this work, we introduce a methodology to quantitatively compare models to experimental fluid distributions in flow experiments visualized with microcomputed tomography. First, we analyzed five repeated drainage-imbibition experiments on a single sample. In these experiments, the exact fluid distributions were not fully repeatable on a pore-by-pore basis, while the global properties of the fluid distribution were. Then two fractional flow experiments were used to validate a quasistatic pore network model. The model correctly predicted the fluid present in more than 75% of pores and throats in drainage and imbibition. To quantify what this means for the relevant global properties of the fluid distribution, we compare the main flow paths and the connectivity across the different pore sizes in the modeled and experimental fluid distributions. These essential topology characteristics matched well for drainage simulations, but not for imbibition. This suggests that the pore-filling rules in the network model we used need to be improved to make reliable predictions of imbibition. The presented analysis illustrates the potential of our methodology to systematically and robustly test two-phase flow models to aid in model development and calibration.

  11. Modelling of the radiological impact of radioactive waste dumping in the Arctic Seas. Report of the Modelling and Assessment Working Group of the International Arctic Seas Assessment Project (IASAP)

    International Nuclear Information System (INIS)

    2003-01-01

    The work carried out by the Modelling and Assessment Working Group in 1994-1996 is summarized. The Modelling and Assessment Working Group was established within the framework of the International Arctic Seas Assessment Project (IASAP), launched by the IAEA in 1993, with the objectives of modelling the environmental dispersal and transport of nuclides potentially released from the dumped objects and of assessing the associated radiological impact on man and biota. Models were developed to simulate the dispersal of the pollutants and to assess the radiological consequences of the releases from the dumped wastes in the Arctic. The results of the model intercomparison exercise were used as a basis on which to evaluate the estimated concentration fields when detailed source term scenarios were used and also to assess the uncertainties in the ensuing dose calculations. The description and modelling work was divided into three main phases: description of the area and collection of relevant and necessary information; extension and development of predictive models, including an extensive model inter-comparison; and finally prediction of the radiological impact, used in the evaluation of the need and options for remediation

  12. Coupling of EIT with computational lung modeling for predicting patient-specific ventilatory responses.

    Science.gov (United States)

    Roth, Christian J; Becher, Tobias; Frerichs, Inéz; Weiler, Norbert; Wall, Wolfgang A

    2017-04-01

    Providing optimal personalized mechanical ventilation for patients with acute or chronic respiratory failure remains a challenge that must be met anew for each case in the clinical setting. In this article, we integrate electrical impedance tomography (EIT) monitoring into a powerful patient-specific computational lung model to create an approach for personalizing protective ventilatory treatment. The underlying computational lung model is based on a single computed tomography scan and able to predict global airflow quantities, as well as local tissue aeration and strains for any ventilation maneuver. For validation, a novel "virtual EIT" module is added to our computational lung model, allowing to simulate EIT images based on the patient's thorax geometry and the results of our numerically predicted tissue aeration. Clinically measured EIT images are not used to calibrate the computational model. Thus they provide an independent method to validate the computational predictions at high temporal resolution. The performance of this coupling approach has been tested in an example patient with acute respiratory distress syndrome. The method shows good agreement between computationally predicted and clinically measured airflow data and EIT images. These results imply that the proposed framework can be used for numerical prediction of patient-specific responses to certain therapeutic measures before applying them to an actual patient. In the long run, definition of patient-specific optimal ventilation protocols might be assisted by computational modeling. NEW & NOTEWORTHY In this work, we present a patient-specific computational lung model that is able to predict global and local ventilatory quantities for a given patient and any selected ventilation protocol. For the first time, such a predictive lung model is equipped with a virtual electrical impedance tomography module allowing real-time validation of the computed results with the patient measurements. First promising results

  13. Development and validation of a prediction model for long-term sickness absence based on occupational health survey variables

    DEFF Research Database (Denmark)

    Roelen, Corné; Thorsen, Sannie; Heymans, Martijn

    2018-01-01

    LTSA during follow-up. Results: The 15-predictor model was reduced to a 9-predictor model including age, gender, education, self-rated health, mental health, prior LTSA, work ability, emotional job demands, and recognition by the management. Discrimination by the 9-predictor model was significant (AUC...... population. Implications for rehabilitation Long-term sickness absence risk predictions would enable healthcare providers to refer high-risk employees to rehabilitation programs aimed at preventing or reducing work disability. A prediction model based on health survey variables discriminates between...... employees at high and low risk of long-term sickness absence, but discrimination was not practically useful. Health survey variables provide insufficient information to determine long-term sickness absence risk profiles. There is a need for new variables, based on the knowledge and experience...

  14. Bayesian Predictive Models for Rayleigh Wind Speed

    DEFF Research Database (Denmark)

    Shahirinia, Amir; Hajizadeh, Amin; Yu, David C

    2017-01-01

    predictive model of the wind speed aggregates the non-homogeneous distributions into a single continuous distribution. Therefore, the result is able to capture the variation among the probability distributions of the wind speeds at the turbines’ locations in a wind farm. More specifically, instead of using...... a wind speed distribution whose parameters are known or estimated, the parameters are considered as random whose variations are according to probability distributions. The Bayesian predictive model for a Rayleigh which only has a single model scale parameter has been proposed. Also closed-form posterior...... and predictive inferences under different reasonable choices of prior distribution in sensitivity analysis have been presented....
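
    One standard conjugate treatment of a single-scale-parameter Rayleigh model (an inverse-gamma prior on the squared scale yields a closed-form posterior and an easy Monte Carlo posterior predictive) can be sketched as follows; the prior hyperparameters and wind record are invented, and this is not necessarily the prior family chosen by the authors.

        # Conjugate Bayesian sketch for Rayleigh wind speed: inverse-gamma prior on
        # the squared scale parameter, Monte Carlo posterior predictive draws.
        # Prior values and data are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(4)
        v = rng.rayleigh(scale=7.0, size=200)    # synthetic wind-speed record (m/s)

        # Prior: sigma^2 ~ Inverse-Gamma(a0, b0); the Rayleigh likelihood is conjugate.
        a0, b0 = 2.0, 50.0
        a_post = a0 + len(v)
        b_post = b0 + 0.5 * np.sum(v**2)

        # Posterior predictive by simulation: draw sigma^2, then draw a wind speed.
        sigma2_draws = b_post / rng.gamma(a_post, 1.0, size=20000)  # inverse-gamma
        v_pred = rng.rayleigh(scale=np.sqrt(sigma2_draws))

        print("posterior mean of sigma^2:", round(float(b_post / (a_post - 1)), 1))
        print("90% predictive interval (m/s):",
              np.round(np.percentile(v_pred, [5, 95]), 1))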

  15. Modeling and Prediction Using Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp

    2016-01-01

    Pharmacokinetic/pharmacodynamic (PK/PD) modeling for a single subject is most often performed using nonlinear models based on deterministic ordinary differential equations (ODEs), and the variation between subjects in a population of subjects is described using a population (mixed effects) setup...... deterministic and can predict the future perfectly. A more realistic approach would be to allow for randomness in the model due to, e.g., the model being too simple or errors in the input. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs...

  16. Decision Making in Reference to Model of Marketing Predictive Analytics – Theory and Practice

    Directory of Open Access Journals (Sweden)

    Piotr Tarka

    2014-03-01

    Full Text Available Purpose: The objective of this paper is to describe the concepts and assumptions of predictive marketing analytics in reference to decision making. In particular, we highlight issues pertaining to the importance of data and the modern approach to data analysis and processing with the purpose of solving real marketing problems that companies encounter in business. Methodology: In this paper the authors provide two case studies showing how, and to what extent, predictive marketing analytics can be useful in practice, e.g., in the investigation of the marketing environment. The two cases are based on organizations operating mainly in the Web domain. The first part of this article begins the discussion with an explanation of the general idea of predictive marketing analytics. The second part runs through the opportunities it creates for companies in the process of building a strong competitive advantage in the market. The paper ends with a brief comparison of predictive analytics versus traditional marketing-mix analysis. Findings: Analytics play an extremely important role in the current process of business management based on planning, organizing, implementing and controlling marketing activities. Predictive analytics provides an actual and current picture of the external environment. It also explains what problems the company faces in its business activities. Analytics tailor marketing solutions to the right time and place at minimum cost; in fact they control the efficiency and simultaneously increase the effectiveness of the firm. Practical implications: Based on the case studies comparing two enterprises carrying out business activities in different areas, one can say that predictive analytics has been embraced far more extensively than classical marketing-mix analyses. The predictive approach yields greater speed of data collection and analysis, stronger predictive accuracy, better obtained competitor data, and more transparent models where one can

  17. Prediction of hourly solar radiation with multi-model framework

    International Nuclear Information System (INIS)

    Wu, Ji; Chan, Chee Keong

    2013-01-01

    Highlights: • A novel approach to predict solar radiation through the use of clustering paradigms. • Development of prediction models based on the intrinsic pattern observed in each cluster. • Prediction based on proper clustering and selection of the model for the current period provides better results than other methods. • Experiments were conducted on actual solar radiation data obtained from a weather station in Singapore. - Abstract: In this paper, a novel multi-model prediction framework for prediction of solar radiation is proposed. The framework starts with the assumption that there are several patterns embedded in the solar radiation series. To extract the underlying patterns, the solar radiation series is first segmented into smaller subsequences, and the subsequences are further grouped into different clusters. For each cluster, an appropriate prediction model is trained. A procedure for pattern identification is then developed to identify the pattern that fits the current period. Based on this pattern, the corresponding prediction model is applied to obtain the prediction value. The prediction result of the proposed framework is then compared to other techniques. It is shown that the proposed framework provides superior performance as compared to others
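
    A generic sketch of the cluster-then-model framework (synthetic hourly series, arbitrary window length and cluster count; not the Singapore data or the paper's specific models): segment the series into subsequences, cluster them, fit one predictor per cluster, and predict with the model of the cluster closest to the current pattern.

        # Sketch of a multi-model framework: cluster subsequences of a series,
        # train one predictor per cluster, select the model matching the pattern.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.linear_model import Ridge

        rng = np.random.default_rng(5)
        t = np.arange(24 * 200)
        series = np.maximum(0, np.sin(2 * np.pi * t / 24)) * (0.6 + 0.4 * rng.random(len(t)))

        window = 6  # use the previous 6 hours to predict the next hour
        X = np.array([series[i:i + window] for i in range(len(series) - window)])
        y = series[window:]

        k = 4
        clusters = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
        models = {c: Ridge().fit(X[clusters.labels_ == c], y[clusters.labels_ == c])
                  for c in range(k)}

        current = series[-window:]               # the most recent pattern
        c = int(clusters.predict(current.reshape(1, -1))[0])
        prediction = models[c].predict(current.reshape(1, -1))[0]
        print("next-hour prediction:", round(float(prediction), 3))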

  18. Revised predictive equations for salt intrusion modelling in estuaries

    NARCIS (Netherlands)

    Gisen, J.I.A.; Savenije, H.H.G.; Nijzink, R.C.

    2015-01-01

    For one-dimensional salt intrusion models to be predictive, we need predictive equations to link model parameters to observable hydraulic and geometric variables. The one-dimensional model of Savenije (1993b) made use of predictive equations for the Van der Burgh coefficient $K$ and the dispersion

  19. Preprocedural Prediction Model for Contrast-Induced Nephropathy Patients.

    Science.gov (United States)

    Yin, Wen-Jun; Yi, Yi-Hu; Guan, Xiao-Feng; Zhou, Ling-Yun; Wang, Jiang-Lin; Li, Dai-Yang; Zuo, Xiao-Cong

    2017-02-03

    Several models have been developed for prediction of contrast-induced nephropathy (CIN); however, they only include patients receiving intra-arterial contrast media for coronary angiographic procedures, which represent a small proportion of all contrast procedures. In addition, most of them evaluate radiological interventional procedure-related variables. It is therefore necessary to develop a model that predicts CIN before radiological procedures among patients administered contrast media. A total of 8800 patients undergoing contrast administration were randomly assigned in a 4:1 ratio to development and validation data sets. CIN was defined as an increase of 25% and/or 0.5 mg/dL in serum creatinine within 72 hours above the baseline value. Preprocedural clinical variables were used to develop the prediction model from the training data set by the machine learning method of random forest, and 5-fold cross-validation was used to evaluate the prediction accuracies of the model. Finally we tested this model in the validation data set. The incidence of CIN was 13.38%. We built a prediction model with 13 preprocedural variables selected from 83 variables. The model obtained an area under the receiver-operating characteristic (ROC) curve (AUC) of 0.907 and gave prediction accuracy of 80.8%, sensitivity of 82.7%, specificity of 78.8%, and Matthews correlation coefficient of 61.5%. For the first time, 3 new factors are included in the model: the decreased sodium concentration, the INR value, and the preprocedural glucose level. The newly established model shows excellent ability to predict CIN development and thereby supports preventative measures for CIN. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
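
    The modeling recipe (preprocedural variables, a random forest, 5-fold cross-validated AUC) can be sketched generically; the data are synthetic, with the three newly reported factors included only for illustration.

        # Sketch: random-forest prediction of contrast-induced nephropathy (CIN)
        # from preprocedural variables, 5-fold cross-validated AUC. Synthetic data.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(6)
        n = 2000
        sodium = rng.normal(140, 4, n)       # serum sodium (mmol/L)
        inr = rng.lognormal(0.05, 0.15, n)   # INR
        glucose = rng.normal(6.5, 2.0, n)    # preprocedural glucose (mmol/L)
        age = rng.uniform(30, 90, n)

        risk = (0.15 * (135 - sodium) + 1.2 * (inr - 1.0)
                + 0.2 * (glucose - 6.0) + 0.02 * (age - 60))
        cin = (rng.random(n) < 1 / (1 + np.exp(-(risk - 2.0)))).astype(int)

        X = np.column_stack([sodium, inr, glucose, age])
        rf = RandomForestClassifier(n_estimators=200, random_state=0)
        auc = cross_val_score(rf, X, cin, cv=5, scoring="roc_auc")
        print("5-fold AUC:", round(float(auc.mean()), 2))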

  20. Time dependent patient no-show predictive modelling development.

    Science.gov (United States)

    Huang, Yu-Li; Hanauer, David A

    2016-05-09

    Purpose - The purpose of this paper is to develop evidence-based predictive no-show models that consider each of a patient's past appointment statuses, a time-dependent component, as an independent predictor to improve predictability. Design/methodology/approach - A ten-year retrospective data set was extracted from a pediatric clinic. It consisted of 7,291 distinct patients who had at least two visits along with their appointment characteristics, patient demographics, and insurance information. Logistic regression was adopted to develop no-show models using two-thirds of the data for training and the remaining data for validation. The no-show threshold was then determined based on minimizing the misclassification of show/no-show assignments. A total of 26 predictive models were developed, based on the number of available past appointments. Simulation was employed to test the effect of each model on the costs of patient wait time, physician idle time, and overtime. Findings - The results demonstrated that the misclassification rate and the area under the receiver operating characteristic curve gradually improved as more appointment history was included, until around the 20th predictive model. The overbooking method with no-show predictive models suggested incorporating up to the 16th model and outperformed other overbooking methods by as much as 9.4 per cent in the cost per patient while allowing two additional patients in a clinic day. Research limitations/implications - The challenge now is to actually implement the no-show predictive model systematically to further demonstrate its robustness and simplicity in various scheduling systems. Originality/value - This paper provides examples of how to build no-show predictive models with time-dependent components to improve the overbooking policy. Accurately identifying scheduled patients' show/no-show status allows clinics to proactively schedule patients to reduce the negative impact of patient no-shows.
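
    A minimal sketch of the time-dependent idea (each of a patient's k most recent appointment statuses enters the logistic model as its own predictor); the appointment histories below are synthetic, not the pediatric-clinic data.

        # Sketch: logistic no-show model whose predictors are the statuses of the
        # k most recent past appointments (1 = no-show). Synthetic histories only.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(7)
        n_patients, k_history = 3000, 5
        base_rate = rng.beta(2, 8, n_patients)   # latent no-show tendency per patient
        history = (rng.random((n_patients, k_history)) < base_rate[:, None]).astype(int)
        next_status = (rng.random(n_patients) < base_rate).astype(int)

        model = LogisticRegression().fit(history, next_status)
        p = model.predict_proba(history)[:, 1]
        threshold = np.quantile(p, 0.8)          # flag the top 20% as likely no-shows
        print("AUC:", round(roc_auc_score(next_status, p), 2),
              "| no-show rate among flagged:",
              round(float(next_status[p >= threshold].mean()), 2))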

  1. The role of ability, motivation, and opportunity to work in the transition from work to early retirement--testing and optimizing the Early Retirement Model.

    Science.gov (United States)

    de Wind, Astrid; Geuskens, Goedele A; Ybema, Jan Fekke; Bongers, Paulien M; van der Beek, Allard J

    2015-01-01

    Determinants in the domains health, job characteristics, skills, and social and financial factors may influence early retirement through three central explanatory variables, namely, the ability, motivation, and opportunity to work. Based on the literature, we created the Early Retirement Model. This study aims to investigate whether data support the model and how it could be improved. Employees aged 58-62 years (N=1862), who participated in the first three waves of the Dutch Study on Transitions in Employment, Ability and Motivation (STREAM), were included. Determinants were assessed at baseline, central explanatory variables after one year, and early retirement after two years. Structural equation modeling was applied. Testing the Early Retirement Model resulted in a model with good fit. Health, job characteristics, skills, and social and financial factors were related to the ability, motivation and/or opportunity to work (significant β range: 0.05-0.31). Lower work ability (β=-0.13) and less opportunity to work (attitude of colleagues and supervisor about working until age 65: β=-0.24) predicted early retirement, whereas the motivation to work (work engagement) did not. The model could be improved by adding direct effects of three determinants on early retirement, i.e., support of colleagues and supervisor (β=0.14), positive attitude of the partner with respect to early retirement (β=0.15), and not having a partner (β=-0.13). The Early Retirement Model was largely supported by the data but could be improved. The prolongation of working life might be promoted by work-related interventions focusing on health, work ability, the social work climate, social norms on prolonged careers, and the learning environment.

  2. A fully unsteady prescribed wake model for HAWT performance prediction in yawed flow

    Energy Technology Data Exchange (ETDEWEB)

    Coton, F.N.; Tongguang, Wang; Galbraith, R.A.M.; Lee, D. [Univ. of Glasgow (United Kingdom)

    1997-12-31

    This paper describes the development of a fast, accurate, aerodynamic prediction scheme for yawed flow on horizontal axis wind turbines (HAWTs). The method is a fully unsteady three-dimensional model which has been developed over several years and is still being enhanced in a number of key areas. The paper illustrates the current ability of the method by comparison with field data from the NREL combined experiment and also describes the developmental work in progress. In particular, an experimental test programme designed to yield quantitative wake convection information is summarised together with modifications to the numerical model which are necessary for meaningful comparison with the experiments. Finally, current and future work on aspects such as tower-shadow and improved unsteady aerodynamic modelling are discussed.

  3. Model predictive control using fuzzy decision functions

    NARCIS (Netherlands)

    Kaymak, U.; Costa Sousa, da J.M.

    2001-01-01

    Fuzzy predictive control integrates conventional model predictive control with techniques from fuzzy multicriteria decision making, translating the goals and the constraints to predictive control in a transparent way. The information regarding the (fuzzy) goals and the (fuzzy) constraints of the

  4. Predicting and Modelling of Survival Data when Cox's Regression Model does not hold

    DEFF Research Database (Denmark)

    Scheike, Thomas H.; Zhang, Mei-Jie

    2002-01-01

    Aalen model; additive risk model; counting processes; competing risk; Cox regression; flexible modeling; goodness of fit; prediction of survival; survival analysis; time-varying effects

  5. Evaluating the Predictive Value of Growth Prediction Models

    Science.gov (United States)

    Murphy, Daniel L.; Gaertner, Matthew N.

    2014-01-01

    This study evaluates four growth prediction models--projection, student growth percentile, trajectory, and transition table--commonly used to forecast (and give schools credit for) middle school students' future proficiency. Analyses focused on vertically scaled summative mathematics assessments, and two performance standards conditions (high…

  6. Incorporating a prediction of postgrazing herbage mass into a whole-farm model for pasture-based dairy systems.

    Science.gov (United States)

    Gregorini, P; Galli, J; Romera, A J; Levy, G; Macdonald, K A; Fernandez, H H; Beukes, P C

    2014-07-01

    The DairyNZ whole-farm model (WFM; DairyNZ, Hamilton, New Zealand) consists of a framework that links component models for animal, pastures, crops, and soils. The model was developed to assist with analysis and design of pasture-based farm systems. New (this work) and revised (e.g., cow, pasture, crops) component models can be added to the WFM, keeping the model flexible and up to date. Nevertheless, the WFM does not account for plant-animal relationships determining herbage-depletion dynamics. The user has to preset the maximum allowable level of herbage depletion [i.e., postgrazing herbage mass (residuals)] throughout the year. Because residuals have a direct effect on herbage regrowth, the WFM in its current form does not dynamically simulate the effect of grazing pressure on herbage depletion and the consequent effect on herbage regrowth. The management of grazing pressure is a key component of pasture-based dairy systems. Thus, the main objective of the present work was to develop a new version of the WFM able to predict residuals, and thereby simulate related effects of grazing pressure dynamically at the farm scale. This objective was accomplished by incorporating a new component model into the WFM. This model represents plant-animal relationships, for example sward structure and herbage intake rate, and the resulting level of herbage depletion. The sensitivity of the new version of the WFM was evaluated and then the new WFM was tested against an experimental data set previously used to evaluate the WFM and to illustrate the adequacy and improvement of the model development. Key output variables of the new version pertinent to this work (milk production, herbage dry matter intake, intake rate, harvesting efficiency, and residuals) responded acceptably to a range of input variables. The relative prediction errors for monthly and mean annual residual predictions were 20 and 5%, respectively. Monthly predictions of residuals had a line bias (1.5%), with a proportion

  7. Word Memory Test Predicts Recovery in Claimants With Work-Related Head Injury.

    Science.gov (United States)

    Colangelo, Annette; Abada, Abigail; Haws, Calvin; Park, Joanne; Niemeläinen, Riikka; Gross, Douglas P

    2016-05-01

    To investigate the predictive validity of the Word Memory Test (WMT), a verbal memory neuropsychological test developed as a performance validity measure to assess memory, effort, and performance consistency. Cohort study with 1-year follow-up. Workers' compensation rehabilitation facility. Participants included workers' compensation claimants with work-related head injury (N=188; mean age, 44y; 161 men [85.6%]). Not applicable. Outcome measures for determining predictive validity included days to suspension of wage replacement benefits during the 1-year follow-up and work status at discharge in claimants undergoing rehabilitation. Analysis included multivariable Cox and logistic regression. Better WMT performance was significantly but weakly correlated with younger age (r=-.30), documented brain abnormality (r=.28), and loss of consciousness at the time of injury (r=.25). Claimants with documented brain abnormalities on diagnostic imaging scans performed better (∼9%) on the WMT than those without brain abnormalities. The WMT predicted days receiving benefits (adjusted hazard ratio, 1.13; 95% confidence interval, 1.04-1.24) and work status outcome at program discharge (adjusted odds ratio, 1.62; 95% confidence interval, 1.13-2.34). Our results provide evidence for the predictive validity of the WMT in workers' compensation claimants. Younger claimants and those with more severe brain injuries performed better on the WMT. It may be that financial incentives or other factors related to the compensation claim affected the performance. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
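
    The survival-analysis side of this study (days to suspension of benefits modelled with Cox regression) can be sketched as follows; the data frame is entirely synthetic and the lifelines package is assumed, so this is an illustration of the technique rather than the authors' analysis.

```python
# Hedged sketch: a Cox proportional hazards model relating a WMT-style score to
# days on wage-replacement benefits. All values below are synthetic.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 188
wmt = rng.uniform(50, 100, n)                     # hypothetical WMT score (%)
age = rng.normal(44, 10, n)
# Higher WMT score -> shorter time on benefits (higher hazard of suspension)
baseline = rng.exponential(scale=200, size=n)
days = baseline * np.exp(-0.01 * (wmt - wmt.mean()))
event = rng.binomial(1, 0.8, n)                   # 1 = benefits suspended within follow-up

df = pd.DataFrame({"days": days, "suspended": event, "wmt": wmt, "age": age})
cph = CoxPHFitter()
cph.fit(df, duration_col="days", event_col="suspended")
cph.print_summary()                               # hazard ratios per unit covariate
```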

  8. PK/DB: database for pharmacokinetic properties and predictive in silico ADME models.

    Science.gov (United States)

    Moda, Tiago L; Torres, Leonardo G; Carrara, Alexandre E; Andricopulo, Adriano D

    2008-10-01

    The study of pharmacokinetic properties (PK) is of great importance in drug discovery and development. In the present work, PK/DB (a new freely available database for PK) was designed with the aim of creating robust databases for pharmacokinetic studies and in silico absorption, distribution, metabolism and excretion (ADME) prediction. Comprehensive, web-based and easy to access, PK/DB manages 1203 compounds which represent 2973 pharmacokinetic measurements, including five models for in silico ADME prediction (human intestinal absorption, human oral bioavailability, plasma protein binding, blood-brain barrier and water solubility). http://www.pkdb.ifsc.usp.br

  9. Predictive model for local scour downstream of hydrokinetic turbines in erodible channels

    Science.gov (United States)

    Musa, Mirko; Heisel, Michael; Guala, Michele

    2018-02-01

    A modeling framework is derived to predict the scour induced by marine hydrokinetic turbines installed on fluvial or tidal erodible bed surfaces. Following recent advances in bridge scour formulation, the phenomenological theory of turbulence is applied to describe the flow structures that dictate the equilibrium scour depth condition at the turbine base. Using scaling arguments, we link the turbine operating conditions to the flow structures and scour depth through the drag force exerted by the device on the flow. The resulting theoretical model predicts scour depth using dimensionless parameters and considers two potential scenarios depending on the proximity of the turbine rotor to the erodible bed. The model is validated at the laboratory scale with experimental data comprising the two sediment mobility regimes (clear water and live bed), different turbine configurations, hydraulic settings, bed material compositions, and migrating bedform types. The present work provides future developers of flow energy conversion technologies with a physics-based predictive formula for local scour depth beneficial to feasibility studies and anchoring system design. A potential prototype-scale deployment in a large sandy river is also considered with our model to quantify how the expected scour depth varies as a function of the flow discharge and rotor diameter.

  10. Detection of Adverse Reaction to Drugs in Elderly Patients through Predictive Modeling

    Directory of Open Access Journals (Sweden)

    Rafael San-Miguel Carrasco

    2016-03-01

    Full Text Available Geriatrics Medicine constitutes a clinical research field in which data analytics, particularly predictive modeling, can deliver compelling, reliable and long-lasting benefits, as well as non-intuitive clinical insights and net new knowledge. The research work described in this paper leverages predictive modeling to uncover new insights related to adverse reaction to drugs in elderly patients. The differentiation factor that sets this research exercise apart from traditional clinical research is the fact that it was not designed by formulating a particular hypothesis to be validated. Instead, it was data-centric, with data being mined to discover relationships or correlations among variables. Regression techniques were systematically applied to data through multiple iterations and under different configurations. The obtained results after the process was completed are explained and discussed next.

  11. Work-family conflict as a mediator of the work stress - mental health relationship

    OpenAIRE

    Poelmans, Steven

    2001-01-01

    The relationship between work stressors and mental health outcomes has been demonstrated in a whole range of work stress models and studies. But less has been written about factors outside the work setting that might predict or moderate the relationship between work stressors and strain. In this exploratory study, we suggest a model linking work stressors and "time-based" work-family conflict (TWFC) with mental health, with the intention to contribute to the refinement of the traditional work...

  12. Activity Prediction of Schiff Base Compounds using Improved QSAR Models of Cinnamaldehyde Analogues and Derivatives

    Directory of Open Access Journals (Sweden)

    Hui Wang

    2015-10-01

    Full Text Available In past work, QSAR (quantitative structure-activity relationship) models of cinnamaldehyde analogues and derivatives (CADs) have been used to predict the activities of new chemicals based on their mass concentrations, but these approaches are not without shortcomings. Therefore, molar concentrations were used instead of mass concentrations to determine antifungal activity. New QSAR models of CADs against Aspergillus niger and Penicillium citrinum were established, and the molecular design of new CADs was performed. The antifungal properties of the designed CADs were tested, and the experimental Log AR values were in agreement with the predicted Log AR values. The results indicate that the improved QSAR models are more reliable and can be effectively used for CADs molecular design and prediction of the activity of CADs. These findings provide new insight into the development and utilization of cinnamaldehyde compounds.

  13. Uncertainties in model-based outcome predictions for treatment planning

    International Nuclear Information System (INIS)

    Deasy, Joseph O.; Chao, K.S. Clifford; Markman, Jerry

    2001-01-01

    Purpose: Model-based treatment-plan-specific outcome predictions (such as normal tissue complication probability [NTCP] or the relative reduction in salivary function) are typically presented without reference to underlying uncertainties. We provide a method to assess the reliability of treatment-plan-specific dose-volume outcome model predictions. Methods and Materials: A practical method is proposed for evaluating model prediction based on the original input data together with bootstrap-based estimates of parameter uncertainties. The general framework is applicable to continuous variable predictions (e.g., prediction of long-term salivary function) and dichotomous variable predictions (e.g., tumor control probability [TCP] or NTCP). Using bootstrap resampling, a histogram of the likelihood of alternative parameter values is generated. For a given patient and treatment plan we generate a histogram of alternative model results by computing the model predicted outcome for each parameter set in the bootstrap list. Residual uncertainty ('noise') is accounted for by adding a random component to the computed outcome values. The residual noise distribution is estimated from the original fit between model predictions and patient data. Results: The method is demonstrated using a continuous-endpoint model to predict long-term salivary function for head-and-neck cancer patients. Histograms represent the probabilities for the level of posttreatment salivary function based on the input clinical data, the salivary function model, and the three-dimensional dose distribution. For some patients there is significant uncertainty in the prediction of xerostomia, whereas for other patients the predictions are expected to be more reliable. In contrast, TCP and NTCP endpoints are dichotomous, and parameter uncertainties should be folded directly into the estimated probabilities, thereby improving the accuracy of the estimates. Using bootstrap parameter estimates, competing treatment
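
    The bootstrap procedure described above can be illustrated with a minimal sketch: resample the original (dose, outcome) pairs, refit the outcome model for each resample, evaluate every refitted model for one treatment plan, and add residual noise estimated from the original fit. The log-linear dose-response form and the data below are assumptions, not the authors' salivary-function model.

```python
# Bootstrap-based uncertainty of a model-based outcome prediction (synthetic data).
import numpy as np

rng = np.random.default_rng(2)
mean_dose = rng.uniform(5, 45, 60)                                  # Gy, synthetic patients
outcome = 100 * np.exp(-0.05 * mean_dose) + rng.normal(0, 8, 60)    # % salivary function

def fit(d, y):
    # simple log-linear fit standing in for the published dose-response model
    return np.polyfit(d, np.log(np.clip(y, 1e-3, None)), 1)

coef = fit(mean_dose, outcome)
residual_sd = np.std(outcome - np.exp(np.polyval(coef, mean_dose)))

new_plan_dose = 28.0                                       # Gy for the plan under review
n_boot = 2000
predictions = np.empty(n_boot)
for b in range(n_boot):
    idx = rng.integers(0, len(outcome), len(outcome))      # bootstrap resample
    c = fit(mean_dose[idx], outcome[idx])
    predictions[b] = np.exp(np.polyval(c, new_plan_dose)) + rng.normal(0, residual_sd)

print(f"median predicted function: {np.median(predictions):.1f}%")
print(f"90% interval: {np.percentile(predictions, [5, 95]).round(1)}")
```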

  14. Prediction of transient maximum heat flux based on a simple liquid layer evaporation model

    International Nuclear Information System (INIS)

    Serizawa, A.; Kataoka, I.

    1981-01-01

    A model of liquid layer evaporation with considerable supply of liquid has been formulated to predict burnout characteristics (maximum heat flux, life, etc.) during an increase of the power. The analytical description of the model is built upon the visual and photographic observations of the boiling configuration at near peak heat flux reported by other investigators. The prediction compares very favourably with water data presently available. It is suggested from the work reported here that the maximum heat flux occurs because of a balance between the consumption of the liquid film on the heated surface and the supply of liquid. Thickness of the liquid film is also very important. (author)

  15. Prediction error, ketamine and psychosis: An updated model.

    Science.gov (United States)

    Corlett, Philip R; Honey, Garry D; Fletcher, Paul C

    2016-11-01

    In 2007, we proposed an explanation of delusion formation as aberrant prediction error-driven associative learning. Further, we argued that the NMDA receptor antagonist ketamine provided a good model for this process. Subsequently, we validated the model in patients with psychosis, relating aberrant prediction error signals to delusion severity. During the ensuing period, we have developed these ideas, drawing on the simple principle that brains build a model of the world and refine it by minimising prediction errors, as well as using it to guide perceptual inferences. While previously we focused on the prediction error signal per se, an updated view takes into account its precision, as well as the precision of prior expectations. With this expanded perspective, we see several possible routes to psychotic symptoms - which may explain the heterogeneity of psychotic illness, as well as the fact that other drugs, with different pharmacological actions, can produce psychotomimetic effects. In this article, we review the basic principles of this model and highlight specific ways in which prediction errors can be perturbed, in particular considering the reliability and uncertainty of predictions. The expanded model explains hallucinations as perturbations of the uncertainty mediated balance between expectation and prediction error. Here, expectations dominate and create perceptions by suppressing or ignoring actual inputs. Negative symptoms may arise due to poor reliability of predictions in service of action. By mapping from biology to belief and perception, the account proffers new explanations of psychosis. However, challenges remain. We attempt to address some of these concerns and suggest future directions, incorporating other symptoms into the model, building towards better understanding of psychosis. © The Author(s) 2016.

  16. Predictive analytics technology review: Similarity-based modeling and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Herzog, James; Doan, Don; Gandhi, Devang; Nieman, Bill

    2010-09-15

    Over 11 years ago, SmartSignal introduced Predictive Analytics for eliminating equipment failures, using its patented SBM technology. SmartSignal continues to lead and dominate the market and, in 2010, went one step further and introduced Predictive Diagnostics. Now, SmartSignal is combining Predictive Diagnostics with RCM methodology and industry expertise. FMEA logic reengineers maintenance work management, eliminates unneeded inspections, and focuses efforts on the real issues. This integrated solution significantly lowers maintenance costs, protects against critical asset failures, improves commercial availability, and reduces work orders by 20-40%.

  17. Driving-behavior-aware stochastic model predictive control for plug-in hybrid electric buses

    International Nuclear Information System (INIS)

    Li, Liang; You, Sixiong; Yang, Chao; Yan, Bingjie; Song, Jian; Chen, Zheng

    2016-01-01

    Highlights:
    • The novel approximated global optimal energy management strategy has been proposed for hybrid powertrains.
    • Eight typical driving behaviors have been classified with K-means to deal with complicated traffic conditions.
    • Stochastic driver models for the different driving behaviors were established based on Markov chains.
    • ECMS was used to modify the SMPC-based energy management strategy to improve its fuel economy.
    • The approximated global optimal energy management strategy for plug-in hybrid electric buses has been verified and analyzed.
    Abstract: Driving cycles of a city bus are statistically characterized by some repetitive features, which makes a predictive energy management strategy very desirable for obtaining approximately optimal fuel economy of a plug-in hybrid electric bus. However, dealing with complicated traffic conditions and finding an approximated global optimal strategy applicable to the plug-in hybrid electric bus remains challenging. To solve this problem, a novel driving-behavior-aware modified stochastic model predictive control method is proposed for the plug-in hybrid electric bus. Firstly, K-means is employed to classify driving behaviors, and driver models based on Markov chains are obtained for the different kinds of driving behaviors. With the identified driving behaviors regarded as stochastic disturbance inputs, a local minimum of fuel consumption may be obtained with traditional stochastic model predictive control at each step, taking tracking of the reference battery state-of-charge trajectory into consideration over the finite predictive horizon. However, this technique is still accompanied by some working points with reduced/worsened fuel economy. Thus, the stochastic model predictive control is modified with the equivalent consumption minimization strategy to eliminate these undesirable working points. The results on real-world city bus routes show that the
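
    Two ingredients of the strategy above, the K-means grouping of trip segments into driving-behaviour classes and the Markov-chain driver model (a transition matrix over discretised power-demand states), can be sketched as follows; the features, state grid and data are assumptions for illustration only.

```python
# (1) K-means classification of driving-behaviour classes from segment features,
# (2) Markov-chain driver model: empirical transition matrix over power-demand states.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# Hypothetical per-segment features: mean speed, speed std, mean acceleration
segments = np.column_stack([rng.uniform(10, 60, 500),
                            rng.uniform(1, 15, 500),
                            rng.uniform(-1, 1, 500)])
behaviour = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(segments)

# Markov driver model for one behaviour class: discretise power demand and count
# state-to-state transitions along the time series of that class.
power_demand = rng.normal(30, 10, 5000)                          # kW, synthetic trace
states = np.digitize(power_demand, bins=np.linspace(0, 60, 7))   # 8 states: 0..7
P = np.zeros((8, 8))
for s, s_next in zip(states[:-1], states[1:]):
    P[s, s_next] += 1
P = P / np.clip(P.sum(axis=1, keepdims=True), 1, None)  # row-normalised transition matrix
print(P.round(2))
```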

  18. Hologram QSAR model for the prediction of human oral bioavailability.

    Science.gov (United States)

    Moda, Tiago L; Montanari, Carlos A; Andricopulo, Adriano D

    2007-12-15

    A drug intended for use in humans should have an ideal balance of pharmacokinetics and safety, as well as potency and selectivity. Unfavorable pharmacokinetics can negatively affect the clinical development of many otherwise promising drug candidates. A variety of in silico ADME (absorption, distribution, metabolism, and excretion) models are receiving increased attention due to a better appreciation that pharmacokinetic properties should be considered in early phases of the drug discovery process. Human oral bioavailability is an important pharmacokinetic property, which is directly related to the amount of drug available in the systemic circulation to exert pharmacological and therapeutic effects. In the present work, hologram quantitative structure-activity relationships (HQSAR) were performed on a training set of 250 structurally diverse molecules with known human oral bioavailability. The most significant HQSAR model (q² = 0.70, r² = 0.93) was obtained using atoms, bonds, connections, and chirality as fragment distinction. The predictive ability of the model was evaluated by an external test set containing 52 molecules not included in the training set, and the predicted values were in good agreement with the experimental values. The HQSAR model should be useful for the design of new drug candidates having increased bioavailability as well as in the process of chemical library design, virtual screening, and high-throughput screening.

  19. M5 model tree based predictive modeling of road accidents on non-urban sections of highways in India.

    Science.gov (United States)

    Singh, Gyanendra; Sachdeva, S N; Pal, Mahesh

    2016-11-01

    This work examines the application of the M5 model tree and the conventionally used fixed/random effect negative binomial (FENB/RENB) regression models for accident prediction on non-urban sections of highway in Haryana (India). Road accident data for a period of 2-6 years on different sections of 8 National and State Highways in Haryana was collected from police records. Data related to road geometry, traffic and road environment related variables was collected through field studies. A total of two hundred and twenty-two data points were gathered by dividing highways into sections with certain uniform geometric characteristics. For prediction of accident frequencies using fifteen input parameters, two modeling approaches were used: FENB/RENB regression and the M5 model tree. Results suggest that both models perform comparably well in terms of correlation coefficient and root mean square error values. The M5 model tree provides simple linear equations that are easy to interpret and provide better insight, indicating that this approach can effectively be used as an alternative to the RENB approach if the sole purpose is to predict motor vehicle crashes. Sensitivity analysis using the M5 model tree also suggests that its results reflect the physical conditions. Both models clearly indicate that to improve safety on Indian highways minor accesses to the highways need to be properly designed and controlled, the service roads to be made functional and dispersion of speeds is to be brought down. Copyright © 2016 Elsevier Ltd. All rights reserved.
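
    The count-data side of this comparison can be sketched with a negative binomial GLM for accident frequency per highway section, with a regression tree standing in (roughly) for the M5 model tree, which additionally fits linear models in its leaves; the variables and data below are synthetic assumptions.

```python
# Negative binomial GLM vs. a tree model for accident counts per highway section.
import numpy as np
import statsmodels.api as sm
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(4)
n = 222                                   # sections, as in the study
aadt = rng.uniform(5, 60, n)              # traffic volume (thousand veh/day), assumed
access_density = rng.uniform(0, 10, n)    # minor accesses per km, assumed
X = np.column_stack([aadt, access_density])

mu = np.exp(0.5 + 0.03 * aadt + 0.10 * access_density)
accidents = rng.poisson(mu * rng.gamma(2.0, 0.5, n))   # over-dispersed counts

nb = sm.GLM(accidents, sm.add_constant(X),
            family=sm.families.NegativeBinomial(alpha=1.0)).fit()
print(nb.params)                          # intercept and coefficients on the log scale

tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, accidents)
print("tree R^2 on training data:", round(tree.score(X, accidents), 2))
```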

  20. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.

  1. Good Models Gone Bad: Quantifying and Predicting Parameter-Induced Climate Model Simulation Failures

    Science.gov (United States)

    Lucas, D. D.; Klein, R.; Tannahill, J.; Brandon, S.; Covey, C. C.; Domyancic, D.; Ivanova, D. P.

    2012-12-01

    Simulations using IPCC-class climate models can fail or crash for a variety of reasons. Statistical analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation failures of the Parallel Ocean Program (POP2). About 8.5% of our POP2 runs failed for numerical reasons at certain combinations of parameter values. We apply support vector machine (SVM) classification from the fields of pattern recognition and machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. The SVM classifiers readily predict POP2 failures in an independent validation ensemble, and are subsequently used to determine the causes of the failures via a global sensitivity analysis. Four parameters related to ocean mixing and viscosity are identified as the major sources of POP2 failures. Our method can be used to improve the robustness of complex scientific models to parameter perturbations and to better steer UQ ensembles. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was funded by the Uncertainty Quantification Strategic Initiative Laboratory Directed Research and Development Project at LLNL under project tracking code 10-SI-013 (UCRL LLNL-ABS-569112).
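
    A minimal sketch of the failure-classification step described above: train a support vector machine on ensemble parameter vectors labelled run-succeeded/run-failed, then estimate the failure probability of new parameter combinations. The 18-parameter space and the failure rule used to generate labels here are synthetic assumptions.

```python
# SVM classifier mapping model parameter vectors to probability of simulation failure.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(5)
n_runs, n_params = 3000, 18
params = rng.uniform(0, 1, size=(n_runs, n_params))
# Assume failures concentrate where two "mixing/viscosity-like" parameters are both extreme
failed = ((params[:, 0] > 0.9) & (params[:, 1] < 0.1)) | (rng.uniform(size=n_runs) < 0.02)

X_tr, X_te, y_tr, y_te = train_test_split(params, failed, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, probability=True))
clf.fit(X_tr, y_tr)

print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
print("failure probability of one new run:",
      round(clf.predict_proba(rng.uniform(0, 1, (1, n_params)))[0, 1], 3))
```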

  2. Status of molten fuel coolant interaction studies and theoretical modelling work at IGCAR

    International Nuclear Information System (INIS)

    Rao, P.B.; Singh, Om Pal; Singh, R.S.

    1994-01-01

    The status of Molten Fuel Coolant Interaction (MFCI) studies is reviewed and some of the important observations made are presented. A new model for MFCI, developed at IGCAR by considering the various mechanisms in detail, is described. The model is validated and compared with the available experimental data and theoretical work at different stages of its development. Several parametric studies that were carried out using this model are described. The predictions from this model have been found to be satisfactory, considering the complexity of the MFCI. A need for more comprehensive and MFCI-specific experimental tests is brought out. (author)

  3. Validation and uncertainty analysis of a pre-treatment 2D dose prediction model

    Science.gov (United States)

    Baeza, Jose A.; Wolfs, Cecile J. A.; Nijsten, Sebastiaan M. J. J. G.; Verhaegen, Frank

    2018-02-01

    Independent verification of complex treatment delivery with megavolt photon beam radiotherapy (RT) has been effectively used to detect and prevent errors. This work presents the validation and uncertainty analysis of a model that predicts 2D portal dose images (PDIs) without a patient or phantom in the beam. The prediction model is based on an exponential point dose model with separable primary and secondary photon fluence components. The model includes a scatter kernel, off-axis ratio map, transmission values and penumbra kernels for beam-delimiting components. These parameters were derived through a model fitting procedure supplied with point dose and dose profile measurements of radiation fields. The model was validated against a treatment planning system (TPS; Eclipse) and radiochromic film measurements for complex clinical scenarios, including volumetric modulated arc therapy (VMAT). Confidence limits on fitted model parameters were calculated based on simulated measurements. A sensitivity analysis was performed to evaluate the effect of the parameter uncertainties on the model output. For the maximum uncertainty, the maximum deviating measurement sets were propagated through the fitting procedure and the model. The overall uncertainty was assessed using all simulated measurements. The validation of the prediction model against the TPS and the film showed a good agreement, with on average 90.8% and 90.5% of pixels passing a (2%,2 mm) global gamma analysis respectively, with a low dose threshold of 10%. The maximum and overall uncertainty of the model is dependent on the type of clinical plan used as input. The results can be used to study the robustness of the model. A model for predicting accurate 2D pre-treatment PDIs in complex RT scenarios can be used clinically and its uncertainties can be taken into account.

  4. Predictive Distribution of the Dirichlet Mixture Model by the Local Variational Inference Method

    DEFF Research Database (Denmark)

    Ma, Zhanyu; Leijon, Arne; Tan, Zheng-Hua

    2014-01-01

    the predictive likelihood of the new upcoming data, especially when the amount of training data is small. The Bayesian estimation of a Dirichlet mixture model (DMM) is, in general, not analytically tractable. In our previous work, we have proposed a global variational inference-based method for approximately calculating the posterior distributions of the parameters in the DMM analytically. In this paper, we extend our previous study for the DMM and propose an algorithm to calculate the predictive distribution of the DMM with the local variational inference (LVI) method. The true predictive distribution of the DMM is analytically intractable. By considering the concave property of the multivariate inverse beta function, we introduce an upper-bound to the true predictive distribution. As the global minimum of this upper-bound exists, the problem is reduced to seek an approximation to the true predictive distribution...

  5. Elements of a pragmatic approach for dealing with bias and uncertainty in experiments through predictions: experiment design and data conditioning; "real space" model validation and conditioning; hierarchical modeling and extrapolative prediction.

    Energy Technology Data Exchange (ETDEWEB)

    Romero, Vicente Jose

    2011-11-01

    This report explores some important considerations in devising a practical and consistent framework and methodology for utilizing experiments and experimental data to support modeling and prediction. A pragmatic and versatile 'Real Space' approach is outlined for confronting experimental and modeling bias and uncertainty to mitigate risk in modeling and prediction. The elements of experiment design and data analysis, data conditioning, model conditioning, model validation, hierarchical modeling, and extrapolative prediction under uncertainty are examined. An appreciation can be gained for the constraints and difficulties at play in devising a viable end-to-end methodology. Rationale is given for the various choices underlying the Real Space end-to-end approach. The approach adopts and refines some elements and constructs from the literature and adds pivotal new elements and constructs. Crucially, the approach reflects a pragmatism and versatility derived from working many industrial-scale problems involving complex physics and constitutive models, steady-state and time-varying nonlinear behavior and boundary conditions, and various types of uncertainty in experiments and models. The framework benefits from a broad exposure to integrated experimental and modeling activities in the areas of heat transfer, solid and structural mechanics, irradiated electronics, and combustion in fluids and solids.

  6. Model complexity control for hydrologic prediction

    NARCIS (Netherlands)

    Schoups, G.; Van de Giesen, N.C.; Savenije, H.H.G.

    2008-01-01

    A common concern in hydrologic modeling is overparameterization of complex models given limited and noisy data. This leads to problems of parameter nonuniqueness and equifinality, which may negatively affect prediction uncertainties. A systematic way of controlling model complexity is therefore

  7. Predictive Model of Systemic Toxicity (SOT)

    Science.gov (United States)

    In an effort to ensure chemical safety in light of regulatory advances away from reliance on animal testing, USEPA and L’Oréal have collaborated to develop a quantitative systemic toxicity prediction model. Prediction of human systemic toxicity has proved difficult and remains a ...

  8. Using Pareto points for model identification in predictive toxicology

    Science.gov (United States)

    2013-01-01

    Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design or food protection to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature associated with the best practice of model generation and data integration but management and automated identification of relevant models from available collections of models is still an open problem. Currently, the decision on which model should be used for a new chemical compound is left to users. This paper intends to initiate the discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show a great potential for automated model identification methods in predictive toxicology. PMID:23517649
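
    The core of Pareto-based model identification can be sketched in a few lines: score each candidate model in the collection on two criteria (assumed here to be predictive accuracy and applicability-domain coverage for the query compound) and keep only the non-dominated models.

```python
# Pareto-front selection over a collection of stored predictive models (synthetic scores).
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical scores for 20 stored models; larger is better on both axes
scores = np.column_stack([rng.uniform(0.5, 0.95, 20),   # accuracy
                          rng.uniform(0.2, 1.0, 20)])   # coverage of the query compound

def pareto_front(points):
    """Indices of points not dominated by any other point (maximisation)."""
    keep = []
    for i, p in enumerate(points):
        dominated = np.any(np.all(points >= p, axis=1) & np.any(points > p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

front = pareto_front(scores)
print("Pareto-optimal models:", front)
print(scores[front].round(2))
```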

  9. Modelling techniques for predicting the long term consequences of radiation on natural aquatic populations

    International Nuclear Information System (INIS)

    Wallis, I.G.

    1978-01-01

    The purpose of this working paper is to describe modelling techniques for predicting the long term consequences of radiation on natural aquatic populations. Ideally, it would be possible to use aquatic population models: (1) to predict changes in the health and well-being of all aquatic populations as a result of changing the composition, amount and location of radionuclide discharges; (2) to compare the effects of steady, fluctuating and accidental releases of radionuclides; and (3) to evaluate the combined impact of the discharge of radionuclides and other wastes, and natural environmental stresses on aquatic populations. At the outset it should be stated that there is no existing model which can achieve this ideal performance. However, modelling skills and techniques are available to develop useful aquatic population models. This paper discusses the considerations involved in developing these models and briefly describes the various types of population models which have been developed to date.

  10. Correction for Measurement Error from Genotyping-by-Sequencing in Genomic Variance and Genomic Prediction Models

    DEFF Research Database (Denmark)

    Ashraf, Bilal; Janss, Luc; Jensen, Just

    sample). The GBSeq data can be used directly in genomic models in the form of individual SNP allele-frequency estimates (e.g., reference reads/total reads per polymorphic site per individual), but is subject to measurement error due to the low sequencing depth per individual. Due to technical reasons ... In the current work we show how the correction for measurement error in GBSeq can also be applied in whole genome genomic variance and genomic prediction models. Bayesian whole-genome random regression models are proposed to allow implementation of large-scale SNP-based models with a per-SNP correction for measurement error. We show correct retrieval of genomic explained variance, and improved genomic prediction when accounting for the measurement error in GBSeq data...

  11. Physical and JIT Model Based Hybrid Modeling Approach for Building Thermal Load Prediction

    Science.gov (United States)

    Iino, Yutaka; Murai, Masahiko; Murayama, Dai; Motoyama, Ichiro

    Energy conservation in buildings is one of the key issues from an environmental point of view, as it is in the industrial, transportation and residential sectors. Half of the total energy consumption in a building is accounted for by HVAC (Heating, Ventilating and Air Conditioning) systems. In order to realize energy conservation in HVAC systems, a thermal load prediction model for the building is required. This paper proposes a hybrid modeling approach combining a physical model and a Just-in-Time (JIT) model for building thermal load prediction. The proposed method has the following features and benefits: (1) it is applicable to cases in which past operation data for training the load prediction model are scarce; (2) it has a self-checking function, which continually monitors whether the data-driven load prediction and the physics-based one are consistent, so it can detect when something is wrong in the load prediction procedure; (3) it is able to adjust the load prediction in real time against sudden changes in model parameters and environmental conditions. The proposed method is evaluated with real operation data from an existing building, and the improvement in load prediction performance is illustrated.
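
    A hedged sketch of the hybrid idea, assuming a deliberately simplified physical load formula and a k-nearest-neighbour regressor as the just-in-time component that corrects the physical model's recent residuals; none of this is the authors' actual model.

```python
# Hybrid prediction = simplified physical estimate + JIT (k-NN) residual correction,
# with the gap between the two usable as a consistency check.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(7)
outdoor_temp = rng.uniform(-5, 35, 1000)
occupancy = rng.uniform(0, 1, 1000)
true_load = 50 + 3.2 * (outdoor_temp - 20) + 40 * occupancy + rng.normal(0, 5, 1000)

def physical_model(temp, occ):
    # deliberately mis-calibrated first-principles estimate (kW)
    return 45 + 2.5 * (temp - 20) + 30 * occ

residuals = true_load - physical_model(outdoor_temp, occupancy)
X = np.column_stack([outdoor_temp, occupancy])
jit = KNeighborsRegressor(n_neighbors=15).fit(X, residuals)   # JIT correction model

x_new = np.array([[28.0, 0.6]])
phys = physical_model(*x_new[0])
hybrid = phys + jit.predict(x_new)[0]
print(f"physical: {phys:.1f} kW, hybrid: {hybrid:.1f} kW, "
      f"JIT correction: {hybrid - phys:+.1f} kW")
```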

  12. An Analysis and Implementation of the Hidden Markov Model to Technology Stock Prediction

    Directory of Open Access Journals (Sweden)

    Nguyet Nguyen

    2017-11-01

    Full Text Available Future stock prices depend on many internal and external factors that are not easy to evaluate. In this paper, we use the Hidden Markov Model (HMM) to predict the daily stock price of three actively traded stocks: Apple, Google, and Facebook, based on their historical data. We first use the Akaike information criterion (AIC) and Bayesian information criterion (BIC) to choose the number of states for the HMM. We then use the models to predict close prices of these three stocks using both single observation data and multiple observation data. Finally, we use the predictions as signals for trading these stocks. The criteria tests' results showed that the HMM with two states worked the best among two, three and four states for the three stocks. Our results also demonstrate that the HMM outperformed the naïve method in forecasting stock prices. The results also showed that active traders using the HMM got a higher return than using the naïve forecast for Facebook and Google stocks. The stock price prediction method has a significant impact on stock trading and derivative hedging.
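
    The model-order selection step can be sketched as follows, assuming the hmmlearn package and synthetic daily returns; the parameter count is the usual approximation for a full-covariance Gaussian HMM.

```python
# Compare Gaussian HMMs with different numbers of hidden states via AIC/BIC.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(8)
returns = rng.normal(0.0005, 0.02, size=(1000, 1))      # stand-in for close-price returns

for n_states in (2, 3, 4):
    hmm = GaussianHMM(n_components=n_states, covariance_type="full",
                      n_iter=200, random_state=0).fit(returns)
    log_l = hmm.score(returns)
    n_features = returns.shape[1]
    # approximate free-parameter count: start probs + transitions + means + covariances
    k = (n_states - 1) + n_states * (n_states - 1) \
        + n_states * n_features + n_states * n_features * (n_features + 1) // 2
    aic = 2 * k - 2 * log_l
    bic = k * np.log(len(returns)) - 2 * log_l
    print(f"states={n_states}  logL={log_l:.1f}  AIC={aic:.1f}  BIC={bic:.1f}")
```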

  13. Comparison between model-predicted tumor oxygenation dynamics and vascular-/flow-related Doppler indices.

    Science.gov (United States)

    Belfatto, Antonella; Vidal Urbinati, Ailyn M; Ciardo, Delia; Franchi, Dorella; Cattani, Federica; Lazzari, Roberta; Jereczek-Fossa, Barbara A; Orecchia, Roberto; Baroni, Guido; Cerveri, Pietro

    2017-05-01

    Mathematical modeling is a powerful and flexible method to investigate complex phenomena. It discloses the possibility of reproducing expensive as well as invasive experiments in a safe environment with limited costs. This makes it suitable to mimic tumor evolution and response to radiotherapy although the reliability of the results remains an issue. Complexity reduction is therefore a critical aspect in order to be able to compare model outcomes to clinical data. Among the factors affecting treatment efficacy, tumor oxygenation is known to play a key role in radiotherapy response. In this work, we aim at relating the oxygenation dynamics, predicted by a macroscale model trained on tumor volumetric data of uterine cervical cancer patients, to vascularization and blood flux indices assessed on Ultrasound Doppler images. We propose a macroscale model of tumor evolution based on three dynamics, namely active portion, necrotic portion, and oxygenation. The model parameters were assessed on the volume size of seven cervical cancer patients administered with 28 fractions of intensity modulated radiation therapy (IMRT) (1.8 Gy/fraction). For each patient, five Doppler ultrasound tests were acquired before, during, and after the treatment. The lesion was manually contoured by an expert physician using 4D View ® (General Electric Company - Fairfield, Connecticut, United States), which automatically provided the overall tumor volume size along with three vascularization and/or blood flow indices. Volume data only were fed to the model for training purpose, while the predicted oxygenation was compared a posteriori to the measured Doppler indices. The model was able to fit the tumor volume evolution within 8% error (range: 3-8%). A strong correlation between the intrapatient longitudinal indices from Doppler measurements and oxygen predicted by the model (about 90% or above) was found in three cases. Two patients showed an average correlation value (50-70%) and the remaining

  14. The Culture-Work-Health Model and Work Stress.

    Science.gov (United States)

    Peterson, Michael; Wilson, John F.

    2002-01-01

    Examines the role of organizational culture in the etiology of workplace stress through the framework of the Culture-Work- Health model. A review of relevant business and health literature indicates that culture is an important component of work stress and may be a key to creating effective organizational stress interventions. (SM)

  15. I Like, I Cite? Do Facebook Likes Predict the Impact of Scientific Work?

    Science.gov (United States)

    Ringelhan, Stefanie; Wollersheim, Jutta; Welpe, Isabell M.

    2015-01-01

    Due to the increasing amount of scientific work and the typical delays in publication, promptly assessing the impact of scholarly work is a huge challenge. To meet this challenge, one solution may be to create and discover innovative indicators. The goal of this paper is to investigate whether Facebook likes for unpublished manuscripts that are uploaded to the Internet could be used as an early indicator of the future impact of the scientific work. To address our research question, we compared Facebook likes for manuscripts uploaded to the Harvard Business School website (Study 1) and the bioRxiv website (Study 2) with traditional impact indicators (journal article citations, Impact Factor, Immediacy Index) for those manuscripts that have been published as a journal article. Although based on our full sample of Study 1 (N = 170), Facebook likes do not predict traditional impact indicators, for manuscripts with one or more Facebook likes (n = 95), our results indicate that the more Facebook likes a manuscript receives, the more journal article citations the manuscript receives. In additional analyses (for which we categorized the manuscripts as psychological and non-psychological manuscripts), we found that the significant prediction of citations stems from the psychological and not the non-psychological manuscripts. In Study 2, we observed that Facebook likes (N = 270) and non-zero Facebook likes (n = 84) do not predict traditional impact indicators. Taken together, our findings indicate an interdisciplinary difference in the predictive value of Facebook likes, according to which Facebook likes only predict citations in the psychological area but not in the non-psychological area of business or in the field of life sciences. Our paper contributes to understanding the possibilities and limits of the use of social media indicators as potential early indicators of the impact of scientific work. PMID:26244779


  17. Logistic regression modelling: procedures and pitfalls in developing and interpreting prediction models

    Directory of Open Access Journals (Sweden)

    Nataša Šarlija

    2017-01-01

    Full Text Available This study sheds light on the most common issues related to applying logistic regression in prediction models for company growth. The purpose of the paper is (1) to provide a detailed demonstration of the steps in developing a growth prediction model based on logistic regression analysis, (2) to discuss common pitfalls and methodological errors in developing a model, and (3) to provide solutions and possible ways of overcoming these issues. Special attention is devoted to the question of satisfying logistic regression assumptions, selecting and defining dependent and independent variables, using classification tables and ROC curves for reporting model strength, interpreting odds ratios as effect measures and evaluating performance of the prediction model. Development of a logistic regression model in this paper focuses on a prediction model of company growth. The analysis is based on predominantly financial data from a sample of 1471 small and medium-sized Croatian companies active between 2009 and 2014. The financial data is presented in the form of financial ratios divided into nine main groups depicting the following areas of business: liquidity, leverage, activity, profitability, research and development, investing and export. The growth prediction model indicates aspects of a business critical for achieving high growth. In that respect, the contribution of this paper is twofold. First, methodological, in terms of pointing out pitfalls and potential solutions in logistic regression modelling, and secondly, theoretical, in terms of identifying factors responsible for high growth of small and medium-sized companies.
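
    A compact sketch of the workflow discussed above, fitting a logistic regression for high-growth versus other firms, reporting odds ratios, and summarising discrimination with a classification table and ROC AUC; the financial-ratio names and data are illustrative assumptions only.

```python
# Logistic growth-prediction model: odds ratios, ROC AUC and a classification table.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(9)
n = 1471
df = pd.DataFrame({
    "liquidity": rng.lognormal(0.2, 0.5, n),
    "leverage": rng.uniform(0, 1, n),
    "profitability": rng.normal(0.05, 0.1, n),
})
logit_true = -1.5 + 0.4 * df.liquidity - 1.2 * df.leverage + 6.0 * df.profitability
df["high_growth"] = rng.binomial(1, 1 / (1 + np.exp(-logit_true)))

X = sm.add_constant(df[["liquidity", "leverage", "profitability"]])
fit = sm.Logit(df["high_growth"], X).fit(disp=False)
print(np.exp(fit.params).round(2))          # odds ratios as effect measures

p = fit.predict(X)
print("AUC:", round(roc_auc_score(df["high_growth"], p), 3))
print(confusion_matrix(df["high_growth"], p >= 0.5))   # classification table at 0.5
```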

  18. Model output statistics applied to wind power prediction

    Energy Technology Data Exchange (ETDEWEB)

    Joensen, A; Giebel, G; Landberg, L [Risoe National Lab., Roskilde (Denmark); Madsen, H; Nielsen, H A [The Technical Univ. of Denmark, Dept. of Mathematical Modelling, Lyngby (Denmark)

    1999-03-01

    Being able to predict the output of a wind farm online for a day or two in advance has significant advantages for utilities, such as a better ability to schedule fossil-fuelled power plants and a better position on electricity spot markets. In this paper prediction methods based on Numerical Weather Prediction (NWP) models are considered. The spatial resolution used in NWP models implies that these predictions are not valid locally at a specific wind farm. Furthermore, due to the non-stationary nature and complexity of the processes in the atmosphere, and occasional changes of NWP models, the deviation between the predicted and the measured wind will be time dependent. If observational data is available, and if the deviation between the predictions and the observations exhibits systematic behavior, this should be corrected for; if statistical methods are used, this approach is usually referred to as MOS (Model Output Statistics). The influence of atmospheric turbulence intensity, topography, prediction horizon length and auto-correlation of wind speed and power is considered, and to take the time-variations into account, adaptive estimation methods are applied. Three estimation techniques are considered and compared: Extended Kalman Filtering, recursive least squares and a new modified recursive least squares algorithm. (au) EU-JOULE-3. 11 refs.
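
    One adaptive estimation technique mentioned above, recursive least squares with a forgetting factor, can be sketched as a MOS-style correction that maps NWP-forecast wind speed onto observed farm power while tracking slow changes; the quadratic correction form and the data are assumptions.

```python
# Recursive least squares (RLS) with forgetting factor as an adaptive MOS correction.
import numpy as np

rng = np.random.default_rng(10)
T = 2000
nwp_speed = rng.uniform(3, 18, T)                     # forecast wind speed (m/s)
observed_power = np.clip(0.4 * (nwp_speed - 3) ** 2 + rng.normal(0, 3, T), 0, None)

lam = 0.995                                           # forgetting factor
theta = np.zeros(3)                                   # coefficients of [1, v, v^2]
P = np.eye(3) * 1e3                                   # inverse information matrix

for v, y in zip(nwp_speed, observed_power):
    x = np.array([1.0, v, v * v])
    K = P @ x / (lam + x @ P @ x)                     # gain
    theta = theta + K * (y - x @ theta)               # coefficient update
    P = (P - np.outer(K, x @ P)) / lam                # covariance update

print("adapted MOS coefficients:", theta.round(3))
```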

  19. Prediction of tectonic stresses and fracture networks with geomechanical reservoir models

    International Nuclear Information System (INIS)

    Henk, A.; Fischer, K.

    2014-09-01

    This project evaluates the potential of geomechanical Finite Element (FE) models for the prediction of in situ stresses and fracture networks in faulted reservoirs. Modeling focuses on spatial variations of the in situ stress distribution resulting from faults and contrasts in mechanical rock properties. In a first methodological part, a workflow is developed for building such geomechanical reservoir models and calibrating them to field data. In the second part, this workflow was applied successfully to an intensively faulted gas reservoir in the North German Basin. A truly field-scale geomechanical model covering more than 400 km² was built and calibrated. It includes a mechanical stratigraphy as well as a network of 86 faults. The latter are implemented as distinct planes of weakness and allow the fault-specific evaluation of shear and normal stresses. A so-called static model describes the recent state of the reservoir and, thus, after calibration its results reveal the present-day in situ stress distribution. Further geodynamic modeling work considers the major stages in the tectonic history of the reservoir and provides insights in the paleo stress distribution. These results are compared to fracture data and hydraulic fault behavior observed today. The outcome of this project confirms the potential of geomechanical FE models for robust stress and fracture predictions. The workflow is generally applicable and can be used for modeling of any stress-sensitive reservoir.


  1. Development of a Predictive Model for Induction Success of Labour

    Directory of Open Access Journals (Sweden)

    Cristina Pruenza

    2018-03-01

    Full Text Available Induction of labour is an extraordinarily common obstetric procedure. Obstetricians face the need to end a pregnancy, usually for medical reasons (maternal or fetal requirements) or, less frequently, for social reasons (elective inductions for convenience). The success of the induction procedure is conditioned by a multitude of maternal and fetal variables that appear before or during pregnancy or the birth process, each with low predictive value. Failure of the induction process involves performing a caesarean section. This project arises from the clinical need to resolve a situation of uncertainty that occurs frequently in our clinical practice. Since the weight of the clinical variables is not adequately established, it is of great interest to know a priori the probability of a successful induction, so as to dismiss those inductions with a high probability of failure, avoiding unnecessary procedures or postponing delivery where possible. We developed a predictive model of induced labour success as a support tool in clinical decision making. Improving the predictability of a successful induction is one of the current challenges of obstetrics because of the negative impact of failed inductions. Identifying patients with a high chance of failure will allow us to offer them better care, improving their health outcomes (adverse perinatal outcomes for mother and newborn), costs (medication, hospitalization, qualified staff) and patient-perceived quality. Therefore a Clinical Decision Support System was developed to give support to obstetricians. In this article, we propose a robust method to explore and model a source of clinical information with the purpose of obtaining all possible knowledge. In classification models it is generally difficult to know the contribution that each attribute provides to the model. We have worked in this direction to offer transparency to models that may otherwise be considered black boxes. The positive results obtained from both the

  2. Neuro-fuzzy modeling in bankruptcy prediction

    Directory of Open Access Journals (Sweden)

    Vlachos D.

    2003-01-01

    Full Text Available For the past 30 years the problem of bankruptcy prediction has been thoroughly studied. From the paper of Altman in 1968 to the recent papers in the '90s, the progress in prediction accuracy was not satisfactory. This paper investigates an alternative modeling of the system (firm), combining neural networks and fuzzy controllers, i.e. using neuro-fuzzy models. Classical modeling is based on mathematical models that describe the behavior of the firm under consideration. The main idea of fuzzy control, on the other hand, is to build a model of a human control expert who is capable of controlling the process without thinking in terms of a mathematical model. This control expert specifies his control action in the form of linguistic rules. These control rules are translated into the framework of fuzzy set theory, providing a calculus which can simulate the behavior of the control expert and enhance its performance. The accuracy of the model is studied using datasets from previous research papers.

  3. Short-Term Wind Speed Prediction Using EEMD-LSSVM Model

    Directory of Open Access Journals (Sweden)

    Aiqing Kang

    2017-01-01

    Full Text Available A hybrid of Ensemble Empirical Mode Decomposition (EEMD) and Least Squares Support Vector Machine (LSSVM) is proposed to improve short-term wind speed forecasting precision. EEMD is first utilized to decompose the original wind speed time series into a set of subseries. LSSVM models are then established to forecast these subseries. The partial autocorrelation function is adopted to analyze the inner relationships within the historical wind speed series in order to determine the input variables of the LSSVM model for each subseries. Finally, the superposition principle is employed to sum the predicted values of every subseries as the final wind speed prediction. The performance of the hybrid model is evaluated based on six metrics. Compared with LSSVM, Back Propagation Neural Networks (BP), Auto-Regressive Integrated Moving Average (ARIMA), the combination of Empirical Mode Decomposition (EMD) with LSSVM, and a hybrid of EEMD with ARIMA, the wind speed forecasting results show that the proposed hybrid model outperforms these models on all six metrics. Furthermore, scatter diagrams of predicted versus actual wind speed and histograms of prediction errors are presented to verify the superiority of the hybrid model in short-term wind speed prediction.
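
    A rough sketch of the decompose-forecast-superpose pipeline is given below. It assumes the PyEMD (EMD-signal) and scikit-learn packages, uses a fixed lag order instead of the paper's partial autocorrelation analysis, and substitutes an RBF-kernel SVR for the LSSVM, so it illustrates the idea rather than reproducing the authors' implementation.

```python
# Hedged sketch of EEMD decomposition plus per-subseries regression forecasting.
# Assumes PyEMD (pip install EMD-signal) and scikit-learn; SVR stands in for LSSVM.
import numpy as np
from PyEMD import EEMD
from sklearn.svm import SVR

def lagged_matrix(series, n_lags):
    """Build a supervised-learning matrix of lagged values and one-step targets."""
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    y = series[n_lags:]
    return X, y

def eemd_svr_forecast(wind_speed, n_lags=4):
    """Return a one-step-ahead wind speed forecast by superposing subseries forecasts."""
    wind_speed = np.asarray(wind_speed, dtype=float)
    imfs = EEMD().eemd(wind_speed)                 # decompose into subseries (IMFs + residue)
    prediction = 0.0
    for imf in imfs:
        X, y = lagged_matrix(imf, n_lags)
        model = SVR(kernel="rbf").fit(X, y)        # one model per subseries
        prediction += model.predict(imf[-n_lags:].reshape(1, -1))[0]
    return prediction                              # superposition of subseries forecasts
```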

  4. PREDICTED PERCENTAGE DISSATISFIED (PPD) MODEL ...

    African Journals Online (AJOL)

    HOD

    their low power requirements, are relatively cheap and are environmentally friendly. ... PREDICTED PERCENTAGE DISSATISFIED MODEL EVALUATION OF EVAPORATIVE COOLING ... The performance of direct evaporative coolers is a.

  5. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference making as few assumptions about restrictive parametric models as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models by one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.

  6. Joint modeling of genetically correlated diseases and functional annotations increases accuracy of polygenic risk prediction.

    Directory of Open Access Journals (Sweden)

    Yiming Hu

    2017-06-01

    Full Text Available Accurate prediction of disease risk based on genetic factors is an important goal in human genetics research and precision medicine. Advanced prediction models will lead to more effective disease prevention and treatment strategies. Despite the identification of thousands of disease-associated genetic variants through genome-wide association studies (GWAS in the past decade, accuracy of genetic risk prediction remains moderate for most diseases, which is largely due to the challenges in both identifying all the functionally relevant variants and accurately estimating their effect sizes. In this work, we introduce PleioPred, a principled framework that leverages pleiotropy and functional annotations in genetic risk prediction for complex diseases. PleioPred uses GWAS summary statistics as its input, and jointly models multiple genetically correlated diseases and a variety of external information including linkage disequilibrium and diverse functional annotations to increase the accuracy of risk prediction. Through comprehensive simulations and real data analyses on Crohn's disease, celiac disease and type-II diabetes, we demonstrate that our approach can substantially increase the accuracy of polygenic risk prediction and risk population stratification, i.e. PleioPred can significantly better separate type-II diabetes patients with early and late onset ages, illustrating its potential clinical application. Furthermore, we show that the increment in prediction accuracy is significantly correlated with the genetic correlation between the predicted and jointly modeled diseases.

  7. A framework to practical predictive maintenance modeling for multi-state systems

    International Nuclear Information System (INIS)

    Cher Ming Tan; Raghavan, Nagarajan

    2008-01-01

    A simple practical framework for predictive maintenance (PdM)-based scheduling of multi-state systems (MSS) is developed. The maintenance schedules are derived from a system perspective using the failure times of the overall system as estimated from its performance degradation trends. The system analyzed in this work is a flow transmission water pipe system. The various factors influencing PdM-based scheduling are identified and their impact on the system reliability and performance is quantitatively studied. The estimated times to replacement of the MSS may also be derived from the developed model. The results of the model simulation demonstrate the significant impact of maintenance quality and the criteria for the call for maintenance (user demand) on the system reliability and mean performance characteristics. A slight improvement in maintenance quality is found to postpone the system replacement time manyfold. Consistency in the quality of maintenance work, with minimal variance, is also identified as a very important factor that enhances the system's future operational and downtime event predictability. The studies also reveal that in order to reduce the frequency of maintenance actions, it is necessary to lower the minimum user demand from the system if possible, ensuring at the same time that the system still performs its intended function effectively. The model proposed can be utilized to implement a PdM program in industry with a few modifications to suit the individual industrial systems' needs.

  8. [Application of ARIMA model on prediction of malaria incidence].

    Science.gov (United States)

    Jing, Xia; Hua-Xun, Zhang; Wen, Lin; Su-Jian, Pei; Ling-Cong, Sun; Xiao-Rong, Dong; Mu-Min, Cao; Dong-Ni, Wu; Shunxiang, Cai

    2016-01-29

    The aim of this study was to predict the incidence of local malaria in Hubei Province by applying an autoregressive integrated moving average (ARIMA) model. SPSS 13.0 software was used to construct the ARIMA model based on the monthly local malaria incidence in Hubei Province from 2004 to 2009. The local malaria incidence data of 2010 were used for model validation and evaluation. The ARIMA (1, 1, 1)(1, 1, 0)12 model was identified as the relative optimum, with an AIC of 76.085 and an SBC of 84.395. All the actual incidence data fell within the 95% CI of the values predicted by the model, and the prediction performance of the model was acceptable. The ARIMA model can effectively fit and predict the incidence of local malaria in Hubei Province.
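
    The reported seasonal ARIMA (1, 1, 1)(1, 1, 0)12 structure can be reproduced in outline with statsmodels instead of SPSS, as in the hedged sketch below; the file and column names are hypothetical.

```python
# Illustrative SARIMA fit matching the (1,1,1)(1,1,0)12 structure reported above.
# Assumes statsmodels/pandas; the CSV file and column names are placeholders.
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

incidence = pd.read_csv("monthly_malaria_incidence.csv",
                        index_col="month", parse_dates=True)["incidence"]

model = SARIMAX(incidence, order=(1, 1, 1), seasonal_order=(1, 1, 0, 12))
result = model.fit(disp=False)
print(result.aic, result.bic)                # compare candidate orders by information criteria

forecast = result.get_forecast(steps=12)     # predict the validation year
print(forecast.predicted_mean)
print(forecast.conf_int(alpha=0.05))         # 95% interval to check actual incidence against
```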

  9. PREDICTIVE CAPACITY OF ARCH FAMILY MODELS

    Directory of Open Access Journals (Sweden)

    Raphael Silveira Amaro

    2016-03-01

    Full Text Available In the last decades, a remarkable number of models, variants from the Autoregressive Conditional Heteroscedastic family, have been developed and empirically tested, making the process of choosing a particular model extremely complex. This research aims to compare the predictive capacity of five conditional heteroskedasticity models, using the Model Confidence Set procedure and considering eight different statistical probability distributions. The financial series used are the log-return series of the Bovespa index and the Dow Jones Industrial Index in the period between 27 October 2008 and 30 December 2014. The empirical evidence showed that, in general, the competing models are highly homogeneous in their predictions, both for the stock market of a developed country and for the stock market of a developing country. An equivalent result can be inferred for the statistical probability distributions that were used.
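
    As a minimal sketch of fitting one candidate from the ARCH family to such log-return series, the snippet below uses the `arch` package; the data file is hypothetical and the Model Confidence Set comparison itself is not shown.

```python
# Hedged example: fit a GARCH(1,1) model with Student-t errors to log-returns.
# The price file is a placeholder; the MCS comparison of competing models is omitted.
import numpy as np
import pandas as pd
from arch import arch_model

prices = pd.read_csv("bovespa.csv", index_col="date", parse_dates=True)["close"]
log_returns = 100 * np.log(prices).diff().dropna()

model = arch_model(log_returns, vol="GARCH", p=1, q=1, dist="t")
result = model.fit(disp="off")
print(result.summary())

forecast = result.forecast(horizon=5)        # out-of-sample conditional variance forecasts
print(forecast.variance.iloc[-1])
```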

  10. Validating the Performance of the FHWA Work Zone Model Version 1.0: A Case Study Along I-91 in Springfield, Massachusetts

    Science.gov (United States)

    2017-08-01

    Central to the effective design of work zones is being able to understand how drivers behave as they approach and enter a work zone area. States use simulation tools in modeling freeway work zones to predict work zone impacts and to select optimal de...

  11. Seasonal predictability of Kiremt rainfall in coupled general circulation models

    Science.gov (United States)

    Gleixner, Stephanie; Keenlyside, Noel S.; Demissie, Teferi D.; Counillon, François; Wang, Yiguo; Viste, Ellen

    2017-11-01

    The Ethiopian economy and population is strongly dependent on rainfall. Operational seasonal predictions for the main rainy season (Kiremt, June-September) are based on statistical approaches with Pacific sea surface temperatures (SST) as the main predictor. Here we analyse dynamical predictions from 11 coupled general circulation models for the Kiremt seasons from 1985-2005 with the forecasts starting from the beginning of May. We find skillful predictions from three of the 11 models, but no model beats a simple linear prediction model based on the predicted Niño3.4 indices. The skill of the individual models for dynamically predicting Kiremt rainfall depends on the strength of the teleconnection between Kiremt rainfall and concurrent Pacific SST in the models. Models that do not simulate this teleconnection fail to capture the observed relationship between Kiremt rainfall and the large-scale Walker circulation.

  12. Prediction of lithium-ion battery capacity with metabolic grey model

    International Nuclear Information System (INIS)

    Chen, Lin; Lin, Weilong; Li, Junzi; Tian, Binbin; Pan, Haihong

    2016-01-01

    Given the popularity of lithium-ion batteries in EVs (electric vehicles), predicting the capacity quickly and accurately throughout a battery's full lifetime is still a challenging issue for ensuring the reliability of EVs. This paper proposes an approach to predicting how capacity varies with discharge cycles based on metabolic grey theory and considers the issue from two perspectives: 1) three metabolic grey models are presented, including the MGM (metabolic grey model), MREGM (metabolic residual-error grey model), and MMREGM (metabolic Markov-residual-error grey model); 2) the universality of these models is explored under different conditions (such as various discharge rates and temperatures). Furthermore, the findings in this paper demonstrate the excellent predictive performance of the three models; however, the precision of the MREGM model is inferior to that of the others. We therefore conclude that the MGM and MMREGM models predict capacity well under a variety of load conditions, even when few data points are used for modeling. The universality of the metabolic grey prediction theory is also verified by predicting the capacity of batteries under different discharge rates and different temperatures. - Highlights: • The metabolic mechanism is introduced in a grey system for capacity prediction. • Three metabolic grey models are presented and studied. • The universality of these models under different conditions is assessed. • A few data points are required for predicting the capacity with these models.
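
    The core of such a metabolic grey model can be sketched in a few lines of NumPy: a GM(1,1) model is fitted to a short window of recent capacity values, the one-step prediction is appended, the oldest point is dropped, and the procedure repeats. This is an illustrative reconstruction using the standard GM(1,1) equations, not the authors' code.

```python
# Rough NumPy sketch of a metabolic GM(1,1) capacity predictor (illustrative only).
import numpy as np

def gm11_next(x0):
    """Fit GM(1,1) to the series x0 and return the one-step-ahead prediction."""
    x1 = np.cumsum(x0)                                 # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                      # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # developing coefficient, grey input
    n = len(x0)
    x1_hat = lambda k: (x0[0] - b / a) * np.exp(-a * k) + b / a
    return x1_hat(n) - x1_hat(n - 1)                   # restore to the original series

def metabolic_gm11(capacity, window=6, steps=10):
    """Predict `steps` future capacities, refreshing the data window at every step."""
    data = list(capacity[-window:])
    predictions = []
    for _ in range(steps):
        nxt = gm11_next(np.asarray(data))
        predictions.append(nxt)
        data = data[1:] + [nxt]                        # metabolism: drop oldest, append newest
    return predictions
```

    In the MMREGM variant described above, a Markov correction of the residual errors would be layered on top of this basic loop.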

  13. Comparison of joint modeling and landmarking for dynamic prediction under an illness-death model.

    Science.gov (United States)

    Suresh, Krithika; Taylor, Jeremy M G; Spratt, Daniel E; Daignault, Stephanie; Tsodikov, Alexander

    2017-11-01

    Dynamic prediction incorporates time-dependent marker information accrued during follow-up to improve personalized survival prediction probabilities. At any follow-up, or "landmark", time, the residual time distribution for an individual, conditional on their updated marker values, can be used to produce a dynamic prediction. To satisfy a consistency condition that links dynamic predictions at different time points, the residual time distribution must follow from a prediction function that models the joint distribution of the marker process and time to failure, such as a joint model. To circumvent the assumptions and computational burden associated with a joint model, approximate methods for dynamic prediction have been proposed. One such method is landmarking, which fits a Cox model at a sequence of landmark times, and thus is not a comprehensive probability model of the marker process and the event time. Considering an illness-death model, we derive the residual time distribution and demonstrate that the structure of the Cox model baseline hazard and covariate effects under the landmarking approach do not have simple form. We suggest some extensions of the landmark Cox model that should provide a better approximation. We compare the performance of the landmark models with joint models using simulation studies and cognitive aging data from the PAQUID study. We examine the predicted probabilities produced under both methods using data from a prostate cancer study, where metastatic clinical failure is a time-dependent covariate for predicting death following radiation therapy. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Improving Predictive Modeling in Pediatric Drug Development: Pharmacokinetics, Pharmacodynamics, and Mechanistic Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Slikker, William; Young, John F.; Corley, Rick A.; Dorman, David C.; Conolly, Rory B.; Knudsen, Thomas; Erstad, Brian L.; Luecke, Richard H.; Faustman, Elaine M.; Timchalk, Chuck; Mattison, Donald R.

    2005-07-26

    A workshop was conducted on November 18–19, 2004, to address the issue of improving predictive models for drug delivery to developing humans. Although considerable progress has been made for adult humans, large gaps remain for predicting pharmacokinetic/pharmacodynamic (PK/PD) outcome in children because most adult models have not been tested during development. The goals of the meeting included a description of when, during development, infants/children become adultlike in handling drugs. The issue of incorporating the most recent advances into the predictive models was also addressed: both the use of imaging approaches and genomic information were considered. Disease state, as exemplified by obesity, was addressed as a modifier of drug pharmacokinetics and pharmacodynamics during development. Issues addressed in this workshop should be considered in the development of new predictive and mechanistic models of drug kinetics and dynamics in the developing human.

  15. Prediction models : the right tool for the right problem

    NARCIS (Netherlands)

    Kappen, Teus H.; Peelen, Linda M.

    2016-01-01

    PURPOSE OF REVIEW: Perioperative prediction models can help to improve personalized patient care by providing individual risk predictions to both patients and providers. However, the scientific literature on prediction model development and validation can be quite technical and challenging to

  16. Selecting Optimal Random Forest Predictive Models: A Case Study on Predicting the Spatial Distribution of Seabed Hardness

    Science.gov (United States)

    Li, Jin; Tran, Maggie; Siwabessy, Justy

    2016-01-01

    Spatially continuous predictions of seabed hardness are important baseline environmental information for sustainable management of Australia’s marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy and can be inferred from underwater video footage at limited locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (i.e., hard90 and hard70). We developed optimal predictive models to predict seabed hardness using random forest (RF) based on the point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods, namely variable importance (VI), averaged variable importance (AVI), knowledge-informed AVI (KIAVI), Boruta and regularized RF (RRF), were tested based on predictive accuracy. The effects of highly correlated, important and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach used to pre-select predictive variables by excluding highly correlated variables needs to be re-examined; 4) the identification of the important and unimportant predictors provides useful guidelines for further improving predictive models; 5) FS methods select the most accurate predictive model(s) instead of the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to ‘small p and large n’ problems in environmental sciences. Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency and
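
    A compressed sketch of the RF-with-feature-selection workflow is shown below using scikit-learn; permutation importance stands in for the AVI/Boruta procedures, and the arrays are random placeholders for the backscatter-derived predictors and the four hardness classes.

```python
# Hedged sketch: random forest hardness classification with importance-based selection.
# Placeholder data; permutation importance is used in place of AVI/KIAVI/Boruta/RRF.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))                  # multibeam backscatter-derived predictors
y = rng.integers(0, 4, size=500)                # four seabed hardness classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

imp = permutation_importance(rf, X_te, y_te, n_repeats=20, random_state=0)
keep = np.argsort(imp.importances_mean)[::-1][:6]          # retain the most informative predictors
rf_sel = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr[:, keep], y_tr)
print("accuracy with selected predictors:", rf_sel.score(X_te[:, keep], y_te))
```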

  17. An improved simplified model predictive control algorithm and its application to a continuous fermenter

    Directory of Open Access Journals (Sweden)

    W. H. Kwong

    2000-06-01

    Full Text Available A new simplified model predictive control algorithm is proposed in this work. The algorithm is developed within the framework of internal model control, and it is easy to understand and implement. Simulation results for a continuous fermenter, which show that the proposed control algorithm is robust to moderate variations in plant parameters, are presented. The algorithm shows good performance in setpoint tracking.

  18. When Theory Meets Data: Comparing Model Predictions Of Hillslope Sediment Size With Field Measurements.

    Science.gov (United States)

    Mahmoudi, M.; Sklar, L. S.; Leclere, S.; Davis, J. D.; Stine, A.

    2017-12-01

    sediment size distributions in landscape evolution models. Overall, this work highlights the need for additional field data sets as well as improved theoretical models, but also demonstrates progress in predicting the size distribution of sediments produced on hillslopes and supplied to channels.

  19. Robust predictions of the interacting boson model

    International Nuclear Information System (INIS)

    Casten, R.F.; Koeln Univ.

    1994-01-01

    While most recognized for its symmetries and algebraic structure, the IBA model has other less-well-known but equally intrinsic properties which give unavoidable, parameter-free predictions. These predictions concern central aspects of low-energy nuclear collective structure. This paper outlines these "robust" predictions and compares them with the data.

  20. Are performance-based measures predictive of work participation in patients with musculoskeletal disorders? A systematic review.

    Science.gov (United States)

    Kuijer, P P F M; Gouttebarge, V; Brouwer, S; Reneman, M F; Frings-Dresen, M H W

    2012-02-01

    Assessments of whether patients with musculoskeletal disorders (MSDs) can participate in work mainly consist of case history, physical examinations, and self-reports. Performance-based measures might add value in these assessments. This study answers the question: how well do performance-based measures predict work participation in patients with MSDs? A systematic literature search was performed to obtain longitudinal studies that used reliable performance-based measures to predict work participation in patients with MSDs. The following five sources of information were used to retrieve relevant studies: PubMed, Embase, AMA Guide to the Evaluation of Functional Ability, references of the included papers, and the expertise and personal file of the authors. A quality assessment specific for prognostic studies and an evidence synthesis were performed. Of the 1,230 retrieved studies, eighteen fulfilled the inclusion criteria. The studies included 4,113 patients, and the median follow-up period was 12 months. Twelve studies took possible confounders into account. Five studies were of good quality and thirteen of moderate quality. Two good-quality and all thirteen moderate-quality studies (83%) reported that performance-based measures were predictive of work participation. Two good-quality studies (11%) reported both an association and no association between performance-based measures and work participation. One good-quality study (6%) found no effect. A performance-based lifting test was used in fourteen studies and appeared to be predictive of work participation in thirteen studies. Strong evidence exists that a number of performance-based measures are predictive of work participation in patients with MSDs, especially lifting tests. Overall, the explained variance was modest.

  1. Predictive models for pressure ulcers from intensive care unit electronic health records using Bayesian networks.

    Science.gov (United States)

    Kaewprag, Pacharmon; Newton, Cheryl; Vermillion, Brenda; Hyun, Sookyung; Huang, Kun; Machiraju, Raghu

    2017-07-05

    We develop predictive models enabling clinicians to better understand and explore patient clinical data along with risk factors for pressure ulcers in intensive care unit patients from electronic health record data. Identifying accurate risk factors of pressure ulcers is essential to determining appropriate prevention strategies; in this work we examine medication, diagnosis, and traditional Braden pressure ulcer assessment scale measurements as patient features. In order to predict pressure ulcer incidence and better understand the structure of related risk factors, we construct Bayesian networks from patient features. Bayesian network nodes (features) and edges (conditional dependencies) are simplified with statistical network techniques. Upon reviewing a network visualization of our model, our clinician collaborators were able to identify strong relationships between risk factors widely recognized as associated with pressure ulcers. We present a three-stage framework for predictive analysis of patient clinical data: 1) developing electronic health record feature extraction functions with the assistance of clinicians, 2) simplifying features, and 3) building Bayesian network predictive models. We evaluate all combinations of Bayesian network models from different search algorithms, scoring functions, prior structure initializations, and sets of features. From the EHRs of 7,717 ICU patients, we construct Bayesian network predictive models from 86 medication, diagnosis, and Braden scale features. Our model not only identifies known and suspected high pressure ulcer risk factors, but also substantially increases the sensitivity of the prediction - nearly three times higher compared to logistic regression models - without sacrificing the overall accuracy. We visualize a representative model with which our clinician collaborators identify strong relationships between risk factors widely recognized as associated with pressure ulcers. Given the strong adverse effect of pressure ulcers
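
    In outline, structure learning plus parameter fitting for such a network can be sketched with the pgmpy library as below; class names vary somewhat between pgmpy versions, the feature table is hypothetical, and the statistical simplification and model-search grid described above are not reproduced.

```python
# Hedged sketch of Bayesian network structure and parameter learning with pgmpy
# (exact class names may differ between versions); the EHR feature table is a placeholder.
import pandas as pd
from pgmpy.estimators import HillClimbSearch, BicScore
from pgmpy.models import BayesianNetwork

data = pd.read_csv("icu_features.csv")     # discretized medication/diagnosis/Braden features

search = HillClimbSearch(data)
dag = search.estimate(scoring_method=BicScore(data))   # learn edges (conditional dependencies)

model = BayesianNetwork(dag.edges())
model.fit(data)                            # estimate conditional probability tables

# Inspect which features end up directly connected to the outcome node
print([edge for edge in model.edges() if "pressure_ulcer" in edge])
```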

  2. Assessment of predictivity of volatile organic compounds carcinogenicity and mutagenicity by freeware in silico models.

    Science.gov (United States)

    Guerra, Lília Ribeiro; de Souza, Alessandra Mendonça Teles; Côrtes, Juliana Alves; Lione, Viviane de Oliveira Freitas; Castro, Helena Carla; Alves, Gutemberg Gomes

    2017-12-01

    The application of in silico methods to toxicological risk prediction for human and environmental health is increasing. This work aimed to evaluate the performance of three freeware in silico models (OSIRIS v.2.0, LAZAR, and Toxtree) in predicting the carcinogenicity and mutagenicity of thirty-eight volatile organic compounds (VOC) relevant to chemical risk assessment for occupational exposure. Theoretical data were compared with assessments available in international databases. Confusion matrices and ROC curves were used to evaluate the sensitivity, specificity, and accuracy of each model. All three models (OSIRIS, LAZAR and Toxtree) were able to identify VOC with a potential carcinogenicity or mutagenicity risk for humans, although they differed in specificity, sensitivity, and accuracy. The best predictive performances were found for OSIRIS and LAZAR for carcinogenicity and for OSIRIS for mutagenicity, as these programs combined negative predictive power with a lower risk of false positives (high specificity) for those endpoints. The heterogeneity of the results obtained with the different programs reinforces the importance of using a combination of in silico models in occupational toxicological risk assessment. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. A study on the development of advanced models to predict the critical heat flux for water and liquid metals

    International Nuclear Information System (INIS)

    Lee, Yong Bum

    1994-02-01

    The critical heat flux (CHF) phenomenon in two-phase convective flows has been an important issue in the design and safety analysis of light water reactors (LWR) as well as sodium cooled liquid metal fast breeder reactors (LMFBR). Especially in the LWR application, many physical aspects of the CHF phenomenon are understood, and reliable correlations and mechanistic models to predict the CHF condition have been proposed. However, there are few correlations and models which are applicable to liquid metals. Compared with water, liquid metals show a very different boiling pattern. Therefore, the CHF conditions obtained from investigations with water cannot be applied to liquid metals. In this work a mechanistic model to predict the CHF of water and a correlation for liquid metals are developed. First, a mechanistic model to predict the CHF in flow boiling at low quality was developed based on the liquid sublayer dryout mechanism. In this approach the CHF is assumed to occur when a vapor blanket isolates the liquid sublayer from the bulk liquid and the liquid entering the sublayer then falls short of balancing the rate of sublayer dryout by vaporization. The vapor blanket velocity is therefore the key parameter. In this work the vapor blanket velocity is theoretically determined based on mass, energy, and momentum balances, and finally the mechanistic model to predict the CHF in flow boiling at low quality is developed. The accuracy of the present model is evaluated by comparing model predictions with experimental data and with tabular data from look-up tables. The predictions of the present model agree well with extensive CHF data. In the latter part a correlation to predict the CHF for liquid metals is developed based on the flow excursion mechanism. By using the Baroczy two-phase frictional pressure drop correlation and the Ledinegg instability criterion, the relationship between the CHF of liquid metals and the principal parameters is derived and finally the

  4. Modelling the exposure of wildlife to radiation: key findings and activities of IAEA working groups

    Energy Technology Data Exchange (ETDEWEB)

    Beresford, Nicholas A. [NERC Centre for Ecology and Hydrology, Lancaster Environment Center, Library Av., Bailrigg, Lancaster, LA1 4AP (United Kingdom); School of Environment and Life Sciences, University of Salford, Manchester, M4 4WT (United Kingdom); Vives i Batlle, Jordi; Vandenhove, Hildegarde [Belgian Nuclear Research Centre, Belgian Nuclear Research Centre, Boeretang 200, 2400 Mol (Belgium); Beaugelin-Seiller, Karine [Institut de Radioprotection et de Surete Nucleaire (IRSN), PRP-ENV, SERIS, LM2E, Cadarache (France); Johansen, Mathew P. [ANSTO Australian Nuclear Science and Technology Organisation, New Illawarra Rd, Menai, NSW (Australia); Goulet, Richard [Canadian Nuclear Safety Commission, Environmental Risk Assessment Division, 280 Slater, Ottawa, K1A0H3 (Canada); Wood, Michael D. [School of Environment and Life Sciences, University of Salford, Manchester, M4 4WT (United Kingdom); Ruedig, Elizabeth [Department of Environmental and Radiological Health Sciences, Colorado State University, Fort Collins (United States); Stark, Karolina; Bradshaw, Clare [Department of Ecology, Environment and Plant Sciences, Stockholm University, SE-10691 (Sweden); Andersson, Pal [Swedish Radiation Safety Authority, SE-171 16, Stockholm (Sweden); Copplestone, David [Biological and Environmental Sciences, University of Stirling, Stirling, FK9 4LA (United Kingdom); Yankovich, Tamara L.; Fesenko, Sergey [International Atomic Energy Agency, Vienna International Centre, 1400, Vienna (Austria)

    2014-07-01

    In total, participants from 14 countries, representing 19 organisations, actively participated in the model application/inter-comparison activities of the IAEA's EMRAS II programme Biota Modelling Group. A range of models/approaches were used by participants (e.g. the ERICA Tool, RESRAD-BIOTA, the ICRP Framework). The agreed objectives of the group were: 'To improve Member States' capabilities for protection of the environment by comparing and validating models being used, or developed, for biota dose assessment (that may be used) as part of the regulatory process of licensing and compliance monitoring of authorised releases of radionuclides.' The activities of the group, the findings of which will be described, included: - An assessment of the predicted unweighted absorbed dose rates for 74 radionuclides estimated by 10 approaches for five of the ICRP's Reference Animal and Plant geometries assuming 1 Bq per unit organism or media. - Modelling the effect of heterogeneous distributions of radionuclides in sediment profiles on the estimated exposure of organisms. - Model prediction - field data comparisons for freshwater ecosystems in a uranium mining area and a number of wetland environments. - An evaluation of the application of available models to a scenario considering radioactive waste buried in shallow trenches. - Estimating the contribution of ²³⁵U to dose rates in freshwater environments. - Evaluation of the factors contributing to variation in modelling results. The work of the group continues within the framework of the IAEA's MODARIA programme, which was initiated in 2012. The work plan of the MODARIA working group has largely been defined by the findings of the previous EMRAS programme. On-going activities of the working group, which will be described, include the development of a database of dynamic parameters for wildlife dose assessment and exercises involving modelling the exposure of organisms in the marine coastal

  5. Do expectancies of return to work and job satisfaction predict actual return to work in workers with long lasting LBP?

    Science.gov (United States)

    Opsahl, Jon; Eriksen, Hege R; Tveito, Torill H

    2016-11-17

    Musculoskeletal disorders, including low back pain, have major individual and socioeconomic consequences as they often lead to disability, long-term sick leave and exclusion from working life. Predictors of disability and of return to work often differ, and most existing knowledge concerns predictors of prolonged sick leave and disability. It is therefore also important to identify key predictors of return to work. The aim of the study was to assess whether overall job satisfaction and expectancies of return to work predict actual return to work after 12 months among employees with long lasting low back pain, and to assess whether there were gender differences in the predictors. Data from the Cognitive interventions and nutritional supplements trial (CINS Trial) were used. Predictors of return to work were examined in 574 employees who had been on sick leave for 2-10 months for low back pain before entering the trial. Data were analysed with multiple logistic regression models stratified by gender and adjusted for potential confounders. Regardless of gender, high expectancies were a strong and significant predictor of return to work at 12 months, while high levels of job satisfaction were not a significant predictor. There were no differences in the levels of expectancies or overall job satisfaction between men and women. However, men had in general higher odds of returning to work compared with women. Among individuals with long lasting low back pain, high expectancies of returning to work were strongly associated with successful return to work. We do not know what factors influence individual expectancies of return to work. Screening expectancies and giving individuals with low expectancies interventions aimed at changing expectancies of return to work, such as CBT or self-management interventions, may help increase actual return to work. http://www.clinicaltrials.gov/ , with registration number NCT00463970 . The trial was registered on the 18th of April 2007.

  6. Return Predictability, Model Uncertainty, and Robust Investment

    DEFF Research Database (Denmark)

    Lukas, Manuel

    Stock return predictability is subject to great uncertainty. In this paper we use the model confidence set approach to quantify uncertainty about expected utility from investment, accounting for potential return predictability. For monthly US data and six representative return prediction models, we find that confidence sets are very wide, change significantly with the predictor variables, and frequently include expected utilities for which the investor prefers not to invest. The latter motivates a robust investment strategy maximizing the minimal element of the confidence set. The robust investor allocates a much lower share of wealth to stocks compared to a standard investor.

  7. Effective modelling for predictive analytics in data science ...

    African Journals Online (AJOL)

    Effective modelling for predictive analytics in data science. ... the near-absence of empirical or factual predictive analytics in the mainstream research going on ... Keywords: Predictive Analytics, Big Data, Business Intelligence, Project Planning.

  8. Modelling the deposition of airborne radionuclides into the urban environment. First report of the VAMP Urban Working Group. Part of the IAEA/CEC co-ordinated research programme on the validation of environmental model predictions (VAMP)

    International Nuclear Information System (INIS)

    1994-08-01

    A co-ordinated research programme was begun at the IAEA in 1988 with the short title of Validation of Environmental Model Predictions (VAMP). The VAMP Urban Working Group aims to examine, by means of expert review combined with formal validation exercises, modelling for the assessment of the radiation exposure of urban populations through the external irradiation and inhalation pathways. An aim of the studies is to evaluate the lessons learned and to document the improvements in modelling capability as a result of experience gained following the Chernobyl accident. This Technical Document, the first report of the Group, addresses the subject of the deposition of airborne radionuclides into the urban environment. It summarizes not only the present status of modelling in this field, but also the results of a limited validation exercise that was performed under the auspices of VAMP. 42 refs, figs and tabs

  9. Statistical and Machine Learning Models to Predict Programming Performance

    OpenAIRE

    Bergin, Susan

    2006-01-01

    This thesis details a longitudinal study on factors that influence introductory programming success and on the development of machine learning models to predict incoming student performance. Although numerous studies have developed models to predict programming success, the models struggled to achieve high accuracy in predicting the likely performance of incoming students. Our approach overcomes this by providing a machine learning technique, using a set of three significant...

  10. Predicting Protein Secondary Structure with Markov Models

    DEFF Research Database (Denmark)

    Fischer, Paul; Larsen, Simon; Thomsen, Claus

    2004-01-01

    we are considering here, is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained in the Markov model for this task. Classifications that are purely based on statistical models might not always be biologically meaningful. We present combinatorial methods to incorporate biological background knowledge to enhance the prediction performance.

  11. On the Predictiveness of Single-Field Inflationary Models

    CERN Document Server

    Burgess, C.P.; Trott, Michael

    2014-01-01

    We re-examine the predictiveness of single-field inflationary models and discuss how an unknown UV completion can complicate determining inflationary model parameters from observations, even from precision measurements. Besides the usual naturalness issues associated with having a shallow inflationary potential, we describe another issue for inflation, namely, unknown UV physics modifies the running of Standard Model (SM) parameters and thereby introduces uncertainty into the potential inflationary predictions. We illustrate this point using the minimal Higgs Inflationary scenario, which is arguably the most predictive single-field model on the market, because its predictions for $A_s$, $r$ and $n_s$ are made using only one new free parameter beyond those measured in particle physics experiments, and run up to the inflationary regime. We find that this issue can already have observable effects. At the same time, this UV-parameter dependence in the Renormalization Group allows Higgs Inflation to occur (in prin...

  12. Job stress models, depressive disorders and work performance of engineers in microelectronics industry.

    Science.gov (United States)

    Chen, Sung-Wei; Wang, Po-Chuan; Hsin, Ping-Lung; Oates, Anthony; Sun, I-Wen; Liu, Shen-Ing

    2011-01-01

    Microelectronic engineers are considered valuable human capital contributing significantly toward economic development, but they may encounter stressful work conditions in the context of a globalized industry. The study aims at identifying risk factors of depressive disorders, primarily based on the job stress models known as the Demand-Control-Support and Effort-Reward Imbalance models, and at evaluating whether depressive disorders impair work performance in microelectronics engineers in Taiwan. The case-control study was conducted among 678 microelectronics engineers: 452 controls and 226 cases with depressive disorders, which were defined by a score of 17 or more on the Beck Depression Inventory and a psychiatrist's diagnosis. The self-administered questionnaires included the Job Content Questionnaire, the Effort-Reward Imbalance Questionnaire, demography, psychosocial factors, health behaviors and work performance. Hierarchical logistic regression was applied to identify risk factors of depressive disorders. Multivariate linear regressions were used to determine factors affecting work performance. By hierarchical logistic regression, the risk factors of depressive disorders are high demands, low work social support, a high effort/reward ratio and a low frequency of physical exercise. Combining the two job stress models may have better predictive power for depressive disorders than adopting either model alone. Three multivariate linear regressions provide similar results, indicating that depressive disorders are associated with impaired work performance in terms of absence, role limitation and social functioning limitation. The results may provide insight into the applicability of job stress models in a globalized high-tech industry largely concentrated in non-Western countries, and into the design of workplace preventive strategies for depressive disorders in Asian electronics engineering populations.

  13. Predictive modeling of neuroanatomic structures for brain atrophy detection

    Science.gov (United States)

    Hu, Xintao; Guo, Lei; Nie, Jingxin; Li, Kaiming; Liu, Tianming

    2010-03-01

    In this paper, we present an approach to predictive modeling of neuroanatomic structures for the detection of brain atrophy based on cross-sectional MRI images. The underlying premise of applying predictive modeling for atrophy detection is that brain atrophy is defined as a significant deviation of part of the anatomy from what the remaining normal anatomy predicts for that part. The steps of predictive modeling are as follows. The central cortical surface under consideration is reconstructed from the brain tissue map, and regions of interest (ROI) on it are predicted from other reliable anatomies. The vertex pair-wise distance between the predicted vertex and the true one within an abnormal region is expected to be larger than that for vertices in normal brain regions. The change of the white matter/gray matter ratio within a spherical region is used to identify the direction of vertex displacement. In this way, the severity of brain atrophy can be defined quantitatively by the displacements of those vertices. The proposed predictive modeling method has been evaluated using both simulated atrophies and MRI images of Alzheimer's disease.

  14. Development and validation of a risk model for prediction of hazardous alcohol consumption in general practice attendees: the predictAL study.

    Science.gov (United States)

    King, Michael; Marston, Louise; Švab, Igor; Maaroos, Heidi-Ingrid; Geerlings, Mirjam I; Xavier, Miguel; Benjamin, Vicente; Torres-Gonzalez, Francisco; Bellon-Saameno, Juan Angel; Rotar, Danica; Aluoja, Anu; Saldivia, Sandra; Correa, Bernardo; Nazareth, Irwin

    2011-01-01

    Little is known about the risk of progression to hazardous alcohol use in people currently drinking at safe limits. We aimed to develop a prediction model (predictAL) for the development of hazardous drinking in safe drinkers. A prospective cohort study of adult general practice attendees in six European countries and Chile was followed up over 6 months. We recruited 10,045 attendees between April 2003 and February 2005. 6193 European and 2462 Chilean attendees recorded AUDIT scores below 8 in men and 5 in women at recruitment and were used in modelling risk. 38 risk factors were measured to construct a risk model for the development of hazardous drinking using stepwise logistic regression. The model was corrected for overfitting and tested in an external population. The main outcome was hazardous drinking defined by an AUDIT score ≥8 in men and ≥5 in women. 69.0% of attendees were recruited, of whom 89.5% participated again after six months. The risk factors in the final predictAL model were sex, age, country, baseline AUDIT score, panic syndrome and lifetime alcohol problem. The predictAL model's average c-index across all six European countries was 0.839 (95% CI 0.805, 0.873). The Hedges' g effect size for the difference in log odds of predicted probability between safe drinkers in Europe who subsequently developed hazardous alcohol use and those who did not was 1.38 (95% CI 1.25, 1.51). External validation of the algorithm in Chilean safe drinkers resulted in a c-index of 0.781 (95% CI 0.717, 0.846) and Hedges' g of 0.68 (95% CI 0.57, 0.78). The predictAL risk model for development of hazardous consumption in safe drinkers compares favourably with risk algorithms for disorders in other medical settings and can be a useful first step in prevention of alcohol misuse.
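
    In outline, a risk model of this kind can be built with logistic regression and summarized by a c-index (equivalent to the ROC AUC for a binary outcome). The sketch below uses scikit-learn with hypothetical column names and omits the stepwise selection and the correction for overfitting used in the study.

```python
# Hedged outline of a hazardous-drinking risk model; the cohort file and columns are placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("safe_drinkers.csv")
predictors = ["sex", "age", "country", "baseline_audit",
              "panic_syndrome", "lifetime_alcohol_problem"]
X = pd.get_dummies(df[predictors], drop_first=True)     # encode categorical predictors
y = df["hazardous_at_6_months"]                         # AUDIT >= 8 (men) / >= 5 (women)

X_dev, X_val, y_dev, y_val = train_test_split(X, y, stratify=y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)

risk = model.predict_proba(X_val)[:, 1]
print("c-index (ROC AUC):", roc_auc_score(y_val, risk))  # the study reports about 0.84 in Europe
```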

  15. Development and validation of a risk model for prediction of hazardous alcohol consumption in general practice attendees: the predictAL study.

    Directory of Open Access Journals (Sweden)

    Michael King

    Full Text Available Little is known about the risk of progression to hazardous alcohol use in people currently drinking at safe limits. We aimed to develop a prediction model (predictAL) for the development of hazardous drinking in safe drinkers. A prospective cohort study of adult general practice attendees in six European countries and Chile was followed up over 6 months. We recruited 10,045 attendees between April 2003 and February 2005. 6193 European and 2462 Chilean attendees recorded AUDIT scores below 8 in men and 5 in women at recruitment and were used in modelling risk. 38 risk factors were measured to construct a risk model for the development of hazardous drinking using stepwise logistic regression. The model was corrected for overfitting and tested in an external population. The main outcome was hazardous drinking defined by an AUDIT score ≥8 in men and ≥5 in women. 69.0% of attendees were recruited, of whom 89.5% participated again after six months. The risk factors in the final predictAL model were sex, age, country, baseline AUDIT score, panic syndrome and lifetime alcohol problem. The predictAL model's average c-index across all six European countries was 0.839 (95% CI 0.805, 0.873). The Hedges' g effect size for the difference in log odds of predicted probability between safe drinkers in Europe who subsequently developed hazardous alcohol use and those who did not was 1.38 (95% CI 1.25, 1.51). External validation of the algorithm in Chilean safe drinkers resulted in a c-index of 0.781 (95% CI 0.717, 0.846) and Hedges' g of 0.68 (95% CI 0.57, 0.78). The predictAL risk model for development of hazardous consumption in safe drinkers compares favourably with risk algorithms for disorders in other medical settings and can be a useful first step in prevention of alcohol misuse.

  16. Seasonal prediction of East Asian summer rainfall using a multi-model ensemble system

    Science.gov (United States)

    Ahn, Joong-Bae; Lee, Doo-Young; Yoo, Jin‑Ho

    2015-04-01

    Using the retrospective forecasts of seven state-of-the-art coupled models and their multi-model ensemble (MME) for boreal summers, the prediction skills of climate models in the western tropical Pacific (WTP) and East Asian region are assessed. The prediction of summer rainfall anomalies in East Asia is difficult, while the WTP has a strong correlation between model prediction and observation. We focus on developing a new approach to further enhance the seasonal prediction skill for summer rainfall in East Asia and investigate the influence of convective activity in the WTP on East Asian summer rainfall. By analyzing the characteristics of the WTP convection, two distinct patterns associated with El Niño-Southern Oscillation developing and decaying modes are identified. Based on the multiple linear regression method, the East Asia Rainfall Index (EARI) is developed by using the interannual variability of the normalized Maritime continent-WTP Indices (MPIs), as potentially useful predictors for rainfall prediction over East Asia, obtained from the above two main patterns. For East Asian summer rainfall, the EARI has superior performance to the East Asia summer monsoon index or each MPI. Therefore, the regressed rainfall from EARI also shows a strong relationship with the observed East Asian summer rainfall pattern. In addition, we evaluate the prediction skill of the East Asia reconstructed rainfall obtained by hybrid dynamical-statistical approach using the cross-validated EARI from the individual models and their MME. The results show that the rainfalls reconstructed from simulations capture the general features of observed precipitation in East Asia quite well. This study convincingly demonstrates that rainfall prediction skill is considerably improved by using a hybrid dynamical-statistical approach compared to the dynamical forecast alone. Acknowledgements This work was carried out with the support of Rural Development Administration Cooperative Research

  17. Prediction Model for Gastric Cancer Incidence in Korean Population.

    Directory of Open Access Journals (Sweden)

    Bang Wool Eom

    Full Text Available Predicting high risk groups for gastric cancer and motivating these groups to receive regular checkups is required for the early detection of gastric cancer. The aim of this study was to develop a prediction model for gastric cancer incidence based on a large population-based cohort in Korea. Based on the National Health Insurance Corporation data, we analyzed 10 major risk factors for gastric cancer. The Cox proportional hazards model was used to develop gender-specific prediction models for gastric cancer development, and the performance of the developed model in terms of discrimination and calibration was validated using an independent cohort. Discrimination ability was evaluated using Harrell's C-statistics, and calibration was evaluated using a calibration plot and slope. During a median of 11.4 years of follow-up, 19,465 (1.4%) and 5,579 (0.7%) newly developed gastric cancer cases were observed among 1,372,424 men and 804,077 women, respectively. The prediction models included age, BMI, family history, meal regularity, salt preference, alcohol consumption, smoking and physical activity for men, and age, BMI, family history, salt preference, alcohol consumption, and smoking for women. This prediction model showed good accuracy and predictability in both the development and validation cohorts (C-statistics: 0.764 for men, 0.706 for women). In this study, a prediction model for gastric cancer incidence was developed that displayed a good performance.
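
    The gender-specific Cox models and their discrimination can be sketched with the lifelines package as below; the cohort file and column names are hypothetical and the calibration plot is not shown.

```python
# Illustrative Cox proportional hazards fit with lifelines; data and columns are placeholders.
import pandas as pd
from lifelines import CoxPHFitter

cohort = pd.read_csv("nhic_men.csv")          # hypothetical male cohort with follow-up data
covariates = ["age", "bmi", "family_history", "meal_regularity",
              "salt_preference", "alcohol", "smoking", "physical_activity"]

cph = CoxPHFitter()
cph.fit(cohort[covariates + ["followup_years", "gastric_cancer"]],
        duration_col="followup_years", event_col="gastric_cancer")

cph.print_summary()                           # hazard ratios for each risk factor
print("Harrell's C:", cph.concordance_index_) # discrimination, cf. the 0.764 reported for men
```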

  18. Technical Work Plan for: Additional Multiscale Thermohydrologic Modeling

    International Nuclear Information System (INIS)

    B. Kirstein

    2006-01-01

    will be evaluated and justified. Some of this evaluation will be conducted in conjunction with the post-model development validation activity involving comparisons of predicted TH conditions with measured TH conditions in the DST. The expected result is that, consistent with what was found in Revision 03 of Multiscale Thermohydrologic Model (BSC 2005 [DIRS 173944], Section 6.3.9), near-field/in-drift TH behavior is insensitive to a wide range of host-rock hydrologic property values. It is the intention of the work described in this TWP to propagate the new infiltration fluxes from the replacement infiltration model, by using the percolation fluxes from the revised site-scale unsaturated zone (UZ) flow model that has applied those new infiltration fluxes. The percolation flux distributions will be obtained from the updated site-scale UZ flow model, which has applied updated infiltration flux maps. Another objective of the work scope is to develop, implement, and validate a revised TH submodel-construction approach. This revised approach utilizes interpolation among a set of generic LDTH submodels that are run for a range of percolation flux histories that cover a sufficiently broad range of infiltration flux uncertainty, as well as for four host-rock units (two lithophysal units and two nonlithophysal units), and for three thermal property sets (low, mean, and high). A key motivation for this revised LDTH submodel-construction approach is to enable the MSTHM to be more flexible in addressing a broad range of infiltration flux cases. This approach allows the generic LDTH submodel simulations to be conducted prior to receiving percolation flux maps

  19. Simple metal model for predicting uptake and chemical processes in sewage-fed aquaculture ecosystem

    DEFF Research Database (Denmark)

    Azanu, David; Jorgensen, Sven Erik; Darko, Godfred

    2016-01-01

    but not working properly for chromium and mercury. Additional processes, including precipitation of chromium and bio-magnification of methylmercury, were introduced to explain the concentrations of chromium and mercury in fish. Comparison of measured and predicted metal concentrations used for validation gave a linear ...% was the best, which is also in accordance with the fish growth. The ratio of fish food was also calibrated to be 70% due to a food chain in the water and 30% due to a food chain in the sediment. This gave the lowest uncertainty of the model. The simple metal model was working acceptably well for Pb, Cu and Cd...

  20. Comparison of the models of financial distress prediction

    Directory of Open Access Journals (Sweden)

    Jiří Omelka

    2013-01-01

    Full Text Available Prediction of financial distress generally means estimating whether a business entity is close to bankruptcy or at least to serious financial problems. Financial distress is defined as a situation in which a company is not able to meet its liabilities in any form, or in which its liabilities exceed its assets. Classification of the financial situation of business entities represents a multidisciplinary scientific issue that draws not only on economic theory but also on statistical, or more precisely econometric, approaches. The first models of financial distress prediction originated in the sixties of the 20th century. One of the best known is Altman's model, followed by a range of others constructed on more or less comparable bases. In many existing models it is possible to find common elements which could be regarded as elementary indicators of a company's potential financial distress. The objective of this article is, based on a comparison of existing models of financial distress prediction, to define a set of basic indicators of a company's financial distress while also identifying their critical aspects. The set defined in this way will serve as a background for future research focused on the determination of a one-dimensional model of financial distress prediction, which would subsequently become a basis for the construction of a multi-dimensional prediction model.

  1. Optimal design of hydraulic excavator working device based on multiple surrogate models

    Directory of Open Access Journals (Sweden)

    Qingying Qiu

    2016-05-01

    Full Text Available The optimal design of the hydraulic excavator working device often relies on computationally expensive analysis methods such as finite element analysis. Significant difficulties also exist when using a sensitivity-based decomposition approach for such practical engineering problems, because explicit mathematical formulas relating the objective function to the design variables are impossible to formulate. An effective alternative is known as the surrogate model. The purpose of this article is to provide a comparative study of multiple surrogate models, including the response surface methodology, Kriging, radial basis function, and support vector machine, and to select the one that best fits the optimization of the working device. In this article, a new modeling strategy based on the combination of the dimension variables between hinge joints and the forces loaded on hinge joints of the working device is proposed. In addition, the extent to which the accuracy of the surrogate models depends on different design variables is presented. A bionic intelligent optimization algorithm is then used to obtain the optimal results, which demonstrate that the maximum stresses calculated by the surrogate-based method and by finite element analysis are quite similar, but the efficiency of the former is much higher than that of the latter.
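
    A compact way to compare surrogate families on sampled finite element results is cross-validation, as in the scikit-learn sketch below; the design matrix and stress responses are synthetic placeholders for the hinge-joint dimensions, loads and FEA outputs, and the radial basis function surrogate is omitted.

```python
# Hedged comparison of surrogate model families on synthetic stand-in data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.random((80, 5))                                   # hinge-joint dimensions and loads
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.normal(size=80)   # stand-in for FEA stress

surrogates = {
    "response surface": make_pipeline(PolynomialFeatures(2), LinearRegression()),
    "Kriging": GaussianProcessRegressor(),
    "support vector machine": SVR(kernel="rbf"),
}
for name, model in surrogates.items():
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: cross-validated R^2 = {score:.3f}")
```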

  2. Making eco logic and models work

    NARCIS (Netherlands)

    Kuiper, Jan Jurjen

    2016-01-01

    Dynamical ecosystem models are important tools that can help ecologists understand complex systems, and turn understanding into predictions of how these systems respond to external changes. This thesis revolves around PCLake, an integrated ecosystem model of shallow lakes that is used by both

  3. Development of Artificial Neural Network Model for Diesel Fuel Properties Prediction using Vibrational Spectroscopy.

    Science.gov (United States)

    Bolanča, Tomislav; Marinović, Slavica; Ukić, Sime; Jukić, Ante; Rukavina, Vinko

    2012-06-01

    This paper describes the development of artificial neural network models which can be used to correlate and predict diesel fuel properties from several FTIR-ATR absorbances and Raman intensities as input variables. Multilayer feed-forward and radial basis function neural networks have been used for rapid and simultaneous prediction of the cetane number, cetane index, density, viscosity, distillation temperatures at 10% (T10), 50% (T50) and 90% (T90) recovery, and the contents of total aromatics and polycyclic aromatic hydrocarbons of commercial diesel fuels. In this study, a two-phase training procedure was applied to the multilayer feed-forward networks: the first-phase algorithm was always back-propagation, while two second-phase algorithms, conjugate gradient and quasi-Newton, were varied and compared. In the case of the radial basis function network, the radial layer was trained using a K-means radial assignment algorithm and three different radial spread algorithms: explicit, isotropic and K-nearest neighbour. The number of hidden layer neurons and the number of experimental data points used for the training set were optimized for both neural networks in order to ensure good predictive ability while reducing unnecessary experimental work. This work shows that the developed artificial neural network models can determine the main properties of diesel fuels simultaneously based on a single and fast IR or Raman measurement.
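
    A minimal multi-output sketch of this idea is shown below: one feed-forward network maps spectral features to several fuel properties at once, trained with scikit-learn's L-BFGS (a quasi-Newton-style) solver. The spectra, property values, and network size are synthetic placeholders, not the authors' data or architecture.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(42)
    n_samples, n_bands, n_properties = 120, 40, 5  # e.g. cetane number, density, T10/T50/T90

    X = rng.normal(size=(n_samples, n_bands))                      # spectral absorbances/intensities
    W = rng.normal(size=(n_bands, n_properties))
    Y = X @ W + 0.05 * rng.normal(size=(n_samples, n_properties))  # synthetic property values

    X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)

    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(20,), solver="lbfgs",  # quasi-Newton-style training
                     max_iter=5000, random_state=0),
    )
    model.fit(X_tr, Y_tr)
    print("R^2 on held-out spectra:", round(model.score(X_te, Y_te), 3))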

  4. Modelling of Biota Dose Effects. Report of Working Group 6 Biota Dose Effects Modelling of EMRAS II Topical Heading Reference Approaches for Biota Dose Assessment. Environmental Modelling for RAdiation Safety (EMRAS II) Programme

    International Nuclear Information System (INIS)

    2014-07-01

    Environmental assessment models are used for evaluating the radiological impact of actual and potential releases of radionuclides to the environment. They are essential tools for use in the regulatory control of routine discharges to the environment and in planning the measures to be taken in the event of accidental releases. They are also used for predicting the impact of releases which may occur far into the future, for example, from underground radioactive waste repositories. It is important to verify, to the extent possible, the reliability of the predictions of such models by a comparison with measured values in the environment or with the predictions of other models. The IAEA has been organizing programmes on international model testing since the 1980s. These programmes have contributed to a general improvement in models, in the transfer of data and in the capabilities of modellers in Member States. IAEA publications on this subject over the past three decades demonstrate the comprehensive nature of the programmes and record the associated advances which have been made. From 2009 to 2011, the IAEA organized a project entitled Environmental Modelling for RAdiation Safety (EMRAS II), which concentrated on the improvement of environmental transfer models and the development of reference approaches to estimate the radiological impacts on humans, as well as on flora and fauna, arising from radionuclides in the environment. Different aspects were addressed by nine working groups covering three themes: reference approaches for human dose assessment, reference approaches for biota dose assessment and approaches for addressing emergency situations. This publication describes the work of the Biota Effects Modelling Working Group

  5. Modelling of Biota Dose Effects. Report of Working Group 6 Biota Dose Effects Modelling of EMRAS II Topical Heading Reference Approaches for Biota Dose Assessment. Environmental Modelling for RAdiation Safety (EMRAS II) Programme

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2014-07-15

    Environmental assessment models are used for evaluating the radiological impact of actual and potential releases of radionuclides to the environment. They are essential tools for use in the regulatory control of routine discharges to the environment and in planning the measures to be taken in the event of accidental releases. They are also used for predicting the impact of releases which may occur far into the future, for example, from underground radioactive waste repositories. It is important to verify, to the extent possible, the reliability of the predictions of such models by a comparison with measured values in the environment or with the predictions of other models. The IAEA has been organizing programmes on international model testing since the 1980s. These programmes have contributed to a general improvement in models, in the transfer of data and in the capabilities of modellers in Member States. IAEA publications on this subject over the past three decades demonstrate the comprehensive nature of the programmes and record the associated advances which have been made. From 2009 to 2011, the IAEA organized a project entitled Environmental Modelling for RAdiation Safety (EMRAS II), which concentrated on the improvement of environmental transfer models and the development of reference approaches to estimate the radiological impacts on humans, as well as on flora and fauna, arising from radionuclides in the environment. Different aspects were addressed by nine working groups covering three themes: reference approaches for human dose assessment, reference approaches for biota dose assessment and approaches for addressing emergency situations. This publication describes the work of the Biota Effects Modelling Working Group.

  6. A model for predicting lung cancer response to therapy

    International Nuclear Information System (INIS)

    Seibert, Rebecca M.; Ramsey, Chester R.; Hines, J. Wesley; Kupelian, Patrick A.; Langen, Katja M.; Meeks, Sanford L.; Scaperoth, Daniel D.

    2007-01-01

    Purpose: Volumetric computed tomography (CT) images acquired by image-guided radiation therapy (IGRT) systems can be used to measure tumor response over the course of treatment. Predictive adaptive therapy is a novel treatment technique that uses volumetric IGRT data to actively predict the future tumor response to therapy during the first few weeks of IGRT treatment. The goal of this study was to develop and test a model for predicting lung tumor response during IGRT treatment using serial megavoltage CT (MVCT). Methods and Materials: Tumor responses were measured for 20 lung cancer lesions in 17 patients who were imaged and treated with helical tomotherapy with doses ranging from 2.0 to 2.5 Gy per fraction. Five patients were treated with concurrent chemotherapy, and 1 patient was treated with neoadjuvant chemotherapy. Tumor response to treatment was retrospectively measured by contouring 480 serial MVCT images acquired before treatment. A nonparametric, memory-based locally weighted regression (LWR) model was developed for predicting tumor response using the retrospective tumor response data. This model predicts future tumor volumes and the associated confidence intervals based on limited observations during the first 2 weeks of treatment. The predictive accuracy of the model was tested using a leave-one-out cross-validation technique with the measured tumor responses. Results: The predictive algorithm was used to compare predicted versus measured tumor volume response for all 20 lesions. The average error for the predictions of the final tumor volume was 12%, with the true volumes always bounded by the 95% confidence interval. The greatest model uncertainty occurred near the middle of the course of treatment, in which the tumor response relationships were more complex, the model had less information, and the predictors were more varied. The optimal days for measuring the tumor response on the MVCT images were on elapsed Days 1, 2, 5, 9, 11, 12, 17, and 18 during
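
    The core of the approach, locally weighted regression, can be written down compactly. The sketch below fits a Gaussian-kernel weighted straight line around the query day and extrapolates the relative tumor volume; the observation days and volumes are hypothetical, and the published model additionally produces confidence intervals from its memory base of prior patients.

    import numpy as np

    def lwr_predict(x_query, x_train, y_train, bandwidth=6.0):
        """Gaussian-kernel locally weighted linear regression evaluated at x_query."""
        w = np.exp(-0.5 * ((x_train - x_query) / bandwidth) ** 2)
        sw = np.sqrt(w)
        X = np.column_stack([np.ones_like(x_train), x_train])   # design matrix [1, x]
        beta, *_ = np.linalg.lstsq(X * sw[:, None], y_train * sw, rcond=None)
        return beta[0] + beta[1] * x_query

    # Relative tumour volumes observed on early treatment days (hypothetical values).
    days = np.array([1.0, 2.0, 5.0, 9.0, 11.0, 12.0])
    volumes = np.array([1.00, 0.98, 0.93, 0.85, 0.82, 0.80])

    print("Predicted relative volume on day 18:",
          round(lwr_predict(18.0, days, volumes), 3))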

  7. Predicting the effects of ionising radiation on ecosystems by a generic model based on the Lotka-Volterra equations

    International Nuclear Information System (INIS)

    Monte, Luigi

    2009-01-01

    The present work describes a model for predicting the population dynamics of the main components (resources and consumers) of terrestrial ecosystems exposed to ionising radiation. The ecosystem is modelled by the Lotka-Volterra equations with consumer competition. Linear dose-response relationships without threshold are assumed to relate the values of the model parameters to the dose rates. The model accounts for the migration of consumers from areas characterised by different levels of radionuclide contamination. The criteria to select the model parameter values are motivated by accounting for the results of the empirical studies of past decades. Examples of predictions for long-term chronic exposure are reported and discussed.
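
    A minimal sketch of this kind of model is given below: a single resource-consumer Lotka-Volterra system in which the consumer mortality rate rises linearly with the chronic dose rate, with no threshold. The parameter values, the choice of mortality as the affected parameter, and the dose-rate units are illustrative assumptions, not those of the cited model (which also includes consumer competition and migration).

    from scipy.integrate import solve_ivp

    r, K = 1.0, 100.0        # resource growth rate and carrying capacity
    a, e = 0.02, 0.5         # attack rate and conversion efficiency
    m0, k_dose = 0.3, 0.05   # baseline consumer mortality and radiosensitivity slope

    def ecosystem(t, y, dose_rate):
        R, C = y
        m = m0 + k_dose * dose_rate          # linear, no-threshold dose response
        dR = r * R * (1 - R / K) - a * R * C
        dC = e * a * R * C - m * C
        return [dR, dC]

    for dose_rate in (0.0, 2.0, 5.0):        # arbitrary dose-rate units
        sol = solve_ivp(ecosystem, (0.0, 200.0), [50.0, 5.0], args=(dose_rate,))
        R_end, C_end = sol.y[:, -1]
        print(f"dose rate {dose_rate}: resource = {R_end:.1f}, consumer = {C_end:.1f}")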

  8. An exploration of the prevalence and predictors of work-related well-being among psychosocial oncology professionals: An application of the job demands-resources model.

    Science.gov (United States)

    Turnell, Adrienne; Rasmussen, Victoria; Butow, Phyllis; Juraskova, Ilona; Kirsten, Laura; Wiener, Lori; Patenaude, Andrea; Hoekstra-Weebers, Josette; Grassi, Luigi

    2016-02-01

    Burnout is reportedly high among oncology healthcare workers. Psychosocial oncologists may be particularly vulnerable to burnout. However, their work engagement may also be high, counteracting stress in the workplace. This study aimed to document the prevalence of both burnout and work engagement, and the predictors of both, utilizing the job demands-resources (JD-R) model, within a sample of psychosocial oncologists. Psychosocial-oncology clinicians (N = 417), recruited through 10 international and national psychosocial-oncology societies, completed an online questionnaire. Measures included demographic and work characteristics, burnout (the MBI-HSS Emotional Exhaustion (EE) and Depersonalization (DP) subscales), the Utrecht Work Engagement Scale, and measures of job demands and resources. High EE and DP were reported by 20.2% and 6.6% of participants, respectively, while 95.3% reported average to high work engagement. Lower levels of job resources and higher levels of job demands predicted greater burnout, as predicted by the JD-R model, but the predicted interaction between these characteristics and burnout was not significant. Higher levels of job resources predicted higher levels of work engagement. Burnout was surprisingly low and work engagement high in this sample. Nonetheless, one in five psychosocial oncologists reported high EE. Our results suggest that both the positive (resources) and negative (demands) aspects of this work environment have an impact on burnout and engagement, offering opportunities for intervention. Theories such as the JD-R model can be useful in guiding research in this area.

  9. Developing a stochastic traffic volume prediction model for public-private partnership projects

    Science.gov (United States)

    Phong, Nguyen Thanh; Likhitruangsilp, Veerasak; Onishi, Masamitsu

    2017-11-01

    Transportation projects require an enormous amount of capital investment resulting from their tremendous size, complexity, and risk. Due to the limitation of public finances, the private sector is invited to participate in transportation project development. The private sector can entirely or partially invest in transportation projects under the Public-Private Partnership (PPP) scheme, which has been an attractive option for several developing countries, including Vietnam. There are many factors affecting the success of PPP projects. The accurate prediction of traffic volume is considered one of the key success factors of PPP transportation projects. However, only a few research works have investigated how to predict traffic volume over a long period of time. Moreover, conventional traffic volume forecasting methods are usually based on deterministic models which predict a single value of traffic volume but do not consider risk and uncertainty. This knowledge gap makes it difficult for concessionaires to estimate PPP transportation project revenues accurately. The objective of this paper is to develop a probabilistic traffic volume prediction model. First, traffic volumes were modelled as following a Geometric Brownian Motion (GBM) process. The Monte Carlo technique is then applied to simulate different scenarios. The results show that this stochastic approach can systematically analyze variations in the traffic volume and yield more reliable estimates for PPP projects.
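
    A bare-bones version of the GBM plus Monte Carlo step is sketched below: traffic grows multiplicatively with a drift and volatility, and repeated simulation gives percentile bands that can feed a revenue analysis. The starting volume, drift, and volatility figures are illustrative assumptions, not estimates from the paper.

    import numpy as np

    rng = np.random.default_rng(1)

    V0 = 20_000      # current annual average daily traffic (vehicles/day)
    mu = 0.04        # expected annual growth rate
    sigma = 0.10     # annual volatility
    years = 25       # concession period
    n_paths = 10_000

    # Exact GBM update per year: V_{t+1} = V_t * exp((mu - sigma^2/2) + sigma * Z)
    Z = rng.standard_normal((n_paths, years))
    log_increments = (mu - 0.5 * sigma**2) + sigma * Z
    paths = V0 * np.exp(np.cumsum(log_increments, axis=1))

    final = paths[:, -1]
    p5, p50, p95 = np.percentile(final, [5, 50, 95])
    print(f"Traffic in year {years}: P5={p5:,.0f}, median={p50:,.0f}, P95={p95:,.0f}")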

  10. Tectonic predictions with mantle convection models

    Science.gov (United States)

    Coltice, Nicolas; Shephard, Grace E.

    2018-04-01

    Over the past 15 yr, numerical models of convection in Earth's mantle have made a leap forward: they can now produce self-consistent plate-like behaviour at the surface together with deep mantle circulation. These digital tools provide a new window into the intimate connections between plate tectonics and mantle dynamics, and can therefore be used for tectonic predictions, in principle. This contribution explores this assumption. First, initial conditions at 30, 20, 10 and 0 Ma are generated by driving a convective flow with imposed plate velocities at the surface. We then compute instantaneous mantle flows in response to the guessed temperature fields without imposing any boundary conditions. Plate boundaries self-consistently emerge at correct locations with respect to reconstructions, except for small plates close to subduction zones. As already observed for other types of instantaneous flow calculations, the structure of the top boundary layer and upper-mantle slab is the dominant character that leads to accurate predictions of surface velocities. Perturbations of the rheological parameters have little impact on the resulting surface velocities. We then compute fully dynamic model evolution from 30 and 10 to 0 Ma, without imposing plate boundaries or plate velocities. Contrary to instantaneous calculations, errors in kinematic predictions are substantial, although the plate layout and kinematics in several areas remain consistent with the expectations for the Earth. For these calculations, varying the rheological parameters makes a difference for plate boundary evolution. Also, identified errors in initial conditions contribute to first-order kinematic errors. This experiment shows that the tectonic predictions of dynamic models over 10 My are highly sensitive to uncertainties of rheological parameters and initial temperature field in comparison to instantaneous flow calculations. Indeed, the initial conditions and the rheological parameters can be good enough

  11. An Intercomparison of Model Predictions for an Urban Contamination Resulting from the Explosion of a Radiological Dispersal Device

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Won Tae; Jeong, Hyo Jun; Kim, Eun Han; Han, Moon Hee [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2009-03-15

    The METRO-K is a model for radiological dose assessment due to radioactive contamination in the Korean urban environment. The model took part in the Urban Remediation Working Group within the IAEA's (International Atomic Energy Agency) EMRAS (Environmental Modeling for Radiation Safety) program. The Working Group designed a scenario for the intercomparison of model predictions of the radioactive contamination resulting from the explosion of a radiological dispersal device in a hypothetical city. This paper deals intensively with one part of the many predictive results produced in the EMRAS program. The predictive results of three different models (METRO-K, RESRAD-RDD, CPHR) were submitted to the Working Group. The differences among the predictive results were due to differences in the mathematical modeling approaches, the parameter values, and the assessors' understanding. Even if the final results (for example, dose rates from contaminated surfaces which might affect a receptor) are similar, the understanding of the contribution of each contaminated surface showed great differences. In the authors' judgment, this is due to a lack of understanding of, and information on, radiological terrorism, as well as to the social and cultural differences among the assessors. It can therefore be concluded that the experience of assessors and their subjective judgments are important factors in obtaining reliable results. It was also identified that, if a little additional information can be acquired, the METRO-K could be a useful tool for decision support against contamination resulting from radiological terrorism by improving the existing model.

  12. An Intercomparison of Model Predictions for an Urban Contamination Resulting from the Explosion of a Radiological Dispersal Device

    International Nuclear Information System (INIS)

    Hwang, Won Tae; Jeong, Hyo Jun; Kim, Eun Han; Han, Moon Hee

    2009-01-01

    The METRO-K is a model for radiological dose assessment due to radioactive contamination in the Korean urban environment. The model took part in the Urban Remediation Working Group within the IAEA's (International Atomic Energy Agency) EMRAS (Environmental Modeling for Radiation Safety) program. The Working Group designed a scenario for the intercomparison of model predictions of the radioactive contamination resulting from the explosion of a radiological dispersal device in a hypothetical city. This paper deals intensively with one part of the many predictive results produced in the EMRAS program. The predictive results of three different models (METRO-K, RESRAD-RDD, CPHR) were submitted to the Working Group. The differences among the predictive results were due to differences in the mathematical modeling approaches, the parameter values, and the assessors' understanding. Even if the final results (for example, dose rates from contaminated surfaces which might affect a receptor) are similar, the understanding of the contribution of each contaminated surface showed great differences. In the authors' judgment, this is due to a lack of understanding of, and information on, radiological terrorism, as well as to the social and cultural differences among the assessors. It can therefore be concluded that the experience of assessors and their subjective judgments are important factors in obtaining reliable results. It was also identified that, if a little additional information can be acquired, the METRO-K could be a useful tool for decision support against contamination resulting from radiological terrorism by improving the existing model.

  13. Iowa calibration of MEPDG performance prediction models.

    Science.gov (United States)

    2013-06-01

    This study aims to improve the accuracy of AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG) pavement performance predictions for Iowa pavement systems through local calibration of MEPDG prediction models. A total of 130 representative p...

  14. Comparison of two ordinal prediction models

    DEFF Research Database (Denmark)

    Kattan, Michael W; Gerds, Thomas A

    2015-01-01

    system (i.e. old or new), such as the level of evidence for one or more factors included in the system or the general opinions of expert clinicians. However, given the major objective of estimating prognosis on an ordinal scale, we argue that the rival staging system candidates should be compared on their ability to predict outcome. We sought to outline an algorithm that would compare two rival ordinal systems on their predictive ability. RESULTS: We devised an algorithm based largely on the concordance index, which is appropriate for comparing two models in their ability to rank observations. We demonstrate our algorithm with a prostate cancer staging system example. CONCLUSION: We have provided an algorithm for selecting the preferred staging system based on prognostic accuracy. It appears to be useful for the purpose of selecting between two ordinal prediction models.
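
    A bare-bones version of the central ingredient, the concordance index, is sketched below for two hypothetical ordinal staging systems scored against an observed severity outcome (higher = worse). The published algorithm additionally handles censored survival outcomes and a formal comparison of the two indices, which this sketch omits.

    from itertools import combinations

    def concordance_index(stage, outcome):
        """Fraction of usable pairs whose stage ordering matches the outcome ordering."""
        concordant = ties = usable = 0
        for i, j in combinations(range(len(stage)), 2):
            if outcome[i] == outcome[j]:
                continue                       # pair carries no ranking information
            usable += 1
            direction = (stage[i] - stage[j]) * (outcome[i] - outcome[j])
            if direction > 0:
                concordant += 1
            elif direction == 0:
                ties += 1                      # tied stages count half
        return (concordant + 0.5 * ties) / usable

    # Hypothetical patients: stage under the old and new systems, plus an outcome score.
    old_stage = [1, 1, 2, 2, 3, 3, 4, 4]
    new_stage = [1, 2, 2, 3, 3, 3, 4, 4]
    outcome   = [2, 1, 3, 4, 5, 4, 6, 7]       # observed severity score (higher = worse)

    print("C-index, old system:", round(concordance_index(old_stage, outcome), 3))
    print("C-index, new system:", round(concordance_index(new_stage, outcome), 3))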

  15. Aerodynamic Drag Scoping Work.

    Energy Technology Data Exchange (ETDEWEB)

    Voskuilen, Tyler [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Erickson, Lindsay Crowl [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Knaus, Robert C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2018-02-01

    This memo summarizes the aerodynamic drag scoping work done for Goodyear in early FY18. The work is to evaluate the feasibility of using Sierra/Low-Mach (Fuego) for drag predictions of rolling tires, particularly focused on the effects of tire features such as lettering, sidewall geometry, rim geometry, and interaction with the vehicle body. The work is broken into two parts. Part 1 consisted of investigation of a canonical validation problem (turbulent flow over a cylinder) using existing tools with different meshes and turbulence models. Part 2 involved calculating drag differences over plate geometries with simple features (ridges and grooves) defined by Goodyear of approximately the size of interest for a tire. The results of part 1 show the level of noise to be expected in a drag calculation and highlight the sensitivity of absolute predictions to model parameters such as mesh size and turbulence model. There is 20-30% noise in the experimental measurements on the canonical cylinder problem, and a similar level of variation between different meshes and turbulence models. Part 2 shows that there is a notable difference in the predicted drag on the sample plate geometries, however, the computational cost of extending the LES model to a full tire would be significant. This cost could be reduced by implementation of more sophisticated wall and turbulence models (e.g. detached eddy simulations - DES) and by focusing the mesh refinement on feature subsets with the goal of comparing configurations rather than absolute predictivity for the whole tire.

  16. Modeling pitting growth data and predicting degradation trend

    International Nuclear Information System (INIS)

    Viglasky, T.; Awad, R.; Zeng, Z.; Riznic, J.

    2007-01-01

    A non-statistical modeling approach to predicting material degradation is presented in this paper. In this approach, the original data series is processed using an Accumulated Generating Operation (AGO). With the aid of the AGO, which weakens the random fluctuation embedded in the data series, an approximately exponential curve is established. The generated data series described by the exponential curve is then modeled by a differential equation. The coefficients of the differential equation can be deduced by an approximate difference formula based on a least-squares algorithm. By solving the differential equation and applying an inverse AGO, a predictive model can be obtained. As this approach is not established on a statistical basis, the prediction can be performed with a limited amount of data. Implementation of this approach is demonstrated by predicting the pitting growth rate in specimens and the wear trend in steam generator tubes. The analysis results indicate that this approach provides a powerful tool with reasonable precision for predicting material degradation. (author)
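
    The procedure described here (AGO, a first-order differential equation fitted by least squares, then an inverse AGO) corresponds closely to the classical GM(1,1) grey prediction model, a minimal version of which is sketched below. The pit-depth series is synthetic, and the implementation is our reading of the general approach rather than the authors' code.

    import numpy as np

    def gm11_forecast(x0, n_ahead=3):
        """Fit a GM(1,1) grey model to series x0 and forecast n_ahead future values."""
        x0 = np.asarray(x0, dtype=float)
        x1 = np.cumsum(x0)                              # Accumulated Generating Operation
        z1 = 0.5 * (x1[1:] + x1[:-1])                   # background (mean) sequence
        B = np.column_stack([-z1, np.ones_like(z1)])    # least-squares design matrix
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]

        k = np.arange(len(x0) + n_ahead)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # solution of dx1/dt + a*x1 = b
        x0_hat = np.diff(x1_hat, prepend=0.0)               # inverse AGO
        return x0_hat[len(x0):]

    pit_depth = [0.12, 0.15, 0.19, 0.22, 0.27, 0.31]     # mm, hypothetical inspection data
    print("Forecast pit depths:", np.round(gm11_forecast(pit_depth), 3))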

  17. Risk Prediction Models for Incident Heart Failure: A Systematic Review of Methodology and Model Performance.

    Science.gov (United States)

    Sahle, Berhe W; Owen, Alice J; Chin, Ken Lee; Reid, Christopher M

    2017-09-01

    Numerous models predicting the risk of incident heart failure (HF) have been developed; however, evidence of their methodological rigor and reporting remains unclear. This study critically appraises the methods underpinning incident HF risk prediction models. EMBASE and PubMed were searched for articles published between 1990 and June 2016 that reported at least 1 multivariable model for prediction of HF. Model development information, including study design, variable coding, missing data, and predictor selection, was extracted. Nineteen studies reporting 40 risk prediction models were included. Existing models have acceptable discriminative ability (C-statistics > 0.70), although only 6 models were externally validated. Candidate variable selection was based on statistical significance from a univariate screening in 11 models, whereas it was unclear in 12 models. Continuous predictors were retained in 16 models, whereas it was unclear how continuous variables were handled in 16 models. Missing values were excluded in 19 of 23 models that reported missing data, and the number of events per variable was models. Only 2 models presented recommended regression equations. There was significant heterogeneity in discriminative ability of models with respect to age (P prediction models that had sufficient discriminative ability, although few are externally validated. Methods not recommended for the conduct and reporting of risk prediction modeling were frequently used, and resulting algorithms should be applied with caution. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Mathematical model for dissolved oxygen prediction in Cirata ...

    African Journals Online (AJOL)

    This paper presents the implementation and performance of a mathematical model to predict the concentration of dissolved oxygen in Cirata Reservoir, West Java, by using an Artificial Neural Network (ANN). The simulation program was created using Visual Studio 2012 C# software with the ANN model implemented in it. Prediction ...

  19. Regression models for predicting peak and continuous three-dimensional spinal loads during symmetric and asymmetric lifting tasks.

    Science.gov (United States)

    Fathallah, F A; Marras, W S; Parnianpour, M

    1999-09-01

    Most biomechanical assessments of spinal loading during industrial work have focused on estimating peak spinal compressive forces under static and sagittally symmetric conditions. The main objective of this study was to explore the potential of feasibly predicting three-dimensional (3D) spinal loading in industry from various combinations of trunk kinematics, kinetics, and subject-load characteristics. The study used spinal loading, predicted by a validated electromyography-assisted model, from 11 male participants who performed a series of symmetric and asymmetric lifts. Three classes of models were developed: (a) models using workplace, subject, and trunk motion parameters as independent variables (kinematic models); (b) models using workplace, subject, and measured moment variables (kinetic models); and (c) models incorporating workplace, subject, trunk motion, and measured moment variables (combined models). The results showed that peak 3D spinal loading during symmetric and asymmetric lifting was predicted equally well using all three types of regression models. Continuous 3D loading was predicted best using the combined models. When the use of such models is infeasible, the kinematic models can provide adequate predictions. Finally, lateral shear forces (peak and continuous) were consistently underestimated using all three types of models. The study demonstrated the feasibility of predicting 3D loads on the spine under specific symmetric and asymmetric lifting tasks without the need for collecting EMG information. However, further validation and development of the models should be conducted to assess and extend their applicability to lifting conditions other than those presented in this study. Actual or potential applications of this research include exposure assessment in epidemiological studies, ergonomic intervention, and laboratory task assessment.
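
    For illustration only, the sketch below fits a "kinematic-style" multiple linear regression to synthetic lifting data, predicting peak compression from load and trunk-motion variables. The predictor set, units, and coefficients are invented to show the workflow; they are not the study's regression models or spinal-load values.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(7)
    n = 200

    # Hypothetical workplace and trunk-motion predictors.
    mass = rng.uniform(5, 25, n)               # load mass (kg)
    moment_arm = rng.uniform(0.2, 0.6, n)      # horizontal distance of load (m)
    flexion_velocity = rng.uniform(10, 60, n)  # trunk flexion velocity (deg/s)
    asymmetry = rng.uniform(0, 45, n)          # trunk asymmetry angle (deg)

    # Synthetic "true" peak compression (N), loosely shaped for illustration only.
    compression = (900 + 95 * mass * moment_arm + 8 * flexion_velocity
                   + 6 * asymmetry + rng.normal(0, 150, n))

    X = np.column_stack([mass, moment_arm, flexion_velocity, asymmetry])
    X_tr, X_te, y_tr, y_te = train_test_split(X, compression, random_state=0)

    model = LinearRegression().fit(X_tr, y_tr)
    print("R^2 on held-out lifts:", round(model.score(X_te, y_te), 2))
    print("Coefficients:", np.round(model.coef_, 1))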

  20. Predicting the Impacts of Intravehicular Displays on Driving Performance with Human Performance Modeling

    Science.gov (United States)

    Mitchell, Diane Kuhl; Wojciechowski, Josephine; Samms, Charneta

    2012-01-01

    A challenge facing the U.S. National Highway Traffic Safety Administration (NHTSA), as well as international safety experts, is the need to educate car drivers about the dangers associated with performing distraction tasks while driving. Researchers working for the U.S. Army Research Laboratory have developed a technique for predicting the increase in mental workload that results when distraction tasks are combined with driving. They implement this technique using human performance modeling. They have predicted workload associated with driving combined with cell phone use. In addition, they have predicted the workload associated with driving military vehicles combined with threat detection. Their technique can be used by safety personnel internationally to demonstrate the dangers of combining distracter tasks with driving and to mitigate the safety risks.