WorldWideScience

Sample records for model predicts critical

  1. Evaluating predictions of critical oxygen desaturation events

    International Nuclear Information System (INIS)

    ElMoaqet, Hisham; Tilbury, Dawn M; Ramachandran, Satya Krishna

    2014-01-01

    This paper presents a new approach for evaluating predictions of oxygen saturation levels in blood (SpO2). A performance metric based on a threshold is proposed to evaluate SpO2 predictions based on whether or not they are able to capture critical desaturations in the SpO2 time series of patients. We use linear auto-regressive models built from historical SpO2 data to predict critical desaturation events with the proposed metric. In 20 s prediction intervals, 88%–94% of the critical events were captured, with positive predictive values (PPVs) between 90% and 99%. Increasing the prediction horizon to 60 s, 46%–71% of the critical events were detected, with PPVs between 81% and 97%. In both prediction horizons, more than 97% of the non-critical events were correctly classified. The overall classification capabilities of the developed predictive models were also investigated. The areas under the ROC curves for 60 s predictions from the developed models are between 0.86 and 0.98. Furthermore, we investigate the effect of including pulse rate (PR) dynamics in the models and predictions. We show no improvement in the percentage of predicted critical desaturations when PR dynamics are incorporated into the SpO2 predictive models (p-value = 0.814). We also show that including the PR dynamics does not improve the earliest time at which critical SpO2 levels are predicted (p-value = 0.986). Our results indicate that blood oxygen is an effective input to the PR rather than vice versa. We demonstrate that the combination of predictive models with frequent pulse oximetry measurements can be used as a warning of critical oxygen desaturations that may have adverse effects on the health of patients.
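
The scheme described above can be sketched in a few lines: fit a linear autoregressive model to past SpO2 samples by least squares, iterate it forward over the prediction horizon, and score predicted samples against a desaturation threshold. The model order, the 90% threshold, and the scoring details below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def fit_ar(series, order):
    # Least-squares fit of AR coefficients: x[t] ~ sum_i a[i] * x[t-1-i]
    n = len(series)
    X = np.column_stack([series[order - 1 - i : n - 1 - i] for i in range(order)])
    coeffs, *_ = np.linalg.lstsq(X, series[order:], rcond=None)
    return coeffs

def predict_ahead(history, coeffs, steps):
    # Iterate the one-step AR prediction `steps` samples into the future
    buf = list(history[-len(coeffs):])
    for _ in range(steps):
        buf.append(sum(c * v for c, v in zip(coeffs, reversed(buf))))
        buf = buf[-len(coeffs):]
    return buf[-1]

def event_metrics(actual, predicted, threshold=90.0):
    # Threshold-based scoring: a "critical" sample is one below the threshold
    tp = sum(a < threshold and p < threshold for a, p in zip(actual, predicted))
    fp = sum(a >= threshold and p < threshold for a, p in zip(actual, predicted))
    fn = sum(a < threshold and p >= threshold for a, p in zip(actual, predicted))
    ppv = tp / (tp + fp) if tp + fp else float("nan")
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    return ppv, sensitivity
```

A higher PPV means fewer false alarms; the sensitivity is the fraction of true desaturations the predictor caught, mirroring the percentages quoted in the abstract.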

  2. A new risk prediction model for critical care: the Intensive Care National Audit & Research Centre (ICNARC) model.

    Science.gov (United States)

    Harrison, David A; Parry, Gareth J; Carpenter, James R; Short, Alasdair; Rowan, Kathy

    2007-04-01

    To develop a new model to improve risk prediction for admissions to adult critical care units in the UK. Prospective cohort study. The setting was 163 adult, general critical care units in England, Wales, and Northern Ireland, December 1995 to August 2003. Patients were 216,626 critical care admissions. None. The performance of different approaches to modeling physiologic measurements was evaluated, and the best methods were selected to produce a new physiology score. This physiology score was combined with other information relating to the critical care admission (age, diagnostic category, source of admission, and cardiopulmonary resuscitation before admission) to develop a risk prediction model. Modeling interactions between diagnostic category and physiology score enabled the inclusion of groups of admissions that are frequently excluded from risk prediction models. The new model showed good discrimination (mean c index 0.870) and fit (mean Shapiro's R 0.665, mean Brier's score 0.132) in 200 repeated validation samples and performed well when compared with recalibrated versions of existing published risk prediction models in the cohort of patients eligible for all models. The hypothesis of perfect fit was rejected for all models, including the Intensive Care National Audit & Research Centre (ICNARC) model, as is to be expected in such a large cohort. The ICNARC model demonstrated better discrimination and overall fit than existing risk prediction models, even following recalibration of these models. We recommend it be used to replace previously published models for risk adjustment in the UK.

  3. Prediction model of critical weight loss in cancer patients during particle therapy.

    Science.gov (United States)

    Zhang, Zhihong; Zhu, Yu; Zhang, Lijuan; Wang, Ziying; Wan, Hongwei

    2018-01-01

    The objective of this study is to investigate the predictors of critical weight loss in cancer patients receiving particle therapy, and to build a prediction model based on these predictive factors. Patients receiving particle therapy were enrolled between June 2015 and June 2016. Body weight was measured at the start and end of particle therapy. The association between critical weight loss (defined as >5%) during particle therapy and patients' demographics, clinical characteristics, pre-therapeutic nutrition risk screening (NRS 2002) and BMI was evaluated by logistic regression and decision tree analysis. In total, 375 cancer patients receiving particle therapy were included. Mean weight loss was 0.55 kg, and 11.5% of patients experienced critical weight loss during particle therapy. The main predictors of critical weight loss during particle therapy were head and neck tumour location, a total radiation dose ≥70 Gy on the primary tumour, and the absence of prior surgery, as indicated by both logistic regression and decision tree analysis. A prediction model including tumour location, total radiation dose and post-surgery status had good predictive ability, with areas under the receiver operating characteristic curve of 0.79 (95% CI: 0.71-0.88) and 0.78 (95% CI: 0.69-0.86) for the decision tree and logistic regression models, respectively. Cancer patients with a head and neck tumour location, a total radiation dose ≥70 Gy and no prior surgery were at higher risk of critical weight loss during particle therapy, and early intensive nutrition counselling or intervention should be targeted at this population. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
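
As a rough illustration of the logistic-regression side of such a model, the sketch below fits a logistic model to synthetic binary risk factors shaped like the three the abstract identifies (head and neck site, dose ≥70 Gy, no prior surgery). The data, effect sizes, and learning rate are all invented; this is not the study's fitted model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 800
# Three invented 0/1 risk factors standing in for: head-and-neck site,
# dose >= 70 Gy, and absence of prior surgery (all assumptions).
X = rng.integers(0, 2, size=(n, 3)).astype(float)
true_w, true_b = np.array([1.3, 0.9, 0.7]), -3.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(X @ true_w + true_b)))).astype(float)

# Plain gradient ascent on the logistic log-likelihood
w, b = np.zeros(3), 0.0
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w += 0.5 * (X.T @ (y - p)) / n
    b += 0.5 * np.mean(y - p)
```

At convergence the fitted coefficients sit near the assumed effects, and the mean predicted probability matches the observed event rate, which is the calibration property risk models of this kind rely on.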

  4. Critical exponents predicted by grouping of Feynman diagrams in φ4 model

    International Nuclear Information System (INIS)

    Kaupuzs, J.

    2001-01-01

    Different perturbation theory treatments of the Ginzburg-Landau phase transition model are discussed. This includes a criticism of the perturbative renormalization group (RG) approach and a proposal of a novel method providing critical exponents consistent with the known exact solutions in two dimensions. The usual perturbation theory is reorganized by an appropriate grouping of the Feynman diagrams of the φ4 model with O(n) symmetry. As a result, equations for the calculation of the two-point correlation function are obtained which allow one to predict possible exact values of the critical exponents in two and three dimensions by proving relevant scaling properties of the asymptotic solution at (and near) criticality. The new values of the critical exponents are discussed and compared to the results of numerical simulations and experiments. (orig.)

  5. A Critical Plane-energy Model for Multiaxial Fatigue Life Prediction of Homogeneous and Heterogeneous Materials

    Science.gov (United States)

    Wei, Haoyang

    A new critical plane-energy model is proposed in this thesis for multiaxial fatigue life prediction of homogeneous and heterogeneous materials. A brief review of existing methods, especially critical plane-based and energy-based methods, is given first. Special focus is on one critical plane approach which has been shown to work for both brittle and ductile metals. The key idea is to automatically change the critical plane orientation with respect to different materials and stress states. One potential drawback of that model is that it needs an empirical calibration parameter for non-proportional multiaxial loadings, since only the strain terms are used and out-of-phase hardening cannot be considered. The energy-based model using the critical plane concept is proposed with the help of the Mroz-Garud hardening rule to explicitly include the effect of non-proportional hardening under cyclic fatigue loadings. Thus, the empirical calibration for non-proportional loading is not needed, since out-of-phase hardening is naturally included in the stress calculation. The model predictions are compared with experimental data from the open literature, and it is shown that the proposed model works for both proportional and non-proportional loadings without the empirical calibration. Next, the model is extended to the fatigue analysis of heterogeneous materials by integration with the finite element method. Fatigue crack initiation of representative volumes of heterogeneous materials is analyzed using the developed critical plane-energy model, with special focus on the microstructure effect on multiaxial fatigue life predictions. Conclusions and directions for future work are drawn based on the proposed study.

  6. Method of critical power prediction based on film flow model coupled with subchannel analysis

    International Nuclear Information System (INIS)

    Tomiyama, Akio; Yokomizo, Osamu; Yoshimoto, Yuichiro; Sugawara, Satoshi.

    1988-01-01

    A new method was developed to predict critical powers for a wide variety of BWR fuel bundle designs. This method couples subchannel analysis with a liquid film flow model, instead of taking the conventional approach of coupling subchannel analysis with critical heat flux correlations. Flow and quality distributions in a bundle are estimated by the subchannel analysis. Using these distributions, film flow rates along the fuel rods are then calculated with the film flow model. Dryout is assumed to occur where one of the film flows disappears. This method is expected to give much better adaptability to variations in geometry, heat flux, flow rate and quality distributions than the conventional methods. In order to verify the method, critical power data under BWR conditions were analyzed. Measured and calculated critical powers agreed to within ±7%. Furthermore, critical power data for a tight-latticed bundle obtained by LeTourneau et al. were compared with critical powers calculated by the present method and by two conventional methods: the CISE correlation, and subchannel analysis coupled with the CISE correlation. It was confirmed that the present method can predict critical powers more accurately than the conventional methods. (author)

  7. A mathematical model for predicting glucose levels in critically-ill patients: the PIGnOLI model

    Directory of Open Access Journals (Sweden)

    Zhongheng Zhang

    2015-06-01

    Background and Objectives. Glycemic control is of paramount importance in the intensive care unit. Presently, several BG control algorithms have been developed for clinical trials, but they are mostly based on experts' opinion and consensus. There are no validated models predicting how glucose levels will change after initiation of insulin infusion in critically ill patients. The study aimed to develop an equation for initial insulin dose setting. Methods. A large critical care database was employed for the study. Linear regression model fitting was employed. Retested blood glucose was used as the independent variable. Insulin rate was forced into the model. Multivariable fractional polynomials and interaction terms were used to explore the complex relationships among covariates. The overall fit of the model was examined by using residuals and adjusted R-squared values. Regression diagnostics were used to explore the influence of outliers on the model. Main Results. A total of 6,487 ICU admissions requiring insulin pump therapy were identified. The dataset was randomly split into two subsets at a 7:3 ratio. The initial model comprised fractional polynomials and interaction terms. However, this model was not stable when several outliers were excluded, so a simple linear model without interactions was fitted. The selected prediction model (Predicting Glucose Levels in ICU, PIGnOLI) included the variables initial blood glucose, insulin rate, PO volume, total parenteral nutrition, body mass index (BMI), lactate, congestive heart failure, renal failure, liver disease, time interval of BS recheck, and dextrose rate. Insulin rate was significantly associated with blood glucose reduction (coefficient: −0.52, 95% CI [−1.03, −0.01]). The parsimonious model was well validated with the validation subset, with an adjusted R-squared value of 0.8259. Conclusion. The study developed the PIGnOLI model for the initial insulin dose setting. Furthermore, experimental study is

  8. Prediction of Critical Power and W' in Hypoxia: Application to Work-Balance Modelling.

    Science.gov (United States)

    Townsend, Nathan E; Nichols, David S; Skiba, Philip F; Racinais, Sebastien; Périard, Julien D

    2017-01-01

    Purpose: Develop a prediction equation for critical power (CP) and work above CP (W') in hypoxia for use in the work-balance ([Formula: see text]) model. Methods: Nine trained male cyclists completed cycling time trials (TT; 12, 7, and 3 min) to determine CP and W' at five altitudes (250, 1,250, 2,250, 3,250, and 4,250 m). Least squares regression was used to predict CP and W' at altitude. A high-intensity intermittent test (HIIT) was performed at 250 and 2,250 m. Actual and predicted CP and W' were used to compute W' during HIIT using differential ([Formula: see text]) and integral ([Formula: see text]) forms of the [Formula: see text] model. Results: CP decreased at altitude (P ...). The equations for CP and W' developed in this study are suitable for use with the [Formula: see text] model in acute hypoxia. This enables the application of [Formula: see text] modelling to training prescription and competition analysis at altitude.
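
The CP and W' values in this kind of protocol come from the two-parameter critical power model, in which total work done in an exhaustive trial grows linearly with its duration: W(t) = CP·t + W'. A minimal sketch with invented time-trial numbers (not the study's data), where the regression slope recovers CP and the intercept recovers W':

```python
import numpy as np

# Hypothetical time-trial results: durations of the 12, 7 and 3 min trials
# in seconds, and the mean power (W) sustained in each. Values are invented.
t = np.array([720.0, 420.0, 180.0])
p = np.array([317.0, 329.0, 367.0])

# Two-parameter critical power model: total work W = CP * t + W',
# so regressing work on duration gives CP (slope) and W' (intercept).
work = p * t
cp, w_prime = np.polyfit(t, work, 1)
print(f"CP ~ {cp:.0f} W, W' ~ {w_prime / 1000:.1f} kJ")  # prints: CP ~ 300 W, W' ~ 12.0 kJ
```

The study's altitude adjustment then amounts to predicting how these two fitted parameters shift with elevation, which is what its least-squares prediction equations provide.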

  9. Predicting recovery from acute kidney injury in critically ill patients

    DEFF Research Database (Denmark)

    Itenov, Theis S; Berthelsen, Rasmus Ehrenfried; Jensen, Jens-Ulrik

    2018-01-01

    these patients. DESIGN: Observational study with development and validation of a risk prediction model. SETTING: Nine academic ICUs in Denmark. PARTICIPANTS: Development cohort of critically ill patients with AKI at ICU admission from the Procalcitonin and Survival Study cohort (n = 568), validation cohort.......1%. CONCLUSION: We constructed and validated a simple model that can predict the chance of recovery from AKI in critically ill patients....

  10. Predictive Modelling and Time: An Experiment in Temporal Archaeological Predictive Models

    OpenAIRE

    David Ebert

    2006-01-01

    One of the most common criticisms of archaeological predictive modelling is that it fails to account for temporal or functional differences in sites. However, a practical solution to temporal or functional predictive modelling has proven to be elusive. This article discusses temporal predictive modelling, focusing on the difficulties of employing temporal variables, then introduces and tests a simple methodology for the implementation of temporal modelling. The temporal models thus created ar...

  11. Nuclear criticality predictability

    International Nuclear Information System (INIS)

    Briggs, J.B.

    1999-01-01

    As a result of extensive effort, a large portion of the tedious and redundant research and processing of critical experiment data has been eliminated. The necessary step in criticality safety analyses of validating computer codes with benchmark critical data is greatly streamlined, and valuable criticality safety experimental data are preserved. Criticality safety personnel in 31 different countries are now using the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments'. Much has been accomplished by the work of the ICSBEP. However, evaluation and documentation represent only one element of a successful Nuclear Criticality Safety Predictability Program, and this element only exists as a separate entity because this work was not completed in conjunction with the experimentation process. I believe, however, that the work of the ICSBEP has also served to unify the other elements of nuclear criticality predictability. All elements are interrelated, but for a time it seemed that communication between these elements was not adequate. The ICSBEP has highlighted gaps in data, has retrieved lost data, has helped to identify errors in cross section processing codes, and has helped bring the international criticality safety community together in a common cause as true friends and colleagues. It has been a privilege to associate with those who work so diligently to make the project a success. (J.P.N.)

  12. Prediction of critical heat flux using ANFIS

    Energy Technology Data Exchange (ETDEWEB)

    Zaferanlouei, Salman, E-mail: zaferanlouei@gmail.co [Nuclear Engineering and Physics Department, Faculty of Nuclear Engineering, Center of Excellence in Nuclear Engineering, Amirkabir University of Technology (Tehran Polytechnic), 424 Hafez Avenue, Tehran (Iran, Islamic Republic of); Rostamifard, Dariush; Setayeshi, Saeed [Nuclear Engineering and Physics Department, Faculty of Nuclear Engineering, Center of Excellence in Nuclear Engineering, Amirkabir University of Technology (Tehran Polytechnic), 424 Hafez Avenue, Tehran (Iran, Islamic Republic of)

    2010-06-15

    The prediction of Critical Heat Flux (CHF) is essential for water-cooled nuclear reactors, since it is an important parameter for the economic efficiency and safety of nuclear power plants. Therefore, in this study a new flexible tool using an Adaptive Neuro-Fuzzy Inference System (ANFIS) is developed to predict CHF. The model is trained and tested using a set of available published field data. The CHF values predicted by the ANFIS model are acceptable compared with the other prediction methods. We improve the ANN model proposed in earlier work to avoid overfitting, and the new ANN test errors are then compared with the ANFIS model test errors. It is found that the ANFIS model, with root mean square (RMS) test errors of 4.79%, 5.04% and 11.39% for fixed inlet conditions, local conditions and fixed outlet conditions, respectively, predicts CHF better than the MLP neural network in fixed inlet and outlet conditions; ANFIS also gives acceptable results for CHF prediction in fixed local conditions.

  13. Prediction of critical heat flux using ANFIS

    International Nuclear Information System (INIS)

    Zaferanlouei, Salman; Rostamifard, Dariush; Setayeshi, Saeed

    2010-01-01

    The prediction of Critical Heat Flux (CHF) is essential for water-cooled nuclear reactors, since it is an important parameter for the economic efficiency and safety of nuclear power plants. Therefore, in this study a new flexible tool using an Adaptive Neuro-Fuzzy Inference System (ANFIS) is developed to predict CHF. The model is trained and tested using a set of available published field data. The CHF values predicted by the ANFIS model are acceptable compared with the other prediction methods. We improve the ANN model proposed in earlier work to avoid overfitting, and the new ANN test errors are then compared with the ANFIS model test errors. It is found that the ANFIS model, with root mean square (RMS) test errors of 4.79%, 5.04% and 11.39% for fixed inlet conditions, local conditions and fixed outlet conditions, respectively, predicts CHF better than the MLP neural network in fixed inlet and outlet conditions; ANFIS also gives acceptable results for CHF prediction in fixed local conditions.

  14. A general unified non-equilibrium model for predicting saturated and subcooled critical two-phase flow rates through short and long tubes

    International Nuclear Information System (INIS)

    Fraser, D.W.H.; Abdelmessih, A.H.

    1995-01-01

    A general unified model is developed to predict one-component critical two-phase pipe flow. Modelling of the two-phase flow is accomplished by describing the evolution of the flow between the location of flashing inception and the exit (critical) plane. The model approximates the non-equilibrium phase change process via thermodynamic equilibrium paths. Included are the relative effects of varying the location of flashing inception, pipe geometry, fluid properties and length-to-diameter ratio. The model predicts that a range of critical mass fluxes exists, bounded by a maximum and a minimum value for a given thermodynamic state. This range is more pronounced at lower subcooled stagnation states and can be attributed to the variation in the location of flashing inception. The model is based on the results of an experimental study of the critical two-phase flow of saturated and subcooled water through long tubes. In that study, the location of flashing inception was accurately controlled and adjusted through the use of a new device. The data obtained revealed that for fixed stagnation conditions, the maximum critical mass flux occurred with flashing inception located near the pipe exit, while minimum critical mass fluxes occurred with the flashing front located further upstream. Available data since 1970 for both short and long tubes over a wide range of conditions are compared with the model predictions. This includes test section L/D ratios from 25 to 300 and covers a temperature range of 110 to 280 degrees C and a pressure range of 0.16 to 6.9 MPa. The predicted maximum and minimum critical mass fluxes show excellent agreement with the range observed in the experimental data.

  15. A general unified non-equilibrium model for predicting saturated and subcooled critical two-phase flow rates through short and long tubes

    Energy Technology Data Exchange (ETDEWEB)

    Fraser, D.W.H. [Univ. of British Columbia (Canada); Abdelmessih, A.H. [Univ. of Toronto, Ontario (Canada)

    1995-09-01

    A general unified model is developed to predict one-component critical two-phase pipe flow. Modelling of the two-phase flow is accomplished by describing the evolution of the flow between the location of flashing inception and the exit (critical) plane. The model approximates the non-equilibrium phase change process via thermodynamic equilibrium paths. Included are the relative effects of varying the location of flashing inception, pipe geometry, fluid properties and length-to-diameter ratio. The model predicts that a range of critical mass fluxes exists, bounded by a maximum and a minimum value for a given thermodynamic state. This range is more pronounced at lower subcooled stagnation states and can be attributed to the variation in the location of flashing inception. The model is based on the results of an experimental study of the critical two-phase flow of saturated and subcooled water through long tubes. In that study, the location of flashing inception was accurately controlled and adjusted through the use of a new device. The data obtained revealed that for fixed stagnation conditions, the maximum critical mass flux occurred with flashing inception located near the pipe exit, while minimum critical mass fluxes occurred with the flashing front located further upstream. Available data since 1970 for both short and long tubes over a wide range of conditions are compared with the model predictions. This includes test section L/D ratios from 25 to 300 and covers a temperature range of 110 to 280 degrees C and a pressure range of 0.16 to 6.9 MPa. The predicted maximum and minimum critical mass fluxes show excellent agreement with the range observed in the experimental data.

  16. Prediction of critical flow rates through power-operated relief valves

    International Nuclear Information System (INIS)

    Abdollahian, D.; Singh, A.

    1983-01-01

    Existing single-phase and two-phase critical flow models are used to predict the flow rates through the power-operated relief valves tested in the EPRI Safety and Relief Valve test program. For liquid upstream conditions, the Homogeneous Equilibrium Model and the Moody, Henry-Fauske and Burnell two-phase critical flow models are used for comparison with the data. Under steam upstream conditions, the flow rates are predicted either by the single-phase isentropic equations or by the Homogeneous Equilibrium Model, depending on the thermodynamic condition of the fluid at the choking plane. The results of the comparisons are used to specify discharge coefficients for different valves under steam and liquid upstream conditions and to evaluate the existing approximate critical flow relations for a wide range of subcooled water and steam conditions.

  17. Predicting critical transitions in dynamical systems from time series using nonstationary probability density modeling.

    Science.gov (United States)

    Kwasniok, Frank

    2013-11-01

    A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. A full systematic account of parameter uncertainty is taken. The technique is generic, independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to prediction of Arctic sea-ice extent.
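
A toy version of the general idea above (our construction, not Kwasniok's actual estimator): assume the observable is Gaussian with a linearly drifting mean, fit that nonstationary density by maximum likelihood (equivalent to least squares here), then extrapolate it forward and evaluate the probability of crossing a critical level.

```python
import math
import numpy as np

# Synthetic series whose mean drifts slowly downward toward a critical level
rng = np.random.default_rng(0)
t = np.arange(200.0)
x = 1.0 - 0.004 * t + rng.normal(0.0, 0.1, t.size)

# ML fit of the nonstationary density: mean(t) = a + b*t, constant spread
b, a = np.polyfit(t, x, 1)
sigma = np.std(x - (a + b * t))  # ML (biased) estimate of residual spread

def prob_below(level, t_future):
    # P(x < level) under the extrapolated Gaussian density at time t_future
    mu = a + b * t_future
    return 0.5 * (1.0 + math.erf((level - mu) / (sigma * math.sqrt(2.0))))
```

Extrapolating to a future time where the drifting mean approaches the critical level turns the fitted density into an early-warning probability; the paper's method generalizes this with richer parametric densities and full parameter-uncertainty propagation.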

  18. Prediction model to predict critical weight loss in patients with head and neck cancer during (chemo)radiotherapy.

    Science.gov (United States)

    Langius, Jacqueline A E; Twisk, Jos; Kampman, Martine; Doornaert, Patricia; Kramer, Mark H H; Weijs, Peter J M; Leemans, C René

    2016-01-01

    Patients with head and neck cancer (HNC) frequently encounter weight loss with multiple negative outcomes as a consequence. Adequate treatment is best achieved by early identification of patients at risk for critical weight loss. The objective of this study was to detect predictive factors for critical weight loss in patients with HNC receiving (chemo)radiotherapy ((C)RT). In this cohort study, 910 patients with HNC were included receiving RT (±surgery/concurrent chemotherapy) with curative intent. Body weight was measured at the start and end of (C)RT. Logistic regression and classification and regression tree (CART) analyses were used to analyse predictive factors for critical weight loss (defined as >5%) during (C)RT. Possible predictors included gender, age, WHO performance status, tumour location, TNM classification, treatment modality, RT technique (three-dimensional conformal RT (3D-RT) vs intensity-modulated RT (IMRT)), total dose on the primary tumour and RT on the elective or macroscopic lymph nodes. At the end of (C)RT, mean weight loss was 5.1±4.9%. Fifty percent of patients had critical weight loss during (C)RT. The main predictors for critical weight loss during (C)RT by both logistic and CART analyses were RT on the lymph nodes, higher RT dose on the primary tumour, receiving 3D-RT instead of IMRT, and younger age. Critical weight loss during (C)RT was prevalent in half of HNC patients. To predict critical weight loss, a practical prediction tree for adequate nutritional advice was developed, including the risk factors RT to the neck, higher RT dose, 3D-RT, and younger age. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Critical velocity and anaerobic paddling capacity determined by different mathematical models and number of predictive trials in canoe slalom.

    Science.gov (United States)

    Messias, Leonardo H D; Ferrari, Homero G; Reis, Ivan G M; Scariot, Pedro P M; Manchado-Gobatto, Fúlvia B

    2015-03-01

    The purpose of this study was to analyze whether different combinations of trials, as well as different mathematical models, can modify the aerobic and anaerobic estimates from the critical velocity protocol applied in canoe slalom. Fourteen male elite slalom kayakers from the Brazilian canoe slalom team (K1) were evaluated. Athletes were submitted to four predictive trials of 150, 300, 450 and 600 m in a lake, and the time to complete each trial was recorded. Critical velocity (CV; the aerobic parameter) and anaerobic paddling capacity (APC; the anaerobic parameter) were obtained from three mathematical models (Linear 1 = distance-time; Linear 2 = velocity-1/time; Non-linear = time-velocity). Linear 1 was chosen for the comparison of predictive trial combinations. The standard combination (SC) was considered as the four trials (150, 300, 450 and 600 m). High regression fits were obtained for all mathematical models (range: R² = 0.96-1.00). Repeated-measures ANOVA pointed out differences between the mathematical models for CV (p = 0.006) and APC (p = 0.016), as well as for R² (p = 0.033). Estimates obtained from the first (1) and fourth (4) predictive trials (150 m = lowest; 600 m = highest, respectively) were similar to and highly correlated (r = 0.98 for CV and r = 0.96 for APC) with the SC. In summary, methodological aspects must be considered in the application of critical velocity in canoe slalom, since different combinations of trials as well as different mathematical models resulted in different aerobic and anaerobic estimates. Key points: Great attention must be given to methodological concerns regarding the critical velocity protocol applied to canoe slalom, since different estimates were obtained depending on the mathematical model and the predictive trials used. Linear 1 showed the best regression fits; furthermore, to the best of our knowledge and considering practical applications, this model is the easiest one for calculating the estimates from the critical velocity protocol.
Considering this, the abyss between science
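
The three mathematical models named above have simple forms: Linear 1 fits distance against time (d = CV·t + APC), Linear 2 fits velocity against 1/time (v = CV + APC/t), and the non-linear form is the hyperbola t = APC/(v − CV). A sketch of the two linear fits with invented trial times (not the athletes' data):

```python
import numpy as np

# Hypothetical times (s) for the 150, 300, 450 and 600 m predictive trials;
# values are invented for illustration only.
d = np.array([150.0, 300.0, 450.0, 600.0])
t = np.array([44.0, 95.0, 148.0, 202.0])

# Linear 1: distance-time model, d = CV * t + APC
cv1, apc1 = np.polyfit(t, d, 1)

# Linear 2: velocity vs 1/time, v = APC * (1/t) + CV
v = d / t
apc2, cv2 = np.polyfit(1.0 / t, v, 1)
```

With real data the two fits generally disagree slightly, which is exactly the model-dependence of the CV and APC estimates that the study quantifies.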

  20. Advances in criticality predictions for EBR-II

    International Nuclear Information System (INIS)

    Schaefer, R.W.; Imel, G.R.

    1994-01-01

    Improvements to startup criticality predictions for the EBR-II reactor have been made. More exact calculational models, methods and data are now used, and better procedures for obtaining experimental data that enter into the prediction are in place. Accuracy improved by more than a factor of two and the largest ECP error observed since the changes is only 18 cents. An experimental method using subcritical counts is also being implemented

  1. External Validation and Recalibration of Risk Prediction Models for Acute Traumatic Brain Injury among Critically Ill Adult Patients in the United Kingdom.

    Science.gov (United States)

    Harrison, David A; Griggs, Kathryn A; Prabhu, Gita; Gomes, Manuel; Lecky, Fiona E; Hutchinson, Peter J A; Menon, David K; Rowan, Kathryn M

    2015-10-01

    This study validates risk prediction models for acute traumatic brain injury (TBI) in critical care units in the United Kingdom and recalibrates the models to this population. The Risk Adjustment In Neurocritical care (RAIN) Study was a prospective, observational cohort study in 67 adult critical care units. Adult patients admitted to critical care following acute TBI with a last pre-sedation Glasgow Coma Scale score of less than 15 were recruited. The primary outcomes were mortality and unfavorable outcome (death or severe disability, assessed using the Extended Glasgow Outcome Scale) at six months following TBI. Of 3626 critical care unit admissions, 2975 were analyzed. Following imputation of missing outcomes, mortality at six months was 25.7% and unfavorable outcome 57.4%. Ten risk prediction models were validated from Hukkelhoven and colleagues, the Medical Research Council (MRC) Corticosteroid Randomisation After Significant Head Injury (CRASH) Trial Collaborators, and the International Mission for Prognosis and Analysis of Clinical Trials in TBI (IMPACT) group. The model with the best discrimination was the IMPACT "Lab" model (C index, 0.779 for mortality and 0.713 for unfavorable outcome). This model was well calibrated for mortality at six months but substantially under-predicted the risk of unfavorable outcome. Recalibration of the models resulted in small improvements in discrimination and excellent calibration for all models. The risk prediction models demonstrated sufficient statistical performance to support their use in research and audit but fell below the level required to guide individual patient decision-making. The published models for unfavorable outcome at six months had poor calibration in the UK critical care setting and the models recalibrated to this setting should be used in future research.

  2. Critical power prediction by CATHARE2 of the OECD/NRC BFBT benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Lutsanych, Sergii, E-mail: s.lutsanych@ing.unipi.it [San Piero a Grado Nuclear Research Group (GRNSPG), University of Pisa, Via Livornese 1291, 56122, San Piero a Grado, Pisa (Italy); Sabotinov, Luben, E-mail: luben.sabotinov@irsn.fr [Institut for Radiological Protection and Nuclear Safety (IRSN), 31 avenue de la Division Leclerc, 92262 Fontenay-aux-Roses (France); D’Auria, Francesco, E-mail: francesco.dauria@dimnp.unipi.it [San Piero a Grado Nuclear Research Group (GRNSPG), University of Pisa, Via Livornese 1291, 56122, San Piero a Grado, Pisa (Italy)

    2015-03-15

    Highlights: • We used the CATHARE code to calculate the critical power exercises of the OECD/NRC BFBT benchmark. • We considered both the steady-state and transient critical power tests of the benchmark. • We used both the 1D and 3D features of the CATHARE code to simulate the experiments. • Acceptable prediction of the critical power and its location in the bundle is obtained using appropriate modelling. - Abstract: This paper presents an application of the French best-estimate thermal-hydraulic code CATHARE 2 to the critical power and departure from nucleate boiling (DNB) exercises of the international OECD/NRC BWR Fuel Bundle Test (BFBT) benchmark. The assessment compares the code calculation results with the experimental data from the Japanese Nuclear Power Engineering Corporation (NUPEC) made available in the framework of the benchmark. Two-phase flow calculations predicting the critical power have been carried out for both steady-state and transient cases, using one-dimensional and three-dimensional modelling. The steady-state critical power calculations have shown the ability of the CATHARE code to reasonably predict the critical power and its location, given appropriate modelling.

  3. Comparison of Critical Flow Models' Evaluations for SBLOCA Tests

    International Nuclear Information System (INIS)

    Kim, Yeon Sik; Park, Hyun Sik; Cho, Seok

    2016-01-01

    A comparison of the Trapp-Ransom and Henry-Fauske critical flow models for all SBLOCA (small break loss of coolant accident) scenarios of the ATLAS (Advanced thermal-hydraulic test loop for accident simulation) facility was performed using the MARS-KS code. The accumulated break mass was selected as the main parameter for comparing the analyses with the tests. Four cases showed the same respective discharge coefficients for the two critical flow models, e.g., the 6-inch CL (cold leg) break and the 25%, 50%, and 100% DVI (direct vessel injection) breaks. In the case of the 4-inch CL break, no reasonable results were obtained with any possible Cd values. In addition, typical system behaviors, e.g., the PZR (pressurizer) pressure and the collapsed core water level, were also compared between the two models. For the CL breaks, the Trapp-Ransom model agreed notably better with the data than the other model for the smallest and larger breaks, e.g., the 2-, 6-, and 8.5-inch CL breaks. Likewise, for the DVI breaks, the Trapp-Ransom model agreed better for the smallest and larger breaks, e.g., the 5%, 50%, and 100% DVI breaks. In the case of the 50% and 100% breaks, both critical flow models predicted the test data quite well.

  4. A dry-spot model for the prediction of critical heat flux in water boiling in bubbly flow regime

    International Nuclear Information System (INIS)

    Ha, Sang Jun; No, Hee Cheon

    1997-01-01

    This paper presents a prediction of the critical heat flux (CHF) in the bubbly flow regime using the dry-spot model recently proposed by the authors for pool and flow boiling CHF, together with existing correlations for the forced convective heat transfer coefficient, active site density, and bubble departure diameter in the nucleate boiling region. Without any of the empirical constants always present in earlier models, the model predictions are compared with experimental data for upward flow of water in vertical, uniformly heated round tubes and show good agreement. The parametric trends of CHF have been explored with respect to variations in pressure, tube diameter and length, mass flux, and inlet subcooling.

  5. [Establishment of comprehensive prediction model of acute gastrointestinal injury classification of critically ill patients].

    Science.gov (United States)

    Wang, Yan; Wang, Jianrong; Liu, Weiwei; Zhang, Guangliang

    2018-03-25

    To develop a comprehensive prediction model of acute gastrointestinal injury (AGI) grades for critically ill patients. From April 2015 to November 2015, the binary-channel gastrointestinal sounds (GIS) monitor system, which had been developed and verified by the research group, was used to gather and analyze the GIS of 60 consecutive critically ill patients admitted to the Critical Care Medicine department of the Chinese PLA General Hospital. The AGI grades (grades I to IV; the higher the grade, the heavier the gastrointestinal dysfunction) were also evaluated. Meanwhile, the clinical data and physiological and biochemical indexes of the included patients were collected and recorded daily, including the illness severity score (APACHE II score, consisting of the acute physiology score, age grade, and chronic health evaluation), the sequential organ failure assessment (SOFA score, covering respiration, coagulation, liver, cardiovascular, central nervous system, and kidney), and the Glasgow coma scale (GCS); body mass index, blood lactate and glucose; and treatment details (including mechanical ventilation, sedatives, vasoactive drugs, enteral nutrition, etc.). Principal component analysis was then performed on the significantly correlated GIS indexes (five indexes of gastrointestinal sounds were found to be negatively correlated with AGI grades: the number, percentage of time, mean power, maximum power, and maximum time of the GIS wave from the channel located at the stomach) and the clinical factors after standardization. The top 5 post-normalization principal components were selected for back-propagation (BP) neural network training, to establish a comprehensive AGI grade model for critically ill patients based on the neural network model. The 60 patients were aged 19 to 98 (mean 54.6) years and included 42 males (70.0%). There were 22 cases of multiple fractures, 15 cases of severe infection, 7 cases of cervical vertebral fracture, 7 cases of aortic repair, 5 cases of post-toxicosis and 4 cases of cerebral
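    The preprocessing pipeline described in this record, standardization followed by principal component analysis with the top-5 components feeding a BP network, can be sketched as follows; the feature matrix and its column meanings are invented stand-ins, not the study's data:

```python
import numpy as np

# Hypothetical feature matrix: rows = patients, columns = the five GIS indexes
# (count, % time, mean power, max power, max time) plus clinical variables
# (APACHE II, SOFA, GCS, BMI, lactate, glucose). All values are random here.
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 11))

# 1) Standardize each feature to zero mean, unit variance
Xs = (X - X.mean(axis=0)) / X.std(axis=0)

# 2) Principal component analysis via SVD of the standardized matrix
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
explained = S**2 / np.sum(S**2)   # variance fraction per component, descending

# 3) Keep the top-5 component scores as inputs to the BP neural network
scores = Xs @ Vt[:5].T
print(scores.shape)   # (60, 5)
```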

  6. A dry-spot model for the prediction of critical heat flux in water boiling in bubbly flow regime

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Sang Jun; No, Hee Cheon [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1998-12-31

    This paper presents a prediction of the critical heat flux (CHF) in the bubbly flow regime using the dry-spot model recently proposed by the authors for pool and flow boiling CHF, together with existing correlations for the forced convective heat transfer coefficient, active site density, and bubble departure diameter in the nucleate boiling region. Without any of the empirical constants always present in earlier models, the model predictions are compared with experimental data for upward flow of water in vertical, uniformly heated round tubes and show good agreement. The parametric trends of CHF have been explored with respect to variations in pressure, tube diameter and length, mass flux, and inlet subcooling. 16 refs., 6 figs., 1 tab. (Author)

  7. A dry-spot model for the prediction of critical heat flux in water boiling in bubbly flow regime

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Sang Jun; No, Hee Cheon [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1997-12-31

    This paper presents a prediction of the critical heat flux (CHF) in the bubbly flow regime using the dry-spot model recently proposed by the authors for pool and flow boiling CHF, together with existing correlations for the forced convective heat transfer coefficient, active site density, and bubble departure diameter in the nucleate boiling region. Without any of the empirical constants always present in earlier models, the model predictions are compared with experimental data for upward flow of water in vertical, uniformly heated round tubes and show good agreement. The parametric trends of CHF have been explored with respect to variations in pressure, tube diameter and length, mass flux, and inlet subcooling. 16 refs., 6 figs., 1 tab. (Author)

  8. Prediction, Regression and Critical Realism

    DEFF Research Database (Denmark)

    Næss, Petter

    2004-01-01

    This paper considers the possibility of prediction in land use planning, and the use of statistical research methods in analyses of relationships between urban form and travel behaviour. Influential writers within the tradition of critical realism reject the possibility of predicting social phenomena. This position is fundamentally problematic to public planning. Without at least some ability to predict the likely consequences of different proposals, the justification for public sector intervention into market mechanisms will be frail. Statistical methods like regression analyses are commonly seen as necessary in order to identify aggregate-level effects of policy measures, but are questioned by many advocates of critical realist ontology. Using research into the relationship between urban structure and travel as an example, the paper discusses relevant research methods and the kinds...

  9. Modeling the prediction of business intelligence system effectiveness.

    Science.gov (United States)

    Weng, Sung-Shun; Yang, Ming-Hsien; Koo, Tian-Lih; Hsiao, Pei-I

    2016-01-01

    Although business intelligence (BI) technologies are continually evolving, the capability to apply BI technologies has become an indispensable resource for enterprises running in today's complex, uncertain and dynamic business environment. This study performed pioneering work by constructing models and rules for the prediction of business intelligence system effectiveness (BISE) in relation to the implementation of BI solutions. For enterprises, effectively managing the critical attributes that determine BISE, and developing prediction models with a set of rules for self-evaluation of the effectiveness of BI solutions, is necessary to improve BI implementation and ensure its success. The main findings identified the critical prediction indicators of BISE that are important for forecasting BI performance. The study highlights five classification and prediction rules of BISE derived from decision-tree structures, as well as a refined regression prediction model with four critical prediction indicators constructed by logistic regression analysis, enabling enterprises to improve BISE while effectively managing BI solution implementation, and offering theoretical insights for academics.
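    The two kinds of models this record describes, decision-tree rules and a logistic regression over four indicators, can be illustrated with a toy sketch; the indicator names, thresholds, and coefficients below are assumptions for illustration, not the paper's fitted values:

```python
import numpy as np

# Hypothetical tree-derived rule set for BISE self-evaluation.
# Indicators are on a 0-1 scale; thresholds are made up for the sketch.
def tree_rule(info_quality, user_access, mgmt_support, data_integration):
    if info_quality < 0.4:
        return "low effectiveness"
    if user_access >= 0.6 and mgmt_support >= 0.5:
        return "high effectiveness"
    return "moderate effectiveness"

# Refined logistic-regression score over the same four indicators.
def logistic_score(x, w, b):
    return 1 / (1 + np.exp(-(np.dot(w, x) + b)))

w = np.array([1.8, 1.2, 0.9, 0.7])   # assumed coefficients
b = -2.0                             # assumed intercept
x = np.array([0.8, 0.7, 0.6, 0.5])   # one enterprise's indicator values
print(tree_rule(*x), round(float(logistic_score(x, w, b)), 3))
```

    The rule set gives a quick categorical self-assessment, while the logistic model returns a continuous effectiveness probability, mirroring the paper's pairing of classification rules with a regression model.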

  10. Prediction of critical heat flux in vertical pipe flow

    International Nuclear Information System (INIS)

    Levy, S.; Healzer, J.M.; Abdollahian, D.

    1981-01-01

    A previously developed semi-empirical model for adiabatic two-phase annular flow is extended to predict the critical heat flux (CHF) in a vertical pipe. The model exhibits a sharply declining curve of CHF versus steam quality (X) at low X, and is relatively independent of the heat flux distribution; in this region, vaporization of the liquid film controls. At high X, net deposition upon the liquid film becomes important and the CHF-versus-X curve flattens considerably; in this zone, CHF is dependent upon the heat flux distribution. Model predictions are compared to test data and an empirical correlation. The agreement is generally good if one employs previously reported mass transfer coefficients. (orig.)
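    The film-dryout mechanism this record describes (wall heat flux vaporizing the liquid film, partly offset by droplet deposition) can be illustrated with a crude one-dimensional film mass balance; the geometry, fluxes, and coefficients below are illustrative stand-ins, not the paper's model:

```python
import numpy as np

# Crude annular-flow film balance along a uniformly heated pipe:
#   dWf/dz = P*(D - E) - P*q/h_fg
# i.e. deposition minus entrainment minus evaporation, per unit length.
# CHF corresponds to the film flow rate Wf reaching zero before the exit.
d = 0.01          # pipe diameter, m          (assumed)
L = 2.0           # heated length, m          (assumed)
h_fg = 1.5e6      # latent heat, J/kg         (assumed)
q = 1.0e6         # wall heat flux, W/m^2     (assumed)
D = 0.05          # deposition mass flux, kg/(m^2 s)   (assumed)
E = 0.02          # entrainment mass flux, kg/(m^2 s)  (assumed)
P = np.pi * d     # wetted perimeter, m

Wf = 0.02         # inlet film flow rate, kg/s (assumed)
dz = 1e-4
z = 0.0
while z < L and Wf > 0.0:
    Wf += (P * (D - E) - P * q / h_fg) * dz
    z += dz

print("film dries out at z =" if Wf <= 0 else "film survives, z =", round(z, 3), "m")
```

    With these numbers the film is depleted about halfway along the pipe; raising the deposition flux or lowering the heat flux pushes the dryout point downstream, the qualitative behavior behind the low-X versus high-X regimes in the abstract.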

  11. Continuous Automated Model EvaluatiOn (CAMEO) complementing the critical assessment of structure prediction in CASP12.

    Science.gov (United States)

    Haas, Jürgen; Barbato, Alessandro; Behringer, Dario; Studer, Gabriel; Roth, Steven; Bertoni, Martino; Mostaguir, Khaled; Gumienny, Rafal; Schwede, Torsten

    2018-03-01

    Every second year, the community experiment "Critical Assessment of Techniques for Structure Prediction" (CASP) conducts an independent blind assessment of structure prediction methods, providing a framework for comparing the performance of different approaches and discussing the latest developments in the field. Yet developers of automated computational modeling methods clearly benefit from more frequent evaluations based on larger sets of data. The "Continuous Automated Model EvaluatiOn (CAMEO)" platform complements the CASP experiment by conducting fully automated blind prediction assessments based on the weekly pre-release of sequences of those structures that are about to be published in the next release of the Protein Data Bank (PDB). CAMEO publishes weekly benchmarking results based on models collected during a 4-day prediction window, on average assessing ca. 100 targets during a time frame of 5 weeks. CAMEO benchmarking data are generated consistently for all participating methods at the same point in time, enabling developers to benchmark and cross-validate their method's performance and to refer directly to the benchmarking results in publications. In order to facilitate server development and promote shorter release cycles, CAMEO sends a weekly email with submission statistics and low-performance warnings. Many participants of CASP have successfully employed CAMEO when preparing their methods for upcoming community experiments. CAMEO offers a variety of scores to allow benchmarking diverse aspects of structure prediction methods. By introducing new scoring schemes, CAMEO facilitates new development in areas of active research, for example, modeling quaternary structure, complexes, or ligand binding sites. © 2017 Wiley Periodicals, Inc.

  12. Sensitivity of predictions in an effective model: Application to the chiral critical end point position in the Nambu-Jona-Lasinio model

    International Nuclear Information System (INIS)

    Biguet, Alexandre; Hansen, Hubert; Brugiere, Timothee; Costa, Pedro; Borgnat, Pierre

    2015-01-01

    The measurement of the position of the chiral critical end point (CEP) in the QCD phase diagram is under debate. While it is possible to predict its position using effective models specifically built to reproduce some of the features of the underlying theory (QCD), the quality of the predictions (e.g., the CEP position) obtained by such effective models depends on whether solving the model equations constitutes a well- or ill-posed inverse problem. Considering these predictions as inverse problems provides tools to evaluate whether the problem is ill-conditioned, meaning that infinitesimal variations of the inputs of the model can cause comparatively large variations of the predictions. If it is ill-conditioned, the consequences are major, because finite variations could come from experimental and/or theoretical errors. In the following, we apply such reasoning to the predictions of a particular Nambu-Jona-Lasinio model within the mean field + ring approximations, with special attention to the prediction of the chiral CEP position in the (T-μ) plane. We find that the problem is ill-conditioned (i.e., very sensitive to input variations) for the T-coordinate of the CEP, whereas it is well-posed for the μ-coordinate of the CEP. As a consequence, when the chiral condensate varies in a 10 MeV range, μ_CEP varies far less. As an illustration of how problematic this could be, we show that the main consequence of taking finite variations of the inputs into account is that the existence of the CEP itself can no longer be predicted: for a deviation as low as 0.6% with respect to vacuum phenomenology (well within the estimation of the first correction to the ring approximation), the CEP may or may not exist. (orig.)
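    The notion of ill-conditioning used here, small relative input changes producing large relative output changes, can be illustrated numerically with a finite-difference sensitivity matrix on a toy two-input, two-output function. The function below is a deliberately lopsided stand-in, not the NJL model: steep in one output direction, flat in the other, mimicking the T- versus μ-coordinate finding:

```python
import numpy as np

def prediction(x):
    """Toy model: x = (input1, input2) -> (T-like coordinate, mu-like coordinate)."""
    return np.array([np.tan(2.5 * x[0]) + 0.1 * x[1],
                     0.3 * x[0] + 0.4 * x[1]])

def rel_sensitivity(f, x, eps=1e-6):
    """Relative change of each output per relative change of each input: |J_ij x_j / f_i|."""
    x = np.asarray(x, float)
    J = np.empty((2, 2))
    for j in range(2):
        dx = np.zeros(2)
        dx[j] = eps
        J[:, j] = (f(x + dx) - f(x - dx)) / (2 * eps)   # central difference
    fx = f(x)
    return np.abs(J) * np.abs(x) / np.abs(fx)[:, None]

S = rel_sensitivity(prediction, [0.6, 0.5])
print(np.round(S, 2))
```

    Here a 1% change of the first input moves the first output by well over 10%, while the second output barely responds: the first coordinate is ill-conditioned in exactly the sense the abstract describes.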

  13. Sensitivity of predictions in an effective model: Application to the chiral critical end point position in the Nambu-Jona-Lasinio model

    Energy Technology Data Exchange (ETDEWEB)

    Biguet, Alexandre; Hansen, Hubert; Brugiere, Timothee [Universite Claude Bernard de Lyon, Institut de Physique Nucleaire de Lyon, CNRS/IN2P3, Villeurbanne Cedex (France); Costa, Pedro [Universidade de Coimbra, Centro de Fisica Computacional, Departamento de Fisica, Coimbra (Portugal); Borgnat, Pierre [CNRS, l' Ecole normale superieure de Lyon, Laboratoire de Physique, Lyon Cedex 07 (France)

    2015-09-15

    The measurement of the position of the chiral critical end point (CEP) in the QCD phase diagram is under debate. While it is possible to predict its position using effective models specifically built to reproduce some of the features of the underlying theory (QCD), the quality of the predictions (e.g., the CEP position) obtained by such effective models depends on whether solving the model equations constitutes a well- or ill-posed inverse problem. Considering these predictions as inverse problems provides tools to evaluate whether the problem is ill-conditioned, meaning that infinitesimal variations of the inputs of the model can cause comparatively large variations of the predictions. If it is ill-conditioned, the consequences are major, because finite variations could come from experimental and/or theoretical errors. In the following, we apply such reasoning to the predictions of a particular Nambu-Jona-Lasinio model within the mean field + ring approximations, with special attention to the prediction of the chiral CEP position in the (T-μ) plane. We find that the problem is ill-conditioned (i.e., very sensitive to input variations) for the T-coordinate of the CEP, whereas it is well-posed for the μ-coordinate of the CEP. As a consequence, when the chiral condensate varies in a 10 MeV range, μ_CEP varies far less. As an illustration of how problematic this could be, we show that the main consequence of taking finite variations of the inputs into account is that the existence of the CEP itself can no longer be predicted: for a deviation as low as 0.6% with respect to vacuum phenomenology (well within the estimation of the first correction to the ring approximation), the CEP may or may not exist. (orig.)

  14. Development and validation of a prediction model for insulin-associated hypoglycemia in non-critically ill hospitalized adults.

    Science.gov (United States)

    Mathioudakis, Nestoras Nicolas; Everett, Estelle; Routh, Shuvodra; Pronovost, Peter J; Yeh, Hsin-Chieh; Golden, Sherita Hill; Saria, Suchi

    2018-01-01

    To develop and validate a multivariable prediction model for insulin-associated hypoglycemia in non-critically ill hospitalized adults. We collected pharmacologic, demographic, laboratory, and diagnostic data from 128 657 inpatient days on which at least 1 unit of subcutaneous insulin was administered in the absence of intravenous insulin, total parenteral nutrition, or insulin pump use (index days). These data were used to develop multivariable prediction models for biochemical and clinically significant hypoglycemia (defined by a blood glucose (BG) of ≤70 mg/dL and a lower cutoff, respectively) and were split for model development and validation. Using the predictors of age, weight, admitting service, insulin doses, mean BG, nadir BG, BG coefficient of variation (CV_BG), diet status, type 1 diabetes, type 2 diabetes, acute kidney injury, chronic kidney disease (CKD), liver disease, and digestive disease, our model achieved a c-statistic of 0.77 (95% CI 0.75 to 0.78), a positive likelihood ratio (+LR) of 3.5 (95% CI 3.4 to 3.6), and a negative likelihood ratio (-LR) of 0.32 (95% CI 0.30 to 0.35) for prediction of biochemical hypoglycemia. Using the predictors of sex, weight, insulin doses, mean BG, nadir BG, CV_BG, diet status, type 1 diabetes, type 2 diabetes, CKD stage, and steroid use, our model achieved a c-statistic of 0.80 (95% CI 0.78 to 0.82), a +LR of 3.8 (95% CI 3.7 to 4.0), and a -LR of 0.2 (95% CI 0.2 to 0.3) for prediction of clinically significant hypoglycemia. Hospitalized patients at risk of insulin-associated hypoglycemia can be identified using validated prediction models, which may support the development of real-time preventive interventions.
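    The metrics reported in this record, the c-statistic and the positive/negative likelihood ratios, can be computed directly from predicted probabilities and outcomes. A self-contained sketch on a tiny invented dataset:

```python
import numpy as np

def c_statistic(y, p):
    """Probability a random positive scores above a random negative (ties count 0.5)."""
    pos, neg = p[y == 1], p[y == 0]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

def likelihood_ratios(y, p, threshold=0.5):
    """(+LR, -LR) at a probability threshold: sens/(1-spec) and (1-sens)/spec."""
    pred = p >= threshold
    sens = (pred & (y == 1)).sum() / (y == 1).sum()
    spec = (~pred & (y == 0)).sum() / (y == 0).sum()
    return sens / (1 - spec), (1 - sens) / spec

# Invented outcomes and predicted probabilities, for illustration only
y = np.array([0, 0, 0, 0, 1, 1, 1, 0, 1, 0])
p = np.array([0.1, 0.2, 0.3, 0.35, 0.4, 0.7, 0.8, 0.6, 0.9, 0.2])
print(round(c_statistic(y, p), 3), likelihood_ratios(y, p))
```

    On real data the threshold would be chosen on the development set; the c-statistic, being threshold-free, summarizes discrimination across all cutoffs.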

  15. Critical Features of Fragment Libraries for Protein Structure Prediction.

    Science.gov (United States)

    Trevizani, Raphael; Custódio, Fábio Lima; Dos Santos, Karina Baptista; Dardenne, Laurent Emmanuel

    2017-01-01

    The use of fragment libraries is a popular approach among protein structure prediction methods and has proven to substantially improve the quality of predicted structures. However, some vital aspects of a fragment library that influence the accuracy of modeling a native structure remain to be determined. This study investigates some of these features. In particular, we analyze the effect of using secondary structure prediction to guide fragment selection, of different fragment sizes, and of structural clustering of fragments within libraries. To have a clearer view of how these factors affect protein structure prediction, we isolated the process of model building by fragment assembly from some common limitations associated with prediction methods, e.g., imprecise energy functions and optimization algorithms, by employing an exact structure-based objective function under a greedy algorithm. Our results indicate that shorter fragments reproduce the native structure more accurately than longer ones. Libraries composed of multiple fragment lengths generate even better structures, with longer fragments proving more useful at the beginning of the simulations. Using many different fragment sizes shows little improvement over predictions carried out with libraries comprising only three different fragment sizes. Models obtained from libraries built using only sequence similarity are, on average, better than those built with a secondary structure prediction bias. However, we found that the use of secondary structure prediction allows a greater reduction of the search space, which is invaluable for prediction methods. The results of this study can serve as critical guidelines for the use of fragment libraries in protein structure prediction.

  16. Predictive modelling of survival and length of stay in critically ill patients using sequential organ failure scores.

    Science.gov (United States)

    Houthooft, Rein; Ruyssinck, Joeri; van der Herten, Joachim; Stijven, Sean; Couckuyt, Ivo; Gadeyne, Bram; Ongenae, Femke; Colpaert, Kirsten; Decruyenaere, Johan; Dhaene, Tom; De Turck, Filip

    2015-03-01

    The length of stay of critically ill patients in the intensive care unit (ICU) is an indication of patient ICU resource usage and varies considerably. Planning of postoperative ICU admissions is important, as ICUs often have no unoccupied beds available. Estimation of ICU bed availability for the coming days is entirely based on clinical judgement by intensivists and is therefore too inaccurate. For this reason, predictive models have much potential for improving the planning of ICU patient admission. Our goal is to develop and optimize models for patient survival and ICU length of stay (LOS) based on monitored ICU patient data, and to compare models that use sequential organ failure assessment (SOFA) scores with models that use the underlying raw data as input features. Different machine learning techniques are trained on a 14,480-patient dataset, using both SOFA scores and their underlying raw data values from the first five days after admission, in order to predict (i) the patient LOS and (ii) patient mortality. Furthermore, to help physicians assess the prediction credibility, a probabilistic model is tailored to the output of our best-performing model, assigning a belief to each patient status prediction. A two-by-two grid is built, using the classification outputs of the mortality and prolonged-stay predictors, to improve the patient LOS regression models. For predicting patient mortality and a prolonged stay, the best performing model is a support vector machine (SVM) with G_{A,D}=65.9% (area under the curve (AUC) of 0.77) and G_{S,L}=73.2% (AUC of 0.82). In terms of LOS regression, the best performing model is support vector regression, achieving a mean absolute error of 1.79 days and a median absolute error of 1.22 days for those patients surviving a non-prolonged stay. Using a classification grid based on the predicted patient mortality and prolonged stay allows more accurate modeling of the patient LOS.
The detailed models allow to support
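    The two-by-two classification grid described in this record can be sketched as a simple routing step that picks one of four specialised LOS regressors from the mortality and prolonged-stay classifier outputs. The probabilities, threshold, and regressor labels below are illustrative stubs; in the study these would be SVM classifiers feeding SVR regressors:

```python
# Route a patient to one of four hypothetical LOS regressors based on the
# outputs of a mortality classifier and a prolonged-stay classifier.
def los_grid(p_mortality, p_prolonged, threshold=0.5):
    survives = p_mortality < threshold
    prolonged = p_prolonged >= threshold
    if survives and not prolonged:
        return "regressor: survivor, short stay"
    if survives and prolonged:
        return "regressor: survivor, prolonged stay"
    if not survives and not prolonged:
        return "regressor: non-survivor, short stay"
    return "regressor: non-survivor, prolonged stay"

print(los_grid(0.2, 0.8))   # -> regressor: survivor, prolonged stay
```

    Training a separate regressor per grid cell lets each one specialise on a more homogeneous LOS distribution, which is how the grid improves the overall LOS model.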

  17. System Predicts Critical Runway Performance Parameters

    Science.gov (United States)

    Millen, Ernest W.; Person, Lee H., Jr.

    1990-01-01

    Runway-navigation-monitor (RNM) and critical-distances-processor electronic equipment is designed to provide the pilot with timely and reliable predictive navigation information for takeoff, landing, and runway-turnoff operations. It enables the pilot to make critical decisions about runway maneuvers with high confidence during emergencies. The system utilizes ground-referenced position data only, driving a purely navigational monitor that is independent of the status of other systems in the aircraft.
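    A back-of-the-envelope flavor of such a "critical distance" prediction can be given with constant-deceleration kinematics; the numbers and the constant-deceleration assumption are for illustration only, not the RNM algorithm:

```python
# Can the aircraft slow from its current ground speed to a safe turnoff speed
# before the chosen runway exit? Constant deceleration: d = (v0^2 - v1^2)/(2a).
def distance_to_slow(v_now, v_target, decel):
    """Distance (m) needed to brake from v_now to v_target at constant decel (m/s^2)."""
    return (v_now**2 - v_target**2) / (2.0 * decel)

v_now = 70.0        # m/s ground speed            (assumed)
v_turnoff = 15.0    # m/s safe turnoff speed      (assumed)
decel = 2.5         # m/s^2 braking capability    (assumed)
exit_at = 1200.0    # m of runway remaining to the chosen exit (assumed)

need = distance_to_slow(v_now, v_turnoff, decel)
print(round(need, 1), "m needed;", "exit feasible" if need <= exit_at else "exit not feasible")
```

    A monitor of this kind would recompute such distances continuously from ground-referenced position data and flag the pilot when a planned exit becomes infeasible.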

  18. A New Energy-Critical Plane Damage Parameter for Multiaxial Fatigue Life Prediction of Turbine Blades

    Directory of Open Access Journals (Sweden)

    Zheng-Yong Yu

    2017-05-01

    As one of the fracture-critical components of an aircraft engine, a turbine blade-to-disk attachment requires accurate life prediction to ensure engine structural integrity and reliability. Fatigue failure of a turbine blade is often caused by multiaxial cyclic loadings at high temperatures. In this paper, considering different failure types, a new energy-critical plane damage parameter is proposed for multiaxial fatigue life prediction, and no extra fitted material constants are needed for practical applications. Moreover, three multiaxial models with maximum damage parameters on the critical plane are evaluated under tension-compression and tension-torsion loadings. Experimental data for GH4169 under proportional and non-proportional fatigue loadings and a case study of a turbine disk-blade contact system are introduced for model validation. Results show that predictions by the Wang-Brown (WB) and Fatemi-Socie (FS) models with maximum damage parameters are conservative and acceptable. For the turbine disk-blade contact system, both the proposed damage parameter and the Smith-Watson-Topper (SWT) model show reasonably acceptable correlations with its field number of flight cycles. However, life estimations of the turbine blade reveal that the definition of the maximum damage parameter is not reasonable for the WB model but is effective for both the FS and SWT models.
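    For reference, two of the benchmark critical-plane parameters named in this record take the following standard textbook forms (the paper's new energy-based parameter is not reproduced here); here σ_{n,max} is the maximum normal stress on the critical plane, Δε₁ the maximum principal strain range, and Δγ_max the maximum shear strain range:

```latex
% Smith-Watson-Topper (SWT) damage parameter:
\mathrm{SWT} = \sigma_{n,\max}\,\frac{\Delta\varepsilon_{1}}{2}
% Fatemi-Socie (FS) damage parameter, with material constant k and yield stress \sigma_y:
\mathrm{FS} = \frac{\Delta\gamma_{\max}}{2}\left(1 + k\,\frac{\sigma_{n,\max}}{\sigma_{y}}\right)
```

    Both parameters are evaluated on the plane that maximizes them, which is the "maximum damage parameter" definition the abstract examines.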

  19. Criticality Model

    International Nuclear Information System (INIS)

    Alsaed, A.

    2004-01-01

    The ''Disposal Criticality Analysis Methodology Topical Report'' (YMP 2003) presents the methodology for evaluating potential criticality situations in the monitored geologic repository. As stated in the referenced Topical Report, the detailed methodology for performing the disposal criticality analyses will be documented in model reports. Many of the models developed in support of the Topical Report differ from the definition of models given in the Office of Civilian Radioactive Waste Management procedure AP-SIII.10Q, ''Models'', in that they are procedural rather than mathematical. These model reports document the detailed methodology necessary to implement the approach presented in the Disposal Criticality Analysis Methodology Topical Report and provide calculations utilizing the methodology. Thus, the governing procedure for this type of report is AP-3.12Q, ''Design Calculations and Analyses''. The ''Criticality Model'' is of this latter type, providing a process for evaluating the criticality potential of in-package and external configurations. The purpose of this analysis is to lay out the process for calculating the criticality potential for various in-package and external configurations, to calculate lower-bound tolerance limit (LBTL) values, and to determine range of applicability (ROA) parameters. The LBTL calculations and the ROA determinations are performed using selected benchmark experiments that are applicable to various waste forms and various in-package and external configurations. The waste forms considered in this calculation are pressurized water reactor (PWR), boiling water reactor (BWR), Fast Flux Test Facility (FFTF), Training Research Isotope General Atomic (TRIGA), Enrico Fermi, Shippingport pressurized water reactor, Shippingport light water breeder reactor (LWBR), N-Reactor, Melt and Dilute, and Fort Saint Vrain Reactor spent nuclear fuel (SNF). The scope of this analysis is to document the criticality computational method.
The criticality

  20. Safety prediction for basic components of safety-critical software based on static testing

    International Nuclear Information System (INIS)

    Son, H.S.; Seong, P.H.

    2000-01-01

    The purpose of this work is to develop a safety prediction method with which we can predict the risk of software components based on static testing results at the early development stage. The predictive model combines the major factor with the quality factor for the components, both calculated from the measures proposed in this work. An application to a safety-critical software system demonstrates the feasibility of the safety prediction method. (authors)

  1. A study on the development of advanced models to predict the critical heat flux for water and liquid metals

    International Nuclear Information System (INIS)

    Lee, Yong Bum

    1994-02-01

    The critical heat flux (CHF) phenomenon in two-phase convective flows has been an important issue in the design and safety analysis of light water reactors (LWRs) as well as sodium-cooled liquid metal fast breeder reactors (LMFBRs). Especially in the LWR application, many physical aspects of the CHF phenomenon are understood, and reliable correlations and mechanistic models to predict the CHF condition have been proposed. However, there are few correlations and models applicable to liquid metals. Compared with water, liquid metals show a divergent picture of the boiling pattern; therefore, CHF conditions obtained from investigations with water cannot be applied to liquid metals. In this work, a mechanistic model to predict the CHF of water and a correlation for liquid metals are developed. First, a mechanistic model to predict the CHF in flow boiling at low quality was developed based on the liquid sublayer dryout mechanism. In this approach, the CHF is assumed to occur when a vapor blanket isolates the liquid sublayer from the bulk liquid and the liquid entering the sublayer falls short of balancing the rate of sublayer dryout by vaporization. The vapor blanket velocity is therefore the key parameter; in this work it is determined theoretically from mass, energy, and momentum balances, yielding the mechanistic model for CHF in flow boiling at low quality. The accuracy of the present model is evaluated by comparing its predictions with experimental data and the tabular data of look-up tables; the predictions agree well with extensive CHF data. In the latter part, a correlation to predict the CHF for liquid metals is developed based on the flow excursion mechanism. Using the Baroczy two-phase frictional pressure drop correlation and the Ledinegg instability criterion, the relationship between the CHF of liquid metals and the principal parameters is derived and finally the
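    The Ledinegg (flow-excursion) instability criterion invoked at the end of this record has the standard form: the flow is unstable when the internal (demand) pressure-drop characteristic falls with mass flux at least as fast as the external (supply) characteristic, i.e.

```latex
% Ledinegg flow-excursion criterion (standard form):
\frac{\partial (\Delta p)_{\mathrm{demand}}}{\partial G}
\;\le\;
\frac{\partial (\Delta p)_{\mathrm{supply}}}{\partial G}
```

    Combining this criterion with a two-phase frictional pressure-drop correlation (Baroczy, as this record describes) yields the mass flux at which the excursion, and hence the CHF for liquid metals, occurs.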

  2. The effect of virtual mass on the prediction of critical flow

    International Nuclear Information System (INIS)

    Cheng, L.; Lahey, R.T.; Drew, D.A.

    1983-01-01

    By observing the results in Fig. 4 and Fig. 5 we can see that virtual mass effects are important in predicting critical flow. However, as seen in Fig. 7a, in which all three flows are predicted to be critical (Δ=0), it is difficult to distinguish one set of conditions from the other by considering the pressure profile alone. Clearly more detailed data, such as the throat void fraction, are needed to discriminate between these calculations. Moreover, since the calculated critical flows have been found to be sensitive to initial mass flux and void fraction, careful measurements of those parameters are needed before accurate virtual mass parameters can be determined from these data. It can be concluded that the existing Moby Dick data are inadequate to allow one to deduce accurate values of the virtual mass parameters C_VM and λ. Nevertheless, more careful experiments of this type are uniquely suited to the determination of these important parameters. It appears that the use of a nine-equation model, such as that discussed herein, coupled with more detailed and accurate critical flow data is an effective means of determining the parameters in interfacial momentum transfer models, such as virtual mass effects, which are only important during strong spatial accelerations. Indeed, there are few other methods available which can be used for such determinations.

  3. Predictions of the marviken subcooled critical mass flux using the critical flow scaling parameters

    Energy Technology Data Exchange (ETDEWEB)

    Park, Choon Kyung; Chun, Se Young; Cho, Seok; Yang, Sun Ku; Chung, Moon Ki [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    A total of 386 critical flow data points from 19 of the 27 runs in the Marviken test were selected and compared with the predictions of correlations based on the critical flow scaling parameters. The results show that the critical mass flux in a very large diameter pipe can also be characterized by two scaling parameters: the discharge coefficient and the dimensionless subcooling (C_d,ref and ΔT*_sub). The agreement between the measured data and the predictions is excellent. 8 refs., 8 figs., 1 tab. (Author)

  5. Safety prediction for basic components of safety critical software based on static testing

    International Nuclear Information System (INIS)

    Son, H.S.; Seong, P.H.

    2001-01-01

    The purpose of this work is to develop a safety prediction method, with which we can predict the risk of software components based on static testing results at the early development stage. The predictive model combines the major factor with the quality factor for the components, both of which are calculated based on the measures proposed in this work. The application to a safety-critical software system demonstrates the feasibility of the safety prediction method. (authors)

  6. Prediction of critical heat flux by a new local condition hypothesis

    International Nuclear Information System (INIS)

    Im, J. H.; Jun, K. D.; Sim, J. W.; Deng, Zhijian

    1998-01-01

    Critical heat flux (CHF) was predicted for uniformly heated vertical round tubes by a new local condition hypothesis that incorporates a local true steam quality. This model successfully overcame the difficulties of predicting subcooled and quality-region CHF with the thermodynamic equilibrium quality. The local true steam quality is a function of the thermodynamic equilibrium quality at the exit and the quality at the onset of significant vaporization (OSV). The exit thermodynamic equilibrium quality was obtained from a heat balance, and the quality at OSV was obtained from the Saha-Zuber correlation. In the past, CHF has been predicted by experimental correlations based on local or non-local condition hypotheses. This preliminary study showed that all the available world data on uniform CHF could be predicted by the model based on the local condition hypothesis.
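
    A minimal sketch of the quality chain described above, assuming the widely used form of the Saha-Zuber OSV correlation and a Levy-type profile fit relating the true quality to the equilibrium quality; the paper's exact profile relation may differ.

```python
import math

def saha_zuber_x_osv(q, g, d, cp, k_l, h_fg):
    """Equilibrium quality (negative) at the onset of significant
    vaporization, per the Saha-Zuber correlation.

    q [W/m^2] heat flux, g [kg/m^2 s] mass flux, d [m] diameter,
    cp [J/kg K] liquid specific heat, k_l [W/m K] liquid conductivity,
    h_fg [J/kg] latent heat.
    """
    peclet = g * d * cp / k_l
    if peclet < 70000.0:                 # thermally controlled region
        dt_osv = 0.0022 * q * d / k_l
    else:                                # hydrodynamically controlled (St = 0.0065)
        dt_osv = 153.8 * q / (g * cp)
    return -cp * dt_osv / h_fg

def true_quality(x_eq, x_d):
    """Levy-type profile fit: true (flow) quality as a function of the
    thermodynamic equilibrium quality x_eq and the quality x_d at OSV."""
    return x_eq - x_d * math.exp(x_eq / x_d - 1.0)

# Illustrative (assumed) conditions: the true quality is zero at OSV and
# already positive at x_eq = 0, unlike the equilibrium quality.
x_d = saha_zuber_x_osv(q=1.0e6, g=2000.0, d=0.01, cp=5000.0,
                       k_l=0.55, h_fg=1.5e6)
print(f"x_OSV = {x_d:.4f}, true quality at x_eq = 0: {true_quality(0.0, x_d):.4f}")
```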

  7. Critical assessment of methods of protein structure prediction (CASP)-round IX

    KAUST Repository

    Moult, John; Fidelis, Krzysztof; Kryshtafovych, Andriy; Tramontano, Anna

    2011-01-01

    This article is an introduction to the special issue of the journal PROTEINS, dedicated to the ninth Critical Assessment of Structure Prediction (CASP) experiment to assess the state of the art in protein structure modeling. The article describes the conduct of the experiment, the categories of prediction included, and outlines the evaluation and assessment procedures. Methods for modeling protein structure continue to advance, although at a more modest pace than in the early CASP experiments. CASP developments of note are indications of improvement in model accuracy for some classes of target, an improved ability to choose the most accurate of a set of generated models, and evidence of improvement in accuracy for short "new fold" models. In addition, a new analysis of regions of models not derivable from the most obvious template structure has revealed better performance than expected.

  8. Predicting critical heat flux in slug flow regime of uniformly heated ...

    African Journals Online (AJOL)

    Numerical computation code (PWR-DNBP) has been developed to predict Critical Heat Flux (CHF) of forced convective flow of water in a vertical heated channel. The code was based on the liquid sub-layer model, with the assumption that CHF occurred when the liquid film thickness between the heated surface and vapour ...

  9. Competition-induced criticality in a model of meme popularity.

    Science.gov (United States)

    Gleeson, James P; Ward, Jonathan A; O'Sullivan, Kevin P; Lee, William T

    2014-01-31

    Heavy-tailed distributions of meme popularity occur naturally in a model of meme diffusion on social networks. Competition between multiple memes for the limited resource of user attention is identified as the mechanism that poises the system at criticality. The popularity growth of each meme is described by a critical branching process, and asymptotic analysis predicts power-law distributions of popularity with very heavy tails (exponent α<2, unlike preferential-attachment models), similar to those seen in empirical data.
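
    The critical branching process claim can be checked numerically: a Galton-Watson process with mean offspring exactly 1 produces total-progeny sizes with a power-law tail, P(size = s) ~ s^(-3/2). A minimal stdlib simulation, not taken from the paper:

```python
import math
import random

def poisson(lam, rng):
    """Knuth's Poisson sampler (adequate for small lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def branching_total_size(mean_offspring, rng, cap=10_000):
    """Total progeny of a Galton-Watson process with Poisson offspring,
    truncated at `cap` individuals; mean_offspring = 1 is the critical case."""
    alive, total = 1, 1
    while alive and total < cap:
        alive = sum(poisson(mean_offspring, rng) for _ in range(alive))
        total += alive
    return total

rng = random.Random(42)
sizes = [branching_total_size(1.0, rng) for _ in range(2000)]
# Heavy tail: most cascades die almost immediately, a few grow very large.
print("median:", sorted(sizes)[len(sizes) // 2], " max:", max(sizes))
```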

  11. Criticism and Counter-Criticism of Public Management: Strategy Models

    OpenAIRE

    Luis C. Ortigueira

    2007-01-01

    Critical control is very important in scientific management. This paper presents models of critical and counter-critical public-management strategies, focusing on the types of criticism and counter-criticism manifested in parliamentary political debates. The paper includes: (i) a normative model showing how rational criticism can be carried out; (ii) a normative model for oral critical intervention; and (iii) a general motivational strategy model for criticisms and counter-criticisms. The pap...

  12. Predictive validation of an influenza spread model.

    Directory of Open Access Journals (Sweden)

    Ayaz Hyder

    BACKGROUND: Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. METHODS AND FINDINGS: We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998-1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks in advance with reasonable reliability, depending on the method of forecasting (static or dynamic). CONCLUSIONS: Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve

  14. A Dynamic Hydrology-Critical Zone Framework for Rainfall-triggered Landslide Hazard Prediction

    Science.gov (United States)

    Dialynas, Y. G.; Foufoula-Georgiou, E.; Dietrich, W. E.; Bras, R. L.

    2017-12-01

    Watershed-scale coupled hydrologic-stability models are still in their early stages, and are characterized by important limitations: (a) either they assume steady-state or quasi-dynamic watershed hydrology, or (b) they simulate landslide occurrence based on a simple one-dimensional stability criterion. Here we develop a three-dimensional landslide prediction framework, based on a coupled hydrologic-slope stability model and incorporation of the influence of deep critical zone processes (i.e., flow through weathered bedrock and exfiltration to the colluvium) for more accurate prediction of the timing, location, and extent of landslides. Specifically, a watershed-scale slope stability model that systematically accounts for the contribution of driving and resisting forces in three-dimensional hillslope segments was coupled with a spatially-explicit and physically-based hydrologic model. The landslide prediction framework considers critical zone processes and structure, and explicitly accounts for the spatial heterogeneity of surface and subsurface properties that control slope stability, including soil and weathered bedrock hydrological and mechanical characteristics, vegetation, and slope morphology. To test performance, the model was applied in landslide-prone sites in the US, the hydrology of which has been extensively studied. Results showed that both rainfall infiltration in the soil and groundwater exfiltration exert a strong control on the timing and magnitude of landslide occurrence. We demonstrate the extent to which three-dimensional slope destabilizing factors, which are modulated by dynamic hydrologic conditions in the soil-bedrock column, control landslide initiation at the watershed scale.
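
    For contrast with the three-dimensional analysis above, the simple one-dimensional criterion that such frameworks improve upon is the infinite-slope factor of safety, in which pore-water pressure reduces the effective normal stress on the slip surface. A sketch with illustrative (assumed) soil parameters:

```python
import math

def infinite_slope_fs(c_eff, phi_deg, gamma, z, beta_deg, u):
    """One-dimensional infinite-slope factor of safety (FS < 1 = failure).

    c_eff    : effective cohesion [Pa]
    phi_deg  : effective friction angle [deg]
    gamma    : soil unit weight [N/m^3]
    z        : depth of the potential slip surface [m]
    beta_deg : slope angle [deg]
    u        : pore-water pressure on the slip surface [Pa]
    """
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    driving = gamma * z * math.sin(beta) * math.cos(beta)   # shear stress
    normal_eff = gamma * z * math.cos(beta) ** 2 - u        # effective normal stress
    return (c_eff + normal_eff * math.tan(phi)) / driving

# Assumed parameters: the same slope is stable when dry (FS > 1) but fails
# (FS < 1) once infiltration raises the pore pressure on the slip surface.
print("dry:", infinite_slope_fs(2000.0, 30.0, 18000.0, 2.0, 30.0, 0.0))
print("wet:", infinite_slope_fs(2000.0, 30.0, 18000.0, 2.0, 30.0, 9810.0))
```

    The example shows why dynamic hydrology matters: the transition across FS = 1 is driven entirely by the transient pore-pressure term.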

  15. Comprehensive and critical review of the predictive properties of the various mass models

    International Nuclear Information System (INIS)

    Haustein, P.E.

    1984-01-01

    Since the publication of the 1975 Mass Predictions, approximately 300 new atomic masses have been reported. These data come from a variety of experimental studies using diverse techniques, and they span a mass range from the lightest isotopes to the very heaviest. It is instructive to compare these data with the 1975 predictions and with several others (Moeller and Nix; Monahan; Serduke; Uno and Yamada) which appeared later. Extensive numerical and graphical analyses have been performed to examine the quality of the mass predictions from the various models and to identify features in these models that require correction. In general, there is only a rough correlation between the ability of a particular model to reproduce the measured mass surface that had been used to refine its adjustable parameters and that model's ability to predict correctly the new masses. For some models, distinct systematic features appear when the new mass data are plotted as functions of relevant physical variables. Global intercomparisons of all the models are made first, followed by several examples of the types of analysis performed with individual mass models.

  16. Transport critical current density in flux creep model

    International Nuclear Information System (INIS)

    Wang, J.; Taylor, K.N.R.; Russell, G.J.; Yue, Y.

    1992-01-01

    The magnetic flux creep model has been used to derive the temperature dependence of the critical current density in high temperature superconductors. The generally positive curvature of the J_c-T diagram is predicted in terms of two interdependent dimensionless fitting parameters. In this paper, the results are compared with both SIS and SNS junction models of these granular materials, neither of which provides a satisfactory prediction of the experimental data. A hybrid model combining the flux creep and SNS mechanisms is shown to be able to account for the linear regions of the J_c-T behavior which are observed in some materials.

  17. External validation of the Intensive Care National Audit & Research Centre (ICNARC) risk prediction model in critical care units in Scotland.

    Science.gov (United States)

    Harrison, David A; Lone, Nazir I; Haddow, Catriona; MacGillivray, Moranne; Khan, Angela; Cook, Brian; Rowan, Kathryn M

    2014-01-01

    Risk prediction models are used in critical care for risk stratification, summarising and communicating risk, supporting clinical decision-making and benchmarking performance. However, they require validation before they can be used with confidence, ideally using independently collected data from a different source to that used to develop the model. The aim of this study was to validate the Intensive Care National Audit & Research Centre (ICNARC) model using independently collected data from critical care units in Scotland. Data were extracted from the Scottish Intensive Care Society Audit Group (SICSAG) database for the years 2007 to 2009. Recoding and mapping of variables was performed, as required, to apply the ICNARC model (2009 recalibration) to the SICSAG data using standard computer algorithms. The performance of the ICNARC model was assessed for discrimination, calibration and overall fit and compared with that of the Acute Physiology And Chronic Health Evaluation (APACHE) II model. There were 29,626 admissions to 24 adult, general critical care units in Scotland between 1 January 2007 and 31 December 2009. After exclusions, 23,269 admissions were included in the analysis. The ICNARC model outperformed APACHE II on measures of discrimination (c index 0.848 versus 0.806), calibration (Hosmer-Lemeshow chi-squared statistic 18.8 versus 214) and overall fit (Brier's score 0.140 versus 0.157; Shapiro's R 0.652 versus 0.621). Model performance was consistent across the three years studied. The ICNARC model performed well when validated in an external population to that in which it was developed, using independently collected data.
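
    The validation metrics quoted above are straightforward to compute. A minimal sketch of the c index (discrimination) and Brier score (overall fit) on toy data; the ICNARC study of course computed these on the full SICSAG admissions data:

```python
def c_index(y_true, y_prob):
    """Discrimination: probability that a randomly chosen case is assigned
    a higher predicted risk than a randomly chosen non-case (ties = 0.5)."""
    pairs = concordant = 0.0
    for yi, pi in zip(y_true, y_prob):
        for yj, pj in zip(y_true, y_prob):
            if yi == 1 and yj == 0:
                pairs += 1.0
                concordant += 1.0 if pi > pj else (0.5 if pi == pj else 0.0)
    return concordant / pairs

def brier_score(y_true, y_prob):
    """Overall fit: mean squared difference between outcome and prediction
    (lower is better)."""
    return sum((y - p) ** 2 for y, p in zip(y_true, y_prob)) / len(y_true)

# Toy data: model_a ranks every case above every non-case (c index = 1.0);
# model_b discriminates less well and is less well calibrated.
y = [0, 0, 0, 1, 1]
model_a = [0.1, 0.2, 0.3, 0.8, 0.9]
model_b = [0.3, 0.4, 0.5, 0.4, 0.6]
print(c_index(y, model_a), brier_score(y, model_a))
print(c_index(y, model_b), brier_score(y, model_b))
```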

  18. A state-of-the-art report on two-phase critical flow modelling

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Jae Joon; Jang, Won Pyo; Kim, Dong Soo [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1993-09-01

    This report reviews and analyses two-phase critical flow models. The purposes of the report are (1) to build a knowledge base for the full understanding and best estimate of two-phase critical flow, and (2) to analyse the model development trend and derive the direction of further studies. A wide range of critical flow models are reviewed. Each model, in general, predicts critical flow well only within specified conditions. The critical flow models of best-estimate codes are special process models included in the hydrodynamic model. The results of calculations depend on the nodalization, discharge coefficient, and other user options. The following topics are recommended for continuing studies: improvement of the two-fluid model, development of multidimensional models, database setup and model error evaluation, and generalization of discharge coefficients. 24 figs., 5 tabs., 80 refs. (Author).

  20. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy-based principles are commonly used in many fields of acoustics, especially building acoustics. These include simple energy flow models, the framework of statistical energy analysis (SEA), as well as more elaborate principles such as wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy-based prediction models are discussed and critically reviewed. Special attention is placed on underlying basic assumptions, such as diffuse fields, high modal overlap, and dominance of the resonant field, and on the consequences of these in terms of limitations in the theory and in the practical use of the models.
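
    The energy-flow idea underlying SEA can be illustrated with the classic two-subsystem power balance: injected power is balanced by internal dissipation (loss factors η1, η2) and by coupling between subsystems (η12, η21), giving a linear system for the subsystem energies. The loss-factor values below are assumed for illustration only:

```python
import math

def sea_two_subsystems(omega, p1, p2, eta1, eta2, eta12, eta21):
    """Solve the steady-state SEA power balance for two coupled subsystems:

        p1 = omega * ((eta1 + eta12) * e1 - eta21 * e2)
        p2 = omega * ((eta2 + eta21) * e2 - eta12 * e1)

    eta1, eta2 are internal loss factors; eta12, eta21 are coupling loss
    factors. Returns the subsystem energies (e1, e2).
    """
    a11, a12 = omega * (eta1 + eta12), -omega * eta21
    a21, a22 = -omega * eta12, omega * (eta2 + eta21)
    det = a11 * a22 - a12 * a21
    e1 = (p1 * a22 - p2 * a12) / det
    e2 = (a11 * p2 - a21 * p1) / det
    return e1, e2

# Assumed values: only subsystem 1 is driven, so energy "flows downhill"
# through the coupling and e1 > e2.
e1, e2 = sea_two_subsystems(2.0 * math.pi * 1000.0, 1.0, 0.0,
                            0.01, 0.01, 0.001, 0.001)
print(f"e1 = {e1:.4e}, e2 = {e2:.4e}, e2/e1 = {e2 / e1:.4f}")
```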

  1. Critical manifold of the kagome-lattice Potts model

    International Nuclear Information System (INIS)

    Jacobsen, Jesper Lykke; Scullard, Christian R

    2012-01-01

    Any two-dimensional infinite regular lattice G can be produced by tiling the plane with a finite subgraph B⊆G; we call B a basis of G. We introduce a two-parameter graph polynomial P_B(q, v) that depends on B and its embedding in G. The algebraic curve P_B(q, v) = 0 is shown to provide an approximation to the critical manifold of the q-state Potts model, with coupling v = e^K − 1, defined on G. This curve predicts the phase diagram not only in the physical ferromagnetic regime (v > 0), but also in the antiferromagnetic regime (v < 0). We conjecture that P_B(q, v) = 0 provides the exact critical manifold in the limit of infinite B. Furthermore, for some lattices G, or for the Ising model (q = 2) on any G, the polynomial P_B(q, v) factorizes for any choice of B: the zero set of the recurrent factor then provides the exact critical manifold. In this sense, the computation of P_B(q, v) can be used to detect exact solvability of the Potts model on G. We illustrate the method for two choices of G: the square lattice, where the Potts model has been exactly solved, and the kagome lattice, where it has not. For the square lattice we correctly reproduce the known phase diagram, including the antiferromagnetic transition and the singularities in the Berker–Kadanoff phase at certain Beraha numbers. For the kagome lattice, taking the smallest basis with six edges we recover a well-known (but now refuted) conjecture of F Y Wu. Larger bases provide successive improvements on this formula, giving a natural extension of Wu's approach. We perform large-scale numerical computations for comparison and find excellent agreement with the polynomial predictions. For v > 0 the accuracy of the predicted critical coupling v_c is of the order 10^−4 or 10^−5 for the six-edge basis, and improves to 10^−6 or 10^−7 for the largest basis studied (with 36 edges). This article is part of 'Lattice models and integrability', a special issue of Journal of Physics A: Mathematical and Theoretical in honour of
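
    For the square-lattice case mentioned above, the exact ferromagnetic critical manifold follows from self-duality of the temperature variable v = e^K − 1: the critical curve is v² = q. This is the benchmark that the polynomial method reproduces. A one-line check (q = 2 recovers the Ising point K_c = ln(1 + √2)):

```python
import math

def potts_square_critical(q):
    """Ferromagnetic critical point of the q-state Potts model on the
    square lattice, from self-duality of v = e^K - 1: the critical
    curve is v^2 = q, hence v_c = sqrt(q) and K_c = ln(1 + sqrt(q))."""
    v_c = math.sqrt(q)
    k_c = math.log(1.0 + v_c)
    return v_c, k_c

v_c, k_c = potts_square_critical(2.0)   # q = 2 is the Ising case
print(f"v_c = {v_c:.6f}, K_c = {k_c:.6f}")  # K_c = ln(1 + sqrt(2))
```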

  2. Critically Tapered Wedges and Critical State Soil Mechanics: Porosity-based Pressure Prediction in the Nankai Accretionary Prism.

    Science.gov (United States)

    Flemings, P. B.; Saffer, D. M.

    2016-12-01

    We predict pore pressure from porosity measurements at ODP Sites 1174 and 808 in the Nankai accretionary prism, offshore Japan. For a range of friction angles (5-30 degrees), we estimate that the pore pressure ratio (λ*) ranges from 0.5 to 0.8: the pore pressure supports 50% to 80% of the overburden. Higher friction angles result in higher pressures. For the majority of the scenarios, pressures within the prism parallel the lithostat and are greater than the pressures beneath it. Our results support previous qualitative interpretations at Nankai and elsewhere suggesting that lower porosity above the décollement than below reflects higher mean effective stress there. By coupling a critical state soil model (Modified Cam Clay), which describes porosity as a function of mean and deviator stress, with a stress model that considers the difference in stress states above and below the décollement, we quantitatively show that the prism porosities record significant overpressure despite their lower porosity. As the soil is consumed by the advancing prism, changes in both mean and shear stress drive overpressure generation. Even in the extreme case where only the change in mean stress is considered (a vertical end cap model), significant overpressures are generated. The high pressures we predict require an effective friction coefficient (μ_b′) at the décollement of 0.023-0.038. Assuming that the pore pressure at the décollement lies between the values we report for the wedge and the underthrusting sediments, these effective friction coefficients correspond to intrinsic friction coefficients of μ_b = 0.08-0.38 (φ = 4.6-21°). These values are comparable to friction coefficients of 0.1-0.4 reported for clay-dominated fault zones in a wide range of settings. By coupling the critical wedge model with an appropriate constitutive model, we present a systematic approach to predicting pressure in thrust systems.
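
    The pore pressure ratio λ* used above normalizes overpressure by the overburden in excess of hydrostatic: λ* = 0 means hydrostatic pore pressure and λ* = 1 means pore pressure equal to the lithostatic stress. A small sketch with illustrative (assumed) stresses:

```python
def lambda_star(p_pore, sigma_v, p_hydro):
    """Pore pressure ratio: lambda* = (P - P_hydro) / (sigma_v - P_hydro).

    p_pore  : pore pressure [Pa]
    sigma_v : total overburden (lithostatic) stress [Pa]
    p_hydro : hydrostatic pressure at the same depth [Pa]
    """
    return (p_pore - p_hydro) / (sigma_v - p_hydro)

# Assumed stresses: 20 MPa overburden, 10 MPa hydrostatic, 17 MPa pore
# pressure -> lambda* = 0.7, within the 0.5-0.8 range reported above.
print(lambda_star(17e6, 20e6, 10e6))
```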

  3. Prediction calculation of HTR-10 fuel loading for the first criticality

    International Nuclear Information System (INIS)

    Jing Xingqing; Yang Yongwei; Gu Yuxiang; Shan Wenzhi

    2001-01-01

    The 10 MW high temperature gas cooled reactor (HTR-10) was built at the Institute of Nuclear Energy Technology, Tsinghua University, and first criticality was attained in December 2000. The high temperature gas cooled reactor physics simulation code VSOP was used to predict the fuel loading for HTR-10 first criticality. The numbers of fuel and graphite elements were predicted to provide a reference for the first criticality experiment. The prediction calculations took into account the double heterogeneity of the fuel element, buckling feedback in the spectrum calculation, the effect of mixing graphite and fuel elements, and the correction of the diffusion coefficients near the upper cavity based on transport theory. The effects of impurities in the fuel and graphite elements in the core, and in the reflector graphite, on the reactivity of the reactor were considered in detail. The first criticality experiment showed that the predicted values and the experimental results were in good agreement, with a relative error of less than 1%, which means the prediction was successful.

  4. Critical-state model for the determination of critical currents in disk-shaped superconductors

    International Nuclear Information System (INIS)

    Frankel, D.J.

    1979-01-01

    A series of experiments has been carried out on the flux trapping and shielding capabilities of a flat strip of Nb-Ti/Cu composite material. A circular piece of material from the strip was tested in a uniform field directed perpendicularly to the surface of the sample. Profiles of the normal component of the field along the sample diameter were measured. The critical-state model was adapted for this geometry and proved capable of reproducing the measured field profiles. Model curves agreed well with experimental field profiles generated when the full sample was in the critical state, when only a portion of the sample was in the critical state, and when profiles were obtained after the direction of the rate of change of the magnetic field was reversed. The adaptation of the critical-state model to disk geometry provides a possible method either to derive values of the critical current from measurements of field profiles above thin flat samples, or to predict the trapping and shielding behavior of such samples if the critical current is already known. This method of determining critical currents does not require that samples be formed into narrow strips or wires, as is required for direct measurements of J_c, or into tubes or cylinders, as is usually required for magnetization-type measurements. Only a relatively small, approximately circular piece of material is needed. The method relies on induced currents, so there is no need to pass large currents into the sample. The field-profile measurements are easily performed with inexpensive Hall probes and do not require detection of the resistive transition of the superconductor.
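
    As an illustration of the critical-state idea behind this method: in the simpler slab geometry the Bean model fixes the internal field gradient at |dB/dx| = μ0·J_c, so J_c can be read off the slope of a measured profile. The disk adaptation in the paper is more involved, but the slope still sets the scale of J_c. A sketch with a hypothetical measured profile:

```python
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability [H/m]

def jc_from_profile(b_vals, x_vals):
    """Estimate J_c [A/m^2] from a measured flux-density profile using the
    slab-geometry Bean relation |dB/dx| = mu0 * J_c (average of the
    segment slopes).

    b_vals : flux density samples [T]
    x_vals : positions of the samples [m]
    """
    slopes = [abs((b_vals[i + 1] - b_vals[i]) / (x_vals[i + 1] - x_vals[i]))
              for i in range(len(b_vals) - 1)]
    return sum(slopes) / len(slopes) / MU0

# Hypothetical profile: B falls by 0.1 T over 1 mm -> |dB/dx| = 100 T/m.
jc = jc_from_profile([0.50, 0.45, 0.40], [0.0, 0.0005, 0.001])
print(f"J_c ~ {jc:.3e} A/m^2")
```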

  5. Capacity Prediction Model Based on Limited Priority Gap-Acceptance Theory at Multilane Roundabouts

    Directory of Open Access Journals (Sweden)

    Zhaowei Qu

    2014-01-01

    Capacity is an important design parameter for roundabouts, and it is a prerequisite for computing their delay and queue. Roundabout capacity has been studied for decades, and empirical regression models and gap-acceptance models are the two main methods used to predict it. Based on gap-acceptance theory, and considering the effect of limited priority, in particular the relationship between the limited-priority factor and the critical gap, a modified model was built to predict roundabout capacity. We then compared the results of Raff's method and the maximum likelihood estimation (MLE) method, and the MLE method was used to estimate the critical gaps. Finally, the capacities predicted by the different models were compared with capacities observed in field surveys, which verifies the performance of the proposed model.
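
    The classic gap-acceptance capacity formula that such models build on (before any limited-priority modification) assumes negative-exponential major-stream headways, a critical gap t_c, and a follow-up time t_f. A sketch with illustrative values:

```python
import math

def gap_acceptance_capacity(q_circ, t_c, t_f):
    """Entry capacity [veh/s] of a minor stream crossing a circulating flow
    q_circ [veh/s], assuming negative-exponential major-stream headways:

        C = q_circ * exp(-q_circ * t_c) / (1 - exp(-q_circ * t_f))

    t_c : critical gap [s], t_f : follow-up time [s].
    """
    return q_circ * math.exp(-q_circ * t_c) / (1.0 - math.exp(-q_circ * t_f))

# Illustrative values: 600 veh/h circulating, t_c = 4.0 s, t_f = 2.5 s.
cap = gap_acceptance_capacity(600.0 / 3600.0, 4.0, 2.5)
print(f"entry capacity ~ {cap * 3600.0:.0f} veh/h")
```

    The paper's limited-priority modification scales an estimate of this kind and feeds in MLE-estimated critical gaps, which is why the Raff and MLE estimation methods are compared.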

  6. The U(1)-Higgs model: critical behaviour in the confining-Higgs region

    International Nuclear Information System (INIS)

    Alonso, J.L.; Azcoiti, V.; Campos, I.; Ciria, J.C.; Cruz, A.; Iniguez, D.; Lesmes, F.; Piedrafita, C.; Rivero, A.; Tarancon, A.; Badoni, D.; Fernandez, L.A.; Munoz Sudupe, A.; Ruiz-Lorenzo, J.J.; Gonzalez-Arroyo, A.; Martinez, P.; Pech, J.; Tellez, P.

    1993-01-01

    We study numerically the critical properties of the U(1)-Higgs lattice model, with fixed Higgs modulus, in the region of small gauge coupling where the Higgs and confining phases merge. We find evidence for a first-order transition line that ends in a second-order point. By means of a rotation in parameter space we introduce thermodynamic quantities and critical exponents in close resemblance to simple models that show analogous critical behaviour. The measured data allow us to fit the critical exponents, finding values in agreement with the mean-field prediction. The location of the critical point and the slope of the first-order line are accurately measured. (orig.)

  7. Critical assessment of methods of protein structure prediction (CASP) - round x

    KAUST Repository

    Moult, John; Fidelis, Krzysztof; Kryshtafovych, Andriy; Schwede, Torsten; Tramontano, Anna

    2013-01-01

    This article is an introduction to the special issue of the journal PROTEINS, dedicated to the tenth Critical Assessment of Structure Prediction (CASP) experiment to assess the state of the art in protein structure modeling. The article describes the conduct of the experiment, the categories of prediction included, and outlines the evaluation and assessment procedures. The 10 CASP experiments span almost 20 years of progress in the field of protein structure modeling, and there have been enormous advances in methods and model accuracy in that period. Notable in this round is the first sustained improvement of models with refinement methods, using molecular dynamics. For the first time, we tested the ability of modeling methods to make use of sparse experimental three-dimensional contact information, such as may be obtained from new experimental techniques, with encouraging results. On the other hand, new contact prediction methods, though holding considerable promise, have yet to make an impact in CASP testing. The nature of CASP targets has been changing in recent CASPs, reflecting shifts in experimental structural biology, with more irregular structures, more multi-domain and multi-subunit structures, and less standard versions of known folds. When allowance is made for these factors, we continue to see steady progress in the overall accuracy of models, particularly resulting from improvement of non-template regions.

  9. Modeling and validation of a mechanistic tool (MEFISTO) for the prediction of critical power in BWR fuel assemblies

    International Nuclear Information System (INIS)

    Adamsson, Carl; Le Corre, Jean-Marie

    2011-01-01

    Highlights: → The MEFISTO code efficiently and accurately predicts the dryout event in a BWR fuel bundle using a mechanistic model. → A hybrid approach between a fast and robust sub-channel analysis and a three-field two-phase analysis is adopted. → The MEFISTO modeling approach, calibration, CPU usage, sensitivity, trend analysis and performance evaluation are presented. → The calibration parameters and process were carefully selected to preserve the mechanistic nature of the code. → The code's dryout prediction performance is near the level of fuel-specific empirical dryout correlations. - Abstract: Westinghouse is currently developing the MEFISTO code with the main goal of achieving fast, robust, practical and reliable prediction of steady-state dryout critical power in Boiling Water Reactor (BWR) fuel bundles based on a mechanistic approach. A computationally efficient simulation scheme is used to achieve this goal, where the code resolves all relevant field (drop, steam and multi-film) mass balance equations within the annular flow region at the sub-channel level, while relying on a fast and robust two-phase (liquid/steam) sub-channel solution to provide the cross-flow information. The MEFISTO code can hence provide a highly detailed solution of the multi-film flow in a BWR fuel bundle while enhancing flexibility and reducing the computer time by an order of magnitude compared to a standard three-field sub-channel analysis approach. Models for the numerical computation of the one-dimensional field flowrate distributions in an open channel (e.g. a sub-channel), including the numerical treatment of field cross-flows, part-length rods, spacer grids and post-dryout conditions, are presented in this paper. The MEFISTO code is then applied to dryout prediction in a BWR fuel bundle using VIPRE-W as a fast and robust two-phase sub-channel driver code. The dryout power is numerically predicted by iterating on the bundle power so that the minimum film flowrate in the

  10. Prediction of critical thinking disposition based on mentoring among ...

    African Journals Online (AJOL)

    The results of the study showed a significantly positive correlation between mentoring and critical thinking disposition among faculty members. The findings showed that 67% of the variance of critical thinking disposition was explained by the predictive variables. The faculty members evaluated themselves in all mentoring ...

  11. Thermal hydraulic test for reactor safety system - Critical heat flux experiment and development of prediction models

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Soon Heung; Baek, Won Pil; Yang, Soo Hyung; No, Chang Hyun [Korea Advanced Institute of Science and Technology, Taejon (Korea)

    2000-04-01

    To acquire CHF data through experiments and to develop prediction models, this research was conducted. The final objectives of the research are as follows: 1) Production of tube CHF data for low and middle pressure and mass flux, and flow boiling visualization. 2) Modification and suggestion of tube CHF prediction models. 3) Development of a fuel bundle CHF prediction methodology based on tube CHF prediction models. The major results of the research are as follows: 1) Production of CHF data for low and middle pressure and mass flux. - Acquisition of CHF data (764) for low and middle pressure and flow conditions - Analysis of CHF trends based on the CHF data - Assessment of existing CHF prediction methods with the CHF data 2) Modification and suggestion of tube CHF prediction models. - Development of a unified CHF model applicable over a wide parametric range - Development of a threshold length correlation - Improvement of the CHF look-up table using the threshold length correlation 3) Development of a fuel bundle CHF prediction methodology based on tube CHF prediction models. - Development of a bundle CHF prediction methodology using a correction factor. 11 refs., 134 figs., 25 tabs. (Author)

  12. Critical phases in the raise and peel model

    Science.gov (United States)

    Jara, D. A. C.; Alcaraz, F. C.

    2018-05-01

    The raise and peel model (RPM) is a nonlocal stochastic model describing the space and time fluctuations of an evolving one-dimensional interface. Its relevant parameter u is the ratio between the rates of local adsorption and nonlocal desorption processes (avalanches). The model at u = 1 is the first example of a conformally invariant stochastic model. For small values u < u0 the model is noncritical, while for u > u0 it is critical. Although previous studies indicate that u0 = 1, a determination of u0 with reasonable precision is still missing. By calculating numerically the structure function of the height profiles in reciprocal space, we confirm with good precision that indeed u0 = 1. We establish that at the conformally invariant point u = 1 the RPM has a roughening transition with dynamical critical exponent z = 1 (and a corresponding roughness exponent). For u > 1 the model is critical with a u-dependent dynamical critical exponent that tends towards zero as u → ∞. However, at 1/u = 0 the RPM is exactly mapped into the totally asymmetric exclusion problem. This last model is known to be noncritical (critical) for open (periodic) boundary conditions. Our numerical studies indicate that the RPM as u → ∞, due to its nonlocal dynamical processes, has the same large-distance physics no matter which boundary condition we choose. For u > 1, our numerical analysis shows that, in contrast to previous predictions, the region is composed of two distinct critical phases: in one phase the height profiles are rough, and in the other the height profiles are flat at large distances. We also observed that in both critical phases (u > 1) the RPM at short length scales has an effective behavior in the Kardar–Parisi–Zhang critical universality class, which is not the true behavior of the system at large length scales.

  13. Evaluations of the CCFL and critical flow models in TRACE for PWR LBLOCA analysis

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Jung-Hua; Lin, Hao Tzu [National Tsing Hua Univ., HsinChu, Taiwan (China). Dept. of Engineering and System Science; Wang, Jong-Rong [Atomic Energy Council, Taoyuan County, Taiwan (China). Inst. of Nuclear Energy Research; Shih, Chunkuan [National Tsing Hua Univ., HsinChu, Taiwan (China). Inst. of Nuclear Engineering and Science

    2012-12-15

    This study aims to develop a Maanshan Pressurized Water Reactor (PWR) analysis model using the TRACE (TRAC/RELAP Advanced Computational Engine) code. By analyzing the Large Break Loss of Coolant Accident (LBLOCA) sequence, the results are compared with the Maanshan Final Safety Analysis Report (FSAR) data. The critical flow and Counter-Current Flow Limitation (CCFL) models play an important role in the overall performance of the TRACE LBLOCA prediction. Therefore, a sensitivity study on the discharge coefficients of the critical flow model and on CCFL modeling in different regions is also discussed. The current conclusions show that modeling CCFL in the downcomer has a more significant impact on the peak cladding temperature than modeling CCFL in the hot legs does. No CCFL phenomena occurred in the pressurizer surge line. The best value for the multipliers of the critical flow model is 0.5, with which TRACE consistently predicts the break flow rate in the LBLOCA analysis as shown in the FSAR. (orig.)

  14. Using a Prediction Model to Manage Cyber Security Threats

    Directory of Open Access Journals (Sweden)

    Venkatesh Jaganathan

    2015-01-01

    Full Text Available Cyber-attacks are an important issue faced by all organizations. Securing information systems is critical. Organizations should be able to understand the ecosystem and predict attacks. Predicting attacks quantitatively should be part of risk management. The cost impact due to worms, viruses, or other malicious software is significant. This paper proposes a mathematical model to predict the impact of an attack based on significant factors that influence cyber security. This model also considers the environmental information required. It is generalized and can be customized to the needs of the individual organization.

  17. Adverse Condition and Critical Event Prediction in Cranfield Multiphase Flow Facility

    DEFF Research Database (Denmark)

    Egedorf, Søren; Shaker, Hamid Reza

    2017-01-01

    …or even to the environment. To cope with these, adverse condition and critical event prediction plays an important role. The Adverse Condition and Critical Event Prediction Toolbox (ACCEPT) is a tool recently developed by NASA to allow for timely prediction of an adverse event, with low false-alarm and missed-detection rates. While ACCEPT has been shown to be an effective tool in some applications, its performance has not yet been evaluated on practical, well-known benchmark examples. In this paper, ACCEPT is used for adverse condition and critical event prediction in a multiphase flow facility. The Cranfield multiphase flow facility is known to be an interesting benchmark which has been used to evaluate different methods from statistical process monitoring. In order to allow the data from the flow facility to be used in ACCEPT, methods such as Kernel Density Estimation (KDE), PCA and CVA…
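For readers unfamiliar with the preprocessing named above, a minimal sketch of PCA-based monitoring (one of the techniques mentioned, not ACCEPT itself) flags samples whose residual off the subspace learned from normal operation is unusually large; the component count and the 99th-percentile threshold are illustrative choices:

```python
import numpy as np

def pca_spe_monitor(X_train, X_new, n_components=2):
    """Fit a PCA model on normal-operation data and score new samples by
    their squared prediction error (SPE, or Q statistic): the squared
    distance to the learned principal subspace. Samples whose SPE exceeds
    a simple 99th-percentile training threshold are flagged as anomalous.
    The component count and percentile are illustrative choices."""
    mu = X_train.mean(axis=0)
    sd = X_train.std(axis=0) + 1e-12     # avoid division by zero
    Z = (X_train - mu) / sd
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt[:n_components].T              # retained loadings

    def spe(X):
        Zx = (X - mu) / sd
        resid = Zx - Zx @ P @ P.T        # part the model cannot explain
        return (resid ** 2).sum(axis=1)

    return spe(X_new), np.percentile(spe(X_train), 99)
```

A sample lying on the learned subspace scores near zero, while one violating the normal correlation structure scores far above the threshold, which is the basic detection signal such toolchains feed into event prediction.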

  18. Consideration of critical heat flux margin prediction by subcooled or low quality critical heat flux correlations

    International Nuclear Information System (INIS)

    Hejzlar, P.; Todreas, N.E.

    1996-01-01

    The accurate prediction of the critical heat flux (CHF) margin, a key design parameter in a variety of cooling and heating systems, is of high importance. These margins are, for the low quality region, typically expressed in terms of critical heat flux ratios using the direct substitution method. Using a simple example of a heated tube, it is shown that CHF correlations of a certain type, often used to predict CHF margins expressed in this manner, may yield different results that depend strongly on the correlation in use. It is argued that applying the heat balance method to such correlations, which expresses the CHF margin in terms of the critical power ratio, may be more appropriate. (orig.)
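The distinction between the two margin definitions can be made concrete with a toy example (an assumed linear CHF-versus-quality correlation and a normalized heat balance, not any published correlation): direct substitution freezes the local quality at actual operating conditions, while the heat balance method re-computes quality as power is raised.

```python
def chf_correlation(x):
    # assumed toy correlation: CHF (normalized units) falls linearly with quality
    return 4.0 - 6.0 * x

def local_quality(power, z):
    # toy heat balance: equilibrium quality grows with bundle power and axial
    # position z, from a subcooled inlet (the -0.2 offset); units normalized
    return power * z - 0.2

def chfr_direct_substitution(power, z):
    """CHF ratio with the local quality frozen at actual operating conditions
    (uniform axial heat flux assumed, so local flux ~ power)."""
    return chf_correlation(local_quality(power, z)) / power

def cpr_heat_balance(power, z, tol=1e-9):
    """Critical power ratio: bisect on a power multiplier m until the flux
    m*power meets the CHF evaluated at the quality that the increased
    power would itself produce."""
    lo, hi = 1e-3, 1e3
    while hi - lo > tol:
        m = 0.5 * (lo + hi)
        if chf_correlation(local_quality(m * power, z)) > m * power:
            lo = m                       # still below dryout at multiplier m
        else:
            hi = m
    return 0.5 * (lo + hi)
```

At power 0.5 and exit position z = 1 in these units, the direct-substitution ratio is 4.4 while the critical power ratio is about 1.49: because quality rises with power, the two measures quantify margin very differently, which is the point the abstract makes.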

  19. Numerical prediction of critical heat flux in nuclear fuel rod bundles with advanced three-fluid multidimensional porous media based model

    International Nuclear Information System (INIS)

    Zoran Stosic; Vladimir Stevanovic

    2005-01-01

    Full text of publication follows: The modern design of nuclear fuel rod bundles for Boiling Water Reactors (BWRs) is characterised by an increased number of rods in the bundle, the introduction of part-length fuel rods, and a water channel positioned along the bundle asymmetrically with regard to the centre of the bundle cross section. Such a design causes significant spatial differences in volumetric heat flux, steam void fraction distribution, mass flux rate and other thermal-hydraulic parameters important for efficient cooling of nuclear fuel rods during normal steady-state and transient conditions. The prediction of the Critical Heat Flux (CHF) under these complex thermal-hydraulic conditions is of prime importance for safe and economic BWR operation. An efficient numerical method for CHF prediction is developed based on the porous medium concept and multi-fluid two-phase flow models. The fuel rod bundle is treated as a porous medium with a two-phase flow through it. Coolant flow from the bundle entrance to the exit is characterised by the subsequent change of one-phase and several two-phase flow patterns. A one-fluid (one-phase) model is used for the prediction of liquid heating up in the bundle entrance region. A two-fluid modelling approach is applied to the bubbly and churn-turbulent vapour and liquid flows. A three-fluid modelling approach is applied to the annular flow pattern: liquid film on the rod walls, steam flow, and droplets entrained in the steam stream. Every fluid stream in the applied multi-fluid models is described with mass, momentum and energy balance equations. Closure laws for the prediction of interfacial transfer processes are stated, with special emphasis on the prediction of the steam-water interface drag force, through the interface drag coefficient, and on the droplet entrainment and deposition rates for the three-fluid annular flow model. The model implies non-equilibrium thermal and flow conditions. A new mechanistic approach for the CHF prediction…

  20. Saturated properties prediction in critical region by a quartic equation of state

    Directory of Open Access Journals (Sweden)

    Yong Wang

    2011-08-01

    Full Text Available A diverse substance library containing extensive PVT data for 77 pure components was used to critically evaluate the performance of a quartic equation of state and four other well-known cubic equations of state in the critical region. The quartic EOS studied in this work was found to be significantly superior to the others in both vapor pressure prediction and saturated volume prediction in the vicinity of the critical point.

  1. Constructing an everywhere and locally relevant predictive model of the West-African critical zone

    Science.gov (United States)

    Hector, B.; Cohard, J. M.; Pellarin, T.; Maxwell, R. M.; Cappelaere, B.; Demarty, J.; Grippa, M.; Kergoat, L.; Lebel, T.; Mamadou, O.; Mougin, E.; Panthou, G.; Peugeot, C.; Vandervaere, J. P.; Vischel, T.; Vouillamoz, J. M.

    2017-12-01

    Considering water resources and hydrologic hazards, West Africa is among the regions most vulnerable to both climatic change (e.g. the observed intensification of precipitation) and anthropogenic change. With a demographic growth rate of about +3% per year, the region experiences rapid land use changes and increased pressure on surface water and groundwater resources, with observed consequences on the hydrological cycle (water table rise as a result of the Sahelian paradox, increase in flood occurrence, etc.). Managing large hydrosystems (such as transboundary aquifers or river basins like the Niger river) requires anticipating such changes. However, the region significantly lacks the observations needed to construct and validate critical zone (CZ) models able to predict the future hydrologic regime, and it comprises hydrosystems which encompass strong environmental gradients (e.g. geological, climatic, ecological) with highly different dominant hydrological processes. We address these issues by constructing a high-resolution (1 km²) regional-scale physically-based model using ParFlow-CLM, which allows modeling a wide range of processes without prior knowledge of their relative dominance. Our approach combines modeling at multiple scales, from local to meso and regional scales, within the same theoretical framework. Local and meso-scale models are evaluated thanks to the rich AMMA-CATCH CZ observation database, which covers 3 supersites with contrasted environments in Benin (Lat.: 9.8°N), Niger (Lat.: 13.3°N) and Mali (Lat.: 15.3°N). At the regional scale, the lack of a relevant map of soil hydrodynamic parameters is addressed using remote sensing data assimilation. Our first results show the model's ability to reproduce the known dominant hydrological processes (runoff generation, ET, groundwater recharge…) across the major West-African regions and allow us to conduct virtual experiments to explore the impact of global changes on the hydrosystems. This approach is a first step toward the construction of

  2. Evaluation of Accuracy of Calculational Prediction of Criticality Based on ICSBEP Handbook Experiments

    International Nuclear Information System (INIS)

    Golovko, Yury; Rozhikhin, Yevgeniy; Tsibulya, Anatoly; Koscheev, Vladimir

    2008-01-01

    Experiments with plutonium, low-enriched uranium and uranium-233 from the ICSBEP Handbook are considered in this paper. Among these experiments, only those were selected which seem most relevant to the evaluation of the uncertainty of the critical mass of mixtures of plutonium, low-enriched uranium or uranium-233 with light water. All selected experiments were examined, covariance matrices of criticality uncertainties were developed, and some uncertainties were revised. A statistical analysis of these experiments was performed, and some contradictions were discovered and eliminated. The evaluation of the accuracy of criticality prediction was performed using the internally consistent set of experiments with plutonium, low-enriched uranium and uranium-233 remaining after the statistical analysis. The application objects for the evaluation of calculational prediction of criticality were water-reflected spherical systems of homogeneous aqueous mixtures of plutonium, low-enriched uranium or uranium-233 at different concentrations, which are simplified models of apparatus of the external fuel cycle. It is shown that the procedure allows a considerable reduction of the uncertainty in k_eff caused by the uncertainties in neutron cross sections. It is also shown that the results are practically independent of the initial covariance matrices of nuclear data uncertainties. (authors)

  3. Interpreting Disruption Prediction Models to Improve Plasma Control

    Science.gov (United States)

    Parsons, Matthew

    2017-10-01

    In order for the tokamak to be a feasible design for a fusion reactor, it is necessary to minimize damage to the machine caused by plasma disruptions. Accurately predicting disruptions is a critical capability for triggering any mitigative actions, and a modest amount of attention has been given to efforts that employ machine learning techniques to make these predictions. By monitoring diagnostic signals during a discharge, such predictive models look for signs that the plasma is about to disrupt. Typically these predictive models are interpreted simply to give a `yes' or `no' response as to whether a disruption is approaching. However, it is possible to extract further information from these models to indicate which input signals are more strongly correlated with the plasma approaching a disruption. If highly accurate predictive models can be developed, this information could be used in plasma control schemes to make better decisions about disruption avoidance. This work was supported by a Grant from the 2016-2017 Fulbright U.S. Student Program, administered by the Franco-American Fulbright Commission in France.

  4. Assessment of ASSERT-PV for prediction of critical heat flux in CANDU bundles

    International Nuclear Information System (INIS)

    Rao, Y.F.; Cheng, Z.; Waddington, G.M.

    2014-01-01

    Highlights: • Assessment of the new Canadian subchannel code ASSERT-PV 3.2 for CHF prediction. • CANDU 28-, 37- and 43-element bundle CHF experiments. • Prediction improvement of ASSERT-PV 3.2 over previous code versions. • Sensitivity study of the effect of CHF model options. - Abstract: Atomic Energy of Canada Limited (AECL) has developed the subchannel thermalhydraulics code ASSERT-PV for the Canadian nuclear industry. The recently released ASSERT-PV 3.2 provides enhanced models for improved predictions of flow distribution, critical heat flux (CHF), and post-dryout (PDO) heat transfer in horizontal CANDU fuel channels. This paper presents results of an assessment of the new code version against five full-scale CANDU bundle experiments conducted in the 1990s and in 2009 by Stern Laboratories (SL), using 28-, 37- and 43-element (CANFLEX) bundles. A total of 15 CHF test series with varying pressure-tube creep and/or bearing-pad height were analyzed. The SL experiments encompassed the bundle geometries and range of flow conditions for the intended ASSERT-PV applications in CANDU reactors. Code predictions of channel dryout power and of axial and radial CHF locations were compared against measurements from the SL CHF tests to quantify the code prediction accuracy. The prediction statistics using the recommended model set of ASSERT-PV 3.2 were compared to those from previous code versions. Furthermore, sensitivity studies evaluated the contribution of each CHF model change or enhancement to the improvement in CHF prediction. Overall, the assessment demonstrated significant improvement in the prediction of channel dryout power and of axial and radial CHF locations in horizontal fuel channels containing CANDU bundles

  5. Prediction of safety critical software operational reliability from test reliability using testing environment factors

    International Nuclear Information System (INIS)

    Jung, Hoan Sung; Seong, Poong Hyun

    1999-01-01

    It has been a critical issue to predict safety-critical software reliability in the nuclear engineering area. For many years, research has focused on the quantification of software reliability, and many models have been developed for this purpose. Most software reliability models estimate reliability from the failure data collected during testing, assuming that the test environment well represents the operational profile. The user's interest, however, is in the operational reliability rather than the test reliability, and experience shows that operational reliability is higher than test reliability. Under the assumption that the difference in reliability results from the change of environment from testing to operation, testing environment factors, comprising an aging factor and a coverage factor, are developed in this paper and used to predict the ultimate operational reliability from the failure data of the testing phase, by incorporating test environments applied beyond the operational profile into the testing environment factors. The application results show that the proposed method can estimate the operational reliability accurately. (Author). 14 refs., 1 tab., 1 fig

  6. Homogeneous non-equilibrium two-phase critical flow model

    International Nuclear Information System (INIS)

    Schroeder, J.J.; Vuxuan, N.

    1987-01-01

    An important aspect of nuclear and chemical reactor safety is the ability to predict the maximum, or critical, mass flow rate from a break or leak in a pipe system. At the beginning of such a blowdown, if the stagnation condition of the fluid is subcooled or slightly saturated, thermodynamic non-equilibrium exists downstream, i.e. the fluid becomes superheated to a degree determined by the liquid pressure. A simplified non-equilibrium model, explained in this report, is valid for rapidly decreasing pressure along the flow path. It presumes that the fluid has to be superheated by an amount governed by physical principles before it starts to flash into steam. The flow is assumed to be homogeneous, i.e. the steam and liquid velocities are equal. An adiabatic flow calculation mode (Fanno lines) is employed to evaluate the critical flow rate for long pipes. The model is found to describe critical flow tests satisfactorily. Good agreement is obtained with the large-scale Marviken tests as well as with small-scale experiments. (orig.)
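As a hedged sketch of the homogeneous-flow core of such models (the equal-velocity assumption only, without the superheat threshold the report adds), the critical mass flux follows from maximizing the mass flux of a mixture expanding from stagnation enthalpy:

```latex
% Homogeneous (equal-velocity) mixture expanding from stagnation enthalpy h_0:
% the energy balance  h_0 = h(p) + v^2/2  gives the mass flux at pressure p
G(p) = \rho(p)\,\sqrt{2\left[h_0 - h(p)\right]},
% and the critical (choked) discharge is the maximum over downstream pressure
G_c = \max_{p}\, G(p), \qquad
\left.\frac{\mathrm{d}G}{\mathrm{d}p}\right|_{p = p_c} = 0 .
```

In the non-equilibrium variant described above, flashing is delayed until the prescribed superheat is reached, so the mixture density and enthalpy are evaluated along an adiabatic frictional (Fanno-type) path rather than an equilibrium expansion.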

  7. Critical Source Area Delineation: The representation of hydrology in effective erosion modeling.

    Science.gov (United States)

    Fowler, A.; Boll, J.; Brooks, E. S.; Boylan, R. D.

    2017-12-01

    Despite decades of conservation and millions of conservation dollars, nonpoint-source sediment loading associated with agricultural disturbance continues to be a significant problem in many parts of the world. Local and national conservation organizations are interested in targeting critical source areas for control strategy implementation. Currently, conservation practices are selected and located based on Revised Universal Soil Loss Equation (RUSLE) hillslope erosion modeling, and the Natural Resources Conservation Service will soon be transitioning to the Water Erosion Prediction Project (WEPP) model for the same purpose. We present an assessment of critical source areas targeted with RUSLE, WEPP and a regionally validated hydrology model, the Soil Moisture Routing (SMR) model, to compare the location of critical areas for sediment loading and the effectiveness of control strategies. The three models are compared for the Palouse dryland cropping region of the inland northwest, with uncalibrated analyses of the Kamiache watershed using publicly available soils, land-use and long-term simulated climate data. Critical source areas were mapped, and the side-by-side comparison exposes the differences in the location and timing of runoff and erosion predictions. RUSLE results appear most sensitive to slope-driven processes associated with infiltration excess. SMR captured saturation-excess-driven runoff events located at the toe-slope position, while WEPP was able to capture both infiltration-excess and saturation-excess processes depending on soil type and management. A methodology is presented for down-scaling basin-level screening to the hillslope management scale for local control strategies. Information on the location of runoff and erosion, resolved by runoff mechanism, is critical for effective treatment and conservation.

  8. Prediction Approach of Critical Node Based on Multiple Attribute Decision Making for Opportunistic Sensor Networks

    Directory of Open Access Journals (Sweden)

    Qifan Chen

    2016-01-01

    Full Text Available Predicting the critical nodes of an Opportunistic Sensor Network (OSN) can help us not only improve network performance but also decrease the cost of network maintenance. However, existing ways of predicting critical nodes in static networks are not suitable for OSNs. In this paper, the concepts of critical nodes, region contribution (RC), and cut-vertex in a multiregion OSN are defined. We propose an approach to predict critical nodes for OSNs based on multiple attribute decision making (MADM). It uses RC to represent the dependence of regions on Ferry nodes. The TOPSIS algorithm is employed to find the Ferry node with the maximum comprehensive contribution, which is a critical node. The experimental results show that, in different scenarios, this approach can better predict the critical nodes of an OSN.
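As a hedged illustration of the TOPSIS step (the attribute values and weights below are invented for illustration, not the paper's RC-based attributes), alternatives are ranked by their relative closeness to an ideal solution:

```python
import numpy as np

def topsis_scores(matrix, weights, benefit):
    """Minimal TOPSIS sketch: rows are alternatives (e.g. Ferry nodes),
    columns are attributes. benefit[j] is True when larger values of
    attribute j are better. Returns relative closeness to the ideal
    solution in [0, 1]; the highest score wins."""
    X = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    V = w * X / np.linalg.norm(X, axis=0)          # normalize, then weight
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_ideal = np.linalg.norm(V - ideal, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_ideal + d_worst)
```

For instance, `topsis_scores([[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]], [0.5, 0.5], [True, False])` ranks the first alternative highest, since it is best on the benefit attribute and lowest on the cost attribute.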

  9. Predicting field weed emergence with empirical models and soft computing techniques

    Science.gov (United States)

    Seedling emergence is the most important phenological process that influences the success of weed species; therefore, predicting weed emergence timing plays a critical role in scheduling weed management measures. Important efforts have been made in the attempt to develop models to predict seedling e...

  10. Prediction Models for Dynamic Demand Response

    Energy Technology Data Exchange (ETDEWEB)

    Aman, Saima; Frincu, Marc; Chelmis, Charalampos; Noor, Muhammad; Simmhan, Yogesh; Prasanna, Viktor K.

    2015-11-02

    As Smart Grids move closer to dynamic curtailment programs, Demand Response (DR) events will become necessary not only on fixed time intervals and weekdays predetermined by static policies, but also during changing decision periods and weekends, to react to real-time demand signals. Unique challenges arise in this context vis-a-vis demand prediction and curtailment estimation, and the transformation of such tasks into an automated, efficient dynamic demand response (D2R) process. While existing work has concentrated on increasing the accuracy of prediction models for DR, there is a lack of studies of prediction models for D2R, which we address in this paper. Our first contribution is the formal definition of D2R and the description of its challenges and requirements. Our second contribution is a feasibility analysis of very-short-term prediction of electricity consumption for D2R over a diverse, large-scale dataset that includes both small residential customers and large buildings. Our third, and major, contribution is a set of insights into the predictability of electricity consumption in the context of D2R. Specifically, we focus on prediction models that can operate at a very small data granularity (here 15-min intervals), for both weekdays and weekends - all conditions that characterize scenarios for D2R. We find that short-term time series and simple averaging models used by Independent Service Operators and utilities achieve superior prediction accuracy. We also observe that workdays are more predictable than weekends and holidays. Also, smaller customers have large variation in consumption and are less predictable than larger buildings. Key implications of our findings are that better models are required for small customers and for non-workdays, both of which are critical for D2R. Also, prediction models require just a few days' worth of data, indicating that small amounts of
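The "simple averaging" baseline mentioned above can be sketched as follows; this is a hedged illustration, since the abstract does not specify the exact averaging window the operators use:

```python
import numpy as np

def same_interval_average(history, k_days=3, intervals_per_day=96):
    """Forecast each 15-min interval of the next day as the mean of that
    same interval over the previous k_days. history is a flat sequence of
    consumption readings, oldest first, whose length is a multiple of
    intervals_per_day (96 fifteen-minute intervals per day)."""
    days = np.asarray(history, dtype=float).reshape(-1, intervals_per_day)
    return days[-k_days:].mean(axis=0)
```

Despite its simplicity, averaging over like intervals captures the strong daily periodicity of consumption, which is one reason such baselines remain competitive at fine granularity.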

  11. Durability and life prediction modeling in polyimide composites

    Science.gov (United States)

    Binienda, Wieslaw K.

    1995-01-01

Sudden appearance of cracks on a macroscopically smooth surface of brittle materials due to cooling or drying shrinkage is a phenomenon related to many engineering problems. Although conventional strength theories can be used to predict the necessary condition for crack appearance, they are unable to predict crack spacing and depth. On the other hand, fracture mechanics theory can only study the behavior of existing cracks. The theory of crack initiation can be summarized into three conditions, combining a strength criterion with the laws of energy conservation; the average crack spacing and depth can thus be determined. The problem of crack initiation from the surface of an elastic half plane is solved and compares quite well with available experimental evidence. The theory of crack initiation is also applied to concrete pavements. The influence of cracking is modeled by the additional compliance according to Okamura's method. The theoretical prediction by this structural-mechanics type of model correlates very well with field observations. The model may serve as a theoretical foundation for future pavement joint design. The initiation of interactive cracks in quasi-brittle material is studied based on the theory of the cohesive crack model. These cracks may grow simultaneously, or some of them may close during certain stages. The concept of crack unloading in the cohesive crack model is proposed. The critical behavior (crack bifurcation, maximum loads) of the cohesive crack model is characterized by rate equations. The post-critical behavior of crack initiation is also studied.

  12. Prediction of critical illness in elderly outpatients using elder risk assessment: a population-based study

    Directory of Open Access Journals (Sweden)

    Biehl M

    2016-06-01

The area under the receiver operating characteristic curve was 0.75, which indicated good discrimination. Conclusion: A simple model based on easily obtainable administrative data predicted critical illness in the next 2 years in elderly outpatients, with up to 14% of the highest-risk population suffering from critical illness. This model can facilitate efficient enrollment of patients into clinical programs such as care transition programs and studies aimed at the prevention of critical illness. It also can serve as a reminder to initiate advance care planning for high-risk elderly patients. External validation of this tool in different populations may enhance its generalizability. Keywords: aged, prognostication, critical care, mortality, elder risk assessment

  13. Logistic regression modelling: procedures and pitfalls in developing and interpreting prediction models

    Directory of Open Access Journals (Sweden)

    Nataša Šarlija

    2017-01-01

This study sheds light on the most common issues related to applying logistic regression in prediction models for company growth. The purpose of the paper is (1) to provide a detailed demonstration of the steps in developing a growth prediction model based on logistic regression analysis, (2) to discuss common pitfalls and methodological errors in developing a model, and (3) to provide solutions and possible ways of overcoming these issues. Special attention is devoted to satisfying logistic regression assumptions, selecting and defining dependent and independent variables, using classification tables and ROC curves for reporting model strength, interpreting odds ratios as effect measures, and evaluating performance of the prediction model. Development of a logistic regression model in this paper focuses on a prediction model of company growth. The analysis is based on predominantly financial data from a sample of 1471 small and medium-sized Croatian companies active between 2009 and 2014. The financial data is presented in the form of financial ratios divided into nine main groups depicting the following areas of business: liquidity, leverage, activity, profitability, research and development, investing and export. The growth prediction model indicates aspects of a business critical for achieving high growth. In that respect, the contribution of this paper is twofold: first, methodological, in terms of pointing out pitfalls and potential solutions in logistic regression modelling, and second, theoretical, in terms of identifying factors responsible for high growth of small and medium-sized companies.
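Two of the steps the record discusses, fitting a logistic regression and summarizing model strength with the area under the ROC curve (AUC), can be sketched in a few lines. The data below is synthetic (a single made-up "financial ratio"); it is not the Croatian company sample.

```python
import math
import random

# Minimal sketch: fit a one-predictor logistic regression by gradient
# descent on the log-loss, then report AUC via the rank (Mann-Whitney)
# formulation. Synthetic data: "growing" firms score higher on x.
random.seed(0)
X = [random.gauss(1.0, 1.0) for _ in range(200)] + \
    [random.gauss(-1.0, 1.0) for _ in range(200)]
y = [1] * 200 + [0] * 200

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    gw = gb = 0.0
    for xi, yi in zip(X, y):
        p = 1.0 / (1.0 + math.exp(-(w * xi + b)))   # predicted probability
        gw += (p - yi) * xi
        gb += (p - yi)
    w -= lr * gw / len(X)
    b -= lr * gb / len(X)

# AUC = probability a random positive outscores a random negative.
scores = [w * xi + b for xi in X]
pos = [s for s, yi in zip(scores, y) if yi == 1]
neg = [s for s, yi in zip(scores, y) if yi == 0]
auc = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg) \
      / (len(pos) * len(neg))
```

The odds ratio for the predictor is then `math.exp(w)`, the effect-measure interpretation the record highlights.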

  14. Driver's mental workload prediction model based on physiological indices.

    Science.gov (United States)

    Yan, Shengyuan; Tran, Cong Chi; Wei, Yingying; Habiyaremye, Jean Luc

    2017-09-15

Developing an early warning model to predict the driver's mental workload (MWL) is critical and helpful, especially for new or less experienced drivers. The present study aims to investigate the correlation between new drivers' MWL and their work performance, as measured by the number of errors. Additionally, the group method of data handling is used to establish the driver's MWL predictive model based on a subjective rating (NASA task load index [NASA-TLX]) and six physiological indices. The results indicate that the NASA-TLX and the number of errors are positively correlated, and the predictive model shows its validity with an R² value of 0.745. The proposed model is expected to provide new drivers with a reference value for their MWL from the physiological indices, and driving lesson plans can then be designed to sustain an appropriate MWL as well as improve the driver's work performance.
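The record's model is built with the group method of data handling (GMDH) over six physiological indices; as a minimal stand-in, the sketch below fits a one-variable linear model to synthetic heart-rate-like data and computes R², the goodness-of-fit statistic quoted above (0.745 there). Everything here is illustrative, not the study's data or algorithm.

```python
import random

# Least-squares fit of workload ~ a * index + b, plus R^2.
def fit_linear(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def r_squared(xs, ys, slope, intercept):
    my = sum(ys) / len(ys)
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

# Synthetic NASA-TLX-like scores rising with a physiological index.
random.seed(1)
hr = [60 + i for i in range(40)]                    # e.g. heart rate
tlx = [0.5 * x + random.gauss(0, 2) for x in hr]    # workload + noise
m, c = fit_linear(hr, tlx)
r2 = r_squared(hr, tlx, m, c)
```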

  15. Pulsatile fluidic pump demonstration and predictive model application

    International Nuclear Information System (INIS)

    Morgan, J.G.; Holland, W.D.

    1986-04-01

    Pulsatile fluidic pumps were developed as a remotely controlled method of transferring or mixing feed solutions. A test in the Integrated Equipment Test facility demonstrated the performance of a critically safe geometry pump suitable for use in a 0.1-ton/d heavy metal (HM) fuel reprocessing plant. A predictive model was developed to calculate output flows under a wide range of external system conditions. Predictive and experimental flow rates are compared for both submerged and unsubmerged fluidic pump cases

  16. Critical Issues in Modelling Lymph Node Physiology

    Directory of Open Access Journals (Sweden)

    Dmitry Grebennikov

    2016-12-01

In this study, we discuss critical issues in modelling the structure and function of lymph nodes (LNs), with emphasis on how LN physiology is related to its multi-scale structural organization. In addition to macroscopic domains such as B-cell follicles and the T-cell zone, there are vascular networks which play a key role in the delivery of information to the inner parts of the LN, i.e., the conduit and blood microvascular networks. We propose object-oriented computational algorithms to model the 3D geometry of the fibroblastic reticular cell (FRC) network and the microvasculature. Assuming that a conduit cylinder is densely packed with collagen fibers, the computational flow study predicted that diffusion, rather than convective flow, should be the dominant mass transport process. The geometry models are used to analyze the lymph flow properties through the conduit network in the unperturbed and damaged states of the LN. The analysis predicts that elimination of up to 60%–90% of edges is required to stop the lymph flux. This result suggests a high degree of functional robustness of the network.
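The robustness experiment described above, deleting edges until the flux stops, can be sketched as a connectivity test on a graph: remove edges one by one and record what fraction must go before an inlet node loses all routes to an outlet. The graph below is a generic random graph, not the paper's FRC geometry model.

```python
import random

# Depth-first search: is there any surviving path (lymph route)
# between src and dst given the remaining edges?
def has_path(n, edges, src, dst):
    adj = {i: [] for i in range(n)}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    seen, stack = {src}, [src]
    while stack:
        v = stack.pop()
        if v == dst:
            return True
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return False

random.seed(2)
n = 60
edges = [(i, j) for i in range(n) for j in range(i + 1, n)
         if random.random() < 0.15]
kept = list(edges)
random.shuffle(kept)

# Delete edges in random order until node 0 is cut off from node n-1.
removed = 0
while has_path(n, kept, 0, n - 1) and kept:
    kept.pop()
    removed += 1
frac_removed = removed / len(edges)
```

On well-connected random graphs a large fraction of edges must typically be removed before the path disappears, qualitatively matching the 60%–90% figure reported above.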

  17. Comparison of the models of financial distress prediction

    Directory of Open Access Journals (Sweden)

    Jiří Omelka

    2013-01-01

Prediction of financial distress is generally understood as estimating whether a business entity is close to bankruptcy, or at least to serious financial problems. Financial distress is defined as a situation in which a company is not able to satisfy its liabilities in any form, or in which its liabilities exceed its assets. Classification of the financial situation of business entities represents a multidisciplinary scientific issue that draws not only on economic theory but also on statistical and econometric approaches. The first models of financial distress prediction originated in the 1960s. One of the best known is Altman's model, followed by a range of others constructed on broadly similar bases. In many existing models it is possible to find common elements which could be marked as elementary indicators of potential financial distress of a company. The objective of this article is, based on a comparison of existing models of prediction of financial distress, to define a set of basic indicators of a company's financial distress and to identify their critical aspects. The sample defined this way will be a background for future research focused on determination of a one-dimensional model of financial distress prediction which would subsequently become a basis for construction of a multi-dimensional prediction model.
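The Altman model named above is concrete enough to sketch: the widely cited 1968 Z-score for publicly traded manufacturers is a fixed linear combination of five financial ratios. The coefficients below are the standard published ones; the example ratio values are illustrative, not taken from any real company.

```python
# Altman's original (1968) Z-score for public manufacturing firms.
def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    """Z = 1.2*X1 + 1.4*X2 + 3.3*X3 + 0.6*X4 + 1.0*X5, where X1..X5 are
    working capital, retained earnings, EBIT, market value of equity,
    and sales, each scaled by total assets (X4 by total liabilities)."""
    return (1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta
            + 0.6 * mve_tl + 1.0 * sales_ta)

# Conventional cut-offs: Z > 2.99 "safe" zone, Z < 1.81 "distress" zone.
z_healthy = altman_z(0.25, 0.30, 0.15, 1.5, 1.2)
z_distressed = altman_z(-0.05, -0.10, 0.01, 0.4, 0.6)
```

Ratios of this kind (liquidity, leverage, profitability, activity) are exactly the "elementary indicators" the article sets out to compare across models.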

  18. A new mechanistic model of critical heat flux in forced-convection subcooled boiling

    International Nuclear Information System (INIS)

    Alajbegovic, A.; Kurul, N.; Podowski, M.Z.; Drew, D.A.; Lahey, R.T. Jr.

    1997-10-01

Because of its practical importance and various industrial applications, the process of subcooled flow boiling has long attracted attention in the research community. However, the existing models are primarily phenomenological and are based on correlating experimental data rather than on a first-principles analysis of the governing physical phenomena. Even though the mechanisms leading to critical heat flux (CHF) are very complex, recent progress in the understanding of local phenomena of multiphase flow and heat transfer, combined with the development of mathematical models and advanced Computational Fluid Dynamics (CFD) methods, makes analytical prediction of CHF quite feasible. Various mechanisms leading to CHF in subcooled boiling have been investigated, and a new model for the prediction of the onset of CHF has been developed. This new model has been coupled with the overall boiling channel model, numerically implemented in the CFX 4 computer code, and tested and validated against the experimental data of Hino and Ueda. The predicted critical heat flux for various channel operating conditions shows good agreement with the measurements using the aforementioned closure laws for the various local phenomena governing nucleation and bubble departure from the wall. The observed differences are consistent with typical uncertainties associated with CHF data.

  19. Analytical prediction of CHF by FIDAS code based on three-fluid and film-dryout model

    International Nuclear Information System (INIS)

    Sugawara, Satoru

    1990-01-01

An analytical prediction model of critical heat flux (CHF) has been developed on the basis of a film dryout criterion due to droplet deposition and entrainment in annular mist flow. Critical heat flux in round tubes was analyzed by the Film Dryout Analysis Code in Subchannels (FIDAS), which is based on a three-fluid, three-field, and newly developed film dryout model. Predictions by FIDAS were compared with worldwide experimental data on CHF obtained in water and Freon for uniformly and non-uniformly heated tubes under vertical upward flow conditions. Furthermore, the CHF prediction capability of FIDAS was compared with those of other film dryout models for annular flow and with Katto's CHF correlation. The predictions of FIDAS are in sufficient agreement with the experimental CHF data, and indicate better agreement than the other film dryout models and Katto's empirical correlation. (author)

  20. Risk Prediction Models for Incident Heart Failure: A Systematic Review of Methodology and Model Performance.

    Science.gov (United States)

    Sahle, Berhe W; Owen, Alice J; Chin, Ken Lee; Reid, Christopher M

    2017-09-01

Numerous models predicting the risk of incident heart failure (HF) have been developed; however, evidence of their methodological rigor and reporting remains unclear. This study critically appraises the methods underpinning incident HF risk prediction models. EMBASE and PubMed were searched for articles published between 1990 and June 2016 that reported at least 1 multivariable model for prediction of HF. Model development information, including study design, variable coding, missing data, and predictor selection, was extracted. Nineteen studies reporting 40 risk prediction models were included. Existing models have acceptable discriminative ability (C-statistics > 0.70), although only 6 models were externally validated. Candidate variable selection was based on statistical significance from a univariate screening in 11 models, whereas it was unclear in 12 models. Continuous predictors were retained in 16 models, whereas it was unclear how continuous variables were handled in 16 models. Missing values were excluded in 19 of 23 models that reported missing data, and the number of events per variable was below recommended levels in several models. Only 2 models presented recommended regression equations. There was significant heterogeneity in the discriminative ability of models with respect to age. Most studies reported prediction models that had sufficient discriminative ability, although few are externally validated. Methods not recommended for the conduct and reporting of risk prediction modeling were frequently used, and the resulting algorithms should be applied with caution. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. A hybrid model to predict the onset of gas entrainment with surface tension effects

    International Nuclear Information System (INIS)

    Saleh, W.; Bowden, R.C.; Hassan, I.G.; Kadem, L.

    2008-01-01

The onset of gas entrainment in a single downward-oriented discharge from a stratified gas-liquid region was modeled. The assumptions made in the development of the model reduced the problem to one of potential flow. The discharge was modeled as a point sink. Through use of the Kelvin-Laplace equation, the model included the effects of surface tension. The resulting model required further knowledge of the flow field, specifically the dip radius of curvature prior to the onset of gas entrainment. The dip shape and size were investigated experimentally, and correlations were provided to characterize the dip in terms of the discharge Froude number. The experimental correlation was used in conjunction with the theoretical model to predict the critical height. The results showed that, with surface tension effects included, the predicted critical height shows excellent agreement with experimental data. Surface tension reduces the critical height through the Bond number.
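The two dimensionless groups the record relies on can be made concrete. Onset-of-entrainment correlations are commonly written as a power law H/d = C·Fr^n in the discharge Froude number; the coefficient C and exponent n below are placeholders for illustration only, not the correlation fitted in the paper, and the fluid properties are a generic water/air case.

```python
import math

# Discharge Froude number based on the gas-liquid density difference.
def froude(velocity, diameter, rho_l=998.0, rho_g=1.2, g=9.81):
    return velocity / math.sqrt(g * diameter * (rho_l - rho_g) / rho_l)

# Bond number: gravity relative to surface tension.
def bond(diameter, rho_l=998.0, rho_g=1.2, sigma=0.072, g=9.81):
    return (rho_l - rho_g) * g * diameter ** 2 / sigma

# Placeholder power-law estimate of the critical height above the
# discharge; C and n are illustrative assumptions.
def critical_height(velocity, diameter, C=0.6, n=0.4):
    return C * diameter * froude(velocity, diameter) ** n

h_slow = critical_height(1.0, 0.01)   # 1 m/s through a 10 mm discharge
h_fast = critical_height(2.0, 0.01)   # faster discharge -> taller height
```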

  2. Prediction models in in vitro fertilization; where are we? A mini review

    Directory of Open Access Journals (Sweden)

    Laura van Loendersloot

    2014-05-01

Since the introduction of in vitro fertilization (IVF) in 1978, over five million babies have been born worldwide using IVF. Contrary to the perception of many, IVF does not guarantee success: almost 50% of couples that start IVF will remain childless, even if they undergo multiple IVF cycles. The decision to start or continue with IVF is challenging due to the high cost, the burden of the treatment, and the uncertain outcome. Prediction models may play a role in optimal counseling on the chances of pregnancy with IVF, since doctors are not able to correctly predict pregnancy chances themselves. There are three phases of prediction model development: model derivation, model validation, and impact analysis. This review provides an overview of predictive factors in IVF and the available prediction models in IVF, and provides key principles that can be used to critically appraise the literature on prediction models in IVF. We address these points along the three phases of model development.

  3. Prediction of chronic critical illness in a general intensive care unit

    Directory of Open Access Journals (Sweden)

    Sérgio H. Loss

    2013-06-01

OBJECTIVE: To assess the incidence, costs, and mortality associated with chronic critical illness (CCI), and to identify clinical predictors of CCI in a general intensive care unit. METHODS: This was a prospective observational cohort study. All patients receiving supportive treatment for over 20 days were considered chronically critically ill and eligible for the study. After applying the exclusion criteria, 453 patients were analyzed. RESULTS: There was an 11% incidence of CCI. Total length of hospital stay, costs, and mortality were significantly higher among patients with CCI. Mechanical ventilation, sepsis, Glasgow score < 15, inadequate calorie intake, and higher body mass index were independent predictors for CCI in the multivariate logistic regression model. CONCLUSIONS: CCI affects a distinctive population in intensive care units with higher mortality, costs, and prolonged hospitalization. Factors identifiable at the time of admission or during the first week in the intensive care unit can be used to predict CCI.

  4. Scalable Predictive Analysis in Critically Ill Patients Using a Visual Open Data Analysis Platform.

    Directory of Open Access Journals (Sweden)

    Sven Van Poucke

With the accumulation of large amounts of health related data, predictive analytics could stimulate the transformation of reactive medicine towards Predictive, Preventive and Personalized (PPPM) Medicine, ultimately affecting both cost and quality of care. However, high-dimensionality and high-complexity of the data involved, prevents data-driven methods from easy translation into clinically relevant models. Additionally, the application of cutting edge predictive methods and data manipulation require substantial programming skills, limiting its direct exploitation by medical domain experts. This leaves a gap between potential and actual data usage. In this study, the authors address this problem by focusing on open, visual environments, suited to be applied by the medical community. Moreover, we review code free applications of big data technologies. As a showcase, a framework was developed for the meaningful use of data from critical care patients by integrating the MIMIC-II database in a data mining environment (RapidMiner) supporting scalable predictive analytics using visual tools (RapidMiner's Radoop extension). Guided by the CRoss-Industry Standard Process for Data Mining (CRISP-DM), the ETL process (Extract, Transform, Load) was initiated by retrieving data from the MIMIC-II tables of interest. As use case, correlation of platelet count and ICU survival was quantitatively assessed. Using visual tools for ETL on Hadoop and predictive modeling in RapidMiner, we developed robust processes for automatic building, parameter optimization and evaluation of various predictive models, under different feature selection schemes. Because these processes can be easily adopted in other projects, this environment is attractive for scalable predictive analytics in health research.

  5. Scalable Predictive Analysis in Critically Ill Patients Using a Visual Open Data Analysis Platform.

    Science.gov (United States)

    Van Poucke, Sven; Zhang, Zhongheng; Schmitz, Martin; Vukicevic, Milan; Laenen, Margot Vander; Celi, Leo Anthony; De Deyne, Cathy

    2016-01-01

    With the accumulation of large amounts of health related data, predictive analytics could stimulate the transformation of reactive medicine towards Predictive, Preventive and Personalized (PPPM) Medicine, ultimately affecting both cost and quality of care. However, high-dimensionality and high-complexity of the data involved, prevents data-driven methods from easy translation into clinically relevant models. Additionally, the application of cutting edge predictive methods and data manipulation require substantial programming skills, limiting its direct exploitation by medical domain experts. This leaves a gap between potential and actual data usage. In this study, the authors address this problem by focusing on open, visual environments, suited to be applied by the medical community. Moreover, we review code free applications of big data technologies. As a showcase, a framework was developed for the meaningful use of data from critical care patients by integrating the MIMIC-II database in a data mining environment (RapidMiner) supporting scalable predictive analytics using visual tools (RapidMiner's Radoop extension). Guided by the CRoss-Industry Standard Process for Data Mining (CRISP-DM), the ETL process (Extract, Transform, Load) was initiated by retrieving data from the MIMIC-II tables of interest. As use case, correlation of platelet count and ICU survival was quantitatively assessed. Using visual tools for ETL on Hadoop and predictive modeling in RapidMiner, we developed robust processes for automatic building, parameter optimization and evaluation of various predictive models, under different feature selection schemes. Because these processes can be easily adopted in other projects, this environment is attractive for scalable predictive analytics in health research.

  6. Surface tensions of multi-component mixed inorganic/organic aqueous systems of atmospheric significance: measurements, model predictions and importance for cloud activation predictions

    Directory of Open Access Journals (Sweden)

    D. O. Topping

    2007-01-01

It would appear that, in order to model multi-component surface tensions involving the compounds used in this study, one requires the use of appropriate binary data. However, results indicate that the use of theoretical frameworks which contain parameters derived from binary data may predict unphysical behaviour when taken beyond the concentration ranges used to fit such parameters. The effect of deviations between predicted and measured surface tensions on predicted critical saturation ratios was quantified by incorporating the surface tension models into an existing thermodynamic framework, at first neglecting bulk-to-surface partitioning. Critical saturation ratios as a function of dry size were computed for all of the multi-component systems, and it was found that deviations between predictions increased with decreasing particle dry size. As expected, use of the surface tension of pure water, rather than calculating the influence of the solutes explicitly, led to a consistently higher value of the critical saturation ratio, indicating that neglect of the compositional effects will lead to significant differences in predicted activation behaviour even at large particle dry sizes. Following this, two case studies were used to study the possible effect of bulk-to-surface partitioning on critical saturation ratios. By employing various assumptions it was possible to perform calculations not only for a binary system but also for a mixed organic system. In both cases this effect led to a significant increase in the predicted critical supersaturation ratio compared with the above treatment. Further analysis of this effect will form the focus of future work.
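The surface tension effect discussed above can be illustrated with a Köhler-type curve, S(D) = exp(A/D − B·Dd³/D³), whose maximum over wet diameter D is the critical saturation ratio. The Kelvin term A carries the surface tension; the solute (Raoult) coefficient and the crude 1-D search below are illustrative simplifications, not the paper's thermodynamic framework.

```python
import math

# Critical saturation ratio from a simplified Koehler curve.
# A = 4*sigma*M_w/(R*T*rho_w) is the Kelvin (curvature) term;
# B lumps the solute effect and is an illustrative assumption.
def critical_saturation(dry_diameter, sigma, T=293.15,
                        M_w=0.018, rho_w=1000.0, R=8.314, B_coeff=1.0):
    A = 4.0 * sigma * M_w / (R * T * rho_w)   # metres
    B = B_coeff * dry_diameter ** 3           # metres^3
    s_max = 0.0
    D = dry_diameter * 1.1
    while D < dry_diameter * 1000:            # scan wet diameters
        s = math.exp(A / D - B / D ** 3)
        s_max = max(s_max, s)
        D *= 1.01
    return s_max

# 50 nm dry particle: pure-water surface tension versus a lower,
# solute-reduced value (e.g. from surface-active organics).
s_water = critical_saturation(50e-9, sigma=0.072)
s_lower = critical_saturation(50e-9, sigma=0.050)
```

Because the Kelvin term scales with σ, the lower surface tension gives the lower critical saturation ratio, the direction of bias described in the abstract.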

  7. An improved mechanistic critical heat flux model for subcooled flow boiling

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Young Min [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of); Chang, Soon Heung [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1998-12-31

Based on bubble coalescence adjacent to the heated wall as the flow structure at the CHF condition, Chang and Lee developed a mechanistic critical heat flux (CHF) model for subcooled flow boiling. In this paper, improvements to the Chang-Lee model are implemented with more solid theoretical bases for subcooled and low-quality flow boiling in tubes. Nedderman-Shearer's equations for the skin friction factor and universal velocity profile models are employed. The slip effect of the movable bubbly layer is implemented to improve predictability at low mass flow. Also, a mechanistic subcooled flow boiling model is used to predict the flow quality and void fraction. The performance of the present model is verified using the KAIST CHF database of water in uniformly heated tubes. It is found that the present model gives satisfactory agreement with experimental data, within less than 9% RMS error. 9 refs., 5 figs. (Author)

  8. An improved mechanistic critical heat flux model for subcooled flow boiling

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Young Min [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of); Chang, Soon Heung [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1997-12-31

Based on bubble coalescence adjacent to the heated wall as the flow structure at the CHF condition, Chang and Lee developed a mechanistic critical heat flux (CHF) model for subcooled flow boiling. In this paper, improvements to the Chang-Lee model are implemented with more solid theoretical bases for subcooled and low-quality flow boiling in tubes. Nedderman-Shearer's equations for the skin friction factor and universal velocity profile models are employed. The slip effect of the movable bubbly layer is implemented to improve predictability at low mass flow. Also, a mechanistic subcooled flow boiling model is used to predict the flow quality and void fraction. The performance of the present model is verified using the KAIST CHF database of water in uniformly heated tubes. It is found that the present model gives satisfactory agreement with experimental data, within less than 9% RMS error. 9 refs., 5 figs. (Author)

  9. Predictive information processing in music cognition. A critical review.

    Science.gov (United States)

    Rohrmeier, Martin A; Koelsch, Stefan

    2012-02-01

Expectation and prediction constitute central mechanisms in the perception and cognition of music, which have been explored in theoretical and empirical accounts. We review the scope and limits of theoretical accounts of musical prediction with respect to feature-based and temporal prediction. While the concept of prediction is unproblematic for basic single-stream features such as melody, it is not straightforward for polyphonic structures or higher-order features such as formal predictions. Behavioural results based on explicit and implicit (priming) paradigms provide evidence of priming in various domains that may reflect predictive behaviour. Computational learning models, including symbolic (fragment-based), probabilistic/graphical, and connectionist approaches, provide well-specified predictive models of specific features and feature combinations. While models match some experimental results, full-fledged music prediction cannot yet be modelled. Neuroscientific results regarding the early right-anterior negativity (ERAN) and mismatch negativity (MMN) reflect expectancy violations on different levels of processing complexity, and provide some neural evidence for different predictive mechanisms. At present, the combinations of neural and computational modelling methodologies are at early stages and require further research. Copyright © 2012 Elsevier B.V. All rights reserved.

  10. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

analyses and hypothesis tests as a part of the validation step to provide feedback to analysts and modelers. Decisions on how to proceed in making model-based predictions are made based on these analyses together with the application requirements. Updating, modifying, and understanding the boundaries associated with the model are also assisted through this feedback. (4) We include a "model supplement term" when model problems are indicated. This term provides a (bias) correction to the model so that it will better match the experimental results and more accurately account for uncertainty. Presumably, as the models continue to develop and are used for future applications, the causes for these apparent biases will be identified and the need for this supplementary modeling will diminish. (5) We use a response-modeling approach for our predictions that allows for general types of prediction and for assessment of prediction uncertainty. This approach is demonstrated through a case study supporting the assessment of a weapon's response when subjected to a hydrocarbon fuel fire. The foam decomposition model provides an important element of the response of a weapon system in this abnormal thermal environment. Rigid foam is used to encapsulate critical components in the weapon system, providing the needed mechanical support as well as thermal isolation. Because the foam begins to decompose at temperatures above 250 C, modeling the decomposition is critical to assessing a weapon's response. The validation analysis indicates that the model tends to "exaggerate" the effect of temperature changes when compared to the experimental results. The data, however, are too few and too restricted in terms of experimental design to make confident statements regarding modeling problems. For illustration, we assume these indications are correct and compensate for this apparent bias by constructing a model supplement term for use in the model
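The "model supplement term" idea can be sketched very simply: when a model systematically exaggerates the effect of temperature, fit a discrepancy function to the residuals between experiment and model, and add it back as a bias correction. The linear form, the toy model, and the data below are all illustrative assumptions, not the foam decomposition model of the record.

```python
# Least-squares line fit, used here to model the discrepancy.
def fit_linear(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

temps = [300 + 10 * i for i in range(10)]       # temperatures (K)
experiment = [0.50 * t - 100 for t in temps]    # "true" response
model = [0.65 * t - 140 for t in temps]         # exaggerates the T-effect

# Supplement term: fit experiment - model, then correct the model.
a, b = fit_linear(temps, [e - m for e, m in zip(experiment, model)])
corrected = [m + a * t + b for m, t in zip(model, temps)]
max_err = max(abs(c - e) for c, e in zip(corrected, experiment))
```

The fitted slope `a` is negative, capturing exactly the sense in which the model "exaggerates" the temperature effect.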

  11. Spherical and cylindrical cavity expansion models based prediction of penetration depths of concrete targets.

    Directory of Open Access Journals (Sweden)

    Xiaochao Jin

The cavity expansion theory is the most widely used approach to predict the depth of penetration of concrete targets. The main purpose of this work is to clarify the differences between the spherical and cylindrical cavity expansion models and their scope of application in predicting the penetration depths of concrete targets. The factors that influence the dynamic cavity expansion process of concrete materials were first examined. Based on numerical results, the relationship between expansion pressure and velocity was established. Then the parameters in Forrestal's formula were fitted to give a convenient and effective prediction of the penetration depth. Results showed that both the spherical and cylindrical cavity expansion models can accurately predict the depth of penetration when the initial velocity is lower than 800 m/s. However, the prediction accuracy decreases with increasing initial velocity and projectile diameter. Based on our results, it can be concluded that when the initial velocity is higher than the critical velocity, the cylindrical cavity expansion model performs better than the spherical cavity expansion model in predicting the penetration depth, while the opposite holds when the initial velocity is lower than the critical velocity. This work provides a basic principle for selecting the spherical or cylindrical cavity expansion model to predict the penetration depth of concrete targets.
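The closed form underlying Forrestal-type formulas can be sketched from the cavity expansion resistance itself: a rigid projectile decelerated by F = πa²(R + NρV²) yields a logarithmic depth on integrating m·V·dV/dx = −F. The strength term R, shape factor N, and the projectile/target numbers below are illustrative assumptions, not the parameters fitted in the study.

```python
import math

# Penetration depth of a rigid projectile under a cavity-expansion
# resistance F = pi*a^2*(R + N*rho*V^2): integrating m*V dV/dx = -F
# from v0 down to 0 gives the closed form below.
def penetration_depth(m, a, rho, R, N, v0):
    """m: projectile mass (kg), a: radius (m), rho: target density
    (kg/m^3), R: target strength term (Pa), N: nose shape factor,
    v0: impact velocity (m/s)."""
    return m / (2.0 * math.pi * a ** 2 * N * rho) \
        * math.log(1.0 + N * rho * v0 ** 2 / R)

# Illustrative 25 kg, 50 mm radius projectile into a concrete-like target.
depth_600 = penetration_depth(m=25.0, a=0.05, rho=2400.0,
                              R=300e6, N=0.3, v0=600.0)
depth_800 = penetration_depth(m=25.0, a=0.05, rho=2400.0,
                              R=300e6, N=0.3, v0=800.0)
```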

  12. A critical review of principal traffic noise models: Strategies and implications

    Energy Technology Data Exchange (ETDEWEB)

    Garg, Naveen, E-mail: ngarg@mail.nplindia.ernet.in [Apex Level Standards and Industrial Metrology Division, CSIR-National Physical Laboratory, New Delhi 110012 (India); Department of Mechanical, Production and Industrial Engineering, Delhi Technological University, Delhi 110042 (India); Maji, Sagar [Department of Mechanical, Production and Industrial Engineering, Delhi Technological University, Delhi 110042 (India)

    2014-04-01

    The paper presents an exhaustive comparison of the principal traffic noise models adopted in recent years in developed nations. The comparison is drawn on the basis of technical attributes, including source modelling and sound propagation algorithms. Although characterizing the source in terms of rolling and propulsion noise, in conjunction with advanced numerical methods for sound propagation, has significantly reduced the uncertainty in traffic noise predictions, the approach is complex and requires specialized mathematical skills, which can be cumbersome for town planners. It is also sometimes difficult to identify the best approach when a variety of solutions have been proposed. This paper critically reviews these aspects of the recent models developed and adopted in some countries, and discusses the strategies followed and the implications of these models. - Highlights: • Principal traffic noise models developed are reviewed. • Sound propagation algorithms used in traffic noise models are compared. • Implications of models are discussed.

  13. Prediction of the critical heat flux for saturated upward flow boiling water in vertical narrow rectangular channels

    International Nuclear Information System (INIS)

    Choi, Gil Sik; Chang, Soon Heung; Jeong, Yong Hoon

    2016-01-01

    A study on a theoretical method to predict the critical heat flux (CHF) of saturated upward flow boiling water in vertical narrow rectangular channels has been conducted. For the assessment of this CHF prediction method, 608 experimental data points were selected from previous studies, in which the heated sections were uniformly heated from both wide surfaces under high-pressure conditions above 41 bar. Representative previous liquid film dryout (LFD) models for circular channels were first reviewed using 6058 points from the KAIST CHF data bank. This review shows that it is reasonable to define the initial conditions of quality and entrainment fraction at onset of annular flow (OAF) as the transition to the annular flow regime and the equilibrium value, respectively, and that the prediction error of an LFD model depends on the accuracy of its constitutive equations for droplet deposition and entrainment. The modified Levy model predicts the CHF data with a standard deviation (SD) of 14.0% and a root mean square error (RMSE) of 14.1%. Meanwhile, the present LFD model, which is based on the constitutive equations developed by Okawa et al., calculates the entire data set with an SD of 17.1% and an RMSE of 17.3%. Because of its qualitative prediction trend and universal calculation convergence, the present model was finally selected as the best LFD model to predict the CHF for narrow rectangular channels. For the assessment of the present LFD model for narrow rectangular channels, 284 effective data points were selected. The present LFD model predicts these data with an RMSE of 22.9% using the zero-liquid-film-flow dryout criterion, but an RMSE of 18.7% using the rivulet formation model. This shows that the prediction error of the present LFD model for narrow rectangular channels is similar to that for circular channels.
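
    The liquid film dryout idea can be sketched as a one-line film mass balance marched along the heated channel: the film is depleted by evaporation and entrainment and replenished by droplet deposition, and CHF is reached where the film flow vanishes. The constant deposition/entrainment rates below are hypothetical stand-ins for the actual constitutive equations (e.g. Okawa et al.'s), which depend on local flow conditions.

```python
def dryout_location(G_lf0, q, h_fg, D_rate, E_rate, L, n=10000):
    """March the liquid-film mass flux G_lf along a uniformly heated channel
    of length L using dG_lf/dz = D - E - q/h_fg (deposition - entrainment -
    evaporation).  Returns the axial position where the film flow reaches
    zero (the dryout / CHF location), or None if the film survives to z = L."""
    dz = L / n
    G_lf = G_lf0
    for i in range(n):
        G_lf += (D_rate - E_rate - q / h_fg) * dz
        if G_lf <= 0.0:
            return (i + 1) * dz
    return None

# Hypothetical numbers: initial film flow, heat flux, latent heat, and
# constant deposition/entrainment rates (units deliberately schematic).
z_d = dryout_location(G_lf0=1.0, q=1.0e6, h_fg=1.5e6, D_rate=0.2, E_rate=0.1, L=2.0)
```

    The rivulet-formation criterion mentioned in the record replaces the zero-film condition with dryout at a small residual film flow, which moves the predicted location upstream.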

  14. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    International Nuclear Information System (INIS)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-01-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model both the mean and the variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but a significant step for the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, estimating the predictive variance makes it possible to evaluate model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta-parameters of the model, to determine when the model should be updated or re-initialised, and to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
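
    Scoring a model by data likelihood, as proposed above, amounts to computing the negative log predictive density of held-out measurements; a minimal sketch assuming per-location Gaussian predictions:

```python
import math

def mean_nlpd(y_obs, mu, var):
    """Average negative log predictive density of held-out observations
    under Gaussian predictions N(mu_i, var_i).  Lower is better.  A model
    that predicts variance can be scored this way; a mean-only map cannot."""
    total = 0.0
    for y, m, v in zip(y_obs, mu, var):
        total += 0.5 * (math.log(2 * math.pi * v) + (y - m) ** 2 / v)
    return total / len(y_obs)

# Toy check: overconfident (tiny-variance) predictions score worse than
# honest ones when the measured concentrations fluctuate strongly.
obs = [1.0, 3.0, 0.5, 2.5]
mu  = [1.5, 2.5, 1.0, 2.0]
honest = mean_nlpd(obs, mu, [1.0] * 4)
overconfident = mean_nlpd(obs, mu, [0.01] * 4)
```

    This is exactly the property the abstract exploits: the likelihood penalizes a model that claims more certainty than the intermittent gas fluctuations warrant.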

  15. Homogeneous nonequilibrium critical flashing flow with a cavity flooding model

    International Nuclear Information System (INIS)

    Lee, S.Y.; Schrock, V.E.

    1989-01-01

    The primary purpose of the work presented here is to describe, on a more physical basis, the model for pressure undershoot at incipient flashing in the critical flow of straight channels (Fanno-type flow) for subcooled or saturated stagnation conditions. In previous models, a modification of the pressure undershoot prediction of Alamgir and Lienhard was used. Their method assumed that nucleation occurs on the bounding walls as a result of molecular fluctuations; without modification it overpredicts the pressure undershoot. In the present work the authors develop a mechanistic model for nucleation from wall cavities. This physical concept is more consistent with the experimental data.

  16. Simple Model for Identifying Critical Regions in Atrial Fibrillation

    Science.gov (United States)

    Christensen, Kim; Manani, Kishan A.; Peters, Nicholas S.

    2015-01-01

    Atrial fibrillation (AF) is the most common abnormal heart rhythm and the single biggest cause of stroke. Ablation, destroying regions of the atria, is applied largely empirically and can be curative but with a disappointing clinical success rate. We design a simple model of activation wave front propagation on an anisotropic structure mimicking the branching network of heart muscle cells. This integration of phenomenological dynamics and pertinent structure shows how AF emerges spontaneously when the transverse cell-to-cell coupling decreases, as occurs with age, beyond a threshold value. We identify critical regions responsible for the initiation and maintenance of AF, the ablation of which terminates AF. The simplicity of the model allows us to calculate analytically the risk of arrhythmia and express the threshold value of transversal cell-to-cell coupling as a function of the model parameters. This threshold value decreases with increasing refractory period by reducing the number of critical regions which can initiate and sustain microreentrant circuits. These biologically testable predictions might inform ablation therapies and arrhythmic risk assessment.

  17. Bootstrap prediction and Bayesian prediction under misspecified models

    OpenAIRE

    Fushiki, Tadayoshi

    2005-01-01

    We consider a statistical prediction problem under misspecified models. In a sense, Bayesian prediction is an optimal prediction method when an assumed model is true. Bootstrap prediction is obtained by applying Breiman's `bagging' method to a plug-in prediction. Bootstrap prediction can be considered to be an approximation to the Bayesian prediction under the assumption that the model is true. However, in applications, there are frequently deviations from the assumed model. In this paper, bo...
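
    Breiman's bagging construction mentioned above is short enough to sketch directly, here with the sample mean as the plug-in predictor. For such a linear statistic the bagged point prediction barely moves; the point is the resampling construction itself.

```python
import random

def plugin_predict(sample):
    """Plug-in prediction: fit the model (here simply the sample mean)
    to the data and predict with the fitted parameter."""
    return sum(sample) / len(sample)

def bootstrap_predict(sample, n_boot=2000, seed=0):
    """Bagged prediction: average the plug-in predictor over bootstrap
    resamples of the data, approximating the Bayesian predictive average
    when the assumed model is true."""
    rng = random.Random(seed)
    n = len(sample)
    preds = []
    for _ in range(n_boot):
        resample = [sample[rng.randrange(n)] for _ in range(n)]
        preds.append(plugin_predict(resample))
    return sum(preds) / n_boot

data = [2.1, 1.9, 2.4, 2.0, 2.2]
```

    For nonlinear plug-in predictors the bagged average genuinely smooths the prediction, which is where the differences under model misspecification discussed in the record arise.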

  18. Questioning the Faith - Models and Prediction in Stream Restoration (Invited)

    Science.gov (United States)

    Wilcock, P.

    2013-12-01

    River management and restoration demand prediction at and beyond our present ability. Management questions, framed appropriately, can motivate fundamental advances in science, although the connection between research and application is not always easy, useful, or robust. Why is that? This presentation considers the connection between models and management, a connection that requires critical and creative thought on both sides. Essential challenges for managers include clearly defining project objectives and accommodating uncertainty in any model prediction. Essential challenges for the research community include matching the appropriate model to project duration, space, funding, information, and social constraints and clearly presenting answers that are actually useful to managers. Better models do not lead to better management decisions or better designs if the predictions are not relevant to and accepted by managers. In fact, any prediction may be irrelevant if the need for prediction is not recognized. The predictive target must be developed in an active dialog between managers and modelers. This relationship, like any other, can take time to develop. For example, large segments of stream restoration practice have remained resistant to models and prediction because the foundational tenet - that channels built to a certain template will be able to transport the supplied sediment with the available flow - has no essential physical connection between cause and effect. Stream restoration practice can be steered in a predictive direction in which project objectives are defined as predictable attributes and testable hypotheses. If stream restoration design is defined in terms of the desired performance of the channel (static or dynamic, sediment surplus or deficit), then channel properties that provide these attributes can be predicted and a basis exists for testing approximations, models, and predictions.

  19. Database and prediction model for CANDU pressure tube diameter

    Energy Technology Data Exchange (ETDEWEB)

    Jung, J.Y.; Park, J.H. [Korea Atomic Energy Research Inst., Daejeon (Korea, Republic of)

    2014-07-01

    The pressure tube (PT) diameter is basic input data for evaluating the critical channel power (CCP) of a CANDU reactor. Since the CCP directly affects the operational margin, an accurate prediction of the PT diameter is important for assessing that margin. However, the PT diameter increases by creep owing to the effects of irradiation by neutron flux, stress, and reactor operating temperatures during the plant service period. Thus, it has been necessary to collect measured PT diameter data, establish a database (DB), and develop a prediction model for the PT diameter. Accordingly, in this study, a DB of measured PT diameter data was established and a neural network (NN) based diameter prediction model was developed. The established DB includes not only the measured diameter data but also operating conditions such as the temperature, pressure, flux, and effective full power date. The currently developed NN-based diameter prediction model considers only extrinsic variables such as the operating conditions, and will be enhanced to consider the effect of intrinsic variables such as the microstructure of the PT material. (author)
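
    A minimal stand-in for the kind of extrinsic-variable regression described above: a plain linear fit by gradient descent on synthetic, normalized operating conditions. The real model is a neural network trained on measured CANDU data; the features, ground truth, and coefficients here are invented for illustration.

```python
import random

def fit_linear(X, y, lr=0.3, epochs=1500):
    """Full-batch gradient descent on squared error for y ~ w.x + b,
    standing in for the NN diameter model (extrinsic variables only)."""
    w, b, n = [0.0] * len(X[0]), 0.0, len(X)
    for _ in range(epochs):
        gw, gb = [0.0] * len(w), 0.0
        for xi, yi in zip(X, y):
            err = sum(wj * xj for wj, xj in zip(w, xi)) + b - yi
            for j, xj in enumerate(xi):
                gw[j] += err * xj / n
            gb += err / n
        w = [wj - lr * gj for wj, gj in zip(w, gw)]
        b -= lr * gb
    return w, b

# Synthetic records: normalized (flux, temperature, effective full power
# date) -> diametral creep strain, generated from an assumed linear truth.
rng = random.Random(1)
X = [[rng.random(), rng.random(), rng.random()] for _ in range(200)]
true_w, true_b = [0.30, 0.15, 0.40], 0.02
y = [sum(wj * xj for wj, xj in zip(true_w, xi)) + true_b for xi in X]
w, b = fit_linear(X, y)
```

    Replacing the linear map with a small neural network changes only `fit_linear`; the DB-driven workflow (operating conditions in, diameter change out) stays the same.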

  20. Safety-critical Java on a time-predictable processor

    DEFF Research Database (Denmark)

    Korsholm, Stephan E.; Schoeberl, Martin; Puffitsch, Wolfgang

    2015-01-01

    For real-time systems the whole execution stack needs to be time-predictable and analyzable for the worst-case execution time (WCET). This paper presents a time-predictable platform for safety-critical Java. The platform consists of (1) the Patmos processor, which is a time-predictable processor......; (2) a C compiler for Patmos with support for WCET analysis; (3) the HVM, which is a Java-to-C compiler; (4) the HVM-SCJ implementation which supports SCJ Level 0, 1, and 2 (for both single and multicore platforms); and (5) a WCET analysis tool. We show that real-time Java programs translated to C...... and compiled to a Patmos binary can be analyzed by the AbsInt aiT WCET analysis tool. To the best of our knowledge the presented system is the second WCET analyzable real-time Java system; and the first one on top of a RISC processor....

  1. Predicting extinction rates in stochastic epidemic models

    International Nuclear Information System (INIS)

    Schwartz, Ira B; Billings, Lora; Dykman, Mark; Landsman, Alexandra

    2009-01-01

    We investigate the stochastic extinction processes in a class of epidemic models. Motivated by the process of natural disease extinction in epidemics, we examine the rate of extinction as a function of disease spread. We show that the effective entropic barrier for extinction in a susceptible–infected–susceptible epidemic model displays scaling with the distance to the bifurcation point, with an unusual critical exponent. We make a direct comparison between predictions and numerical simulations. We also consider the effect of non-Gaussian vaccine schedules, and show numerically how the extinction process may be enhanced when the vaccine schedules are Poisson distributed
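
    The stochastic extinction process in an SIS model can be simulated directly with the Gillespie algorithm. A small sketch (parameters chosen for speed, not realism) showing that extinction times grow sharply once R0 = beta/mu exceeds 1, where extinction requires a rare fluctuation against the deterministic flow:

```python
import random

def sis_extinction_time(N, beta, mu, I0, rng):
    """Gillespie simulation of the stochastic SIS model: infection events
    at rate beta*S*I/N, recoveries at rate mu*I.  Returns the (random)
    time at which the infection goes extinct (I = 0)."""
    t, I = 0.0, I0
    while I > 0:
        rate_inf = beta * (N - I) * I / N
        rate_rec = mu * I
        total = rate_inf + rate_rec
        t += rng.expovariate(total)       # exponential waiting time
        if rng.random() * total < rate_inf:
            I += 1
        else:
            I -= 1
    return t

rng = random.Random(42)
runs = 20
# Below threshold (R0 = 0.5) extinction is fast; above it (R0 = 2) the mean
# extinction time grows roughly exponentially with the population size N.
t_sub = sum(sis_extinction_time(30, 0.5, 1.0, 3, rng) for _ in range(runs)) / runs
t_sup = sum(sis_extinction_time(30, 2.0, 1.0, 3, rng) for _ in range(runs)) / runs
```

    The exponential dependence of mean extinction time on population size is the signature of the entropic barrier whose scaling near the bifurcation point the record analyzes.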

  2. Setting the vision: applied patient-reported outcomes and smart, connected digital healthcare systems to improve patient-centered outcomes prediction in critical illness.

    Science.gov (United States)

    Wysham, Nicholas G; Abernethy, Amy P; Cox, Christopher E

    2014-10-01

    Prediction models in critical illness are generally limited to short-term mortality and uncommonly include patient-centered outcomes. Current outcome prediction tools are also insensitive to individual context or evolution in healthcare practice, potentially limiting their value over time. Improved prognostication of patient-centered outcomes in critical illness could enhance decision-making quality in the ICU. Patient-reported outcomes have emerged as precise methodological measures of patient-centered variables and have been successfully employed using diverse platforms and technologies, enhancing the value of research in critical illness survivorship and in direct patient care. The learning health system is an emerging ideal characterized by integration of multiple data sources into a smart and interconnected health information technology infrastructure with the goal of rapidly optimizing patient care. We propose a vision of a smart, interconnected learning health system with integrated electronic patient-reported outcomes to optimize patient-centered care, including critical care outcome prediction. A learning health system infrastructure integrating electronic patient-reported outcomes may aid in the management of critical illness-associated conditions and yield tools to improve prognostication of patient-centered outcomes in critical illness.

  3. Quality by control: Towards model predictive control of mammalian cell culture bioprocesses.

    Science.gov (United States)

    Sommeregger, Wolfgang; Sissolak, Bernhard; Kandra, Kulwant; von Stosch, Moritz; Mayer, Martin; Striedner, Gerald

    2017-07-01

    The industrial production of complex biopharmaceuticals using recombinant mammalian cell lines is still mainly built on a quality by testing approach, which is represented by fixed process conditions and extensive testing of the end-product. In 2004 the FDA launched the process analytical technology initiative, aiming to guide the industry towards advanced process monitoring and better understanding of how critical process parameters affect the critical quality attributes. Implementation of process analytical technology into the bio-production process enables moving from the quality by testing to a more flexible quality by design approach. The application of advanced sensor systems in combination with mathematical modelling techniques offers enhanced process understanding, allows on-line prediction of critical quality attributes and subsequently real-time product quality control. In this review opportunities and unsolved issues on the road to a successful quality by design and dynamic control implementation are discussed. A major focus is directed on the preconditions for the application of model predictive control for mammalian cell culture bioprocesses. Design of experiments providing information about the process dynamics upon parameter change, dynamic process models, on-line process state predictions and powerful software environments seem to be a prerequisite for quality by control realization. © 2017 The Authors. Biotechnology Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
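
    The model predictive control loop referred to above can be caricatured in a few lines: at each step, pick the input whose predicted trajectory minimizes a cost over a short horizon, apply only the first move, and repeat. Everything below (the scalar linear "process", the coarse input grid, the brute-force search) is an illustrative assumption; real bioprocess MPC uses validated dynamic process models and proper optimizers.

```python
import itertools

def mpc_step(x, setpoint, a, b, horizon, u_grid):
    """Return the first move of the input sequence that minimizes a
    quadratic tracking cost over the prediction horizon, found by
    brute force over a coarse input grid."""
    best_u0, best_cost = u_grid[0], float("inf")
    for seq in itertools.product(u_grid, repeat=horizon):
        xp, cost = x, 0.0
        for u in seq:
            xp = a * xp + b * u                      # predicted state
            cost += (xp - setpoint) ** 2 + 0.01 * u ** 2
        if cost < best_cost:
            best_cost, best_u0 = cost, seq[0]
    return best_u0

# Toy 'process': x stands in for a critical quality attribute predicted by
# a dynamic model; u is the manipulated input (e.g. a feed rate).
a, b, sp = 0.9, 0.5, 1.0
u_grid = [0.0, 0.5, 1.0, 1.5, 2.0]
x, traj = 0.0, []
for _ in range(15):
    u = mpc_step(x, sp, a, b, horizon=3, u_grid=u_grid)
    x = a * x + b * u                                # apply first move only
    traj.append(x)
```

    The receding-horizon structure is what lets on-line predictions of critical quality attributes feed back into real-time control, as the review advocates.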

  4. Detailed physical properties prediction of pure methyl esters for biodiesel combustion modeling

    International Nuclear Information System (INIS)

    An, H.; Yang, W.M.; Maghbouli, A.; Chou, S.K.; Chua, K.J.

    2013-01-01

    Highlights: ► Group contribution methods from molecular level have been used for the prediction. ► Complete prediction of the physical properties for 5 methyl esters has been done. ► The predicted results can be very useful for biodiesel combustion modeling. ► Various models have been compared and the best model has been identified. ► Predicted properties are over large temperature ranges with excellent accuracies. -- Abstract: In order to accurately simulate the fuel spray, atomization, combustion and emission formation processes of a diesel engine fueled with biodiesel, adequate knowledge of biodiesel’s physical properties is desired. The objective of this work is to do a detailed physical properties prediction for the five major methyl esters of biodiesel for combustion modeling. The physical properties considered in this study are: normal boiling point, critical properties, vapor pressure, and latent heat of vaporization, liquid density, liquid viscosity, liquid thermal conductivity, gas diffusion coefficients and surface tension. For each physical property, the best prediction model has been identified, and very good agreements have been obtained between the predicted results and the published data where available. The calculated results can be used as key references for biodiesel combustion modeling.

  5. A critical review of predictive models for the onset of significant void in forced-convection subcooled boiling

    International Nuclear Information System (INIS)

    Dorra, H.; Lee, S.C.; Bankoff, S.G.

    1993-06-01

    Predictive models for the onset of significant void (OSV) in forced-convection subcooled boiling are reviewed and compared with extensive data. Three analytical models and seven empirical correlations are considered in this review. These models and correlations are put onto a common basis and compared with a variety of data. Their range of validity and applicability under various operating conditions is discussed. The results show that the correlation of Saha and Zuber seems to be the best model for predicting OSV in vertical subcooled boiling flow.
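
    The Saha-Zuber correlation singled out above is compact enough to state directly; a sketch using the commonly quoted constants (Nu = 455 below Pe = 70,000, St = 0.0065 above). Fluid-property inputs in the example are round illustrative numbers.

```python
def saha_zuber_osv_subcooling(q, G, D, cp, k):
    """Saha-Zuber prediction of the local subcooling [K] at the onset of
    significant void.  The Peclet number Pe = G*D*cp/k selects the regime:
    thermally controlled (Nu = q*D/(k*dT) = 455) for Pe <= 70000,
    hydrodynamically controlled (St = q/(G*cp*dT) = 0.0065) above."""
    Pe = G * D * cp / k
    if Pe <= 70000.0:
        return 0.0022 * q * D / k       # 0.0022 ~ 1/455
    return 153.8 * q / (G * cp)         # 153.8 ~ 1/0.0065

# Illustrative inputs: q [W/m^2], G [kg/m^2 s], D [m], cp [J/kg K], k [W/m K].
dT_high_Pe = saha_zuber_osv_subcooling(q=1.0e6, G=1000.0, D=0.01, cp=5000.0, k=0.6)
dT_low_Pe  = saha_zuber_osv_subcooling(q=1.0e6, G=0.5,    D=0.01, cp=5000.0, k=0.6)
```

    The two-regime structure is what lets a single correlation cover both low-flow (conduction-dominated) and high-flow (convection-dominated) OSV data.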

  6. A Case Study Using Modeling and Simulation to Predict Logistics Supply Chain Issues

    Science.gov (United States)

    Tucker, David A.

    2007-01-01

    Optimization of critical supply chains to deliver thousands of parts, materials, sub-assemblies, and vehicle structures as needed is vital to the success of the Constellation Program. Thorough analysis needs to be performed on the integrated supply chain processes to plan, source, make, deliver, and return critical items efficiently. Process modeling provides simulation technology-based, predictive solutions for supply chain problems which enable decision makers to reduce costs, accelerate cycle time and improve business performance. For example, United Space Alliance, LLC utilized this approach in late 2006 to build simulation models that recreated shuttle orbiter thruster failures and predicted the potential impact of thruster removals on logistics spare assets. The main objective was the early identification of possible problems in providing thruster spares for the remainder of the Shuttle Flight Manifest. After extensive analysis the model results were used to quantify potential problems and led to improvement actions in the supply chain. Similarly the proper modeling and analysis of Constellation parts, materials, operations, and information flows will help ensure the efficiency of the critical logistics supply chains and the overall success of the program.

  7. Critical percolation in the slow cooling of the bi-dimensional ferromagnetic Ising model

    Science.gov (United States)

    Ricateau, Hugo; Cugliandolo, Leticia F.; Picco, Marco

    2018-01-01

    We study, with numerical methods, the fractal properties of the domain walls found in slow quenches of the kinetic Ising model to its critical temperature. We show that the equilibrium interfaces in the disordered phase have critical percolation fractal dimension over a wide range of length scales. We confirm that the system falls out of equilibrium at a temperature that depends on the cooling rate as predicted by the Kibble-Zurek argument and we prove that the dynamic growing length once the cooling reaches the critical point satisfies the same scaling. We determine the dynamic scaling properties of the interface winding angle variance and we show that the crossover between critical Ising and critical percolation properties is determined by the growing length reached when the system fell out of equilibrium.
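
    The Kibble-Zurek argument invoked above can be summarized in a few lines (standard scaling derivation; the 2D Ising exponents ν = 1 and z ≈ 2.17 are quoted as inputs, not derived here):

```latex
% Linear cooling: \varepsilon(t) = (T - T_c)/T_c = t/\tau_Q, with a
% relaxation time diverging at criticality, \tau(\varepsilon) = \tau_0\,|\varepsilon|^{-\nu z}.
% The system falls out of equilibrium when the remaining time to the
% critical point equals the relaxation time, \tau(\varepsilon(\hat t\,)) = \hat t, giving
\hat t \sim \tau_Q^{\,\nu z/(1+\nu z)}, \qquad
\hat\varepsilon \sim \tau_Q^{-1/(1+\nu z)}, \qquad
\hat\xi \sim \hat\varepsilon^{-\nu} \sim \tau_Q^{\,\nu/(1+\nu z)} .
```

    The last relation is the scaling the record verifies for the dynamic growing length at the moment the cooling reaches the critical point.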

  8. Prediction of Oil Critical Rate in Vertical Wells using Meyer-Gardner ...

    African Journals Online (AJOL)

    PROF HORSFALL

    2018-04-14

    Apr 14, 2018 ... Department of Petroleum and Gas Engineering, Faculty of Engineering, Delta State University, Abraka, Delta State, ..... impermeable barrier, extending radially from the ... useful aid to field engineers for predicting critical rate.

  9. A model to predict stream water temperature across the conterminous USA

    Science.gov (United States)

    Catalina Segura; Peter Caldwell; Ge Sun; Steve McNulty; Yang Zhang

    2014-01-01

    Stream water temperature (ts) is a critical water quality parameter for aquatic ecosystems. However, ts records are sparse or nonexistent in many river systems. In this work, we present an empirical model to predict ts at the site scale across the USA. The model, derived using data from 171 reference sites selected from the Geospatial Attributes of Gages for Evaluating...

  10. ACCEPT: Introduction of the Adverse Condition and Critical Event Prediction Toolbox

    Science.gov (United States)

    Martin, Rodney A.; Santanu, Das; Janakiraman, Vijay Manikandan; Hosein, Stefan

    2015-01-01

    The prediction of anomalies or adverse events is a challenging task, and there are a variety of methods which can be used to address the problem. In this paper, we introduce a generic framework developed in MATLAB® called ACCEPT (Adverse Condition and Critical Event Prediction Toolbox). ACCEPT is an architectural framework designed to compare and contrast the performance of a variety of machine learning and early warning algorithms, and tests the capability of these algorithms to robustly predict the onset of adverse events in any time-series data generating system or process.

  11. Critical evidence for the prediction error theory in associative learning.

    Science.gov (United States)

    Terao, Kanta; Matsumoto, Yukihisa; Mizunami, Makoto

    2015-03-10

    In associative learning in mammals, it is widely accepted that the discrepancy, or error, between actual and predicted reward determines whether learning occurs. Complete evidence for the prediction error theory, however, has not been obtained in any learning systems: Prediction error theory stems from the finding of a blocking phenomenon, but blocking can also be accounted for by other theories, such as the attentional theory. We demonstrated blocking in classical conditioning in crickets and obtained evidence to reject the attentional theory. To obtain further evidence supporting the prediction error theory and rejecting alternative theories, we constructed a neural model to match the prediction error theory, by modifying our previous model of learning in crickets, and we tested a prediction from the model: the model predicts that pharmacological intervention of octopaminergic transmission during appetitive conditioning impairs learning but not formation of reward prediction itself, and it thus predicts no learning in subsequent training. We observed such an "auto-blocking", which could be accounted for by the prediction error theory but not by other competitive theories to account for blocking. This study unambiguously demonstrates validity of the prediction error theory in associative learning.
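
    The prediction error theory being tested above is usually formalized as the Rescorla-Wagner rule, in which learning is driven by the gap between actual and predicted reward. A minimal simulation (learning-rate value is an arbitrary choice) reproducing the blocking design:

```python
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Trial-by-trial Rescorla-Wagner update: every cue present on a trial
    changes its associative strength by alpha * (outcome - total prediction).
    Once the outcome is fully predicted the error is ~0 and learning stops."""
    V = {}
    for cues, reinforced in trials:
        total = sum(V.get(c, 0.0) for c in cues)
        error = (lam if reinforced else 0.0) - total
        for c in cues:
            V[c] = V.get(c, 0.0) + alpha * error
    return V

# Blocking protocol: phase 1 trains cue A alone; phase 2 pairs compound AB
# with the same reward.  A already predicts the reward, so B gains little.
phase1 = [(('A',), True)] * 20
phase2 = [(('A', 'B'), True)] * 20
V_block = rescorla_wagner(phase1 + phase2)
V_ctrl  = rescorla_wagner([(('A', 'B'), True)] * 20)   # no pretraining
```

    The "auto-blocking" experiment in the record follows the same logic: if reward prediction forms but the learning update is blocked pharmacologically, subsequent training should start from a near-zero error, which only the prediction error account expects.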

  12. Critical thinking in clinical nurse education: application of Paul's model of critical thinking.

    Science.gov (United States)

    Andrea Sullivan, E

    2012-11-01

    Nurse educators recognize that many nursing students have difficulty in making decisions in clinical practice. The ability to make effective, informed decisions in clinical practice requires that nursing students know and apply the processes of critical thinking. Critical thinking is a skill that develops over time and requires the conscious application of this process. There are a number of models in the nursing literature to assist students in the critical thinking process; however, these models tend to focus solely on decision making in hospital settings and are often complex to actualize. In this paper, Paul's Model of Critical Thinking is examined for its application to nursing education. I will demonstrate how the model can be used by clinical nurse educators to assist students to develop critical thinking skills in all health care settings in a way that makes critical thinking skills accessible to students. Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. Prediction-error variance in Bayesian model updating: a comparative study

    Science.gov (United States)

    Asadollahi, Parisa; Li, Jian; Huang, Yong

    2017-04-01

    In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of the prediction error variances plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. It is therefore critical for the robustness of structural model updating, especially in the presence of modeling errors. To date, three ways of considering prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of different strategies to deal with the prediction error variances on the model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction variances. The different levels of modeling uncertainty and complexity are modeled through three FE models, including a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on the model updating performance is also examined in the study. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model
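
    Treating the prediction error variance as an uncertain parameter (strategy 3 above) can be sketched with a toy one-parameter model and a grid search over the joint likelihood, in place of the six-parameter TMCMC sampling used in the paper. The "structure", data, and grids below are invented for illustration.

```python
import math

def log_likelihood(theta, sigma, data, model):
    """Gaussian stochastic embedding: measurement = model(theta) + e with
    e ~ N(0, sigma^2), where sigma is the uncertain prediction-error
    standard deviation (updated jointly with theta)."""
    return sum(-0.5 * math.log(2 * math.pi * sigma**2)
               - (d - model(theta))**2 / (2 * sigma**2) for d in data)

# Toy 'structure': one stiffness parameter theta; the model predicts a
# frequency proportional to sqrt(theta).  Data scatter around theta = 4.
model = lambda th: math.sqrt(th)
data = [2.1, 1.9, 2.05, 1.95, 2.0]
thetas = [3.0 + 0.05 * i for i in range(41)]   # 3.0 .. 5.0
sigmas = [0.05 + 0.05 * i for i in range(10)]  # 0.05 .. 0.5
best = max(((t, s) for t in thetas for s in sigmas),
           key=lambda ts: log_likelihood(ts[0], ts[1], data, model))
```

    Note the built-in trade-off: the log-determinant term penalizes large sigma while the residual term penalizes small sigma, so the joint maximum balances data-fit against claimed precision, which is exactly the model-class-selection behaviour discussed above.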

  14. Droplet size and velocity at the exit of a nozzle with two-component near critical and critical flow

    International Nuclear Information System (INIS)

    Lemonnier, H.; Camelo-Cavalcanti, E.S.

    1993-01-01

    Two-component critical flow modelling is an important issue for safety studies of various hazardous industrial activities. When the flow quality is high, the critical flow rate prediction is sensitive to the modelling of gas droplet mixture interfacial area. In order to improve the description of these flows, experiments were conducted with air-water flows in converging nozzles. The pressure was 2 and 4 bar and the gas mass quality ranged between 100% and 20%. The droplets size and velocity have been measured close to the outlet section of a nozzle with a 10 mm diameter throat. Subcritical and critical conditions were observed. These data are compared with the predictions of a critical flow model which includes an interfacial area model based on the classical ideas of Hinze and Kolmogorov. (authors). 9 figs., 12 refs

  15. A theoretical prediction of critical heat flux in saturated pool boiling during power transients

    International Nuclear Information System (INIS)

    Pasamehmetoglu, K.O.; Nelson, R.A.; Gunnerson, F.S.

    1987-01-01

    Understanding and predicting critical heat flux (CHF) behavior during steady-state and transient conditions is of fundamental interest in the design, operation, and safety of boiling and two-phase flow devices. Presented within this paper are the results of a comprehensive theoretical study specifically conducted to model transient CHF behavior in saturated pool boiling. Thermal energy conduction within a heating element and its influence on the CHF are also discussed. The resultant theory provides new insight into the basic physics of the CHF phenomenon and shows favorable agreement with experimental data from cylindrical heaters with small radii. However, the flat-ribbon heater data compare poorly with the present theory, although the general trend is predicted. Finally, various factors that affect the discrepancy between the data and the theory are listed.

  16. Formability prediction for AHSS materials using damage models

    Science.gov (United States)

    Amaral, R.; Santos, Abel D.; José, César de Sá; Miranda, Sara

    2017-05-01

    Advanced high strength steels (AHSS) are seeing increased use, mostly due to lightweight design in the automobile industry and strict regulations on safety and greenhouse gas emissions. However, these materials, characterized by a high strength-to-weight ratio, stiffness, and high work hardening at early stages of plastic deformation, have imposed many challenges on the sheet metal industry, mainly their low formability and different behaviour compared to traditional steels. This makes it a demanding task both to obtain a successful component and to predict material behaviour and fracture limits in numerical simulation. Although numerical prediction of critical strains in sheet metal forming processes is still very often based on the classic forming limit diagrams, alternative approaches can use damage models, which are based on stress states to predict failure during the forming process and can be classified as empirical, physics-based, and phenomenological models. In the present paper a comparative analysis of different ductile damage models is carried out, in order to numerically evaluate two isotropic coupled damage models, proposed by Johnson-Cook and Gurson-Tvergaard-Needleman (GTN), corresponding to the first two groups of the above classification. Finite element analysis is used with these damage mechanics approaches and the results are compared with experimental Nakajima tests, making it possible to evaluate and validate their ability to predict damage and formability limits.
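
    Of the two damage models named above, the Johnson-Cook criterion is simple enough to sketch directly. The coefficients below are the often-quoted 4340-steel values, used purely for illustration, and the rate and temperature terms of the full criterion are omitted.

```python
import math

def jc_fracture_strain(eta, d1, d2, d3):
    """Quasi-static, isothermal part of the Johnson-Cook failure strain:
    eps_f = D1 + D2*exp(D3*eta), with eta the stress triaxiality."""
    return d1 + d2 * math.exp(d3 * eta)

def jc_damage(strain_path, d1=0.05, d2=3.44, d3=-2.12):
    """Linear damage accumulation D = sum(delta_eps / eps_f(eta)) along a
    loading path of (strain increment, triaxiality) steps; fracture is
    predicted when D reaches 1."""
    D = 0.0
    for d_eps, eta in strain_path:
        D += d_eps / jc_fracture_strain(eta, d1, d2, d3)
        if D >= 1.0:
            return D, True
    return D, False

# Same total strain (1.0) at two triaxialities: the biaxial-stretch path
# (eta = 2/3) reaches D = 1, the near-uniaxial path (eta = 1/3) does not.
D_biax, failed_biax = jc_damage([(0.01, 2.0 / 3.0)] * 100)
D_uni,  failed_uni  = jc_damage([(0.01, 1.0 / 3.0)] * 100)
```

    This stress-state dependence is exactly what lets damage models go beyond strain-based forming limit diagrams: the same strain path can be safe or critical depending on triaxiality.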

  17. Formability prediction for AHSS materials using damage models

    International Nuclear Information System (INIS)

    Amaral, R.; Miranda, Sara; Santos, Abel D.; José, César de Sá

    2017-01-01

    Advanced high strength steels (AHSS) are seeing increased use, mostly due to lightweight design in the automobile industry and strict regulations on safety and greenhouse gas emissions. However, these materials, characterized by a high strength-to-weight ratio, high stiffness, and high work hardening at early stages of plastic deformation, pose many challenges in the sheet metal industry, mainly because of their low formability and different behaviour compared to traditional steels. This can make it demanding both to obtain a successful component and to use numerical simulation to predict material behaviour and fracture limits. Although numerical prediction of critical strains in sheet metal forming processes is still very often based on classic forming limit diagrams, alternative approaches can use damage models, which are based on stress states to predict failure during the forming process and can be classified as empirical, physics-based, and phenomenological models. In the present paper a comparative analysis of different ductile damage models is carried out in order to numerically evaluate two isotropic coupled damage models, proposed by Johnson-Cook and Gurson-Tvergaard-Needleman (GTN), corresponding to the first two groups of this classification. Finite element analysis is used with these damage mechanics approaches, and the obtained results are compared with experimental Nakajima tests, making it possible to evaluate and validate the ability of the previously defined approaches to predict damage and formability limits. (paper)

  18. New model for burnout prediction in channels of various cross-section

    Energy Technology Data Exchange (ETDEWEB)

    Bobkov, V.P.; Kozina, N.V.; Vinogrado, V.N.; Zyatnina, O.A. [Institute of Physics and Power Engineering, Kaluga (Russian Federation)

    1995-09-01

    The model developed to predict the critical heat flux (CHF) in channels of various cross-section is presented, together with the results of a data analysis. The model implements a relative method of describing CHF, based on data for a round tube combined with a system of correction factors. The results of the data description presented here are for rectangular and triangular channels, annuli, and rod bundles.
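
    The correction-factor structure described above can be sketched as follows. This is a purely illustrative skeleton: the base correlation and all numeric factors below are invented placeholders, not the actual IPPE correlation or its fitted coefficients.

```python
# Illustrative sketch of a correction-factor CHF model: the CHF for a
# non-circular channel is expressed as a round-tube base value multiplied
# by a geometry correction factor. All numbers are hypothetical.

def chf_round_tube(pressure_mpa, mass_flux, quality):
    """Placeholder base correlation for a round tube (purely illustrative)."""
    return 3.0e6 * (1.0 - 0.5 * quality) * (mass_flux / 1000.0) ** 0.3 / (1.0 + 0.05 * pressure_mpa)

CORRECTION_FACTORS = {  # hypothetical shape factors relative to a round tube
    "round_tube": 1.00,
    "rectangular": 0.92,
    "triangular": 0.85,
    "annulus": 0.95,
    "rod_bundle": 0.90,
}

def chf_channel(geometry, pressure_mpa, mass_flux, quality):
    """CHF estimate for an arbitrary channel: round-tube value times a factor."""
    return chf_round_tube(pressure_mpa, mass_flux, quality) * CORRECTION_FACTORS[geometry]

if __name__ == "__main__":
    base = chf_channel("round_tube", 7.0, 2000.0, 0.2)
    tri = chf_channel("triangular", 7.0, 2000.0, 0.2)
    print(f"round tube: {base:.3e} W/m^2, triangular: {tri:.3e} W/m^2")
```

    The design choice the abstract describes is exactly this factorization: one well-validated round-tube correlation carries the thermal-hydraulic dependence, and geometry enters only through multiplicative corrections.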

  19. Predictive multiscale computational model of shoe-floor coefficient of friction.

    Science.gov (United States)

    Moghaddam, Seyed Reza M; Acharya, Arjun; Redfern, Mark S; Beschorner, Kurt E

    2018-01-03

    Understanding the frictional interactions between the shoe and the floor during walking is critical to the prevention of slips and falls, particularly when contaminants are present. A multiscale finite element model of shoe-floor-contaminant friction was developed that takes into account the surface and material characteristics of the shoe and flooring at microscopic and macroscopic scales. The model calculates the shoe-floor coefficient of friction (COF) in the boundary lubrication regime, where the effects of adhesion friction and hydrodynamic pressures are negligible. The validity of the model outputs was assessed by comparing model predictions to experimental results from mechanical COF testing. The multiscale model estimates were linearly related to the experimental results (p < 0.0001). The model predicted 73% of the variability in experimentally measured shoe-floor-contaminant COF. The results demonstrate the potential of multiscale finite element modeling in aiding slip-resistant shoe and flooring design and reducing slip and fall injuries. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  20. A Personalized Predictive Framework for Multivariate Clinical Time Series via Adaptive Model Selection.

    Science.gov (United States)

    Liu, Zitao; Hauskrecht, Milos

    2017-11-01

    Building an accurate predictive model of clinical time series for a patient is critical for understanding the patient's condition, its dynamics, and optimal patient management. Unfortunately, this process is not straightforward. First, patient-specific variations are typically large, and population-based models derived or learned from many different patients are often unable to support accurate predictions for each individual patient. Moreover, the time series observed for one patient at any point in time may be too short and insufficient to learn a high-quality patient-specific model from the patient's own data alone. To address these problems we propose, develop, and experiment with a new adaptive forecasting framework for building multivariate clinical time series models for a patient and for supporting patient-specific predictions. The framework relies on an adaptive model-switching approach that at any point in time selects the most promising time series model out of a pool of many possible models and, consequently, combines the advantages of population, patient-specific, and short-term individualized predictive models. We demonstrate that the adaptive model-switching framework is a very promising approach to supporting personalized time series prediction and that it is able to outperform predictions based on pure population and patient-specific models, as well as other patient-specific model adaptation strategies.
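
    The model-switching idea can be sketched in a few lines: keep a pool of forecasters (population-based, patient-specific, short-term) and, at each step, let the one with the lowest recent error make the next prediction. This is our minimal illustration, not the authors' framework; the three toy forecasters and the error window are assumptions.

```python
import numpy as np

def population_model(history, population_mean):
    return population_mean                      # population-based predictor

def patient_model(history, population_mean):
    return float(np.mean(history))              # patient-specific mean

def short_term_model(history, population_mean):
    return history[-1]                          # persistence / last observed value

def switching_forecast(series, population_mean, window=5):
    """One-step-ahead forecasts, switching to the recently best model at each step."""
    models = [population_model, patient_model, short_term_model]
    errors = [[] for _ in models]               # per-model squared-error histories
    preds = []
    for t in range(1, len(series)):
        history = series[:t]
        # score each model by its mean squared error over the recent window
        scores = [np.mean(e[-window:]) if e else 0.0 for e in errors]
        best = int(np.argmin(scores))
        preds.append(models[best](history, population_mean))
        for i, m in enumerate(models):
            errors[i].append((m(history, population_mean) - series[t]) ** 2)
    return np.array(preds)
```

    The selection rule is deliberately simple (lowest windowed MSE); the paper's framework is more sophisticated, but the structure of maintaining per-model error histories and switching adaptively is the same.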

  1. Critical heat flux predictions for the Sandia Annular Core Research Reactor

    International Nuclear Information System (INIS)

    Rao, D.V.; El-Genk, M.S.

    1994-08-01

    This study provides best-estimate predictions of the Critical Heat Flux (CHF) and the Critical Heat Flux Ratio (CHFR) to support the proposed upgrade of the Annular Core Research Reactor (ACRR) at Sandia National Laboratories (SNL) from its present power of 2 MWt to 4 MWt. These predictions are based on the University of New Mexico (UNM) CHF correlation, originally developed for uniformly heated vertical annuli. The UNM-CHF correlation is applicable to low-flow and low-pressure conditions, which are typical of those in the ACRR. The three hypotheses examined for the effect of the nonuniform axial heat flux distribution in the ACRR core are (1) the local conditions hypothesis, (2) the total power hypothesis, and (3) the global conditions hypothesis. These hypotheses, in conjunction with the UNM-CHF correlation, are used to estimate the CHF and CHFR in the ACRR. Because the total power hypothesis predictions of power per rod at CHF are approximately 15%-20% lower than those corresponding to saturation exit conditions, it can be concluded that the total power hypothesis considerably underestimates the CHF for nonuniformly heated geometries. This conclusion is in agreement with previous experimental results. The global conditions hypothesis, which is more conservative and more accurate than the other two, provides the most reliable predictions of CHF/CHFR for the ACRR. The global conditions hypothesis predictions of CHFR varied between 2.1 and 3.9, with the higher value corresponding to the lower water inlet temperature of 20 degrees C.

  2. Particle swarm optimization-based least squares support vector regression for critical heat flux prediction

    International Nuclear Information System (INIS)

    Jiang, B.T.; Zhao, F.Y.

    2013-01-01

    Highlights: ► CHF data are collected from the published literature. ► Less training data are used to train the LSSVR model. ► PSO is adopted to optimize the key parameters to improve the model precision. ► The reliability of LSSVR is proved through parametric trends analysis. - Abstract: In view of the practical importance of critical heat flux (CHF) for the design and safety of nuclear reactors, accurate prediction of CHF is of utmost significance. This paper presents a novel approach using least squares support vector regression (LSSVR) and particle swarm optimization (PSO) to predict CHF. Two available published datasets are used to train and test the proposed algorithm, in which PSO is employed to search for the best parameters involved in the LSSVR model. The CHF values obtained by the LSSVR model are compared with the corresponding experimental values and those of a previous method, an adaptive neuro-fuzzy inference system (ANFIS). This comparison is also carried out in the investigation of parametric trends of CHF. It is found that the proposed method can achieve the desired performance and yields a more satisfactory fit with experimental results than ANFIS. Therefore, the LSSVR method is likely to be suitable for processing other parameters such as CHF.

  3. Predicting turns in proteins with a unified model.

    Directory of Open Access Journals (Sweden)

    Qi Song

    Full Text Available MOTIVATION: Turns are a critical element of the structure of a protein; turns play a crucial role in loops, folds, and interactions. Current prediction methods are well developed for the prediction of individual turn types, including α-turn, β-turn, and γ-turn, etc. However, for further protein structure and function prediction it is necessary to develop a uniform model that can accurately predict all types of turns simultaneously. RESULTS: In this study, we present a novel approach, TurnP, which offers the ability to investigate all the turns in a protein based on a unified model. The main characteristics of TurnP are: (i) using newly exploited features of structural evolution information (secondary structure and shape string of protein) based on structure homologies, (ii) considering all types of turns in a unified model, and (iii) practical capability of accurate prediction of all turns simultaneously for a query. TurnP utilizes predicted secondary structures and predicted shape strings, both of which have high accuracy, based on innovative technologies which were both developed by our group. Then, sequence and structural evolution features, which are the profile of the sequence, the profile of secondary structures, and the profile of shape strings, are generated by sequence and structure alignment. When TurnP was validated on a non-redundant dataset (4,107 entries) by five-fold cross-validation, we achieved an accuracy of 88.8% and a sensitivity of 71.8%, which exceeded the most state-of-the-art predictors of particular turn types. Newly determined sequences and the EVA and CASP9 datasets were used as independent tests, and the results we achieved were outstanding for turn predictions, confirming the good performance of TurnP for practical applications.

  4. Higher spin currents in the critical O(N) vector model at 1/N²

    International Nuclear Information System (INIS)

    Manashov, A.N.; Strohmaier, M.

    2017-06-01

    We calculate the anomalous dimensions of higher spin singlet currents in the critical O(N) vector model at order 1/N². The results are shown to be in agreement with the four-loop perturbative computation in φ⁴ theory in 4-2ε dimensions. It is known that the order 1/N anomalous dimensions of higher-spin currents happen to be the same in the Gross-Neveu and the critical vector model. On the contrary, the order 1/N² corrections are different. The results can also be interpreted as a prediction for the two-loop computation in the dual higher-spin gravity.

  5. The Prediction of Drought-Related Tree Mortality in Vegetation Models

    Science.gov (United States)

    Schwinning, S.; Jensen, J.; Lomas, M. R.; Schwartz, B.; Woodward, F. I.

    2013-12-01

    Drought-related tree die-off events at regional scales have been reported from all wooded continents and it has been suggested that their frequency may be increasing. The prediction of these drought-related die-off events from regional to global scales has been recognized as a critical need for the conservation of forest resources and improving the prediction of climate-vegetation interactions. However, there is no conceptual consensus on how to best approach the quantitative prediction of tree mortality. Current models use a variety of mechanisms to represent demographic events. Mortality is modeled to represent a number of different processes, including death by fire, wind throw, extreme temperatures, and self-thinning, and each vegetation model differs in the emphasis they place on specific mechanisms. Dynamic global vegetation models generally operate on the assumption of incremental vegetation shift due to changes in the carbon economy of plant functional types and proportional effects on recruitment, growth, competition and mortality, but this may not capture sudden and sweeping tree death caused by extreme weather conditions. We tested several different approaches to predicting tree mortality within the framework of the Sheffield Dynamic Global Vegetation Model. We applied the model to the state of Texas, USA, which in 2011 experienced extreme drought conditions, causing the death of an estimated 300 million trees statewide. We then compared predicted to actual mortality to determine which algorithms most accurately predicted geographical variation in tree mortality. We discuss implications regarding the ongoing debate on the causes of tree death.

  6. Predictive modeling capabilities from incident powder and laser to mechanical properties for laser directed energy deposition

    Science.gov (United States)

    Shin, Yung C.; Bailey, Neil; Katinas, Christopher; Tan, Wenda

    2018-01-01

    This paper presents an overview of vertically integrated, comprehensive predictive modeling capabilities for directed energy deposition processes, which have been developed at Purdue University. The overall predictive framework consists of several vertically integrated modules, including a powder flow model, a molten pool model, a microstructure prediction model, and a residual stress model, which can be used for predicting the mechanical properties of additively manufactured parts produced by directed energy deposition with blown powder, as well as by other additive manufacturing processes. The critical governing equations of each model and how the various modules are connected are illustrated. Various illustrative results, along with corresponding experimental validation results, are presented to demonstrate the capabilities and fidelity of the models. The good correlations with experimental results show that the integrated models can be used to design metal additive manufacturing processes and predict the resultant microstructure and mechanical properties.

  7. Predictive modeling capabilities from incident powder and laser to mechanical properties for laser directed energy deposition

    Science.gov (United States)

    Shin, Yung C.; Bailey, Neil; Katinas, Christopher; Tan, Wenda

    2018-05-01

    This paper presents an overview of vertically integrated, comprehensive predictive modeling capabilities for directed energy deposition processes, which have been developed at Purdue University. The overall predictive framework consists of several vertically integrated modules, including a powder flow model, a molten pool model, a microstructure prediction model, and a residual stress model, which can be used for predicting the mechanical properties of additively manufactured parts produced by directed energy deposition with blown powder, as well as by other additive manufacturing processes. The critical governing equations of each model and how the various modules are connected are illustrated. Various illustrative results, along with corresponding experimental validation results, are presented to demonstrate the capabilities and fidelity of the models. The good correlations with experimental results show that the integrated models can be used to design metal additive manufacturing processes and predict the resultant microstructure and mechanical properties.

  8. Mathematical modeling in biology: A critical assessment

    Energy Technology Data Exchange (ETDEWEB)

    Buiatti, M. [Florence, Univ. (Italy). Dipt. di Biologia Animale e Genetica

    1998-01-01

    The molecular revolution and the development of biology-derived industry have led in the last fifty years to an unprecedented 'leap forward' of the life sciences in terms of experimental data. Less success has been achieved in the organisation of such data and in the consequent development of adequate explanatory and predictive theories and models. After a brief historical excursus, the inborn difficulties of mathematisation of biological objects and processes, which derive from the complex dynamics of life, are discussed along with the logical tools (simplifications, choice of observation points, etc.) used to overcome them. 'Autistic', monodisciplinary attitudes towards biological modeling among mathematicians, physicists and biologists, aimed in each case at using the tools of other disciplines to solve 'selfish' problems, are also taken into account, and a warning against the derived dangers (reification of monodisciplinary metaphors, lack of falsification, etc.) is given. Finally, 'top-down' (deductive) and 'bottom-up' (inductive) heuristic interactive approaches to mathematisation are critically discussed with the help of a series of examples.

  9. Mathematical modeling in biology: A critical assessment

    International Nuclear Information System (INIS)

    Buiatti, M.

    1998-01-01

    The molecular revolution and the development of biology-derived industry have led in the last fifty years to an unprecedented 'leap forward' of the life sciences in terms of experimental data. Less success has been achieved in the organisation of such data and in the consequent development of adequate explanatory and predictive theories and models. After a brief historical excursus, the inborn difficulties of mathematisation of biological objects and processes, which derive from the complex dynamics of life, are discussed along with the logical tools (simplifications, choice of observation points, etc.) used to overcome them. 'Autistic', monodisciplinary attitudes towards biological modeling among mathematicians, physicists and biologists, aimed in each case at using the tools of other disciplines to solve 'selfish' problems, are also taken into account, and a warning against the derived dangers (reification of monodisciplinary metaphors, lack of falsification, etc.) is given. Finally, 'top-down' (deductive) and 'bottom-up' (inductive) heuristic interactive approaches to mathematisation are critically discussed with the help of a series of examples.

  10. A prediction model of short-term ionospheric foF2 Based on AdaBoost

    Science.gov (United States)

    Zhao, Xiukuan; Liu, Libo; Ning, Baiqi

    Accurate specifications of spatial and temporal variations of the ionosphere during geomagnetically quiet and disturbed conditions are critical for applications such as HF communications, satellite positioning and navigation, power grids, pipelines, etc. Therefore, developing empirical models to forecast ionospheric perturbations is of high priority for real applications. The critical frequency of the F2 layer, foF2, is an important ionospheric parameter, especially for radio wave propagation applications. In this paper, the AdaBoost-BP algorithm is used to construct a new model to predict the critical frequency of the ionospheric F2-layer one hour ahead. Different indices were used to characterize ionospheric diurnal and seasonal variations and their dependence on solar and geomagnetic activity. These indices, together with the current observed foF2 value, were input into the prediction model, and the foF2 value one hour ahead was output. We analyzed twenty-two years of foF2 data from nine ionosonde stations in the East-Asian sector in this work. The first eleven years of data were used as a training dataset and the second eleven years as a testing dataset. The results show that the performance of AdaBoost-BP is better than those of the BP Neural Network (BPNN), Support Vector Regression (SVR), and the IRI model. For example, the AdaBoost-BP prediction absolute error of foF2 at Irkutsk station (a middle-latitude station) is 0.32 MHz, which is better than 0.34 MHz from BPNN and 0.35 MHz from SVR, and also significantly outperforms the IRI model, whose absolute error is 0.64 MHz. Meanwhile, the AdaBoost-BP prediction absolute error at the low-latitude Taipei station is 0.78 MHz, which is better than 0.81 MHz from BPNN, 0.81 MHz from SVR, and 1.37 MHz from the IRI model. Finally, the variation of the AdaBoost-BP prediction error with season, solar activity, and latitude is also discussed in the paper.
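
    The boosting scheme used above can be sketched with the AdaBoost.R2 algorithm (Drucker, 1997), which is the standard regression variant behind such "AdaBoost-regressor" models. For brevity we use a weighted linear fit as a stand-in for the BP neural network base learner; the synthetic data, base learner, and loss choice are our assumptions, not the paper's setup.

```python
import numpy as np

def fit_base(X, y, w):
    # weighted least-squares line: the stand-in "weak" regressor
    coef = np.polyfit(X.ravel(), y, deg=1, w=np.sqrt(w))
    return lambda Xn: np.polyval(coef, Xn.ravel())

def adaboost_r2(X, y, n_rounds=10):
    """AdaBoost.R2 with linear loss and weighted-median aggregation."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    learners, betas = [], []
    for _ in range(n_rounds):
        model = fit_base(X, y, w)
        err = np.abs(model(X) - y)
        L = err / err.max() if err.max() > 0 else err     # linear loss in [0, 1]
        eps = float(np.sum(w * L))
        if eps >= 0.5:
            if not learners:                               # keep at least one learner
                learners.append(model); betas.append(0.5)
            break
        beta = max(eps, 1e-12) / (1.0 - eps)
        w = w * beta ** (1.0 - L)                          # down-weight easy samples
        w /= w.sum()
        learners.append(model)
        betas.append(beta)

    def predict(Xn):
        preds = np.array([m(Xn) for m in learners])        # shape (rounds, n_samples)
        wts = np.log(1.0 / np.array(betas))
        order = np.argsort(preds, axis=0)                  # weighted median per sample
        csum = np.cumsum(wts[order], axis=0)
        idx = (csum >= 0.5 * wts.sum()).argmax(axis=0)
        cols = np.arange(preds.shape[1])
        return preds[order[idx, cols], cols]
    return predict
```

    Replacing `fit_base` with a small neural network trained by backpropagation recovers the AdaBoost-BP structure the abstract describes.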

  11. Acute Kidney Injury in Trauma Patients Admitted to Critical Care: Development and Validation of a Diagnostic Prediction Model.

    Science.gov (United States)

    Haines, Ryan W; Lin, Shih-Pin; Hewson, Russell; Kirwan, Christopher J; Torrance, Hew D; O'Dwyer, Michael J; West, Anita; Brohi, Karim; Pearse, Rupert M; Zolfaghari, Parjam; Prowle, John R

    2018-02-26

    Acute Kidney Injury (AKI) complicating major trauma is associated with increased mortality and morbidity. Traumatic AKI has specific risk factors and a predictable time-course, facilitating diagnostic modelling. In a single-centre, retrospective observational study, we developed risk prediction models for AKI after trauma based on data available around intensive care admission. Models predicting AKI were developed using data from 830 patients, using data reduction followed by logistic regression, and were independently validated in a further 564 patients. AKI occurred in 163/830 (19.6%), with 42 (5.1%) receiving renal replacement therapy (RRT). First serum creatinine and phosphate, units of blood transfused in the first 24 h, age, and Charlson score discriminated the need for RRT and AKI early after trauma. For RRT, c-statistics were good to excellent: development 0.92 (0.88-0.96), validation 0.91 (0.86-0.97). Modelling AKI stage 2-3, c-statistics were also good: development 0.81 (0.75-0.88) and validation 0.83 (0.74-0.92). The model predicting AKI stage 1-3 performed moderately: development c-statistic 0.77 (0.72-0.81), validation 0.70 (0.64-0.77). Despite good discrimination of the need for RRT, the positive predictive value (PPV) at the optimal cut-off was only 23.0% (13.7-42.7) in development. However, the PPV for the alternative endpoint of RRT and/or death improved to 41.2% (34.8-48.1), highlighting death as a clinically relevant endpoint alongside RRT.
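
    The modelling recipe above (logistic regression scored by the c-statistic, i.e. the area under the ROC curve) can be sketched on synthetic data. The cohort, predictors, and coefficients below are entirely invented for illustration; only the method matches the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_logistic(X, y, lr=0.1, iters=2000):
    """Logistic regression by batch gradient descent on the log-loss."""
    Xb = np.hstack([np.ones((len(X), 1)), X])     # add intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)         # gradient of mean log-loss
    return w

def predict_proba(w, X):
    Xb = np.hstack([np.ones((len(X), 1)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

def c_statistic(y, scores):
    # probability a random positive outranks a random negative (ties count 0.5)
    pos, neg = scores[y == 1], scores[y == 0]
    diff = pos[:, None] - neg[None, :]
    return float((diff > 0).mean() + 0.5 * (diff == 0).mean())

# synthetic cohort: two informative predictors standing in for, e.g.,
# admission creatinine and units transfused (purely hypothetical)
n = 600
X = rng.normal(size=(n, 2))
logit = -1.5 + 1.2 * X[:, 0] + 0.8 * X[:, 1]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

w = fit_logistic(X, y)
auc = c_statistic(y, predict_proba(w, X))
```

    In practice the c-statistic would be reported on an independent validation cohort, as in the study above; here it is computed in-sample purely to keep the sketch short.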

  12. Modelling bankruptcy prediction models in Slovak companies

    Directory of Open Access Journals (Sweden)

    Kovacova Maria

    2017-01-01

    Full Text Available Intensive research by academics and practitioners has been devoted to models for bankruptcy prediction and credit risk management. In spite of numerous studies focusing on forecasting bankruptcy using traditional statistical techniques (e.g., discriminant analysis and logistic regression) and early artificial intelligence models (e.g., artificial neural networks), there is a trend of transition to machine learning models (support vector machines, bagging, boosting, and random forests) to predict bankruptcy one year prior to the event. Comparing the performance of this unconventional approach with results obtained by discriminant analysis, logistic regression, and neural networks, it has been found that bagging, boosting, and random forest models outperform the other techniques, and that prediction accuracy in the testing sample improves when additional variables are included. On the other hand, the prediction accuracy of old and well-known bankruptcy prediction models is quite high. Therefore, we aim to analyse these older models on a dataset of Slovak companies to validate their prediction ability under specific conditions. Furthermore, these models will be modelled according to new trends by calculating the influence of the elimination of selected variables on their overall prediction ability.

  13. Improving Agent Based Modeling of Critical Incidents

    Directory of Open Access Journals (Sweden)

    Robert Till

    2010-04-01

    Full Text Available Agent Based Modeling (ABM) is a powerful method that has been used to simulate potential critical incidents in the infrastructure and built environments. This paper will discuss the modeling of some critical incidents currently simulated using ABM and how they may be expanded and improved by using better physiological modeling, psychological modeling, modeling the actions of interveners, introducing Geographic Information Systems (GIS), and open source models.

  14. Prediction of Chemical Function: Model Development and ...

    Science.gov (United States)

    The United States Environmental Protection Agency's Exposure Forecaster (ExpoCast) project is developing both statistical and mechanism-based computational models for predicting exposures to thousands of chemicals, including those in consumer products. The high-throughput (HT) screening-level exposures developed under ExpoCast can be combined with HT screening (HTS) bioactivity data for the risk-based prioritization of chemicals for further evaluation. The functional role (e.g. solvent, plasticizer, fragrance) that a chemical performs can drive both the types of products in which it is found and the concentration in which it is present, thereby impacting exposure potential. However, critical chemical use information (including functional role) is lacking for the majority of commercial chemicals for which exposure estimates are needed. A suite of machine-learning based models for classifying chemicals in terms of their likely functional roles in products, based on structure, was developed. This effort required collection, curation, and harmonization of publicly available data sources of chemical functional use information from government and industry bodies. Physicochemical and structure descriptor data were generated for chemicals with function data. Machine-learning classifier models for function were then built in a cross-validated manner from the descriptor/function data using the method of random forests. The models were applied to: 1) predict chemi

  15. Comparison of the CATHENA model of Gentilly-2 end shield cooling system predictions to station data

    Energy Technology Data Exchange (ETDEWEB)

    Zagre, G.; Sabourin, G. [Candu Energy Inc., Montreal, Quebec (Canada); Chapados, S. [Hydro-Quebec, Montreal, Quebec (Canada)

    2012-07-01

    As part of the Gentilly-2 Refurbishment Project, Hydro-Quebec has elected to perform the End Shield Cooling Safety Analysis. A CATHENA model of the Gentilly-2 End Shield Cooling System was developed for this purpose. This model includes new elements compared to other CANDU6 End Shield Cooling models, such as a detailed heat exchanger and control logic model. In order to test the model's robustness and accuracy, the model predictions were compared with plant measurements. This paper summarizes this comparison between the model predictions and the station measurements. It is shown that the CATHENA model is flexible and accurate enough to predict station measurements for critical parameters, and that the detailed heat exchanger model allows station transients to be reproduced. (author)

  16. Degradation Prediction Model Based on a Neural Network with Dynamic Windows

    Science.gov (United States)

    Zhang, Xinghui; Xiao, Lei; Kang, Jianshe

    2015-01-01

    Tracking the degradation of mechanical components is very critical for effective maintenance decision making. Remaining useful life (RUL) estimation is a widely used form of degradation prediction. RUL prediction methods for the case where enough run-to-failure condition monitoring data are available have been fully researched, but for some high-reliability components it is very difficult to collect run-to-failure condition monitoring data, i.e., data covering the whole path from normal operation to failure. Only a certain number of condition indicators over a certain period can be used to estimate RUL. In addition, some existing prediction methods have poor extrapolability, which blocks RUL estimation: the predicted value converges to a certain constant or fluctuates within a certain range. Moreover, fluctuating condition features also have adverse effects on prediction. In order to solve these dilemmas, this paper proposes a RUL prediction model based on a neural network with dynamic windows. This model mainly consists of three steps: window size determination by increasing rate, change point detection, and rolling prediction. The proposed method has two dominant strengths. One is that the proposed approach does not need to assume that the degradation trajectory is subject to a certain distribution. The other is that it can adapt to variation of degradation indicators, which greatly benefits RUL prediction. Finally, the performance of the proposed RUL prediction model is validated by real field data and simulation data. PMID:25806873
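
    The three steps above (dynamic window sizing, change-point detection, rolling prediction) can be sketched with a deliberately simple stand-in: a naive jump detector and linear trend extrapolation in place of the paper's neural network. The threshold and window limits are our assumptions.

```python
import numpy as np

def detect_change_point(series, min_jump=0.5):
    """Naive change-point test: index after the largest first difference,
    if that difference exceeds a threshold; otherwise 0 (no change point)."""
    diffs = np.abs(np.diff(series))
    i = int(np.argmax(diffs))
    return i + 1 if diffs[i] > min_jump else 0

def rolling_forecast(series, horizon=5, max_window=20):
    """Forecast the next `horizon` points from a dynamically sized window."""
    cp = detect_change_point(series)
    window = min(max_window, len(series) - cp)   # use only data after the change point
    recent = series[-window:]
    t = np.arange(window)
    slope, intercept = np.polyfit(t, recent, deg=1)
    future_t = np.arange(window, window + horizon)
    return intercept + slope * future_t          # extrapolated degradation indicator
```

    RUL would then be read off as the first forecast step at which the extrapolated indicator crosses a failure threshold; the point of the dynamic window is that the fit adapts to the post-change-point regime instead of averaging over the whole history.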

  17. Modeling and prediction of flotation performance using support vector regression

    Directory of Open Access Journals (Sweden)

    Despotović Vladimir

    2017-01-01

    Full Text Available Continuous efforts have been made in recent years to improve the process of paper recycling, as it is of critical importance for saving wood, water, and energy resources. Flotation deinking is considered to be one of the key methods for separating ink particles from cellulose fibres. Attempts to model the flotation deinking process have often resulted in complex models that are difficult to implement and use. In this paper a model for the prediction of flotation performance based on Support Vector Regression (SVR) is presented. Representative data samples were created in the laboratory under a variety of practical control variables for the flotation deinking process, including different reagents, pH values, and flotation residence times. A predictive model trained on these data samples was created, and the flotation performance was assessed, showing that Support Vector Regression is a promising method even when the dataset used for training the model is limited.
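
    A small SVR regression of this kind can be sketched with scikit-learn. The inputs (reagent dose, pH, residence time) and the response surface below are invented for illustration, not the laboratory dataset of the paper.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(3)
n = 120
X = np.column_stack([
    rng.uniform(0.1, 1.0, n),   # reagent dosage (hypothetical units)
    rng.uniform(6.0, 10.0, n),  # pulp pH
    rng.uniform(2.0, 12.0, n),  # flotation residence time, min
])
# invented response: deinking efficiency saturating with time, peaking near pH 8
y = 80 * (1 - np.exp(-0.3 * X[:, 2])) * np.exp(-0.1 * (X[:, 1] - 8) ** 2) \
    + rng.normal(0, 2, n)

# RBF-kernel SVR on standardized inputs, trained on 90 samples, scored on 30
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=1.0))
model.fit(X[:90], y[:90])
r2 = model.score(X[90:], y[90:])   # R^2 on held-out samples
```

    Scaling the inputs before the RBF kernel matters here, since dosage, pH, and residence time live on very different numeric ranges.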

  18. Evaluation of cloud prediction and determination of critical relative humidity for a mesoscale numerical weather prediction model

    Energy Technology Data Exchange (ETDEWEB)

    Seaman, N.L.; Guo, Z.; Ackerman, T.P. [Pennsylvania State Univ., University Park, PA (United States)

    1996-04-01

    Predictions of cloud occurrence and vertical location from the Pennsylvania State University/National Center for Atmospheric Research nonhydrostatic mesoscale model (MM5) were evaluated statistically using cloud observations obtained at Coffeyville, Kansas, as part of the Second International Satellite Cloud Climatology Project Regional Experiment campaign. Seventeen cases were selected for simulation during a November-December 1991 field study. MM5 was used to produce two sets of 36-km simulations, one with and one without four-dimensional data assimilation (FDDA), and a set of 12-km simulations without FDDA but nested within the 36-km FDDA runs.

  19. A multidimensional stability model for predicting shallow landslide size and shape across landscapes.

    Science.gov (United States)

    Milledge, David G; Bellugi, Dino; McKean, Jim A; Densmore, Alexander L; Dietrich, William E

    2014-11-01

    The size of a shallow landslide is a fundamental control on both its hazard and its geomorphic importance. Existing models are either unable to predict landslide size or are so computationally intensive that they cannot practically be applied across landscapes. We derive a model appropriate for natural slopes that is capable of predicting shallow landslide size but simple enough to be applied over entire watersheds. It accounts for lateral resistance by representing the forces acting on each margin of a potential landslide using earth pressure theory and by representing root reinforcement as an exponential function of soil depth. We test the model's ability to predict the failure of an observed landslide where the relevant parameters are well constrained by field data. The model predicts failure for the observed scar geometry and finds that larger or smaller conformal shapes are more stable. Numerical experiments demonstrate that friction on the boundaries of a potential landslide considerably increases the magnitude of lateral reinforcement, relative to that due to root cohesion alone. We find that there is a critical depth in both cohesive and cohesionless soils, resulting in a minimum size for failure, which is consistent with observed size-frequency distributions. Furthermore, the differential resistance on the boundaries of a potential landslide is responsible for a critical landslide shape which is longer than it is wide, consistent with observed aspect ratios. Finally, our results show that minimum size increases as approximately the square of failure surface depth, consistent with observed landslide depth-area data.
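
    Two of the ingredients above, a planar factor-of-safety balance and exponentially depth-decaying root reinforcement, can be sketched in a one-dimensional (infinite-slope) form. This omits the paper's lateral resistance terms entirely, and all parameter values are illustrative, not those of the field site.

```python
import numpy as np

def factor_of_safety(depth, slope_deg, soil_cohesion=2000.0,
                     root_cohesion_0=8000.0, j=0.5,
                     unit_weight=18000.0, friction_deg=35.0):
    """Infinite-slope FS = resisting / driving stress at failure depth `depth` (m).

    Root cohesion decays as exp(-depth/j), mimicking roots dying out with depth.
    Units: cohesions in Pa, unit weight in N/m^3, angles in degrees.
    """
    beta = np.radians(slope_deg)
    phi = np.radians(friction_deg)
    root = root_cohesion_0 * np.exp(-depth / j)
    normal = unit_weight * depth * np.cos(beta) ** 2
    driving = unit_weight * depth * np.sin(beta) * np.cos(beta)
    return (soil_cohesion + root + normal * np.tan(phi)) / driving

depths = np.linspace(0.1, 3.0, 50)
fs = factor_of_safety(depths, slope_deg=40.0)
failure_depth = depths[np.argmax(fs < 1.0)]   # shallowest depth with FS < 1
```

    Even this stripped-down version reproduces the qualitative point that shallow soils are stabilized by (root) cohesion, so instability only appears beyond some depth; the paper's critical depth and minimum landslide size additionally involve the lateral margin forces.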

  20. A review of logistic regression models used to predict post-fire tree mortality of western North American conifers

    Science.gov (United States)

    Travis Woolley; David C. Shaw; Lisa M. Ganio; Stephen. Fitzgerald

    2012-01-01

    Logistic regression models used to predict tree mortality are critical to post-fire management, planning prescribed burns and understanding disturbance ecology. We review the literature concerning post-fire mortality prediction using logistic regression models for coniferous tree species in the western USA. We include synthesis and review of: methods to develop, evaluate...
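    As a hedged illustration of the kind of model being reviewed, a post-fire mortality logistic regression can be fit with plain gradient descent. The predictors (crown scorch fraction, stem diameter) and the tiny training set below are synthetic stand-ins, not data from any reviewed study:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Stochastic gradient descent for logistic regression, pure Python."""
    w = [0.0] * (len(X[0]) + 1)          # intercept + one coefficient per predictor
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            err = sigmoid(z) - yi        # gradient of the log-loss
            w[0] -= lr * err
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * err * xj
    return w

def predict_mortality(w, xi):
    """Predicted probability that the tree dies after fire."""
    return sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))

# synthetic training set: [crown scorch fraction, stem diameter (dm)]
X = [[0.9, 1.5], [0.8, 2.0], [0.95, 1.0], [0.2, 3.0], [0.1, 4.0], [0.3, 3.5]]
y = [1, 1, 1, 0, 0, 0]                   # 1 = tree died after fire
w = fit_logistic(X, y)
```

Heavily scorched, thin-stemmed trees get high predicted mortality; lightly scorched, thick-stemmed trees get low predicted mortality, matching the qualitative pattern such models encode.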

  1. Modeling of the Critical Micelle Concentration (CMC) of Nonionic Surfactants with an Extended Group-Contribution Method

    DEFF Research Database (Denmark)

    Mattei, Michele; Kontogeorgis, Georgios; Gani, Rafiqul

    2013-01-01

    A group-contribution (GC) property prediction model for estimating the critical micelle concentration (CMC) of nonionic surfactants in water at 25 °C is presented. The model is based on the Marrero and Gani GC method. A systematic analysis of the model performance against experimental data is given; those compounds that exhibit larger correlation errors (based only on first- and second-order groups) are assigned more detailed molecular descriptions, so that better correlations of critical micelle concentrations are obtained. The group parameter estimation has been performed using a data set... Compared with existing models for the prediction of the critical micelle concentration, and in particular the quantitative structure-property relationship models, the developed GC model provides an accurate correlation and allows for an easier and faster application in computer-aided molecular design techniques, facilitating chemical process and product design.
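    A first-order group-contribution estimate has the simple additive form log10(CMC) = sum over groups of (occurrences x group contribution). The group set and the contribution values below are hypothetical placeholders for illustration, not the parameters fitted in the paper:

```python
# hypothetical group contributions to log10(CMC); illustrative only,
# not the fitted Marrero/Gani parameters from the paper
GROUP_CONTRIB = {"CH3": -0.45, "CH2": -0.48, "OCH2CH2": 0.15, "OH": 0.80}

def log_cmc(groups):
    """First-order GC estimate: log10(CMC) as a linear sum of
    group occurrence counts times group contributions."""
    return sum(n * GROUP_CONTRIB[g] for g, n in groups.items())

# a C12E8-like nonionic surfactant: one CH3, eleven CH2,
# eight ethylene-oxide units, one terminal OH
est = log_cmc({"CH3": 1, "CH2": 11, "OCH2CH2": 8, "OH": 1})
```

With these signs, lengthening the hydrophobic tail lowers the estimated CMC while adding hydrophilic ethylene-oxide units raises it, the qualitative trend any GC model for CMC must reproduce.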

  2. The role of intergenerational similarity and parenting in adolescent self-criticism: An actor-partner interdependence model.

    Science.gov (United States)

    Bleys, Dries; Soenens, Bart; Boone, Liesbet; Claes, Stephan; Vliegen, Nicole; Luyten, Patrick

    2016-06-01

    Research investigating the development of adolescent self-criticism has typically focused on the role of either parental self-criticism or parenting. This study used an actor-partner interdependence model to examine an integrated theoretical model in which achievement-oriented psychological control has an intervening role in the relation between parental and adolescent self-criticism. Additionally, the relative contribution of both parents and the moderating role of adolescent gender were examined. Participants were 284 adolescents (M = 14 years, range = 12-16 years) and their parents (M = 46 years, range = 32-63 years). Results showed that only maternal self-criticism was directly related to adolescent self-criticism. However, both parents' achievement-oriented psychological control had an intervening role in the relation between parent and adolescent self-criticism in both boys and girls. Moreover, one parent's achievement-oriented psychological control was not predicted by the self-criticism of the other parent. Copyright © 2016 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.

  3. Predicting soil acidification trends at Plynlimon using the SAFE model

    Directory of Open Access Journals (Sweden)

    B. Reynolds

    1997-01-01

    The SAFE model has been applied to an acid grassland site, located on base-poor stagnopodzol soils derived from Lower Palaeozoic greywackes. The model predicts that acidification of the soil has occurred in response to increased acid deposition following the industrial revolution. Limited recovery is predicted following the decline in sulphur deposition during the mid to late 1970s. Reducing excess sulphur and NOx deposition in 1998 to 40% and 70% of 1980 levels results in further recovery, but soil chemical conditions (base saturation, soil water pH and ANC) do not return to the values predicted for pre-industrial times. The SAFE model predicts that critical loads (expressed in terms of the critical (Ca+Mg+K):Al ratio) for six vegetation species found in acid grassland communities are not exceeded despite the increase in deposited acidity following the industrial revolution. The relative growth response of selected vegetation species characteristic of acid grassland swards has been predicted using a damage function linking growth to the soil solution base cation to aluminium ratio. The results show that very small growth reductions can be expected for 'acid tolerant' plants growing in acid upland soils. For more sensitive species such as Holcus lanatus, SAFE predicts that growth would have been reduced by about 20% between 1951 and 1983, when acid inputs were greatest. Recovery to c. 90% of normal growth (under laboratory conditions) is predicted as acidic inputs decline.
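    The damage function mentioned above can be sketched as a simple response curve in the base-cation:aluminium (BC:Al) ratio. The hyperbolic form and the parameter values here are illustrative assumptions, not the function actually calibrated in SAFE:

```python
def growth_response(bc_to_al, k=0.3, n=1.0):
    """Hedged sketch of a damage function: relative growth (0..1) as a
    function of the soil-solution BC:Al ratio. The hyperbolic form and
    the k, n values are illustrative, not the SAFE parameterisation."""
    if bc_to_al <= 0:
        return 0.0
    return 1.0 / (1.0 + k * (1.0 / bc_to_al) ** n)
```

A species' sensitivity maps onto k and n: an 'acid tolerant' species would have a small k (growth barely reduced even at low BC:Al), while a sensitive species like Holcus lanatus would have a large k, reproducing the roughly 20% growth reduction the abstract reports at the acidification peak.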

  4. Heat Transfer Characteristics and Prediction Model of Supercritical Carbon Dioxide (SC-CO2) in a Vertical Tube

    Directory of Open Access Journals (Sweden)

    Can Cai

    2017-11-01

    Due to its distinct capability to improve the efficiency of shale gas production, supercritical carbon dioxide (SC-CO2) fracturing has attracted increased attention in recent years. Heat transfer occurs in the transportation and fracture processes. To better predict and understand the heat transfer of SC-CO2 near the critical region, numerical simulations focusing on a vertical flow pipe were performed. Various turbulence models and turbulent Prandtl numbers (Prt) were evaluated for their ability to capture the heat transfer deterioration (HTD). The simulations show that the turbulent Prandtl number (TWL) model combined with the Shear Stress Transport (SST) k-ω turbulence model accurately predicts the HTD in the critical region. It was found that Prt has a strong effect on the heat transfer prediction. HTD occurred under larger heat flux density conditions, and an acceleration process was observed. Gravity also affects the HTD through the linkage of buoyancy; HTD did not occur under zero-gravity conditions.

  5. Investigating Predictive Role of Critical Thinking on Metacognition with Structural Equation Modeling

    Science.gov (United States)

    Arslan, Serhat

    2015-01-01

    The purpose of this study is to examine the relationships between critical thinking and metacognition. The sample of study consists of 390 university students who were enrolled in different programs at Sakarya University, in Turkey. In this study, the Critical Thinking Disposition Scale and Metacognitive Thinking Scale were used. The relationships…

  6. Acute Pancreatitis as a Model to Predict Transition of Systemic Inflammation to Organ Failure in Trauma and Critical Illness

    Science.gov (United States)

    2017-10-01

    AWARD NUMBER: W81XWH-14-1-0376. TITLE: Acute Pancreatitis as a Model to Predict Transition of Systemic Inflammation to Organ Failure in Trauma and Critical Illness. PERIOD COVERED: 22 Sep 2016 - 21 Sep 2017.

  7. Higher spin currents in the critical O(N) vector model at 1/N²

    Energy Technology Data Exchange (ETDEWEB)

    Manashov, A.N. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Regensburg Univ. (Germany). Inst. fuer Theoretische Physik; Skvortsov, E.D. [Munich Univ. (Germany). Arnold Sommerfeld Center for Theoretical Physics; Lebedev Institute of Physics, Moscow (Russian Federation); Strohmaier, M. [Regensburg Univ. (Germany). Inst. fuer Theoretische Physik

    2017-06-15

    We calculate the anomalous dimensions of higher spin singlet currents in the critical O(N) vector model at order 1/N². The results are shown to be in agreement with the four-loop perturbative computation in φ⁴ theory in 4−2ε dimensions. It is known that the order 1/N anomalous dimensions of higher-spin currents happen to be the same in the Gross-Neveu and the critical vector model. On the contrary, the order 1/N² corrections are different. The results can also be interpreted as a prediction for the two-loop computation in the dual higher-spin gravity.

  8. A critical review of lexical analysis and Big Five model

    Directory of Open Access Journals (Sweden)

    María Cristina Richaud de Minzi

    2002-06-01

    In recent years the idea has resurfaced that traits can be measured in a reliable and valid way, and that this can be useful in the prediction of human behavior. The five-factor model appears to represent a conceptual and empirical advance in the field of personality theory. The number of orthogonal factors necessary to show the relationships between the trait descriptors in English is five (Goldberg, 1992, p. 26), and their nature can be summarized through the broad concepts of Surgency, Agreeableness, Responsibility, Emotional Stability versus Neuroticism, and Openness to Experience (John, 1990, p. 96). Furthermore, despite the criticisms that have been directed at it, the model represents a breakthrough in the field of personality assessment. This approach constitutes a contribution to the study of personality, although it is not an integrative model of personality.

  9. Role of Personality Traits, Learning Styles and Metacognition in Predicting Critical Thinking of Undergraduate Students

    Directory of Open Access Journals (Sweden)

    Soliemanifar O

    2015-04-01

    The aim of this study was to investigate the role of personality traits, learning styles and metacognition in predicting critical thinking. Instrument & Methods: In this descriptive correlational study, 240 students (130 girls and 110 boys) of Ahvaz Shahid Chamran University were selected by a multi-stage random sampling method. The instruments for collecting data were the NEO Five-Factor Inventory, Kolb's Learning Style Inventory (LSI), the Metacognitive Assessment Inventory (MAI) of Schraw & Dennison (1994) and the California Critical Thinking Skills Test (CCTST). The data were analyzed using the Pearson correlation coefficient, stepwise regression analysis and canonical correlation analysis. Findings: Openness to experience (b=0.41), conscientiousness (b=0.28), abstract conceptualization (b=0.39), active experimentation (b=0.22), reflective observation (b=0.12), knowledge of cognition (b=0.47) and regulation of cognition (b=0.29) were effective in predicting critical thinking. Openness to experience and conscientiousness (r²=0.25), the active experimentation, abstract conceptualization and reflective observation learning styles (r²=0.21), and the knowledge and regulation of cognition metacognitions (r²=0.3) had an important role in explaining critical thinking. The linear combination of critical thinking skills (evaluation, analysis, inference) was predictable by a linear combination of dispositional-cognitive factors (openness, conscientiousness, abstract conceptualization, active experimentation, knowledge of cognition and regulation of cognition). Conclusion: Personality traits, learning styles and metacognition, as dispositional-cognitive factors, play a significant role in students' critical thinking.

  10. A theoretical prediction of critical heat flux in subcooled pool boiling during power transients

    International Nuclear Information System (INIS)

    Pasamehmetoglu, K.O.; Nelson, R.A.; Gunnerson, F.S.

    1988-01-01

    Understanding and predicting critical heat flux (CHF) behavior during steady-state and transient conditions is of fundamental interest in the design, operation and safety of boiling and two-phase flow devices. This paper discusses the results of a comprehensive theoretical study made specifically to model transient CHF behavior in subcooled pool boiling. The study is based upon a simplified steady-state CHF model expressed in terms of the vapor mass growth period. The results obtained from this theory indicate favorable agreement with experimental data from cylindrical heaters with small radii. The statistical nature of vapor mass behavior in transient boiling is also considered, and upper and lower limits for the current theory are established. Various factors that contribute to the discrepancy between the data and the theory are discussed.

  11. Lexical prediction via forward models: N400 evidence from German Sign Language.

    Science.gov (United States)

    Hosemann, Jana; Herrmann, Annika; Steinbach, Markus; Bornkessel-Schlesewsky, Ina; Schlesewsky, Matthias

    2013-09-01

    Models of language processing in the human brain often emphasize the prediction of upcoming input, for example in order to explain the rapidity of language understanding. However, the precise mechanisms of prediction are still poorly understood. Forward models, which draw upon the language production system to set up expectations during comprehension, provide a promising approach in this regard. Here, we present an event-related potential (ERP) study on German Sign Language (DGS) which tested the hypotheses of a forward-model perspective on prediction. Sign languages involve relatively long transition phases between one sign and the next, which should be anticipated as part of a forward-model-based prediction even though they are semantically empty. Native signers of DGS watched videos of naturally signed DGS sentences which ended either with an expected or a (semantically) unexpected sign. Unexpected signs engendered a biphasic N400-late positivity pattern. Crucially, the N400 onset preceded the critical sign onset and was thus clearly elicited by properties of the transition phase. The comprehension system thereby clearly anticipated modality-specific information about the realization of the predicted semantic item. These results provide strong converging support for the application of forward models in language comprehension. © 2013 Elsevier Ltd. All rights reserved.

  12. Predicting the local impacts of energy development: a critical guide to forecasting methods and models

    Energy Technology Data Exchange (ETDEWEB)

    Sanderson, D.; O'Hare, M.

    1977-05-01

    Models forecasting second-order impacts from energy development vary in their methodology, output, assumptions, and quality. As a rough dichotomy, they either simulate community development over time or combine various submodels providing community snapshots at selected points in time. Using one or more methods - input/output models, gravity models, econometric models, cohort-survival models, or coefficient models - they estimate energy-development-stimulated employment, population, public and private service needs, and government revenues and expenditures at some future time (ranging from annual to average year predictions) and for different governmental jurisdictions (municipal, county, state, etc.). Underlying assumptions often conflict, reflecting their different sources - historical data, comparative data, surveys, and judgments about future conditions. Model quality, measured by special features, tests, exportability and usefulness to policy-makers, reveals careful and thorough work in some cases and hurried operations with insufficient in-depth analysis in others.

  13. Critical behavior of the contact process on small-world networks

    Science.gov (United States)

    Ferreira, Ronan S.; Ferreira, Silvio C.

    2013-11-01

    We investigate the role of clustering in the critical behavior of the contact process (CP) on small-world networks, using the Watts-Strogatz (WS) network model with an edge rewiring probability p. The critical point is well predicted by a homogeneous cluster approximation in the limit of vanishing clustering (p → 1). The critical exponents and dimensionless moment ratios of the CP are in agreement with those predicted by mean-field theory for any p > 0. This independence from the network clustering shows that the small-world property is a sufficient condition for mean-field theory to correctly predict the universality of the model. Moreover, we compare the CP dynamics on WS networks with rewiring probability p = 1 and on random regular networks, and show that the weak heterogeneity of the WS network slightly changes the critical point but does not alter other critical quantities of the model.
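    The mean-field limit that the abstract appeals to can be illustrated with the one-site approximation of the contact process, drho/dt = lambda*rho*(1 - rho) - rho, which has its absorbing-state transition at lambda_c = 1. This is a sketch of the plain mean-field prediction, not the homogeneous cluster approximation used in the paper:

```python
def mean_field_cp(lmbda, rho0=0.5, dt=0.01, steps=20000):
    """Euler-integrate the one-site mean-field contact process,
    drho/dt = lmbda * rho * (1 - rho) - rho, and return the late-time
    density. Below lmbda = 1 the system falls into the absorbing state
    rho = 0; above it, rho -> 1 - 1/lmbda."""
    rho = rho0
    for _ in range(steps):
        rho += dt * (lmbda * rho * (1.0 - rho) - rho)
    return rho
```

On small-world networks the abstract reports mean-field exponents for any p > 0, so even this crude one-site picture already captures the location and nature of the transition in that regime.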

  14. A systematic investigation of computation models for predicting Adverse Drug Reactions (ADRs).

    Science.gov (United States)

    Kuang, Qifan; Wang, MinQi; Li, Rong; Dong, YongCheng; Li, Yizhou; Li, Menglong

    2014-01-01

    Early and accurate identification of adverse drug reactions (ADRs) is critically important for drug development and clinical safety. Computer-aided prediction of ADRs has attracted increasing attention in recent years, and many computational models have been proposed. However, because of the lack of systematic analysis and comparison of the different computational models, there remain limitations in designing more effective algorithms and selecting more useful features. There is therefore an urgent need to review and analyze previous computational models to obtain general conclusions that can provide useful guidance for constructing more effective models to predict ADRs. In the current study, the main work is to compare and analyze the performance of existing computational methods for predicting ADRs, by also implementing and evaluating algorithms previously used for predicting drug targets. Our results indicate that topological and intrinsic features are complementary to an extent, and that the Jaccard coefficient has an important and general effect on the prediction of drug-ADR associations. By comparing the structure of each algorithm, we found that their final formulas can all be cast as linear models; based on this finding, we propose a new algorithm, the general weighted profile method, which yielded the best overall performance among the algorithms investigated in this paper. Several meaningful conclusions and useful findings regarding the prediction of ADRs are provided for selecting optimal features and algorithms.
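    The role of the Jaccard coefficient can be illustrated with a minimal profile-similarity scorer. This is a hedged sketch of the general idea (score a candidate drug-ADR pair by similarity to drugs known to cause that ADR); the details differ from the paper's "general weighted profile method", and the toy data are invented:

```python
def jaccard(a, b):
    """Jaccard coefficient of two sets: |intersection| / |union|."""
    a, b = set(a), set(b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def weighted_profile_score(query_drug, target_adr, known):
    """Score a candidate (drug, ADR) pair: Jaccard similarity of the
    query drug's ADR profile to every other drug's profile, summed over
    drugs known to cause the target ADR, normalised by total similarity."""
    sims = [(d, jaccard(known[query_drug], profile))
            for d, profile in known.items() if d != query_drug]
    total = sum(s for _, s in sims)
    if total == 0:
        return 0.0
    hit = sum(s for d, s in sims if target_adr in known[d])
    return hit / total

# toy drug -> known-ADR-profile map (invented for illustration)
known = {
    "drugA": {"nausea", "headache", "rash"},
    "drugB": {"nausea", "headache"},
    "drugC": {"dizziness"},
}
```

Here drugB scores high for "rash" because its profile closely overlaps drugA's, which is known to cause rash, and low for "dizziness", whose only known cause shares nothing with drugB.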

  15. On the criticality of inferred models

    Science.gov (United States)

    Mastromatteo, Iacopo; Marsili, Matteo

    2011-10-01

    Advanced inference techniques allow one to reconstruct a pattern of interaction from high dimensional data sets, from probing simultaneously thousands of units of extended systems—such as cells, neural tissues and financial markets. We focus here on the statistical properties of inferred models and argue that inference procedures are likely to yield models which are close to singular values of parameters, akin to critical points in physics where phase transitions occur. These are points where the response of physical systems to external perturbations, as measured by the susceptibility, is very large and diverges in the limit of infinite size. We show that the reparameterization invariant metrics in the space of probability distributions of these models (the Fisher information) are directly related to the susceptibility of the inferred model. As a result, distinguishable models tend to accumulate close to critical points, where the susceptibility diverges in infinite systems. This region is the one where the estimate of inferred parameters is most stable. In order to illustrate these points, we discuss inference of interacting point processes with application to financial data and show that sensible choices of observation time scales naturally yield models which are close to criticality.
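    The link the abstract draws between the Fisher information metric and the susceptibility is the standard exponential-family identity, sketched here for a one-parameter maximum-entropy model:

```latex
% One-parameter maximum-entropy model with sufficient statistic \phi(s):
%   p(s \mid J) = \exp\bigl( J\,\phi(s) - \log Z(J) \bigr).
% The Fisher information equals the variance of \phi, i.e. the susceptibility:
\[
  I(J) \;=\; -\,\mathbb{E}\!\left[ \partial_J^{2} \log p(s \mid J) \right]
       \;=\; \partial_J^{2} \log Z(J)
       \;=\; \operatorname{Var}\!\left[ \phi(s) \right]
       \;=\; \chi(J).
\]
% Hence (near-)critical parameter values, where \chi diverges with system
% size, are exactly the points where the Fisher metric is largest and
% distinguishable models accumulate.
```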

  16. On the criticality of inferred models

    International Nuclear Information System (INIS)

    Mastromatteo, Iacopo; Marsili, Matteo

    2011-01-01

    Advanced inference techniques allow one to reconstruct a pattern of interaction from high dimensional data sets, from probing simultaneously thousands of units of extended systems—such as cells, neural tissues and financial markets. We focus here on the statistical properties of inferred models and argue that inference procedures are likely to yield models which are close to singular values of parameters, akin to critical points in physics where phase transitions occur. These are points where the response of physical systems to external perturbations, as measured by the susceptibility, is very large and diverges in the limit of infinite size. We show that the reparameterization invariant metrics in the space of probability distributions of these models (the Fisher information) are directly related to the susceptibility of the inferred model. As a result, distinguishable models tend to accumulate close to critical points, where the susceptibility diverges in infinite systems. This region is the one where the estimate of inferred parameters is most stable. In order to illustrate these points, we discuss inference of interacting point processes with application to financial data and show that sensible choices of observation time scales naturally yield models which are close to criticality

  17. Predictive modeling of complications.

    Science.gov (United States)

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.

  18. Review and assessment of models for predicting the migration of radionuclides through rivers

    International Nuclear Information System (INIS)

    Monte, Luigi; Boyer, Patrick; Brittain, John E.; Haakanson, Lars; Lepicard, Samuel; Smith, Jim T.

    2005-01-01

    The present paper summarises the results of the review and assessment of state-of-the-art models developed for predicting the migration of radionuclides through rivers. The different approaches of the models to predict the behaviour of radionuclides in lotic ecosystems are presented and compared. The models were classified and evaluated according to their main methodological approaches. The results of an exercise of model application to specific contamination scenarios aimed at assessing and comparing the model performances were described. A critical evaluation and analysis of the uncertainty of the models was carried out. The main factors influencing the inherent uncertainty of the models, such as the incompleteness of the actual knowledge and the intrinsic environmental and biological variability of the processes controlling the behaviour of radionuclides in rivers, are analysed

  19. Two critical tests for the Critical Point earthquake

    Science.gov (United States)

    Tzanis, A.; Vallianatos, F.

    2003-04-01

    It has been credibly argued that the earthquake generation process is a critical phenomenon culminating with a large event that corresponds to some critical point (CP). In this view, a great earthquake represents the end of a cycle on its associated fault network and the beginning of a new one. The dynamic organization of the fault network evolves as the cycle progresses and a great earthquake becomes more probable, thereby rendering possible the prediction of the cycle’s end by monitoring the approach of the fault network toward a critical state. This process may be described by a power-law time-to-failure scaling of the cumulative seismic release rate. Observational evidence has confirmed the power-law scaling in many cases and has empirically determined that the critical exponent in the power law is typically of the order n=0.3. There are also two theoretical predictions for the value of the critical exponent. Ben-Zion and Lyakhovsky (Pure appl. geophys., 159, 2385-2412, 2002) give n=1/3. Rundle et al. (Pure appl. geophys., 157, 2165-2182, 2000) show that the power-law activation associated with a spinodal instability is essentially identical to the power-law acceleration of Benioff strain observed prior to earthquakes; in this case n=0.25. More recently, the CP model has gained support from the development of more dependable models of regional seismicity with realistic fault geometry that show accelerating seismicity before large events. Essentially, these models involve stress transfer to the fault network during the cycle such that the region of accelerating seismicity will scale with the size of the culminating event, as for instance in Bowman and King (Geophys. Res. Lett., 38, 4039-4042, 2001). It is thus possible to understand the observed characteristics of distributed accelerating seismicity in terms of a simple process of increasing tectonic stress in a region already subjected to stress inhomogeneities at all scale lengths. Then, the region of
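    The power-law time-to-failure scaling discussed above is usually written for the cumulative Benioff strain as Omega(t) = A + B*(tf - t)^n. The sketch below recovers the exponent from synthetic noiseless data by a grid search over n, solving for A and B by linear least squares at each trial value; for simplicity the failure time tf is assumed known, which a real forecasting application cannot do:

```python
def fit_exponent(ts, omegas, tf):
    """Grid-search the critical exponent n in omega = A + B*(tf - t)**n,
    solving the linear subproblem for A, B at each trial n."""
    best = None
    for i in range(5, 100):
        n = i / 100.0
        x = [(tf - t) ** n for t in ts]
        m = len(ts)
        sx, sy = sum(x), sum(omegas)
        sxx = sum(v * v for v in x)
        sxy = sum(v * w for v, w in zip(x, omegas))
        B = (m * sxy - sx * sy) / (m * sxx - sx * sx)
        A = (sy - B * sx) / m
        sse = sum((A + B * v - w) ** 2 for v, w in zip(x, omegas))
        if best is None or sse < best[0]:
            best = (sse, n)
    return best[1]

# synthetic accelerating sequence with n = 0.3, the typical observed value
tf = 10.0
ts = [i * 0.1 for i in range(95)]
omegas = [5.0 - 2.0 * (tf - t) ** 0.3 for t in ts]
n_hat = fit_exponent(ts, omegas, tf)
```

With clean synthetic data the grid search recovers n = 0.3 exactly; with real catalogs the joint uncertainty in n and tf is the hard part of the problem.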

  20. The IntFOLD server: an integrated web resource for protein fold recognition, 3D model quality assessment, intrinsic disorder prediction, domain prediction and ligand binding site prediction.

    Science.gov (United States)

    Roche, Daniel B; Buenavista, Maria T; Tetchner, Stuart J; McGuffin, Liam J

    2011-07-01

    The IntFOLD server is a novel independent server that integrates several cutting edge methods for the prediction of structure and function from sequence. Our guiding principles behind the server development were as follows: (i) to provide a simple unified resource that makes our prediction software accessible to all and (ii) to produce integrated output for predictions that can be easily interpreted. The output for predictions is presented as a simple table that summarizes all results graphically via plots and annotated 3D models. The raw machine readable data files for each set of predictions are also provided for developers, which comply with the Critical Assessment of Methods for Protein Structure Prediction (CASP) data standards. The server comprises an integrated suite of five novel methods: nFOLD4, for tertiary structure prediction; ModFOLD 3.0, for model quality assessment; DISOclust 2.0, for disorder prediction; DomFOLD 2.0 for domain prediction; and FunFOLD 1.0, for ligand binding site prediction. Predictions from the IntFOLD server were found to be competitive in several categories in the recent CASP9 experiment. The IntFOLD server is available at the following web site: http://www.reading.ac.uk/bioinf/IntFOLD/.

  1. A Network-Based Approach to Modeling and Predicting Product Coconsideration Relations

    Directory of Open Access Journals (Sweden)

    Zhenghui Sha

    2018-01-01

    Understanding customer preferences in consideration decisions is critical to choice modeling in engineering design. While the existing literature has shown that exogenous effects (e.g., product and customer attributes) are deciding factors in customers’ consideration decisions, it is not clear how endogenous effects (e.g., the inter-competition among products) would influence such decisions. This paper presents a network-based approach, based on Exponential Random Graph Models (ERGMs), to study customers’ consideration behaviors in engineering design. Our proposed approach is capable of modeling the endogenous effects among products through various network structures (e.g., stars and triangles) in addition to the exogenous effects, and of predicting whether two products would be considered together. To assess the proposed model, we compare it against a dyadic network model that only considers exogenous effects. Using buyer survey data from the China auto market in 2013 and 2014, we evaluate the goodness of fit and the predictive power of the two models. The results show that our model has a better fit and predictive accuracy than the dyadic network model. This underscores the importance of endogenous effects on customers’ consideration decisions. The insights gained from this research help explain how endogenous effects interact with exogenous effects in affecting customers’ decision-making.
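    The endogenous "triangle" effect in an ERGM works through change statistics: the conditional log-odds that a tie forms is a linear combination of how much each network statistic would change if the tie were added. The sketch below shows that mechanism for an edge term plus a triangle term on a toy co-consideration network; the coefficients and the network are illustrative, not estimates from the paper:

```python
import math

def change_stats(adj, i, j):
    """Change statistics for adding edge (i, j): the edge count rises by
    one, and one new triangle closes per common neighbour of i and j."""
    common = sum(1 for k in adj
                 if k not in (i, j) and i in adj[k] and j in adj[k])
    return 1, common

def edge_probability(adj, i, j, theta_edge, theta_tri):
    """ERGM conditional probability that products i and j are co-considered,
    given the rest of the network (logistic in the change statistics)."""
    d_edge, d_tri = change_stats(adj, i, j)
    z = theta_edge * d_edge + theta_tri * d_tri
    return 1.0 / (1.0 + math.exp(-z))

# toy co-consideration network among five products (undirected adjacency)
adj = {"A": {"B", "C", "D"}, "B": {"A", "C"}, "C": {"A", "B"},
       "D": {"A"}, "E": set()}
```

With a positive triangle coefficient, a pair sharing a common co-considered neighbour (B and D, via A) gets a higher predicted co-consideration probability than a pair with none (B and E), which is exactly the endogenous clustering effect a dyadic model cannot express.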

  2. Improving the Accuracy of a Heliocentric Potential (HCP) Prediction Model for the Aviation Radiation Dose

    Directory of Open Access Journals (Sweden)

    Junga Hwang

    2016-12-01

    The space radiation dose over air routes, including polar routes, should be carefully considered, especially when space weather shows sudden disturbances such as coronal mass ejections (CMEs), flares, and accompanying solar energetic particle events. We recently established a heliocentric potential (HCP) prediction model for real-time operation of the CARI-6 and CARI-6M programs. Specifically, the HCP value is used as a critical input value in the CARI-6/6M programs, which estimate the aviation route dose based on the effective dose rate. The CARI-6/6M approach is the most widely used technique, and the programs can be obtained from the U.S. Federal Aviation Administration (FAA). However, HCP values are posted on the FAA official webpage at a one-month delay, which makes it difficult to obtain real-time information on the aviation route dose. In order to overcome this critical limitation for space weather customers, we developed an HCP prediction model based on sunspot number variations (Hwang et al. 2015). In this paper, we focus on improvements to our HCP prediction model and update it with neutron monitoring data. We found that the most accurate method to derive the HCP value involves (1) real-time daily sunspot assessments, (2) predictions of the daily HCP by our prediction algorithm, and (3) calculations of the resultant daily effective dose rate. Additionally, we also derived an HCP prediction algorithm using ground neutron counts. With the compensation stemming from the use of ground neutron count data, the newly developed HCP prediction model was improved.
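    The core regression step (predict HCP from an observable solar-activity proxy) can be sketched as a one-variable least-squares fit. The sunspot numbers and HCP values below are made-up illustrative pairs, and the real model also folds in ground neutron-monitor counts:

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a + b*x, pure Python."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# hypothetical (daily sunspot number, archived HCP in MV) pairs;
# values are invented for illustration only
ssn = [20, 45, 70, 95, 120, 150]
hcp = [420, 480, 560, 630, 700, 790]
a, b = fit_linear(ssn, hcp)

def predict_hcp(sunspot_number):
    """Predicted HCP (MV) from today's sunspot number."""
    return a + b * sunspot_number
```

Because sunspot numbers are available daily while official HCP values lag by a month, such a fit is what turns the archived relationship into a same-day estimate for the dose calculation.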

  3. Winnerless competition principle and prediction of the transient dynamics in a Lotka-Volterra model

    Science.gov (United States)

    Afraimovich, Valentin; Tristan, Irma; Huerta, Ramon; Rabinovich, Mikhail I.

    2008-12-01

    Predicting the evolution of multispecies ecological systems is an intriguing problem. A sufficiently complex model with the necessary predicting power requires solutions that are structurally stable. Small variations of the system parameters should not qualitatively perturb its solutions. When one is interested in just asymptotic results of evolution (as time goes to infinity), then the problem has a straightforward mathematical image involving simple attractors (fixed points or limit cycles) of a dynamical system. However, for an accurate prediction of evolution, the analysis of transient solutions is critical. In this paper, in the framework of the traditional Lotka-Volterra model (generalized in some sense), we show that the transient solution representing multispecies sequential competition can be reproducible and predictable with high probability.
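    A generalized Lotka-Volterra system of the kind discussed above can be integrated directly to inspect its transients. The sketch below uses a minimal two-species competitive-exclusion example with illustrative parameters; winnerless competition proper requires three or more species with an asymmetric, cyclic competition matrix, but the integrator is the same:

```python
def glv_step(x, r, A, dt):
    """One explicit-Euler step of the generalized Lotka-Volterra model
    dx_i/dt = x_i * (r_i - sum_j A[i][j] * x_j), clipped at zero."""
    n = len(x)
    return [max(0.0, xi + dt * xi * (r[i] - sum(A[i][j] * x[j] for j in range(n))))
            for i, xi in enumerate(x)]

def simulate(x0, r, A, dt=0.01, steps=5000):
    """Integrate and return the full trajectory, so the transient sequence
    of states visited (not just the final attractor) can be inspected."""
    x = list(x0)
    traj = [list(x)]
    for _ in range(steps):
        x = glv_step(x, r, A, dt)
        traj.append(list(x))
    return traj

# two competing species; species 2 excludes species 1 for these
# (illustrative) competition coefficients
traj = simulate([0.5, 0.5], r=[1.0, 1.0], A=[[1.0, 1.5], [0.5, 1.0]])
x_final = traj[-1]
```

Keeping the whole trajectory, rather than only the endpoint, is the point the abstract stresses: for accurate prediction it is the reproducible transient, not just the asymptotic attractor, that matters.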

  4. Predictive Control, Competitive Model Business Planning, and Innovation ERP

    DEFF Research Database (Denmark)

    Nourani, Cyrus F.; Lauth, Codrina

    2015-01-01

    is not viewed as the sum of its component elements, but the product of their interactions. The paper starts with introducing a systems approach to business modeling. A competitive business modeling technique, based on the author's planning techniques is applied. Systemic decisions are based on common......New optimality principles are put forth based on competitive model business planning. A Generalized MinMax local optimum dynamic programming algorithm is presented and applied to business model computing where predictive techniques can determine local optima. Based on a systems model an enterprise...... organizational goals, and as such business planning and resource assignments should strive to satisfy higher organizational goals. It is critical to understand how different decisions affect and influence one another. Here, a business planning example is presented where systems thinking technique, using Causal...

  5. Critical heat flux prediction by using radial basis function and multilayer perceptron neural networks: A comparison study

    International Nuclear Information System (INIS)

    Vaziri, Nima; Hojabri, Alireza; Erfani, Ali; Monsefi, Mehrdad; Nilforooshan, Behnam

    2007-01-01

    Critical heat flux (CHF) is an important parameter for the design of nuclear reactors. Although many experimental and theoretical studies have been performed, there is no single correlation to predict CHF, because it is influenced by many parameters; predictions are typically based on fixed inlet, local or fixed outlet conditions. Artificial neural networks (ANNs) have been applied to a wide variety of areas such as prediction, approximation, modeling and classification. In this study, two types of neural networks, radial basis function (RBF) and multilayer perceptron (MLP), are trained with the experimental CHF data and their performances are compared. RBF predicts CHF with root mean square (RMS) errors of 0.24%, 7.9% and 0.16%, and MLP predicts CHF with RMS errors of 1.29%, 8.31% and 2.71%, in fixed inlet conditions, local conditions and fixed outlet conditions, respectively. The results show that neural networks with an RBF structure have superior performance in CHF data prediction over MLP neural networks. The parametric trends of CHF obtained by the trained ANNs are also evaluated and the results are reported.
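
    As an illustration of the RBF approach, here is a hedged sketch on synthetic one-dimensional data (the experimental CHF data are not reproduced in this record): a Gaussian radial basis function network with fixed centers and widths, whose output weights are solved by linear least squares, the usual shortcut that makes RBF training fast.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: a smooth 1-D target with mild noise.
x = np.linspace(0.0, 1.0, 80)
y = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(x.size)

# Gaussian RBF network: fixed centers and width, output weights by
# linear least squares on the design matrix Phi.
centers = np.linspace(0.0, 1.0, 12)
width = 0.1
Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

pred = Phi @ w
rms = np.sqrt(np.mean((pred - y) ** 2))   # residual approaches the noise level
```

    An MLP would instead fit the weights by iterative gradient descent; the linear solve above is one reason RBF networks are often faster to train on problems of this size.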

  6. Assessment of correlations and models for prediction of CHF in subcooled flow boiling

    International Nuclear Information System (INIS)

    Celata, G.P.; Mariani, A.; Cumo, M.

    1992-01-01

    This paper provides an analysis of available correlations and models for the prediction of Critical Heat Flux (CHF) in subcooled flow boiling in the ranges of interest for fusion reactor thermal-hydraulic conditions, i.e., high inlet liquid subcooling and velocity and small channel diameter and length. The aim of the study was to establish the limits of validity of present predictive tools (most of them proposed with reference to LWR thermal-hydraulic studies) in the above conditions. The reference data set represents most of the available data, covering wide ranges of operating conditions in the framework of present interest (0.1 ..., ΔTsub,in < 230 K). Among the tens of predictive tools available in the literature, four correlations (Levy, Westinghouse, modified-Tong and Tong-75) and three models (Weisman and Ileslamlou, Lee and Mudawar, and Katto) were selected. The modified-Tong correlation and the Katto model seem to be reliable predictive tools for the calculation of the CHF in subcooled flow boiling.

  7. Risk assessment and remedial policy evaluation using predictive modeling

    International Nuclear Information System (INIS)

    Linkov, L.; Schell, W.R.

    1996-01-01

    As a result of nuclear industry operation and accidents, large areas of natural ecosystems have been contaminated by radionuclides and toxic metals. Extensive societal pressure has been exerted to decrease the radiation dose to the population and to the environment. Thus, in making abatement and remediation policy decisions, not only economic costs but also human and environmental risk assessments are desired. This paper introduces a general framework for risk assessment and remedial policy evaluation using predictive modeling. Ecological risk assessment requires evaluation of the radionuclide distribution in ecosystems. The FORESTPATH model is used for predicting the radionuclide fate in forest compartments after deposition as well as for evaluating the efficiency of remedial policies. Time of intervention and radionuclide deposition profile were predicted to be crucial for remediation efficiency. Risk assessment conducted for a critical group of forest users in Belarus shows that consumption of forest products (berries and mushrooms) leads to about 0.004% risk of a fatal cancer annually. Cost-benefit analysis for forest cleanup suggests that complete removal of the organic layer is too expensive for application in Belarus and a better methodology is required. In conclusion, the FORESTPATH modeling framework could have wide applications in environmental remediation of radionuclides and toxic metals as well as in dose reconstruction and risk assessment.

  8. Monte Carlo method for critical systems in infinite volume: The planar Ising model.

    Science.gov (United States)

    Herdeiro, Victor; Doyon, Benjamin

    2016-10-01

    In this paper we propose a Monte Carlo method for generating finite-domain marginals of critical distributions of statistical models in infinite volume. The algorithm corrects the problem of the long-range effects of boundaries associated with generating critical distributions on finite lattices. It exploits scale invariance, combined with ideas from the renormalization group, to construct a type of "holographic" boundary condition that encodes the presence of an infinite volume beyond it. We check the quality of the distribution obtained in the case of the planar Ising model by comparing various observables with their infinite-plane predictions. We accurately reproduce planar two-, three-, and four-point functions of spin and energy operators. We also define a lattice stress-energy tensor, and numerically obtain the associated conformal Ward identities and the Ising central charge.
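
    For contrast with the infinite-volume construction, the conventional finite-lattice baseline can be sketched as a standard Metropolis sampler for the planar Ising model with periodic boundaries. The lattice size, inverse temperature and sweep count below are illustrative; the paper's "holographic" boundary condition is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Standard single-spin Metropolis updates on a small periodic lattice.
L, beta, sweeps = 16, 0.5, 200            # illustrative parameters
spins = rng.choice([-1, 1], size=(L, L))  # random initial configuration

for _ in range(sweeps * L * L):
    i, j = rng.integers(0, L, size=2)
    nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
          + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
    dE = 2.0 * spins[i, j] * nb           # energy cost of flipping (i, j)
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        spins[i, j] *= -1                 # accept the flip

m = abs(spins.mean())                     # magnetization per site
```

    Observables measured on such a torus carry the finite-size boundary effects that the paper's method is designed to remove.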

  9. General correlation for prediction of critical heat flux ratio in water cooled channels

    Energy Technology Data Exchange (ETDEWEB)

    Pernica, R.; Cizek, J.

    1995-09-01

    The paper presents a general empirical Critical Heat Flux Ratio (CHFR) correlation which is valid for vertical water upflow through tubes, internally heated concentric annuli and rod bundle geometries with both wide and very tight square and triangular rod lattices. The proposed general PG correlation directly predicts the CHFR, comprises axial and radial non-uniform heating, and is valid in a wider range of thermal-hydraulic conditions than previously published critical heat flux correlations. The PG correlation has been developed using the critical heat flux Czech data bank, which includes more than 9500 experimental data on tubes, 7600 data on rod bundles and 713 data on internally heated concentric annuli. Accuracy of the CHFR prediction, statistically assessed by the constant dryout conditions approach, is characterized by a mean value nearing 1.00 and a standard deviation of less than 0.06. Moreover, a subchannel form of the PG correlation has been statistically verified on Westinghouse and Combustion Engineering rod bundle databases, i.e., more than 7000 experimental CHF points of the Columbia University data bank were used.

  10. Prediction of the Critical Curvature for LX-17 with the Time of Arrival Data from DNS

    Energy Technology Data Exchange (ETDEWEB)

    Yao, Jin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fried, Laurence E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Moss, William C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-01-10

    We extract the detonation shock front velocity, curvature and acceleration from time-of-arrival data measured at grid points in direct numerical simulations of a 50 mm rate stick lit by a disk source, with the ignition-and-growth reaction model and a JWL equation of state calibrated for LX-17. We compute the quasi-steady (D, κ) relation based on the extracted properties and predict the critical curvature of LX-17. We also propose an explicit formula that contains the failure turning point, obtained by optimization of the (D, κ) relation of LX-17.

  11. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO project ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines the possibilities w.r.t. different numerical weather predictions actually available to the project.

  12. Uncertainties in modelling and scaling of critical flows and pump model in TRAC-PF1/MOD1

    International Nuclear Information System (INIS)

    Rohatgi, U.S.; Yu, Wen-Shi.

    1987-01-01

    The USNRC has established a Code Scalability, Applicability and Uncertainty (CSAU) evaluation methodology to quantify the uncertainty in the prediction of safety parameters by best-estimate codes. These codes can then be applied to evaluate the Emergency Core Cooling System (ECCS). The TRAC-PF1/MOD1 version was selected as the first code to undergo the CSAU analysis for LBLOCA applications. It was established through this methodology that the break flow and pump models are among the top-ranked models in the code affecting the peak clad temperature (PCT) prediction for LBLOCA. The break flow model bias, or discrepancy, and the uncertainty were determined by modelling the test section near the break for 12 Marviken tests. It was observed that the TRAC-PF1/MOD1 code consistently underpredicts the break flow rate and that the prediction improves with increasing pipe length (larger L/D). This is true for both subcooled and two-phase critical flows. A pump model was developed from Westinghouse (1/3 scale) data. The data represent the largest available test pump relevant to Westinghouse PWRs. It was then shown through the analysis of CE and CREARE pump data that larger pumps degrade less and that pumps degrade less at higher pressures. Since the model developed here is based on the 1/3 scale pump and on low-pressure data, it is conservative and will overpredict the degradation when applied to PWRs.

  13. A systematic investigation of computation models for predicting Adverse Drug Reactions (ADRs).

    Directory of Open Access Journals (Sweden)

    Qifan Kuang

    Early and accurate identification of adverse drug reactions (ADRs) is critically important for drug development and clinical safety. Computer-aided prediction of ADRs has attracted increasing attention in recent years, and many computational models have been proposed. However, because of the lack of systematic analysis and comparison of the different computational models, there remain limitations in designing more effective algorithms and selecting more useful features. There is therefore an urgent need to review and analyze previous computation models to obtain general conclusions that can provide useful guidance to construct more effective computational models to predict ADRs. In the current study, the main work is to compare and analyze the performance of existing computational methods to predict ADRs, by implementing and evaluating additional algorithms that have been used earlier for predicting drug targets. Our results indicated that topological and intrinsic features were complementary to an extent and that the Jaccard coefficient had an important and general effect on the prediction of drug-ADR associations. By comparing the structure of each algorithm, we found that the final formulas of these algorithms could all be converted to a linear model in form; based on this finding, we propose a new algorithm called the general weighted profile method, which yielded the best overall performance among the algorithms investigated in this paper. Several meaningful conclusions and useful findings regarding the prediction of ADRs are provided for selecting optimal features and algorithms.
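
    The role of the Jaccard coefficient can be sketched with a toy neighborhood-style scorer, in which a candidate drug-ADR association is scored by a similarity-weighted vote over drugs already known to cause that ADR. The drugs and reactions below are hypothetical placeholders, not data from the study, and the scorer is only in the spirit of the weighted profile method, not its exact formula.

```python
# Toy knowledge base: drug -> set of reported ADRs (hypothetical names).
known = {
    "drugA": {"nausea", "rash", "headache"},
    "drugB": {"nausea", "rash"},
    "drugC": {"dizziness"},
}

def jaccard(s, t):
    """Jaccard coefficient of two sets (0 for two empty sets)."""
    return len(s & t) / len(s | t) if (s | t) else 0.0

def score(query_adrs, candidate_adr):
    """Similarity-weighted vote of the drugs already known to cause
    candidate_adr: drugs with ADR profiles similar to the query count more."""
    return sum(jaccard(query_adrs, adrs)
               for adrs in known.values() if candidate_adr in adrs)

# A new drug observed with nausea and rash: which ADR is more plausible?
s_head = score({"nausea", "rash"}, "headache")   # supported by similar drugA
s_dizz = score({"nausea", "rash"}, "dizziness")  # supported only by dissimilar drugC
```

    Here `s_head` exceeds `s_dizz` because the only drug reported with dizziness shares no ADRs with the query profile.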

  14. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
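
    A stochastic simulator of the kind described, producing hourly speed samples that reproduce a target mean, variance and serial correlation, can be sketched as an AR(1) process. The Goldstone statistics are not given in this record, so the mean, standard deviation and lag-one correlation below are placeholder values.

```python
import numpy as np

rng = np.random.default_rng(2)

# AR(1) wind-speed sampler: v_t reverts to the mean with persistence rho,
# and the innovation scale is chosen so the stationary std equals `std`.
mean, std, rho, n = 6.0, 2.5, 0.8, 24 * 365   # placeholder statistics
eps = rng.standard_normal(n)
v = np.empty(n)
v[0] = mean
for t in range(1, n):
    v[t] = mean + rho * (v[t - 1] - mean) + std * np.sqrt(1 - rho**2) * eps[t]
v = np.clip(v, 0.0, None)   # wind speed cannot be negative
```

    Dropping the `rho` term recovers the uncorrelated interim model described above; a Weibull marginal would be a more realistic choice than the Gaussian innovations used here.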

  15. A New Method to Detect and Correct the Critical Errors and Determine the Software-Reliability in Critical Software-System

    International Nuclear Information System (INIS)

    Krini, Ossmane; Börcsök, Josef

    2012-01-01

    In order to use electronic systems comprising software and hardware components in safety-related and high-safety-related applications, it is necessary to meet the marginal risk numbers required by standards and legislative provisions. Existing processes and mathematical models are used to verify the risk numbers. On the hardware side, various accepted mathematical models, processes, and methods exist to provide the required proof. To this day, however, there are no closed models or mathematical procedures known that allow for a dependable prediction of software reliability. This work presents a method that makes a prognosis on the residual number of critical errors in software. Conventional models lack this ability and, right now, there are no methods that forecast critical errors. The new method will show that an estimate of the residual number of critical errors in software systems is possible by using a combination of prediction models, a ratio of critical errors, and the total error number. Subsequently, the critical expected-value function at any point in time can be derived from the new solution method, provided the detection rate has been calculated using an appropriate estimation method. The presented method also makes it possible to estimate the critical failure rate. The approach is modelled on a real process and therefore describes two essential processes: detection and correction.
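
    The core idea, combining a conventional reliability-growth prediction of the total residual error count with an observed ratio of critical to total errors, can be sketched as follows. The Goel-Okumoto form and all parameter values are stand-ins for illustration, not the authors' calibrated model.

```python
import math

# Stand-in reliability growth model (Goel-Okumoto): cumulative detected
# errors mu(t) = a * (1 - exp(-b * t)), so a * exp(-b * t) remain.
a, b = 120.0, 0.05      # illustrative: expected total errors, detection rate per week
critical_ratio = 0.1    # illustrative observed fraction of errors that are critical

def residual_total(t):
    """Expected errors still undetected at time t."""
    return a * math.exp(-b * t)

def residual_critical(t):
    """Scale the total residual by the observed critical-error ratio."""
    return critical_ratio * residual_total(t)

r20 = residual_critical(20.0)   # expected residual critical errors after 20 weeks
```

    With these placeholder numbers, 12 critical errors are expected at release time and about 4.4 remain after 20 weeks of testing.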

  16. Comparison of the Full Outline of UnResponsiveness score and the Glasgow Coma Scale in predicting mortality in critically ill patients.

    Science.gov (United States)

    Wijdicks, Eelco F M; Kramer, Andrew A; Rohs, Thomas; Hanna, Susan; Sadaka, Farid; O'Brien, Jacklyn; Bible, Shonna; Dickess, Stacy M; Foss, Michelle

    2015-02-01

    Impaired consciousness has been incorporated in prediction models that are used in the ICU. The Glasgow Coma Scale has value but is incomplete and cannot be assessed accurately in intubated patients. The Full Outline of UnResponsiveness score may be a better predictor of mortality in critically ill patients. Setting: Thirteen ICUs at five U.S. hospitals. Patients: One thousand six hundred ninety-five consecutive unselected ICU admissions during a six-month period in 2012. Glasgow Coma Scale and Full Outline of UnResponsiveness score were recorded within 1 hour of admission. Baseline characteristics and physiologic components of the Acute Physiology and Chronic Health Evaluation system, as well as mortality, were linked to Glasgow Coma Scale/Full Outline of UnResponsiveness score information. Interventions: None. We recruited 1,695 critically ill patients, of which 1,645 with complete data could be linked to data in the Acute Physiology and Chronic Health Evaluation system. The area under the receiver operating characteristic curve for predicting ICU mortality was 0.715 (95% CI, 0.663-0.768) using the Glasgow Coma Scale and 0.742 (95% CI, 0.694-0.790) using the Full Outline of UnResponsiveness score, a statistically significant difference (p = 0.001). A similar but nonsignificant difference was found for predicting hospital mortality (p = 0.078). The respiratory and brainstem reflex components of the Full Outline of UnResponsiveness score showed a much wider range of mortality than the verbal component of the Glasgow Coma Scale. In multivariable models, the Full Outline of UnResponsiveness score was more useful than the Glasgow Coma Scale for predicting mortality. The Full Outline of UnResponsiveness score might be a better prognostic tool of ICU mortality than the Glasgow Coma Scale in critically ill patients, most likely as a result of incorporating brainstem reflexes and respiration into the score.
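
    The AUC figures compared above are, in principle, rank statistics: the area under the ROC curve equals the probability that a randomly chosen non-survivor receives a higher score than a randomly chosen survivor (the Mann-Whitney form). A minimal sketch on invented toy scores, not the study's data:

```python
def auc(scores_pos, scores_neg):
    """ROC area as P(score_pos > score_neg) + 0.5 * P(tie),
    computed by brute-force pairwise comparison."""
    wins = ties = 0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

# Toy severity scores (higher = predicted more likely to die).
died     = [14, 12, 10, 9]
survived = [8, 7, 10, 3, 5]
a = auc(died, survived)   # -> 0.925 for these toy values
```

    Comparing two scores then amounts to comparing two such areas on the same patients, with a paired test (as done in the study) accounting for the correlation between them.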

  17. Inverse and Predictive Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Syracuse, Ellen Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-27

    The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from simple one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions, to complex multidimensional models that are constrained by several types of data and result in more accurate predictions. While team members typically build models of geophysical characteristics of Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  18. Comparison of mortality prediction models and validation of SAPS II in critically ill burns patients.

    Science.gov (United States)

    Pantet, O; Faouzi, M; Brusselaers, N; Vernay, A; Berger, M M

    2016-06-30

    Specific burn outcome prediction scores such as the Abbreviated Burn Severity Index (ABSI), Ryan, Belgian Outcome of Burn Injury (BOBI) and revised Baux scores have been extensively studied. Validation studies of the critical care score SAPS II (Simplified Acute Physiology Score) have included burns patients but have not addressed them as a cohort. The study aimed at comparing their performance in a Swiss burns intensive care unit (ICU) and at observing whether they were affected by a standardized definition of inhalation injury. We conducted a retrospective cohort study including all consecutive ICU burn admissions (n=492) between 1996 and 2013; five epochs were defined by protocol changes. Variables required for SAPS II calculation, total body surface area burned (TBSA) and inhalation injury (systematic standardized diagnosis since 2006) were recorded. Study epochs were compared (χ2 test, ANOVA). Score performance was assessed by receiver operating characteristic curve analysis. SAPS II performed well (AUC 0.89), particularly in burns <40% TBSA. The Ryan and BOBI scores were least accurate, as they heavily weight inhalation injury.

  19. A 3-D CFD approach to the mechanistic prediction of forced convective critical heat flux at low quality

    International Nuclear Information System (INIS)

    Jean-Marie Le Corre; Cristina H Amon; Shi-Chune Yao

    2005-01-01

    Full text of publication follows: The prediction of the Critical Heat Flux (CHF) in a heat-flux-controlled boiling heat exchanger is important to assess the maximal thermal capability of the system. In the case of a nuclear reactor, CHF margin gains (using an improved mixing vane grid design, for instance) can allow power up-rates and enhanced operating flexibility. In general, current nuclear core design procedures use a quasi-1D approach to model the coolant thermal-hydraulic conditions within the fuel bundles, coupled with fully empirical CHF prediction methods. In addition, several CHF mechanistic models have been developed in the past and coupled with 1D and quasi-1D thermal-hydraulic codes. These mechanistic models have demonstrated reasonable CHF prediction characteristics and, more remarkably, correct parametric trends over a wide range of fluid conditions. However, since the phenomena leading to CHF are localized near the heater, models are needed to relate local quantities of interest to area-averaged quantities. As a consequence, large CHF prediction uncertainties may be introduced and 3D fluid characteristics (such as swirling flow) cannot be accounted for properly. Therefore, a fully mechanistic approach to CHF prediction is, in general, not possible using the current approach. The development of CHF-enhanced fuel assembly designs requires the use of more advanced 3D coolant property computations coupled with CHF mechanistic modeling. In the present work, the commercial CFD code CFX-5 is used to compute 3D coolant conditions in a vertical heated tube with upward flow. Several CHF mechanistic models at low quality available in the literature are coupled with the CFD code by developing adequate models between local coolant properties and local parameters of interest to predict CHF. The prediction performances of these models are assessed using CHF databases available in the open literature and the 1995 CHF look-up table. Since CFD can reasonably capture 3D fluid ...

  20. Archaeological predictive model set.

    Science.gov (United States)

    2015-03-01

    This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...

  1. Bayesian Poisson hierarchical models for crash data analysis: Investigating the impact of model choice on site-specific predictions.

    Science.gov (United States)

    Khazraee, S Hadi; Johnson, Valen; Lord, Dominique

    2018-08-01

    The Poisson-gamma (PG) and Poisson-lognormal (PLN) regression models are among the most popular means for motor vehicle crash data analysis. Both models belong to the Poisson-hierarchical family of models. While numerous studies have compared the overall performance of alternative Bayesian Poisson-hierarchical models, little research has addressed the impact of model choice on the expected crash frequency prediction at individual sites. This paper sought to examine whether there are any trends among candidate models' predictions, e.g., whether an alternative model's prediction for sites with certain conditions tends to be higher (or lower) than that from another model. In addition to the PG and PLN models, this research formulated a new member of the Poisson-hierarchical family of models: the Poisson-inverse gamma (PIGam). Three field datasets (from Texas, Michigan and Indiana) covering a wide range of over-dispersion characteristics were selected for analysis. This study demonstrated that the model choice can be critical when the calibrated models are used for prediction at new sites, especially when the data are highly over-dispersed. For all three datasets, the PIGam model would predict higher expected crash frequencies than would the PLN and PG models, in order, indicating a clear link between the models' predictions and the shape of their mixing distributions (i.e., gamma, lognormal, and inverse gamma, respectively). The thicker tails of the PIGam and PLN models (in order) may provide an advantage when the data are highly over-dispersed. The analysis results also illustrated a major deficiency of the Deviance Information Criterion (DIC) in comparing the goodness-of-fit of hierarchical models; models with drastically different sets of coefficients (and thus predictions for new sites) may yield similar DIC values, because the DIC only accounts for the parameters in the lowest (observation) level of the hierarchy and ignores the higher levels (regression coefficients
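
    The effect of the mixing-distribution shape can be illustrated with a small Monte Carlo sketch: Poisson counts whose means are drawn from a gamma versus an inverse-gamma distribution, both scaled to unit mean. The shape parameter and sample size are arbitrary illustrative choices, not values fitted to any of the study's datasets.

```python
import numpy as np

rng = np.random.default_rng(3)

# Draw Poisson means from two mixing distributions with the same mean (1.0)
# but different tail behavior, then compare the count distributions' tails.
n, shape = 200_000, 3.0

lam_g = rng.gamma(shape, 1.0 / shape, n)                 # gamma, mean 1
lam_ig = (shape - 1) / rng.gamma(shape, 1.0, n)          # inverse gamma, mean 1 (shape > 1)
y_g = rng.poisson(lam_g)                                 # Poisson-gamma (neg. binomial) counts
y_ig = rng.poisson(lam_ig)                               # Poisson-inverse-gamma counts

tail_g = np.mean(y_g >= 10)    # upper-tail mass of each count distribution
tail_ig = np.mean(y_ig >= 10)
```

    The polynomial tail of the inverse gamma puts far more mass on large means than the exponential tail of the gamma, so the Poisson-inverse-gamma counts show a markedly heavier upper tail at equal mean, which is the mechanism behind the PIGam model's higher predictions on over-dispersed data.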

  2. SLE in self-dual critical Z(N) spin systems: CFT predictions

    International Nuclear Information System (INIS)

    Santachiara, Raoul

    2008-01-01

    The Schramm-Loewner evolution (SLE) describes the continuum limit of domain walls at phase transitions in two-dimensional statistical systems. We consider here the SLE in Z(N) spin models at their self-dual critical point. For N=2 and N=3 these models correspond to the Ising and three-state Potts model. For N≥4 the critical self-dual Z(N) spin models are described in the continuum limit by non-minimal conformal field theories with central charge c≥1. By studying the representations of the corresponding chiral algebra, we show that two particular operators satisfy a two-level null vector condition which, for N≥4, presents an additional term coming from the action of the extra symmetry currents. For N=2,3 these operators correspond to the boundary-condition-changing operators associated to SLE_{16/3} (Ising model) and to SLE_{24/5} and SLE_{10/3} (three-state Potts model). We suggest a definition of the interfaces within the Z(N) lattice models. The scaling limit of these interfaces is expected to be described, at the self-dual critical point and for N≥4, by the SLE_{4(N+1)/(N+2)} and SLE_{4(N+2)/(N+1)} processes.

  3. A probabilistic model to predict clinical phenotypic traits from genome sequencing.

    Science.gov (United States)

    Chen, Yun-Ching; Douville, Christopher; Wang, Cheng; Niknafs, Noushin; Yeo, Grace; Beleva-Guthrie, Violeta; Carter, Hannah; Stenson, Peter D; Cooper, David N; Li, Biao; Mooney, Sean; Karchin, Rachel

    2014-09-01

    Genetic screening is becoming possible on an unprecedented scale. However, its utility remains controversial. Although most variant genotypes cannot be easily interpreted, many individuals nevertheless attempt to interpret their genetic information. Initiatives such as the Personal Genome Project (PGP) and Illumina's Understand Your Genome are sequencing thousands of adults, collecting phenotypic information and developing computational pipelines to identify the most important variant genotypes harbored by each individual. These pipelines consider database and allele frequency annotations and bioinformatics classifications. We propose that the next step will be to integrate these different sources of information to estimate the probability that a given individual has specific phenotypes of clinical interest. To this end, we have designed a Bayesian probabilistic model to predict the probability of dichotomous phenotypes. When applied to a cohort from PGP, predictions of Gilbert syndrome, Graves' disease, non-Hodgkin lymphoma, and various blood groups were accurate, as individuals manifesting the phenotype in question exhibited the highest, or among the highest, predicted probabilities. Thirty-eight PGP phenotypes (26%) were predicted with area-under-the-ROC curve (AUC)>0.7, and 23 (15.8%) of these were statistically significant, based on permutation tests. Moreover, in a Critical Assessment of Genome Interpretation (CAGI) blinded prediction experiment, the models were used to match 77 PGP genomes to phenotypic profiles, generating the most accurate prediction of 16 submissions, according to an independent assessor. Although the models are currently insufficiently accurate for diagnostic utility, we expect their performance to improve with growth of publicly available genomics data and model refinement by domain experts.

  5. Systems modeling and simulation applications for critical care medicine.

    Science.gov (United States)

    Dong, Yue; Chbat, Nicolas W; Gupta, Ashish; Hadzikadic, Mirsad; Gajic, Ognjen

    2012-06-15

    Critical care delivery is a complex, expensive, error prone, medical specialty and remains the focal point of major improvement efforts in healthcare delivery. Various modeling and simulation techniques offer unique opportunities to better understand the interactions between clinical physiology and care delivery. The novel insights gained from the systems perspective can then be used to develop and test new treatment strategies and make critical care delivery more efficient and effective. However, modeling and simulation applications in critical care remain underutilized. This article provides an overview of major computer-based simulation techniques as applied to critical care medicine. We provide three application examples of different simulation techniques, including a) pathophysiological model of acute lung injury, b) process modeling of critical care delivery, and c) an agent-based model to study interaction between pathophysiology and healthcare delivery. Finally, we identify certain challenges to, and opportunities for, future research in the area.

  6. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    In medical statistics, many alternative strategies are available for building a prediction model based on training data. Prediction models are routinely compared by means of their prediction performance in independent validation data. If only one data set is available for training and validation,...

  7. Assessment of data-assisted prediction by inclusion of crosslinking/mass-spectrometry and small angle X-ray scattering data in the 12th Critical Assessment of protein Structure Prediction experiment.

    Science.gov (United States)

    Tamò, Giorgio E; Abriata, Luciano A; Fonti, Giulia; Dal Peraro, Matteo

    2018-03-01

    Integrative modeling approaches attempt to combine experiments and computation to derive structure-function relationships in complex molecular assemblies. Despite their importance for the advancement of life sciences, benchmarking of existing methodologies is rather poor. The 12th round of the Critical Assessment of protein Structure Prediction (CASP) offered a unique niche to benchmark data and methods from two kinds of experiments often used in integrative modeling, namely residue-residue contacts obtained through crosslinking/mass-spectrometry (CLMS), and small-angle X-ray scattering (SAXS) experiments. Upon assessment of the models submitted by predictors for 3 targets assisted by CLMS data and 11 targets by SAXS data, we observed no significant improvement when compared to the best data-blind models, although most predictors did improve relative to their own data-blind predictions. Only for target Tx892 of the CLMS-assisted category and for target Ts947 of the SAXS-assisted category was there a net, albeit mild, improvement relative to the best data-blind predictions. We discuss here possible reasons for the relatively poor success, which point to inconsistencies in the data sources rather than in the methods, to which a few groups were less sensitive. We conclude with suggestions that could improve the potential of data integration in future CASP rounds in terms of experimental data production, methods development, data management and prediction assessment. © 2017 Wiley Periodicals, Inc.

  8. Template-based and free modeling of I-TASSER and QUARK pipelines using predicted contact maps in CASP12.

    Science.gov (United States)

    Zhang, Chengxin; Mortuza, S M; He, Baoji; Wang, Yanting; Zhang, Yang

    2018-03-01

    We develop two complementary pipelines, "Zhang-Server" and "QUARK", based on the I-TASSER and QUARK pipelines for template-based modeling (TBM) and free modeling (FM), and test them in the CASP12 experiment. The combination of I-TASSER and QUARK successfully folds three medium-size FM targets that have more than 150 residues, even though the interplay between the two pipelines still awaits further optimization. Newly developed sequence-based contact prediction by NeBcon plays a critical role in enhancing the quality of models produced by the new pipelines, particularly for FM targets. The inclusion of NeBcon-predicted contacts as restraints in the QUARK simulations results in an average TM-score of 0.41 for the best in top five predicted models, which is 37% higher than that of the QUARK simulations without contacts. In particular, seven targets are converted from non-foldable to foldable (TM-score > 0.5) by the use of contact restraints in the simulations. Another feature of the current pipelines is local structure quality prediction by ResQ, which provides robust residue-level modeling error estimation. Despite these successes, significant challenges remain in ab initio modeling of multi-domain proteins and in folding β-proteins with complicated topologies bound by long-range strand-strand interactions. Improvements in domain boundary and long-range contact prediction, as well as optimal use of the predicted contacts and multiple threading alignments, are critical to address these issues seen in the CASP12 experiment. © 2017 Wiley Periodicals, Inc.
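    The TM-score threshold quoted above (models counted as foldable at TM-score > 0.5) follows the standard Zhang-Skolnick definition. As a rough sketch, for a fixed superposition the score can be computed from the aligned residue-pair distances; the distance list here is a hypothetical input, and the full score additionally maximizes over superpositions:

```python
def tm_score(distances, l_target):
    """TM-score contribution for a set of aligned residue-pair distances
    (in angstroms) under one fixed superposition. The normalization d0
    depends only on the target length, so the score is length-independent."""
    d0 = 1.24 * (l_target - 15) ** (1.0 / 3.0) - 1.8
    return sum(1.0 / (1.0 + (d / d0) ** 2) for d in distances) / l_target
```

    A perfect model (all distances zero) scores 1.0; random structural similarity typically scores near 0.17.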

  9. The Biomantle-Critical Zone Model

    Science.gov (United States)

    Johnson, D. L.; Lin, H.

    2006-12-01

    Established fields that treat surface and near-surface processes, such as geomorphology, soil science, and pedology, are undergoing conceptual changes. Disciplinary self-examinations are rife. New practitioners are joining these fields, bringing novel and interdisciplinary ideas. New names such as "Earth's critical zone," "near surface geophysics," and "weathering engine" are being coined for research groups. Their agendas reflect an effort to integrate and reenergize established fields and break new ground. The new discipline "hydropedology" integrates soil science with hydrologic principles, and recent biodynamic investigations have spawned "biomantle" concepts and principles. One force behind these sea changes may be retrospectives whereby disciplines periodically re-invent themselves to meet new challenges. Such retrospectives may be manifest in the recent Science issue on "Soils, The Final Frontier" (11 June, 2004), and in recent National Research Council reports that have set challenges for science over the next three decades (Basic Research Opportunities in Earth Science, and Grand Challenges for the Environmental Sciences, both published in 2001). In keeping with such changes, we advocate the integration of biomantle and critical zone concepts into a general model of Earth's soil. (The scope of the model automatically includes the domain of hydropedology.) Our justification is that the integration makes for a more holistic, appealing, and realistic model for the domain of Earth's soil at any scale. The focus is on the biodynamics of the biomantle and water flow within the critical zone. In this general model the biomantle is the epidermis of the critical zone, which extends to the base of the aquifer. We define soil as the outer layer of landforms on planets and similar bodies altered by biological, chemical, and/or physical agents. Because Earth is the only planet with biological agents, as far as we know, it is the only one that has all

  10. Teaching for Art Criticism: Incorporating Feldman's Critical Analysis Learning Model in Students' Studio Practice

    Science.gov (United States)

    Subramaniam, Maithreyi; Hanafi, Jaffri; Putih, Abu Talib

    2016-01-01

    This study examined 30 first-year graphic design students' artwork, applying critical analysis using Feldman's model of art criticism. Data were analyzed quantitatively; descriptive statistical techniques were employed. The scores were viewed as mean scores and frequencies to determine students' performance in their critical ability.…

  11. Non-equilibrium effects on the two-phase flow critical phenomenon

    International Nuclear Information System (INIS)

    Sami, S.M.

    1988-01-01

    In the present study, the choking criterion for nonhomogeneous, nonequilibrium two-phase flow is obtained by solving the two-fluid model conservation equations. The method of characteristics is employed to predict the critical flow conditions. Critical flow is established from the magnitude of the characteristic slopes (velocities): critical flow conditions are reached when the smallest characteristic slope becomes equal to zero. Several expressions are developed to determine the nonequilibrium mass and heat exchanges in terms of derivatives of the system-dependent parameters. In addition, comprehensive transition flow regime maps are employed in the calculation of interfacial heat and momentum transfer rates. Numerical results reveal that the proposed model reliably predicts the critical two-phase flow phenomenon under different inlet conditions and compares well with other existing models.

  12. Temperature modelling and prediction for activated sludge systems.

    Science.gov (United States)

    Lippi, S; Rosso, D; Lubello, C; Canziani, R; Stenstrom, M K

    2009-01-01

    Temperature is an important factor affecting biomass activity, which is critical to maintaining efficient biological wastewater treatment, as well as physicochemical properties of the mixed liquor such as dissolved oxygen saturation and settling velocity. Controlling temperature is not normally possible for treatment systems, but incorporating factors that impact temperature in the design process, such as the aeration system, surface-to-volume ratio, and tank geometry, can reduce the range of temperature extremes and improve the overall process performance. Determining how much these design or upgrade options affect the tank temperature requires a temperature model that can be used with existing design methodologies. This paper presents a new steady-state temperature model developed by incorporating the best aspects of previously published models, introducing new functions for selected heat exchange paths and improving the method for predicting the effects of covering aeration tanks. Numerical improvements with embedded reference data provide simpler formulation, faster execution, and easier sensitivity analyses, using an ordinary spreadsheet. The paper presents several cases to validate the model.
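    A steady-state temperature model of this kind reduces to finding the tank temperature at which the heat exchange paths balance. The sketch below is a minimal illustration with hypothetical, linearized exchange terms solved by bisection; the published model uses more detailed functions for each path:

```python
def steady_state_temperature(t_air, t_inflow, q_flow, heat_bio,
                             ua_surface, ua_aeration):
    """Solve for the tank temperature (degC) at which the net heat flux is
    zero. All exchange terms here are simplified linear stand-ins."""
    def net_flux(t):
        advection = q_flow * 4186.0 * (t_inflow - t)  # inflow/outflow, cp of water (J/kg/K)
        surface = ua_surface * (t_air - t)            # surface + wall exchange (W/K)
        aeration = ua_aeration * (t_air - t)          # sensible/evaporative proxy (W/K)
        return advection + surface + aeration + heat_bio

    lo, hi = -50.0, 150.0                             # net_flux is decreasing in t
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if net_flux(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    With no biological heat input and air at the inflow temperature, the tank settles at that same temperature; a positive biological heat term raises the equilibrium above ambient.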

  13. Model structural uncertainty quantification and hydrologic parameter and prediction error analysis using airborne electromagnetic data

    DEFF Research Database (Denmark)

    Minsley, B. J.; Christensen, Nikolaj Kruse; Christensen, Steen

    Model structure, or the spatial arrangement of subsurface lithological units, is fundamental to the hydrological behavior of Earth systems. Knowledge of geological model structure is critically important in order to make informed hydrological predictions and management decisions. Model structure...... is never perfectly known, however, and incorrect assumptions can be a significant source of error when making model predictions. We describe a systematic approach for quantifying model structural uncertainty that is based on the integration of sparse borehole observations and large-scale airborne...... electromagnetic (AEM) data. Our estimates of model structural uncertainty follow a Bayesian framework that accounts for both the uncertainties in geophysical parameter estimates given AEM data, and the uncertainties in the relationship between lithology and geophysical parameters. Using geostatistical sequential...

  14. Self-organised criticality in the evolution of a thermodynamic model of rodent thermoregulatory huddling.

    Directory of Open Access Journals (Sweden)

    Stuart P Wilson

    2017-01-01

    Full Text Available A thermodynamic model of thermoregulatory huddling interactions between endotherms is developed. The model is presented as a Monte Carlo algorithm in which animals are iteratively exchanged between groups, with a probability of exchanging groups defined in terms of the temperature of the environment and the body temperatures of the animals. The temperature-dependent exchange of animals between groups is shown to reproduce a second-order critical phase transition, i.e., a smooth switch to huddling when the environment gets colder, as measured in recent experiments. A peak in the rate at which group sizes change, referred to as pup flow, is predicted at the critical temperature of the phase transition, consistent with a thermodynamic description of huddling, and with a description of the huddle as a self-organising system. The model was subjected to a simple evolutionary procedure, by iteratively substituting the physiologies of individuals that fail to balance the costs of thermoregulation (by huddling in groups) with the costs of thermogenesis (by contributing heat). The resulting tension between cooperative and competitive interactions was found to generate a phenomenon called self-organised criticality, as evidenced by the emergence of avalanches in fitness that propagate across many generations. The emergence of avalanches reveals how huddling can introduce correlations in fitness between individuals and thereby constrain evolutionary dynamics. Finally, a full agent-based model of huddling interactions is also shown to generate criticality when subjected to the same evolutionary pressures. The agent-based model is related to the Monte Carlo model in the way that a Vicsek model is related to an Ising model in statistical physics. Huddling therefore presents an opportunity to use thermodynamic theory to study an emergent adaptive animal behaviour. In more general terms, huddling is proposed as an ideal system for investigating the interaction
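    The Monte Carlo exchange step described above can be caricatured as follows. The body-temperature function and the move probability here are simplified stand-ins, not the paper's exact equations; in the full model the destination group is also chosen temperature-dependently:

```python
import math
import random


def huddle_step(groups, t_env, t_pref=37.0, k=1.5, rng=random):
    """One Monte Carlo exchange: pick a group, and move one of its animals to
    a randomly chosen group with a probability that rises as body temperature
    falls below the preferred temperature. Body temperature is a toy function
    of ambient temperature and group size (bigger huddles retain more heat)."""
    def t_body(size):
        return t_env + k * size

    i = rng.randrange(len(groups))
    if groups[i] == 0:
        return groups
    deficit = t_pref - t_body(groups[i])
    p_move = 1.0 / (1.0 + math.exp(-deficit))  # cold animals seek new groups
    if rng.random() < p_move:
        j = rng.randrange(len(groups))
        groups[i] -= 1
        groups[j] += 1
    return groups
```

    Iterating this step while sweeping t_env traces out the huddling transition: at warm temperatures animals disperse, while at cold temperatures large groups persist.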

  15. A Simple Predictive Method of Critical Flicker Detection for Human Healthy Precaution

    Directory of Open Access Journals (Sweden)

    Goh Zai Peng

    2015-01-01

    Full Text Available Interharmonics and flickers are interrelated. Based on the International Electrotechnical Commission (IEC) flicker standard, the critical flicker frequency for the human eye is located at 8.8 Hz. Eye strain, headaches, and in the worst case seizures may result from critical flicker. Therefore, this paper addresses a worthwhile research gap by investigating the interrelationship between the amplitudes of interharmonics and critical flicker in a 50 Hz power system. The significant finding of this paper is that the amplitudes of two particular interharmonics are able to detect critical flicker. These amplitudes are detected by an adaptive linear neuron (ADALINE), and critical flicker is then detected by substituting the amplitudes into the formulas derived in this paper. Simulation and experimental work shows that the accuracy of the proposed ADALINE-based algorithm is comparable to that of a typical Fluke power analyzer. In a nutshell, this simple predictive method for critical flicker detection has strong potential, due to its simplicity, to be applied in crowded places (such as offices, shopping complexes, and stadiums) as a health precaution.
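    An ADALINE amplitude detector of the kind described is essentially a two-weight LMS filter driven by sine and cosine references at the frequency of interest; the weights converge to the in-phase and quadrature components of that frequency. A minimal sketch, with an illustrative step size and signal layout:

```python
import math


def adaline_amplitude(signal, fs, f_target, mu=0.01):
    """Estimate the amplitude of one frequency component with an ADALINE:
    a two-weight LMS filter whose inputs are sine/cosine references at
    f_target. Returns sqrt(ws^2 + wc^2), the component amplitude."""
    ws = wc = 0.0
    for n, x in enumerate(signal):
        s = math.sin(2 * math.pi * f_target * n / fs)
        c = math.cos(2 * math.pi * f_target * n / fs)
        y = ws * s + wc * c      # ADALINE output
        e = x - y                # tracking error
        ws += 2 * mu * e * s     # LMS weight updates
        wc += 2 * mu * e * c
    return math.hypot(ws, wc)
```

    Feeding a pure 8.8 Hz component of amplitude 0.8 (sampled at 1 kHz) recovers an estimate close to 0.8 after the weights converge.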

  16. External validation of multivariable prediction models: a systematic review of methodological conduct and reporting

    Science.gov (United States)

    2014-01-01

    Background Before considering whether to use a multivariable (diagnostic or prognostic) prediction model, it is essential that its performance be evaluated in data that were not used to develop the model (referred to as external validation). We critically appraised the methodological conduct and reporting of external validation studies of multivariable prediction models. Methods We conducted a systematic review of articles describing some form of external validation of one or more multivariable prediction models indexed in PubMed core clinical journals published in 2010. Study data were extracted in duplicate on design, sample size, handling of missing data, reference to the original study developing the prediction models and predictive performance measures. Results 11,826 articles were identified and 78 were included for full review, which described the evaluation of 120 prediction models in participant data that were not used to develop the model. Thirty-three articles described both the development of a prediction model and an evaluation of its performance on a separate dataset, and 45 articles described only the evaluation of an existing published prediction model on another dataset. Fifty-seven percent of the prediction models were presented and evaluated as simplified scoring systems. Sixteen percent of articles failed to report the number of outcome events in the validation datasets. Fifty-four percent of studies made no explicit mention of missing data. Sixty-seven percent did not report evaluating model calibration, whilst most studies evaluated model discrimination. It was often unclear whether the reported performance measures were for the full regression model or for the simplified models. Conclusions The vast majority of studies describing some form of external validation of a multivariable prediction model were poorly reported, with key details frequently not presented. The validation studies were characterised by poor design, inappropriate handling
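    For reference, the two performance aspects the review highlights, discrimination and calibration, can be computed for a validation dataset as follows. This is a generic sketch, not code from the review:

```python
def c_statistic(y_true, y_prob):
    """Discrimination (c-statistic / AUC): the probability that a randomly
    chosen event receives a higher predicted risk than a randomly chosen
    non-event, with ties counted as one half."""
    events = [p for p, y in zip(y_prob, y_true) if y == 1]
    nonevents = [p for p, y in zip(y_prob, y_true) if y == 0]
    wins = sum(1.0 if e > n else 0.5 if e == n else 0.0
               for e in events for n in nonevents)
    return wins / (len(events) * len(nonevents))


def calibration_in_the_large(y_true, y_prob):
    """Calibration-in-the-large: observed event rate minus mean predicted
    risk. Zero for a model that is calibrated on average."""
    return sum(y_true) / len(y_true) - sum(y_prob) / len(y_prob)
```

    Reporting both measures (rather than discrimination alone) addresses the gap the review identifies in most published validation studies.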

  17. End-of-Discharge and End-of-Life Prediction in Lithium-Ion Batteries with Electrochemistry-Based Aging Models

    Science.gov (United States)

    Daigle, Matthew; Kulkarni, Chetan S.

    2016-01-01

    As batteries become increasingly prevalent in complex systems such as aircraft and electric cars, monitoring and predicting battery state of charge and state of health becomes critical. In order to accurately predict the remaining battery power to support system operations for informed operational decision-making, age-dependent changes in dynamics must be accounted for. Using an electrochemistry-based model, we investigate how key parameters of the battery change as aging occurs, and develop models to describe aging through these key parameters. Using these models, we demonstrate how we can (i) accurately predict end-of-discharge for aged batteries, and (ii) predict the end-of-life of a battery as a function of anticipated usage. The approach is validated through an experimental set of randomized discharge profiles.

  18. Criticality predicts maximum irregularity in recurrent networks of excitatory nodes.

    Directory of Open Access Journals (Sweden)

    Yahya Karimipanah

    Full Text Available A rigorous understanding of brain dynamics and function requires a conceptual bridge between multiple levels of organization, including neural spiking and network-level population activity. Mounting evidence suggests that neural networks of cerebral cortex operate at a critical regime, which is defined as a transition point between two phases, one of short-lasting and one of chaotic activity. However, despite the fact that criticality brings about certain functional advantages for information processing, its supporting evidence is still far from conclusive, as it has been mostly based on power law scaling of the sizes and durations of cascades of activity. Moreover, to what degree such a hypothesis could explain some fundamental features of neural activity is still largely unknown. One of the most prevalent features of cortical activity in vivo is the irregularity of spike trains, which is measured in terms of a coefficient of variation (CV) larger than one. Here, using a minimal computational model of excitatory nodes, we show that irregular spiking (CV > 1) naturally emerges in a recurrent network operating at criticality. More importantly, we show that even in the presence of other sources of spike irregularity, being at criticality maximizes the mean coefficient of variation of neurons, thereby maximizing their spike irregularity. Furthermore, we show that such maximized irregularity results in maximum correlation between neuronal firing rates and their corresponding spike irregularity (measured in terms of CV). On the one hand, using a model in the universality class of directed percolation, we propose new hallmarks of criticality at the single-unit level, which could be applicable to any network of excitable nodes. On the other hand, given the controversy around the neural criticality hypothesis, we discuss the limitations of this approach to neural systems and to what degree they support the criticality hypothesis in real neural networks. Finally
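    The CV statistic used throughout this abstract is the coefficient of variation of inter-spike intervals: CV = 0 for perfectly regular spiking, CV = 1 for a Poisson process, and CV > 1 indicates more irregular spiking. A minimal computation:

```python
def isi_cv(spike_times):
    """Coefficient of variation of inter-spike intervals: the standard
    deviation of the intervals divided by their mean."""
    isis = [t2 - t1 for t1, t2 in zip(spike_times, spike_times[1:])]
    mean = sum(isis) / len(isis)
    var = sum((x - mean) ** 2 for x in isis) / len(isis)
    return var ** 0.5 / mean
```

    For example, a perfectly regular spike train yields CV = 0, while highly variable intervals push CV above 1.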

  19. Using plural modeling for predicting decisions made by adaptive adversaries

    International Nuclear Information System (INIS)

    Buede, Dennis M.; Mahoney, Suzanne; Ezell, Barry; Lathrop, John

    2012-01-01

    Incorporating an appropriate representation of the likelihood of terrorist decision outcomes into risk assessments associated with weapons of mass destruction attacks has been a significant problem for countries around the world. Developing these likelihoods gets at the heart of the most difficult predictive problems: human decision making, adaptive adversaries, and adversaries about which very little is known. A plural modeling approach is proposed that incorporates estimates of all critical uncertainties: who is the adversary and what skills and resources are available to him, what information is known to the adversary and what perceptions of the important facts are held by this group or individual, what does the adversary know about the countermeasure actions taken by the government in question, what are the adversary's objectives and the priorities of those objectives, what would trigger the adversary to start an attack and what kind of success does the adversary desire, how realistic is the adversary in estimating the success of an attack, how does the adversary make a decision and what type of model best predicts this decision-making process. A computational framework is defined to aggregate the predictions from a suite of models, based on this broad array of uncertainties. A validation approach is described that deals with a significant scarcity of data.

  20. An utilization of liquid sublayer dryout mechanism in predicting critical heat flux under low pressure and low velocity conditions in round tubes

    International Nuclear Information System (INIS)

    Lee, Kwang-Won; Baik, Se-Jin; Ro, Tae-Sun

    2000-01-01

    From a theoretical assessment of extensive critical heat flux (CHF) data under low pressure and low velocity (LPLV) conditions, it was found that much of the CHF data would not be well predicted by a normal annular film dryout (AFD) mechanism, although the flow patterns were identified as annular-mist flow. To predict these CHF data, a liquid sublayer dryout (LSD) mechanism has been newly utilized in developing a mechanistic CHF model based on each identified CHF mechanism. This mechanism postulates that the CHF occurrence is caused by dryout of the thin liquid sublayer resulting from the annular film separating or breaking down due to nucleate boiling in the annular film or hydrodynamic fluctuation. In principle, this mechanism supports the experimental evidence of residual film flow rate at the CHF location, which cannot be explained by the AFD mechanism. For a comparative assessment of each mechanism, the CHF model based on the LSD mechanism is developed together with that based on the AFD mechanism. The validation of these models is performed on 1406 CHF data points ranging over P = 0.1-2 MPa, G = 4-499 kg m^-2 s^-1, and L/D = 4-402. This model validation shows that 1055 and 231 CHF data points are predicted within a ±30% error bound by the LSD mechanism and the AFD mechanism, respectively. However, some CHF data whose critical qualities are <0.4 or whose tube length-to-diameter ratios are <70 are considerably overestimated by the CHF model based on the LSD mechanism. These overestimations seem to be caused by an inadequate CHF mechanism classification and an insufficient consideration of the flow instability effect on CHF. Further studies of a new classification criterion screening the CHF data affected by flow instabilities, as well as a new bubble detachment model for LPLV conditions, are needed to improve the model accuracy.

  1. The 4-parameter Compressible Packing Model (CPM) including a critical cavity size ratio

    Science.gov (United States)

    Roquier, Gerard

    2017-06-01

    The 4-parameter Compressible Packing Model (CPM) has been developed to predict the packing density of mixtures constituted by bidisperse spherical particles. The four parameters are: the wall effect and the loosening effect coefficients, the compaction index and a critical cavity size ratio. The two geometrical interactions have been studied theoretically on the basis of a spherical cell centered on a secondary class bead. For the loosening effect, a critical cavity size ratio, below which a fine particle can be inserted into a small cavity created by touching coarser particles, is introduced. This is the only parameter which requires adaptation to extend the model to other types of particles. The 4-parameter CPM demonstrates its efficiency on frictionless glass beads (300 values), spherical particles numerically simulated (20 values), round natural particles (125 values) and crushed particles (335 values) with correlation coefficients equal to respectively 99.0%, 98.7%, 97.8%, 96.4% and mean deviations equal to respectively 0.007, 0.006, 0.007, 0.010.

  2. Accurate and dynamic predictive model for better prediction in medicine and healthcare.

    Science.gov (United States)

    Alanazi, H O; Abdullah, A H; Qureshi, K N; Ismail, A S

    2018-05-01

    Information and communication technologies (ICTs) have brought new integrated operations and methods to all fields of life. The health sector has also adopted new technologies to improve its systems and provide better services to customers. Predictive models in health care are likewise influenced by new technologies for predicting different disease outcomes. However, existing predictive models still suffer from some limitations in predictive performance. In order to improve predictive model performance, this paper proposes a predictive model that classifies disease predictions into different categories. To evaluate this model's performance, this paper uses traumatic brain injury (TBI) datasets. TBI is a serious condition worldwide that needs more attention due to its severe impact on human life. The proposed predictive model improves the predictive performance for TBI. The TBI dataset was developed, and its features set, with the approval of neurologists. The experimental results show that the proposed model achieves significant results in terms of accuracy, sensitivity, and specificity.

  3. Modeling of LVRF Critical Experiments in ZED-2 Using WIMS9A/PANTHER and MCNP5

    International Nuclear Information System (INIS)

    Sissaoui, M.T.; Lebenhaft, J.R; Carlson, P.A.

    2008-01-01

    The accuracy of WIMS9A/PANTHER and MCNP5 in modeling D2O-moderated, and H2O-, D2O- or air-cooled, doubly heterogeneous lattices of fuel clusters was demonstrated using Low Void Reactivity Fuel (LVRF) substitution experiments in the ZED-2 critical facility. MCNP5 with ENDF/B-VI (Release 5) under-predicted k-eff but gave excellent coolant void reactivity (CVR) bias values. WIMS9A/PANTHER with JEF-2.2 over-predicted k-eff and under-predicted the CVR bias relative to MCNP5 by 100 pcm to 200 pcm. Both codes reproduced the measured axial and radial flux shapes accurately. (authors)

  4. A critical review of clarifier modelling

    DEFF Research Database (Denmark)

    Plósz, Benedek; Nopens, Ingmar; Rieger, Leiv

    This outline paper aims to provide a critical review of secondary settling tank (SST) modelling approaches used in current wastewater engineering and develop tools not yet applied in practice. We address the development of different tier models and experimental techniques in the field...

  5. Model for the resistive critical current transition in composite superconductors

    International Nuclear Information System (INIS)

    Warnes, W.H.

    1988-01-01

    Much of the research investigating technological type-II superconducting composites relies on the measurement of the resistive critical current transition. We have developed a model for the resistive transition which improves on older models by allowing for the very different nature of monofilamentary and multifilamentary composite structures. The monofilamentary model allows for axial current flow around critical current weak links in the superconducting filament. The multifilamentary model incorporates an additional radial current transfer between neighboring filaments. The development of both models is presented. It is shown that the models are useful for extracting more information from the experimental data than was formerly possible. Specific information obtainable from the experimental voltage-current characteristic includes the distribution of critical currents in the composite, the average critical current of the distribution, the range of critical currents in the composite, the field and temperature dependence of the distribution, and the fraction of the composite dissipating energy in flux flow at any current. This additional information about the distribution of critical currents may be helpful in leading toward a better understanding of flux pinning in technological superconductors. Comparison of the models with several experiments is given and shown to be in reasonable agreement. Implications of the models for the measurement of critical currents in technological composites are presented and discussed with reference to basic flux pinning studies in such composites.
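    For this class of models, the distribution of critical currents is commonly recovered from the curvature of the measured voltage-current characteristic: when each filament element dissipates linearly above its own critical current, d2V/dI2 is proportional to the distribution f(Ic). A sketch under that assumption (uniform current spacing assumed; real data would also need smoothing):

```python
def critical_current_distribution(currents, voltages):
    """Estimate the (unnormalized) critical current distribution from a
    V-I curve via a central second difference: d2V/dI2 is proportional
    to f(Ic) under linear flux-flow dissipation above each Ic."""
    h = currents[1] - currents[0]     # uniform current spacing assumed
    f = [(voltages[i - 1] - 2 * voltages[i] + voltages[i + 1]) / (h * h)
         for i in range(1, len(voltages) - 1)]
    return currents[1:-1], f
```

    For a composite with a single sharp critical current Ic0, V grows quadratically above Ic0, so the second difference is flat above Ic0 and zero below it.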

  6. Efficient Actor-Critic Algorithm with Hierarchical Model Learning and Planning

    Science.gov (United States)

    Fu, QiMing

    2016-01-01

    To improve the convergence rate and the sample efficiency, two efficient learning methods, AC-HMLP and RAC-HMLP (AC-HMLP with ℓ2-regularization), are proposed by combining the actor-critic algorithm with hierarchical model learning and planning. The hierarchical models, consisting of a local and a global model, which are learned at the same time as the value function and the policy, are approximated by local linear regression (LLR) and linear function approximation (LFA), respectively. Both the local model and the global model are applied to generate samples for planning; the former is used only if the state-prediction error does not surpass the threshold at each time step, while the latter is utilized at the end of each episode. The purpose of taking both models is to improve the sample efficiency and accelerate the convergence rate of the whole algorithm through fully utilizing the local and global information. Experimentally, AC-HMLP and RAC-HMLP are compared with three representative algorithms on two Reinforcement Learning (RL) benchmark problems. The results demonstrate that they perform best in terms of convergence rate and sample efficiency. PMID:27795704
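    The error-gated use of the local model described above (plan with it only when its one-step prediction error is small) can be sketched as follows; the function names and the Euclidean error criterion are illustrative assumptions, not the paper's exact formulation:

```python
def plan_with_local_model(local_model, state, action, next_state, threshold):
    """Gate planning on model accuracy: return an imagined next state from
    the local model only when its one-step prediction error against the
    observed next state is below the threshold; otherwise skip planning."""
    predicted = local_model(state, action)
    error = sum((p - s) ** 2 for p, s in zip(predicted, next_state)) ** 0.5
    if error <= threshold:
        return predicted   # trust the model: use its sample for planning
    return None            # model unreliable here: fall back to real samples
```

    Gating in this way keeps inaccurate model rollouts from corrupting the value-function and policy updates in regions where the local linear fit is poor.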

  7. Multi-model analysis in hydrological prediction

    Science.gov (United States)

    Lanthier, M.; Arsenault, R.; Brissette, F.

    2017-12-01

    Hydrologic modelling is, by nature, a simplification of the real-world hydrologic system. Ensemble hydrological predictions thus obtained do not present the full range of possible streamflow outcomes, producing ensembles that demonstrate errors in variance such as under-dispersion. Past studies show that lumped models used in prediction mode can return satisfactory results, especially when there is not enough information available on the watershed to run a distributed model. But all lumped models greatly simplify the complex processes of the hydrologic cycle. To generate more spread in the hydrologic ensemble predictions, multi-model ensembles have been considered. In this study, the aim is to propose and analyse a method that gives an ensemble streamflow prediction that properly represents the forecast probabilities with reduced ensemble bias. To achieve this, three simple lumped models are used to generate an ensemble. These are also combined using multi-model averaging techniques, which generally produce a more accurate hydrograph than the best of the individual models in simulation mode. This new combined predictive hydrograph is added to the ensemble, thus creating a larger ensemble which may improve the variability while also improving the ensemble mean bias. The quality of the predictions is then assessed over different periods: 2 weeks, 1 month, 3 months and 6 months, using a PIT histogram of the percentiles of the observed volumes with respect to the volumes of the ensemble members. Initially, the models were run using historical weather data to generate synthetic flows. This worked for the individual models, but not for the multi-model or the large ensemble. Consequently, by performing data assimilation at each prediction period and thus adjusting the initial states of the models, the PIT histogram could be constructed using the observed flows while allowing the use of the multi-model predictions. The under-dispersion has been
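    The PIT histogram used to assess these ensembles is built from the rank of each observation within its ensemble; for a calibrated ensemble the resulting values are uniform on [0, 1], while a U-shaped histogram indicates under-dispersion. A minimal sketch:

```python
def pit_values(observations, ensembles):
    """Probability Integral Transform values: for each observation, the
    fraction of ensemble members not exceeding it. Histogramming these
    values diagnoses ensemble calibration (uniform = calibrated)."""
    pits = []
    for obs, members in zip(observations, ensembles):
        rank = sum(1 for m in members if m <= obs)
        pits.append(rank / len(members))
    return pits
```

    For example, an observation of 5.0 against ensemble members [1, 2, 6, 8] yields a PIT value of 0.5.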

  8. A critical flow model for the Cathena thermalhydraulic code

    International Nuclear Information System (INIS)

    Popov, N.K.; Hanna, B.N.

    1990-01-01

    The calculation of the critical flow rate, e.g., of choked flow through a break, is required for simulating a loss-of-coolant transient in a reactor or reactor-like experimental facility. A model was developed to calculate the flow rate through the break for given geometrical parameters near the break and fluid parameters upstream of the break, for ordinary water as well as heavy water, with or without non-condensible gases. This model has been incorporated in CATHENA, a one-dimensional, two-fluid thermalhydraulic code. In the CATHENA code a standard staggered-mesh, finite-difference representation is used to solve the thermalhydraulic equations. The model compares the fluid mixture velocity, calculated using the CATHENA momentum equations, with a critical velocity. When the mixture velocity is smaller than the critical velocity, the flow is assumed to be subcritical, and the model remains passive. When the fluid mixture velocity is higher than the critical velocity, the model sets the fluid mixture velocity equal to the critical velocity. In this paper the critical velocity at a link (momentum cell) is first estimated separately for single-phase liquid, two-phase, or single-phase gas flow conditions at the upstream node (mass/energy cell). In all three regimes non-condensible gas can be present in the flow. For single-phase liquid flow, the critical velocity is estimated using a Bernoulli-type equation, with the pressure at the link estimated by the pressure undershoot method

  9. Integrating artificial neural networks and empirical correlations for the prediction of water-subcooled critical heat flux

    International Nuclear Information System (INIS)

    Mazzola, A.

    1997-01-01

    The critical heat flux (CHF) is an important parameter for the design of nuclear reactors, heat exchangers and other boiling heat transfer units. Recently, the CHF in water-subcooled flow boiling at high mass flux and subcooling has been thoroughly studied in relation to the cooling of high-heat-flux components in thermonuclear fusion reactors. Due to the specific thermal-hydraulic situation, very few of the existing correlations, originally developed for operating conditions typical of pressurized water reactors, are able to provide consistent predictions of water-subcooled-flow-boiling CHF at high heat fluxes. Therefore, alternative predicting techniques are being investigated. Among these, artificial neural networks (ANN) have the advantage of not requiring a formal model structure to fit the experimental data; however, their main drawbacks are the loss of model transparency ('black-box' character) and the lack of any indicator for evaluating accuracy and reliability of the ANN answer when 'never-seen' patterns are presented. In the present work, the prediction of CHF is approached by a hybrid system which couples a heuristic correlation with a neural network. The ANN's role is to predict a datum-dependent parameter required by the analytical correlation; this parameter was instead set to a constant value obtained by usual best-fitting techniques when a purely analytical approach was adopted. Upper and lower boundaries can be assigned to the parameter value, thus avoiding unexpected and unpredictable answer failures. The present approach maintains the advantage of the analytical model analysis, and it partially overcomes the 'black-box' character typical of the straight application of ANNs because the neural network's role is limited to tuning the correlation. The proposed methodology allows us to achieve accurate results and is likely to be suitable for thermal-hydraulic and heat transfer data processing. (author)
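    The hybrid scheme, an ANN supplying one bounded tuning parameter to an analytical correlation, can be sketched generically as below. The correlation form and parameter bounds are hypothetical placeholders, and any regressor (here a plain callable) can stand in for the ANN:

```python
def hybrid_chf(features, correlation, param_model, p_min, p_max):
    """Hybrid prediction: a data-driven model supplies one tuning parameter
    of an analytical correlation. Clipping to [p_min, p_max] bounds the
    damage an unreliable network answer can do on never-seen inputs."""
    p = param_model(features)
    p = max(p_min, min(p_max, p))  # keep the black-box output in range
    return correlation(features, p)
```

    With a toy correlation of the form p * sqrt(mass_flux) and a regressor that returns an out-of-range value, the clipping guarantees the prediction stays within the physically sensible envelope of the correlation.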

  10. An analytical model for the prediction of the dynamic response of premixed flames stabilized on a heat-conducting perforated plate

    KAUST Repository

    Kedia, Kushal S.

    2013-01-01

    The dynamic response of a premixed flame stabilized on a heat-conducting perforated plate depends critically on the coupled thermal interaction between the flame and the plate. The objective of this paper is to develop an analytical model to capture this coupling. The model predicts the mean flame base standoff distance; the flame base area, curvature and speed; and the burner plate temperature, given the operating conditions: the mean velocity, temperature and equivalence ratio of the reactants, and the thermal conductivity and perforation ratio of the burner. This coupled model is combined with our flame transfer function (FTF) model to predict the dynamic response of the flame to velocity perturbations. We show that modeling the thermal coupling between the flame and the burner, while accounting for the two-dimensionality of the former, is critical to predicting dynamic response characteristics such as the overshoot in the gain curve (resonant condition) and the phase delay. Good agreement with numerical and experimental results is demonstrated over a range of conditions. © 2012 The Combustion Institute. Published by Elsevier Inc. All rights reserved.

  11. Speech Intelligibility Prediction Based on Mutual Information

    DEFF Research Database (Denmark)

    Jensen, Jesper; Taal, Cees H.

    2014-01-01

    This paper deals with the problem of predicting the average intelligibility of noisy and potentially processed speech signals, as observed by a group of normal hearing listeners. We propose a model which performs this prediction based on the hypothesis that intelligibility is monotonically related...... to the mutual information between critical-band amplitude envelopes of the clean signal and the corresponding noisy/processed signal. The resulting intelligibility predictor turns out to be a simple function of the mean-square error (mse) that arises when estimating a clean critical-band amplitude using...... a minimum mean-square error (mmse) estimator based on the noisy/processed amplitude. The proposed model predicts that speech intelligibility cannot be improved by any processing of noisy critical-band amplitudes. Furthermore, the proposed intelligibility predictor performs well ( ρ > 0.95) in predicting...

  12. Prediction is difficult, preparation is critical and possible

    DEFF Research Database (Denmark)

    Zilli, Romano; Dalton, Luke; Ooms, Wim

    at the level of the source of infection, transmission pathways, and the outcomes. Changes to such challenges and uncertainties are inevitable and foresight in identifying strategies is required for us to prepare for a sustainable future. The EU-funded Global Network on Infectious Diseases of Animals...... and technological needs, including research capacity and support structures to prevent, control or mitigate animal health and zoonotic challenges for 2030 and beyond. While our ability to predict the future is often limited, being prepared to engage with whatever may happen is critical. Methods: Foresight workshops...... to give an overall list in which transnational data sharing, knowledge transfer, public-private partnerships, vaccinology/immunology, vector control, antimicrobial resistance, socioeconomics, genetics/bioinformatics and utilisation of big data rated highly. Conclusion: The outputs of the STAR...

  13. Critical-Inquiry-Based-Learning: Model of Learning to Promote Critical Thinking Ability of Pre-service Teachers

    Science.gov (United States)

    Prayogi, S.; Yuanita, L.; Wasis

    2018-01-01

    This study aimed to develop a Critical-Inquiry-Based-Learning (CIBL) model to promote the critical thinking (CT) ability of preservice teachers. The CIBL learning model was developed to meet the criteria of validity, practicality, and effectiveness. Validation of the model involved 4 expert validators through a focus group discussion (FGD) mechanism. The CIBL learning model was declared valid to promote CT ability, with a validity level (Va) of 4.20 and a reliability (r) of 90.1% (very reliable). The practicality of the model was evaluated in an implementation involving 17 preservice teachers. The CIBL learning model was declared practical, as measured by learning feasibility (LF) with very good criteria (LF-score = 4.75). The effectiveness of the model was evaluated from the improvement in CT ability after its implementation. CT ability was evaluated using a scoring technique adapted from the Ennis-Weir Critical Thinking Essay Test. The average CT score on the pretest was -1.53 (uncritical criteria), whereas on the posttest it was 8.76 (critical criteria), giving an N-gain score of 0.76 (high criteria). Based on these results, it can be concluded that the developed CIBL learning model is feasible for promoting the CT ability of preservice teachers.
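The N-gain figure quoted above is the usual normalized gain: the fraction of the possible improvement actually achieved. A minimal sketch (the scale maximum used in the example is a hypothetical value, not taken from the study):

```python
def normalized_gain(pretest, posttest, max_score):
    """Normalized gain: improvement achieved divided by the
    improvement that was possible (max_score - pretest)."""
    return (posttest - pretest) / (max_score - pretest)

# Hypothetical example: pretest 2.0, posttest 8.0 on a 10-point scale.
g = normalized_gain(2.0, 8.0, 10.0)  # -> 0.75
```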

  14. Consistent Set of Experiments from ICSBEP Handbook for Evaluation of Criticality Calculation Prediction of Apparatus of External Fuel Cycle with Different Fuel

    Energy Technology Data Exchange (ETDEWEB)

    Golovko, Yury E. [FSUE ' SSC RF-IPPE' , 249033, Bondarenko Square 1, Obninsk (Russian Federation)

    2008-07-01

    Experiments with plutonium, low-enriched uranium and uranium-233 from the ICSBEP Handbook are considered in this paper. Among these experiments, only those were selected which seem most relevant to the evaluation of the uncertainty of the critical mass of mixtures of plutonium, low-enriched uranium or uranium-233 with light water. All selected experiments were examined, covariance matrices of criticality uncertainties were developed, and some uncertainties were revised. A statistical analysis of these experiments was performed, and some contradictions were discovered and eliminated. The accuracy of criticality calculation predictions was evaluated using the internally consistent set of experiments with plutonium, low-enriched uranium and uranium-233 that remained after the statistical analysis. The application objects for the evaluation of calculational prediction of criticality were water-reflected spherical systems of homogeneous aqueous mixtures of plutonium, low-enriched uranium or uranium-233 of different concentrations, which are simplified models of apparatus of the external fuel cycle. It is shown that the procedure allows a considerable reduction of the uncertainty in k{sub eff} caused by the uncertainties in neutron cross-sections. It is also shown that the results are practically independent of the initial covariance matrices of nuclear data uncertainties. (authors)

  15. Critical geometry of a thermal big bang

    Science.gov (United States)

    Afshordi, Niayesh; Magueijo, João

    2016-11-01

    We explore the space of scalar-tensor theories containing two nonconformal metrics, and find a discontinuity pointing to a "critical" cosmological solution. Due to the different maximal speeds of propagation for matter and gravity, the cosmological fluctuations start off inside the horizon even without inflation, and will more naturally have a thermal origin (since there is never vacuum domination). The critical model makes an unambiguous, nontuned prediction for the spectral index of the scalar fluctuations: nS=0.96478 (64 ) . Considering also that no gravitational waves are produced, we have unveiled the most predictive model on offer. The model has a simple geometrical interpretation as a probe 3-brane embedded in an E AdS2×E3 geometry.

  16. An MEG signature corresponding to an axiomatic model of reward prediction error.

    Science.gov (United States)

    Talmi, Deborah; Fuentemilla, Lluis; Litvak, Vladimir; Duzel, Emrah; Dolan, Raymond J

    2012-01-02

    Optimal decision-making is guided by evaluating the outcomes of previous decisions. Prediction errors are theoretical teaching signals which integrate two features of an outcome: its inherent value and the prior expectation of its occurrence. To uncover the magnetic signature of prediction errors in the human brain we acquired magnetoencephalographic (MEG) data while participants performed a gambling task. Our primary objective was to use formal criteria, based upon an axiomatic model (Caplin and Dean, 2008a), to determine the presence and timing profile of MEG signals that express prediction errors. We report analyses at the sensor level, implemented in SPM8, time locked to outcome onset. We identified, for the first time, an MEG signature of prediction error, which emerged approximately 320 ms after an outcome and was expressed as an interaction between outcome valence and probability. This signal followed earlier, separate signals for outcome valence and probability, which emerged approximately 200 ms after an outcome. Strikingly, the time course of the prediction error signal, as well as the early valence signal, resembled the Feedback-Related Negativity (FRN). In simultaneously acquired EEG data we obtained a robust FRN, but the win and loss signals that comprised this difference wave did not comply with the axiomatic model. Our findings motivate an explicit examination of the critical issue of timing embodied in computational models of prediction errors as seen in human electrophysiological data. Copyright © 2011 Elsevier Inc. All rights reserved.

  17. Assessment of fluid-to-fluid modelling of critical heat flux in horizontal 37-element bundle flows

    International Nuclear Information System (INIS)

    Yang, S.K.

    2006-01-01

    Fluid-to-fluid modelling laws of critical heat flux (CHF) available in the literature were reviewed. The applicability of the fluid-to-fluid modelling laws was assessed using available data ranging from low to high mass fluxes in horizontal 37-element bundles simulating a CANDU fuel string. Correlations consisting of dimensionless similarity groups were derived using modelling fluid data (Freon-12) to predict water CHF data in horizontal 37-element bundles with uniform and non-uniform axial heat flux distribution (AFD). The results showed that at mass fluxes higher than ∼4,000 kg/m²s (water equivalent value), the vertical fluid-to-fluid modelling laws of Ahmad (1973) and Katto (1979) predict water CHF in horizontal 37-element bundles with non-uniform AFD with average errors of 1.4% and 3.0% and RMS errors of 5.9% and 6.1%, respectively. The Francois and Berthoud (2003) fluid-to-fluid modelling law predicts CHF in non-uniformly heated 37-element bundles in the horizontal orientation with an average error of 0.6% and an RMS error of 10.4% over the available range of 2,000 to 6,200 kg/m²s. (author)
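The average and RMS errors quoted above are the usual relative-error statistics over a set of predicted versus measured CHF values. A minimal sketch (the data values are hypothetical):

```python
import math

def avg_and_rms_error(predicted, measured):
    """Relative errors (%) of predictions against measured data:
    returns (average error, RMS error)."""
    errs = [100.0 * (p - m) / m for p, m in zip(predicted, measured)]
    avg = sum(errs) / len(errs)
    rms = math.sqrt(sum(e * e for e in errs) / len(errs))
    return avg, rms

# One prediction 10% high, one 10% low: average error cancels to 0%,
# while the RMS error still reports the 10% scatter.
avg, rms = avg_and_rms_error([110.0, 90.0], [100.0, 100.0])
```

The contrast between the two measures is why both are reported: a small average error alone can hide large scatter.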

  18. On the applicability of the critical state model to the description of electromagnetic properties of high-Tc superconductors

    Energy Technology Data Exchange (ETDEWEB)

    Fisher, L.M.; Il' in, N.V.; Voloshin, I.F. (All-Russian Electrical Engineering Inst., Moscow (Russia)); Makarov, N.M.; Yampol' skii, V.A. (Inst. for Radiophysics and Electronics, Ukr. Acad. Sci., Karkov (Ukraine)); Perez Rodriguez, F. (Inst. de Fisica, Univ. Autonoma de Puebla, Rue (Mexico)); Snyder, R.L. (New York State Coll. of Ceramics, Alfred Univ. (United States))

    1993-02-20

    The frequency dependence of the surface impedance of superconductors has been studied experimentally and theoretically in the radio-frequency range. An essential deviation from the linear law predicted by the usual critical state model was found. The character of this deviation depends qualitatively on the amplitude of the radio wave. We have established the frequency limits of applicability of the traditional critical state model. The results obtained point to an explanation in the frame of a modified model in which we take into account the contribution of a dissipative term to the screening current. The value of this term is connected with the V-I plot of the superconductor, so it is possible to obtain information about the V-I characteristics by this contactless method. (orig.).

  19. Assessing the ability of operational snow models to predict snowmelt runoff extremes (Invited)

    Science.gov (United States)

    Wood, A. W.; Restrepo, P. J.; Clark, M. P.

    2013-12-01

    In the western US, the snow accumulation and melt cycle of winter and spring plays a critical role in the region's water management strategies. Consequently, the ability to predict snowmelt runoff at time scales from days to seasons is a key input for decisions in reservoir management, whether for avoiding flood hazards or supporting environmental flows through the scheduling of releases in spring, or for allocating releases for multi-state water distribution in dry seasons of the year (using reservoir systems to provide an invaluable buffer for many sectors against drought). Runoff forecasts thus have important benefits at both the wet and dry extremes of the climatological spectrum. The importance of predicting the snow cycle motivates an assessment of the strengths and weaknesses of the US's central operational snow model, SNOW17, in contrast to process-modeling alternatives, as they relate to simulating observed snowmelt variability and extremes. To this end, we use a flexible modeling approach that enables an investigation of different choices in model structure, including model physics, parameterization and degree of spatiotemporal discretization. We draw from examples of recent extreme events in western US watersheds and an overall assessment of retrospective model performance to identify fruitful avenues for advancing the modeling basis for the operational prediction of snow-related runoff extremes.

  20. Predictive Modeling in Race Walking

    Directory of Open Access Journals (Sweden)

    Krzysztof Wiktorowicz

    2015-01-01

    Full Text Available This paper presents the use of linear and nonlinear multivariable models as tools to support the training process of race walkers. These models are calculated using data collected from race walkers’ training events, and they are used to predict the result over a 3 km race based on training loads. The material consists of 122 training plans for 21 athletes. In order to choose the best model, the leave-one-out cross-validation method is used. The main contribution of the paper is to propose nonlinear modifications of linear models in order to achieve a smaller prediction error. It is shown that the best model is a modified LASSO regression with quadratic terms in the nonlinear part. This model has the smallest prediction error and a simplified structure obtained by eliminating some of the predictors.

  1. Handling Uncertainty in Social Lending Credit Risk Prediction with a Choquet Fuzzy Integral Model

    OpenAIRE

    Namvar, Anahita; Naderpour, Mohsen

    2018-01-01

    As one of the main business models in the financial technology field, peer-to-peer (P2P) lending has disrupted traditional financial services by providing an online platform for lending money that has remarkably reduced financial costs. However, the inherent uncertainty in P2P loans can result in huge financial losses for P2P platforms. Therefore, accurate risk prediction is critical to the success of P2P lending platforms. Indeed, even a small improvement in credit risk prediction would be o...

  2. Causal explanation, intentionality, and prediction: Evaluating the Criticism of "Deductivism"

    DEFF Research Database (Denmark)

    Koch, Carsten Allan

    2001-01-01

    In a number of influential contributions, Tony Lawson has attacked a view of science that he refers to as deductivism, and criticized economists for implicitly using it in their research. Lawson argues that deductivism is simply the covering-law model, also known as the causal model of scientific...... criticizes the use of universal laws in social science, especially in economics. This view cannot be as easily dismissed as his general criticism of causal explanation. We argue that a number of arguments often used against the existence of (correct) universal laws in the social sciences can be put...... into question. First, it is argued that entities need not be identical, or even remotely alike, to be applicable to the same law. What is necessary is that they have common properties, e.g. mass in physics, and that the law relates to that property (section 6). Second, one might take the so-called model

  3. Outcome evaluation of a new model of critical care orientation.

    Science.gov (United States)

    Morris, Linda L; Pfeifer, Pamela; Catalano, Rene; Fortney, Robert; Nelson, Greta; Rabito, Robb; Harap, Rebecca

    2009-05-01

    The shortage of critical care nurses and the service expansion of 2 intensive care units provided a unique opportunity to create a new model of critical care orientation. The goal was to design a program that assessed critical thinking, validated competence, and provided learning pathways that accommodated diverse experience. To determine the effect of a new model of critical care orientation on satisfaction, retention, turnover, vacancy, preparedness to manage patient care assignment, length of orientation, and cost of orientation. A prospective, quasi-experimental design with both quantitative and qualitative methods. The new model improved satisfaction scores, retention rates, and recruitment of critical care nurses. Length of orientation was unchanged. Cost was increased, primarily because a full-time education consultant was added. A new model for nurse orientation that was focused on critical thinking and competence validation improved retention and satisfaction and serves as a template for orientation of nurses throughout the medical center.

  4. Teaching For Art Criticism: Incorporating Feldman’s Critical Analysis Learning Model In Students’ Studio Practice

    OpenAIRE

    Maithreyi Subramaniam; Jaffri Hanafi; Abu Talib Putih

    2016-01-01

    This study adopted 30 first year graphic design students’ artwork, with critical analysis using Feldman’s model of art criticism. Data were analyzed quantitatively; descriptive statistical techniques were employed. The scores were viewed in the form of mean score and frequencies to determine students’ performances in their critical ability. Pearson Correlation Coefficient was used to find out the correlation between students’ studio practice and art critical ability scores. The...

  5. A dry-spot model of critical heat flux and transition boiling in pool and subcooled forced convection boiling

    International Nuclear Information System (INIS)

    Ha, Sang Jun

    1998-02-01

    A new dry-spot model for critical heat flux (CHF) is proposed. The new concept for dry area formation, based on a Poisson distribution of active nucleation sites and a critical active site number, is introduced. The model is based on boiling phenomena observed in nucleate boiling, such as the Poisson distribution of active nucleation sites and the formation of dry spots on the heating surface. It is hypothesized that when the number of bubbles surrounding one bubble exceeds a critical number, the surrounding bubbles restrict the feed of liquid to the microlayer under the bubble. Then a dry spot of vapor will form on the heated surface. As the surface temperature is raised, more and more bubbles will have a population of surrounding active sites over the critical number. Consequently, the number of dry spots will increase and the size of dry areas will increase due to the merger of several dry spots. If this trend continues, the number of effective sites for heat transport through the wall will diminish, and CHF and transition boiling occur. The model is applicable to pool and subcooled forced convection boiling conditions, based on the common mechanism that CHF and transition boiling are caused by the accumulation and coalescence of dry spots. It is shown that CHF and the heat flux in transition boiling can be determined without any empirical parameter, based on information on boiling parameters such as active site density and bubble diameter, etc., in nucleate boiling. It is also shown that the present model represents the actual phenomena of CHF and transition boiling well and explains how parameters such as the flow mode (pool or flow) and surface wettability influence CHF and transition boiling. Validation of the present model for CHF and transition boiling is achieved without any of the tuning parameters always present in earlier models.
It is achieved by comparing the predictions of CHF and heat flux in transition boiling using measured boiling parameters in nucleate
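The triggering criterion in the model (a dry spot forms when the number of active neighbouring sites, assumed Poisson-distributed, reaches a critical number) can be sketched as a Poisson tail probability. This is an illustrative reading of that one mechanism, not the paper's full CHF expression:

```python
import math

def p_dry_spot(mean_neighbors, n_crit):
    """P(N >= n_crit) for N ~ Poisson(mean_neighbors): the chance that
    a bubble is crowded by at least the critical number of active
    neighbouring sites, cutting off liquid feed to its microlayer."""
    cdf = sum(math.exp(-mean_neighbors) * mean_neighbors ** k / math.factorial(k)
              for k in range(n_crit))
    return 1.0 - cdf
```

Raising the surface temperature raises the active site density (the Poisson mean), so this tail probability, and hence the dry-spot population, grows with wall superheat, which is the qualitative trend the model describes.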

  6. Adding propensity scores to pure prediction models fails to improve predictive performance

    Directory of Open Access Journals (Sweden)

    Amy S. Nowacki

    2013-08-01

    Full Text Available Background. Propensity score usage seems to be growing in popularity, leading researchers to question the possible role of propensity scores in prediction modeling, despite the lack of a theoretical rationale. It is suspected that such requests are due to the lack of differentiation regarding the goals of predictive modeling versus causal inference modeling. Therefore, the purpose of this study is to formally examine the effect of propensity scores on predictive performance. Our hypothesis is that a multivariable regression model that adjusts for all covariates will perform as well as or better than those models utilizing propensity scores with respect to model discrimination and calibration. Methods. The most commonly encountered statistical scenarios for medical prediction (logistic and proportional hazards regression) were used to investigate this research question. Random cross-validation was performed 500 times to correct for optimism. The multivariable regression models adjusting for all covariates were compared with models that included adjustment for or weighting with the propensity scores. The methods were compared based on three predictive performance measures: (1) concordance indices; (2) Brier scores; and (3) calibration curves. Results. Multivariable models adjusting for all covariates had the highest average concordance index, the lowest average Brier score, and the best calibration. Propensity score adjustment and inverse probability weighting models without adjustment for all covariates performed worse than full models and failed to improve predictive performance with full covariate adjustment. Conclusion. Propensity score techniques did not improve prediction performance measures beyond multivariable adjustment. Propensity scores are not recommended if the analytical goal is pure prediction modeling.
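Of the three performance measures, the Brier score is the simplest to state: the mean squared difference between predicted probabilities and observed 0/1 outcomes, with lower being better. A minimal sketch with hypothetical values:

```python
def brier_score(probabilities, outcomes):
    """Mean squared error between predicted probabilities and
    observed 0/1 outcomes; 0.0 is perfect."""
    return sum((p - y) ** 2
               for p, y in zip(probabilities, outcomes)) / len(outcomes)

# A perfectly confident, perfectly correct model scores 0.0;
# an uninformative 50/50 model scores 0.25 whatever happens.
perfect = brier_score([1.0, 0.0, 1.0], [1, 0, 1])  # -> 0.0
coin = brier_score([0.5, 0.5], [1, 0])             # -> 0.25
```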

  7. Model-free and model-based reward prediction errors in EEG.

    Science.gov (United States)

    Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy

    2018-05-24

    Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain. Copyright © 2018 Elsevier Inc. All rights reserved.

  8. Predicting Phosphorus Dynamics Across Physiographic Regions Using a Mixed Hortonian Non-Hortonian Hydrology Model

    Science.gov (United States)

    Collick, A.; Easton, Z. M.; Auerbach, D.; Buchanan, B.; Kleinman, P. J. A.; Fuka, D.

    2017-12-01

    Predicting phosphorus (P) loss from agricultural watersheds depends on accurate representation of the hydrological and chemical processes governing P mobility and transport. In complex landscapes, P predictions are complicated by a broad range of soils with and without restrictive layers, a wide variety of agricultural management, and variable hydrological drivers. The Soil and Water Assessment Tool (SWAT) is a watershed model commonly used to predict runoff and non-point source pollution transport, but it is typically run with only a Hortonian (traditional SWAT) or non-Hortonian (SWAT-VSA) initialization. Many shallow soils underlain by a restricting layer commonly generate saturation excess runoff from variable source areas (VSA), which is well represented in a re-conceptualized version, SWAT-VSA. However, many watersheds exhibit traits of both infiltration excess and saturation excess hydrology internally, based on the hydrologic distance from the stream, the distribution of soils across the landscape, and the characteristics of restricting layers. The objective of this research is to provide an initial look at integrating distributed predictive capabilities that consider both Hortonian and non-Hortonian solutions simultaneously within a single SWAT-VSA initialization. We compare results from all three conceptual watershed initializations against measured surface runoff and stream P loads and highlight the model's ability to drive sub-field management of P. All three initializations predict discharge similarly well (daily Nash-Sutcliffe efficiencies above 0.5), but the new conceptual SWAT-VSA initialization performed best in predicting P export from the watershed, while also identifying critical source areas - those areas generating large runoff and P losses at the sub-field level. These results support the use of mixed Hortonian non-Hortonian SWAT-VSA initializations in predicting watershed-scale P losses and identifying critical source areas of P loss in landscapes

  9. Nonlinear chaotic model for predicting storm surges

    Directory of Open Access Journals (Sweden)

    M. Siek

    2010-09-01

    Full Text Available This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by the adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables. We implemented the univariate and multivariate chaotic models with direct and multi-steps prediction techniques and optimized these models using an exhaustive search method. The built models were tested for predicting storm surge dynamics for different stormy conditions in the North Sea, and are compared to neural network models. The results show that the chaotic models can generally provide reliable and accurate short-term storm surge predictions.
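A zeroth-order version of such a local chaotic model, delay embedding followed by averaging the successors of the nearest dynamical neighbours, can be sketched as follows. The embedding dimension, delay and neighbour count are illustrative choices, not the paper's optimized settings:

```python
def embed(series, dim, tau):
    """Delay-coordinate reconstruction of the phase space:
    state i is (x[i], x[i+tau], ..., x[i+(dim-1)*tau])."""
    n = len(series) - (dim - 1) * tau
    return [tuple(series[i + j * tau] for j in range(dim)) for i in range(n)]

def predict_next(series, dim=3, tau=1, k=3):
    """Average the successors of the k nearest dynamical neighbours
    of the current state (a zeroth-order local model)."""
    states = embed(series, dim, tau)
    current = states[-1]
    # Rank earlier states (those with a known successor) by squared
    # distance to the current state in the reconstructed space.
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(s, current)), i)
        for i, s in enumerate(states[:-1]))
    successor = lambda i: series[i + (dim - 1) * tau + 1]
    return sum(successor(i) for _, i in dists[:k]) / k
```

On a strictly periodic signal the nearest neighbour is an exact repeat of the current state, so the one-neighbour prediction reproduces the next value exactly; on real surge records the averaging over k neighbours smooths the local dynamics.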

  10. Model for the prediction of subsurface strata movement due to underground mining

    Science.gov (United States)

    Cheng, Jianwei; Liu, Fangyuan; Li, Siyuan

    2017-12-01

    The problem of ground control stability due to large underground mining operations is often associated with large movements and deformations of strata. It is a complicated problem, and can induce severe safety or environmental hazards either at the surface or in strata. Hence, knowing the subsurface strata movement characteristics, and making any subsidence predictions in advance, are desirable for mining engineers to estimate any damage likely to affect the ground surface or subsurface strata. Based on previous research findings, this paper broadly applies a surface subsidence prediction model based on the influence function method to subsurface strata, in order to predict subsurface stratum movement. A step-wise prediction model is proposed, to investigate the movement of underground strata. The model involves a dynamic iteration calculation process to derive the movements and deformations for each stratum layer; modifications to the influence method function are also made for more precise calculations. The critical subsidence parameters, incorporating stratum mechanical properties and the spatial relationship of interest at the mining level, are thoroughly considered, with the purpose of improving the reliability of input parameters. Such research efforts can be very helpful to mining engineers’ understanding of the moving behavior of all strata over underground excavations, and assist in making any damage mitigation plan. In order to check the reliability of the model, two methods are carried out and cross-validation applied. One is to use a borehole TV monitor recording to identify the progress of subsurface stratum bedding and caving in a coal mine, the other is to conduct physical modelling of the subsidence in underground strata. The results of these two methods are used to compare with theoretical results calculated by the proposed mathematical model. The testing results agree well with each other, and the acceptable accuracy and reliability of the

  11. New models of droplet deposition and entrainment for prediction of CHF in cylindrical rod bundles

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Haibin, E-mail: hb-zhang@xjtu.edu.cn [School of Chemical Engineering and Technology, Xi’an Jiaotong University, Xi’an 710049 (China); Department of Chemical Engineering, Imperial College, London SW7 2BY (United Kingdom); Hewitt, G.F. [Department of Chemical Engineering, Imperial College, London SW7 2BY (United Kingdom)

    2016-08-15

    Highlights: • New models of droplet deposition and entrainment in rod bundles are developed. • A new phenomenological model to predict the CHF in rod bundles is described. • The present model predicts CHF in rod bundles well. - Abstract: In this paper, we present a new set of models of droplet deposition and entrainment in cylindrical rod bundles, based on the previously proposed model for annuli (effectively a “one-rod” bundle) (2016a). These models make it possible to evaluate the differences in the rates of droplet deposition and entrainment for the respective rods and for the outer tube by taking into account the geometrical characteristics of the rod bundles. Using these models, a phenomenological model to predict the CHF (critical heat flux) for upward annular flow in vertical rod bundles is described. The performance of the model is tested against the experimental data of Becker et al. (1964) for CHF in 3-rod and 7-rod bundles. These data include tests in which only the rods were heated and data for simultaneous uniform and non-uniform heating of the rods and the outer tube. It is shown that the CHFs predicted by the present model agree well with the experimental data and with the experimental observation that dryout occurred first on the outer rods in 7-rod bundles. It is expected that the methodology used will be generally applicable to the prediction of CHF in rod bundles.

  12. Critical point predication device

    International Nuclear Information System (INIS)

    Matsumura, Kazuhiko; Kariyama, Koji.

    1996-01-01

    Prediction of a critical point using the existing reverse multiplication method has been a complicated operation, and the effective multiplication factor could not be plotted directly, which degraded the accuracy of the prediction. The present invention comprises a detector counting memory section for memorizing the counts sent from a power detector which monitors the reactor power, a reverse multiplication factor calculation section for calculating the reverse multiplication factor based on the initial and current counts of the power detector, and a critical point prediction section for predicting criticality by the reverse multiplication method relative to effective multiplication factors corresponding to previously determined states of the reactor core. In addition, a reactor core characteristic calculation section is added for analyzing the effective multiplication factor depending on the state of the reactor core. Then, if the margin to criticality falls below a predetermined value during critical operation, an alarm is generated and the critical operation is stopped when a period of more than a predetermined value is predicted for the succeeding critical operation. With such procedures, the critical point can be easily predicted during critical operation, which greatly mitigates the operator's burden and improves handling of the operation. (N.H.)
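The bookkeeping such a device automates is the classic 1/M plot: the reverse multiplication factor is the ratio of initial to current detector counts, and the critical configuration is where an extrapolation of 1/M reaches zero. The two-point linear extrapolation and the numbers below are illustrative assumptions, not the patent's algorithm:

```python
def reverse_multiplication(initial_counts, current_counts):
    """1/M: approaches zero as the system approaches criticality."""
    return initial_counts / current_counts

def predict_critical_loading(loadings, inv_m):
    """Linearly extrapolate the last two (loading, 1/M) points
    to the loading at which 1/M = 0."""
    (x1, y1), (x2, y2) = zip(loadings[-2:], inv_m[-2:])
    slope = (y2 - y1) / (x2 - x1)
    return x1 - y1 / slope

# Hypothetical approach to critical: counts rise as fuel is added,
# so 1/M falls toward zero.
inv_m = [reverse_multiplication(100.0, c) for c in (100.0, 125.0, 250.0)]
critical = predict_critical_loading([0.0, 10.0, 20.0], inv_m)
```

With these made-up counts, 1/M falls 1.0 → 0.8 → 0.4, and the extrapolation predicts criticality near a loading of 30, illustrating why plotting 1/M (rather than the multiplication itself) makes the prediction a simple straight-line read-off.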

  13. Extracting falsifiable predictions from sloppy models.

    Science.gov (United States)

    Gutenkunst, Ryan N; Casey, Fergal P; Waterfall, Joshua J; Myers, Christopher R; Sethna, James P

    2007-12-01

    Successful predictions are among the most compelling validations of any model. Extracting falsifiable predictions from nonlinear multiparameter models is complicated by the fact that such models are commonly sloppy, possessing sensitivities to different parameter combinations that range over many decades. Here we discuss how sloppiness affects the sorts of data that best constrain model predictions, makes linear uncertainty approximations dangerous, and introduces computational difficulties in Monte-Carlo uncertainty analysis. We also present a useful test problem and suggest refinements to the standards by which models are communicated.
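
The "sloppiness" the authors describe can be illustrated with a toy sum-of-exponentials model: the eigenvalues of its Fisher information matrix (approximated here by JᵀJ from a numerical Jacobian) span a wide range because two decay rates are nearly degenerate. The model form, parameter values, and time grid are invented for illustration.

```python
import numpy as np

def model(t, p):
    # Toy two-exponential model with mixture weight a and rates b, c.
    a, b, c = p
    return a * np.exp(-b * t) + (1 - a) * np.exp(-c * t)

t = np.linspace(0.1, 3.0, 30)
p0 = np.array([0.5, 1.0, 1.05])  # b and c nearly degenerate -> sloppy

# Numerical Jacobian of the model outputs with respect to the parameters.
eps = 1e-6
J = np.empty((t.size, p0.size))
for j in range(p0.size):
    dp = np.zeros_like(p0)
    dp[j] = eps
    J[:, j] = (model(t, p0 + dp) - model(t, p0 - dp)) / (2 * eps)

eigvals = np.linalg.eigvalsh(J.T @ J)     # Fisher information ~ J^T J
spread = eigvals.max() / eigvals.min()    # ratio of stiffest to sloppiest
print(spread)
```

The large eigenvalue spread is why linearized uncertainty estimates can be dangerous for such models, as the abstract notes.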

  14. Chiral model predictions for electromagnetic polarizabilities of the nucleon: A 'consumer report'

    International Nuclear Information System (INIS)

    Broniowski, W.

    1992-01-01

    This contribution has two parts: (1) The author critically discusses predictions for the electromagnetic polarizabilities of the nucleon obtained in two different approaches: (a) hedgehog models (HM), such as Skyrmions, chiral quark models, hybrid bags, NJL etc., and (b) chiral perturbation theory (χPT). (2) The author shows new results obtained in HM: N_c-counting of polarizabilities, splitting of the neutron and proton polarizabilities (he argues that α_n > α_p in models with pionic clouds), relevance of dispersive terms in the magnetic polarizability β, the important role of the Δ resonance in pionic loops, and the effects of non-minimal substitution terms in the effective Lagrangian. 3 refs

  15. Role of criticality models in ANSI standards for nuclear criticality safety

    International Nuclear Information System (INIS)

    Thomas, J.T.

    1976-01-01

    Two methods used in nuclear criticality safety evaluations in the area of neutron interaction among subcritical components of fissile materials are the solid angle and surface density techniques. The accuracy and use of these models are briefly discussed

  16. A new Predictive Model for Relativistic Electrons in Outer Radiation Belt

    Science.gov (United States)

    Chen, Y.

    2017-12-01

    Relativistic electrons trapped in the Earth's outer radiation belt present a highly hazardous radiation environment for spaceborne electronics. These energetic electrons, with kinetic energies up to several megaelectron-volts (MeV), manifest a highly dynamic and event-specific nature due to the delicate interplay of competing transport, acceleration and loss processes. Therefore, developing a forecasting capability for outer belt MeV electrons has long been a critical and challenging task for the space weather community. Recently, the vital role of electron resonance with waves (such as chorus and electromagnetic ion cyclotron waves) has been widely recognized; however, it is still difficult for current diffusion radiation belt models to reproduce the behavior of MeV electrons during individual geomagnetic storms, mainly because of the large uncertainties in input parameters. In this work, we expanded our previous cross-energy, cross-pitch-angle coherence study and developed a new predictive model for MeV electrons over a wide range of L-shells inside the outer radiation belt. This new model uses NOAA POES observations from low-Earth orbit (LEO) as inputs to provide high-fidelity nowcasts (multiple-hour predictions) and forecasts (> 1 day predictions) of the energization of MeV electrons, as well as of the evolving MeV electron distributions afterwards during storms. Performance of the predictive model is quantified against long-term in situ data from the Van Allen Probes and LANL GEO satellites. This study adds new scientific significance to an existing LEO space infrastructure, and provides reliable and powerful tools to the whole space community.

  17. Validation of JENDL-3.3 for the HTTR criticality

    International Nuclear Information System (INIS)

    Goto, Minoru; Nojiri, Naoki; Shimakawa, Satoshi

    2004-01-01

    Validation of JENDL-3.3 has been performed for the HTTR criticality using the MVP code with infinite ''lattice-cell'' models and finite ''whole-core'' models. It was found that the k_eff values calculated with JENDL-3.3 were about 0.2-0.4% Δk lower than those obtained with JENDL-3.2. The criticality prediction was close to the experimental data from the HTTR critical approach. (author)

  18. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models 1: repeating earthquakes

    Science.gov (United States)

    Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki

    2012-01-01

    The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments, we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or a fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events are predicted quite well by fixed-slip and fixed-recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.

  19. A reference model for model-based design of critical infrastructure protection systems

    Science.gov (United States)

    Shin, Young Don; Park, Cheol Young; Lee, Jae-Chon

    2015-05-01

    Today's battlefield environment is becoming increasingly varied, as the activities of unconventional warfare such as terrorist attacks and cyber-attacks have noticeably increased lately. The damage caused by such unconventional warfare can be serious, particularly when the targets are critical infrastructures constructed in support of banking and finance, transportation, power, information and communication, government, and so on. Critical infrastructures are usually interconnected to each other and thus are very vulnerable to attack. Ensuring the security of critical infrastructures is therefore very important, and hence the concept of critical infrastructure protection (CIP) has emerged. Programs to realize CIP at the national level take the form of statutes in each country. On the other hand, each individual critical infrastructure also needs to be protected. The objective of this paper is to study such an effort, which can be called a CIP system (CIPS). There could be a variety of ways to design CIPS's. Instead of considering the design of each individual CIPS, a reference-model-based approach is taken in this paper. The reference model represents the design of all the CIPS's that have many design elements in common. The development of the reference model is carried out using a variety of model diagrams. The modeling language used therein is the Systems Modeling Language (SysML), a de facto standard developed and managed by the Object Management Group (OMG). Using SysML, the structure and operational concept of the reference model are designed to fulfil the goal of CIPS's, resulting in block definition and activity diagrams. As a case study, the operational scenario of a nuclear power plant under terrorist attack is studied using the reference model. The effectiveness of the results is also analyzed using multiple analysis models. It is thus expected that the approach taken here has some merits

  20. EFFICIENT PREDICTIVE MODELLING FOR ARCHAEOLOGICAL RESEARCH

    OpenAIRE

    Balla, A.; Pavlogeorgatos, G.; Tsiafakis, D.; Pavlidis, G.

    2014-01-01

    The study presents a general methodology for designing, developing and implementing predictive modelling for identifying areas of archaeological interest. The methodology is based on documented archaeological data and geographical factors, geospatial analysis and predictive modelling, and has been applied to the identification of possible Macedonian tombs’ locations in Northern Greece. The model was tested extensively and the results were validated using a commonly used predictive gain, which...

  1. Critical excitation spectrum of a quantum chain with a local three-spin coupling.

    Science.gov (United States)

    McCabe, John F; Wydro, Tomasz

    2011-09-01

    Using the phenomenological renormalization group (PRG), we evaluate the low-energy excitation spectrum along the critical line of a quantum spin chain having a local interaction between three Ising spins and longitudinal and transverse magnetic fields, i.e., a Turban model. The low-energy excitation spectrum found with the PRG agrees with the spectrum predicted for the (D_4, A_4) conformal minimal model under a nontrivial correspondence between translations at the critical line and discrete lattice translations. Under this correspondence, the measurements confirm a prediction that the critical line of this quantum spin chain and the critical point of the two-dimensional three-state Potts model are in the same universality class.

  2. Critical excitation spectrum of a quantum chain with a local three-spin coupling

    International Nuclear Information System (INIS)

    McCabe, John F.; Wydro, Tomasz

    2011-01-01

    Using the phenomenological renormalization group (PRG), we evaluate the low-energy excitation spectrum along the critical line of a quantum spin chain having a local interaction between three Ising spins and longitudinal and transverse magnetic fields, i.e., a Turban model. The low-energy excitation spectrum found with the PRG agrees with the spectrum predicted for the (D_4, A_4) conformal minimal model under a nontrivial correspondence between translations at the critical line and discrete lattice translations. Under this correspondence, the measurements confirm a prediction that the critical line of this quantum spin chain and the critical point of the two-dimensional three-state Potts model are in the same universality class.

  3. Spatial Economics Model Predicting Transport Volume

    Directory of Open Access Journals (Sweden)

    Lu Bo

    2016-10-01

    It is extremely important to predict logistics requirements in a scientific and rational way. In recent years, however, prediction methods have improved little: traditional statistical methods suffer from low precision and poor interpretability, so they can neither guarantee the generalization ability of the prediction model theoretically nor explain the models effectively. Therefore, combining theories from spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, this study identifies the leading industries that generate large cargo volumes and predicts the static logistics generation of Zhuanghe and its hinterland. By integrating the various factors that affect regional logistics requirements, the study establishes a logistics-requirements potential model based on spatial economic principles, expanding logistics-requirements prediction from purely statistical methods into the new area of spatial and regional economics.

  4. Genomic prediction using subsampling

    OpenAIRE

    Xavier, Alencar; Xu, Shizhong; Muir, William; Rainey, Katy Martin

    2017-01-01

    Background: Genome-wide assisted selection is a critical tool for the genetic improvement of plants and animals. Whole-genome regression models in a Bayesian framework represent the main family of prediction methods. Fitting such models with a large number of observations involves a prohibitive computational burden. We propose the use of subsampling bootstrap Markov chains in genomic prediction. This method consists of fitting whole-genome regression models by subsampling observations in each round...
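
The subsampling idea the abstract describes can be sketched simply: fit a whole-genome regression repeatedly on random subsets of the observations and average the estimates. Ridge regression stands in here for the Bayesian whole-genome model, and all marker data and effects are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 500, 50                                     # observations, markers
X = rng.integers(0, 3, size=(n, m)).astype(float)  # SNP codes 0/1/2
true_b = rng.normal(0, 0.3, m)
y = X @ true_b + rng.normal(0, 1.0, n)

def ridge(X, y, lam=1.0):
    # Closed-form ridge solution, a stand-in for one round of Bayesian fitting.
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Average ridge solutions over random 20% subsamples of the observations.
estimates = []
for _ in range(20):
    idx = rng.choice(n, size=n // 5, replace=False)
    estimates.append(ridge(X[idx], y[idx]))
b_hat = np.mean(estimates, axis=0)

corr = np.corrcoef(b_hat, true_b)[0, 1]   # recovered vs simulated effects
print(round(corr, 3))
```

Each subsample fit touches only a fraction of the records, which is the source of the computational savings the authors target.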

  5. Critical behavior of mean-field hadronic models for warm nuclear matter

    International Nuclear Information System (INIS)

    Silva, J.B.; Lourenco, O.; Delfino, A.; Martins, J.S. Sa; Dutra, M.

    2008-01-01

    We study a set of hadronic mean-field models in the liquid-gas phase transition regime and calculate their critical parameters. The discussion is unified by scaling the coexistence curves in terms of these critical parameters. We study the models close to spinodal points, where they also present critical behavior. Inspired by signals of criticality shown in fragmentation experiments, we analyze two different scenarios in which such behavior would be expected: (i) the stability limits of a metastable system with vanishing external pressure; and (ii) the critical point of a gas-liquid phase equilibrium system for which the Maxwell construction applies. Spinodal and coexistence curves show the regions in which model dependence arises. Unexpectedly, this model dependence does not manifest if one calculates the thermal incompressibility of the models

  6. Teaching For Art Criticism: Incorporating Feldman’s Critical Analysis Learning Model In Students’ Studio Practice

    Directory of Open Access Journals (Sweden)

    Maithreyi Subramaniam

    2016-01-01

    This study analyzed the artwork of 30 first-year graphic design students, applying critical analysis based on Feldman’s model of art criticism. Data were analyzed quantitatively using descriptive statistical techniques. Scores were examined as means and frequencies to determine students’ performance in critical ability. The Pearson correlation coefficient was used to find the correlation between students’ studio practice and art-criticism scores. The findings showed that most students performed slightly above average in the critical analyses and performed best in the analysis dimension among the four dimensions assessed. In the context of the students’ studio practice and critical ability, the findings showed some connection between students’ art-critical ability and their studio practice.
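
The correlation step the study describes is a standard Pearson's r between two score series; the scores below are hypothetical, not the study's data.

```python
import numpy as np

# Hypothetical paired scores: studio practice vs art-criticism ability.
studio = np.array([72, 65, 80, 58, 90, 75, 68, 84])
criticism = np.array([70, 60, 78, 55, 88, 74, 66, 80])

r = np.corrcoef(studio, criticism)[0, 1]   # Pearson correlation coefficient
print(round(r, 3))
```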

  7. A porous flow model for the geometrical form of volcanoes - Critical comments

    Science.gov (United States)

    Wadge, G.; Francis, P.

    1982-01-01

    A critical evaluation is presented of the assumptions on which the mathematical model for the geometrical form of a volcano arising from the flow of magma in a porous medium of Lacey et al. (1981) is based. The lack of evidence for an equipotential surface or its equivalent in volcanoes prior to eruption is pointed out, and the preference of volcanic eruptions for low ground is attributed to the local stress field produced by topographic loading rather than a rising magma table. Other difficulties with the model involve the neglect of the surface flow of lava under gravity away from the vent, and the use of the Dupuit approximation for unconfined flow and the assumption of essentially horizontal magma flow. Comparisons of model predictions with the shapes of actual volcanoes reveal the model not to fit lava shield volcanoes, for which the cone represents the solidification of small lava flows, and to provide a poor fit to composite central volcanoes.

  8. A system identification approach for developing model predictive controllers of antibody quality attributes in cell culture processes.

    Science.gov (United States)

    Downey, Brandon; Schmitt, John; Beller, Justin; Russell, Brian; Quach, Anthony; Hermann, Elizabeth; Lyon, David; Breit, Jeffrey

    2017-11-01

    As the biopharmaceutical industry evolves to include more diverse protein formats and processes, more robust control of Critical Quality Attributes (CQAs) is needed to maintain processing flexibility without compromising quality. Active control of CQAs has been demonstrated using model predictive control techniques, which allow development of processes that are robust against disturbances associated with raw material variability and other potentially flexible operating conditions. Wide adoption of model predictive control in biopharmaceutical cell culture processes has been hampered, however, in part due to the large amount of data and expertise required to make a predictive model of controlled CQAs, a requirement for model predictive control. Here we developed a highly automated perfusion apparatus to systematically and efficiently generate predictive models using system identification approaches. We successfully created a predictive model of %galactosylation using data obtained by manipulating galactose concentration in the perfusion apparatus in serialized step-change experiments. We then demonstrated the use of the model in a model predictive controller in a simulated control scenario to successfully achieve a %galactosylation set point in a simulated fed-batch culture. The automated model identification approach demonstrated here can potentially be generalized to many CQAs, and could be a more efficient, faster, and highly automated alternative to batch experiments for developing predictive models in cell culture processes, allowing the wider adoption of model predictive control in biopharmaceutical processes. © 2017 The Authors. Biotechnology Progress published by Wiley Periodicals, Inc. on behalf of American Institute of Chemical Engineers. Biotechnol. Prog., 33:1647-1661, 2017.
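
The workflow the abstract describes, identify a simple discrete-time model from step-change data and then use it in a model predictive controller, can be sketched as follows. The first-order plant, its gain, and the set point are invented for illustration; the real CQA dynamics are of course richer.

```python
import numpy as np

# --- "Plant": a hypothetical CQA responds first-order to the feed input u.
a_true, b_true = 0.8, 0.5
def plant(y, u):
    return a_true * y + b_true * u

# --- System identification from serialized step changes (least squares).
rng = np.random.default_rng(1)
u_id = np.repeat([0.0, 1.0, 2.0, 0.5], 25)        # step-change experiment
y_id = [0.0]
for u in u_id[:-1]:
    y_id.append(plant(y_id[-1], u) + rng.normal(0, 0.01))
y_id = np.array(y_id)
phi = np.column_stack([y_id[:-1], u_id[:-1]])     # regressors [y_k, u_k]
a_hat, b_hat = np.linalg.lstsq(phi, y_id[1:], rcond=None)[0]

# --- One-step MPC: choose u so the model prediction hits the set point.
setpoint = 3.0
y = 0.0
for _ in range(30):
    u = (setpoint - a_hat * y) / b_hat            # invert y+ = a*y + b*u
    u = min(max(u, 0.0), 10.0)                    # actuator limits
    y = plant(y, u)

print(round(y, 2))
```

A full MPC would optimize over a multi-step horizon with constraints, but the identify-then-control structure is the same.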

  9. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    We propose a weather prediction model based on a neural network and a fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the “fuzzy rule-based neural network”, which simulates sequential relations among fuzzy sets using an artificial neural network; the second part is the “neural fuzzy inference system”, which builds on the first part but can learn new fuzzy rules from the previous ones according to the algorithm we propose. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. The need for accurate weather prediction is apparent when considering its benefits. However, the excessive pursuit of accuracy in weather prediction renders some “accurate” prediction results meaningless, and numerical prediction models are often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we obtain more accurate precipitation predictions with simpler methods than the complex numerical forecasting models that occupy large computational resources, are time-consuming, and have low predictive accuracy. Accordingly, we achieve more accurate precipitation predictions than with traditional artificial neural networks, which have low predictive accuracy.

  10. Beyond Critical Exponents in Neuronal Avalanches

    Science.gov (United States)

    Friedman, Nir; Butler, Tom; Deville, Robert; Beggs, John; Dahmen, Karin

    2011-03-01

    Neurons form a complex network in the brain, where they interact with one another by firing electrical signals. Neurons firing can trigger other neurons to fire, potentially causing avalanches of activity in the network. In many cases these avalanches have been found to be scale independent, similar to critical phenomena in diverse systems such as magnets and earthquakes. We discuss models for neuronal activity that allow for the extraction of testable, statistical predictions. We compare these models to experimental results, and go beyond critical exponents.

  11. Incorporating uncertainty in predictive species distribution modelling.

    Science.gov (United States)

    Beale, Colin M; Lennon, Jack J

    2012-01-19

    Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which are often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.

  12. A Fisher’s Criterion-Based Linear Discriminant Analysis for Predicting the Critical Values of Coal and Gas Outbursts Using the Initial Gas Flow in a Borehole

    Directory of Open Access Journals (Sweden)

    Xiaowei Li

    2017-01-01

    The risk of coal and gas outbursts can be predicted using a method that is linear and continuous and based on the initial gas flow in the borehole (IGFB); this method is significantly superior to the traditional point prediction method. Acquiring accurate critical values is the key to ensuring accurate predictions. Based on an idealized rock cross-cut coal uncovering model, an IGFB measurement device was developed. The present study measured the initial gas flow over 3 min in a 1 m long borehole with a diameter of 42 mm in the laboratory, obtaining a total of 48 data sets. These data were fuzzy and chaotic. Fisher’s discriminant method was able to transform these spatial data, which are multidimensional because of the factors influencing the IGFB, into a one-dimensional function and determine its critical value. Then, after processing the data into a normal distribution, the critical values of the outbursts were analyzed using linear discriminant analysis with Fisher’s criterion. The weak and strong outbursts had critical values of 36.63 L and 80.85 L, respectively, and the accuracy of the back-discriminant analysis for the weak and strong outbursts was 94.74% and 92.86%, respectively. Eight outburst tests were simulated in the laboratory; the reverse verification accuracy was 100%, verifying the accuracy of the critical values.
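
Fisher's linear discriminant, the technique named above, projects multidimensional measurements onto the one direction that best separates two classes and thresholds the projection. The two-feature, two-class data below are simulated; the real borehole measurements are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
# Simulated two-class data, e.g. (gas volume, flow rate) per borehole test.
weak = rng.normal([30.0, 1.0], [4.0, 0.3], size=(24, 2))    # class 0
strong = rng.normal([85.0, 2.0], [6.0, 0.4], size=(24, 2))  # class 1

m0, m1 = weak.mean(axis=0), strong.mean(axis=0)
# Within-class scatter matrix (sum of per-class scatters).
Sw = np.cov(weak.T) * (len(weak) - 1) + np.cov(strong.T) * (len(strong) - 1)
w = np.linalg.solve(Sw, m1 - m0)            # Fisher discriminant direction
threshold = w @ (m0 + m1) / 2               # midpoint on the projection

def classify(x):
    return int(w @ x > threshold)

acc = np.mean([classify(x) == 0 for x in weak] +
              [classify(x) == 1 for x in strong])
print(acc)
```

The projected threshold plays the same role as the critical volumes (36.63 L and 80.85 L) reported in the study.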

  13. Predictive user modeling with actionable attributes

    NARCIS (Netherlands)

    Zliobaite, I.; Pechenizkiy, M.

    2013-01-01

    Different machine learning techniques have been proposed and used for modeling individual and group user needs, interests and preferences. In the traditional predictive modeling instances are described by observable variables, called attributes. The goal is to learn a model for predicting the target

  14. Modelling critical degrees of saturation of porous building materials subjected to freezing

    DEFF Research Database (Denmark)

    Hansen, Ernst Jan De Place

    1996-01-01

    Frost resistance of porous materials can be characterized by the critical degree of saturation, S_CR, and the actual degree of saturation, S_ACT. An experimental determination of S_CR is very laborious and therefore only seldom used when testing frost resistance. A theoretical model for the prediction of S_CR, based on fracture mechanics and the phase geometry of two-phase materials, has been developed. The degradation is modelled as being caused by different eigenstrains of the pore phase and the solid phase when freezing, leading to stress concentrations and crack propagation. Simplifications are made to describe the development of stresses and the pore structure, because a mathematical description of the physical theories explaining the process of freezing of water in porous materials is lacking. Calculations are based on porosity, modulus of elasticity and tensile strength, and parameters characterizing ...

  15. Quantum critical scaling of fidelity in BCS-like model

    International Nuclear Information System (INIS)

    Adamski, Mariusz; Jedrzejewski, Janusz; Krokhmalskii, Taras

    2013-01-01

    We study scaling of the ground-state fidelity in neighborhoods of quantum critical points in a model of interacting spinful fermions—a BCS-like model. Due to the exact diagonalizability of the model, in one and higher dimensions, scaling of the ground-state fidelity can be analyzed numerically with great accuracy, not only for small systems but also for macroscopic ones, together with the crossover region between them. Additionally, in the one-dimensional case we have been able to derive a number of analytical formulas for fidelity and show that they accurately fit our numerical results; these results are reported in the paper. Besides regular critical points and their neighborhoods, where well-known scaling laws are obeyed, there is the multicritical point and critical points in its proximity where anomalous scaling behavior is found. We also consider scaling of fidelity in neighborhoods of critical points where fidelity oscillates strongly as the system size or the chemical potential is varied. Our results for a one-dimensional version of a BCS-like model are compared with those obtained recently by Rams and Damski in similar studies of a quantum spin chain—an anisotropic XY model in a transverse magnetic field. (paper)

  16. Predictive Modeling of Spinner Dolphin (Stenella longirostris) Resting Habitat in the Main Hawaiian Islands

    Science.gov (United States)

    Thorne, Lesley H.; Johnston, David W.; Urban, Dean L.; Tyne, Julian; Bejder, Lars; Baird, Robin W.; Yin, Suzanne; Rickards, Susan H.; Deakos, Mark H.; Mobley, Joseph R.; Pack, Adam A.; Chapla Hill, Marie

    2012-01-01

    Predictive habitat models can provide critical information that is necessary in many conservation applications. Using Maximum Entropy modeling, we characterized habitat relationships and generated spatial predictions of spinner dolphin (Stenella longirostris) resting habitat in the main Hawaiian Islands. Spinner dolphins in Hawai'i exhibit predictable daily movements, using inshore bays as resting habitat during daylight hours and foraging in offshore waters at night. There are growing concerns regarding the effects of human activities on spinner dolphins resting in coastal areas. However, the environmental factors that define suitable resting habitat remain unclear and must be assessed and quantified in order to properly address interactions between humans and spinner dolphins. We used a series of dolphin sightings from recent surveys in the main Hawaiian Islands and a suite of environmental variables hypothesized as being important to resting habitat to model spinner dolphin resting habitat. The model performed well in predicting resting habitat and indicated that proximity to deep water foraging areas, depth, the proportion of bays with shallow depths, and rugosity were important predictors of spinner dolphin habitat. Predicted locations of suitable spinner dolphin resting habitat provided in this study indicate areas where future survey efforts should be focused and highlight potential areas of conflict with human activities. This study provides an example of a presence-only habitat model used to inform the management of a species for which patterns of habitat availability are poorly understood. PMID:22937022

  17. A self-organized criticality model for plasma transport

    International Nuclear Information System (INIS)

    Carreras, B.A.; Newman, D.; Lynch, V.E.

    1996-01-01

    Many models of natural phenomena manifest the basic hypothesis of self-organized criticality (SOC). The SOC concept brings together the self-similarity on space and time scales that is common to many of these phenomena. The application of the SOC modelling concept to the plasma dynamics near marginal stability opens new possibilities of understanding issues such as Bohm scaling, profile consistency, broad band fluctuation spectra with universal characteristics and fast time scales. A model realization of self-organized criticality for plasma transport in a magnetic confinement device is presented. The model is based on subcritical resistive pressure-gradient-driven turbulence. Three-dimensional nonlinear calculations based on this model show the existence of transport under subcritical conditions. This model that includes fluctuation dynamics leads to results very similar to the running sandpile paradigm
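
A minimal "running sandpile" in the spirit of the SOC paradigm mentioned above can make the idea concrete: random grain drive, a local critical gradient, and avalanching relaxation with losses at an open boundary. The cell count, critical slope, and drive length below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
ncells, zcrit, nsteps = 20, 4, 2000
h = np.zeros(ncells)
avalanche_sizes = []

for _ in range(nsteps):
    h[rng.integers(ncells)] += 1          # random drive: drop one grain
    size = 0
    while True:
        grad = h[:-1] - h[1:]
        idx = np.where(grad > zcrit)[0]   # cells over the critical gradient
        if idx.size == 0:
            break
        for i in idx:                     # topple: move grains downhill
            h[i] -= 2
            h[i + 1] += 2
            size += 1
        h[-1] = min(h[-1], h[-2])         # open boundary: grains leave here
    if size:
        avalanche_sizes.append(size)

print(len(avalanche_sizes), max(avalanche_sizes))
```

In a long run the avalanche-size distribution of such models becomes broad and scale-free, which is the signature connected to Bohm scaling and broadband fluctuation spectra in the abstract.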

  18. Estimation of Critical Parameters in Concrete Production Using Multispectral Vision Technology

    DEFF Research Database (Denmark)

    Hansen, Michael Edberg; Ersbøll, Bjarne Kjær; Carstensen, Jens Michael

    2005-01-01

    We analyze multispectral reflectance images of concrete aggregate material and design computational measures of the important and critical parameters used in concrete production. The features extracted from the images are exploited as explanatory variables in regression models and used to predict aggregate type, water content, and size distribution. We analyze and validate the methods on five representative aggregate types commonly used in concrete production. Using cross-validation, the generated models prove to have high performance in predicting all of the critical parameters.
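
The regression-and-validation step described above can be sketched as a linear model on image-derived features, assessed by leave-one-out cross-validation. The features and water-content values below are simulated stand-ins, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 40
features = rng.normal(size=(n, 3))            # stand-ins for image features
water = 5.0 + features @ np.array([1.0, -0.5, 0.2]) + rng.normal(0, 0.1, n)

X = np.column_stack([np.ones(n), features])   # design matrix with intercept

preds = np.empty(n)
for i in range(n):                            # leave-one-out cross-validation
    mask = np.arange(n) != i
    beta, *_ = np.linalg.lstsq(X[mask], water[mask], rcond=None)
    preds[i] = X[i] @ beta

rmse = np.sqrt(np.mean((preds - water) ** 2))
print(round(rmse, 3))
```

The held-out error approaching the simulated noise level is the kind of evidence the cross-validation in the study provides.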

  19. Distributed model predictive control made easy

    CERN Document Server

    Negenborn, Rudy

    2014-01-01

    The rapid evolution of computer science, communication, and information technology has enabled the application of control techniques to systems beyond the possibilities of control theory just a decade ago. Critical infrastructures such as electricity, water, traffic and intermodal transport networks are now in the scope of control engineers. The sheer size of such large-scale systems requires the adoption of advanced distributed control approaches. Distributed model predictive control (MPC) is one of the promising control methodologies for control of such systems.   This book provides a state-of-the-art overview of distributed MPC approaches, while at the same time making clear directions of research that deserve more attention. The core and rationale of 35 approaches are carefully explained. Moreover, detailed step-by-step algorithmic descriptions of each approach are provided. These features make the book a comprehensive guide both for those seeking an introduction to distributed MPC as well as for those ...

  20. Using Cutting-Edge Tree-Based Stochastic Models to Predict Credit Risk

    Directory of Open Access Journals (Sweden)

    Khaled Halteh

    2018-05-01

    Credit risk is a critical issue that affects banks and companies on a global scale. Possessing the ability to accurately predict the level of credit risk has the potential to help both lender and borrower. This is achieved by reducing the number of loans provided to borrowers with poor financial health, thereby reducing the number of failed businesses and, in effect, preventing economies from collapsing. This paper uses state-of-the-art stochastic models, namely decision trees, random forests, and stochastic gradient boosting, to add to the current literature on credit-risk modelling. The Australian mining industry has been selected to test our methodology. Mining in Australia generates around $138 billion annually, making up more than half of the total goods and services. This paper uses publicly-available financial data from 750 risky and not risky Australian mining companies as variables in our models. Our results indicate that stochastic gradient boosting was the superior model at correctly classifying the good and bad credit-rated companies within the mining sector. Our model showed that 'Property, Plant, & Equipment (PPE) Turnover', 'Invested Capital Turnover', and 'Price over Earnings Ratio (PER)' were the variables with the best explanatory power for predicting credit risk in the Australian mining sector.
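
    As a rough illustration of the gradient-boosting idea used above — not the authors' implementation, and with synthetic stand-ins for the financial ratios — the following numpy sketch boosts regression stumps on the negative gradient of the logistic loss:

```python
import numpy as np

def fit_stump(X, residual):
    """Best single-feature threshold split (regression stump) on residuals."""
    best = (np.inf, 0, 0.0, 0.0, 0.0)  # (sse, feature, thresh, left, right)
    for j in range(X.shape[1]):
        xs = X[:, j]
        for t in np.quantile(xs, [0.25, 0.5, 0.75]):
            mask = xs <= t
            if mask.all() or (~mask).all():
                continue
            lv, rv = residual[mask].mean(), residual[~mask].mean()
            sse = ((residual[mask] - lv)**2).sum() + ((residual[~mask] - rv)**2).sum()
            if sse < best[0]:
                best = (sse, j, t, lv, rv)
    return best[1:]

def gradient_boost(X, y, n_rounds=30, lr=0.3):
    """Gradient boosting with logistic loss: each stump fits the
    negative gradient (y - p) of the current additive model f."""
    f = np.zeros(len(y))
    for _ in range(n_rounds):
        p = 1.0 / (1.0 + np.exp(-f))
        j, t, lv, rv = fit_stump(X, y - p)
        f += lr * np.where(X[:, j] <= t, lv, rv)
    return f

rng = np.random.default_rng(4)
n = 400
# Hypothetical ratios standing in for PPE turnover, capital turnover, P/E
X = rng.normal(size=(n, 3))
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

f = gradient_boost(X, y)
acc = np.mean((f > 0) == (y == 1))
print(f"training accuracy: {acc:.2f}")
```

Production implementations add subsampling (the "stochastic" part), deeper trees, and shrinkage tuning.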

  1. Quantum-critical scaling of fidelity in 2D pairing models

    Energy Technology Data Exchange (ETDEWEB)

    Adamski, Mariusz, E-mail: mariusz.adamski@ift.uni.wroc.pl [Institute of Theoretical Physics, University of Wrocław, pl. Maksa Borna 9, 50–204, Wrocław (Poland); Jȩdrzejewski, Janusz [Institute of Theoretical Physics, University of Wrocław, pl. Maksa Borna 9, 50–204, Wrocław (Poland); Krokhmalskii, Taras [Institute for Condensed Matter Physics, 1 Svientsitski Street, 79011, Lviv (Ukraine)

    2017-01-15

    The laws of the quantum-critical scaling theory of quantum fidelity, which depend on the underlying system dimensionality D, have so far been verified in exactly solvable 1D models, belonging to or equivalent to interacting, quadratic (quasifree), spinless or spinful, lattice-fermion models. The obtained results are so appealing that, in the quest for correlation lengths and the associated universal critical indices ν, which characterize the divergence of correlation lengths on approaching critical points, one might be inclined to substitute the hard task of determining the asymptotic behavior at large distances of a two-point correlation function with the easier one of determining the quantum-critical scaling of the quantum fidelity. However, the role of the system's dimensionality has been left as an open problem. Our aim in this paper is to fill this gap, at least partially, by verifying the laws of quantum-critical scaling theory of quantum fidelity in a 2D case. To this end, we study correlation functions and quantum fidelity of 2D exactly solvable models, which are interacting, quasifree, spinful, lattice-fermion models. The considered 2D models exhibit features new compared with 1D ones: at a given quantum-critical point there exists a multitude of correlation lengths and multiple universal critical indices ν, since these quantities depend on spatial directions; moreover, the indices ν may assume larger values. These facts follow from the analytical asymptotic formulae we obtain for two-point correlation functions. In these new circumstances we discuss the behavior of quantum fidelity from the perspective of quantum-critical scaling theory. In particular, we are interested in finding out to what extent the quantum fidelity approach may be an alternative to the correlation-function approach in studies of quantum-critical points beyond 1D.

  2. Application of a loading dose of colistin methanesulfonate in critically ill patients: population pharmacokinetics, protein binding, and prediction of bacterial kill.

    Science.gov (United States)

    Mohamed, Ami F; Karaiskos, Ilias; Plachouras, Diamantis; Karvanen, Matti; Pontikis, Konstantinos; Jansson, Britt; Papadomichelakis, Evangelos; Antoniadou, Anastasia; Giamarellou, Helen; Armaganidis, Apostolos; Cars, Otto; Friberg, Lena E

    2012-08-01

    A previous pharmacokinetic study on dosing of colistin methanesulfonate (CMS) at 240 mg (3 million units [MU]) every 8 h indicated that colistin has a long half-life, resulting in insufficient concentrations for the first 12 to 48 h after initiation of treatment. A loading dose would therefore be beneficial. The aim of this study was to evaluate CMS and colistin pharmacokinetics following a 480-mg (6-MU) loading dose in critically ill patients and to explore the bacterial kill following the use of different dosing regimens obtained by predictions from a pharmacokinetic-pharmacodynamic model developed from an in vitro study on Pseudomonas aeruginosa. The unbound fractions of colistin A and colistin B were determined using equilibrium dialysis and considered in the predictions. Ten critically ill patients (6 males; mean age, 54 years; mean creatinine clearance, 82 ml/min) with infections caused by multidrug-resistant Gram-negative bacteria were enrolled in the study. The pharmacokinetic data collected after the first and eighth doses were analyzed simultaneously with the data from the previous study (total, 28 patients) in the NONMEM program. For CMS, a two-compartment model best described the pharmacokinetics, and the half-lives of the two phases were estimated to be 0.026 and 2.2 h, respectively. For colistin, a one-compartment model was sufficient and the estimated half-life was 18.5 h. The unbound fractions of colistin in the patients were 26 to 41% at clinical concentrations. Colistin A, but not colistin B, had a concentration-dependent binding. The predictions suggested that the time to 3-log-unit bacterial kill for a 480-mg loading dose was reduced to half of that for the dose of 240 mg.
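
    To see why a loading dose helps for a drug with the reported 18.5 h colistin half-life, a simple dose-superposition sketch (one-compartment, first-order elimination) can compare the two regimens. This is not the study's NONMEM model: the unit volume of distribution is a placeholder and the CMS-to-colistin conversion step is ignored.

```python
import numpy as np

def concentration(t, doses, half_life=18.5, v_d=1.0):
    """Superposition of first-order-eliminated bolus doses.
    doses: list of (time_h, amount_mg); v_d is a placeholder volume."""
    k = np.log(2) / half_life
    c = np.zeros_like(t, dtype=float)
    for t0, amt in doses:
        mask = t >= t0
        c[mask] += (amt / v_d) * np.exp(-k * (t[mask] - t0))
    return c

t = np.linspace(0, 48, 481)
maintenance = [(8 * i, 240.0) for i in range(6)]            # 240 mg q8h
loading = [(0, 480.0)] + [(8 * i, 240.0) for i in range(1, 6)]  # 480 mg first
c_m = concentration(t, maintenance)
c_l = concentration(t, loading)
# The loading-dose profile stays above the maintenance-only profile
print(round(c_l[120] / c_m[120], 2))  # concentration ratio at t = 12 h
```

Because elimination is slow relative to the dosing interval, the early-concentration gap created by the doubled first dose persists well into the first day, consistent with the rationale for front-loading.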

  3. The critical thinking curriculum model

    Science.gov (United States)

    Robertson, William Haviland

    The Critical Thinking Curriculum Model (CTCM) utilizes a multidisciplinary approach that integrates effective learning and teaching practices with computer technology. The model is designed to be flexible within a curriculum, an example for teachers to follow, into which they can plug their own critical issue. This process engages students in collaborative research that can be shared in the classroom, across the country or around the globe. The CTCM features open-ended and collaborative activities that deal with current, real-world issues that leaders are attempting to solve. As implemented in the Critical Issues Forum (CIF), an educational program administered by Los Alamos National Laboratory (LANL), the CTCM encompasses the political, social/cultural, economic, and scientific realms in the context of a current global issue. In this way, students realize the importance of their schooling by applying their efforts to an endeavor that ultimately will affect their future. This study measures student attitudes toward science and technology and the changes that result from immersion in the CTCM. It also assesses the differences in student learning in science content and problem solving for students involved in the CTCM. A sample of 24 students participated in classrooms at two separate high schools in New Mexico. The evaluation results were analyzed using SPSS in a MANOVA format in order to determine the significance of the between- and within-subjects effects. A comparison ANOVA was done for each two-way MANOVA to see if the comparison groups were equal. Significant findings were validated using the Scheffe test in a post hoc analysis. Demographic information for the sample population was recorded and tracked, including self-assessments of computer use and availability. Overall, the results indicated that the CTCM did help to increase science content understanding and problem-solving skills for students, thereby positively affecting critical thinking.

  4. The critical thickness of liners of Cu interconnects

    International Nuclear Information System (INIS)

    Jiang, Q; Zhang, S H; Li, J C

    2004-01-01

    A model for the size-dependence of activation energy is developed. With the model and Fick's second law, relationships among the liner thickness, the working life and the working temperature of a TaN liner for Cu interconnects are predicted. The predicted results for the TaN liner are in good agreement with the experimental results. Moreover, the critical thicknesses of liners of some elements are calculated.

  5. The organization of irrational beliefs in posttraumatic stress symptomology: testing the predictions of REBT theory using structural equation modelling.

    Science.gov (United States)

    Hyland, Philip; Shevlin, Mark; Adamson, Gary; Boduszek, Daniel

    2014-01-01

    This study directly tests a central prediction of rational emotive behaviour therapy (REBT) that has received little empirical attention, regarding the core and intermediate beliefs in the development of posttraumatic stress symptoms. A theoretically consistent REBT model of posttraumatic stress disorder (PTSD) was examined using structural equation modelling techniques among a sample of 313 trauma-exposed military and law enforcement personnel. The REBT model of PTSD provided a good fit to the data, χ2 = 599.173, df = 356. Results were consistent with the predictions of REBT theory and provide strong empirical support that the cognitive variables described by REBT theory, including depreciation beliefs, are critical cognitive constructs in the prediction of PTSD symptomology. © 2013 Wiley Periodicals, Inc.

  6. Early lactate clearance for predicting active bleeding in critically ill patients with acute upper gastrointestinal bleeding: a retrospective study.

    Science.gov (United States)

    Wada, Tomoki; Hagiwara, Akiyoshi; Uemura, Tatsuki; Yahagi, Naoki; Kimura, Akio

    2016-08-01

    Not all patients with upper gastrointestinal bleeding (UGIB) require emergency endoscopy. Lactate clearance has been suggested as a parameter for predicting patient outcomes in various critical care settings. This study investigates whether lactate clearance can predict active bleeding in critically ill patients with UGIB. This single-center, retrospective, observational study included critically ill patients with UGIB who met all of the following criteria: admission to the emergency department (ED) from April 2011 to August 2014; blood samples taken for lactate evaluation at least twice during the ED stay; and emergency endoscopy within 6 h of ED presentation. The main outcome was active bleeding detected with emergency endoscopy. Classification and regression tree (CART) analyses were performed using variables associated with active bleeding to derive a prediction rule for active bleeding in critically ill UGIB patients. A total of 154 patients with UGIB were analyzed, and 31.2 % (48/154) had active bleeding. In the univariate analysis, lactate clearance was significantly lower in patients with active bleeding than in those without active bleeding (13 vs. 29 %). A prediction rule for active bleeding is derived that includes three variables: lactate clearance, platelet count, and systolic blood pressure at ED presentation. The rule has 97.9 % (95 % CI 90.2-99.6 %) sensitivity with 32.1 % (28.6-32.9 %) specificity. Lactate clearance may be associated with active bleeding in critically ill patients with UGIB, and may be clinically useful as a component of a prediction rule for active bleeding.
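
    A CART-derived rule of this kind reduces to a few threshold checks. The sketch below evaluates such a rule on synthetic data; the cut-offs (`lc_cut`, `plt_cut`, `sbp_cut`) and the cohort parameters are hypothetical illustrations, not values reported by the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def predicts_active_bleeding(lactate_clearance, platelets, sbp,
                             lc_cut=15.0, plt_cut=150.0, sbp_cut=100.0):
    """Hypothetical CART-style rule: flag active bleeding when lactate
    clearance (%) is low, or platelets or systolic BP are low."""
    return (lactate_clearance < lc_cut) | (platelets < plt_cut) | (sbp < sbp_cut)

# Synthetic cohort: bleeders tend to clear lactate more slowly
n = 200
bleeding = rng.random(n) < 0.3
lc = np.where(bleeding, rng.normal(13, 6, n), rng.normal(29, 8, n))
plt = rng.normal(220, 60, n)
sbp = rng.normal(115, 20, n)
pred = predicts_active_bleeding(lc, plt, sbp)

sens = (pred & bleeding).sum() / bleeding.sum()
spec = (~pred & ~bleeding).sum() / (~bleeding).sum()
print(f"sensitivity={sens:.2f} specificity={spec:.2f}")
```

As in the study, a rule tuned for very high sensitivity (catching nearly all active bleeds) typically trades away specificity.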

  7. Critical Review of Membrane Bioreactor Models

    DEFF Research Database (Denmark)

    Naessens, W.; Maere, T.; Ratkovich, Nicolas Rios

    2012-01-01

    Membrane bioreactor technology has existed for a couple of decades but has not yet overwhelmed the market, due to some serious drawbacks of which the operational cost caused by fouling is the major contributor. Knowledge build-up and optimisation for such complex systems can benefit heavily from mathematical modelling. In this paper, the vast literature on hydrodynamic and integrated modelling in MBR is critically reviewed. Hydrodynamic models are used at different scales and focus mainly on fouling and only little on system design/optimisation. Integrated models also focus on fouling, although the ones...

  8. A Critical Analysis and Validation of the Accuracy of Wave Overtopping Prediction Formulae for OWECs

    Directory of Open Access Journals (Sweden)

    David Gallach-Sánchez

    2018-01-01

    The development of wave energy devices has grown in recent years. One type of device is the overtopping wave energy converter (OWEC), for which knowledge of the wave overtopping rates is a basic and crucial aspect of the design. The most interesting range to study is OWECs with steep slopes up to vertical walls, and with very small or zero freeboards, where the overtopping rate is maximized; these can be generalized as steep low-crested structures. Recently, wave overtopping prediction formulae have been published for this type of structure, although their accuracy has not been fully assessed, as the overtopping data available in this range is scarce. We performed a critical analysis of the overtopping prediction formulae for steep low-crested structures and a validation of the accuracy of these formulae, based on new overtopping data for steep low-crested structures obtained at Ghent University. This paper summarizes the existing knowledge about average wave overtopping, describes the physical model tests performed, analyses the results and compares them to existing prediction formulae. The new dataset extends the wave overtopping data towards vertical walls and zero freeboard structures. In general, the new dataset validated the more recent overtopping formulae focused on steep slopes with small freeboards, although the formulae underpredict the average overtopping rates for very small and zero relative crest freeboards.

  9. Comparing artificial neural networks, general linear models and support vector machines in building predictive models for small interfering RNAs.

    Directory of Open Access Journals (Sweden)

    Kyle A McQuisten

    2009-10-01

    Exogenous short interfering RNAs (siRNAs) induce a gene knockdown effect in cells by interacting with naturally occurring RNA processing machinery. However, not all siRNAs induce this effect equally. Several heterogeneous kinds of machine learning techniques and feature sets have been applied to modeling siRNAs and their abilities to induce knockdown. There is some growing agreement on which techniques produce maximally predictive models, and yet there is little consensus on methods to compare among predictive models. Also, there are few comparative studies that address the effect that the choice of learning technique, feature set or cross-validation approach has on finding and discriminating among predictive models. Three learning techniques were used to develop predictive models for effective siRNA sequences: Artificial Neural Networks (ANNs), General Linear Models (GLMs) and Support Vector Machines (SVMs). Five feature mapping methods were also used to generate models of siRNA activities. The two factors of learning technique and feature mapping were evaluated by complete 3x5 factorial ANOVA. Overall, both learning technique and feature mapping contributed significantly to the observed variance in predictive models, but to differing degrees for precision and accuracy, as well as across different kinds and levels of model cross-validation. The methods presented here provide a robust statistical framework to compare among models developed under distinct learning techniques and feature sets for siRNAs. Further comparisons among current or future modeling approaches should apply these or other statistically equivalent methods to critically evaluate the performance of proposed models. ANN and GLM techniques tend to be more sensitive to the inclusion of noisy features, but the SVM technique is more robust under large numbers of features for measures of model precision and accuracy.

  10. MJO prediction skill of the subseasonal-to-seasonal (S2S) prediction models

    Science.gov (United States)

    Son, S. W.; Lim, Y.; Kim, D.

    2017-12-01

    The Madden-Julian Oscillation (MJO), the dominant mode of tropical intraseasonal variability, provides the primary source of tropical and extratropical predictability on subseasonal to seasonal timescales. To better understand its predictability, this study conducts a quantitative evaluation of MJO prediction skill in the state-of-the-art operational models participating in the subseasonal-to-seasonal (S2S) prediction project. Based on a bivariate correlation coefficient threshold of 0.5, the S2S models exhibit MJO prediction skill ranging from 12 to 36 days. These prediction skills are affected by both MJO amplitude and phase errors, the latter becoming more important with forecast lead time. Consistent with previous studies, MJO events with stronger initial amplitude are typically better predicted. However, essentially no sensitivity to the initial MJO phase is observed. Overall MJO prediction skill and its inter-model spread are further related to the models' mean biases in moisture fields and longwave cloud-radiation feedbacks. In most models, a dry bias quickly builds up in the deep tropics, especially across the Maritime Continent, weakening the horizontal moisture gradient. This likely dampens the organization and propagation of the MJO. Most S2S models also underestimate the longwave cloud-radiation feedbacks in the tropics, which may affect the maintenance of the MJO convective envelope. In general, models with a smaller bias in horizontal moisture gradient and longwave cloud-radiation feedbacks show higher MJO prediction skill, suggesting that improving those processes would enhance MJO prediction skill.
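
    The bivariate correlation used as the skill metric combines both components of the MJO index (e.g. RMM1/RMM2) in a single coefficient, so a pure phase error lowers it even when the amplitude is perfect. A minimal numpy sketch, with synthetic sinusoidal indices standing in for real RMM data:

```python
import numpy as np

def bivariate_correlation(obs_rmm1, obs_rmm2, fc_rmm1, fc_rmm2):
    """Bivariate correlation between observed and forecast RMM indices;
    MJO skill is often quoted as the lead time at which this drops to 0.5."""
    num = np.sum(obs_rmm1 * fc_rmm1 + obs_rmm2 * fc_rmm2)
    den = np.sqrt(np.sum(obs_rmm1**2 + obs_rmm2**2)) * \
          np.sqrt(np.sum(fc_rmm1**2 + fc_rmm2**2))
    return num / den

# Toy check: a forecast that is the observation rotated by a phase error
t = np.arange(40)
o1, o2 = np.cos(0.13 * t), np.sin(0.13 * t)
for phase_err in (0.0, 0.5, 1.0):
    f1 = np.cos(0.13 * t + phase_err)
    f2 = np.sin(0.13 * t + phase_err)
    # For a pure rotation the metric equals cos(phase error)
    print(round(bivariate_correlation(o1, o2, f1, f2), 3))
```

The rotation identity makes the phase sensitivity explicit: a one-radian phase error alone drags the coefficient from 1.0 down to about 0.54, near the 0.5 skill threshold.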

  11. Evaluating Predictive Uncertainty of Hyporheic Exchange Modelling

    Science.gov (United States)

    Chow, R.; Bennett, J.; Dugge, J.; Wöhling, T.; Nowak, W.

    2017-12-01

    Hyporheic exchange is the interaction of water between rivers and groundwater, and is difficult to predict. One of the largest contributions to predictive uncertainty for hyporheic fluxes has been attributed to the representation of heterogeneous subsurface properties. This research aims to evaluate which aspect of the subsurface representation - the spatial distribution of hydrofacies or the model for local-scale (within-facies) heterogeneity - most influences the predictive uncertainty. We also seek to identify data types that best help reduce this uncertainty. For this investigation, we conduct a modelling study of the Steinlach River meander in Southwest Germany, an experimental site established in 2010 to monitor hyporheic exchange at the meander scale. We use HydroGeoSphere, a fully integrated surface water-groundwater model, to model hyporheic exchange and to assess the predictive uncertainty of hyporheic exchange transit times (HETT). A highly parameterized complex model is built and treated as 'virtual reality', which is in turn modelled with simpler subsurface parameterization schemes (Figure). We then conduct Monte-Carlo simulations with these models to estimate the predictive uncertainty. Results indicate that: uncertainty in HETT is relatively small for early times and increases with transit time; uncertainty from local-scale heterogeneity is negligible compared to uncertainty in the hydrofacies distribution; and introducing more data to a poor model structure may reduce predictive variance, but does not reduce predictive bias. Hydraulic head observations alone cannot constrain the uncertainty of HETT; however, an estimate of hyporheic exchange flux proves more effective at reducing this uncertainty. Figure: Approach for evaluating predictive model uncertainty. A conceptual model is first developed from the field investigations. A complex model ('virtual reality') is then developed based on that conceptual model.

  12. Critical assessment of nuclear mass models

    International Nuclear Information System (INIS)

    Moeller, P.; Nix, J.R.

    1992-01-01

    Some of the physical assumptions underlying various nuclear mass models are discussed. The ability of different mass models to predict new masses that were not taken into account when the models were formulated and their parameters determined is analyzed. The models are also compared with respect to their ability to describe nuclear-structure properties in general. The analysis suggests future directions for mass-model development.

  13. Modeling, robust and distributed model predictive control for freeway networks

    NARCIS (Netherlands)

    Liu, S.

    2016-01-01

    In Model Predictive Control (MPC) for traffic networks, traffic models are crucial since they are used as prediction models for determining the optimal control actions. In order to reduce the computational complexity of MPC for traffic networks, macroscopic traffic models are often used instead of microscopic models.

  14. Evaluation of a numerical model's ability to predict bed load transport observed in braided river experiments

    Science.gov (United States)

    Javernick, Luke; Redolfi, Marco; Bertoldi, Walter

    2018-05-01

    New data collection techniques offer numerical modelers the ability to gather and utilize high quality data sets with high spatial and temporal resolution. Such data sets are currently needed for calibration, verification, and to fuel future model development, particularly morphological simulations. This study explores the use of high quality spatial and temporal data sets of observed bed load transport in braided river flume experiments to evaluate the ability of a two-dimensional model, Delft3D, to predict bed load transport. This study uses a fixed bed model configuration and examines the model's shear stress calculations, which are the foundation for predicting the sediment fluxes necessary for morphological simulations. The evaluation is conducted for three flow rates, and the model setup used highly accurate Structure-from-Motion (SfM) topography and discharge boundary conditions. The model was hydraulically calibrated using bed roughness, and performance was evaluated based on depth and inundation agreement. Model bed load performance was evaluated in terms of the critical shear stress exceedance area compared to maps of observed bed mobility in the flume. Following the standard hydraulic calibration, bed load performance was tested for sensitivity to horizontal eddy viscosity parameterization and bed morphology updating. Simulations produced depth errors equal to the SfM inherent errors, inundation agreement of 77-85%, and critical shear stress exceedance in agreement with 49-68% of the observed active area. This study provides insight into the ability of physically based, two-dimensional simulations to accurately predict bed load as well as the effects of horizontal eddy viscosity and bed updating. Further, this study highlights how using high spatial and temporal resolution data to capture the physical processes at work during flume experiments can help to improve morphological modeling.
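
    The "critical shear stress exceedance area" metric can be sketched in a few lines: compute bed shear stress on each wet cell and report the fraction of the wetted area above a threshold. The Chezy closure and the `tau_crit` value below are generic illustrations, not the study's calibrated settings.

```python
import numpy as np

RHO, G = 1000.0, 9.81  # water density (kg/m3), gravitational acceleration (m/s2)

def bed_shear_stress(velocity, chezy=45.0):
    """Depth-averaged bed shear stress via a Chezy closure,
    tau = rho * g * U^2 / C^2 (one common form in 2D models)."""
    return RHO * G * velocity**2 / chezy**2

def exceedance_area_fraction(depth, velocity, tau_crit=1.5):
    """Fraction of wetted cells whose bed shear stress exceeds the
    critical shear stress of the bed sediment (tau_crit is a placeholder)."""
    wet = depth > 0.0
    exceed = wet & (bed_shear_stress(velocity) > tau_crit)
    return np.count_nonzero(exceed) / np.count_nonzero(wet)

# Toy 2x2 grid: one dry cell; two of the three wet cells exceed the threshold
depth = np.array([[0.3, 0.4], [0.0, 0.5]])
vel = np.array([[0.4, 0.7], [0.0, 0.6]])
frac = exceedance_area_fraction(depth, vel)
print(round(frac, 2))  # prints 0.67
```

Comparing such an exceedance map against an observed bed-mobility map, cell by cell, gives the 49-68% agreement figures quoted above.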

  15. Staying Power of Churn Prediction Models

    NARCIS (Netherlands)

    Risselada, Hans; Verhoef, Peter C.; Bijmolt, Tammo H. A.

    In this paper, we study the staying power of various churn prediction models. Staying power is defined as the predictive performance of a model in a number of periods after the estimation period. We examine two methods, logit models and classification trees, both with and without applying a bagging procedure.

  16. Evaluation of the Food and Agriculture Sector Criticality Assessment Tool (FASCAT) and the Collected Data.

    Science.gov (United States)

    Huff, Andrew G; Hodges, James S; Kennedy, Shaun P; Kircher, Amy

    2015-08-01

    To protect and secure food resources for the United States, it is crucial to have a method to compare food systems' criticality. In 2007, the U.S. government funded development of the Food and Agriculture Sector Criticality Assessment Tool (FASCAT) to determine which food and agriculture systems were most critical to the nation. FASCAT was developed in a collaborative process involving government officials and food industry subject matter experts (SMEs). After development, data were collected using FASCAT to quantify threats, vulnerabilities, consequences, and the impacts on the United States from failure of evaluated food and agriculture systems. To examine FASCAT's utility, linear regression models were used to determine: (1) which groups of questions posed in FASCAT were better predictors of cumulative criticality scores; and (2) whether the items included in FASCAT's criticality method or the smaller subset of FASCAT items included in DHS's risk analysis method predicted similar criticality scores. Akaike's information criterion was used to determine which regression models best described criticality, and a mixed linear model was used to shrink estimates of criticality for individual food and agriculture systems. The results indicated that: (1) some of the questions used in FASCAT strongly predicted food or agriculture system criticality; (2) the FASCAT criticality formula was a stronger predictor of criticality than the DHS risk formula; (3) the cumulative criticality formula predicted criticality more strongly than the weighted criticality formula; and (4) the mixed linear regression model did not change the rank-order of food and agriculture system criticality to a large degree. © 2015 Society for Risk Analysis.
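
    The AIC-based model comparison described above can be illustrated with ordinary least squares in numpy: fit competing regressions and prefer the one with the lowest AIC = n*ln(RSS/n) + 2k (Gaussian likelihood, up to a constant). The predictors below are synthetic stand-ins, not FASCAT question groups.

```python
import numpy as np

def ols_aic(X, y):
    """Fit OLS with an intercept and return Akaike's information
    criterion, AIC = n*ln(RSS/n) + 2k (up to an additive constant)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    rss = np.sum((y - X1 @ beta) ** 2)
    n, k = len(y), X1.shape[1]
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(1)
n = 120
x1 = rng.normal(size=n)  # a genuinely predictive question group
x2 = rng.normal(size=n)  # an irrelevant question group
score = 2.0 * x1 + rng.normal(scale=0.5, size=n)  # criticality score

aic_good = ols_aic(x1[:, None], score)
aic_both = ols_aic(np.column_stack([x1, x2]), score)
aic_bad = ols_aic(x2[:, None], score)
print(f"AIC informative={aic_good:.1f}  both={aic_both:.1f}  irrelevant={aic_bad:.1f}")
```

The 2k penalty is what lets AIC discriminate between models of different size rather than always favoring the one with more predictors.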

  17. Modelling and prediction of crop losses from NOAA polar-orbiting operational satellites

    Directory of Open Access Journals (Sweden)

    Felix Kogan

    2016-05-01

    Weather-related crop losses have always been a concern for farmers, governments, traders, and policy-makers for the purposes of balancing food supply and demand, trade, and distribution of aid to nations in need. Among weather disasters, drought plays a major role in large-scale crop losses. This paper discusses the utility of operational satellite-based vegetation health (VH) indices for modelling cereal yield and for early warning of drought-related crop losses. The indices were tested in Saratov oblast (SO), one of the principal grain growing regions of Russia. Correlation and regression analysis were applied to model cereal yield from VH indices during 1982–2001. A strong correlation between mean SO cereal yield and VH indices was found during the critical period of cereals, which starts two–three weeks before and ends two–three weeks after the heading stage. Several models were constructed in which VH indices served as independent variables (predictors). The models were validated independently against SO cereal yield during 1982–2012. Drought-related cereal yield losses can be predicted three months in advance of harvest and six–eight months before the official grain production statistics are released. The error of predicted production losses is 7%–10%, dropping to 3%–5% in years of intensive drought.
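
    The core of such a yield model is a simple regression of yield on a VH index from the critical period, validated on held-out years. The sketch below uses entirely synthetic numbers (the VH range, yield relation, and noise level are illustrative, not Saratov data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical VH index near heading (0-100) vs. cereal yield (t/ha)
years = 30
vh = rng.uniform(20, 80, years)
yield_t = 0.8 + 0.012 * vh + rng.normal(scale=0.06, size=years)

# Fit on the first 20 "years", validate on the last 10
train, test = slice(0, 20), slice(20, None)
A = np.column_stack([np.ones(20), vh[train]])
coef, *_ = np.linalg.lstsq(A, yield_t[train], rcond=None)

pred = coef[0] + coef[1] * vh[test]
mape = np.mean(np.abs(pred - yield_t[test]) / yield_t[test]) * 100
print(f"hold-out error: {mape:.1f}%")
```

Because the VH index is available in near real time, predictions of this form can be issued months before harvest, which is the early-warning value the paper emphasizes.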

  18. A review of model predictive control: moving from linear to nonlinear design methods

    International Nuclear Information System (INIS)

    Nandong, J.; Samyudia, Y.; Tade, M.O.

    2006-01-01

    Linear model predictive control (LMPC) is now considered an industrial control standard in the process industry. Its extension to nonlinear cases, however, has not yet gained wide acceptance for several reasons, e.g. the excessively heavy computational load, which prevents practical implementation in real-time control. The application of nonlinear MPC (NMPC) is advantageous for processes with strong nonlinearity or when the operating point is frequently moved from one set point to another due to, for instance, changes in market demands. Much effort has been dedicated towards improving the computational efficiency of NMPC as well as its stability analysis. This paper provides a review of alternative ways of extending linear MPC to the nonlinear case. We also highlight the critical issues pertinent to the applications of NMPC and discuss possible solutions to address these issues. In addition, we outline the future research trend in the area of model predictive control, emphasizing the potential applications of multi-scale process models within NMPC.

  19. Predicting the chromatographic retention of polymers: application of the polymer model to poly(styrene/ethylacrylate) copolymers.

    Science.gov (United States)

    Bashir, Mubasher A; Radke, Wolfgang

    2012-02-17

    The retention behavior of a range of statistical poly(styrene/ethylacrylate) copolymers is investigated in order to determine whether retention volumes of these copolymers can be predicted using a suitable chromatographic retention model. It was found that the composition of elution in gradient chromatography of the copolymers is closely related to the eluent composition at which, in isocratic chromatography, the transition from elution in adsorption mode to exclusion mode occurs. For homopolymers this transition takes place at a critical eluent composition at which the molar mass dependence of elution volume vanishes. Thus, similar critical eluent compositions can be defined for statistical copolymers. The existence of a critical eluent composition is further supported by the narrower peak width, indicating that the broad molar mass distribution of the samples does not contribute to the retention volume. It is shown that the existing retention model for homopolymers allows correct quantitative predictions of retention volumes based on only three appropriate initial experiments. The selection of these initial experiments involves a gradient run and two isocratic experiments, one at the composition of elution calculated from the first gradient run and the second at a slightly higher eluent strength. Copyright © 2011 Elsevier B.V. All rights reserved.

  20. Prediction of rodent carcinogenic potential of naturally occurring chemicals in the human diet using high-throughput QSAR predictive modeling

    International Nuclear Information System (INIS)

    Valerio, Luis G.; Arvidson, Kirk B.; Chanderbhan, Ronald F.; Contrera, Joseph F.

    2007-01-01

    Consistent with the U.S. Food and Drug Administration (FDA) Critical Path Initiative, predictive toxicology software programs employing quantitative structure-activity relationship (QSAR) models are currently under evaluation for regulatory risk assessment and scientific decision support for highly sensitive endpoints such as carcinogenicity, mutagenicity and reproductive toxicity. At the FDA's Center for Food Safety and Applied Nutrition's Office of Food Additive Safety and the Center for Drug Evaluation and Research's Informatics and Computational Safety Analysis Staff (ICSAS), the use of computational SAR tools for both qualitative and quantitative risk assessment applications are being developed and evaluated. One tool of current interest is MDL-QSAR predictive discriminant analysis modeling of rodent carcinogenicity, which has been previously evaluated for pharmaceutical applications by the FDA ICSAS. The study described in this paper aims to evaluate the utility of this software to estimate the carcinogenic potential of small, organic, naturally occurring chemicals found in the human diet. In addition, a group of 19 known synthetic dietary constituents that were positive in rodent carcinogenicity studies served as a control group. In the test group of naturally occurring chemicals, 101 were found to be suitable for predictive modeling using this software's discriminant analysis modeling approach. Predictions performed on these compounds were compared to published experimental evidence of each compound's carcinogenic potential. Experimental evidence included relevant toxicological studies such as rodent cancer bioassays, rodent anti-carcinogenicity studies, genotoxic studies, and the presence of chemical structural alerts. Statistical indices of predictive performance were calculated to assess the utility of the predictive modeling method. Results revealed good predictive performance using this software's rodent carcinogenicity module of over 1200 chemicals

  1. A study of critical two-phase flow models

    International Nuclear Information System (INIS)

    Siikonen, T.

    1982-01-01

    The existing computer codes use different boundary conditions in the calculation of critical two-phase flow. In the present study these boundary conditions are compared. It is shown that the boundary condition should be determined from the hydraulic model used in the computer code; using a correlation that is not based on that hydraulic model often leads to poor results. Calculations usually agree well with the measured critical mass flux, but less well with the measured pressure profiles. The discrepancy is attributed mainly to inadequate modeling of non-equilibrium effects. (orig.)

  2. Genomic prediction of complex human traits: relatedness, trait architecture and predictive meta-models

    Science.gov (United States)

    Spiliopoulou, Athina; Nagy, Reka; Bermingham, Mairead L.; Huffman, Jennifer E.; Hayward, Caroline; Vitart, Veronique; Rudan, Igor; Campbell, Harry; Wright, Alan F.; Wilson, James F.; Pong-Wong, Ricardo; Agakov, Felix; Navarro, Pau; Haley, Chris S.

    2015-01-01

    We explore the prediction of individuals' phenotypes for complex traits using genomic data. We compare several widely used prediction models, including Ridge Regression, LASSO and Elastic Nets estimated from cohort data, and polygenic risk scores constructed using published summary statistics from genome-wide association meta-analyses (GWAMA). We evaluate the interplay between relatedness, trait architecture and optimal marker density, by predicting height, body mass index (BMI) and high-density lipoprotein level (HDL) in two data cohorts, originating from Croatia and Scotland. We empirically demonstrate that dense models are better when all genetic effects are small (height and BMI) and target individuals are related to the training samples, while sparse models predict better in unrelated individuals and when some effects have moderate size (HDL). For HDL sparse models achieved good across-cohort prediction, performing similarly to the GWAMA risk score and to models trained within the same cohort, which indicates that, for predicting traits with moderately sized effects, large sample sizes and familial structure become less important, though still potentially useful. Finally, we propose a novel ensemble of whole-genome predictors with GWAMA risk scores and demonstrate that the resulting meta-model achieves higher prediction accuracy than either model on its own. We conclude that although current genomic predictors are not accurate enough for diagnostic purposes, performance can be improved without requiring access to large-scale individual-level data. Our methodologically simple meta-model is a means of performing predictive meta-analysis for optimizing genomic predictions and can be easily extended to incorporate multiple population-level summary statistics or other domain knowledge. PMID:25918167

  3. The assessment of two-fluid models using critical flow data

    International Nuclear Information System (INIS)

    Shome, B.; Lahey, R.T. Jr.

    1992-01-01

    The behavior of two-phase flow is governed by the thermal-hydraulic transfers occurring across phasic interfaces. If correctly formulated, two-fluid models should yield all conceivable evolutions. Moreover, some experiments may be uniquely qualified for model assessment if they can isolate important closure models. This paper is primarily concerned with the possible assessment of the virtual mass force using air-water critical flow data, in which phase-change effects do not take place. The following conclusions can be drawn from this study: (1) The closure parameters, other than those for virtual mass, were found to have an insignificant effect on critical flow. In contrast, the void fraction profile and the slip ratio were observed to be sensitive to the virtual mass model. (2) It appears that air-water critical flow experiments may be effectively used for the assessment of the virtual mass force used in two-fluid models. In fact, such experiments are unique in their ability to isolate the spatial gradients in virtual mass models. It is hoped that this study will help stimulate the conduct of further critical flow experiments for the assessment of two-fluid models.

  4. Tuning critical failure with viscoelasticity: How aftershocks inhibit criticality in an analytical mean field model of fracture.

    Science.gov (United States)

    Baro Urbea, J.; Davidsen, J.

    2017-12-01

    The hypothesis of critical failure relates the presence of an ultimate stability point in the structural constitutive equation of materials to a divergence of characteristic scales in the microscopic dynamics responsible for deformation. Avalanche models involving critical failure have determined universality classes in different systems: from slip events in crystalline and amorphous materials to the jamming of granular media or the fracture of brittle materials. However, not all empirical failure processes exhibit the trademarks of critical failure. As an example, the statistical properties of ultrasonic acoustic events recorded during the failure of porous brittle materials are stationary, except for variations in the activity rate that can be interpreted in terms of aftershock and foreshock activity (J. Baró et al., PRL 2013). The rheological properties of materials introduce dissipation, usually reproduced in atomistic models as a hardening of the coarse-grained elements of the system. If the hardening is associated with a relaxation process, the same mechanism is able to generate temporal correlations. We report the analytic solution of a mean field fracture model exemplifying how criticality and temporal correlations are tuned by transient hardening. We provide a physical meaning to the conceptual model by deriving the constitutive equation from the explicit representation of the transient hardening in terms of a generalized viscoelasticity model. The rate of 'aftershocks' is controlled by the temporal evolution of the viscoelastic creep. In the quasistatic limit, the moment release is invariant to rheology. Therefore, the lack of criticality is explained by the increase of the activity rate close to failure, i.e. 'foreshocks'. Finally, the avalanche propagation can be reinterpreted as a pure mathematical problem in terms of a stochastic counting process. The statistical properties depend only on the distance to a critical point, which is universal for any

  5. Accuracy assessment of landslide prediction models

    International Nuclear Information System (INIS)

    Othman, A N; Mohd, W M N W; Noraini, S

    2014-01-01

    The increasing population and the expansion of settlements over hilly areas have greatly increased the impact of natural disasters such as landslides. It is therefore important to develop models that can accurately predict landslide hazard zones. Over the years, various techniques and models have been developed for this purpose. The aim of this paper is to assess the accuracy of landslide prediction models developed by the authors. The methodology involved the selection of the study area, data acquisition, data processing, model development and data analysis. The models are based on nine landslide-inducing parameters: slope, land use, lithology, soil properties, geomorphology, flow accumulation, aspect, proximity to river and proximity to road. Rank sum, rating, pairwise comparison and AHP techniques are used to determine the weights for each of the parameters. Four different models, each considering a different parameter combination, are developed by the authors. Results are compared to landslide history; the accuracies for Model 1, Model 2, Model 3 and Model 4 are 66.7%, 66.7%, 60% and 22.9%, respectively. From the results, rank sum, rating and pairwise comparison can be useful techniques to predict landslide hazard zones.
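
    The pairwise-comparison (AHP) weighting mentioned above is usually computed as the normalized principal eigenvector of a reciprocal comparison matrix. A minimal sketch for three of the nine parameters; the comparison values are hypothetical, not the authors':

```python
import numpy as np

# Hypothetical 3-parameter pairwise comparison matrix on the Saaty 1-9 scale;
# entry [i, j] states how much more important parameter i is than parameter j.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)            # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
weights = w / w.sum()                  # normalized AHP weights, one per parameter
```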

  6. Classification and prediction of the critical heat flux using fuzzy theory and artificial neural networks

    International Nuclear Information System (INIS)

    Moon, Sang Ki; Chang, Soon Heung

    1994-01-01

    A new method to predict the critical heat flux (CHF) is proposed, based on fuzzy clustering and an artificial neural network. The fuzzy clustering classifies the experimental CHF data into a few data clusters (data groups) according to the data characteristics. After classification of the experimental data, the characteristics of the resulting clusters are discussed with emphasis on the distribution of the experimental conditions and physical mechanism. The CHF data in each group are trained in an artificial neural network to predict the CHF. The artificial neural network adjusts its weights so as to minimize the prediction error within the corresponding cluster. Application of the proposed method to the KAIST CHF data bank shows good prediction capability of the CHF, better than other existing methods. (orig.)
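
    The cluster-then-regress structure described above can be sketched with standard tools. As a simplification, hard k-means stands in for the paper's fuzzy clustering, and the inputs/targets are synthetic stand-ins, not the KAIST CHF data bank:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# Toy stand-ins for local conditions (e.g. pressure, mass flux, quality) -> CHF
X = rng.uniform(0.0, 1.0, size=(300, 3))
y = 2.0 - 1.5 * X[:, 2] + 0.5 * X[:, 0] + rng.normal(0.0, 0.05, 300)

# Step 1: partition the data by its characteristics (hard k-means here;
# the paper uses fuzzy clustering, which assigns soft memberships instead).
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Step 2: train one network per cluster, minimizing error within that cluster.
nets = {c: MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                        random_state=0).fit(X[labels == c], y[labels == c])
        for c in range(3)}

# Predict each point with its own cluster's network
pred = np.array([nets[c].predict(x[None, :])[0] for x, c in zip(X, labels)])
```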

  7. A fast-running core prediction model based on neural networks for load-following operations in a soluble boron-free reactor

    Energy Technology Data Exchange (ETDEWEB)

    Jang, Jin-wook [Korea Atomic Energy Research Institute, P.O. Box 105, Yusong, Daejon 305-600 (Korea, Republic of)], E-mail: Jinwook@kaeri.re.kr; Seong, Seung-Hwan [Korea Atomic Energy Research Institute, P.O. Box 105, Yusong, Daejon 305-600 (Korea, Republic of)], E-mail: shseong@kaeri.re.kr; Lee, Un-Chul [Department of Nuclear Engineering, Seoul National University, Shinlim-Dong, Gwanak-Gu, Seoul 151-742 (Korea, Republic of)

    2007-09-15

    A fast prediction model for load-following operations in a soluble boron-free reactor has been proposed, which can predict the core status when three or more control rod groups are moved at a time. This prediction model consists of two multilayer feedforward neural network models, which retrieve the axial offset and the reactivity, and compensation models that account for the reactivity and axial offset changes arising from the xenon transient. The neural network training data were generated by taking various overlaps among the control rod groups into consideration, and the accuracy of the constructed neural network models was verified. Validation results for load-following operations in a soluble boron-free reactor show that the model can predict the control rod positions needed to sustain core criticality without exceeding the tolerable axial offset band, while providing operators enough lead time to take the actions necessary to prevent a deviation from the tolerable operating band.

  8. A fast-running core prediction model based on neural networks for load-following operations in a soluble boron-free reactor

    International Nuclear Information System (INIS)

    Jang, Jin-wook; Seong, Seung-Hwan; Lee, Un-Chul

    2007-01-01

    A fast prediction model for load-following operations in a soluble boron-free reactor has been proposed, which can predict the core status when three or more control rod groups are moved at a time. This prediction model consists of two multilayer feedforward neural network models, which retrieve the axial offset and the reactivity, and compensation models that account for the reactivity and axial offset changes arising from the xenon transient. The neural network training data were generated by taking various overlaps among the control rod groups into consideration, and the accuracy of the constructed neural network models was verified. Validation results for load-following operations in a soluble boron-free reactor show that the model can predict the control rod positions needed to sustain core criticality without exceeding the tolerable axial offset band, while providing operators enough lead time to take the actions necessary to prevent a deviation from the tolerable operating band.

  9. Mental models accurately predict emotion transitions.

    Science.gov (United States)

    Thornton, Mark A; Tamir, Diana I

    2017-06-06

    Successful social interactions depend on people's ability to predict others' future actions and emotions. People possess many mechanisms for perceiving others' current emotional states, but how might they use this information to predict others' future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others' emotional dynamics. People could then use these mental models of emotion transitions to predict others' future emotions from currently observable emotions. To test this hypothesis, studies 1-3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants' ratings of emotion transitions predicted others' experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation-valence, social impact, rationality, and human mind-inform participants' mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants' accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.

  10. Mental models accurately predict emotion transitions

    Science.gov (United States)

    Thornton, Mark A.; Tamir, Diana I.

    2017-01-01

    Successful social interactions depend on people’s ability to predict others’ future actions and emotions. People possess many mechanisms for perceiving others’ current emotional states, but how might they use this information to predict others’ future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others’ emotional dynamics. People could then use these mental models of emotion transitions to predict others’ future emotions from currently observable emotions. To test this hypothesis, studies 1–3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants’ ratings of emotion transitions predicted others’ experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation—valence, social impact, rationality, and human mind—inform participants’ mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants’ accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone. PMID:28533373
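
    The core idea of the two records above, estimating a transition model from observed emotion sequences and using it to predict the next state, can be sketched as follows. The label set and the sampled sequence are invented for illustration:

```python
import numpy as np

emotions = ["calm", "happy", "sad", "anxious"]        # toy label set
idx = {e: i for i, e in enumerate(emotions)}

# Hypothetical experience-sampling sequence of self-reported emotions
seq = ["calm", "happy", "happy", "sad", "anxious", "sad", "calm", "happy"]

# Count observed transitions, then normalize each row into probabilities.
T = np.zeros((4, 4))
for a, b in zip(seq, seq[1:]):
    T[idx[a], idx[b]] += 1
# Note: a row with no observed outgoing transitions would divide by zero here;
# every emotion in this toy sequence has at least one successor.
T = T / T.sum(axis=1, keepdims=True)

def predict_next(current):
    """Most likely next emotion under the fitted transition model."""
    return emotions[int(np.argmax(T[idx[current]]))]
```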

  11. Poisson Mixture Regression Models for Heart Disease Prediction.

    Science.gov (United States)

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is addressed for two different classes: standard and concomitant-variable mixture regression models. Results show that a two-component concomitant-variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary generalized linear Poisson regression model, owing to its lower Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise for the clusters obtained. It is deduced that heart disease prediction can be done effectively by identifying the major risks componentwise using a Poisson mixture regression model.

  12. Poisson Mixture Regression Models for Heart Disease Prediction

    Science.gov (United States)

    Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is addressed for two different classes: standard and concomitant-variable mixture regression models. Results show that a two-component concomitant-variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary generalized linear Poisson regression model, owing to its lower Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise for the clusters obtained. It is deduced that heart disease prediction can be done effectively by identifying the major risks componentwise using a Poisson mixture regression model. PMID:27999611
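
    The model-selection logic above, a mixture beating a single Poisson model on BIC when latent risk groups exist, can be sketched with a short EM fit. The two "risk group" rates and all counts are simulated, and this fits plain Poisson mixtures rather than the paper's mixture regressions:

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(2)
# Hypothetical event counts drawn from two latent risk groups (low/high rates)
y = np.concatenate([rng.poisson(1.0, 300), rng.poisson(6.0, 150)])
n = len(y)

# Single-Poisson fit: the MLE of the rate is the sample mean
lam = y.mean()
ll_single = poisson.logpmf(y, lam).sum()
bic_single = -2 * ll_single + 1 * np.log(n)          # 1 free parameter

# Two-component mixture fitted by a short EM loop
pi, lam1, lam2 = 0.5, 0.5, 5.0                       # crude initial values
for _ in range(200):
    r = pi * poisson.pmf(y, lam2)
    r = r / (r + (1 - pi) * poisson.pmf(y, lam1))    # responsibility of component 2
    pi = r.mean()
    lam1 = ((1 - r) * y).sum() / (1 - r).sum()
    lam2 = (r * y).sum() / r.sum()
ll_mix = np.log((1 - pi) * poisson.pmf(y, lam1) + pi * poisson.pmf(y, lam2)).sum()
bic_mix = -2 * ll_mix + 3 * np.log(n)                # 3 free parameters: pi, lam1, lam2
# A lower BIC for the mixture is evidence of distinct risk groups in the counts.
```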

  13. An application of liquid sublayer dryout mechanism to the prediction of critical heat flux under low pressure and low velocity conditions in round tubes

    International Nuclear Information System (INIS)

    Lee, Kwang-Won; Yang, Jae-Young; Baik, Se-Jin

    1997-01-01

    Based on several pieces of experimental evidence for nucleate boiling in the annular film and for the existence of a residual liquid film flow rate at the critical heat flux (CHF) location, the liquid sublayer dryout (LSD) mechanism under the annular film is first introduced to evaluate CHF data at low pressure and low velocity (LPLV) conditions, which cannot be predicted by a normal annular film dryout (AFD) model. In this study, the CHF occurrence due to annular film separation or breakdown is phenomenologically modelled by applying the LSD mechanism to this situation. In this LSD mechanism, the liquid sublayer thickness, the incoming liquid velocity to the liquid sublayer, and the axial distance from the onset of annular flow to the CHF location are used as the phenomena-controlling parameters. In a validation against 1406 CHF data points spanning P = 0.1-2 MPa, G = 4-499 kg/m²s and L/D = 4-402, most of the CHF data (more than 1000 points) are predicted within ±30% error bounds by the LSD mechanism. However, results for cases with critical qualities below 0.4 are considerably overestimated. These overpredictions appear to be caused by inadequate CHF-mechanism classification criteria and an insufficient consideration of the effect of flow instability on CHF. Further studies on a new classification criterion for screening CHF data affected by flow instabilities and on a new bubble detachment model for LPLV conditions are needed to improve the model accuracy. (author)

  14. Development of scheme for predicting atmospheric dispersion of radionuclides during nuclear emergency by using atmospheric dynamic model

    Energy Technology Data Exchange (ETDEWEB)

    Nagai, Haruyasu; Chino, Masamichi; Yamazawa, Hiromi (Japan Atomic Energy Research Inst., Tokyo (Japan))

    1999-07-01

    The meteorological forecast models are critically important for the accuracy of predicting the atmospheric dispersion of radionuclides discharged into atmosphere during nuclear emergencies. Thus, this paper describes a new scheme for predicting environmental impacts due to accidental release of radionuclides by using an atmospheric dynamic model PHYSIC. The advantages of introducing PHYSIC are, (1) three-dimensional local meteorological forecasts can be conducted, (2) synoptic meteorological changes can be considered by inputting grid data of synoptic forecasts from Japan Meteorological Agency to PHYSIC as initial and boundary conditions, (3) forecasts can be improved by nudging method using local meteorological observations, and (4) atmospheric dispersion model can consider the variation of the mixed layer. (author)

  15. Development of scheme for predicting atmospheric dispersion of radionuclides during nuclear emergency by using atmospheric dynamic model

    International Nuclear Information System (INIS)

    Nagai, Haruyasu; Chino, Masamichi; Yamazawa, Hiromi

    1999-01-01

    The meteorological forecast models are critically important for the accuracy of predicting the atmospheric dispersion of radionuclides discharged into atmosphere during nuclear emergencies. Thus, this paper describes a new scheme for predicting environmental impacts due to accidental release of radionuclides by using an atmospheric dynamic model PHYSIC. The advantages of introducing PHYSIC are, (1) three-dimensional local meteorological forecasts can be conducted, (2) synoptic meteorological changes can be considered by inputting grid data of synoptic forecasts from Japan Meteorological Agency to PHYSIC as initial and boundary conditions, (3) forecasts can be improved by nudging method using local meteorological observations, and (4) atmospheric dispersion model can consider the variation of the mixed layer. (author)

  16. Critical review of glass performance modeling

    International Nuclear Information System (INIS)

    Bourcier, W.L.

    1994-07-01

    Borosilicate glass is to be used for permanent disposal of high-level nuclear waste in a geologic repository. Mechanistic chemical models are used to predict the rate at which radionuclides will be released from the glass under repository conditions. The most successful and useful of these models link reaction path geochemical modeling programs with a glass dissolution rate law that is consistent with transition state theory. These models have been used to simulate several types of short-term laboratory tests of glass dissolution and to predict the long-term performance of the glass in a repository. Although mechanistically based, the current models are limited by a lack of unambiguous experimental support for some of their assumptions. The most severe problem of this type is the lack of an existing validated mechanism that controls long-term glass dissolution rates. Current models can be improved by performing carefully designed experiments and using the experimental results to validate the rate-controlling mechanisms implicit in the models. These models should be supported with long-term experiments to be used for model validation. The mechanistic basis of the models should be explored by using modern molecular simulations such as molecular orbital and molecular dynamics to investigate both the glass structure and its dissolution process
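
    The "glass dissolution rate law that is consistent with transition state theory" referred to above is typically an affinity-based rate law in which the rate vanishes as the solution approaches saturation. A minimal sketch; the rate constant and equilibrium constant are hypothetical placeholders, not values from the review:

```python
# Transition-state-theory affinity rate law commonly used in such models.
# k_plus: forward rate constant (hypothetical); K: equilibrium constant
# of the rate-limiting reaction (hypothetical); Q: ion activity product.
def dissolution_rate(Q, k_plus=1e-9, K=1e-3):
    """Dissolution rate per unit surface area; -> 0 as Q -> K (saturation)."""
    return k_plus * (1.0 - Q / K)
```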

  17. Critical review of glass performance modeling

    Energy Technology Data Exchange (ETDEWEB)

    Bourcier, W.L. [Lawrence Livermore National Lab., CA (United States)

    1994-07-01

    Borosilicate glass is to be used for permanent disposal of high-level nuclear waste in a geologic repository. Mechanistic chemical models are used to predict the rate at which radionuclides will be released from the glass under repository conditions. The most successful and useful of these models link reaction path geochemical modeling programs with a glass dissolution rate law that is consistent with transition state theory. These models have been used to simulate several types of short-term laboratory tests of glass dissolution and to predict the long-term performance of the glass in a repository. Although mechanistically based, the current models are limited by a lack of unambiguous experimental support for some of their assumptions. The most severe problem of this type is the lack of an existing validated mechanism that controls long-term glass dissolution rates. Current models can be improved by performing carefully designed experiments and using the experimental results to validate the rate-controlling mechanisms implicit in the models. These models should be supported with long-term experiments to be used for model validation. The mechanistic basis of the models should be explored by using modern molecular simulations such as molecular orbital and molecular dynamics to investigate both the glass structure and its dissolution process.

  18. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with lower-than-desired prediction accuracy, which causes costly maintenance. Although many researchers have developed performance prediction models, prediction accuracy has remained a challenge. This paper reviews performance prediction models and JPCP faulting models used in past research. Three models, multivariate nonlinear regression (MNLR), artificial neural network (ANN) and Markov chain (MC), are then tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and not explicitly related to quantitative physical parameters. The paper then suggests that the way forward for performance prediction models is to combine the advantages of the different models to obtain better accuracy.
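
    A Markov-chain pavement model of the kind compared above propagates a condition-state distribution through a one-year transition matrix estimated from inspections. The states and probabilities below are hypothetical, for illustration only:

```python
import numpy as np

# Hypothetical 4-state faulting-condition Markov chain (1 = best, 4 = worst);
# row i gives the one-year transition probabilities out of state i.
P = np.array([[0.80, 0.20, 0.00, 0.00],
              [0.00, 0.70, 0.30, 0.00],
              [0.00, 0.00, 0.60, 0.40],
              [0.00, 0.00, 0.00, 1.00]])   # worst state is absorbing

state = np.array([1.0, 0.0, 0.0, 0.0])     # pavement starts in the best state
for _ in range(10):                        # propagate the distribution 10 years
    state = state @ P
expected_condition = state @ np.array([1, 2, 3, 4])  # expected condition rating
```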

  19. Effect of turbulence models on predicting convective heat transfer to hydrocarbon fuel at supercritical pressure

    Directory of Open Access Journals (Sweden)

    Tao Zhi

    2016-10-01

    A variety of turbulence models were used to perform numerical simulations of heat transfer for hydrocarbon fuel flowing upward and downward through uniformly heated vertical pipes at supercritical pressure. Inlet temperatures varied from 373 K to 663 K, with heat flux ranging from 300 kW/m2 to 550 kW/m2. Comparative analyses between predicted and experimental results were used to evaluate the ability of the turbulence models to respond to the variable thermophysical properties of hydrocarbon fuel at supercritical pressure. It was found that the prediction performance of a turbulence model is mainly determined by its damping function, which enables models to respond differently to local flow conditions. Although prediction accuracy varied from condition to condition, the shear stress transport (SST) and Launder-Sharma (LS) models performed better than all other models used in the study. For runs with very small buoyancy influence, the thermally induced acceleration due to density variations led to an impairment of heat transfer in the vicinity of the pseudo-critical point, and heat transfer was enhanced at higher temperatures through the combined action of four thermophysical properties: density, viscosity, thermal conductivity and specific heat. For runs with very large buoyancy influence, the thermally induced acceleration effect was overpredicted by the LS and AB models.

  20. Critical modeling parameters identified for 3D CFD modeling of rectangular final settling tanks for New York City wastewater treatment plants.

    Science.gov (United States)

    Ramalingam, K; Xanthos, S; Gong, M; Fillos, J; Beckmann, K; Deur, A; McCorquodale, J A

    2012-01-01

    New York City Environmental Protection is in the process of incorporating biological nitrogen removal (BNR) in its wastewater treatment plants (WWTPs) which entails operating the aeration tanks with higher levels of mixed liquor suspended solids (MLSS) than a conventional activated sludge process. The objective of this paper is to discuss two of the important parameters introduced in the 3D CFD model that has been developed by the City College of New York (CCNY) group: (a) the development of the 'discrete particle' measurement technique to carry out the fractionation of the solids in the final settling tank (FST) which has critical implications in the prediction of the effluent quality; and (b) the modification of the floc aggregation (K(A)) and floc break-up (K(B)) coefficients that are found in Parker's flocculation equation (Parker et al. 1970, 1971) used in the CFD model. The dependence of these parameters on the predictions of the CFD model will be illustrated with simulation results on one of the FSTs at the 26th Ward WWTP in Brooklyn, NY.

  1. A naive Bayes model for robust remaining useful life prediction of lithium-ion battery

    International Nuclear Information System (INIS)

    Ng, Selina S.Y.; Xing, Yinjiao; Tsui, Kwok L.

    2014-01-01

    Highlights: • Robustness of RUL predictions for lithium-ion batteries is analyzed quantitatively. • RUL predictions of the same battery over cycle life are evaluated. • RUL predictions of batteries over different operating conditions are evaluated. • Naive Bayes (NB) is proposed for predictions under constant discharge environments. • Its robustness and accuracy are compared with those of a support vector machine (SVM). - Abstract: Online state-of-health (SoH) estimation and remaining useful life (RUL) prediction is a critical problem in battery health management. This paper studies the modeling of battery degradation under different usage conditions and ambient temperatures, which is seldom considered in the literature. Li-ion battery RUL prediction under constant operating conditions at different values of ambient temperature and discharge current is considered. A naive Bayes (NB) model is proposed for RUL prediction of batteries under different operating conditions. The analysis shows that under constant discharge environments, the RUL of Li-ion batteries can be predicted with the NB method, irrespective of the exact values of the operating conditions. The case study shows that the NB gives stable and competitive prediction performance relative to the support vector machine (SVM). This also suggests that, while it is well known that environmental conditions have a big impact on the degradation trend, it is the changes in the operating conditions of a Li-ion battery over its cycle life that make degradation and RUL prediction even more difficult.
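
    A naive Bayes approach to battery end-of-life classification can be sketched on toy degradation data. The linear fade model, feature choice and RUL threshold below are invented for illustration and do not reproduce the paper's method:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(3)
# Toy degradation data: capacity fades roughly linearly with cycle number
cycles = np.arange(1, 501)
capacity = 1.0 - 0.0012 * cycles + rng.normal(0.0, 0.01, 500)
rul = 500 - cycles                         # remaining useful life in cycles

X = np.column_stack([cycles, capacity])
y = (rul < 100).astype(int)                # 1 = battery is near end of life

clf = GaussianNB().fit(X, y)
# Probability that a cell at cycle 450 with capacity 0.46 is near end of life
prob_eol = clf.predict_proba([[450, 0.46]])[0, 1]
```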

  2. Unreachable Setpoints in Model Predictive Control

    DEFF Research Database (Denmark)

    Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp

    2008-01-01

    In this work, a new model predictive controller is developed that handles unreachable setpoints better than traditional model predictive control methods. The new controller induces an interesting fast/slow asymmetry in the tracking response of the system. Nominal asymptotic stability of the optimal...... steady state is established for terminal constraint model predictive control (MPC). The region of attraction is the steerable set. Existing analysis methods for closed-loop properties of MPC are not applicable to this new formulation, and a new analysis method is developed. It is shown how to extend...
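
    As a toy illustration of the unreachable-setpoint phenomenon (not the authors' MPC formulation): for a scalar system x+ = a*x + b*u with |u| <= u_max, the reachable steady states satisfy |x_ss| <= b*u_max/(1 - a), so a setpoint beyond that bound saturates the input and the closed loop settles at the nearest reachable steady state.

```python
def simulate(a=0.9, b=1.0, u_max=1.0, x_ref=20.0, steps=300):
    # One-step controller with input saturation, a simple stand-in for
    # a constrained predictive controller on this scalar system
    # (an assumption for illustration, not the paper's controller).
    x = 0.0
    for _ in range(steps):
        u = max(-u_max, min(u_max, (x_ref - a * x) / b))
        x = a * x + b * u
    return x

x_final = simulate()
# Reachable steady-state bound: b*u_max/(1 - a) = 10.0, below x_ref = 20.0,
# so the state converges to 10.0 rather than the setpoint.
```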

  3. Modelling of 28-element UO2 flux-map critical experiments in ZED-2 using WIMS9A/PANTHER

    International Nuclear Information System (INIS)

    Sissaoui, M.T.; Kozier, K.S.; Labrie, J.P.

    2011-01-01

    The accuracy of WIMS9A/PANTHER in modelling D2O-moderated, and H2O- or air-cooled, doubly heterogeneous lattices of fuel clusters has been demonstrated using 28-element UO2 flux-map critical experiments in the ZED-2 facility. Presented here are the predicted keff values, coolant void reactivity biases, and the radial and axial flux shapes.

  4. A computation method for mass flowrate predictions in critical flows of initially subcooled liquid in long channels

    International Nuclear Information System (INIS)

    Celata, G.P.; D'Annibale, F.; Farello, G.E.

    1985-01-01

    A fast and accurate computation method is suggested for the prediction of mass flowrate in critical flows of initially subcooled liquid from ''long'' discharge channels (high L/D values). Starting from a very simple correlation previously proposed by the authors, further improvements in the model widen the method's reliability up to initial saturation conditions. A comparison of computed values with 145 experimental data points from several investigations carried out at the Heat Transfer Laboratory (TERM/ISP, ENEA Casaccia) shows excellent agreement. The deviation of the computed values from the experimental ones is within ±10% for almost all data, increasing slightly towards low inlet subcoolings. The average error over all the considered data is 4.6%.

  5. Predictive models for fish assemblages in eastern USA streams: implications for assessing biodiversity

    Science.gov (United States)

    Meador, Michael R.; Carlisle, Daren M.

    2009-01-01

    Management and conservation of aquatic systems require the ability to assess biological conditions and identify changes in biodiversity. Predictive models for fish assemblages were constructed to assess biological condition and changes in biodiversity for streams sampled in the eastern United States as part of the U.S. Geological Survey's National Water Quality Assessment Program. Separate predictive models were developed for northern and southern regions. Reference sites were designated using land cover and local professional judgment. Taxonomic completeness was quantified as the ratio of the number of native fish species both observed and expected to occur to the number of expected native fish species. Models for both regions accurately predicted fish species composition at reference sites with relatively high precision and low bias. In general, species that occurred less frequently than expected (decreasers) tended to prefer riffle areas and larger substrates, such as gravel and cobble, whereas increaser species (occurring more frequently than expected) tended to prefer pools, backwater areas, and vegetated and sand substrates. In the north, the percentage of species identified as increasers and the percentage identified as decreasers were equal, whereas in the south nearly two-thirds of the species examined were identified as decreasers. Predictive models of fish species can provide a standardized indicator for consistent assessments of biological condition at varying spatial scales and critical information for an improved understanding of fish species that are potentially at risk of loss with changing water quality conditions.
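
    The taxonomic-completeness (O/E) ratio described above can be computed directly once a model supplies per-species probabilities of capture. A minimal sketch follows; the species names, probabilities, and 0.5 capture-probability threshold are illustrative assumptions, not values from the NAWQA models.

```python
def taxonomic_completeness(observed, expected_probs, threshold=0.5):
    """O/E ratio: expected native species (capture probability >=
    threshold) that were actually observed, divided by the number
    expected. Inputs are illustrative placeholders."""
    expected = {sp for sp, p in expected_probs.items() if p >= threshold}
    return len(expected & observed) / len(expected)

obs = {"white sucker", "creek chub", "blacknose dace"}
exp = {"white sucker": 0.9, "creek chub": 0.8,
       "longnose dace": 0.7, "mottled sculpin": 0.3}
oe = taxonomic_completeness(obs, exp)  # 2 of 3 expected species observed
```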

  6. Data-Reconciliation Based Fault-Tolerant Model Predictive Control for a Biomass Boiler

    Directory of Open Access Journals (Sweden)

    Palash Sarkar

    2017-02-01

    This paper presents a novel, effective method to handle critical sensor faults affecting a control system devised to operate a biomass boiler. In particular, the proposed method consists of integrating a data reconciliation algorithm in a model predictive control loop, so as to annihilate the effects of faults occurring in the sensor of the flue gas oxygen concentration, by feeding the controller with the reconciled measurements. Indeed, the oxygen content in flue gas is a key variable in the control of biomass boilers due to its close connections with both combustion efficiency and polluting emissions. The main benefit of including the data reconciliation algorithm in the loop, as a fault-tolerant component, with respect to applying standard fault-tolerant methods, is that controller reconfiguration is no longer required, since the original controller operates on the restored, reliable data. The integrated data reconciliation-model predictive control (MPC) strategy has been validated by running simulations on a specific type of biomass boiler - the KPA Unicon BioGrate boiler.
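
    The data-reconciliation step has a well-known closed form for linear balance constraints: project the noisy measurements onto the constraint set, weighted by measurement variance. This is a generic sketch of that technique on a toy flow balance, not the boiler-specific algorithm from the paper.

```python
import numpy as np

def reconcile(y, A, sigma):
    """Weighted least-squares reconciliation of measurements y subject
    to linear constraints A @ x = 0 (e.g. mass balances):
        x = y - V A^T (A V A^T)^{-1} A y,   V = diag(sigma**2)
    A sketch of the general method; the paper's boiler model is richer."""
    V = np.diag(np.asarray(sigma, float) ** 2)
    correction = V @ A.T @ np.linalg.solve(A @ V @ A.T, A @ y)
    return y - correction

# Toy balance: flow1 - flow2 - flow3 = 0; noisy readings violate it by 0.3.
A = np.array([[1.0, -1.0, -1.0]])
y = np.array([10.3, 6.1, 3.9])
x = reconcile(y, A, sigma=[0.2, 0.2, 0.2])
```

    With equal measurement variances, the 0.3 imbalance is spread evenly across the three flows, and the reconciled values satisfy the balance exactly.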

  7. Critical behavior of the anisotropic Heisenberg model by effective-field renormalization group

    Science.gov (United States)

    de Sousa, J. Ricardo; Fittipaldi, I. P.

    1994-05-01

    A real-space effective-field renormalization-group method (EFRG) recently derived for computing critical properties of Ising spins is extended to treat the quantum spin-1/2 anisotropic Heisenberg model. The formalism is based on a generalized but approximate Callen-Suzuki spin relation and utilizes a convenient differential operator expansion technique. The method is illustrated in several lattice structures by employing its simplest approximation version, in which clusters with one (N'=1) and two (N=2) spins are used. The results are compared with those obtained from the standard mean-field (MFRG) and Migdal-Kadanoff (MKRG) renormalization-group treatments, and it is shown that this technique leads to rather accurate results. In contrast with the MFRG and MKRG predictions, the EFRG, besides correctly distinguishing the geometries of different lattice structures, also provides a vanishing critical temperature for all two-dimensional lattices in the isotropic Heisenberg limit. For the simple cubic lattice, the dependence of the transition temperature Tc on the exchange anisotropy parameter Δ [i.e., Tc(Δ)], and the resulting value for the critical thermal crossover exponent φ [i.e., Tc ≈ Tc(0) + AΔ^(1/φ)], are in quite good agreement with results available in the literature in which more sophisticated treatments are used.

  8. Impact of Modeling Choices on Inventory and In-Cask Criticality Calculations for Forsmark 3 BWR Spent Fuel

    International Nuclear Information System (INIS)

    Martinez-Gonzalez, Jesus S.; Ade, Brian J.; Bowman, Stephen M.; Gauld, Ian C.; Ilas, Germina; Marshall, William BJ J.

    2015-01-01

    Simulation of boiling water reactor (BWR) fuel depletion poses a challenge for nuclide inventory validation and nuclear criticality safety analyses. This challenge is due to the complex operating conditions and assembly design heterogeneities that characterize these nuclear systems. Fuel depletion simulations and in-cask criticality calculations are affected by (1) completeness of design information, (2) variability of operating conditions needed for modeling purposes, and (3) possible modeling choices. These effects must be identified, quantified, and ranked according to their significance. This paper presents an investigation of BWR fuel depletion using a complete set of actual design specifications and detailed operational data available for five operating cycles of the Swedish BWR Forsmark 3 reactor. The data includes detailed axial profiles of power, burnup, and void fraction in a very fine temporal mesh for a GE14 (10x10) fuel assembly. The specifications of this case can be used to assess the impacts of different modeling choices on inventory prediction and in-cask criticality, specifically regarding the key parameters that drive inventory and reactivity throughout fuel burnup. This study focused on the effects of the fidelity with which power history and void fraction distributions are modeled. The corresponding sensitivity of the reactivity in storage configurations is assessed, and the impacts of modeling choices on decay heat and inventory are addressed.

  9. Watershed-scale evaluation of the Water Erosion Prediction Project (WEPP) model in the Lake Tahoe basin

    Science.gov (United States)

    Erin S. Brooks; Mariana Dobre; William J. Elliot; Joan Q. Wu; Jan Boll

    2016-01-01

    Forest managers need methods to evaluate the impacts of management at the watershed scale. The Water Erosion Prediction Project (WEPP) has the ability to model disturbed forested hillslopes, but has difficulty addressing some of the critical processes that are important at a watershed scale, including baseflow and water yield. In order to apply WEPP to...

  10. Predictive modeling of nanoscale domain morphology in solution-processed organic thin films

    Science.gov (United States)

    Schaaf, Cyrus; Jenkins, Michael; Morehouse, Robell; Stanfield, Dane; McDowall, Stephen; Johnson, Brad L.; Patrick, David L.

    2017-09-01

    The electronic and optoelectronic properties of molecular semiconductor thin films are directly linked to their extrinsic nanoscale structural characteristics such as domain size and spatial distributions. In films prepared by common solution-phase deposition techniques such as spin casting and solvent-based printing, morphology is governed by a complex interrelated set of thermodynamic and kinetic factors that classical models fail to adequately capture, leaving them unable to provide much insight, let alone predictive design guidance for tailoring films with specific nanostructural characteristics. Here we introduce a comprehensive treatment of solution-based film formation enabling quantitative prediction of domain formation rates, coverage, and spacing statistics based on a small number of experimentally measurable parameters. The model combines a mean-field rate equation treatment of monomer aggregation kinetics with classical nucleation theory and a supersaturation-dependent critical nucleus size to solve for the quasi-two-dimensional temporally and spatially varying monomer concentration, nucleation rate, and other properties. Excellent agreement is observed with measured nucleation densities and interdomain radial distribution functions in polycrystalline tetracene films. Numerical solutions lead to a set of general design rules enabling predictive morphological control in solution-processed molecular crystalline films.
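
    The supersaturation-dependent critical nucleus size mentioned above is the standard classical-nucleation-theory expression. A short sketch follows; the parameter values are round illustrative numbers, not fitted tetracene constants.

```python
import math

def critical_nucleus(gamma, omega, T, S, kB=1.380649e-23):
    """Classical nucleation theory:
        r*  = 2*gamma*omega / (kB*T*ln S)
        dG* = 16*pi*gamma**3*omega**2 / (3*(kB*T*ln S)**2)
    gamma: surface energy [J/m^2], omega: molecular volume [m^3],
    S: supersaturation ratio. Values below are illustrative only."""
    x = kB * T * math.log(S)
    r_star = 2.0 * gamma * omega / x
    dG_star = 16.0 * math.pi * gamma**3 * omega**2 / (3.0 * x**2)
    return r_star, dG_star

r1, g1 = critical_nucleus(gamma=0.05, omega=5e-29, T=300.0, S=2.0)
r2, g2 = critical_nucleus(gamma=0.05, omega=5e-29, T=300.0, S=10.0)
# Higher supersaturation -> smaller critical nucleus and lower barrier,
# hence a spatially varying nucleation rate as monomer is depleted.
```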

  11. Machine learning and linear regression models to predict catchment-level base cation weathering rates across the southern Appalachian Mountain region, USA

    Science.gov (United States)

    Nicholas A. Povak; Paul F. Hessburg; Todd C. McDonnell; Keith M. Reynolds; Timothy J. Sullivan; R. Brion Salter; Bernard J. Crosby

    2014-01-01

    Accurate estimates of soil mineral weathering are required for regional critical load (CL) modeling to identify ecosystems at risk of the deleterious effects from acidification. Within a correlative modeling framework, we used modeled catchment-level base cation weathering (BCw) as the response variable to identify key environmental correlates and predict a continuous...

  12. Efficient model learning methods for actor-critic control.

    Science.gov (United States)

    Grondman, Ivo; Vaandrager, Maarten; Buşoniu, Lucian; Babuska, Robert; Schuitema, Erik

    2012-06-01

    We propose two new actor-critic algorithms for reinforcement learning. Both algorithms use local linear regression (LLR) to learn approximations of the functions involved. A crucial feature of the algorithms is that they also learn a process model, and this, in combination with LLR, provides an efficient policy update for faster learning. The first algorithm uses a novel model-based update rule for the actor parameters. The second algorithm does not use an explicit actor but learns a reference model which represents a desired behavior, from which desired control actions can be calculated using the inverse of the learned process model. The two novel methods and a standard actor-critic algorithm are applied to the pendulum swing-up problem, in which the novel methods achieve faster learning than the standard algorithm.
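
    The local linear regression approximator the algorithms build on can be sketched compactly: fit an affine model to the k nearest stored samples of a query point and evaluate it there. The data and bandwidth below are invented for illustration; the actor-critic machinery around the approximator is not reproduced.

```python
import numpy as np

def llr_predict(X, y, xq, k=10):
    """Local linear regression: fit an affine model to the k nearest
    neighbours of the query xq and evaluate it at xq."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    idx = np.argsort(np.linalg.norm(X - xq, axis=1))[:k]
    A = np.column_stack([X[idx], np.ones(len(idx))])   # affine basis
    coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
    return np.append(xq, 1.0) @ coef

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.01, 500)
pred = llr_predict(X, y, np.array([1.0]), k=20)   # should be near sin(1)
```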

  13. A critical review of cell culture strategies for modelling intracortical brain implant material reactions.

    Science.gov (United States)

    Gilmour, A D; Woolley, A J; Poole-Warren, L A; Thomson, C E; Green, R A

    2016-06-01

    The capacity to predict in vivo responses to medical devices in humans currently relies greatly on implantation in animal models. Researchers have been striving to develop in vitro techniques that can overcome the limitations associated with in vivo approaches. This review focuses on a critical analysis of the major in vitro strategies being utilized in laboratories around the world to improve understanding of the biological performance of intracortical, brain-implanted microdevices. Of particular interest to the current review are in vitro models for studying cell responses to penetrating intracortical devices and their materials, such as electrode arrays used for brain computer interface (BCI) and deep brain stimulation electrode probes implanted through the cortex. A background on the neural interface challenge is presented, followed by discussion of relevant in vitro culture strategies and their advantages and disadvantages. Future development of 2D culture models that exhibit developmental changes capable of mimicking normal, postnatal development will form the basis for more complex accurate predictive models in the future. Although not within the scope of this review, innovations in 3D scaffold technologies and microfluidic constructs will further improve the utility of in vitro approaches. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?

    Science.gov (United States)

    Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander

    2016-01-01

    Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
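
    The squared-bias-plus-model-variance decomposition described above can be made concrete with a small ensemble of hindcasts. The numbers here are synthetic placeholders; only the arithmetic of the decomposition is the point.

```python
import numpy as np

# Hindcasts from an ensemble of model variants (rows) for several
# prediction situations (columns), plus observations. The uncertain-
# model criterion decomposes (per situation, then averaged) as:
#   squared bias of the ensemble mean  +  variance across model variants
hindcasts = np.array([[5.1, 7.2, 6.0],
                      [4.7, 6.8, 5.6],
                      [5.5, 7.6, 6.4]])
obs = np.array([5.0, 7.0, 6.2])

ens_mean = hindcasts.mean(axis=0)
sq_bias = np.mean((ens_mean - obs) ** 2)
model_var = np.mean(hindcasts.var(axis=0))
msep_uncertain = sq_bias + model_var
```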

  15. Risk terrain modeling predicts child maltreatment.

    Science.gov (United States)

    Daley, Dyann; Bachmann, Michael; Bachmann, Brittany A; Pedigo, Christian; Bui, Minh-Thuy; Coffman, Jamye

    2016-12-01

    As indicated by research on the long-term effects of adverse childhood experiences (ACEs), maltreatment has far-reaching consequences for affected children. Effective prevention measures have been elusive, partly due to difficulty in identifying vulnerable children before they are harmed. This study employs Risk Terrain Modeling (RTM), an analysis of the cumulative effect of environmental factors thought to be conducive to child maltreatment, to create a highly accurate prediction model for future substantiated child maltreatment cases in the City of Fort Worth, Texas. The model is superior to commonly used hotspot predictions and more beneficial in aiding prevention efforts in a number of ways: 1) it identifies the highest risk areas for future instances of child maltreatment with improved precision and accuracy; 2) it aids the prioritization of risk-mitigating efforts by informing about the relative importance of the most significant contributing risk factors; 3) since predictions are modeled as a function of easily obtainable data, practitioners do not have to undergo the difficult process of obtaining official child maltreatment data to apply it; 4) the inclusion of a multitude of environmental risk factors creates a more robust model with higher predictive validity; and 5) the model does not rely on a retrospective examination of past instances of child maltreatment, but adapts predictions to changing environmental conditions. The present study introduces and examines the predictive power of this new tool to aid prevention efforts seeking to improve the safety, health, and wellbeing of vulnerable children. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
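
    The mechanics of risk terrain modeling, combining rasterized environmental risk layers into a composite risk surface and flagging the highest-risk cells, can be sketched in a few lines. The layer names, weights, and grid below are invented for illustration and are not the Fort Worth model's factors.

```python
import numpy as np

# Hypothetical rasterized risk layers over a 20x20 cell grid; names and
# weights are placeholders, not the study's fitted factors.
rng = np.random.default_rng(2)
grid = (20, 20)
layers = {
    "indicator_layer_a": rng.random(grid),
    "indicator_layer_b": rng.random(grid),
    "indicator_layer_c": rng.random(grid),
}
weights = {"indicator_layer_a": 2.0,
           "indicator_layer_b": 1.5,
           "indicator_layer_c": 1.0}

# Composite relative-risk surface: weighted sum of the layers.
risk = sum(w * layers[name] for name, w in weights.items())
high_risk = risk > np.quantile(risk, 0.95)   # flag the top 5% of cells
```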

  16. Predictive time-series modeling using artificial neural networks for Linac beam symmetry: an empirical study.

    Science.gov (United States)

    Li, Qiongge; Chan, Maria F

    2017-01-01

    Over half of cancer patients receive radiotherapy (RT) as partial or full cancer treatment. Daily quality assurance (QA) of RT in cancer treatment closely monitors the performance of the medical linear accelerator (Linac) and is critical for continuous improvement of patient safety and quality of care. Cumulative longitudinal QA measurements are valuable for understanding the behavior of the Linac and allow physicists to identify trends in the output and take preventive actions. In this study, artificial neural networks (ANNs) and autoregressive moving average (ARMA) time-series prediction modeling techniques were both applied to 5-year daily Linac QA data. Verification tests and other evaluations were then performed for all models. Preliminary results showed that ANN time-series predictive modeling has more advantages over ARMA techniques for accurate and effective applicability in the dosimetry and QA field. © 2016 New York Academy of Sciences.
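
    As a rough illustration of the autoregressive side of such a comparison (the ANN side is not reproduced here), the sketch below fits a plain AR(p) model by least squares to a synthetic daily QA series and makes a one-step prediction. The series, order p, and noise level are assumptions for illustration only.

```python
import numpy as np

def fit_ar(series, p):
    """Least-squares AR(p) fit: y[t] ~ c + sum_i a_i * y[t-i]."""
    y = np.asarray(series, float)
    X = np.column_stack([y[p - i - 1:len(y) - i - 1] for i in range(p)])
    X = np.column_stack([X, np.ones(len(y) - p)])      # intercept column
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coef

def forecast_one(series, coef):
    """One-step-ahead forecast from the last p observations."""
    p = len(coef) - 1
    return np.append(series[-1:-p - 1:-1], 1.0) @ coef

rng = np.random.default_rng(3)
# Synthetic daily "beam symmetry" readings: slow drift plus noise.
t = np.arange(400)
qa = 100.0 + 0.005 * t + rng.normal(0, 0.05, 400)
coef = fit_ar(qa[:-1], p=5)
err = abs(forecast_one(qa[:-1], coef) - qa[-1])
```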

  17. Case studies in archaeological predictive modelling

    NARCIS (Netherlands)

    Verhagen, Jacobus Wilhelmus Hermanus Philippus

    2007-01-01

    In this thesis, a collection of papers is put together dealing with various quantitative aspects of predictive modelling and archaeological prospection. Among the issues covered are the effects of survey bias on the archaeological data used for predictive modelling, and the complexities of testing

  18. Damage assessment of low-cycle fatigue by crack growth prediction. Development of growth prediction model and its application

    International Nuclear Information System (INIS)

    Kamaya, Masayuki; Kawakubo, Masahiro

    2012-01-01

    In this study, the fatigue damage was assumed to be equivalent to the crack initiation and its growth, and fatigue life was assessed by predicting the crack growth. First, a low-cycle fatigue test was conducted in air at room temperature under constant cyclic strain range of 1.2%. The crack initiation and change in crack size during the test were examined by replica investigation. It was found that a crack of 41.2 μm length was initiated almost at the beginning of the test. The identified crack growth rate was shown to correlate well with the strain intensity factor, whose physical meaning was discussed in this study. The fatigue life prediction model (equation) under constant strain range was derived by integrating the crack growth equation defined using the strain intensity factor, and the predicted fatigue lives were almost identical to those obtained by low-cycle fatigue tests. The change in crack depth predicted by the equation also agreed well with the experimental results. Based on the crack growth prediction model, it was shown that the crack size would be less than 0.1 mm even when the estimated fatigue damage exceeded the critical value of the design fatigue curve, in which a twenty-fold safety margin was used for the assessment. It was revealed that the effect of component size and surface roughness, which have been investigated empirically by fatigue tests, could be reasonably explained by considering the crack initiation and growth. Furthermore, the environmental effect on the fatigue life was shown to be brought about by the acceleration of crack growth. (author)
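
    The life-assessment idea above, integrating a crack-growth law written in terms of the strain intensity factor from the initiated size to a critical size, can be sketched numerically. The Paris-type form and the constants C and m below are made-up illustrative values, not the paper's fitted growth law.

```python
import math

def cycles_to_failure(a0, ac, d_eps, C, m, steps=20000):
    """Midpoint-rule integration of a Paris-type law written with the
    strain intensity factor dK_eps = d_eps * sqrt(pi * a):
        da/dN = C * dK_eps**m
    from initial crack size a0 to critical size ac. C and m are
    illustrative constants, not the paper's values."""
    da = (ac - a0) / steps
    N, a = 0.0, a0
    for _ in range(steps):
        dK = d_eps * math.sqrt(math.pi * (a + 0.5 * da))
        N += da / (C * dK**m)
        a += da
    return N

# 41.2 um initiation size and 1.2% strain range, as in the test above;
# the critical size ac and the constants are placeholders.
N_f = cycles_to_failure(a0=41.2e-6, ac=3e-3, d_eps=0.012, C=1e-3, m=2.0)
```

    For m = 2 the integral has the closed form N = ln(ac/a0) / (C * d_eps**2 * pi), a handy check on the numeric integration.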

  19. An Adaptive Critic Approach to Reference Model Adaptation

    Science.gov (United States)

    Krishnakumar, K.; Limes, G.; Gundy-Burlet, K.; Bryant, D.

    2003-01-01

    Neural networks have been successfully used for implementing control architectures for different applications. In this work, we examine a neural network augmented adaptive critic as a Level 2 intelligent controller for a C-17 aircraft. This intelligent control architecture utilizes an adaptive critic to tune the parameters of a reference model, which is then used to define the angular rate command for a Level 1 intelligent controller. The present architecture is implemented on a high-fidelity non-linear model of a C-17 aircraft. The goal of this research is to improve the performance of the C-17 under degraded conditions such as control failures and battle damage. Pilot ratings using a motion based simulation facility are included in this paper. The benefits of using an adaptive critic are documented using time response comparisons for severe damage situations.

  20. Safety-Critical Java for Embedded Systems

    DEFF Research Database (Denmark)

    Rios Rivas, Juan Ricardo

    for Java aims at providing a reduced set of the Java programming language that can be used for systems that need to be certified at the highest levels of criticality. Safety-critical Java (SCJ) restricts how a developer can structure an application by providing a specific programming model...... and by restricting the set of methods and libraries that can be used. Furthermore, its memory model does not use a garbage-collected heap but scoped memories. In this thesis we examine the use of the SCJ specification through an implementation in a time-predictable, FPGA-based Java processor. The specification is now......

  1. Predicting residential air exchange rates from questionnaires and meteorology: model evaluation in central North Carolina.

    Science.gov (United States)

    Breen, Michael S; Breen, Miyuki; Williams, Ronald W; Schultz, Bradley D

    2010-12-15

    A critical aspect of air pollution exposure models is the estimation of the air exchange rate (AER) of individual homes, where people spend most of their time. The AER, which is the airflow into and out of a building, is a primary mechanism for entry of outdoor air pollutants and removal of indoor source emissions. The mechanistic Lawrence Berkeley Laboratory (LBL) AER model was linked to a leakage area model to predict AER from questionnaires and meteorology. The LBL model was also extended to include natural ventilation (LBLX). Using literature-reported parameter values, AER predictions from LBL and LBLX models were compared to data from 642 daily AER measurements across 31 detached homes in central North Carolina, with corresponding questionnaires and meteorological observations. Data was collected on seven consecutive days during each of four consecutive seasons. For the individual model-predicted and measured AER, the median absolute difference was 43% (0.17 h(-1)) and 40% (0.17 h(-1)) for the LBL and LBLX models, respectively. Additionally, a literature-reported empirical scale factor (SF) AER model was evaluated, which showed a median absolute difference of 50% (0.25 h(-1)). The capability of the LBL, LBLX, and SF models could help reduce the AER uncertainty in air pollution exposure models used to develop exposure metrics for health studies.
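
    The LBL-style infiltration calculation combines a home's effective leakage area with stack (temperature-difference) and wind driving terms. The sketch below uses the basic-model form with typical published single-story coefficient values and a cm²-based leakage-area convention; these constants and inputs are assumptions for illustration, not the study's fitted parameters.

```python
import math

def lbl_aer(leakage_area_cm2, volume_m3, dT_K, wind_ms,
            Cs=0.000145, Cw=0.000104):
    """Basic LBL-style infiltration sketch:
        Q   = (A_L / 1000) * sqrt(Cs*|dT| + Cw*U**2)   [m^3/s]
        AER = 3600 * Q / V                              [1/h]
    Cs (stack) and Cw (wind) are typical single-story defaults from
    the basic-model literature, assumed here for illustration."""
    Q = (leakage_area_cm2 / 1000.0) * math.sqrt(Cs * abs(dT_K)
                                                + Cw * wind_ms**2)
    return 3600.0 * Q / volume_m3

# Hypothetical home: 500 cm^2 leakage area, 400 m^3 volume,
# 10 K indoor-outdoor temperature difference, 3 m/s wind.
aer = lbl_aer(leakage_area_cm2=500.0, volume_m3=400.0,
              dT_K=10.0, wind_ms=3.0)
```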

  2. Assessment of a turbulence model for numerical predictions of sheet-cavitating flows in centrifugal pumps

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Houlin; Wang, Yong; Liu, Dongxi; Yuan, Shouqi; Wang, Jian [Jiangsu University, Zhenjiang (China)

    2013-09-15

    Various approaches have been developed for numerical predictions of unsteady cavitating turbulent flows. To verify the influence of the turbulence model on the simulation of unsteady attached sheet-cavitating flows in centrifugal pumps, two modified RNG k-ε models (DCM and FBM) are implemented in ANSYS-CFX 13.0 through secondary development, so as to compare three widespread turbulence models on the same platform. The simulation has been executed and compared to experimental results for three different flow coefficients. For four operating conditions, qualitative comparisons are carried out between experimental and numerical cavitation patterns, which are visualized by a high-speed camera and depicted as isosurfaces of vapor volume fraction αv = 0.1, respectively. The comparison results indicate that, for the development of the sheet attached cavities on the suction side of the impeller blades, the numerical results with different turbulence models are very close to each other and slightly overestimate the experimental ones. However, compared to the cavitation performance experimental curves, the numerical results show obvious differences: the prediction precision with the FBM is higher than with the other two turbulence models. In addition, the loading distributions around the blade section at midspan are analyzed in detail. The research results suggest that, for numerical prediction of cavitating flows in centrifugal pumps, the turbulence model has little influence on the development of cavitation bubbles, but an advanced turbulence model can significantly improve the prediction precision of head coefficients and critical cavitation numbers.

  3. A formal approach for the prediction of the critical heat flux in subcooled water

    Energy Technology Data Exchange (ETDEWEB)

    Lombardi, C. [Polytechnic of Milan (Italy)

    1995-09-01

    The critical heat flux (CHF) in subcooled water at high mass fluxes is not yet satisfactorily correlated. To this end, a formal approach is followed here, based on an extension of the parameters and the correlation used for dryout prediction in medium-to-high-quality mixtures. The obtained correlation, in spite of its simplicity and its explicit form, yields satisfactory predictions, also when applied to more conventional CHF data at low-to-medium mass fluxes and high pressures. Further improvements are possible if a more complete data bank becomes available. The main and general open item is the definition of a criterion, depending only on independent parameters such as mass flux, pressure, inlet subcooling and geometry, to predict whether the heat transfer crisis will occur as a DNB or a dryout phenomenon.

  4. A formal approach for the prediction of the critical heat flux in subcooled water

    International Nuclear Information System (INIS)

    Lombardi, C.

    1995-01-01

    The critical heat flux (CHF) in subcooled water at high mass fluxes is not yet satisfactorily correlated. To this end, a formal approach is followed here, based on an extension of the parameters and the correlation used for dryout prediction in medium-to-high-quality mixtures. The obtained correlation, in spite of its simplicity and its explicit form, yields satisfactory predictions, also when applied to more conventional CHF data at low-to-medium mass fluxes and high pressures. Further improvements are possible if a more complete data bank becomes available. The main and general open item is the definition of a criterion, depending only on independent parameters such as mass flux, pressure, inlet subcooling and geometry, to predict whether the heat transfer crisis will occur as a DNB or a dryout phenomenon.

  5. Development of a digital reactivity meter for criticality prediction and control rod worth evaluation in pressurized water reactors

    Energy Technology Data Exchange (ETDEWEB)

    Kuramoto, Renato Y.R.; Miranda, Anselmo F.; Valladares, Gastao Lommez; Prado, Adelk C. [Eletrobras Termonuclear S.A. - ELETRONUCLEAR, Angra dos Reis, RJ (Brazil). Central Nuclear Almirante Alvaro Alberto], e-mail: kuramot@eletronuclear.gov.br

    2009-07-01

    In this work, we have proposed the development of a digital reactivity meter in order to monitor subcriticality continuously during the criticality approach in a PWR. A subcritical reactivity meter can provide an easy prediction of the estimated critical point prior to reactor criticality, without complicated hand calculation. Moreover, in order to shorten the Physics Tests interval for economic reasons, a subcritical reactivity meter can evaluate the control rod worth from direct subcriticality measurement. In other words, the count rate of the Source Range (SR) detector recorded during the criticality approach can be used for subcriticality evaluation or control rod worth evaluation. Basically, a digital reactivity meter is based on the inverse solution of the kinetic equations of a reactor with an external neutron source in the one-point reactor model. There are some difficulties in the direct application of a digital reactivity meter to the subcriticality measurement. When the Inverse Kinetic method is applied to a sufficiently high power level or to a core without an external neutron source, the neutron source term may be neglected. When applied to a lower power level or in the subcritical domain, however, the source effects must be taken into account. Furthermore, some processing is needed to use the count rate of the Source Range (SR) detector as the input signal to the digital reactivity meter. To overcome these difficulties, we have proposed a digital reactivity meter combined with a methodology based on the modified Neutron Source Multiplication (NSM) method with correction factors for subcriticality measurements in PWRs. (author)

  6. Development of a digital reactivity meter for criticality prediction and control rod worth evaluation in pressurized water reactors

    International Nuclear Information System (INIS)

    Kuramoto, Renato Y.R.; Miranda, Anselmo F.; Valladares, Gastao Lommez; Prado, Adelk C.

    2009-01-01

    In this work, we have proposed the development of a digital reactivity meter in order to monitor subcriticality continuously during the criticality approach in a PWR. A subcritical reactivity meter can provide an easy prediction of the estimated critical point prior to reactor criticality, without complicated hand calculation. Moreover, in order to shorten the Physics Tests interval for economic reasons, a subcritical reactivity meter can evaluate the control rod worth from direct subcriticality measurement. In other words, the count rate of the Source Range (SR) detector recorded during the criticality approach can be used for subcriticality evaluation or control rod worth evaluation. Basically, a digital reactivity meter is based on the inverse solution of the kinetic equations of a reactor with an external neutron source in the one-point reactor model. There are some difficulties in the direct application of a digital reactivity meter to the subcriticality measurement. When the Inverse Kinetic method is applied to a sufficiently high power level or to a core without an external neutron source, the neutron source term may be neglected. When applied to a lower power level or in the subcritical domain, however, the source effects must be taken into account. Furthermore, some processing is needed to use the count rate of the Source Range (SR) detector as the input signal to the digital reactivity meter. To overcome these difficulties, we have proposed a digital reactivity meter combined with a methodology based on the modified Neutron Source Multiplication (NSM) method with correction factors for subcriticality measurements in PWRs. (author)
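
    The core of any digital reactivity meter is inverse point kinetics. The sketch below uses a single delayed-neutron group and omits the external source term, a deliberate simplification of the source-corrected method the paper develops; the kinetic parameters are typical assumed values, not plant data.

```python
import numpy as np

def inverse_kinetics(t, n, beta=0.0065, lam=0.08, Lam=2e-5):
    """One-delayed-group inverse point kinetics (no source term, a
    simplification of the paper's source-corrected approach):
        rho(t) = beta + Lam*(dn/dt)/n - Lam*lam*C/n
        dC/dt  = (beta/Lam)*n - lam*C
    C starts at its critical equilibrium beta*n0/(Lam*lam)."""
    rho = np.zeros(len(t))
    C = beta * n[0] / (Lam * lam)
    dndt = np.gradient(n, t)
    for k in range(len(t)):
        rho[k] = beta + Lam * dndt[k] / n[k] - Lam * lam * C / n[k]
        if k + 1 < len(t):
            dt = t[k + 1] - t[k]
            C += dt * ((beta / Lam) * n[k] - lam * C)
    return rho

# Sanity check: a steady (critical) flux trace should read ~zero reactivity.
t = np.linspace(0.0, 50.0, 5001)
n = np.ones_like(t)
rho = inverse_kinetics(t, n)
```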

  7. Fingerprint verification prediction model in hand dermatitis.

    Science.gov (United States)

    Lee, Chew K; Chang, Choong C; Johor, Asmah; Othman, Puwira; Baba, Roshidah

    2015-07-01

    Hand dermatitis-associated fingerprint change is a significant problem and affects fingerprint verification processes. This study was done to develop a clinically useful prediction model for fingerprint verification in patients with hand dermatitis. A case-control study was conducted involving 100 patients with hand dermatitis. All patients verified their thumbprints against their identity cards. Registered fingerprints were randomized into a model derivation group and a model validation group. The predictive model was derived using multiple logistic regression. Validation was done using the goodness-of-fit test. The fingerprint verification prediction model consists of a major criterion (fingerprint dystrophy area of ≥ 25%) and two minor criteria (long horizontal lines and long vertical lines). The presence of the major criterion predicts that verification will almost always fail, while the presence of both minor criteria or of one minor criterion predicts a high or low risk of fingerprint verification failure, respectively. When none of the criteria are met, the fingerprint almost always passes verification. The area under the receiver operating characteristic curve was 0.937, and the goodness-of-fit test showed agreement between the observed and expected numbers (P = 0.26). The derived fingerprint verification failure prediction model is validated and highly discriminatory in predicting the risk of fingerprint verification failure in patients with hand dermatitis. © 2014 The International Society of Dermatology.
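
    The decision rule described in the abstract can be written directly as a small classifier. The threshold and the four risk categories follow the abstract; the function name and the return strings are illustrative.

```python
def fingerprint_failure_risk(dystrophy_area_pct, long_horizontal, long_vertical):
    """Classify fingerprint verification failure risk per the derived model."""
    # major criterion: fingerprint dystrophy area of >= 25%
    if dystrophy_area_pct >= 25:
        return "almost always fails"
    # minor criteria: long horizontal lines, long vertical lines
    minors = int(long_horizontal) + int(long_vertical)
    if minors == 2:
        return "high risk of failure"
    if minors == 1:
        return "low risk of failure"
    return "almost always passes"
```
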

  8. Comparison of the Predictive Performance and Interpretability of Random Forest and Linear Models on Benchmark Data Sets.

    Science.gov (United States)

    Marchese Robinson, Richard L; Palczewska, Anna; Palczewski, Jan; Kidley, Nathan

    2017-08-28

    The ability to interpret the predictions made by quantitative structure-activity relationships (QSARs) offers a number of advantages. While QSARs built using nonlinear modeling approaches, such as the popular Random Forest algorithm, might sometimes be more predictive than those built using linear modeling approaches, their predictions have been perceived as difficult to interpret. However, a growing number of approaches have been proposed for interpreting nonlinear QSAR models in general and Random Forest in particular. In the current work, we compare the performance of Random Forest to those of two widely used linear modeling approaches: linear Support Vector Machines (SVMs) (or Support Vector Regression (SVR)) and partial least-squares (PLS). We compare their performance in terms of their predictivity as well as the chemical interpretability of the predictions using novel scoring schemes for assessing heat map images of substructural contributions. We critically assess different approaches for interpreting Random Forest models as well as for obtaining predictions from the forest. We assess the models on a large number of widely employed public-domain benchmark data sets corresponding to regression and binary classification problems of relevance to hit identification and toxicology. We conclude that Random Forest typically yields comparable or possibly better predictive performance than the linear modeling approaches and that its predictions may also be interpreted in a chemically and biologically meaningful way. In contrast to earlier work looking at interpretation of nonlinear QSAR models, we directly compare two methodologically distinct approaches for interpreting Random Forest models. The approaches for interpreting Random Forest assessed in our article were implemented using open-source programs that we have made available to the community. These programs are the rfFC package ( https://r-forge.r-project.org/R/?group_id=1725 ) for the R statistical

  9. Theoretical Derivation of Simplified Evaluation Models for the First Peak of a Criticality Accident in Nuclear Fuel Solution

    International Nuclear Information System (INIS)

    Nomura, Yasushi

    2000-01-01

    In a reprocessing facility where nuclear fuel solutions are processed, one could observe a series of power peaks, with the highest peak right after a criticality accident. The criticality alarm system (CAS) is designed to detect the first power peak and warn workers near the reacting material by sounding alarms immediately. Consequently, exposure of the workers would be minimized by an immediate and effective evacuation. Therefore, in the design and installation of a CAS, it is necessary to estimate the magnitude of the first power peak and to set up the threshold point at which the CAS initiates the alarm. Furthermore, it is necessary to estimate the level of potential exposure of workers in the case of accidents so as to decide on the appropriateness of installing a CAS for a given compartment. A simplified evaluation model to estimate the minimum scale of the first power peak during a criticality accident is derived by theoretical considerations alone, for use in the design of a CAS to set up the threshold point triggering the alarm signal. Another simplified evaluation model is derived in the same way to estimate the maximum scale of the first power peak, for use in judging the appropriateness of installing a CAS. Both models are shown to have adequate margin in predicting the minimum and maximum scale of criticality accidents by comparing their results with French CRiticality occurring ACcidentally (CRAC) experimental data

  10. Finding Furfural Hydrogenation Catalysts via Predictive Modelling.

    Science.gov (United States)

    Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi

    2010-09-10

    We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes was synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (k(H):k(D)=1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R(2)=0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model's predictions, demonstrating the validity and value of predictive modelling in catalyst optimization.

  11. Model Predictive Control for Smart Energy Systems

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus

    pumps, heat tanks, electrical vehicle battery charging/discharging, wind farms, power plants). 2.Embed forecasting methodologies for the weather (e.g. temperature, solar radiation), the electricity consumption, and the electricity price in a predictive control system. 3.Develop optimization algorithms....... Chapter 3 introduces Model Predictive Control (MPC) including state estimation, filtering and prediction for linear models. Chapter 4 simulates the models from Chapter 2 with the certainty equivalent MPC from Chapter 3. An economic MPC minimizes the costs of consumption based on real electricity prices...... that determined the flexibility of the units. A predictive control system easily handles constraints, e.g. limitations in power consumption, and predicts the future behavior of a unit by integrating predictions of electricity prices, consumption, and weather variables. The simulations demonstrate the expected...

  12. PERSONALITY PREDISPOSITIONS IN CHINESE ADOLESCENTS: THE RELATION BETWEEN SELF-CRITICISM, DEPENDENCY, AND PROSPECTIVE INTERNALIZING SYMPTOMS

    Science.gov (United States)

    Cohen, Joseph R.; Young, Jami F.; Hankin, Benjamin L.; Yao, Shuqiao; Zhu, Xiong Zhao; Abela, John R.Z.

    2015-01-01

    The present study examined the prospective relation between two personality predispositions, self-criticism and dependency, and internalizing symptoms. Specifically, it was examined whether self-criticism and dependency predicted symptoms of depression and social anxiety, and if a moderation (e.g. diathesis-stress) or mediation model best explained the relation between the personality predispositions and emotional distress in Chinese adolescents. Participants included 1,150 adolescents (597 females and 553 males) from mainland China. Participants completed self-report measures of self-criticism, dependency, and neuroticism at baseline, and self-report measures of negative events, depressive symptoms, and social anxiety symptoms once a month for six months. Findings showed that self-criticism predicted depressive symptoms, while dependency predicted social anxiety symptoms. In addition, support was found for a mediation model, as opposed to a moderation model, with achievement stressors mediating the relation between self-criticism and depressive symptoms. Overall, these findings highlight new developmental pathways for the development of depression and social anxiety symptoms in mainland Chinese adolescents. Implications for cross-cultural developmental psychopathology research are discussed. PMID:25798026

  13. Comparing aboveground biomass predictions for an uneven-aged pine-dominated stand using local, regional, and national models

    Science.gov (United States)

    D.C. Bragg; K.M. McElligott

    2013-01-01

    Sequestration by Arkansas forests removes carbon dioxide from the atmosphere, storing this carbon in biomass that fills a number of critical ecological and socioeconomic functions. We need a better understanding of the contribution of forests to the carbon cycle, including the accurate quantification of tree biomass. Models have long been developed to predict...

  14. Critical groups vs. representative person: dose calculations due to predicted releases from USEXA

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, N.L.D., E-mail: nelson.luiz@ctmsp.mar.mil.br [Centro Tecnologico da Marinha (CTM/SP), Sao Paulo, SP (Brazil); Rochedo, E.R.R., E-mail: elainerochedo@gmail.com [Instituto de Radiprotecao e Dosimetria (lRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Mazzilli, B.P., E-mail: mazzilli@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2013-07-01

    The critical group of the Centro Experimental Aramar (CEA) site was previously defined based on the effluent releases to the environment resulting from the facilities already operational at CEA. In this work, effective doses are calculated for members of the critical group considering the predicted potential uranium releases from the Uranium Hexafluoride Production Plant (USEXA). Basically, this work studies the behavior of the resulting doses in relation to the type of habit data used in the analysis, and two distinct situations are considered: (a) the utilization of average values obtained from official institutions (IBGE, IEA-SP, CNEN, IAEA) and from the literature; and (b) the utilization of the 95th percentile of the values derived from distributions fit to the obtained habit data. The first option corresponds to the way data were used in the definition of the critical group of CEA in former assessments, while the second corresponds to the use of data in deterministic assessments, as recommended by the ICRP to estimate doses to the so-called 'representative person'. (author)

  15. Critical groups vs. representative person: dose calculations due to predicted releases from USEXA

    International Nuclear Information System (INIS)

    Ferreira, N.L.D.; Rochedo, E.R.R.; Mazzilli, B.P.

    2013-01-01

    The critical group of the Centro Experimental Aramar (CEA) site was previously defined based on the effluent releases to the environment resulting from the facilities already operational at CEA. In this work, effective doses are calculated for members of the critical group considering the predicted potential uranium releases from the Uranium Hexafluoride Production Plant (USEXA). Basically, this work studies the behavior of the resulting doses in relation to the type of habit data used in the analysis, and two distinct situations are considered: (a) the utilization of average values obtained from official institutions (IBGE, IEA-SP, CNEN, IAEA) and from the literature; and (b) the utilization of the 95th percentile of the values derived from distributions fit to the obtained habit data. The first option corresponds to the way data were used in the definition of the critical group of CEA in former assessments, while the second corresponds to the use of data in deterministic assessments, as recommended by the ICRP to estimate doses to the so-called 'representative person'. (author)
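
    One simple convention for extracting a 95th-percentile habit value of the kind used for the 'representative person' is the nearest-rank percentile. This sketch is a generic illustration only; the paper derives its percentiles from fitted distributions, not from raw samples.

```python
import math

def nearest_rank_percentile(values, pct):
    """Nearest-rank percentile: smallest value with at least pct% of the
    sample at or below it."""
    s = sorted(values)
    k = max(0, math.ceil(pct / 100.0 * len(s)) - 1)
    return s[k]
```
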

  16. Prediction skill of rainstorm events over India in the TIGGE weather prediction models

    Science.gov (United States)

    Karuna Sagar, S.; Rajeevan, M.; Vijaya Bhaskara Rao, S.; Mitra, A. K.

    2017-12-01

    Extreme rainfall events pose a serious threat of leading to severe floods in many countries worldwide. Therefore, advance prediction of their occurrence and spatial distribution is essential. In this paper, an analysis has been made to assess the skill of numerical weather prediction models in predicting rainstorms over India. Using a gridded daily rainfall data set and objective criteria, 15 rainstorms were identified during the monsoon season (June to September). The analysis was made using three TIGGE (The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble) models: the European Centre for Medium-Range Weather Forecasts (ECMWF), the National Centers for Environmental Prediction (NCEP) and the UK Met Office (UKMO). Verification of the TIGGE models for 43 observed rainstorm days from the 15 rainstorm events has been made for the period 2007-2015. The comparison reveals that rainstorm events are predictable up to 5 days in advance, albeit with a bias in spatial distribution and intensity. The statistical parameters mean error (ME, or bias), root mean square error (RMSE) and correlation coefficient (CC) have been computed over the rainstorm region using the multi-model ensemble (MME) mean. The study reveals that the spread is large in ECMWF and UKMO, followed by the NCEP model. Though the ensemble spread is quite small in NCEP, the ensemble member averages are not well predicted. The rank histograms suggest that the forecasts are under-predicting. The modified Contiguous Rain Area (CRA) technique was used to verify the spatial as well as the quantitative skill of the TIGGE models. Overall, the contribution of the displacement and pattern errors to the total RMSE is found to be larger in magnitude. The volume error increases from the 24 h forecast to the 48 h forecast in all three models.
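
    The three verification statistics named above (ME/bias, RMSE and CC) have standard definitions and can be computed directly from paired forecast and observed values; this generic sketch is not the authors' verification code.

```python
def verification_stats(forecast, observed):
    """Return (mean error, root mean square error, Pearson correlation)."""
    n = len(forecast)
    errors = [f - o for f, o in zip(forecast, observed)]
    me = sum(errors) / n                              # mean error (bias)
    rmse = (sum(e * e for e in errors) / n) ** 0.5    # root mean square error
    mf, mo = sum(forecast) / n, sum(observed) / n
    cov = sum((f - mf) * (o - mo) for f, o in zip(forecast, observed))
    var_f = sum((f - mf) ** 2 for f in forecast)
    var_o = sum((o - mo) ** 2 for o in observed)
    cc = cov / (var_f * var_o) ** 0.5                 # correlation coefficient
    return me, rmse, cc
```
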

  17. A prediction model of short-term ionospheric foF2 based on AdaBoost

    Science.gov (United States)

    Zhao, Xiukuan; Ning, Baiqi; Liu, Libo; Song, Gangbing

    2014-02-01

    In this paper, the AdaBoost-BP algorithm is used to construct a new model to predict the critical frequency of the ionospheric F2-layer (foF2) one hour ahead. Different indices were used to characterize ionospheric diurnal and seasonal variations and their dependence on solar and geomagnetic activity. These indices, together with the currently observed foF2 value, were input into the prediction model, and the foF2 value one hour ahead was output. We analyzed twenty-two years of foF2 data from nine ionosonde stations in the East-Asian sector in this work. The first eleven years of data were used as a training dataset and the second eleven years of data were used as a testing dataset. The results show that the performance of AdaBoost-BP is better than those of the BP Neural Network (BPNN), Support Vector Regression (SVR) and the IRI model. For example, the AdaBoost-BP prediction absolute error of foF2 at Irkutsk station (a middle-latitude station) is 0.32 MHz, which is better than 0.34 MHz from BPNN and 0.35 MHz from SVR, and also significantly outperforms the IRI model, whose absolute error is 0.64 MHz. Meanwhile, the AdaBoost-BP prediction absolute error at Taipei station (a low-latitude station) is 0.78 MHz, which is better than 0.81 MHz from BPNN, 0.81 MHz from SVR and 1.37 MHz from the IRI model. Finally, the variation characteristics of the AdaBoost-BP prediction error with season, solar activity and latitude are also discussed in the paper.

  18. Predicting climate-induced range shifts: model differences and model reliability.

    Science.gov (United States)

    Joshua J. Lawler; Denis White; Ronald P. Neilson; Andrew R. Blaustein

    2006-01-01

    Predicted changes in the global climate are likely to cause large shifts in the geographic ranges of many plant and animal species. To date, predictions of future range shifts have relied on a variety of modeling approaches with different levels of model accuracy. Using a common data set, we investigated the potential implications of alternative modeling approaches for...

  19. Prediction of sodium critical heat flux (CHF) in annular channel using grey systems theory

    International Nuclear Information System (INIS)

    Zhou Tao; Su Guanghui; Zhang Weizhong; Qiu Suizheng; Jia Dounan

    2001-01-01

    Using grey systems theory and experimental data obtained from a sodium boiling test loop in China, a grey relational analysis of some parameters influencing sodium CHF is carried out, and the CHF values are predicted by a GM(1, 1) model. A GM(1, h) model is established for CHF prediction, and the predicted CHF values are in good agreement with the experimental data
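
    The GM(1, 1) grey model mentioned above follows a standard construction: accumulate the series (1-AGO), fit the whitened first-order equation by least squares, and difference the exponential solution back. The sketch below is that generic textbook procedure, not the authors' GM(1, h) extension or their CHF data.

```python
import math

def gm11_forecast(x0, horizon=1):
    """Fit a GM(1,1) grey model to series x0 and append `horizon` forecasts.

    Returns the fitted-plus-forecast series of length len(x0) + horizon.
    """
    n = len(x0)
    # 1-AGO: accumulated generating operation
    x1, s = [], 0.0
    for v in x0:
        s += v
        x1.append(s)
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]  # background values
    y = x0[1:]
    m = n - 1
    sz, sy = sum(z), sum(y)
    szz = sum(zi * zi for zi in z)
    szy = sum(zi * yi for zi, yi in zip(z, y))
    # least squares for x0[k] + a*z[k] = b, i.e. y = -a*z + b
    slope = (m * szy - sz * sy) / (m * szz - sz * sz)
    a, b = -slope, (sy - slope * sz) / m
    # exponential response of the whitened equation, then inverse AGO
    x1hat = [(x0[0] - b / a) * math.exp(-a * k) + b / a for k in range(n + horizon)]
    return [x1hat[0]] + [x1hat[k] - x1hat[k - 1] for k in range(1, n + horizon)]
```

    Note that the model assumes (near-)exponential behaviour and a nonzero development coefficient; a constant series makes `a` vanish and the formula degenerate.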

  20. Predictive analytics technology review: Similarity-based modeling and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Herzog, James; Doan, Don; Gandhi, Devang; Nieman, Bill

    2010-09-15

    Over 11 years ago, SmartSignal introduced Predictive Analytics for eliminating equipment failures, using its patented SBM technology. SmartSignal continues to lead and dominate the market and, in 2010, went one step further and introduced Predictive Diagnostics. Now, SmartSignal is combining Predictive Diagnostics with RCM methodology and industry expertise. FMEA logic reengineers maintenance work management, eliminates unneeded inspections, and focuses efforts on the real issues. This integrated solution significantly lowers maintenance costs, protects against critical asset failures, improves commercial availability, and reduces work orders by 20-40%. Learn how.

  1. Predictive Modeling of a Paradigm Mechanical Cooling Tower Model: II. Optimal Best-Estimate Results with Reduced Predicted Uncertainties

    Directory of Open Access Journals (Sweden)

    Ruixian Fang

    2016-09-01

    This work uses the adjoint sensitivity model of the counter-flow cooling tower derived in the accompanying PART I to obtain the expressions and relative numerical rankings of the sensitivities, to all model parameters, of the following model responses: (i) outlet air temperature; (ii) outlet water temperature; (iii) outlet water mass flow rate; and (iv) air outlet relative humidity. These sensitivities are subsequently used within the “predictive modeling for coupled multi-physics systems” (PM_CMPS) methodology to obtain explicit formulas for the predicted optimal nominal values for the model responses and parameters, along with reduced predicted standard deviations for the predicted model parameters and responses. These explicit formulas embody the assimilation of experimental data and the “calibration” of the model’s parameters. The results presented in this work demonstrate that the PM_CMPS methodology reduces the predicted standard deviations to values that are smaller than either the computed or the experimentally measured ones, even for responses (e.g., the outlet water flow rate) for which no measurements are available. These improvements stem from the global characteristics of the PM_CMPS methodology, which combines all of the available information simultaneously in phase-space, as opposed to combining it sequentially, as in current data assimilation procedures.
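
    The key claim, that assimilating a measurement yields a predicted standard deviation smaller than either the computed or the measured one, can be illustrated in the simplest scalar case by inverse-variance weighting. This is only a one-dimensional caricature of data assimilation; the PM_CMPS formulas in the paper generalize the idea to coupled multi-physics systems combined simultaneously in phase-space.

```python
def assimilate(pred, var_pred, meas, var_meas):
    """Inverse-variance combination of a model prediction and a measurement.

    The posterior variance is strictly smaller than both input variances.
    """
    w = var_meas / (var_pred + var_meas)       # weight on the prediction
    best = w * pred + (1.0 - w) * meas         # best-estimate value
    var = var_pred * var_meas / (var_pred + var_meas)  # reduced variance
    return best, var
```
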

  2. Predicting future glacial lakes in Austria using different modelling approaches

    Science.gov (United States)

    Otto, Jan-Christoph; Helfricht, Kay; Prasicek, Günther; Buckel, Johannes; Keuschnig, Markus

    2017-04-01

    Glacier retreat is one of the most apparent consequences of temperature rise in the 20th and 21st centuries in the European Alps. In Austria, more than 240 new lakes have formed in glacier forefields since the Little Ice Age. A similar signal is reported from many mountain areas worldwide. Glacial lakes can have important environmental and socio-economic impacts on high mountain systems, including water resource management, sediment delivery, natural hazards, energy production and tourism. Their development significantly modifies the landscape configuration and visual appearance of high mountain areas. Knowledge of the location, number and extent of these future lakes can be used to assess potential impacts on high mountain geo-ecosystems and upland-lowland interactions. Information on new lakes is critical to appraise emerging threats and potentials for society. The recent development of regional ice thickness models and their combination with high-resolution glacier surface data allows predicting the topography below current glaciers by subtracting ice thickness from the glacier surface. Analyzing these modelled glacier bed surfaces reveals overdeepenings that represent potential locations for future lakes. In order to predict the location of future glacial lakes below recent glaciers in the Austrian Alps, we apply different ice thickness models using high-resolution terrain data and glacier outlines. The results are compared and validated with ice thickness data from geophysical surveys. Additionally, we run the models on three different glacier extents provided by the Austrian Glacier Inventories from 1969, 1998 and 2006. Results of this historical glacier extent modelling are compared to existing glacier lakes and discussed with a focus on geomorphological impacts on lake evolution. We discuss model performance and observed differences in the results in order to assess the approach for a realistic prediction of future lake locations. The presentation delivers

  3. Model predictive control classical, robust and stochastic

    CERN Document Server

    Kouvaritakis, Basil

    2016-01-01

    For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered and the state of the art in computationally tractable methods based on uncertainty tubes presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...

  4. Critical discharge of initially subcooled water through slits

    International Nuclear Information System (INIS)

    Amos, C.N.; Schrock, V.E.

    1983-09-01

    This report describes an experimental investigation into the critical flow of initially subcooled water through rectangular slits. The study of such flows is relevant to the prediction of leak flow rates from cracks in piping or pressure vessels containing fluids with sufficient enthalpy that vaporization will occur if they are allowed to expand to the ambient pressure. Two new analytical models, which allow for the generation of a metastable liquid phase, are developed. Experimental results are compared with the predictions of both these new models and with a Fanno Homogeneous Equilibrium Model

  5. Subchannel analysis of a critical power test, using simulated BWR 8x8 fuel assembly

    International Nuclear Information System (INIS)

    Mitsutake, T.; Terasaka, H.; Yoshimura, K.; Oishi, M.; Inoue, A.; Akiyama, M.

    1990-01-01

    Critical power predictions have been compared with the critical power test data obtained in simulated BWR 8x8 fuel rod assemblies. Two analytical methods are used for critical power prediction in rod assemblies: subchannel analysis using the COBRA/BWR subchannel computer code with empirical critical heat flux (CHF) correlations, and liquid film dryout estimation using the CRIPP-3F 'multi-fluid' computer code. Improvements in both analytical methods were made for spacer effect modeling, though these were specific to the current BWR rod assembly type. In general, reasonable agreement was obtained in comparisons between the predictions and the test data. (orig.)

  6. Model predictive Controller for Mobile Robot

    OpenAIRE

    Alireza Rezaee

    2017-01-01

    This paper proposes a Model Predictive Controller (MPC) for control of a P2AT mobile robot. MPC refers to a group of controllers that employ an explicitly identified model of the process to predict its future behavior over an extended prediction horizon. The design of an MPC is formulated as an optimal control problem. This problem is then treated as a linear quadratic regulator (LQR) problem and is solved by making use of the Riccati equation. To show the effectiveness of the proposed method this controller is...
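
    The LQR-via-Riccati step can be sketched in the scalar case: iterate the discrete algebraic Riccati equation to a fixed point and read off the optimal feedback gain. This is a generic textbook construction under an assumed scalar plant, not the paper's P2AT controller.

```python
def dlqr_scalar(a, b, q, r, iters=500):
    """Scalar discrete-time LQR for x[k+1] = a*x[k] + b*u[k] with stage cost
    q*x^2 + r*u^2. Solves the Riccati equation by fixed-point iteration and
    returns the optimal gain k in u = -k*x."""
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)
```

    For a = b = q = r = 1 the Riccati fixed point is the golden ratio, giving the well-known gain k = 1/φ ≈ 0.618 and a stable closed loop a − b·k ≈ 0.382.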

  7. Deformation behaviors of three-dimensional graphene honeycombs under out-of-plane compression: Atomistic simulations and predictive modeling

    Science.gov (United States)

    Meng, Fanchao; Chen, Cheng; Hu, Dianyin; Song, Jun

    2017-12-01

    Combining atomistic simulations and continuum modeling, a comprehensive study of the out-of-plane compressive deformation behaviors of equilateral three-dimensional (3D) graphene honeycombs was performed. It was demonstrated that under out-of-plane compression, the honeycomb exhibits two critical deformation events, i.e., elastic mechanical instability (including elastic buckling and structural transformation) and inelastic structural collapse. The above events were shown to be strongly dependent on the honeycomb cell size and affected by the local atomic bonding at the cell junction. By treating the 3D graphene honeycomb as a continuum cellular solid, and accounting for the structural heterogeneity and constraint at the junction, a set of analytical models were developed to accurately predict the threshold stresses corresponding to the onset of those deformation events. The present study elucidates key structure-property relationships of 3D graphene honeycombs under out-of-plane compression, and provides a comprehensive theoretical framework to predictively analyze their deformation responses, and more generally, offers critical new knowledge for the rational bottom-up design of 3D networks of two-dimensional nanomaterials.

  8. Deep Predictive Models in Interactive Music

    OpenAIRE

    Martin, Charles P.; Ellefsen, Kai Olav; Torresen, Jim

    2018-01-01

    Automatic music generation is a compelling task where much recent progress has been made with deep learning models. In this paper, we ask how these models can be integrated into interactive music systems; how can they encourage or enhance the music making of human users? Musical performance requires prediction to operate instruments, and perform in groups. We argue that predictive models could help interactive systems to understand their temporal context, and ensemble behaviour. Deep learning...

  9. Predictive modelling of wetland occurrence in KwaZulu-Natal, South Africa

    Directory of Open Access Journals (Sweden)

    Jens Hiestermann

    2015-07-01

    The global trend of transformation and loss of wetlands through conversion to other land uses has deleterious effects on surrounding ecosystems, and there is a resultant increasing need for the conservation and preservation of wetlands. Improved mapping of wetland locations is critical to achieving objective regional conservation goals, which depend on accurate spatial knowledge. Current approaches to mapping wetlands through the classification of satellite imagery typically under-represent actual wetland area; the importance of ancillary data in improving the accuracy of wetland mapping is therefore recognised. In this study, we compared two approaches, Bayesian networks and logistic regression, to predict the likelihood of wetland occurrence in KwaZulu-Natal, South Africa. Both approaches were developed using the same data set of environmental surrogate predictors. We compared and verified model outputs using an independent test data set, with analyses including receiver operating characteristic curves and the area under the curve (AUC). Both models performed similarly (AUC > 0.84), indicating the suitability of a likelihood approach using ancillary data for wetland mapping. Results indicated that high wetland probability areas in the final model outputs correlated well with known wetland systems and wetland-rich areas in KwaZulu-Natal. We conclude that predictive models have the potential to improve the accuracy of wetland mapping in South Africa by serving as valuable ancillary data.
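
    AUC figures like those quoted above can be computed directly from the rank interpretation of AUC: the probability that a randomly chosen positive (wetland) site scores higher than a randomly chosen negative site. This O(n²) pairwise sketch is a generic illustration, not the authors' tooling.

```python
def roc_auc(labels, scores):
    """AUC as the fraction of positive/negative pairs ranked correctly
    (ties count half), equivalent to the Mann-Whitney U statistic."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = 0.0
    for p in pos:
        for q in neg:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```
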

  10. Risk prediction model: Statistical and artificial neural network approach

    Science.gov (United States)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand the suitable approach, development and validation process of a risk prediction model. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was done. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, the artificial neural network approach to developing prediction models was more accurate than the statistical approach. However, currently only limited published literature has discussed which approach is more accurate for risk prediction model development.

  11. Double transitions, non-Ising criticality and the critical absorbing phase in an interacting monomer–dimer model on a square lattice

    International Nuclear Information System (INIS)

    Nam, Keekwon; Kim, Bongsoo; Park, Sangwoong; Lee, Sung Jong

    2011-01-01

    We present a numerical study of an interacting monomer–dimer model with nearest-neighbor repulsion on a square lattice, which possesses two symmetric absorbing states. The model is observed to exhibit two nearby continuous transitions: the Z2 symmetry-breaking order–disorder transition and the absorbing transition with directed percolation criticality. We find that the symmetry-breaking transition shows a non-Ising critical behavior, and that the absorbing phase becomes critical, in the sense that the critical decay of the dimer density observed at the absorbing transition persists even within the absorbing phase. Our findings call for further studies on microscopic models and the corresponding continuum description belonging to the generalized voter universality class. (letter)

  12. The critical boundary RSOS M(3,5) model

    Science.gov (United States)

    El Deeb, O.

    2017-12-01

    We consider the critical nonunitary minimal model M(3, 5) with integrable boundaries and analyze the patterns of zeros of the eigenvalues of the transfer matrix, and then determine the spectrum of the critical theory using the thermodynamic Bethe ansatz (TBA) equations. Solving the TBA functional equation satisfied by the transfer matrices of the associated A4 restricted solid-on-solid Forrester-Baxter lattice model in regime III in the continuum scaling limit, we derive the integral TBA equations for all excitations in the (r, s) = (1, 1) sector and then determine their corresponding energies. We classify the excitations in terms of (m, n) systems.

  13. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.

  14. Predictive models of moth development

    Science.gov (United States)

    Degree-day models link ambient temperature to insect life-stages, making such models valuable tools in integrated pest management. These models increase management efficacy by predicting pest phenology. In Wisconsin, the top insect pest of cranberry production is the cranberry fruitworm, Acrobasis v...
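    Degree-day accumulation is simple to sketch: each day contributes the amount by which the mean temperature exceeds a base temperature, and a life-stage event is predicted when the running total crosses a stage-specific threshold. The base temperature and threshold below are illustrative placeholders, not cranberry fruitworm parameters:

    ```python
    def degree_day(t_min, t_max, t_base):
        """Daily growing degree-day contribution (simple averaging method)."""
        return max(0.0, (t_min + t_max) / 2.0 - t_base)

    def predict_event_day(daily_temps, t_base, threshold):
        """Return (day, cumulative GDD) for the first day the threshold is met,
        or (None, total) if it is never reached."""
        total = 0.0
        for day, (t_min, t_max) in enumerate(daily_temps, start=1):
            total += degree_day(t_min, t_max, t_base)
            if total >= threshold:
                return day, total
        return None, total

    # 60 days cycling through three daily min/max patterns (degrees C):
    temps = [(8.0, 18.0), (10.0, 22.0), (12.0, 24.0)] * 20
    print(predict_event_day(temps, t_base=10.0, threshold=120.0))
    ```

    Days with a mean below the base contribute zero, which is what makes the accumulated total a usable phenological clock across cool spells.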

  15. Model Prediction Control For Water Management Using Adaptive Prediction Accuracy

    NARCIS (Netherlands)

    Tian, X.; Negenborn, R.R.; Van Overloop, P.J.A.T.M.; Mostert, E.

    2014-01-01

    In the field of operational water management, Model Predictive Control (MPC) has gained popularity owing to its versatility and flexibility. The MPC controller, which takes predictions, time delay and uncertainties into account, can be designed for multi-objective management problems and for
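    The receding-horizon mechanism behind MPC can be illustrated with a toy single-reservoir balance. This is a hypothetical sketch, not the controller of the record: the level follows x[k+1] = x[k] + u[k] - d[k], and at each step a finite-horizon least-squares problem trades setpoint tracking against control effort, applies only the first move, and re-plans with the next forecast:

    ```python
    import numpy as np

    # Toy receding-horizon controller for one storage level:
    #   x[k+1] = x[k] + u[k] - d[k]   (level + controlled inflow - demand)
    # Each step solves min ||x - setpoint||^2 + lam*||u||^2 over the horizon.
    def mpc_step(x0, demand_forecast, setpoint, lam=0.1):
        N = len(demand_forecast)
        L = np.tril(np.ones((N, N)))            # cumulative-sum operator
        # Predicted levels: x = x0 + L u - L d, so the target for L u is:
        target = setpoint - x0 + L @ np.asarray(demand_forecast)
        A = np.vstack([L, np.sqrt(lam) * np.eye(N)])
        b = np.concatenate([target, np.zeros(N)])
        u, *_ = np.linalg.lstsq(A, b, rcond=None)
        return u[0]                              # apply only the first move

    x, setpoint = 2.0, 5.0
    demands = [0.3] * 20                         # assumed-known demand forecast
    for k in range(15):
        u = mpc_step(x, demands[k:k + 5], setpoint)
        x = x + u - demands[k]
    print(round(x, 2))
    ```

    Re-solving at every step is what lets the controller absorb forecast updates, time delays and disturbances, which is the flexibility the abstract credits MPC with.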

  16. An improved liquid film model to predict the CHF based on the influence of churn flow

    International Nuclear Information System (INIS)

    Wang, Ke; Bai, Bofeng; Ma, Weimin

    2014-01-01

    The critical heat flux (CHF) at boiling crisis is one of the most important parameters in the thermal management and safe operation of many engineering systems. Traditionally, the liquid film flow model for the "dryout" mechanism gives good predictions in heated annular two-phase flow. However, the general assumption of an initial entrained fraction at the onset of annular flow lacks a reasonable physical interpretation. Since the droplets carry substantial momentum and the churn flow region is short, the droplets formed in churn flow inevitably affect the downstream annular flow. To address this, we considered the effect of churn flow and extended the original liquid film flow model for vertical upward flow by starting the calculation from the onset of churn flow rather than the onset of annular flow. The results show satisfactory agreement with the experimental data, and the developed model provides a better understanding of the effect of flow pattern on CHF prediction. - Highlights: •The general assumption of an initial entrained fraction is unreasonable. •The droplets in churn flow have an inevitable effect on the downstream annular flow. •The original liquid film flow model for prediction of the CHF was extended. •The integration process was modified to start from the onset of churn flow
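    The dryout logic of a liquid film model can be caricatured with a one-dimensional mass balance: the film flow rate changes along the channel by droplet deposition minus entrainment minus evaporation, and CHF-type dryout is predicted where the film vanishes. This is a generic toy with made-up coefficients, not the authors' model; the point is only that the predicted dryout location depends on where the integration starts and on the heat flux:

    ```python
    # Toy film mass balance: per unit length,
    #   dW/dz = P*(D - E) - q'' * P / h_fg
    # with D, E deposition/entrainment mass fluxes [kg/(m^2 s)], P the heated
    # perimeter [m], q'' the wall heat flux [W/m^2], h_fg latent heat [J/kg].
    def dryout_location(w0, dep, ent, q_flux, perim, h_fg, dz=1e-3, z_max=5.0):
        """Euler-integrate the film flow rate from w0 [kg/s] at z = 0;
        return the z [m] where it first vanishes, or None if it survives."""
        w, z = w0, 0.0
        evap = q_flux * perim / h_fg        # evaporation per unit length
        while z < z_max:
            w += (perim * (dep - ent) - evap) * dz
            z += dz
            if w <= 0.0:
                return z                     # predicted dryout location
        return None

    print(dryout_location(0.02, 0.10, 0.15, 1.0e6, 0.03, 2.0e6))
    ```

    Starting the integration at the onset of churn flow, as the record proposes, amounts to changing w0 and the upstream entrainment history fed into this balance.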

  17. Predicting water main failures using Bayesian model averaging and survival modelling approach

    International Nuclear Information System (INIS)

    Kabir, Golam; Tesfamariam, Solomon; Sadiq, Rehan

    2015-01-01

    To develop an effective preventive or proactive repair and replacement action plan, water utilities often rely on water main failure prediction models. However, in predicting the failure of water mains, uncertainty is inherent regardless of the quality and quantity of data used in the model. To improve the understanding of water main failure, a Bayesian framework is developed for predicting the failure of water mains considering uncertainties. In this study, the Bayesian model averaging method (BMA) is presented to identify the influential pipe-dependent and time-dependent covariates considering model uncertainties, whereas a Bayesian Weibull Proportional Hazard Model (BWPHM) is applied to develop the survival curves and to predict the failure rates of water mains. To accredit the proposed framework, it is implemented to predict the failure of cast iron (CI) and ductile iron (DI) pipes of the water distribution network of the City of Calgary, Alberta, Canada. Results indicate that the predicted 95% uncertainty bounds of the proposed BWPHMs effectively capture the observed breaks for both CI and DI water mains. Moreover, the performance of the proposed BWPHMs is better than that of the Cox Proportional Hazard Model (Cox-PHM), owing to the use of a Weibull distribution for the baseline hazard function and the consideration of model uncertainties. - Highlights: • Prioritize rehabilitation and replacements (R/R) strategies of water mains. • Consider the uncertainties for the failure prediction. • Improve the prediction capability of the water mains failure models. • Identify the influential and appropriate covariates for different models. • Determine the effects of the covariates on failure
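    The Weibull proportional-hazards form used here is compact: a Weibull baseline hazard h0(t) = (k/lam)(t/lam)^(k-1) scaled by exp(beta . x) for the pipe covariates, giving the survival function S(t|x) = exp(-(t/lam)^k * exp(beta . x)). A sketch with illustrative parameters (not the Calgary network's fitted values; the covariates are hypothetical):

    ```python
    import math

    def survival(t, lam, k, beta, x):
        """Weibull proportional-hazards survival probability at age t [years].
        lam: scale, k: shape, beta/x: coefficients and pipe covariates."""
        lin = sum(b * xi for b, xi in zip(beta, x))
        return math.exp(-((t / lam) ** k) * math.exp(lin))

    # Hypothetical covariates: [cast-iron indicator, diameter / 100 mm]
    beta = [0.8, -0.3]
    ci_pipe = [1.0, 1.5]    # cast iron, 150 mm
    di_pipe = [0.0, 2.0]    # ductile iron, 200 mm
    for t in (20, 40, 60):
        print(t, round(survival(t, 80.0, 1.5, beta, ci_pipe), 3),
                 round(survival(t, 80.0, 1.5, beta, di_pipe), 3))
    ```

    With k > 1 the hazard increases with age (wear-out), which is the behaviour a constant-hazard exponential baseline cannot represent and one reason a Weibull baseline can outperform simpler choices.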

  18. Theoretical modeling of critical temperature increase in metamaterial superconductors

    Science.gov (United States)

    Smolyaninov, Igor; Smolyaninova, Vera

    Recent experiments have demonstrated that the metamaterial approach is capable of a drastic increase of the critical temperature Tc of epsilon-near-zero (ENZ) metamaterial superconductors. For example, a tripling of the critical temperature has been observed in Al-Al2O3 ENZ core-shell metamaterials. Here, we perform theoretical modelling of the Tc increase in metamaterial superconductors based on the Maxwell-Garnett approximation of their dielectric response function. Good agreement is demonstrated between theoretical modelling and experimental results in both aluminum- and tin-based metamaterials. Taking advantage of the demonstrated success of this model, the critical temperature of hypothetical niobium-, MgB2- and H2S-based metamaterial superconductors is evaluated. The MgB2-based metamaterial superconductors are projected to reach the liquid nitrogen temperature range. In the case of an H2S-based metamaterial, Tc appears to reach 250 K. This work was supported in part by NSF Grant DMR-1104676 and the School of Emerging Technologies at Towson University.
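    The Maxwell-Garnett mixing rule the abstract relies on is straightforward to evaluate: for spherical inclusions of permittivity eps_i at fill fraction f in a host eps_m, the ENZ condition corresponds to the real part of the effective permittivity passing through zero at some f. The permittivity values below are illustrative stand-ins, not those of Al-Al2O3 or tin:

    ```python
    # Maxwell-Garnett effective permittivity for spherical inclusions.
    def maxwell_garnett(eps_i, eps_m, f):
        num = eps_i + 2 * eps_m + 2 * f * (eps_i - eps_m)
        den = eps_i + 2 * eps_m - f * (eps_i - eps_m)
        return eps_m * num / den

    # Metal-like inclusion (negative permittivity, small loss) in a dielectric:
    eps_i, eps_m = -10.0 + 0.5j, 3.0
    # Scan the fill fraction for the epsilon-near-zero point.
    f_enz = min((f / 1000 for f in range(1, 1000)),
                key=lambda f: abs(maxwell_garnett(eps_i, eps_m, f).real))
    print(round(f_enz, 3), maxwell_garnett(eps_i, eps_m, f_enz))
    ```

    The mixture interpolates between the host (f = 0) and the inclusion (f = 1), and the sign change of Re(eps_eff) near the mixing resonance is what makes an ENZ composite achievable at a tunable fill fraction.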

  19. Assessment of correlations and models for the prediction of CHF in water subcooled flow boiling

    Science.gov (United States)

    Celata, G. P.; Cumo, M.; Mariani, A.

    1994-01-01

    The present paper provides an analysis of available correlations and models for the prediction of the Critical Heat Flux (CHF) in subcooled flow boiling in the range of interest for fusion reactor thermal-hydraulic conditions, i.e. high inlet liquid subcooling and velocity and small channel diameter and length. The aim of the study was to establish the limits of validity of present predictive tools (most of them were proposed with reference to light water reactor (LWR) thermal-hydraulic studies) under the above conditions. The reference dataset represents almost all available data (1865 data points), covering wide ranges of operating conditions in the frame of present interest (0.1 < p < 8.4 MPa; 0.3 < D < 25.4 mm; 0.1 < L < 0.61 m; 2 < G < 90.0 Mg/(m²·s); 90 < ΔT_sub,in < 230 K). Among the tens of predictive tools available in the literature, four correlations (Levy, Westinghouse, modified-Tong and Tong-75) and three models (Weisman and Ileslamlou, Lee and Mudawar, and Katto) were selected. The modified-Tong correlation and the Katto model seem to be reliable predictive tools for the calculation of the CHF in subcooled flow boiling.

  20. Toward Process-resolving Synthesis and Prediction of Arctic Climate Change Using the Regional Arctic System Model

    Science.gov (United States)

    Maslowski, W.

    2017-12-01

    The Regional Arctic System Model (RASM) has been developed to better understand the operation of the Arctic System at the process scale and to improve prediction of its change over a spectrum of time scales. RASM is a pan-Arctic, fully coupled ice-ocean-atmosphere-land model with a marine biogeochemistry extension to the ocean and sea ice models. The main goal of our research is to advance a system-level understanding of critical processes and feedbacks in the Arctic and their links with the Earth System. A secondary, equally important, objective is to identify model needs for new or additional observations to better understand such processes and to help constrain models. Finally, RASM has been used to produce sea ice forecasts for September 2016 and 2017, in contribution to the Sea Ice Outlook of the Sea Ice Prediction Network. Future RASM forecasts are likely to include increased resolution for model components and ecosystem predictions. Such research is in direct support of US environmental assessment and prediction needs, including those of the U.S. Navy, Department of Defense, and the recent IARPC Arctic Research Plan 2017-2021. In addition to an overview of RASM technical details, selected model results are presented from a hierarchy of climate models, together with available observations in the region, to better understand potential oceanic contributions to polar amplification. RASM simulations are analyzed to evaluate model skill in representing seasonal climatology as well as interannual and multi-decadal climate variability and predictions. Selected physical processes and the resulting feedbacks are discussed to emphasize the need for fully coupled climate model simulations, high model resolution and the sensitivity of simulated sea ice states to scale-dependent model parameterizations controlling ice dynamics, thermodynamics and coupling with the atmosphere and ocean.

  1. Critical wall shear stress for the EHEDG test method

    DEFF Research Database (Denmark)

    Jensen, Bo Boye Busk; Friis, Alan

    2004-01-01

    In order to simulate the results of practical cleaning tests on closed processing equipment, based on wall shear stress predicted by computational fluid dynamics, a critical wall shear stress is required for that particular cleaning method. This work presents investigations that provide a critical...... wall shear stress of 3 Pa for the standardised EHEDG cleaning test method. The cleaning tests were performed on a test disc placed in a radial flowcell assay. Turbulent flow conditions were generated and the corresponding wall shear stresses were predicted from CFD simulations. Combining wall shear...... stress predictions from a simulation using the low Re k-epsilon and one using the two-layer model of Norris and Reynolds were found to produce reliable predictions compared to empirical solutions for the ideal flow case. The comparison of wall shear stress curves predicted for the real RFC...

  2. Testing the predictive power of nuclear mass models

    International Nuclear Information System (INIS)

    Mendoza-Temis, J.; Morales, I.; Barea, J.; Frank, A.; Hirsch, J.G.; Vieyra, J.C. Lopez; Van Isacker, P.; Velazquez, V.

    2008-01-01

    A number of tests are introduced which probe the ability of nuclear mass models to extrapolate. Three models are analyzed in detail: the liquid drop model, the liquid drop model plus empirical shell corrections and the Duflo-Zuker mass formula. If predicted nuclei are close to the fitted ones, average errors in predicted and fitted masses are similar. However, the challenge of predicting nuclear masses in a region stabilized by shell effects (e.g., the lead region) is far more difficult. The Duflo-Zuker mass formula emerges as a powerful predictive tool
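    The first of the three models tested, the liquid drop model, is simple enough to evaluate directly via the semi-empirical (Bethe-Weizsäcker) mass formula. The coefficients below are one common textbook set in MeV, not the fitted values of the paper:

    ```python
    # Liquid-drop binding energy B(Z, A) in MeV (no shell corrections).
    def binding_energy(Z, A):
        N = A - Z
        pairing = 11.18 / A ** 0.5
        delta = pairing if (Z % 2 == 0 and N % 2 == 0) else \
                -pairing if (Z % 2 == 1 and N % 2 == 1) else 0.0
        return (15.75 * A                              # volume
                - 17.8 * A ** (2 / 3)                  # surface
                - 0.711 * Z * (Z - 1) / A ** (1 / 3)   # Coulomb
                - 23.7 * (A - 2 * Z) ** 2 / A          # asymmetry
                + delta)                               # pairing

    print(round(binding_energy(26, 56) / 56, 2))  # Fe-56: ~8.8 MeV/nucleon
    ```

    The smooth terms reproduce the binding-energy curve well on average; it is exactly the shell-stabilized regions (such as around lead) where this formula misses, which is the extrapolation challenge the abstract highlights and the motivation for the empirical shell corrections and the Duflo-Zuker formula.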

  3. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...
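    The Kalman-predictor-based prediction-error idea can be made concrete for a scalar state-space model: compute the one-step innovations for a candidate parameter, then score the parameter by a criterion on those innovations (sum of squares for least squares, or the Gaussian likelihood). This is a hedged scalar, single-step toy, not the paper's multi-step comparison:

    ```python
    # One-step prediction errors (innovations) from a scalar Kalman filter for
    #   x[k+1] = a x[k] + w[k],  y[k] = x[k] + v[k],  Var w = q, Var v = r.
    def innovations(y, a, q, r):
        x_hat, p = 0.0, 1.0                  # prior state mean and variance
        errs = []
        for yk in y:
            e = yk - x_hat                   # one-step prediction error
            errs.append(e)
            gain = p / (p + r)               # Kalman gain
            x_filt = x_hat + gain * e        # measurement update
            p_filt = (1.0 - gain) * p
            x_hat = a * x_filt               # time update (one-step predictor)
            p = a * a * p_filt + q
        return errs

    # Noise-free demonstration data generated by the true system with a = 0.9:
    y = [0.9 ** k for k in range(100)]
    V = {a: sum(e * e for e in innovations(y, a, q=0.01, r=0.04))
         for a in (0.5, 0.7, 0.9)}
    print(min(V, key=V.get))
    ```

    The least-squares criterion V(a) is smallest at the true dynamics, which is the mechanism both criteria in the paper exploit; they differ in how the innovation variances weight the errors.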

  4. PARAMO: a PARAllel predictive MOdeling platform for healthcare analytic research using electronic health records.

    Science.gov (United States)

    Ng, Kenney; Ghoting, Amol; Steinhubl, Steven R; Stewart, Walter F; Malin, Bradley; Sun, Jimeng

    2014-04-01

    Healthcare analytics research increasingly involves the construction of predictive models for disease targets across varying patient cohorts using electronic health records (EHRs). To facilitate this process, it is critical to support a pipeline of tasks: (1) cohort construction, (2) feature construction, (3) cross-validation, (4) feature selection, and (5) classification. To develop an appropriate model, it is necessary to compare and refine models derived from a diversity of cohorts, patient-specific features, and statistical frameworks. The goal of this work is to develop and evaluate a predictive modeling platform that can be used to simplify and expedite this process for health data. To support this goal, we developed a PARAllel predictive MOdeling (PARAMO) platform which (1) constructs a dependency graph of tasks from specifications of predictive modeling pipelines, (2) schedules the tasks in a topological ordering of the graph, and (3) executes those tasks in parallel. We implemented this platform using Map-Reduce to enable independent tasks to run in parallel in a cluster computing environment. Different task scheduling preferences are also supported. We assess the performance of PARAMO on various workloads using three datasets derived from the EHR systems in place at Geisinger Health System and Vanderbilt University Medical Center and an anonymous longitudinal claims database. We demonstrate significant gains in computational efficiency against a standard approach. In particular, PARAMO can build 800 different models on a 300,000-patient data set in 3 h in parallel, compared to 9 days if running sequentially. This work demonstrates that an efficient parallel predictive modeling platform can be developed for EHR data. This platform can facilitate large-scale modeling endeavors and speed up the research workflow and reuse of health information. This platform is only a first step and provides the foundation for our ultimate goal of building analytic pipelines.
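    The core scheduling idea (a dependency graph of pipeline tasks, run level by level in parallel) can be sketched in a few lines. This is an illustration of the pattern, not PARAMO's actual API; the task names mirror the five pipeline stages from the abstract:

    ```python
    from collections import deque
    from concurrent.futures import ThreadPoolExecutor

    def run_pipeline(deps, action):
        """deps: task -> set of prerequisite tasks; action(task) does the work.
        Executes each topological 'level' of independent tasks in parallel."""
        indeg = {t: len(d) for t, d in deps.items()}
        children = {t: [] for t in deps}
        for t, d in deps.items():
            for p in d:
                children[p].append(t)
        ready = deque(t for t, n in indeg.items() if n == 0)
        done = []
        with ThreadPoolExecutor(max_workers=4) as pool:
            while ready:
                level = list(ready)
                ready.clear()
                list(pool.map(action, level))   # independent tasks in parallel
                for t in level:
                    done.append(t)
                    for c in children[t]:
                        indeg[c] -= 1
                        if indeg[c] == 0:
                            ready.append(c)
        return done

    deps = {"cohort": set(), "features": {"cohort"}, "cv_split": {"cohort"},
            "select": {"features", "cv_split"}, "classify": {"select"}}
    print(run_pipeline(deps, lambda t: None))
    ```

    In PARAMO the same topological schedule is executed with Map-Reduce on a cluster rather than with threads, but the correctness argument is identical: a task starts only after every prerequisite has finished.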

  5. Theoretical prediction method of subcooled flow boiling CHF

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Young Min; Chang, Soon Heung [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1999-12-31

    A theoretical critical heat flux (CHF) model, based on lateral bubble coalescence on the heated wall, is proposed to predict the subcooled flow boiling CHF in a uniformly heated vertical tube. The model is based on the concept that a single layer of bubbles in contact with the heated wall prevents the bulk liquid from reaching the wall near the CHF condition. Comparisons between the model predictions and experimental data show satisfactory agreement, with a root-mean-square error of less than 9.73%, given an appropriate choice of the critical void fraction in the bubbly layer. The present model shows performance comparable to the CHF look-up table of Groeneveld et al. 28 refs., 11 figs., 1 tab. (Author)

  7. New relation for critical exponents in the Ising model

    International Nuclear Information System (INIS)

    Pishtshev, A.

    2007-01-01

    The Ising model in a transverse field is considered at T = 0. From an analysis of the power-law behaviors of the energy gap and the order parameter as functions of the field, a new relation between the respective critical exponents, β ≥ 1/(8s²), is derived. Using the Suzuki equivalence, a new relation for critical exponents in the Ising model, β ≥ 1/(8ν²), is obtained from this inequality. A number of numerical examples for different cases illustrate the generality and validity of the relation. By applying this relation, the estimate ν = (1/4)^(1/3) ≈ 0.62996 for the 3D Ising model is proposed
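    Both the proposed estimate and the inequality are cheap to sanity-check numerically against the commonly quoted 3D Ising exponents (β ≈ 0.326, ν ≈ 0.630):

    ```python
    # Numerical check of the proposed estimate and of beta >= 1/(8*nu**2).
    nu_proposed = (1 / 4) ** (1 / 3)
    print(round(nu_proposed, 5))           # close to the accepted nu ~ 0.630

    beta_3d, nu_3d = 0.326, 0.630          # accepted 3D Ising values
    print(beta_3d >= 1 / (8 * nu_3d ** 2))
    ```

    With the accepted values, 1/(8ν²) ≈ 0.315, so the bound β ≥ 1/(8ν²) is satisfied but only with a few percent of slack, i.e. it is a nontrivial constraint rather than a loose one.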

  8. Foundation Settlement Prediction Based on a Novel NGM Model

    Directory of Open Access Journals (Sweden)

    Peng-Yu Chen

    2014-01-01

    Full Text Available Prediction of foundation or subgrade settlement is very important during engineering construction. Given that many settlement-time sequences exhibit a non-homogeneous index (exponential) trend, a novel grey forecasting model called the NGM(1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM(1,1,k,c) model has the property of white exponential law coincidence and can predict a pure non-homogeneous index sequence precisely. We used two case studies to verify the predictive effect of the NGM(1,1,k,c) model for settlement prediction. The results show that this model can achieve excellent prediction accuracy; thus, the model is quite suitable for simulation and prediction of approximately non-homogeneous index sequences and has excellent application value in settlement prediction.
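    For context, the classical GM(1,1) model that NGM(1,1,k,c) generalizes can be sketched compactly: accumulate the series, fit the grey parameters a, b by least squares on the background values, and forecast by differencing the fitted exponential. This is the baseline special case, not the paper's NGM(1,1,k,c) with its optimized whitenization equation:

    ```python
    import math

    def gm11_forecast(x0, steps):
        """Classical GM(1,1) forecast of the next `steps` values of x0."""
        n = len(x0)
        x1 = [sum(x0[:i + 1]) for i in range(n)]               # accumulation
        z1 = [(x1[i] + x1[i + 1]) / 2 for i in range(n - 1)]   # backgrounds
        # Least squares for [a, b] in x0(k) = -a*z1(k) + b (normal equations):
        m = n - 1
        szz = sum(z * z for z in z1); sz = sum(z1)
        szy = sum(z * y for z, y in zip(z1, x0[1:])); sy = sum(x0[1:])
        a = -(m * szy - sz * sy) / (m * szz - sz * sz)
        b = (sy + a * sz) / m
        def x1_hat(k):
            return (x0[0] - b / a) * math.exp(-a * k) + b / a
        return [x1_hat(k) - x1_hat(k - 1) for k in range(n, n + steps)]

    # A settlement-like series with a near-exponential trend (illustrative):
    x0 = [10.0, 11.5, 13.2, 15.2, 17.5]
    print([round(v, 2) for v in gm11_forecast(x0, 3)])
    ```

    GM(1,1) is exact only for homogeneous exponential trends; the "white exponential law coincidence" claimed for NGM(1,1,k,c) is precisely the extension of this exactness to non-homogeneous index sequences of the form c1*q^k + c2.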

  9. Electrostatic ion thrusters - towards predictive modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)

    2014-02-15

    The development of electrostatic ion thrusters so far has mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This resulted in considerable effort regarding prototype design, construction and testing and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and space craft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules a new quality in the description of electrostatic thrusters can be reached. These open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  10. Integrating geophysics and hydrology for reducing the uncertainty of groundwater model predictions and improved prediction performance

    DEFF Research Database (Denmark)

    Christensen, Nikolaj Kruse; Christensen, Steen; Ferre, Ty

    the integration of geophysical data in the construction of a groundwater model increases the prediction performance. We suggest that modelers should perform a hydrogeophysical “test-bench” analysis of the likely value of geophysics data for improving groundwater model prediction performance before actually...... and the resulting predictions can be compared with predictions from the ‘true’ model. By performing this analysis we expect to give the modeler insight into how the uncertainty of model-based prediction can be reduced.......A major purpose of groundwater modeling is to help decision-makers in efforts to manage the natural environment. Increasingly, it is recognized that both the predictions of interest and their associated uncertainties should be quantified to support robust decision making. In particular, decision...

  11. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  12. NOx PREDICTION FOR FBC BOILERS USING EMPIRICAL MODELS

    Directory of Open Access Journals (Sweden)

    Jiří Štefanica

    2014-02-01

    Full Text Available Reliable prediction of NOx emissions can provide useful information for boiler design and fuel selection. Recently used kinetic prediction models for FBC boilers are overly complex and require large computing capacity. Even so, there are many uncertainties in the case of FBC boilers. An empirical modeling approach for NOx prediction has been used exclusively for PCC boilers. No reference is available for modifying this method for FBC conditions. This paper presents possible advantages of empirical modeling based prediction of NOx emissions for FBC boilers, together with a discussion of its limitations. Empirical models are reviewed, and are applied to operation data from FBC boilers used for combusting Czech lignite coal or coal-biomass mixtures. Modifications to the model are proposed in accordance with theoretical knowledge and prediction accuracy.
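    An empirical NOx model of the kind discussed here is, at its simplest, a regression of measured emissions on a few operating variables. The sketch below fits a linear model on synthetic operating data; the variables, coefficients and noise level are illustrative placeholders, not values from any boiler:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    o2 = rng.uniform(3.0, 8.0, 60)           # excess O2 [%]
    temp = rng.uniform(820.0, 900.0, 60)     # bed temperature [deg C]
    true_c = np.array([50.0, 12.0, 0.4])     # intercept, per % O2, per deg C
    nox = true_c[0] + true_c[1] * o2 + true_c[2] * temp \
        + rng.normal(0, 5.0, 60)             # synthetic "measured" NOx [mg/m3]

    # Ordinary least squares on [1, O2, T]:
    X = np.column_stack([np.ones_like(o2), o2, temp])
    coef, *_ = np.linalg.lstsq(X, nox, rcond=None)
    print(np.round(coef, 1))
    ```

    The appeal over kinetic models is exactly this: a handful of operating covariates and a least-squares fit, at the cost of validity only inside the fitted operating envelope, which is the limitation the paper discusses for FBC conditions.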

  13. How Adverse Outcome Pathways Can Aid the Development and Use of Computational Prediction Models for Regulatory Toxicology

    Energy Technology Data Exchange (ETDEWEB)

    Wittwehr, Clemens; Aladjov, Hristo; Ankley, Gerald; Byrne, Hugh J.; de Knecht, Joop; Heinzle, Elmar; Klambauer, Günter; Landesmann, Brigitte; Luijten, Mirjam; MacKay, Cameron; Maxwell, Gavin; Meek, M. E. (Bette); Paini, Alicia; Perkins, Edward; Sobanski, Tomasz; Villeneuve, Dan; Waters, Katrina M.; Whelan, Maurice

    2016-12-19

    Efforts are underway to transform regulatory toxicology and chemical safety assessment from a largely empirical science based on direct observation of apical toxicity outcomes in whole organism toxicity tests to a predictive one in which outcomes and risk are inferred from accumulated mechanistic understanding. The adverse outcome pathway (AOP) framework has emerged as a systematic approach for organizing knowledge that supports such inference. We argue that this systematic organization of knowledge can inform and help direct the design and development of computational prediction models that can further enhance the utility of mechanistic and in silico data for chemical safety assessment. Examples of AOP-informed model development and its application to the assessment of chemicals for skin sensitization and multiple modes of endocrine disruption are provided. The role of problem formulation, not only as a critical phase of risk assessment but also as a guide for both AOP and complementary model development, is described. Finally, a proposal for actively engaging the modeling community in AOP-informed computational model development is made. The contents serve as a vision for how AOPs can be leveraged to facilitate development of computational prediction models needed to support the next generation of chemical safety assessment.

  14. Quantum critical Hall exponents

    CERN Document Server

    Lütken, C A

    2014-01-01

    We investigate a finite size "double scaling" hypothesis using data from an experiment on a quantum Hall system with short range disorder [1-3]. For Hall bars of width w at temperature T the scaling form is w^(-μ)T^(-κ), where the critical exponent μ ≈ 0.23 we extract from the data is comparable to the multi-fractal exponent α_0 - 2 obtained from the Chalker-Coddington (CC) model [4]. We also use the data to find the approximate location (in the resistivity plane) of seven quantum critical points, all of which closely agree with the predictions derived long ago from the modular symmetry of a toroidal sigma-model with m matter fields [5]. The value ν_8 = 2.60513... of the localisation exponent obtained from the m = 8 model is in excellent agreement with the best available numerical value ν_num = 2.607 +/- 0.004 derived from the CC-model [6]. Existing experimental data appear to favour the m = 9 model, suggesting that the quantum Hall system is not in the same universality class as th...

  15. Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics

    Directory of Open Access Journals (Sweden)

    Cecilia Noecker

    2015-03-01

    Full Text Available Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1–2 weeks post-infection. Several interventions against HIV including vaccines and antiretroviral prophylaxis target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because predictions of such models have rarely been compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the "standard" mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to provide the observed rate of SIV load increase in the blood of infected animals. Interestingly, and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode of virus production. Second, this mathematical model was not able to accurately describe the change in experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral doses.
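    The modified standard model described here, with an eclipse (transitioning) compartment between infection and virus production, can be sketched as four ODEs integrated with forward Euler: target cells T, eclipse-stage cells I1, productively infected cells I2, and free virus V. All parameter values are illustrative, not the paper's fits:

    ```python
    # dT/dt  = -beta*T*V          (target-cell infection)
    # dI1/dt =  beta*T*V - k*I1   (eclipse stage, transitioning to production)
    # dI2/dt =  k*I1 - delta*I2   (productively infected cells)
    # dV/dt  =  p*I2 - c*V        (budding-style continuous virus production)
    def simulate(days=10.0, dt=1e-3):
        beta, k, delta, p, c = 1e-7, 4.0, 1.0, 1e3, 10.0   # illustrative
        T, I1, I2, V = 1e6, 0.0, 0.0, 1e-2                 # small inoculum
        for _ in range(int(days / dt)):
            new_inf = beta * T * V
            T += dt * (-new_inf)
            I1 += dt * (new_inf - k * I1)
            I2 += dt * (k * I1 - delta * I2)
            V += dt * (p * I2 - c * V)
        return T, I1, I2, V

    T, I1, I2, V = simulate()
    print(V > 1e2, T < 1e6)
    ```

    With these numbers the basic reproduction number p·beta·T0/(c·delta) is about 10, so a tiny inoculum grows to a systemic peak within days; the bursting variant discussed in the abstract would instead release virus in discrete packets at cell death, which matters for extinction probability but, per the paper, little for the deterministic growth curve.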

  16. Prediction of pipeline corrosion rate based on grey Markov models

    International Nuclear Information System (INIS)

    Chen Yonghong; Zhang Dafa; Peng Guichu; Wang Yuemin

    2009-01-01

    Based on a model combining a grey model with a Markov model, the prediction of the corrosion rate of nuclear power pipelines was studied. The grey model was improved to obtain an optimized unbiased grey model. This new model was used to predict the tendency of the corrosion rate, and the Markov model was used to predict the residual errors. To improve the prediction precision, a rolling operation method was used in these prediction processes. The results indicate that the improvement to the grey model is effective, that the prediction precision of the new model combining the optimized unbiased grey model and the Markov model is better, and that the use of the rolling operation method may improve the prediction precision further. (authors)

  17. Sweat loss prediction using a multi-model approach.

    Science.gov (United States)

    Xu, Xiaojiang; Santee, William R

    2011-07-01

    A new multi-model approach (MMA) for sweat loss prediction is proposed to improve prediction accuracy. MMA was computed as the average of sweat loss predicted by two existing thermoregulation models: i.e., the rational model SCENARIO and the empirical model Heat Strain Decision Aid (HSDA). Three independent physiological datasets, a total of 44 trials, were used to compare predictions by MMA, SCENARIO, and HSDA. The observed sweat losses were collected under different combinations of uniform ensembles, environmental conditions (15-40°C, RH 25-75%), and exercise intensities (250-600 W). Root mean square deviation (RMSD), residual plots, and paired t tests were used to compare predictions with observations. Overall, MMA reduced RMSD by 30-39% in comparison with either SCENARIO or HSDA, and increased the prediction accuracy to 66% from 34% or 55%. Of the MMA predictions, 70% fell within the range of mean observed value ± SD, while only 43% of SCENARIO and 50% of HSDA predictions fell within the same range. Paired t tests showed that differences between observations and MMA predictions were not significant, but differences between observations and SCENARIO or HSDA predictions were significantly different for two datasets. Thus, MMA predicted sweat loss more accurately than either of the two single models for the three datasets used. Future work will be to evaluate MMA using additional physiological data to expand the scope of populations and conditions.
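    The MMA itself is just the mean of the two model outputs, scored by RMSD against observations. The sketch below uses synthetic stand-in arrays constructed so the two models err in opposite directions, the situation in which averaging helps most; the numbers are illustrative, not the paper's data:

    ```python
    import numpy as np

    def rmsd(pred, obs):
        """Root mean square deviation between predictions and observations."""
        return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2)))

    obs = np.array([450.0, 520.0, 610.0, 700.0, 830.0])   # sweat loss [g/h]
    model_a = obs + np.array([60.0, 40.0, 55.0, 70.0, 50.0])   # biased high
    model_b = obs - np.array([45.0, 55.0, 60.0, 50.0, 65.0])   # biased low
    mma = (model_a + model_b) / 2.0                            # multi-model avg
    print(rmsd(model_a, obs), rmsd(model_b, obs), rmsd(mma, obs))
    ```

    Averaging cancels opposing biases and partially cancels independent errors, which is consistent with the 30-39% RMSD reduction the abstract reports for combining a rational and an empirical model.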

  18. Universal signatures of fractionalized quantum critical points.

    Science.gov (United States)

    Isakov, Sergei V; Melko, Roger G; Hastings, Matthew B

    2012-01-13

    Ground states of certain materials can support exotic excitations with a charge equal to a fraction of the fundamental electron charge. The condensation of these fractionalized particles has been predicted to drive unusual quantum phase transitions. Through numerical and theoretical analysis of a physical model of interacting lattice bosons, we establish the existence of such an exotic critical point, called XY*. We measure a highly nonclassical critical exponent η = 1.493 and construct a universal scaling function of winding number distributions that directly demonstrates the distinct topological sectors of an emergent Z(2) gauge field. The universal quantities used to establish this exotic transition can be used to detect other fractionalized quantum critical points in future model and material systems.

  19. Bak-Tang-Wiesenfeld model in the upper critical dimension: Induced criticality in lower-dimensional subsystems

    Science.gov (United States)

    Dashti-Naserabadi, H.; Najafi, M. N.

    2017-10-01

    We present extensive numerical simulations of the Bak-Tang-Wiesenfeld (BTW) sandpile model on the hypercubic lattice in the upper critical dimension Du = 4. After re-extracting the critical exponents of avalanches, we concentrate on the three- and two-dimensional (2D) cross sections, seeking the induced criticality reflected in the geometrical and local exponents. Various features of finite-size scaling (FSS) theory have been tested and confirmed for all dimensions. The hyperscaling relations between the exponents of the distribution functions and the fractal dimensions are shown to be valid for all dimensions. We find that the exponent of the distribution function of avalanche mass is the same for the d-dimensional cross sections and the d-dimensional BTW model for d = 2 and 3. The geometrical quantities, however, behave completely differently from the same-dimensional BTW model. By analyzing the FSS theory for the geometrical exponents of the two-dimensional cross sections, we propose that the 2D induced models have degrees of similarity with the Gaussian free field (GFF). Although some local exponents are slightly different, this similarity is excellent for the fractal dimensions. The most important quantity showing this feature is the fractal dimension of loops d_f, found to be 1.50 ± 0.02 ≈ 3/2 = d_f^GFF.
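
    The BTW rules themselves are simple. A minimal 2D version with open boundaries (the paper works in 4D and on its cross sections, which this sketch does not attempt) can be simulated as follows; the avalanche size is the number of topplings triggered by each added grain:

```python
import numpy as np

def btw_avalanche_sizes(L=20, grains=2000, zc=4, seed=0):
    """Drive a 2D BTW sandpile: add grains at random sites, topple sites with z >= zc."""
    rng = np.random.default_rng(seed)
    z = np.zeros((L, L), dtype=int)
    sizes = []
    for _ in range(grains):
        i, j = rng.integers(0, L, size=2)
        z[i, j] += 1
        size = 0
        while (z >= zc).any():
            for a, b in np.argwhere(z >= zc):
                z[a, b] -= zc                          # topple: one grain to each neighbour
                size += 1
                for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    na, nb = a + da, b + db
                    if 0 <= na < L and 0 <= nb < L:    # neighbours off the lattice are lost
                        z[na, nb] += 1
        sizes.append(size)
    return sizes
```

    In the self-organized critical steady state, the distribution of these sizes follows the power laws whose exponents the paper re-extracts.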

  20. The critical domain size of stochastic population models.

    Science.gov (United States)

    Reimer, Jody R; Bonsall, Michael B; Maini, Philip K

    2017-02-01

    Identifying the critical domain size necessary for a population to persist is an important question in ecology. Both demographic and environmental stochasticity impact a population's ability to persist. Here we explore ways of including this variability. We study populations with distinct dispersal and sedentary stages, which have traditionally been modelled using a deterministic integrodifference equation (IDE) framework. Individual-based models (IBMs) are the most intuitive stochastic analogues to IDEs but yield few analytic insights. We explore two alternate approaches; one is a scaling up to the population level using the Central Limit Theorem, and the other a variation on both Galton-Watson branching processes and branching processes in random environments. These branching process models closely approximate the IBM and yield insight into the factors determining the critical domain size for a given population subject to stochasticity.
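
    A bare Galton-Watson simulation (Poisson offspring; the parameter values below are arbitrary) illustrates the persistence question that the branching-process approximation answers: the fraction of extinct lineages approaches the root of q = e^{m(q-1)}.

```python
import numpy as np

def extinction_fraction(m=1.2, generations=50, trials=2000, cap=5000, seed=1):
    """Fraction of Galton-Watson lineages (Poisson(m) offspring) that die out."""
    rng = np.random.default_rng(seed)
    extinct = 0
    for _ in range(trials):
        n = 1
        for _ in range(generations):
            n = int(rng.poisson(m, n).sum())  # each individual reproduces independently
            if n == 0:
                extinct += 1
                break
            if n > cap:                       # a large lineage has effectively escaped extinction
                break
    return extinct / trials
```

    For m = 1.2 the theoretical extinction probability is about 0.686, while subcritical lineages (m < 1) die out almost surely; spatial structure and a finite domain, as in the paper, push the effective m downward.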

  1. prediction of shear resistance factor in flat slabs design using critical

    African Journals Online (AJOL)

    user

    The provisions of the American, Canadian, European and. Model codes, regarding the ... is the applied shear stress. W1 .... perimeter implies a smaller stress while a smaller critical .... should be compared with values provided in this work to validate ... [1] American Concrete Institute : Building code requirement for structural ...

  2. Depositional sequence analysis and sedimentologic modeling for improved prediction of Pennsylvanian reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Watney, W.L.

    1994-12-01

    Reservoirs in the Lansing-Kansas City limestone result from complex interactions among paleotopography (deposition, concurrent structural deformation), sea level, and diagenesis. Analysis of reservoirs and of surface and near-surface analogs has led to a "strandline grainstone model" in which relative sea level stabilized during regressions, resulting in the accumulation of multiple grainstone buildups along depositional strike. The resulting stratigraphy in these carbonate units is generally predictable, correlating with inferred topographic elevation along the shelf. This model is a valuable predictive tool for (1) locating favorable reservoirs for exploration and (2) anticipating internal properties of the reservoir for field development. Reservoirs in the Lansing-Kansas City limestones are developed in both oolitic and bioclastic grainstones; however, re-analysis of oomoldic reservoirs provides the greatest opportunity for developing bypassed oil. A new technique, the "Super" Pickett crossplot (formation resistivity vs. porosity), used within an integrated petrophysical characterization, has been developed to evaluate the extractable oil remaining in these reservoirs. The manual method, in combination with 3-D visualization and modeling, can help to target production-limiting heterogeneities in these complex reservoirs and, moreover, to compute critical parameters for the field such as bulk volume water. Application of this technique indicates that 6-9 million barrels of Lansing-Kansas City oil remain behind pipe in the Victory-Northeast Lemon Fields. Petroleum geologists are challenged to quantify inferred processes to aid in developing rational, geologically consistent models of sedimentation so that acceptable levels of prediction can be obtained.

  3. Finding Furfural Hydrogenation Catalysts via Predictive Modelling

    Science.gov (United States)

    Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi

    2010-01-01

    We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes were synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (kH:kD = 1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R2 = 0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model's predictions, demonstrating the validity and value of predictive modelling in catalyst optimization. PMID:23193388

  4. Alcator C-Mod predictive modeling

    International Nuclear Information System (INIS)

    Pankin, Alexei; Bateman, Glenn; Kritz, Arnold; Greenwald, Martin; Snipes, Joseph; Fredian, Thomas

    2001-01-01

    Predictive simulations for the Alcator C-mod tokamak [I. Hutchinson et al., Phys. Plasmas 1, 1511 (1994)] are carried out using the BALDUR integrated modeling code [C. E. Singer et al., Comput. Phys. Commun. 49, 275 (1988)]. The results are obtained for temperature and density profiles using the Multi-Mode transport model [G. Bateman et al., Phys. Plasmas 5, 1793 (1998)] as well as the mixed-Bohm/gyro-Bohm transport model [M. Erba et al., Plasma Phys. Controlled Fusion 39, 261 (1997)]. The simulated discharges are characterized by very high plasma density in both low and high modes of confinement. The predicted profiles for each of the transport models match the experimental data about equally well in spite of the fact that the two models have different dimensionless scalings. Average relative rms deviations are less than 8% for the electron density profiles and 16% for the electron and ion temperature profiles

  5. Clinical Predictive Modeling Development and Deployment through FHIR Web Services.

    Science.gov (United States)

    Khalilia, Mohammed; Choi, Myung; Henderson, Amelia; Iyengar, Sneha; Braunstein, Mark; Sun, Jimeng

    2015-01-01

    Clinical predictive modeling involves two challenging tasks: model development and model deployment. In this paper we demonstrate a software architecture for developing and deploying clinical predictive models using web services via the Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standard. The services enable model development using electronic health records (EHRs) stored in OMOP CDM databases and model deployment for scoring individual patients through FHIR resources. The MIMIC2 ICU dataset and a synthetic outpatient dataset were transformed into OMOP CDM databases for predictive model development. The resulting predictive models are deployed as FHIR resources, which receive requests of patient information, perform prediction against the deployed predictive model and respond with prediction scores. To assess the practicality of this approach we evaluated the response and prediction time of the FHIR modeling web services. We found the system to be reasonably fast with one second total response time per patient prediction.

  6. Climate-based models for pulsed resources improve predictability of consumer population dynamics: outbreaks of house mice in forest ecosystems.

    Directory of Open Access Journals (Sweden)

    E Penelope Holland

    Accurate predictions of the timing and magnitude of consumer responses to episodic seeding events (masts) are important for understanding ecosystem dynamics and for managing outbreaks of invasive species generated by masts. While models relating consumer populations to resource fluctuations have been developed successfully for a range of natural and modified ecosystems, a critical gap that needs addressing is better prediction of resource pulses. A recent model used the change in summer temperature from one year to the next (ΔT) for predicting masts for forest and grassland plants in New Zealand. We extend this climate-based method in the framework of a consumer-resource model to predict invasive house mouse (Mus musculus) outbreaks in forest ecosystems. Compared with previous mast models based on absolute temperature, the ΔT method for predicting masts resulted in an improved model for mouse population dynamics. There was also a threshold effect of ΔT on the likelihood of an outbreak occurring. The improved climate-based method for predicting resource pulses and consumer responses provides a straightforward rule of thumb for determining, with one year's advance warning, whether management intervention might be required in invaded ecosystems. The approach could be applied to consumer-resource systems worldwide where climatic variables are used to model the size and duration of resource pulses, and may have particular relevance for ecosystems where global-change scenarios predict increased variability in climatic events.

  7. Soft-Cliff Retreat, Self-Organized Critical Phenomena in the Limit of Predictability?

    Science.gov (United States)

    Paredes, Carlos; Godoy, Clara; Castedo, Ricardo

    2015-03-01

    The coastal erosion along the world's coastlines is a natural process that occurs through the action of marine and subaerial physico-chemical phenomena: waves, tides, and currents. The development of cliff-erosion predictive models is limited by the complex interactions between environmental processes and material properties over a wide range of temporal and spatial scales. As a result of this erosive action, gravity-driven mass movements occur and the coastline moves inland. Like other natural and synthetically modelled earth phenomena characterized as self-organized critical (SOC), the recession of a cliff has a seemingly random, sporadic behaviour, with a wide range of yearly recession rates probabilistically distributed by a power law. Usually, SOC systems are defined by a number of scaling features in the size distribution of their parameters and in their spatial and/or temporal pattern. In particular, previous studies of parameters derived from slope-movement catalogues have detected certain SOC features in this phenomenon, features that cliff recession also shares. Owing to the complexity of the phenomenon, and as for other natural processes, there is no definitive model of the recession of coastal cliffs. In this work, various analysis techniques have been applied to identify SOC features in the distribution and pattern of a particular case: the Holderness shoreline. This coast is an excellent case study for examining coastal processes and the structures associated with them. It is one of the world's fastest-eroding coastlines (2 m/yr on average, maximum observed 22 m/yr). The coast is mainly composed of cliffs made up of glacial tills, ranging from 2 m up to 35 m in height. It is this soft boulder clay that is being rapidly eroded and where coastline recession measurements have been recorded by the Cliff Erosion Monitoring Program (East Riding of Yorkshire Council, UK). The original database has been filtered by grouping contiguous

  8. Predictive Modelling of Heavy Metals in Urban Lakes

    OpenAIRE

    Lindström, Martin

    2000-01-01

    Heavy metals are well-known environmental pollutants. In this thesis predictive models for heavy metals in urban lakes are discussed and new models presented. The base of predictive modelling is empirical data from field investigations of many ecosystems covering a wide range of ecosystem characteristics. Predictive models focus on the variabilities among lakes and processes controlling the major metal fluxes. Sediment and water data for this study were collected from ten small lakes in the ...

  9. Combination of inquiry learning model and computer simulation to improve mastery concept and the correlation with critical thinking skills (CTS)

    Science.gov (United States)

    Nugraha, Muhamad Gina; Kaniawati, Ida; Rusdiana, Dadi; Kirana, Kartika Hajar

    2016-02-01

    Among the purposes of physics learning at high school are to master physics concepts, to cultivate a scientific attitude (including a critical attitude), and to develop inductive and deductive reasoning skills. According to Ennis et al., inductive and deductive reasoning skills are part of critical thinking. Based on preliminary studies, both competences are under-achieved, as seen from low student learning outcomes and from learning processes that are not conducive to cultivating critical thinking (teacher-centered learning). One learning model predicted to increase mastery of concepts and train CTS is the inquiry learning model aided by computer simulations. In this model, students are given the opportunity to be actively involved in experiments and also receive good explanations through the computer simulations. From research with a randomized control-group pretest-posttest design, we found that the inquiry learning model aided by computer simulations can significantly improve students' mastery of concepts compared with the conventional (teacher-centered) method. With the inquiry learning model aided by computer simulations, 20% of students had high CTS, 63.3% medium, and 16.7% low. CTS greatly contributes to students' mastery of concepts, with a correlation coefficient of 0.697, and contributes substantially to the enhancement of mastery of concepts, with a correlation coefficient of 0.603.

  10. Genomic prediction using subsampling.

    Science.gov (United States)

    Xavier, Alencar; Xu, Shizhong; Muir, William; Rainey, Katy Martin

    2017-03-24

    Genome-wide assisted selection is a critical tool for the genetic improvement of plants and animals. Whole-genome regression models in a Bayesian framework represent the main family of prediction methods. Fitting such models with a large number of observations involves a prohibitive computational burden. We propose the use of a subsampling bootstrap Markov chain in genomic prediction. The method consists of fitting whole-genome regression models by subsampling observations in each round of a Markov chain Monte Carlo. We evaluated the effect of subsampling bootstrap on prediction and computational parameters. Across datasets, we observed an optimal subsampling proportion of observations around 50% with replacement, and around 33% without replacement. Subsampling provided a substantial decrease in computation time, reducing the time to fit the model by half. On average, losses in predictive properties imposed by subsampling were negligible, usually below 1%. For each dataset, an optimal subsampling point that improves prediction properties was observed, but the improvements were also negligible. Combining subsampling with Gibbs sampling yields an interesting ensemble algorithm. The investigation indicates that the subsampling bootstrap Markov chain algorithm substantially reduces the computational burden associated with model fitting, and it may slightly enhance prediction properties.
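
    As a much-simplified stand-in for the paper's subsampling bootstrap Markov chain (a full Bayesian whole-genome Gibbs sampler is beyond a sketch), the core idea can be illustrated with ridge regression: each round fits on a random ~50% subsample of observations, and the rounds are averaged. All data below are synthetic.

```python
import numpy as np

def subsampled_ridge(X, y, rounds=200, frac=0.5, lam=1.0, seed=0):
    """Average ridge solutions, each fitted on a random subsample of the rows."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    k = int(frac * n)
    beta_sum = np.zeros(p)
    for _ in range(rounds):
        idx = rng.choice(n, size=k, replace=True)   # ~50% subsample with replacement
        Xs, ys = X[idx], y[idx]
        beta_sum += np.linalg.solve(Xs.T @ Xs + lam * np.eye(p), Xs.T @ ys)
    return beta_sum / rounds

# synthetic stand-in for marker data
rng = np.random.default_rng(42)
X = rng.standard_normal((300, 20))
beta_true = rng.standard_normal(20)
y = X @ beta_true + 0.5 * rng.standard_normal(300)
beta_hat = subsampled_ridge(X, y)
```

    Each per-round solve touches only half the rows, which is where the reported halving of fitting time comes from.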

  11. Critical properties of the double-frequency sine-Gordon model with applications

    International Nuclear Information System (INIS)

    Fabrizio, M.; Gogolin, A.O.; Nersesyan, A.A.

    2000-01-01

    We study the properties of the double-frequency sine-Gordon model in the vicinity of the Ising quantum phase transition displayed by this model. Using a mapping onto a generalized lattice quantum Ashkin-Teller model, we obtain critical and nearly-off-critical correlation functions of various operators. We discuss applications of the double-sine-Gordon model to one-dimensional physical systems, like spin chains in a staggered external field and interacting electrons in a staggered potential

  12. Soil-pipe interaction modeling for pipe behavior prediction with super learning based methods

    Science.gov (United States)

    Shi, Fang; Peng, Xiang; Liu, Huan; Hu, Yafei; Liu, Zheng; Li, Eric

    2018-03-01

    Underground pipelines are subject to severe distress from the surrounding expansive soil. To investigate the structural response of water mains to varying soil movements, field data, including pipe wall strains, in situ soil water content, soil pressure, and temperature, were collected. Analyses of the monitoring data have been reported, but the relationship between soil properties and pipe deformation has not been well interpreted. To characterize this relationship, this paper presents a super learning based approach, combined with feature selection algorithms, to predict the structural behavior of water mains in different soil environments. Furthermore, an automatic variable selection method, i.e. the recursive feature elimination algorithm, was used to identify the critical predictors contributing to the pipe deformations. To investigate the adaptability of super learning to different predictive models, this research applied super learning based methods to three different datasets. The predictive performance was evaluated by R-squared, root-mean-square error, and mean absolute error. Based on this evaluation, the superiority of super learning was validated and demonstrated by accurately predicting three types of pipe deformations. In addition, a comprehensive understanding of the water mains' working environments becomes possible.
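
    The super learning idea — fit several base learners, then learn combination weights on their out-of-fold predictions — can be sketched in plain numpy with two deliberately simple base learners. This is a generic stacking sketch, not the feature-selection pipeline of the paper.

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, k)

def fit_linear(X, y):
    """Ordinary least squares with intercept."""
    A = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(A, y, rcond=None)[0]

def predict_linear(coef, X):
    return np.column_stack([np.ones(len(X)), X]) @ coef

def super_learn(X, y, k=5):
    """Minimal super learner: meta-weights over two base learners
    (global mean, linear regression) fitted on out-of-fold predictions."""
    n = len(y)
    Z = np.zeros((n, 2))                          # out-of-fold base predictions
    for fold in kfold_indices(n, k):
        train = np.setdiff1d(np.arange(n), fold)
        Z[fold, 0] = y[train].mean()                                      # base 1: mean
        Z[fold, 1] = predict_linear(fit_linear(X[train], y[train]), X[fold])  # base 2: OLS
    w = np.linalg.lstsq(Z, y, rcond=None)[0]      # meta-learner: least-squares weights
    bases = (y.mean(), fit_linear(X, y))          # refit bases on all data
    def predict(Xnew):
        Znew = np.column_stack([np.full(len(Xnew), bases[0]),
                                predict_linear(bases[1], Xnew)])
        return Znew @ w
    return predict

# synthetic demonstration
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
y = X @ np.array([1.0, 2.0, 3.0]) + 0.1 * rng.standard_normal(200)
predict = super_learn(X, y)
```

    A production super learner would use a richer base library and constrain the meta-weights to be non-negative; the out-of-fold construction is the essential part.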

  13. Critical analysis of algebraic collective models

    International Nuclear Information System (INIS)

    Moshinsky, M.

    1986-01-01

    The author understands by algebraic collective models all those based on specific Lie algebras, whether the latter are suggested through simple shell-model considerations, as in the case of the Interacting Boson Approximation (IBA), or have a detailed microscopic foundation, as in the symplectic model. To analyze these models critically, it is convenient to take a simple conceptual example in which all steps can be implemented analytically or through elementary numerical analysis. In this note the symplectic model in a two-dimensional space, i.e. one based on an sp(4,R) Lie algebra, is taken as an example, and it is shown how, through its complete discussion, a clearer understanding of the structure of algebraic collective models of nuclei can be obtained. In particular, the association of Hamiltonians related to maximal subalgebras of the basic Lie algebra with specific types of spectra is discussed, as well as the connections between spectra and shapes.

  14. Stage-specific predictive models for breast cancer survivability.

    Science.gov (United States)

    Kate, Rohit J; Nadig, Ramya

    2017-01-01

    Survivability rates vary widely among the various stages of breast cancer. Although machine learning models built in the past to predict breast cancer survivability were given stage as one of the features, they were not trained or evaluated separately for each stage. To investigate whether models trained and evaluated for individual stages perform differently, we used three different machine learning methods to build models predicting breast cancer survivability separately for each stage and compared them with the traditional joint models built for all stages. We also evaluated the models separately for each stage and together for all stages. Our results show that the most suitable model to predict survivability for a specific stage is the model trained for that particular stage. In our experiments, using additional examples from other stages during training did not help; in fact, it made performance worse in some cases. The most important features for predicting survivability were also found to differ between stages. By evaluating the models separately on different stages we found that performance varied widely across them. We also demonstrate that evaluating predictive models for survivability on all stages together, as was done in the past, is misleading because it overestimates performance. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
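
    The headline result — a model trained on one stage predicts that stage better than a joint all-stages model when the underlying relationships differ — is easy to reproduce on synthetic data (the coefficients below are invented):

```python
import numpy as np

rng = np.random.default_rng(7)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

def rmse(pred, y):
    return np.sqrt(np.mean((pred - y) ** 2))

# two "stages" whose feature-outcome relationships differ
X1 = rng.standard_normal((200, 3))
y1 = X1 @ np.array([2.0, 0.0, -1.0]) + 0.1 * rng.standard_normal(200)
X2 = rng.standard_normal((200, 3))
y2 = X2 @ np.array([-1.0, 3.0, 0.5]) + 0.1 * rng.standard_normal(200)

joint = ols(np.vstack([X1, X2]), np.concatenate([y1, y2]))  # one model for all stages
per1, per2 = ols(X1, y1), ols(X2, y2)                       # one model per stage

print(rmse(X1 @ joint, y1), rmse(X1 @ per1, y1))            # joint is much worse per stage
```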

  15. Impact of modellers' decisions on hydrological a priori predictions

    Science.gov (United States)

    Holländer, H. M.; Bormann, H.; Blume, T.; Buytaert, W.; Chirico, G. B.; Exbrayat, J.-F.; Gustafsson, D.; Hölzel, H.; Krauße, T.; Kraft, P.; Stoll, S.; Blöschl, G.; Flühler, H.

    2014-06-01

    In practice, the catchment hydrologist is often confronted with the task of predicting discharge without having the records needed for calibration. Here, we report the discharge predictions of 10 modellers - using the model of their choice - for the man-made Chicken Creek catchment (6 ha, northeast Germany, Gerwin et al., 2009b) and analyse how well they improved their predictions in three steps, each based on added information. The modellers predicted the catchment's hydrological response in its initial phase without access to the observed records. They used conceptually different physically based models, and their modelling experience differed largely. Hence, they encountered two problems: (i) simulating discharge for an ungauged catchment and (ii) using models developed for catchments that are not in a state of landscape transformation. The prediction exercise was organized in three steps: (1) for the first prediction, the modellers received a basic data set describing the catchment to a degree somewhat more complete than usually available for a priori predictions of ungauged catchments; they did not obtain information on stream flow, soil moisture, or groundwater response and therefore had to guess the initial conditions; (2) before the second prediction, they inspected the catchment on-site and discussed their first attempt; (3) for their third prediction, they were offered additional data by charging them pro forma with the costs of obtaining this additional information. Holländer et al. (2009) discussed the range of predictions obtained in step (1). Here, we detail the modellers' assumptions and decisions in accounting for the various processes. We document the prediction progress as well as the learning process resulting from the availability of added information. For the second and third steps, the progress in prediction quality is evaluated in relation to individual modelling experience and costs of

  16. A multivariate model for predicting segmental body composition.

    Science.gov (United States)

    Tian, Simiao; Mioche, Laurence; Denis, Jean-Baptiste; Morio, Béatrice

    2013-12-01

    The aims of the present study were to propose a multivariate model for predicting simultaneously body, trunk and appendicular fat and lean masses from easily measured variables and to compare its predictive capacity with that of the available univariate models that predict body fat percentage (BF%). The dual-energy X-ray absorptiometry (DXA) dataset (52% men and 48% women) with White, Black and Hispanic ethnicities (1999-2004, National Health and Nutrition Examination Survey) was randomly divided into three sub-datasets: a training dataset (TRD), a test dataset (TED); a validation dataset (VAD), comprising 3835, 1917 and 1917 subjects. For each sex, several multivariate prediction models were fitted from the TRD using age, weight, height and possibly waist circumference. The most accurate model was selected from the TED and then applied to the VAD and a French DXA dataset (French DB) (526 men and 529 women) to assess the prediction accuracy in comparison with that of five published univariate models, for which adjusted formulas were re-estimated using the TRD. Waist circumference was found to improve the prediction accuracy, especially in men. For BF%, the standard error of prediction (SEP) values were 3.26 (3.75) % for men and 3.47 (3.95)% for women in the VAD (French DB), as good as those of the adjusted univariate models. Moreover, the SEP values for the prediction of body and appendicular lean masses ranged from 1.39 to 2.75 kg for both the sexes. The prediction accuracy was best for age < 65 years, BMI < 30 kg/m2 and the Hispanic ethnicity. The application of our multivariate model to large populations could be useful to address various public health issues.
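
    Predicting body, trunk, and appendicular masses simultaneously is a multi-output regression; with a matrix right-hand side, a single least-squares solve fits all outcomes at once. The covariates and coefficients below are invented placeholders, not the study's fitted model:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
# hypothetical easy covariates: intercept plus age, weight, height, waist (standardized)
X = np.column_stack([np.ones(n), rng.standard_normal((n, 4))])
B_true = rng.standard_normal((5, 3))          # maps covariates to 3 compartment masses
Y = X @ B_true + 0.2 * rng.standard_normal((n, 3))

B_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)  # one solve fits all three outcomes jointly
sep = np.sqrt(np.mean((X @ B_hat - Y) ** 2, axis=0))  # per-outcome error of prediction
```

    A genuinely multivariate model can additionally exploit the residual correlations between compartments, which independent univariate fits ignore.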

  17. Two stage neural network modelling for robust model predictive control.

    Science.gov (United States)

    Patan, Krzysztof

    2018-01-01

    The paper proposes a novel robust model predictive control scheme realized by means of artificial neural networks. The neural networks are used twofold: to design the so-called fundamental model of a plant and to catch uncertainty associated with the plant model. In order to simplify the optimization process carried out within the framework of predictive control an instantaneous linearization is applied which renders it possible to define the optimization problem in the form of constrained quadratic programming. Stability of the proposed control system is also investigated by showing that a cost function is monotonically decreasing with respect to time. Derived robust model predictive control is tested and validated on the example of a pneumatic servomechanism working at different operating regimes. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
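
    Stripped of the neural-network plant model and the uncertainty handling, the receding-horizon core of MPC is: predict over a horizon, minimize a quadratic cost over the input sequence, apply only the first input, repeat. A minimal unconstrained linear sketch (double-integrator plant and parameters invented for illustration):

```python
import numpy as np

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])      # double-integrator plant (illustrative)
B = np.array([[0.0], [dt]])
N, lam = 15, 0.01                           # horizon and input penalty

# Stack predictions: x_{1..N} = Phi x0 + G u_{0..N-1}
Phi = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
G = np.zeros((2 * N, N))
for i in range(N):
    for j in range(i + 1):
        G[2 * i:2 * i + 2, j:j + 1] = np.linalg.matrix_power(A, i - j) @ B

xref = np.tile(np.array([1.0, 0.0]), N)     # regulate position to 1, velocity to 0

x = np.array([0.0, 0.0])
for _ in range(100):                        # closed loop: solve, apply first input, repeat
    # min ||Phi x + G u - xref||^2 + lam ||u||^2  as a ridge least-squares problem
    Als = np.vstack([G, np.sqrt(lam) * np.eye(N)])
    bls = np.concatenate([xref - Phi @ x, np.zeros(N)])
    u = np.linalg.lstsq(Als, bls, rcond=None)[0]
    x = A @ x + B.flatten() * u[0]
```

    With input constraints, as in the paper, the same cost becomes a constrained quadratic program; the instantaneous linearization mentioned in the abstract is what reduces the neural plant model to matrices like A and B at each step.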

  18. Hybrid Corporate Performance Prediction Model Considering Technical Capability

    Directory of Open Access Journals (Sweden)

    Joonhyuck Lee

    2016-07-01

    Many studies have tried to predict corporate performance and stock prices to enhance investment profitability using qualitative approaches such as the Delphi method. However, developments in data processing technology and machine-learning algorithms have prompted efforts to develop quantitative prediction models in various managerial subject areas. We propose a quantitative corporate performance prediction model that applies the support vector regression (SVR) algorithm, which mitigates the overfitting of training data and can be applied to regression problems. The proposed model optimizes the SVR training parameters based on the training data, using a genetic algorithm, to achieve sustainable predictability in changeable markets and managerial environments. Technology-intensive companies represent an increasing share of the total economy. The performance and stock prices of these companies are affected by their financial standing and their technological capabilities. Therefore, we apply both financial and technical indicators to establish the proposed prediction model. Here, we use time series data, including financial, patent, and corporate performance information, for 44 electronics and IT companies. Then, we predict the performance of these companies as an empirical verification of the prediction performance of the proposed model.

  19. Dynamic Simulation of Human Gait Model With Predictive Capability.

    Science.gov (United States)

    Sun, Jinming; Wu, Shaoli; Voglewede, Philip A

    2018-03-01

    In this paper, it is proposed that the central nervous system (CNS) controls human gait using a predictive control approach in conjunction with classical feedback control instead of exclusive classical feedback control theory that controls based on past error. To validate this proposition, a dynamic model of human gait is developed using a novel predictive approach to investigate the principles of the CNS. The model developed includes two parts: a plant model that represents the dynamics of human gait and a controller that represents the CNS. The plant model is a seven-segment, six-joint model that has nine degrees-of-freedom (DOF). The plant model is validated using data collected from able-bodied human subjects. The proposed controller utilizes model predictive control (MPC). MPC uses an internal model to predict the output in advance, compare the predicted output to the reference, and optimize the control input so that the predicted error is minimal. To decrease the complexity of the model, two joints are controlled using a proportional-derivative (PD) controller. The developed predictive human gait model is validated by simulating able-bodied human gait. The simulation results show that the developed model is able to simulate the kinematic output close to experimental data.

  20. Massive Predictive Modeling using Oracle R Enterprise

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...

  1. Prediction of critical heat flux in fuel assemblies using a CHF table method

    Energy Technology Data Exchange (ETDEWEB)

    Chun, Tae Hyun; Hwang, Dae Hyun; Bang, Je Geon [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of); Baek, Won Pil; Chang, Soon Heung [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1998-12-31

    A CHF table method has been assessed in this study for rod bundle CHF predictions. At the conceptual design stage for a new reactor, a general critical heat flux (CHF) prediction method with a wide applicable range and reasonable accuracy is essential to the thermal-hydraulic design and safety analysis. In many aspects, a CHF table method (i.e., the use of a round tube CHF table with appropriate bundle correction factors) can be a promising way to fulfill this need. So the assessment of the CHF table method has been performed with the bundle CHF data relevant to pressurized water reactors (PWRs). For comparison purposes, W-3R and EPRI-1 were also applied to the same data base. Data analysis has been conducted with the subchannel code COBRA-IV-I. The CHF table method shows the best predictions based on the direct substitution method. Improvements of the bundle correction factors, especially for the spacer grid and cold wall effects, are desirable for better predictions. Though the present assessment is somewhat limited in both fuel geometries and operating conditions, the CHF table method clearly shows potential to be a general CHF predictor. 8 refs., 3 figs., 3 tabs. (Author)

  3. Prediction of residential radon exposure of the whole Swiss population: comparison of model-based predictions with measurement-based predictions.

    Science.gov (United States)

    Hauri, D D; Huss, A; Zimmermann, F; Kuehni, C E; Röösli, M

    2013-10-01

    Radon plays an important role in human exposure to natural sources of ionizing radiation. The aim of this article is to compare two approaches to estimating mean radon exposure in the Swiss population: model-based predictions at the individual level, and measurement-based predictions based on measurements aggregated at the municipality level. A nationwide model was used to predict radon levels in each household and for each individual based on the corresponding tectonic unit, building age, building type, soil texture, degree of urbanization, and floor. Measurement-based predictions were carried out within a health impact assessment on residential radon and lung cancer. Mean measured radon levels were corrected for the average floor distribution and weighted with the population size of each municipality. Model-based predictions yielded a mean radon exposure of the Swiss population of 84.1 Bq/m³; measurement-based predictions yielded an average exposure of 78 Bq/m³. This study demonstrates that the model- and measurement-based predictions provide similar results. The advantage of the measurement-based approach is its simplicity, which is sufficient for assessing the exposure distribution in a population. The model-based approach allows radon levels to be predicted at specific sites, which is needed in an epidemiological study, and its results do not depend on how the measurement sites were selected. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
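
    The measurement-based aggregation described above reduces to a population-weighted mean of municipality-level radon concentrations. A minimal sketch with invented numbers (the Swiss data are not reproduced here):

```python
# Population-weighted mean radon exposure from municipality aggregates.
# The (mean Bq/m3, population) pairs are purely illustrative.
munis = [
    (60.0, 20000),
    (95.0, 5000),
    (120.0, 1000),
]
total_pop = sum(pop for _, pop in munis)
weighted_mean = sum(radon * pop for radon, pop in munis) / total_pop
print(round(weighted_mean, 1))   # exposure estimate for the whole population
```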

  4. A burnout prediction model based around char morphology

    Energy Technology Data Exchange (ETDEWEB)

    T. Wu; E. Lester; M. Cloke [University of Nottingham, Nottingham (United Kingdom). Nottingham Energy and Fuel Centre

    2005-07-01

    Poor burnout in a coal-fired power plant carries marked penalties in the form of reduced energy efficiency and elevated waste material that cannot be utilized. The prediction of coal combustion behaviour in a furnace is of great significance, providing valuable information not only for process optimization but also for coal buyers in the international market. Coal combustion models have been developed that can make predictions about burnout behaviour and burnout potential. Most of these kinetic models require standard parameters such as volatile content, particle size, and assumed char porosity in order to make a burnout prediction. This paper presents a new model, the Char Burnout Model (ChB), that also uses detailed information about char morphology in its prediction. The model can use data input from one of two sources, both derived from image analysis techniques: the first from individual analysis and characterization of real char types using an automated program, the second from predicted char types based on data collected during the automated image analysis of coal particles. Modelling results were compared with a different carbon burnout kinetic model and with burnout data from re-firing the chars in a drop tube furnace operating at 1300°C and 5% oxygen across several residence times. The improved agreement between the ChB model and the DTF experimental data shows that the inclusion of char morphology in combustion models can improve model predictions. 27 refs., 4 figs., 4 tabs.

  5. Clinical Prediction Models for Cardiovascular Disease: Tufts Predictive Analytics and Comparative Effectiveness Clinical Prediction Model Database.

    Science.gov (United States)

    Wessler, Benjamin S; Lai Yh, Lana; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S; Kent, David M

    2015-07-01

    Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease, there are numerous CPMs available although the extent of this literature is not well described. We conducted a systematic review for articles containing CPMs for cardiovascular disease published between January 1990 and May 2012. Cardiovascular disease includes coronary heart disease, heart failure, arrhythmias, stroke, venous thromboembolism, and peripheral vascular disease. We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year is increasing steadily over time. Seven hundred seventeen (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions, including 215 CPMs for patients with coronary artery disease, 168 CPMs for population samples, and 79 models for patients with heart failure. There are 77 distinct index/outcome pairings. Of the de novo models in this database, 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. There is an abundance of CPMs available for a wide assortment of cardiovascular disease conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models and the actual and potential clinical impact of this body of literature is poorly understood. © 2015 American Heart Association, Inc.

  6. Prediction of resource volumes at untested locations using simple local prediction models

    Science.gov (United States)

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2006-01-01

    This paper shows how local spatial nonparametric prediction models can be applied to estimate volumes of recoverable gas resources at individual undrilled sites, at multiple sites on a regional scale, and to compute confidence bounds for regional volumes based on the distribution of those estimates. An approach that combines cross-validation, the jackknife, and bootstrap procedures is used to accomplish this task. Simulation experiments show that cross-validation can be applied beneficially to select an appropriate prediction model. The cross-validation procedure worked well for a wide range of different states of nature and levels of information. Jackknife procedures are used to compute individual prediction estimation errors at undrilled locations. The jackknife replicates also are used with a bootstrap resampling procedure to compute confidence bounds for the total volume. The method was applied to data (partitioned into a training set and target set) from the Devonian Antrim Shale continuous-type gas play in the Michigan Basin in Otsego County, Michigan. The analysis showed that the model estimate of total recoverable volumes at prediction sites is within 4 percent of the total observed volume. The model predictions also provide frequency distributions of the cell volumes at the production unit scale. Such distributions are the basis for subsequent economic analyses. © Springer Science+Business Media, LLC 2007.
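
    The resampling idea can be sketched as follows: predict per-site volumes, then bootstrap the training data to put confidence bounds on the regional total. The simple mean predictor below stands in for the local spatial model, and the data are synthetic; only the bootstrap-bounds pattern follows the abstract.

```python
# Bootstrap confidence bounds for a regional total volume (synthetic data).
import random

random.seed(0)
train = [random.gauss(10.0, 2.0) for _ in range(50)]   # observed site volumes
target_sites = 20                                       # undrilled sites

point_pred = sum(train) / len(train)                    # per-site prediction
total_pred = point_pred * target_sites                  # regional total

# Resample the training data to get a distribution of regional totals.
totals = []
for _ in range(2000):
    resample = [random.choice(train) for _ in train]
    totals.append(sum(resample) / len(resample) * target_sites)
totals.sort()
lo, hi = totals[int(0.025 * len(totals))], totals[int(0.975 * len(totals))]
print(total_pred, lo, hi)   # point estimate with a 95% bootstrap interval
```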

  7. Load-Unload Response Ratio and Accelerating Moment/Energy Release Critical Region Scaling and Earthquake Prediction

    Science.gov (United States)

    Yin, X. C.; Mora, P.; Peng, K.; Wang, Y. C.; Weatherley, D.

    The main idea of the Load-Unload Response Ratio (LURR) is that when a system is stable, its response to loading corresponds to its response to unloading, whereas when the system is approaching an unstable state, the response to loading and unloading becomes quite different. High LURR values and observations of Accelerating Moment/Energy Release (AMR/AER) prior to large earthquakes have led different research groups to suggest intermediate-term earthquake prediction is possible and imply that the LURR and AMR/AER observations may have a similar physical origin. To study this possibility, we conducted a retrospective examination of several Australian and Chinese earthquakes with magnitudes ranging from 5.0 to 7.9, including Australia's deadly Newcastle earthquake and the devastating Tangshan earthquake. Both LURR values and best-fit power-law time-to-failure functions were computed using data within a range of distances from the epicenter. Like the best-fit power-law fits in AMR/AER, the LURR value was optimal using data within a certain epicentral distance implying a critical region for LURR. Furthermore, LURR critical region size scales with mainshock magnitude and is similar to the AMR/AER critical region size. These results suggest a common physical origin for both the AMR/AER and LURR observations. Further research may provide clues that yield an understanding of this mechanism and help lead to a solid foundation for intermediate-term earthquake prediction.

  8. A burnout prediction model based around char morphology

    Energy Technology Data Exchange (ETDEWEB)

    Tao Wu; Edward Lester; Michael Cloke [University of Nottingham, Nottingham (United Kingdom). School of Chemical, Environmental and Mining Engineering

    2006-05-15

    Several combustion models have been developed that can make predictions about coal burnout and burnout potential. Most of these kinetic models require standard parameters such as volatile content and particle size to make a burnout prediction. This article presents a new model called the char burnout (ChB) model, which also uses detailed information about char morphology in its prediction. The input data to the model are based on information derived from two different image analysis techniques: one generates characterization data from real char samples, and the other predicts char types based on characterization data from image analysis of coal particles. The pyrolyzed chars in this study were created in a drop tube furnace operating at 1300°C, 200 ms, and 1% oxygen. Modeling results were compared with a different carbon burnout kinetic model as well as the actual burnout data from refiring the same chars in a drop tube furnace operating at 1300°C, 5% oxygen, and residence times of 200, 400, and 600 ms. The good agreement between the ChB model and the experimental data indicates that the inclusion of char morphology in combustion models could well improve model predictions. 38 refs., 5 figs., 6 tabs.

  9. Comparative Study of Bankruptcy Prediction Models

    Directory of Open Access Journals (Sweden)

    Isye Arieshanti

    2013-09-01

    Full Text Available Early indication of bankruptcy is important for a company. If a company is aware of its bankruptcy potential, it can take preventive action. To detect this potential, a company can utilize a bankruptcy prediction model, which can be built using machine learning methods. However, the choice of machine learning method should be made carefully, because the suitability of a model depends on the specific problem. Therefore, in this paper we perform a comparative study of several machine learning methods for bankruptcy prediction. Comparing the performance of several models based on machine learning methods (k-NN, fuzzy k-NN, SVM, Bagging Nearest Neighbour SVM, Multilayer Perceptron (MLP), and a hybrid of MLP + Multiple Linear Regression), the study shows that the fuzzy k-NN method achieves the best performance, with an accuracy of 77.5%.

  10. A grey NGM(1,1, k) self-memory coupling prediction model for energy consumption prediction.

    Science.gov (United States)

    Guo, Xiaojun; Liu, Sifeng; Wu, Lifeng; Tang, Lingling

    2014-01-01

    Energy consumption prediction is an important issue for governments, energy sector investors, and other related corporations. Although several prediction techniques exist, selection of the most appropriate technique is of vital importance. For the approximately nonhomogeneous exponential data sequences that often emerge in energy systems, a novel grey NGM(1,1, k) self-memory coupling prediction model is put forward to improve predictive performance. It achieves an organic integration of the self-memory principle of dynamic systems and the grey NGM(1,1, k) model; the traditional grey model's sensitivity to initial values can be overcome by the self-memory principle. In this study, the total energy, coal, and electricity consumption of China is adopted for demonstration using the proposed coupling prediction technique. The results show the superiority of the NGM(1,1, k) self-memory coupling prediction model when compared with results from the literature. Its excellent prediction performance lies in the fact that the proposed coupling model can take full advantage of systematic multi-time historical data and capture the stochastic fluctuation tendency. This work also makes a significant contribution to the enrichment of grey prediction theory and the extension of its application span.
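
    For orientation, the classical GM(1,1) model that the NGM(1,1, k) variant builds on can be sketched compactly: accumulate the series, fit the development coefficient and grey input by least squares, and restore predictions from the fitted exponential. The self-memory coupling and the non-homogeneous k term of the abstract's model are omitted here.

```python
# Classical GM(1,1) grey model: the base of the NGM(1,1,k) family (sketch).
import math

def gm11_fit(x0):
    # 1-AGO accumulation, background values z, and least squares for (a, b)
    x1, s = [], 0.0
    for v in x0:
        s += v
        x1.append(s)
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, len(x1))]
    y = x0[1:]
    # normal equations for the grey equation y = -a*z + b
    n = len(z)
    szz, sz = sum(v * v for v in z), sum(z)
    szy, sy = sum(zi * yi for zi, yi in zip(z, y)), sum(y)
    det = szz * n - sz * sz
    a = -(szy * n - sz * sy) / det   # development coefficient
    b = (szz * sy - sz * szy) / det  # grey input
    return a, b

def gm11_predict(x0, a, b, k):
    # k-th value (0-based) of the restored series via the time response function
    x1 = lambda t: (x0[0] - b / a) * math.exp(-a * t) + b / a
    return x0[0] if k == 0 else x1(k) - x1(k - 1)
```

On a near-exponential series the fitted model extrapolates the growth trend; strongly fluctuating data is where self-memory coupling is claimed to help.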

  11. Critically Important Object Security System Element Model

    Directory of Open Access Journals (Sweden)

    I. V. Khomyackov

    2012-03-01

    Full Text Available A stochastic model of a security system element for a critically important object has been developed. The model includes a mathematical description of the security system element's properties and of external influences. The state evolution of the security system element is described by a semi-Markov process with a finite number of states, defined by the semi-Markov matrix and the initial distribution of the semi-Markov process state probabilities. External influences are modelled as a Poisson flow with a given intensity.
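
    A minimal simulation of such an element can illustrate the ingredients: a finite state set, an embedded transition matrix, and random sojourn times. The exponential sojourn times, the three states, and all rates below are illustrative assumptions (the paper's Poisson-flow influences would modulate these inputs).

```python
# Sketch: simulating a finite-state semi-Markov element (illustrative numbers).
import random

random.seed(1)
P = {0: [(1, 0.7), (2, 0.3)],      # embedded transition probabilities
     1: [(0, 0.6), (2, 0.4)],
     2: [(0, 1.0)]}
rate = {0: 1.0, 1: 0.5, 2: 2.0}    # sojourn-time rates per state

def step(state):
    # draw a sojourn time, then a successor state from the embedded chain
    dwell = random.expovariate(rate[state])
    r, acc = random.random(), 0.0
    for nxt, p in P[state]:
        acc += p
        if r <= acc:
            return nxt, dwell
    return P[state][-1][0], dwell

state, t = 0, 0.0
for _ in range(1000):
    state, dwell = step(state)
    t += dwell                      # total simulated time
```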

  12. Risk predictive modelling for diabetes and cardiovascular disease.

    Science.gov (United States)

    Kengne, Andre Pascal; Masconi, Katya; Mbanya, Vivian Nchanchou; Lekoubou, Alain; Echouffo-Tcheugui, Justin Basile; Matsha, Tandi E

    2014-02-01

    Absolute risk models or clinical prediction models have been incorporated in guidelines, and are increasingly advocated as tools to assist risk stratification and guide prevention and treatment decisions relating to common health conditions such as cardiovascular disease (CVD) and diabetes mellitus. We have reviewed the historical development and principles of prediction research, including their statistical underpinning, as well as implications for routine practice, with a focus on predictive modelling for CVD and diabetes. Predictive modelling for CVD risk, which has developed over the last five decades, has been largely influenced by the Framingham Heart Study investigators, while it is only ∼20 years ago that similar efforts were started in the field of diabetes. Identification of predictive factors is an important preliminary step which provides the knowledge base on potential predictors to be tested for inclusion during the statistical derivation of the final model. The derived models must then be tested both on the development sample (internal validation) and on other populations in different settings (external validation). Updating procedures (e.g. recalibration) should be used to improve the performance of models that fail the tests of external validation. Ultimately, the effect of introducing validated models in routine practice on the process and outcomes of care as well as its cost-effectiveness should be tested in impact studies before wide dissemination of models beyond the research context. Several prediction models have been developed for CVD or diabetes, but very few have been externally validated or tested in impact studies, and their comparative performance has yet to be fully assessed. A shift of focus from developing new CVD or diabetes prediction models to validating the existing ones will improve their adoption in routine practice.
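
    One common updating procedure of the kind the review mentions is "recalibration-in-the-large": shifting a logistic model's intercept so that the mean predicted risk matches the observed event rate in a new population. The bisection solver and example linear predictors below are a sketch, not any specific published model.

```python
# Recalibration-in-the-large: shift the intercept of a logistic model so that
# mean predicted probability equals the observed event rate (sketch).
import math

def recalibrate_intercept(lin_preds, observed_rate, iters=200):
    # find delta with mean(sigmoid(lp + delta)) == observed_rate by bisection
    mean_p = lambda d: sum(1.0 / (1.0 + math.exp(-(lp + d)))
                           for lp in lin_preds) / len(lin_preds)
    lo, hi = -10.0, 10.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_p(mid) < observed_rate:
            lo = mid                 # need a larger shift
        else:
            hi = mid
    return 0.5 * (lo + hi)

lps = [-2.0, -1.0, 0.0, 1.0]         # linear predictors from the original model
delta = recalibrate_intercept(lps, 0.30)
```

The covariate coefficients stay fixed; only the baseline risk is updated, which is why this is the mildest form of model updating.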

  13. Model-based uncertainty in species range prediction

    DEFF Research Database (Denmark)

    Pearson, R. G.; Thuiller, Wilfried; Bastos Araujo, Miguel

    2006-01-01

    Aim Many attempts to predict the potential range of species rely on environmental niche (or 'bioclimate envelope') modelling, yet the effects of using different niche-based methodologies require further investigation. Here we investigate the impact that the choice of model can have on predictions...

  14. Survival prediction model for postoperative hepatocellular carcinoma patients.

    Science.gov (United States)

    Ren, Zhihui; He, Shasha; Fan, Xiaotang; He, Fangping; Sang, Wei; Bao, Yongxing; Ren, Weixin; Zhao, Jinming; Ji, Xuewen; Wen, Hao

    2017-09-01

    This study aims to establish a predictive index (PI) model of the 5-year survival rate for patients with hepatocellular carcinoma (HCC) after radical resection and to evaluate its prediction sensitivity, specificity, and accuracy. Patients who underwent HCC surgical resection were enrolled and randomly divided into a prediction model group (101 patients) and a model evaluation group (100 patients). The Cox regression model was used for univariate and multivariate survival analysis. A PI model was established based on the multivariate analysis, and a receiver operating characteristic (ROC) curve was drawn accordingly; the area under the ROC curve (AUROC) and the PI cutoff value were identified. Multiple Cox regression analysis of the prediction model group showed that neutrophil to lymphocyte ratio (NLR), histological grade (HG), microvascular invasion (MVI), positive resection margin (PRM), number of tumors (NT), and postoperative transcatheter arterial chemoembolization (TACE) treatment were independent predictors of the 5-year survival rate for HCC patients. The model was PI = 0.377 × NLR + 0.554 × HG + 0.927 × PRM + 0.778 × MVI + 0.740 × NT - 0.831 × TACE. In the prediction model group, the AUROC was 0.832 and the PI cutoff value was 3.38; the sensitivity, specificity, and accuracy were 78.0%, 80.0%, and 79.2%, respectively. In the model evaluation group, the AUROC was 0.822, and the PI cutoff value corresponded well to the prediction model group, with sensitivity, specificity, and accuracy of 85.0%, 83.3%, and 84.0%, respectively. The PI model can quantify the mortality risk of hepatitis B related HCC with high sensitivity, specificity, and accuracy.
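
    The published PI formula and cutoff translate directly into code. The coefficients and the 3.38 cutoff come from the abstract; the variable encodings (binary 0/1 indicators for PRM, MVI, TACE, etc.) and the example patient values are assumptions for illustration.

```python
# Predictive index (PI) for 5-year survival after HCC resection, per the
# abstract's coefficients. Input encodings are assumed (e.g. 0/1 indicators).
def predictive_index(nlr, hg, prm, mvi, nt, tace):
    """PI = 0.377*NLR + 0.554*HG + 0.927*PRM + 0.778*MVI + 0.740*NT - 0.831*TACE"""
    return (0.377 * nlr + 0.554 * hg + 0.927 * prm
            + 0.778 * mvi + 0.740 * nt - 0.831 * tace)

def high_risk(pi, cutoff=3.38):
    # Patients with PI above the reported cutoff are flagged as high risk.
    return pi > cutoff

# Hypothetical patient: NLR 3.0, grade 2, positive margin, MVI, one tumor, no TACE.
pi = predictive_index(nlr=3.0, hg=2, prm=1, mvi=1, nt=1, tace=0)
print(round(pi, 3), high_risk(pi))
```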

  15. Predicting fatigue and psychophysiological test performance from speech for safety critical environments

    Directory of Open Access Journals (Sweden)

    Khan Richard Baykaner

    2015-08-01

    Full Text Available Automatic systems for estimating operator fatigue have application in safety-critical environments. A system which could estimate the level of fatigue from speech would have application in domains where operators engage in regular verbal communication as part of their duties. Previous studies on the prediction of fatigue from speech have been limited by their reliance on subjective ratings and by their lack of comparison to other methods for assessing fatigue. In this paper we present an analysis of voice recordings and psychophysiological test scores collected from seven aerospace personnel during a training task in which they remained awake for 60 hours. We show that voice features and test scores are affected by both the total time spent awake and the time position within each subject's circadian cycle. However, we show that time spent awake and time of day information are poor predictors of the test results, while voice features can give good predictions of the psychophysiological test scores and sleep latency. Mean absolute prediction errors are about 17.5% for sleep latency and 5-12% for the test scores. We discuss the implications for the use of voice as a means to monitor the effects of fatigue on cognitive performance in practical applications.

  16. Methodology for Designing Models Predicting Success of Infertility Treatment

    OpenAIRE

    Alireza Zarinara; Mohammad Mahdi Akhondi; Hojjat Zeraati; Koorsh Kamali; Kazem Mohammad

    2016-01-01

    Abstract Background: Prediction models for infertility treatment success have been presented for 25 years. There are scientific principles for designing and applying prediction models, and these are also used to predict the success rate of infertility treatment. The purpose of this study is to provide basic principles for designing models to predict infertility treatment success. Materials and Methods: In this paper, the principles for developing predictive models are explained and...

  17. EFFECT OF PROBLEM-BASED LEARNING MODEL AND CRITICAL THINKING ABILITY ON PROBLEM-SOLVING SKILLS

    Directory of Open Access Journals (Sweden)

    Unita S. Zuliani Nasution

    2016-12-01

    Full Text Available The purposes of this research were to analyze the difference in physics problem-solving ability between students taught with the problem-based learning model and those taught with the direct instruction model, the difference between students with above-average and below-average critical thinking ability, and the interaction of the problem-based learning model with critical thinking ability in students' physics problem-solving ability. This was a quasi-experimental study that used critical thinking ability tests and physics problem-solving ability tests as instruments. The results showed that students' physics problem-solving ability under the problem-based learning model was better than under the direct instruction model; that students with above-average critical thinking ability performed better than students with below-average critical thinking ability; and that there was an interaction between the problem-based learning model and critical thinking ability in improving students' physics problem-solving ability.

  18. A Comprehensive Assessment Model for Critical Infrastructure Protection

    Directory of Open Access Journals (Sweden)

    Häyhtiö Markus

    2017-12-01

    Full Text Available International business demands seamless service and IT infrastructure throughout the entire supply chain. However, dependencies between different parts of this vulnerable ecosystem form a fragile web. Assessment of the financial effects of any abnormality in any part of the network is required in order to protect the network in a financially viable way. The contractual environment between the actors in a supply chain, spanning different business domains and functions, requires a management model that enables network-wide protection of critical infrastructure. In this paper the authors introduce such a model. It can be used to assess the financial differences between centralized and decentralized protection of critical infrastructure. As an end result of this assessment, business resilience to unknown threats can be improved across the entire supply chain.

  19. Long-term modelling of nitrogen turnover and critical loads in a forested catchment using the INCA model

    Directory of Open Access Journals (Sweden)

    J.-J. Langusch

    2002-01-01

    Full Text Available Many forest ecosystems in Central Europe have reached the status of N saturation due to chronically high N deposition. In consequence, NO3 leaching into ground- and surface waters is often substantial. Critical loads have been defined to abate the negative consequences of NO3 leaching, such as soil acidification and nutrient losses. The steady state mass balance method is normally used to calculate critical loads for N deposition in forest ecosystems. However, the steady state mass balance approach is limited because it does not take into account hydrology and the time until the steady state is reached. The aim of this study was to test the suitability of another approach: the dynamic model INCA (Integrated Nitrogen Model for European Catchments). Long-term effects of changing N deposition and critical loads for N were simulated using INCA for the Lehstenbach spruce catchment (Fichtelgebirge, NE Bavaria, Germany) under different hydrological conditions. Long-term scenarios of either increasing or decreasing N deposition indicated that, in this catchment, the response of nitrate concentrations in runoff to changing N deposition is buffered by a large groundwater reservoir. The critical load simulated by the INCA model with respect to a nitrate concentration of 0.4 mg N l⁻¹ as the threshold value in runoff was 9.7 kg N ha⁻¹ yr⁻¹, compared to 10 kg N ha⁻¹ yr⁻¹ for the steady state model. Under conditions of lower precipitation (520 mm) the resulting critical load was 7.7 kg N ha⁻¹ yr⁻¹, suggesting the necessity of accounting for different hydrological conditions when calculating critical loads. The INCA model seems to be suitable for calculating critical loads for N in forested catchments under varying hydrological conditions, e.g. as a consequence of climate change. Keywords: forest ecosystem, N saturation, critical load, modelling, long-term scenario, nitrate leaching, critical loads reduction, INCA

  20. Skill of Predicting Heavy Rainfall Over India: Improvement in Recent Years Using UKMO Global Model

    Science.gov (United States)

    Sharma, Kuldeep; Ashrit, Raghavendra; Bhatla, R.; Mitra, A. K.; Iyengar, G. R.; Rajagopal, E. N.

    2017-11-01

    The quantitative precipitation forecast (QPF) performance for heavy rains is still a challenge, even for the most advanced, state-of-the-art high-resolution Numerical Weather Prediction (NWP) modeling systems. This study aims to evaluate the performance of the UK Met Office Unified Model (UKMO) over India for the prediction of high rainfall amounts (>2 and >5 cm/day) during the monsoon period (JJAS) from 2007 to 2015 in short range forecasts up to Day 3. Among the various modeling upgrades and improvements in the parameterizations during this period, the model horizontal resolution improved from 40 km in 2007 to 17 km in 2015. The skill of short range rainfall forecasts has improved in the UKMO model in recent years, mainly due to increased horizontal and vertical resolution along with improved physics schemes. Categorical verification carried out using four verification metrics, namely probability of detection (POD), false alarm ratio (FAR), frequency bias (Bias) and Critical Success Index (CSI), indicates that POD and FAR have improved by more than 29% and 24%, respectively. Additionally, verification scores such as EDS (Extreme Dependency Score), EDI (Extremal Dependence Index) and SEDI (Symmetric EDI) are used, with special emphasis on the verification of extreme and rare rainfall events. These scores also show an improvement of 60% (EDS) and >34% (EDI and SEDI) during the period of study, suggesting an improved skill in predicting heavy rains.
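
    The four categorical scores named in the abstract are standard functions of a 2×2 contingency table of forecast versus observed threshold exceedances (e.g. rain >2 cm/day). The counts below are invented for illustration.

```python
# Standard contingency-table verification scores: POD, FAR, bias, CSI.
def scores(hits, false_alarms, misses):
    pod = hits / (hits + misses)                      # probability of detection
    far = false_alarms / (hits + false_alarms)        # false alarm ratio
    bias = (hits + false_alarms) / (hits + misses)    # frequency bias
    csi = hits / (hits + false_alarms + misses)       # critical success index
    return pod, far, bias, csi

# Illustrative counts: 40 hits, 20 false alarms, 10 misses.
pod, far, bias, csi = scores(hits=40, false_alarms=20, misses=10)
print(pod, far, bias, csi)   # POD=0.8, FAR≈0.33, Bias=1.2, CSI≈0.57
```

A bias above 1 means the threshold is forecast more often than observed; CSI penalizes both misses and false alarms, which is why it is commonly reported alongside POD and FAR.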

  1. Automatic generation of predictive dynamic models reveals nuclear phosphorylation as the key Msn2 control mechanism.

    Science.gov (United States)

    Sunnåker, Mikael; Zamora-Sillero, Elias; Dechant, Reinhard; Ludwig, Christina; Busetto, Alberto Giovanni; Wagner, Andreas; Stelling, Joerg

    2013-05-28

    Predictive dynamical models are critical for the analysis of complex biological systems. However, methods to systematically develop and discriminate among systems biology models are still lacking. We describe a computational method that incorporates all hypothetical mechanisms about the architecture of a biological system into a single model and automatically generates a set of simpler models compatible with observational data. As a proof of principle, we analyzed the dynamic control of the transcription factor Msn2 in Saccharomyces cerevisiae, specifically the short-term mechanisms mediating the cells' recovery after release from starvation stress. Our method determined that 12 of 192 possible models were compatible with available Msn2 localization data. Iterations between model predictions and rationally designed phosphoproteomics and imaging experiments identified a single-circuit topology with a relative probability of 99% among the 192 models. Model analysis revealed that the coupling of dynamic phenomena in Msn2 phosphorylation and transport could lead to efficient stress response signaling by establishing a rate-of-change sensor. Similar principles could apply to mammalian stress response pathways. Systematic construction of dynamic models may yield detailed insight into nonobvious molecular mechanisms.

  2. Uncertainty quantification's role in modeling and simulation planning, and credibility assessment through the predictive capability maturity model

    Energy Technology Data Exchange (ETDEWEB)

    Rider, William J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Witkowski, Walter R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mousseau, Vincent Andrew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-04-13

    The importance of credible, trustworthy numerical simulations is obvious, especially when the results are used to make high-consequence decisions. Determining the credibility of such numerical predictions is much more difficult and requires a systematic approach to assessing predictive capability, associated uncertainties, and overall confidence in the computational simulation process for the intended use of the model. This process begins with an evaluation of the computational modeling of the identified, important physics of the simulation for its intended use, commonly done through a Phenomena Identification and Ranking Table (PIRT). An assessment of the evidence basis supporting the ability to computationally simulate these physics can then be performed using frameworks such as the Predictive Capability Maturity Model (PCMM). Several critical activities follow, in the areas of code and solution verification, validation, and uncertainty quantification; these are described in detail in the following sections. Here, we introduce the subject matter for general applications, but specifics are given for the failure prediction project. In addition, the first task that must be completed in the verification and validation procedure is a credibility assessment, to fully understand the requirements and limitations of the current computational simulation capability for the specific intended use. The PIRT and PCMM are tools used at Sandia National Laboratories (SNL) to perform such an assessment in a consistent manner. Ideally, all stakeholders should be represented and should contribute to an accurate credibility assessment. PIRTs and PCMMs are both described briefly below, and the resulting assessments for an example project are given.

  3. Assessment of critical flow models of RELAP5-MOD2 and CATHARE codes

    International Nuclear Information System (INIS)

    Hao Laomi; Zhu Zhanchuan

    1992-01-01

    Critical flow tests for long and short nozzles conducted on the SUPER MOBY-DICK facility were analyzed using the RELAP5-MOD2 and CATHARE 1.3 codes to assess the critical flow models of the two codes. The critical mass fluxes calculated for the two nozzles are given. The CATHARE code uses the thermodynamic nonequilibrium sound velocity of the two-phase fluid as its critical flow criterion, has better interphase transfer models, and calculates the critical flow velocities with a fully implicit solution. It therefore reproduces the critical flow rate well and captures the effect of the nozzle geometry (L/D) on the critical flow rate.

  4. Hidden Semi-Markov Models for Predictive Maintenance

    Directory of Open Access Journals (Sweden)

    Francesco Cartella

    2015-01-01

    Realistic predictive maintenance approaches are essential for condition monitoring and predictive maintenance of industrial machines. In this work, we propose Hidden Semi-Markov Models (HSMMs) with (i) no constraints on the state duration density function and (ii) applicability to either continuous or discrete observations. To deal with this type of HSMM, we also propose modifications to the learning, inference, and prediction algorithms. Finally, automatic model selection is made possible using the Akaike Information Criterion. This paper describes the theoretical formalization of the model as well as several experiments performed on simulated and real data to validate the methodology. In all experiments, the model correctly estimates the current state and effectively predicts the time to a predefined event with a low overall average absolute error. Its applicability to real-world settings can therefore be beneficial, especially where the Remaining Useful Lifetime (RUL) of the machine must be calculated in real time.
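
    The AIC-based model-selection step described above can be illustrated in isolation. This is a minimal sketch, not the authors' HSMM code: the candidate names, log-likelihoods, and parameter counts below are all hypothetical.

```python
def aic(log_likelihood, n_params):
    """Akaike Information Criterion: 2k - 2*ln(L); lower is better."""
    return 2 * n_params - 2 * log_likelihood

def select_model(candidates):
    """candidates: {name: (log_likelihood, n_params)}; return the name with minimal AIC."""
    return min(candidates, key=lambda name: aic(*candidates[name]))

# Hypothetical fits: the richer models gain too little likelihood
# to justify their extra duration-density parameters.
fits = {
    "hsmm_2_states": (-1204.5, 10),
    "hsmm_3_states": (-1198.7, 21),
    "hsmm_4_states": (-1197.9, 36),
}
best = select_model(fits)
```

    The same criterion applies whatever the model family, which is why it supports "automatic" selection across candidate HSMM structures.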

  5. A comparative analysis of predictive models of morbidity in intensive care unit after cardiac surgery – Part I: model planning

    Directory of Open Access Journals (Sweden)

    Biagioli Bonizella

    2007-11-01

    Background: Different methods have recently been proposed for predicting morbidity in intensive care units (ICU). The aim of the present study was to critically review a number of approaches for developing models capable of estimating the probability of morbidity in ICU after heart surgery. The study is divided into two parts. In this first part, popular models used to estimate the probability of class membership are grouped into distinct categories according to their underlying mathematical principles. Modelling techniques and the intrinsic strengths and weaknesses of each model are analysed and discussed from a theoretical point of view, in consideration of clinical applications. Methods: Models based on Bayes rule, the k-nearest neighbour algorithm, logistic regression, scoring systems, and artificial neural networks are investigated. Key issues for model design are described. The mathematical treatment of some aspects of model structure is also included for readers interested in developing models, though a full understanding of the mathematical relationships is not necessary if the reader is only interested in the practical meaning of model assumptions, weaknesses, and strengths from a user point of view. Results: Scoring systems are very attractive due to their simplicity of use, although this may undermine their predictive capacity. Logistic regression models are trustworthy tools, although they suffer from the principal limitations of most regression procedures. Bayesian models seem to be a good compromise between complexity and predictive performance, but model recalibration is generally necessary. k-nearest neighbour may be a valid nonparametric technique, though computational cost and the need for large data storage are major weaknesses of this approach. Artificial neural networks have intrinsic advantages with respect to common statistical models, though the training process may be problematic. Conclusion: Knowledge of each model's assumptions, strengths, and weaknesses is essential for its appropriate clinical application.
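
    The scoring-system trade-off the authors describe (simplicity at a possible cost in predictive capacity) can be made concrete with a toy additive score mapped to a probability through a logistic link. All factor names, point values, and calibration coefficients here are invented for illustration, not taken from any published score.

```python
import math

# Hypothetical additive scoring system: integer points per risk factor,
# then a logistic link maps the total score to a morbidity probability.
POINTS = {"age_over_70": 2, "ejection_fraction_low": 3,
          "emergency_surgery": 4, "diabetes": 1}

def risk_score(patient):
    """Sum the points of the risk factors present in the patient record."""
    return sum(pts for factor, pts in POINTS.items() if patient.get(factor))

def score_to_probability(score, intercept=-4.0, slope=0.5):
    """Illustrative calibration; real coefficients come from model fitting."""
    return 1.0 / (1.0 + math.exp(-(intercept + slope * score)))

p = score_to_probability(risk_score({"age_over_70": True, "diabetes": True}))
```

    The simplicity is evident: the whole "model" is a lookup table plus one formula, which is exactly why such scores are easy to use at the bedside and why their discrimination can lag behind a full regression fit.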

  6. Computational neurorehabilitation: modeling plasticity and learning to predict recovery.

    Science.gov (United States)

    Reinkensmeyer, David J; Burdet, Etienne; Casadio, Maura; Krakauer, John W; Kwakkel, Gert; Lang, Catherine E; Swinnen, Stephan P; Ward, Nick S; Schweighofer, Nicolas

    2016-04-30

    Despite progress in using computational approaches to inform medicine and neuroscience in the last 30 years, there have been few attempts to model the mechanisms underlying sensorimotor rehabilitation. We argue that a fundamental understanding of neurologic recovery, and as a result accurate predictions at the individual level, will be facilitated by developing computational models of the salient neural processes, including plasticity and learning systems of the brain, and integrating them into a context specific to rehabilitation. Here, we therefore discuss Computational Neurorehabilitation, a newly emerging field aimed at modeling plasticity and motor learning to understand and improve movement recovery of individuals with neurologic impairment. We first explain how the emergence of robotics and wearable sensors for rehabilitation is providing data that make development and testing of such models increasingly feasible. We then review key aspects of plasticity and motor learning that such models will incorporate. We proceed by discussing how computational neurorehabilitation models relate to the current benchmark in rehabilitation modeling - regression-based, prognostic modeling. We then critically discuss the first computational neurorehabilitation models, which have primarily focused on modeling rehabilitation of the upper extremity after stroke, and show how even simple models have produced novel ideas for future investigation. Finally, we conclude with key directions for future research, anticipating that soon we will see the emergence of mechanistic models of motor recovery that are informed by clinical imaging results and driven by the actual movement content of rehabilitation therapy as well as wearable sensor-based records of daily activity.

  7. A scoring model for predicting prognosis of patients with severe fever with thrombocytopenia syndrome.

    Directory of Open Access Journals (Sweden)

    Bei Jia

    2017-09-01

    Severe fever with thrombocytopenia syndrome (SFTS) is an emerging epidemic infectious disease caused by the SFTS bunyavirus (SFTSV), with a high estimated case-fatality rate of 12.7% to 32.6%. The disease has been reported in mainland China, Japan, Korea, and the United States. At present, there is no specific antiviral therapy for SFTSV infection. Considering the high mortality rate and rapid clinical progression of SFTS, providing SFTS patients with appropriate supportive treatment in time is critical. It is therefore very important for clinicians to identify the SFTS cases most likely to have a poor prognosis or to die. In the present study, we established a simple and feasible model for assessing severity and predicting the prognosis of SFTS patients with high sensitivity and specificity. This model may aid physicians in promptly initiating treatment to block the rapid progression of the illness and reduce the fatality rate of SFTS patients.

  8. Implementation of a phenomenological DNB prediction model based on macroscale boiling flow processes in PWR fuel bundles

    International Nuclear Information System (INIS)

    Mohitpour, Maryam; Jahanfarnia, Gholamreza; Shams, Mehrzad

    2014-01-01

    Highlights: • A numerical framework was developed to mechanistically predict DNB in PWR bundles. • The DNB evaluation module was incorporated into the two-phase flow solver module. • Three-dimensional two-fluid model was the basis of two-phase flow solver module. • Liquid sublayer dryout model was adapted as CHF-triggering mechanism in DNB module. • Ability of DNB modeling approach was studied based on PSBT DNB tests in rod bundle. - Abstract: In this study, a numerical framework, comprising a two-phase flow subchannel solver module and a Departure from Nucleate Boiling (DNB) evaluation module, was developed to mechanistically predict DNB in rod bundles of a Pressurized Water Reactor (PWR). In this regard, the liquid sublayer dryout model was adapted as the Critical Heat Flux (CHF) triggering mechanism to reduce the dependency of the DNB evaluation module on empirical correlations. To predict local flow boiling processes, a three-dimensional two-fluid formalism coupled with heat conduction was selected as the basic tool for the development of the two-phase flow subchannel analysis solver. The DNB modeling approach was evaluated against the OECD/NRC NUPEC PWR Bundle tests (PSBT Benchmark), which supply an extensive database for the development of truly mechanistic and consistent models of boiling transition and CHF. The results of the analyses demonstrated the need for additional assessment of the subcooled boiling model and the bulk condensation model implemented in the two-phase flow solver module. The proposed model slightly under-predicts the DNB power in comparison with the steady-state benchmark measurements; this prediction is nevertheless acceptable compared with other codes, and the model behaves conservatively. The axial and radial positions of the first detected DNB were also examined through code-to-code comparisons on the basis of the PSBT data.

  9. Possibilities and Limitations of Applying Software Reliability Growth Models to Safety- Critical Software

    International Nuclear Information System (INIS)

    Kim, Man Cheol; Jang, Seung Cheol; Ha, Jae Joo

    2006-01-01

    As digital systems are gradually introduced to nuclear power plants (NPPs), the need to quantitatively analyze the reliability of digital systems is also increasing. Kang and Sung identified (1) software reliability, (2) common-cause failures (CCFs), and (3) fault coverage as the three most critical factors in the reliability analysis of digital systems. For reliability estimation of safety-critical software (the software used in safety-critical digital systems), Bayesian Belief Networks (BBNs) appear to be the most widely used approach. The use of BBNs in reliability estimation of safety-critical software is essentially a process of indirectly assigning a reliability based on various observed information and experts' opinions. When software testing results or software failure histories are available, the reliability of the software can instead be estimated directly using software reliability growth models such as the Jelinski-Moranda model and Goel-Okumoto's nonhomogeneous Poisson process (NHPP) model. Even though it is generally held that software reliability growth models cannot be applied to safety-critical software, because few failures are expected from the testing of such software, we try to identify the possibilities, and the corresponding limitations, of applying software reliability growth models to safety-critical software.
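
    The Goel-Okumoto NHPP model mentioned above can be sketched directly from its standard mean value function m(t) = a(1 − e^(−bt)), where a is the total expected number of faults and b the per-fault detection rate. The parameter values below are illustrative, not fitted to any real failure data.

```python
import math

def go_mean_failures(t, a, b):
    """Goel-Okumoto NHPP mean value function: expected cumulative
    failures observed by test time t."""
    return a * (1.0 - math.exp(-b * t))

def go_reliability(t, s, a, b):
    """Probability of no failure in (s, s+t], given testing up to time s:
    exp(-(m(s+t) - m(s))) for an NHPP."""
    return math.exp(-(go_mean_failures(s + t, a, b) - go_mean_failures(s, a, b)))

# Illustrative parameters; in practice a and b are fitted to failure-time data,
# which is exactly where safety-critical software (few observed failures) struggles.
m100 = go_mean_failures(100.0, a=30.0, b=0.02)
```

    The fitting step is what the abstract's caveat is about: with very few observed failures, estimates of a and b are poorly constrained, which limits the model's applicability to safety-critical software.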

  10. Modeling and Control of CSTR using Model based Neural Network Predictive Control

    OpenAIRE

    Shrivastava, Piyush

    2012-01-01

    This paper presents a predictive control strategy, based on a neural network model of the plant, applied to a Continuous Stirred Tank Reactor (CSTR). This system is a highly nonlinear process; therefore, a nonlinear predictive method, e.g., neural network predictive control, can be a better match to govern the system dynamics. The paper describes the NN model and the way in which it can be used to predict the behavior of the CSTR process over a certain prediction horizon, and some commen...

  11. Consensus models to predict endocrine disruption for all ...

    Science.gov (United States)

    Humans are potentially exposed to tens of thousands of man-made chemicals in the environment. It is well known that some environmental chemicals mimic natural hormones and thus have the potential to be endocrine disruptors. Most of these environmental chemicals have never been tested for their ability to disrupt the endocrine system, in particular, their ability to interact with the estrogen receptor. EPA needs tools to prioritize thousands of chemicals, for instance in the Endocrine Disruptor Screening Program (EDSP). The Collaborative Estrogen Receptor Activity Prediction Project (CERAPP) was intended to demonstrate the use of predictive computational models on HTS data, including ToxCast and Tox21 assays, to prioritize a large chemical universe of 32,464 unique structures for one specific molecular target – the estrogen receptor. CERAPP combined multiple computational models for prediction of estrogen receptor activity and used the predicted results to build a unique consensus model. Models were developed in collaboration between 17 groups in the U.S. and Europe and applied to predict a common set of chemicals. Structure-based techniques such as docking and several QSAR modeling approaches were employed, mostly using a common training set of 1,677 compounds provided by the U.S. EPA, to build a total of 42 classification models and 8 regression models for binding, agonist, and antagonist activity. All predictions were evaluated on ToxCast data and on an external validation set.
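
    The consensus idea can be sketched as a plain majority vote across per-model class calls. This is a simplification (CERAPP's actual consensus weighted models by their evaluated accuracy), and the model names, labels, and outputs below are invented.

```python
from collections import Counter

def consensus_predict(predictions):
    """Majority vote over the class labels the individual models
    assign to one chemical."""
    label, _count = Counter(predictions).most_common(1)[0]
    return label

def consensus_batch(model_outputs):
    """model_outputs: {model_name: [label per chemical]};
    returns one consensus label per chemical."""
    per_chemical = zip(*model_outputs.values())
    return [consensus_predict(list(labels)) for labels in per_chemical]

# Three hypothetical models scoring three chemicals.
outputs = {
    "qsar_a":  ["active", "inactive", "active"],
    "qsar_b":  ["active", "inactive", "inactive"],
    "docking": ["inactive", "inactive", "active"],
}
calls = consensus_batch(outputs)
```

    With many independently built models, such a vote tends to wash out the idiosyncratic errors of any single QSAR or docking approach, which is the rationale for the consensus model.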

  12. Comparison of Simple Versus Performance-Based Fall Prediction Models

    Directory of Open Access Journals (Sweden)

    Shekhar K. Gadkaree BS

    2015-05-01

    Objective: To compare the predictive ability of standard falls prediction models based on physical performance assessments with more parsimonious prediction models based on self-reported data. Design: We developed a series of fall prediction models progressing in complexity and compared the area under the receiver operating characteristic curve (AUC) across models. Setting: National Health and Aging Trends Study (NHATS), which surveyed a nationally representative sample of Medicare enrollees (age ≥65) at baseline (Round 1: 2011-2012) and 1-year follow-up (Round 2: 2012-2013). Participants: In all, 6,056 community-dwelling individuals participated in Rounds 1 and 2 of NHATS. Measurements: Primary outcomes were 1-year incidence of “any fall” and “recurrent falls.” Prediction models were compared and validated in development and validation sets, respectively. Results: A prediction model that included demographic information, self-reported problems with balance and coordination, and previous fall history was the most parsimonious model that optimized AUC for both any fall (AUC = 0.69, 95% confidence interval [CI] = [0.67, 0.71]) and recurrent falls (AUC = 0.77, 95% CI = [0.74, 0.79]) in the development set. Physical performance testing provided marginal additional predictive value. Conclusion: A simple clinical prediction model that does not include physical performance testing could facilitate routine, widespread falls-risk screening in the ambulatory care setting.
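
    The AUC statistic used to compare these models has a direct rank interpretation: the probability that a randomly chosen faller receives a higher predicted risk than a randomly chosen non-faller (ties counting half). A self-contained sketch on synthetic labels and scores:

```python
def auc(labels, scores):
    """AUC via the Mann-Whitney rank interpretation: the fraction of
    (positive, negative) pairs where the positive scores higher."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Synthetic example: fallers (label 1) mostly get higher predicted risk.
labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.8, 0.6, 0.4, 0.5, 0.3, 0.2, 0.1]
```

    On this toy data the AUC is 11/12 ≈ 0.92; an AUC of 0.5 would mean the score ranks fallers no better than chance, which is the baseline the reported 0.69 and 0.77 are judged against.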

  13. Advance and prospectus of seasonal prediction: assessment of the APCC/CliPAS 14-model ensemble retrospective seasonal prediction (1980-2004)

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Bin; Lee, June-Yi; Fu, X.; Liu, P. [University of Hawaii, Department of Meteorology and International Pacific Research Center, IPRC, School of Ocean and Earth Science and Technology, Honolulu, HI (United States); Kang, In-Sik; Kug, J.S. [Seoul National University, School of Earth and Environmental Sciences, Seoul (Korea); Shukla, J.; Jin, E.K.; Kinter, J.; Kirtman, B. [George Mason University and COLA, Climate Dynamics Program, Calverton, MD (United States); Park, C.K. [APEC Climate Center, Busan (Korea); Kumar, A.; Schemm, J. [Climate Prediction Center/NCEP, Camp Springs, MD (United States); Cocke, S.; Krishnamurti, T. [Florida State University, Tallahassee, FL (United States); Luo, J.J. [Frontier Research Center for Global Change, Yokohama (Japan); Zhou, T.; Wang, B. [Chinese Academy of Sciences, LASG/Institute of Atmospheric Physics, Beijing (China); Yun, W.T. [Korean Meteorological Administration, Seoul (Korea); Alves, O. [Bureau of Meteorology Research Center, Melbourne (Australia); Lau, N.C.; Rosati, T.; Stern, W. [Princeton University, Geophysical Fluid Dynamics Laboratory/NOAA, Princeton, NJ (United States); Lau, W.; Pegion, P.; Schubert, S.; Suarez, M. [Goddard Space Flight Center/NASA, Greenbelt, MD (United States)

    2009-07-15

    We assessed the current status of multi-model ensemble (MME) deterministic and probabilistic seasonal prediction based on 25-year (1980-2004) retrospective forecasts performed by 14 climate model systems (7 one-tier and 7 two-tier systems) that participate in the Climate Prediction and its Application to Society (CliPAS) project sponsored by the Asia-Pacific Economic Cooperation Climate Center (APCC). We also evaluated the seven DEMETER models' MME for the period 1981-2001 for comparison. Based on the assessment, future directions for improvement of seasonal prediction are discussed. We found that two measures of probabilistic forecast skill, the Brier Skill Score (BSS) and the Area under the Relative Operating Characteristic curve (AROC), display spatial patterns similar to those represented by the temporal correlation coefficient (TCC) score of the deterministic MME forecast. A TCC score of 0.6 corresponds approximately to a BSS of 0.1 and an AROC of 0.7, and beyond these critical threshold values they are almost linearly correlated. The MME method is demonstrated to be a valuable approach for reducing errors and quantifying forecast uncertainty due to model formulation. The MME prediction skill is substantially better than the averaged skill of the individual models. For instance, the TCC score of the CliPAS one-tier MME forecast of the Nino 3.4 index at a 6-month lead initiated from 1 May is 0.77, significantly higher than the corresponding averaged skill of the seven individual coupled models (0.63). The MME made using all 14 coupled models from both DEMETER and CliPAS shows an even higher TCC score of 0.87. The effectiveness of MME depends on the averaged skill of the individual models and their mutual independence. For probabilistic forecasts, the CliPAS MME gains considerable skill from increased forecast reliability as the number of models used increases; the forecast resolution also increases for 2 m temperature but slightly decreases for precipitation.
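
    The two core MME operations, averaging the member forecasts and scoring the result with a temporal correlation coefficient (TCC, i.e., the Pearson correlation over the verification years), can be sketched as follows. The two "models" and the observations are toy numbers, not CliPAS data.

```python
from statistics import mean, pstdev

def ensemble_mean(forecasts):
    """forecasts: {model: [value per year]} -> multi-model mean per year."""
    return [mean(vals) for vals in zip(*forecasts.values())]

def tcc(forecast, observed):
    """Temporal correlation coefficient: Pearson r over the forecast years."""
    fm, om = mean(forecast), mean(observed)
    cov = mean((f - fm) * (o - om) for f, o in zip(forecast, observed))
    return cov / (pstdev(forecast) * pstdev(observed))

# Toy example: two members whose errors cancel, so the MME matches the obs.
models = {"m1": [0.5, -0.2, 1.1, 0.0], "m2": [0.7, 0.0, 0.9, -0.4]}
obs = [0.6, -0.1, 1.0, -0.2]
skill = tcc(ensemble_mean(models), obs)
```

    The toy case illustrates the mechanism behind the MME gain: when member errors are mutually independent, averaging cancels part of them, which is why the abstract ties MME effectiveness to both the members' skill and their independence.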

  14. Preclinical models used for immunogenicity prediction of therapeutic proteins.

    Science.gov (United States)

    Brinks, Vera; Weinbuch, Daniel; Baker, Matthew; Dean, Yann; Stas, Philippe; Kostense, Stefan; Rup, Bonita; Jiskoot, Wim

    2013-07-01

    All therapeutic proteins are potentially immunogenic. Antibodies formed against these drugs can decrease efficacy, leading to drastically increased therapeutic costs and, in rare cases, to serious and sometimes life-threatening side effects. Many efforts are therefore undertaken to develop therapeutic proteins with minimal immunogenicity, for which immunogenicity prediction of candidate drugs during early drug development is essential. Several in silico, in vitro, and in vivo models are used to predict the immunogenicity of drug leads, to modify potentially immunogenic properties, and to continue development of drug candidates with expected low immunogenicity. Despite the extensive use of these predictive models, their actual predictive value varies. Important reasons for this uncertainty are the limited/insufficient knowledge of the immune mechanisms underlying immunogenicity of therapeutic proteins, the fact that different predictive models explore different components of the immune system, and the lack of an integrated clinical validation. In this review, we discuss the predictive models in use, summarize the aspects of immunogenicity that these models predict, and explore the merits and limitations of each model.

  15. A Grey NGM(1,1, k) Self-Memory Coupling Prediction Model for Energy Consumption Prediction

    Science.gov (United States)

    Guo, Xiaojun; Liu, Sifeng; Wu, Lifeng; Tang, Lingling

    2014-01-01

    Energy consumption prediction is an important issue for governments, energy-sector investors, and other related corporations. Although several prediction techniques exist, selection of the most appropriate technique is of vital importance. For the approximately nonhomogeneous exponential data sequences that often arise in energy systems, a novel grey NGM(1,1,k) self-memory coupling prediction model is put forward to improve predictive performance. It achieves an organic integration of the self-memory principle of dynamic systems with the grey NGM(1,1,k) model; the traditional grey model's sensitivity to the initial value is overcome by the self-memory principle. In this study, the total energy, coal, and electricity consumption of China are used to demonstrate the proposed coupling prediction technique. The results show the superiority of the NGM(1,1,k) self-memory coupling prediction model when compared with results from the literature. Its strong prediction performance stems from the coupling model's ability to take full advantage of the systematic multi-time historical data and to capture the stochastic fluctuation tendency. This work also contributes to the enrichment of grey prediction theory and the extension of its application span. PMID:25054174
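
    For readers unfamiliar with grey models, here is the classic GM(1,1) baseline that the paper's NGM(1,1,k) self-memory model extends: accumulate the series (1-AGO), fit the grey parameters a and b by least squares, and forecast from the resulting exponential response. This is only the standard textbook form, not the paper's extended model, and the input series is invented.

```python
import math

def gm11_forecast(x0, steps=1):
    """Classic GM(1,1) grey forecast of the next `steps` values of series x0."""
    n = len(x0)
    x1 = [sum(x0[:k + 1]) for k in range(n)]               # 1-AGO accumulated series
    z1 = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]  # mean background values
    y, m = x0[1:], n - 1
    # Least squares for y = -a*z + b via the 2x2 normal equations (no NumPy).
    u = (m * sum(z * v for z, v in zip(z1, y)) - sum(z1) * sum(y)) / \
        (m * sum(z * z for z in z1) - sum(z1) ** 2)
    a, b = -u, (sum(y) - u * sum(z1)) / m
    xhat1 = lambda k: (x0[0] - b / a) * math.exp(-a * k) + b / a
    return [xhat1(n + s) - xhat1(n + s - 1) for s in range(steps)]

# A series growing ~10% per step; the next value "should" be about 146.4.
pred = gm11_forecast([100.0, 110.0, 121.0, 133.1])[0]
```

    GM(1,1) fits near-exponential growth like this almost exactly; the NGM(1,1,k) and self-memory extensions address exactly the cases (nonhomogeneous trends, initial-value sensitivity) where this baseline breaks down.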

  16. Phenomenological modeling of critical heat flux: The GRAMP code and its validation

    International Nuclear Information System (INIS)

    Ahmad, M.; Chandraker, D.K.; Hewitt, G.F.; Vijayan, P.K.; Walker, S.P.

    2013-01-01

    Highlights: ► Assessment of CHF limits is vital for LWR optimization and safety analysis. ► Phenomenological modeling is a valuable adjunct to pure empiricism. ► It is based on empirical representations of the (several, competing) phenomena. ► Phenomenological modeling codes making ‘aggregate’ predictions need careful assessment against experiments. ► The physical and mathematical basis of a phenomenological modeling code GRAMP is presented. ► The GRAMP code is assessed against measurements from BARC (India) and Harwell (UK), and the Look Up Tables. - Abstract: Reliable knowledge of the critical heat flux is vital for the design of light water reactors, for both safety and optimization. The use of wholly empirical correlations, or equivalently “Look Up Tables”, can be very effective, but is generally less so in more complex cases, in particular where the heat flux is axially non-uniform. Phenomenological models are in principle able to take into account a wider range of conditions, with less comprehensive coverage of experimental measurements. These models are themselves in part based upon empirical correlations, albeit of the more fundamental individual phenomena occurring rather than the aggregate behaviour, and as such they too require experimental validation. In this paper we present the basis of a general-purpose phenomenological code, GRAMP, and then use two independent ‘direct’ sets of measurements, from BARC in India and from Harwell in the United Kingdom, together with the large dataset embodied in the Look Up Tables, to perform a validation exercise on it. Very good agreement between predictions and experimental measurements is observed, adding to the confidence with which the phenomenological model can be used. Remaining important uncertainties in the phenomenological modeling of CHF, namely the importance of the initial entrained fraction on entry to annular flow and the influence of the heat flux on the entrainment rate, are also identified.

  17. Critical Comments on the General Model of Instructional Communication

    Science.gov (United States)

    Walton, Justin D.

    2014-01-01

    This essay presents a critical commentary on McCroskey et al.'s (2004) general model of instructional communication. In particular, five points are examined which make explicit and problematize the meta-theoretical assumptions of the model. Comments call attention to the limitations of the model and argue for a broader approach to…

  18. Bayesian Predictive Models for Rayleigh Wind Speed

    DEFF Research Database (Denmark)

    Shahirinia, Amir; Hajizadeh, Amin; Yu, David C

    2017-01-01

    The Bayesian predictive model of the wind speed aggregates the non-homogeneous distributions into a single continuous distribution, and the result therefore captures the variation among the probability distributions of the wind speeds at the turbines' locations in a wind farm. More specifically, instead of using a wind speed distribution whose parameters are known or estimated, the parameters are treated as random variables that vary according to probability distributions. A Bayesian predictive model for the Rayleigh distribution, which has only a single scale parameter, is proposed. Closed-form posterior and predictive inferences under different reasonable choices of prior distribution are also presented in a sensitivity analysis.
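
    The closed-form inference the abstract refers to can be illustrated with a textbook conjugate setup (not necessarily the paper's exact prior choice): with an inverse-gamma prior on θ = σ², the Rayleigh likelihood yields an inverse-gamma posterior in closed form. The prior hyperparameters and wind speeds below are illustrative.

```python
def rayleigh_posterior(data, alpha0=2.0, beta0=1.0):
    """Conjugate update for the Rayleigh scale: with an inverse-gamma(alpha, beta)
    prior on theta = sigma^2, the posterior is
    inverse-gamma(alpha + n, beta + sum(x^2)/2)."""
    n = len(data)
    alpha = alpha0 + n
    beta = beta0 + sum(x * x for x in data) / 2.0
    post_mean_theta = beta / (alpha - 1)  # mean of an inverse-gamma, valid for alpha > 1
    return alpha, beta, post_mean_theta

speeds = [4.1, 5.3, 6.0, 3.8, 5.5]  # hypothetical wind speeds (m/s)
alpha, beta, theta_hat = rayleigh_posterior(speeds)
```

    Treating θ as random in this way, rather than plugging in a point estimate, is what lets the predictive distribution reflect parameter uncertainty across turbine locations.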

  19. Modeling and Prediction Using Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp

    2016-01-01

    Pharmacokinetic/pharmacodynamic (PK/PD) modeling for a single subject is most often performed using nonlinear models based on deterministic ordinary differential equations (ODEs), and the variation between subjects in a population of subjects is described using a population (mixed-effects) setup. ODE models, however, are deterministic and can predict the future perfectly; a more realistic approach allows for randomness in the model due to, e.g., the model being too simple or errors in the input. We describe a modeling and prediction setup that better reflects reality and suggests stochastic differential equations (SDEs).
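
    The step from an ODE to an SDE can be sketched with the Euler-Maruyama scheme: each step adds the usual drift term plus a diffusion term scaled by a Gaussian increment. The one-compartment elimination model and parameter values below are illustrative, not from the paper.

```python
import math
import random

def euler_maruyama(x0, drift, diffusion, t_end, n_steps, seed=42):
    """Simulate dX = drift(X) dt + diffusion(X) dW with Euler-Maruyama."""
    rng = random.Random(seed)
    dt = t_end / n_steps
    xs = [x0]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment ~ N(0, dt)
        xs.append(xs[-1] + drift(xs[-1]) * dt + diffusion(xs[-1]) * dw)
    return xs

# Hypothetical one-compartment elimination with multiplicative system noise:
#   dC = -k_e * C dt + sigma * C dW   (k_e = 0.5, sigma = 0.1 are illustrative)
path = euler_maruyama(10.0, lambda c: -0.5 * c, lambda c: 0.1 * c,
                      t_end=4.0, n_steps=200)
```

    Setting the diffusion function to zero recovers the deterministic ODE trajectory, which makes explicit how the SDE setup generalizes the standard PK/PD model rather than replacing it.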

  20. Prediction of hourly solar radiation with multi-model framework

    International Nuclear Information System (INIS)

    Wu, Ji; Chan, Chee Keong

    2013-01-01

    Highlights: • A novel approach to predict solar radiation through the use of clustering paradigms. • Development of prediction models based on the intrinsic pattern observed in each cluster. • Prediction based on proper clustering and model selection for the current period provides better results than other methods. • Experiments were conducted on actual solar radiation data obtained from a weather station in Singapore. - Abstract: In this paper, a novel multi-model prediction framework for prediction of solar radiation is proposed. The framework starts with the assumption that there are several patterns embedded in the solar radiation series. To extract the underlying patterns, the solar radiation series is first segmented into smaller subsequences, and the subsequences are further grouped into different clusters. For each cluster, an appropriate prediction model is trained. A pattern-identification procedure is then developed to identify the pattern that fits the current period. Based on this pattern, the corresponding prediction model is applied to obtain the prediction value. The prediction results of the proposed framework are compared to other techniques, and it is shown that the proposed framework provides superior performance.
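
    The segment-cluster-predict pipeline can be reduced to a deliberately tiny sketch: bucket windowed subsequences into two "regimes" by their mean level (a stand-in for proper clustering), keep one trivial model per regime, and route each new window to the matching regime's model. Everything here, the two-regime split and the toy data, is invented for illustration.

```python
from statistics import mean, median

def train(pairs):
    """pairs: list of (window, next_value). Split windows into 'low'/'high'
    regimes at the median window mean, then fit one trivial per-regime
    model: the average next value observed in that regime."""
    cut = median(mean(w) for w, _ in pairs)
    models = {}
    for regime in ("low", "high"):
        nexts = [nv for w, nv in pairs
                 if (mean(w) <= cut) == (regime == "low")]
        models[regime] = mean(nexts)
    return cut, models

def predict(window, cut, models):
    """Identify the regime of the current window, then apply its model."""
    return models["low" if mean(window) <= cut else "high"]

# Toy subsequences from two clearly separated radiation regimes.
pairs = [([100, 120], 130), ([110, 130], 140),
         ([500, 520], 530), ([510, 530], 540)]
cut, models = train(pairs)
```

    The real framework replaces each piece with something stronger (proper clustering, a trained predictor per cluster, a pattern-identification step), but the routing structure is the same.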

  1. Revised predictive equations for salt intrusion modelling in estuaries

    NARCIS (Netherlands)

    Gisen, J.I.A.; Savenije, H.H.G.; Nijzink, R.C.

    2015-01-01

    For one-dimensional salt intrusion models to be predictive, we need predictive equations to link model parameters to observable hydraulic and geometric variables. The one-dimensional model of Savenije (1993b) made use of predictive equations for the Van der Burgh coefficient $K$ and the dispersion coefficient $D$.

  2. Preprocedural Prediction Model for Contrast-Induced Nephropathy Patients.

    Science.gov (United States)

    Yin, Wen-Jun; Yi, Yi-Hu; Guan, Xiao-Feng; Zhou, Ling-Yun; Wang, Jiang-Lin; Li, Dai-Yang; Zuo, Xiao-Cong

    2017-02-03

    Several models have been developed for prediction of contrast-induced nephropathy (CIN); however, they include only patients receiving intra-arterial contrast media for coronary angiographic procedures, which represent a small proportion of all contrast procedures, and most of them evaluate variables related to the radiological interventional procedure. It is therefore necessary to develop a model for prediction of CIN before radiological procedures among all patients administered contrast media. A total of 8,800 patients undergoing contrast administration were randomly assigned in a 4:1 ratio to development and validation data sets. CIN was defined as an increase of 25% and/or 0.5 mg/dL in serum creatinine within 72 hours above the baseline value. Preprocedural clinical variables were used to develop the prediction model from the training data set by the machine learning method of random forest, and 5-fold cross-validation was used to evaluate the prediction accuracy of the model. Finally, we tested this model in the validation data set. The incidence of CIN was 13.38%. We built a prediction model with 13 preprocedural variables selected from 83 variables. The model obtained an area under the receiver-operating characteristic (ROC) curve (AUC) of 0.907 and gave a prediction accuracy of 80.8%, sensitivity of 82.7%, specificity of 78.8%, and Matthews correlation coefficient of 61.5%. For the first time, 3 new factors are included in the model: decreased sodium concentration, the INR value, and the preprocedural glucose level. The newly established model shows excellent predictive ability for CIN development and thereby enables preventative measures for CIN. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
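
    The outcome definition used in this study is simple enough to state as code. A minimal sketch of the CIN label, directly from the paper's definition (a rise in serum creatinine of ≥25% and/or ≥0.5 mg/dL above baseline); the 72-hour window is assumed to be enforced by whoever selects the follow-up measurement.

```python
def is_cin(baseline_scr, followup_scr):
    """Flag contrast-induced nephropathy per the paper's definition:
    serum creatinine (mg/dL) rising by >= 0.5 mg/dL and/or >= 25%
    above baseline within 72 hours of contrast administration."""
    rise = followup_scr - baseline_scr
    return rise >= 0.5 or rise >= 0.25 * baseline_scr
```

    Note that the two criteria diverge with baseline level: at a baseline of 1.0 mg/dL the 25% rule fires first, while at 2.0 mg/dL the absolute 0.5 mg/dL rule is the stricter one.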

  3. Time dependent patient no-show predictive modelling development.

    Science.gov (United States)

    Huang, Yu-Li; Hanauer, David A

    2016-05-09

Purpose - The purpose of this paper is to develop evidence-based predictive no-show models that treat each of a patient's past appointment statuses, a time-dependent component, as an independent predictor to improve predictability. Design/methodology/approach - A ten-year retrospective data set was extracted from a pediatric clinic. It consisted of 7,291 distinct patients who had at least two visits, along with their appointment characteristics, patient demographics, and insurance information. Logistic regression was used to develop no-show models with two-thirds of the data for training and the remainder for validation. The no-show threshold was then chosen to minimize the misclassification of show/no-show assignments. A total of 26 predictive models were developed, one for each number of available past appointments. Simulation was employed to test the effect of each model on the costs of patient wait time, physician idle time, and overtime. Findings - The results demonstrated that the misclassification rate and the area under the receiver operating characteristic curve gradually improved as more appointment history was included, up to about the 20th model. The overbooking method with no-show predictive models suggested incorporating up to the 16th model and outperformed other overbooking methods by as much as 9.4 per cent in cost per patient while allowing two additional patients per clinic day. Research limitations/implications - The challenge now is to implement the no-show predictive model systematically to further demonstrate its robustness and simplicity in various scheduling systems. Originality/value - This paper provides examples of how to build no-show predictive models with time-dependent components to improve overbooking policy. Accurately identifying scheduled patients' show/no-show status allows clinics to schedule patients proactively and reduce the negative impact of no-shows.
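
A hypothetical sketch of such a time-dependent no-show model: logistic regression in which each of the k most recent appointment statuses enters as its own binary predictor, with the show/no-show threshold chosen to minimize misclassification. All data and coefficients below are synthetic, not the paper's.

```python
# Minimal sketch of a no-show model with past appointment statuses as
# independent predictors; entirely synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, k = 5000, 5                             # k = number of past appointments used
past = rng.binomial(1, 0.2, size=(n, k))   # 1 = past no-show
age = rng.uniform(0, 18, size=(n, 1))      # a demographic covariate
X = np.hstack([past, age])
# Synthetic truth: recent no-shows raise the odds of the next no-show,
# with more recent appointments weighted more heavily.
weights = np.r_[np.linspace(1.5, 0.5, k), 0.02]
p_noshow = 1 / (1 + np.exp(-(X @ weights - 2.0)))
y = rng.binomial(1, p_noshow)

model = LogisticRegression().fit(X, y)
# Choose the show/no-show threshold that minimises misclassifications
probs = model.predict_proba(X)[:, 1]
thresholds = np.linspace(0.05, 0.95, 19)
errors = [np.mean((probs >= t) != y) for t in thresholds]
best_t = thresholds[int(np.argmin(errors))]
print(f"selected threshold: {best_t:.2f}, error rate: {min(errors):.3f}")
```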

  4. Integration of research infrastructures and ecosystem models toward development of predictive ecology

    Science.gov (United States)

    Luo, Y.; Huang, Y.; Jiang, J.; MA, S.; Saruta, V.; Liang, G.; Hanson, P. J.; Ricciuto, D. M.; Milcu, A.; Roy, J.

    2017-12-01

The past two decades have witnessed rapid development in sensor technology. Building on these advances, large research infrastructure facilities, such as the National Ecological Observatory Network (NEON) and FLUXNET, have been established. By networking different kinds of sensors and other data collections at many locations all over the world, these facilities generate large volumes of ecological data every day. The big data from these facilities offer an unprecedented opportunity for advancing our understanding of ecological processes, educating teachers and students, supporting decision making, and testing ecological theory. They also provide a foundation for developing predictive ecology. Indeed, the capability to predict future changes in our living environment and natural resources is critical to decision making in a world where the past is no longer a clear guide to the future. We are living in a period marked by rapid climate change, profound alteration of biogeochemical cycles, unsustainable depletion of natural resources, and deterioration of air and water quality. Projecting changes in future ecosystem services to society becomes essential not only for science but also for policy making. We will use this panel format to outline major opportunities and challenges in integrating research infrastructures and ecosystem models toward developing predictive ecology. We will also show results from an interactive model-experiment system, the Ecological Platform for Assimilating Data into models (EcoPAD), that has been implemented at the Spruce and Peatland Responses Under Climatic and Environmental change (SPRUCE) experiment in northern Minnesota and at the Montpellier Ecotron, France. EcoPAD is developed by integrating web technology, eco-informatics, data assimilation techniques, and ecosystem modeling. EcoPAD is designed to streamline data transfer seamlessly from research infrastructure

  5. Penetrator strength effect in long-rod critical ricochet angle

    International Nuclear Information System (INIS)

    Daneshjou, K.; Shahravi, M.

    2008-01-01

3D numerical simulations were performed to further investigate the role of penetrator strength in the interaction of long rods and oblique targets. Three distinct regimes resulting from oblique impact, depending on the obliquity, were investigated in detail: simple ricochet, critical ricochet, and target perforation. Critical ricochet angles were calculated with a fully 3D explicit finite element method for various impact velocities and strengths of target plates and projectiles. Numerical predictions were compared with existing two-dimensional analytical models and test results. It was predicted that the critical ricochet angle increases with decreasing impact velocity and that higher ricochet angles are expected if higher-strength target materials are employed. However, there are differences between the analytical models and the 3D numerical simulation and test results. The causes of these discrepancies are established by numerical simulations that explore the validity of the penetrator strength parameter in the analytical model as a physical entity. In this paper we first investigate the role of penetrator dynamic strength using two-dimensional simulations, which yield different penetrator strengths at different impact velocities. Next, by applying these values for the penetrator strength in Rosenberg's analytical model, the critical ricochet angle is calculated. Finally, a comparison of the present analytical method with the 3D simulation and test results shows that the new analytical approach yields improved results relative to Rosenberg's.

  6. Model predictive control using fuzzy decision functions

    NARCIS (Netherlands)

    Kaymak, U.; Costa Sousa, da J.M.

    2001-01-01

    Fuzzy predictive control integrates conventional model predictive control with techniques from fuzzy multicriteria decision making, translating the goals and the constraints to predictive control in a transparent way. The information regarding the (fuzzy) goals and the (fuzzy) constraints of the

  7. Critical exponents for the Reggeon quantum spin model

    International Nuclear Information System (INIS)

    Brower, R.C.; Furman, M.A.

    1978-01-01

The Reggeon quantum spin (RQS) model on the transverse lattice in D-dimensional impact parameter space has been conjectured to have the same critical behaviour as Reggeon field theory (RFT). From high-'temperature' series of ten (D=2) and twenty (D=1) terms for the RQS model, the authors extrapolate to the critical temperature T = T_c by Padé approximants to obtain the exponents η = 0.238 ± 0.008, z = 1.16 ± 0.01, γ = 1.271 ± 0.007 for D=2 and η = 0.317 ± 0.002, z = 1.272 ± 0.007, γ = 1.736 ± 0.001, λ = 0.57 ± 0.03 for D=1. These exponents naturally interpolate between the D=0 and D=4-ε results for RFT, as expected on the basis of the universality conjecture. (Auth.)
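
The series-extrapolation step can be illustrated with a small Padé-approximant sketch: given the coefficients of a truncated power series, the nearest pole of the [m/n] approximant estimates the location of the critical singularity. The test series below has a known, built-in singularity; it is not the RQS series itself.

```python
# Locating a critical point from a truncated series via Pade approximants.
import numpy as np
from math import factorial

def pade(c, m, n):
    """Numerator a and denominator b of the [m/n] Pade approximant to the
    power series with coefficients c (c[i] multiplies x**i)."""
    # Denominator b_1..b_n (b_0 = 1) solves: sum_j b_j c[m+k-j] = 0, k=1..n
    C = np.array([[c[m + k - j] for j in range(1, n + 1)]
                  for k in range(1, n + 1)])
    b = np.r_[1.0, np.linalg.solve(C, -c[m + 1:m + n + 1])]
    a = np.array([sum(b[j] * c[i - j] for j in range(min(i, n) + 1))
                  for i in range(m + 1)])
    return a, b

Tc = 1.16                       # known singularity of the test series
# Test series: exp(x) / (1 - x/Tc); its nearest singularity is a pole at Tc
c = np.array([sum(Tc ** -j / factorial(i - j) for j in range(i + 1))
              for i in range(10)])
a, b = pade(c, 4, 4)
poles = np.roots(b[::-1])       # roots of the denominator polynomial
Tc_est = float(min(poles, key=abs).real)   # nearest singularity ~ Tc
print(f"estimated critical temperature: {Tc_est:.4f}")
```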

  8. Predicting and Modelling of Survival Data when Cox's Regression Model does not hold

    DEFF Research Database (Denmark)

    Scheike, Thomas H.; Zhang, Mei-Jie

    2002-01-01

Aalen model; additive risk model; counting processes; competing risk; Cox regression; flexible modeling; goodness of fit; prediction of survival; survival analysis; time-varying effects

  9. Evaluating the Predictive Value of Growth Prediction Models

    Science.gov (United States)

    Murphy, Daniel L.; Gaertner, Matthew N.

    2014-01-01

    This study evaluates four growth prediction models--projection, student growth percentile, trajectory, and transition table--commonly used to forecast (and give schools credit for) middle school students' future proficiency. Analyses focused on vertically scaled summative mathematics assessments, and two performance standards conditions (high…

  10. Computational multi-fluid dynamics predictions of critical heat flux in boiling flow

    Energy Technology Data Exchange (ETDEWEB)

    Mimouni, S., E-mail: stephane.mimouni@edf.fr; Baudry, C.; Guingo, M.; Lavieville, J.; Merigoux, N.; Mechitoua, N.

    2016-04-01

Highlights: • A new mechanistic model dedicated to DNB has been implemented in the Neptune-CFD code. • The model has been validated against 150 tests. • Neptune-CFD is a CFD tool dedicated to boiling flows. - Abstract: Extensive efforts have been made over the last five decades to evaluate the boiling heat transfer coefficient and, in particular, the critical heat flux. The boiling crisis remains a major limiting phenomenon for the analysis of operation and safety of both nuclear reactors and conventional thermal power systems. As a consequence, models dedicated to boiling flows have been improved. For example, a Reynolds Stress Transport Model, polydispersion, and a two-phase-flow wall law have recently been implemented. In a previous work, we evaluated computational fluid dynamics results against single-phase liquid water tests equipped with a mixing vane and against two-phase boiling cases. The objective of this paper is to propose a new mechanistic model in a computational multi-fluid dynamics tool capturing the wall temperature excursion and onset of the boiling crisis. The critical heat flux is calculated against 150 tests, and the mean relative error between calculations and experimental values is 8.3%. The model covers a large physical scope in terms of mass flux, pressure, quality, and channel diameter; both water and R12 refrigerant are considered. Furthermore, the sensitivity to grid refinement was found to be acceptable.
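
The quoted validation statistic is simply the mean relative error of the predictions over the test set; a minimal sketch, with made-up values rather than the 150-test database:

```python
# Mean relative error between predicted and measured critical heat flux.
# The numbers below are illustrative, not the paper's data.
import numpy as np

q_exp = np.array([1.20, 2.35, 0.95, 3.10, 1.75])    # measured CHF, MW/m^2
q_pred = np.array([1.28, 2.21, 1.02, 3.33, 1.66])   # model predictions

mre = np.mean(np.abs(q_pred - q_exp) / q_exp)
print(f"mean relative error: {100 * mre:.1f}%")
```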

  12. On the critical frontiers of Potts ferromagnets

    International Nuclear Information System (INIS)

    Magalhaes, A.C.N. de; Tsallis, C.

    1981-01-01

A conjecture concerning the critical frontiers of q-state Potts ferromagnets on d-dimensional lattices (d > 1), generalizing a recent one stated for planar lattices, is formulated. The present conjecture is verified to satisfactory accuracy (exactly in some cases) for all lattices or arrays whose critical points are known. Its use leads to the prediction of: (a) a considerable number of new approximate critical points (26 on non-planar regular lattices, others on Husimi trees and cacti); (b) approximate critical frontiers for some 3-dimensional lattices; (c) the possibly asymptotically exact critical point on regular lattices in the limit d → ∞ for all q ≥ 1; (d) the possibly exact critical frontier for the pure Potts model on fully anisotropic Bethe lattices; (e) the possibly exact critical frontier for the general quenched random-bond Potts ferromagnet (any P(J)) on isotropic Bethe lattices. (Author)

  13. FORMULASI TEPUNG PENYALUT BERBASIS TEPUNG JAGUNG DAN PENENTUAN UMUR SIMPANNYA DENGAN PENDEKATAN KADAR AIR KRITIS [Formulation of Corn Flour-Based Batter and Prediction of Its Shelf Life using the Critical Moisture Approach]

    Directory of Open Access Journals (Sweden)

    Sugiyono

    2010-12-01

The objectives of this study were to obtain the best formula for a corn flour-based batter and to predict its shelf life using the critical moisture approach. According to a hedonic test, the best batter formula was composed of 60% corn flour, 12.5% rice flour, 12.5% tapioca starch, and 15% glutinous rice flour. Addition of glutinous rice flour changed the proportion of amylose and amylopectin in the batter; as a result, retrogradation of the batter decreased and the texture of its fried product was preferred. The critical moisture approach was used to predict the shelf life of the batter. The critical moisture content of the batter was 0.16 g H2O/g solid. The moisture sorption isotherm of the batter was best described by the Halsey model. The predicted shelf life of the product was 7 months when packaged in polypropylene (permeance 0.07 g/(m²·day·mmHg)) at 85% RH.
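
The shelf-life figure follows from the critical-moisture (Labuza permeability) model: the time for product moisture to drift from its initial to its critical value through the package film. In the sketch below, the film permeance is the value quoted in the abstract, but the isotherm slope, moisture values, package area, and solids mass are illustrative assumptions, so the resulting number is not the paper's 7 months.

```python
# Shelf life from the critical-moisture approach:
#   t = ln((Me - Mi)/(Me - Mc)) * (Ws * b) / ((k/x) * A * P0)
import math

m_i, m_c = 0.06, 0.16   # initial / critical moisture (g H2O / g solid), assumed / quoted
m_e = 0.22              # equilibrium moisture at storage RH (g H2O / g solid), assumed
b = 0.30                # local isotherm slope (g H2O / g solid per unit a_w), assumed
k_x = 0.07              # film permeance (g / (m^2 * day * mmHg)), as quoted
A = 0.06                # package area (m^2), assumed
Ws = 100.0              # dry solids in package (g), assumed
P0 = 23.76              # saturation vapour pressure of water at 25 C (mmHg)

t_days = math.log((m_e - m_i) / (m_e - m_c)) * (Ws * b) / (k_x * A * P0)
print(f"predicted shelf life: {t_days:.0f} days (~{t_days / 30:.1f} months)")
```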

  14. Wind gust estimation by combining numerical weather prediction model and statistical post-processing

    Science.gov (United States)

    Patlakas, Platon; Drakaki, Eleni; Galanis, George; Spyrou, Christos; Kallos, George

    2017-04-01

The continuous rise of off-shore and near-shore activities, as well as the development of structures such as wind farms and various offshore platforms, requires the employment of state-of-the-art risk assessment techniques. Such analysis is used to set safety standards and can be characterized as a climatologically oriented approach. Nevertheless, reliable operational support is also needed in order to minimize cost drawbacks and human danger during the construction and operating stages as well as during maintenance activities. One of the most important parameters for this kind of analysis is wind speed intensity and variability. A critical measure associated with this variability is the presence and magnitude of wind gusts as estimated at the reference level of 10 m. The latter can be attributed to processes ranging from boundary-layer turbulence and convection to mountain waves and wake phenomena. The purpose of this work is the development of a wind-gust forecasting methodology combining a numerical weather prediction model with a dynamical statistical tool based on Kalman filtering. To this end, the Wind Gust Estimate parameterization was implemented within the framework of the atmospheric model SKIRON/Dust. The new modeling tool couples the atmospheric model with a statistical local-adaptation methodology based on Kalman filters, and has been tested over the offshore west coastline of the United States. The main purpose is to provide a useful tool for wind analysis and prediction and for applications related to offshore wind energy (power prediction, operation, and maintenance). The results were evaluated against observational data from NOAA's buoy network. The predictions show good skill, which improves further after the local-adjustment post-processing.
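
A scalar Kalman filter of the kind used for such local adaptation can be sketched as follows: it tracks the slowly varying bias between model forecasts and observations and subtracts the current bias estimate from each new forecast. The noise variances and the synthetic gust series below are assumptions for illustration, not the SKIRON/Dust configuration.

```python
# Scalar Kalman-filter bias correction of wind-gust forecasts (synthetic data).
import numpy as np

rng = np.random.default_rng(2)
n = 200
truth = 10 + 3 * np.sin(np.linspace(0, 8, n))       # "observed" gusts (m/s)
forecast = truth + 2.0 + rng.normal(0, 0.8, n)      # model output with +2 m/s bias

q, r = 0.01, 1.0          # process / observation noise variances (tuning choices)
bias, p = 0.0, 1.0        # bias estimate and its variance
corrected = np.empty(n)
for t in range(n):
    p += q                                        # predict step: bias may drift
    corrected[t] = forecast[t] - bias             # correct forecast with prior bias
    k = p / (p + r)                               # Kalman gain
    bias += k * (forecast[t] - truth[t] - bias)   # update once obs is verified
    p *= (1 - k)

raw_mae = np.mean(np.abs(forecast - truth))
kf_mae = np.mean(np.abs(corrected - truth))
print(f"MAE raw: {raw_mae:.2f} m/s, Kalman-corrected: {kf_mae:.2f} m/s")
```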

  15. Development and validation of a multilevel model for predicting workload under routine and nonroutine conditions in an air traffic management center.

    Science.gov (United States)

    Neal, Andrew; Hannah, Sam; Sanderson, Penelope; Bolland, Scott; Mooij, Martijn; Murphy, Sean

    2014-03-01

The aim of this study was to develop a model capable of predicting variability in the mental workload experienced by frontline operators under routine and nonroutine conditions. Excess workload is a risk that needs to be managed in safety-critical industries. Predictive models are needed to manage this risk effectively, yet they are difficult to develop. Much of the difficulty stems from the fact that workload prediction is a multilevel problem. A multilevel workload model was developed in Study 1 with data collected from an en route air traffic management center. Dynamic density metrics were used to predict variability in workload within and between work units while controlling for variability among raters. The model was cross-validated in Studies 2 and 3 with the use of a high-fidelity simulator. Reported workload generally remained within the bounds of the 90% prediction interval in Studies 2 and 3, and crossed the upper bound of the interval only under nonroutine conditions. Qualitative analyses suggest that nonroutine events caused workload to cross the upper bound of the prediction interval because the controllers could not manage their workload strategically. The model performed well under both routine and nonroutine conditions and over different patterns of workload variation. Workload prediction models can be used to support both strategic and tactical workload management. Strategic uses include the analysis of historical and projected workflows and the assessment of staffing needs. Tactical uses include the dynamic reallocation of resources to meet changes in demand.

  16. Uncertainties in model-based outcome predictions for treatment planning

    International Nuclear Information System (INIS)

    Deasy, Joseph O.; Chao, K.S. Clifford; Markman, Jerry

    2001-01-01

Purpose: Model-based treatment-plan-specific outcome predictions (such as normal tissue complication probability [NTCP] or the relative reduction in salivary function) are typically presented without reference to underlying uncertainties. We provide a method to assess the reliability of treatment-plan-specific dose-volume outcome model predictions. Methods and Materials: A practical method is proposed for evaluating model predictions based on the original input data together with bootstrap-based estimates of parameter uncertainties. The general framework is applicable to continuous-variable predictions (e.g., prediction of long-term salivary function) and dichotomous-variable predictions (e.g., tumor control probability [TCP] or NTCP). Using bootstrap resampling, a histogram of the likelihood of alternative parameter values is generated. For a given patient and treatment plan, we generate a histogram of alternative model results by computing the model-predicted outcome for each parameter set in the bootstrap list. Residual uncertainty ('noise') is accounted for by adding a random component to the computed outcome values. The residual noise distribution is estimated from the original fit between model predictions and patient data. Results: The method is demonstrated using a continuous-endpoint model to predict long-term salivary function for head-and-neck cancer patients. Histograms represent the probabilities for the level of posttreatment salivary function based on the input clinical data, the salivary function model, and the three-dimensional dose distribution. For some patients there is significant uncertainty in the prediction of xerostomia, whereas for others the predictions are expected to be more reliable. In contrast, TCP and NTCP endpoints are dichotomous, and parameter uncertainties should be folded directly into the estimated probabilities, thereby improving the accuracy of the estimates. Using bootstrap parameter estimates, competing treatment
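
The bootstrap scheme described here can be sketched in a few lines: resample the patient data, refit the outcome model, evaluate each refit for one treatment plan, and add residual noise to build a histogram of plausible outcomes. The linear dose-response for salivary flow and every numeric value below are illustrative assumptions, not the authors' fitted model.

```python
# Bootstrap uncertainty for a plan-specific outcome prediction (synthetic data).
import numpy as np

rng = np.random.default_rng(3)
n = 60
dose = rng.uniform(10, 50, n)                        # mean parotid dose (Gy)
flow = 1.0 - 0.015 * dose + rng.normal(0, 0.08, n)   # relative salivary flow

def fit(d, f):
    return np.polyfit(d, f, 1)                       # slope, intercept

coef = fit(dose, flow)
resid_sd = np.std(flow - np.polyval(coef, dose))     # residual "noise" level

plan_dose = 35.0                                     # dose of the plan under review
B = 2000
outcomes = np.empty(B)
for i in range(B):
    idx = rng.integers(0, n, n)                      # bootstrap resample
    cb = fit(dose[idx], flow[idx])                   # refit model parameters
    # model prediction for this plan + random residual component
    outcomes[i] = np.polyval(cb, plan_dose) + rng.normal(0, resid_sd)

lo, hi = np.percentile(outcomes, [2.5, 97.5])
print(f"predicted flow: {outcomes.mean():.2f} (95% interval {lo:.2f}-{hi:.2f})")
```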

  17. Prediction error, ketamine and psychosis: An updated model.

    Science.gov (United States)

    Corlett, Philip R; Honey, Garry D; Fletcher, Paul C

    2016-11-01

In 2007, we proposed an explanation of delusion formation as aberrant prediction error-driven associative learning. Further, we argued that the NMDA receptor antagonist ketamine provided a good model for this process. Subsequently, we validated the model in patients with psychosis, relating aberrant prediction error signals to delusion severity. During the ensuing period, we have developed these ideas, drawing on the simple principle that brains build a model of the world and refine it by minimising prediction errors, as well as using it to guide perceptual inferences. While previously we focused on the prediction error signal per se, an updated view takes into account its precision, as well as the precision of prior expectations. With this expanded perspective, we see several possible routes to psychotic symptoms, which may explain the heterogeneity of psychotic illness, as well as the fact that other drugs, with different pharmacological actions, can produce psychotomimetic effects. In this article, we review the basic principles of this model and highlight specific ways in which prediction errors can be perturbed, in particular considering the reliability and uncertainty of predictions. The expanded model explains hallucinations as perturbations of the uncertainty-mediated balance between expectation and prediction error. Here, expectations dominate and create perceptions by suppressing or ignoring actual inputs. Negative symptoms may arise due to poor reliability of predictions in the service of action. By mapping from biology to belief and perception, the account proffers new explanations of psychosis. However, challenges remain. We attempt to address some of these concerns and suggest future directions, incorporating other symptoms into the model and building towards a better understanding of psychosis. © The Author(s) 2016.

  18. Can Process Understanding Help Elucidate The Structure Of The Critical Zone? Comparing Process-Based Soil Formation Models With Digital Soil Mapping.

    Science.gov (United States)

    Vanwalleghem, T.; Román, A.; Peña, A.; Laguna, A.; Giráldez, J. V.

    2017-12-01

There is a need for better understanding of the processes influencing soil formation and the resulting distribution of soil properties in the critical zone. Soil properties can exhibit strong spatial variation, even at the small catchment scale. Soil carbon pools in semi-arid, mountainous areas are especially uncertain because bulk density and stoniness are very heterogeneous and rarely measured explicitly. In this study, we explore the spatial variability in key soil properties (soil carbon stocks, stoniness, bulk density, and soil depth) as a function of the processes shaping the critical zone (weathering, erosion, soil water fluxes, and vegetation patterns). We also compare the potential of traditional digital soil mapping versus a mechanistic soil formation model (MILESD) for predicting these key soil properties. Soil core samples were collected from 67 locations at 6 depths. Total soil organic carbon stocks were 4.38 kg m-2. Solar radiation proved to be the key variable controlling soil carbon distribution. Stone content was mostly controlled by slope, indicating the importance of erosion. The spatial distribution of bulk density was found to be highly random. Finally, total carbon stocks were predicted using a random forest model whose main covariates were solar radiation and NDVI. The model predicts carbon stocks twice as high on north-facing versus south-facing slopes. However, validation showed that these covariates only explained 25% of the variation in the dataset. Apparently, present-day landscape and vegetation properties are not sufficient to fully explain variability in the soil carbon stocks in this complex terrain under natural vegetation. This is attributed to high spatial variability in bulk density and stoniness, the key variables controlling carbon stocks. Similar results were obtained with the mechanistic soil formation model MILESD, suggesting that more complex models might be needed to further explore this high spatial variability.

  19. A Combined High and Low Cycle Fatigue Model for Life Prediction of Turbine Blades

    Directory of Open Access Journals (Sweden)

    Shun-Peng Zhu

    2017-06-01

Combined high and low cycle fatigue (CCF) generally induces the failure of aircraft gas turbine attachments. Based on the aero-engine load spectrum, accurate assessment of the fatigue damage due to the interaction of high cycle fatigue (HCF), resulting from high-frequency vibrations, and low cycle fatigue (LCF), from ground-air-ground engine cycles, is of critical importance for ensuring the structural integrity of engine components such as turbine blades. In this paper, the influence of combined damage accumulation on the expected CCF life is investigated for turbine blades. The CCF behavior of a turbine blade is usually studied by testing with four load-controlled parameters: high-cycle stress amplitude and frequency, and low-cycle stress amplitude and frequency. Accordingly, a new damage accumulation model is proposed, based on Miner's rule, that considers the coupled damage due to HCF-LCF interaction by introducing these four load parameters. Five experimental datasets of turbine blade alloys and turbine blades were used for model validation and for comparison of the proposed model with the Miner, Manson-Halford, and Trufyakov-Kovalchuk models. Results show that the proposed model provides more accurate predictions than the others, with lower mean and standard deviation values of the model prediction errors.
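
The baseline for such CCF assessments, Miner's linear damage rule D = Σ nᵢ/Nᵢ, can be sketched as follows. The cycle counts and S-N lives below are illustrative assumptions, and the paper's proposed model adds HCF-LCF interaction effects beyond this simple linear sum.

```python
# Miner's rule applied to combined LCF + HCF loading per flight.
# All cycle counts and lives are made-up illustrative values.
def miner_damage(blocks):
    """blocks: iterable of (applied cycles n_i, cycles to failure N_i)."""
    return sum(n / N for n, N in blocks)

# Per-flight loading: 1 LCF ground-air-ground cycle + 2e5 HCF vibration cycles
per_flight = [(1, 12_000), (2e5, 1e9)]
d_flight = miner_damage(per_flight)
flights_to_failure = 1 / d_flight          # failure predicted when D reaches 1
print(f"damage per flight: {d_flight:.2e}, "
      f"predicted life: {flights_to_failure:.0f} flights")
```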

  20. Quantifying and modelling the carbon sequestration capacity of seagrass meadows--a critical assessment.

    Science.gov (United States)

    Macreadie, P I; Baird, M E; Trevathan-Tackett, S M; Larkum, A W D; Ralph, P J

    2014-06-30

    Seagrasses are among the planet's most effective natural ecosystems for sequestering (capturing and storing) carbon (C); but if degraded, they could leak stored C into the atmosphere and accelerate global warming. Quantifying and modelling the C sequestration capacity is therefore critical for successfully managing seagrass ecosystems to maintain their substantial abatement potential. At present, there is no mechanism to support carbon financing linked to seagrass. For seagrasses to be recognised by the IPCC and the voluntary C market, standard stock assessment methodologies and inventories of seagrass C stocks are required. Developing accurate C budgets for seagrass meadows is indeed complex; we discuss these complexities, and, in addition, we review techniques and methodologies that will aid development of C budgets. We also consider a simple process-based data assimilation model for predicting how seagrasses will respond to future change, accompanied by a practical list of research priorities. Copyright © 2013 Elsevier Ltd. All rights reserved.

  1. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based both on the authors' experience and on their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution of partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements of M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies specified application requirements.

  2. Reuse-centric Requirements Analysis with Task Models, Scenarios, and Critical Parameters

    Directory of Open Access Journals (Sweden)

    Cyril Montabert

    2007-02-01

This paper outlines a requirements-analysis process that unites task models, scenarios, and critical parameters to exploit and generate reusable knowledge at the requirements phase. Through a critical-parameter-based approach to task modeling, the process establishes an integrative, formalized model, derived from scenarios, that can be used for requirements characterization. Not only can this entity serve as an interface to a knowledge repository that relies on a critical-parameter-based taxonomy to support reuse, but its characterization in terms of critical parameters also allows the model to constitute a broader reuse solution. We discuss our vision for a user-centric and reuse-centric approach to requirements analysis, present previous efforts associated with this line of work, and describe the revisions made to extend the reuse potential and effectiveness of a previous iteration of a requirements tool implementing this process. Finally, the paper describes the sequence and nature of the activities involved in conducting our proposed requirements-analysis technique, concluding with a preview of ongoing work in the field that will explore the feasibility of designers using our approach.

  3. How Adverse Outcome Pathways Can Aid the Development and Use of Computational Prediction Models for Regulatory Toxicology.

    Science.gov (United States)

    Wittwehr, Clemens; Aladjov, Hristo; Ankley, Gerald; Byrne, Hugh J; de Knecht, Joop; Heinzle, Elmar; Klambauer, Günter; Landesmann, Brigitte; Luijten, Mirjam; MacKay, Cameron; Maxwell, Gavin; Meek, M E Bette; Paini, Alicia; Perkins, Edward; Sobanski, Tomasz; Villeneuve, Dan; Waters, Katrina M; Whelan, Maurice

    2017-02-01

    Efforts are underway to transform regulatory toxicology and chemical safety assessment from a largely empirical science based on direct observation of apical toxicity outcomes in whole organism toxicity tests to a predictive one in which outcomes and risk are inferred from accumulated mechanistic understanding. The adverse outcome pathway (AOP) framework provides a systematic approach for organizing knowledge that may support such inference. Likewise, computational models of biological systems at various scales provide another means and platform to integrate current biological understanding to facilitate inference and extrapolation. We argue that the systematic organization of knowledge into AOP frameworks can inform and help direct the design and development of computational prediction models that can further enhance the utility of mechanistic and in silico data for chemical safety assessment. This concept was explored as part of a workshop on AOP-Informed Predictive Modeling Approaches for Regulatory Toxicology held September 24-25, 2015. Examples of AOP-informed model development and its application to the assessment of chemicals for skin sensitization and multiple modes of endocrine disruption are provided. The role of problem formulation, not only as a critical phase of risk assessment, but also as guide for both AOP and complementary model development is described. Finally, a proposal for actively engaging the modeling community in AOP-informed computational model development is made. The contents serve as a vision for how AOPs can be leveraged to facilitate development of computational prediction models needed to support the next generation of chemical safety assessment. © The Author 2016. Published by Oxford University Press on behalf of the Society of Toxicology.

  4. The liquid–liquid coexistence curves of {benzonitrile + n-pentadecane} and {benzonitrile + n-heptadecane} in the critical region

    International Nuclear Information System (INIS)

    Chen, Zhiyun; Bai, Yongliang; Yin, Tianxiang; An, Xueqin; Shen, Weiguo

    2012-01-01

    Highlights: ► Coexistence curves of (benzonitrile + n-pentadecane) and (benzonitrile + n-heptadecane) were measured. ► The values of the critical exponent β are consistent with the value predicted by the 3D-Ising model. ► The coexistence curves are well described by the critical crossover model. ► The asymmetries of the diameters of the coexistence curves were discussed using the complete scaling theory. - Abstract: Liquid + liquid coexistence curves for the binary solutions of {benzonitrile + n-pentadecane} and {benzonitrile + n-heptadecane} have been measured in the critical region. The critical exponent β and the critical amplitudes have been deduced, and the former is consistent with the theoretical prediction. It was found that the coexistence curves are well described by the crossover model proposed by Gutkowski et al. The asymmetries of the diameters of the coexistence curves were also discussed within the framework of the complete scaling theory.
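
    To illustrate how a critical exponent such as β is typically extracted from coexistence-curve data, the sketch below fits a power law to synthetic order-parameter data in log-log space. The data, amplitude and fitting choice are invented for illustration and are not taken from the paper.

```python
import numpy as np

beta_true = 0.326               # 3D-Ising value of the critical exponent
B = 1.8                         # critical amplitude (arbitrary choice)
t = np.logspace(-4, -2, 20)     # reduced temperature (Tc - T)/Tc
delta_x = B * t**beta_true      # order parameter: composition difference

# A pure power law is a straight line in log-log space:
# log(delta_x) = log(B) + beta * log(t)
slope, intercept = np.polyfit(np.log(t), np.log(delta_x), 1)
print(f"fitted beta = {slope:.3f}, amplitude B = {np.exp(intercept):.3f}")
```

    In practice the fit is restricted to a window of reduced temperatures close enough to the critical point that correction-to-scaling terms are negligible.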

  5. Model complexity control for hydrologic prediction

    NARCIS (Netherlands)

    Schoups, G.; Van de Giesen, N.C.; Savenije, H.H.G.

    2008-01-01

    A common concern in hydrologic modeling is overparameterization of complex models given limited and noisy data. This leads to problems of parameter nonuniqueness and equifinality, which may negatively affect prediction uncertainties. A systematic way of controlling model complexity is therefore needed.

  6. Improving predictions of tropical forest response to climate change through integration of field studies and ecosystem modeling

    Science.gov (United States)

    Feng, Xiaohui; Uriarte, María; González, Grizelle; Reed, Sasha C.; Thompson, Jill; Zimmerman, Jess K.; Murphy, Lora

    2018-01-01

    Tropical forests play a critical role in carbon and water cycles at a global scale. Rapid climate change is anticipated in tropical regions over the coming decades and, under a warmer and drier climate, tropical forests are likely to be net sources of carbon rather than sinks. However, our understanding of tropical forest response and feedback to climate change is very limited. Efforts to model climate change impacts on carbon fluxes in tropical forests have not reached a consensus. Here we use the Ecosystem Demography model (ED2) to predict carbon fluxes of a Puerto Rican tropical forest under realistic climate change scenarios. We parameterized ED2 with species-specific tree physiological data using the Predictive Ecosystem Analyzer workflow and projected the fate of this ecosystem under five future climate scenarios. The model successfully captured inter-annual variability in the dynamics of this tropical forest. Model predictions closely followed observed values across a wide range of metrics including above-ground biomass, tree diameter growth, tree size class distributions, and leaf area index. Under a future warming and drying climate scenario, the model predicted reductions in carbon storage and tree growth, together with large shifts in forest community composition and structure. Such rapid changes in climate led the forest to transition from a sink to a source of carbon. Growth respiration and root allocation parameters were responsible for the highest fraction of predictive uncertainty in modeled biomass, highlighting the need to target these processes in future data collection. Our study is the first effort to rely on Bayesian model calibration and synthesis to elucidate the key physiological parameters that drive uncertainty in tropical forests responses to climatic change. We propose a new path forward for model-data synthesis that can substantially reduce uncertainty in our ability to model tropical forest responses to future climate.

  7. Modeling chiral criticality and its consequences for heavy-ion collisions

    Energy Technology Data Exchange (ETDEWEB)

    Almási, Gábor András, E-mail: g.almasi@gsi.de [Gesellschaft für Schwerionenforschung, GSI, D-64291 Darmstadt (Germany); Friman, Bengt, E-mail: b.friman@gsi.de [Gesellschaft für Schwerionenforschung, GSI, D-64291 Darmstadt (Germany); ExtreMe Matter Institute (EMMI), D-64291 Darmstadt (Germany); Redlich, Krzysztof, E-mail: krzysztof.redlich@ift.uni.wroc.pl [ExtreMe Matter Institute (EMMI), D-64291 Darmstadt (Germany); University of Wrocław - Faculty of Physics and Astronomy, PL-50-204 Wrocław (Poland); Department of Physics, Duke University, Durham, NC 27708 (United States)

    2016-12-15

    We explore the critical fluctuations near the chiral critical endpoint (CEP) in a chiral effective model and discuss possible signals of the CEP, recently explored experimentally in nuclear collisions. Particular attention is paid to the dependence of such signals on the location of the phase boundary and the CEP relative to the chemical freeze-out conditions in nuclear collisions. We argue that in effective models, standard freeze-out fits to heavy-ion data should not be used directly. Instead, the relevant quantities should be examined on lines in the phase diagram that are defined self-consistently, within the framework of the model. We discuss possible choices for such an approach.

  8. Predictive Model of Systemic Toxicity (SOT)

    Science.gov (United States)

    In an effort to ensure chemical safety in light of regulatory advances away from reliance on animal testing, USEPA and L’Oréal have collaborated to develop a quantitative systemic toxicity prediction model. Prediction of human systemic toxicity has proved difficult and remains a ...

  9. A New Multiaxial High-Cycle Fatigue Criterion Based on the Critical Plane for Ductile and Brittle Materials

    Science.gov (United States)

    Wang, Cong; Shang, De-Guang; Wang, Xiao-Wei

    2015-02-01

    An improved high-cycle multiaxial fatigue criterion based on the critical plane was proposed in this paper. The critical plane was defined as the plane of maximum shear stress (MSS), which differs from the traditional critical plane based on the MSS amplitude. The proposed criterion was extended into a fatigue life prediction model applicable to both ductile and brittle materials. The fatigue life prediction model based on the proposed criterion was validated with experimental results from tests of 7075-T651 aluminum alloy and with data from the literature.
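
    The critical-plane quantity at the heart of such a criterion can be illustrated numerically: for a symmetric stress tensor, the maximum shear stress is half the difference between the largest and smallest principal stresses, and the MSS plane bisects the corresponding principal directions. The stress values below are invented for illustration; the paper's full amplitude-based criterion is not reproduced.

```python
import numpy as np

# An assumed plane-stress state, in MPa
sigma = np.array([[200.0,  50.0, 0.0],
                  [ 50.0, 100.0, 0.0],
                  [  0.0,   0.0, 0.0]])

# Principal stresses of the symmetric tensor, sorted s1 >= s2 >= s3
principals = np.sort(np.linalg.eigvalsh(sigma))[::-1]
# Maximum shear stress; the MSS plane bisects the 1st and 3rd principal axes
tau_max = 0.5 * (principals[0] - principals[-1])
print(f"principal stresses (MPa): {np.round(principals, 2)}, "
      f"tau_max = {tau_max:.2f} MPa")
```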

  10. Using Pareto points for model identification in predictive toxicology

    Science.gov (United States)

    2013-01-01

    Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design or food protection to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature associated with the best practice of model generation and data integration but management and automated identification of relevant models from available collections of models is still an open problem. Currently, the decision on which model should be used for a new chemical compound is left to users. This paper intends to initiate the discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show a great potential for automated model identification methods in predictive toxicology. PMID:23517649
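
    A minimal sketch of the Pareto-dominance idea behind such automated model identification: each candidate model is scored on two objectives (here, hypothetically, predictive accuracy and applicability-domain coverage), and only models not dominated in both objectives survive. The model names and scores are invented; the paper's actual objectives and algorithm are not reproduced.

```python
# Candidate models scored as (accuracy, coverage), both to be maximised
models = {
    "M1": (0.80, 0.60),
    "M2": (0.75, 0.90),
    "M3": (0.70, 0.50),   # worse than M1 on both objectives
    "M4": (0.85, 0.55),
}

def pareto_front(scores):
    """Return models not dominated in both objectives by any other model."""
    front = []
    for name, (a, c) in scores.items():
        dominated = any(a2 >= a and c2 >= c and (a2, c2) != (a, c)
                        for a2, c2 in scores.values())
        if not dominated:
            front.append(name)
    return sorted(front)

print(pareto_front(models))  # → ['M1', 'M2', 'M4']
```

    With these scores M3 is dominated by M1 on both objectives, so the identified front consists of the remaining three models; a final selection rule would then pick one model from the front for the query compound.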

  11. Physical and JIT Model Based Hybrid Modeling Approach for Building Thermal Load Prediction

    Science.gov (United States)

    Iino, Yutaka; Murai, Masahiko; Murayama, Dai; Motoyama, Ichiro

    Energy conservation in buildings is a key environmental issue, alongside conservation in the industrial, transportation and residential sectors. HVAC (Heating, Ventilating and Air Conditioning) systems account for about half of a building's total energy consumption, so a thermal load prediction model for the building is required to realize HVAC energy conservation. This paper proposes a hybrid modeling approach combining a physical model with a Just-in-Time (JIT) model for building thermal load prediction. The proposed method has the following features and benefits: (1) it is applicable even when past operation data for training the load prediction model are scarce; (2) it has a self-checking function that continuously verifies whether the data-driven and physics-based load predictions are consistent, so it can detect when something is wrong in the load prediction procedure; and (3) it can adjust the load prediction in real time against sudden changes in model parameters and environmental conditions. The proposed method is evaluated with real operation data from an existing building, and the improvement in load prediction performance is illustrated.
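
    The Just-in-Time (lazy learning) half of such a hybrid can be sketched as a local model built on demand from the nearest stored operating conditions. Everything below (the variables, the toy data, and the use of a simple k-nearest-neighbour average) is an assumption for illustration, not the authors' implementation.

```python
import numpy as np

# Toy history of (outdoor temperature, thermal load) operating points
history_temp = np.array([20.0, 22.0, 25.0, 28.0, 30.0, 32.0])  # degC
history_load = np.array([40.0, 45.0, 55.0, 70.0, 80.0, 90.0])  # kW

def jit_predict(query_temp, k=3):
    """Lazy-learning estimate: mean load of the k nearest stored conditions."""
    idx = np.argsort(np.abs(history_temp - query_temp))[:k]
    return history_load[idx].mean()

print(jit_predict(27.0))  # averages the loads observed at 28, 25 and 30 degC
```

    In the hybrid scheme described above, the physical model would supply the prediction where the operating history is sparse, and a consistency check between the two estimates would flag anomalies.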

  12. Analysis on the Critical Rainfall Value For Predicting Large Scale Landslides Caused by Heavy Rainfall In Taiwan.

    Science.gov (United States)

    Tsai, Kuang-Jung; Chiang, Jie-Lun; Lee, Ming-Hsi; Chen, Yie-Ruey

    2017-04-01

    The accumulated rainfall brought by Typhoon Morakot in August 2009 exceeded 2,900 mm within three consecutive days. Very serious landslides and sediment-related disasters were induced by this heavy rainfall event. A satellite image analysis project conducted by the Soil and Water Conservation Bureau after the Morakot event identified more than 10,904 landslide sites with a total sliding area of 18,113 ha. At the same time, all severe sediment-related disaster areas were characterized by disaster type, scale, topography, major bedrock formations and geologic structures during the extremely heavy rainfall events that occurred in southern Taiwan. The characteristics and mechanisms of large-scale landslides were collected on the basis of field investigation integrated with GPS/GIS/RS techniques. To decrease the risk of large-scale landslides on slope land, a slope-land conservation strategy and a critical rainfall database should be established and implemented as soon as possible. Meanwhile, establishing a critical rainfall value for predicting large-scale landslides induced by heavy rainfall has become an important issue of serious concern to the government and the people of Taiwan. The mechanisms of large-scale landslides, rainfall frequency analysis, sediment budget estimation and river hydraulic analysis under extreme climate change during the past 10 years would be seriously concerned and recognized as a required issue by this

  13. Scaling, phase transitions, and nonuniversality in a self-organized critical cellular-automaton model

    International Nuclear Information System (INIS)

    Christensen, K.; Olami, Z.

    1992-01-01

    We present a two-dimensional continuous cellular automaton that is equivalent to a driven spring-block model. Both the conservation and the anisotropy in the model are controllable quantities. Above a critical level of conservation, the model exhibits self-organized criticality. The self-organization of this system and hence the critical exponents depend on the conservation and the boundary conditions. In the critical isotropic nonconservative phase, the exponents change continuously as a function of conservation. Furthermore, the exponents vary continuously when changing the boundary conditions smoothly. Consequently, there is no universality of the critical exponents. We discuss the relevance of this for earthquakes. Introducing anisotropy changes the scaling of the distribution function, but not the power-law exponent. We explore the phase diagram of this model. We find that at low conservation levels a localization transition occurs. We see two additional phase transitions. The first is seen when moving from the conservative into the nonconservative model. The second appears when passing from the anisotropic two-dimensional system to the purely one-dimensional system
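
    A rough sketch of such a nonconservative continuous cellular automaton, in the spirit of the Olami-Feder-Christensen spring-block model: each site accumulates force under uniform driving, and a toppling site passes a fraction alpha of its force to each neighbour, so alpha < 0.25 gives the nonconservative regime. The lattice size, parameters and driving scheme below are illustrative assumptions, not the authors' exact model.

```python
import numpy as np

rng = np.random.default_rng(1)
L, Fc, alpha = 16, 1.0, 0.2        # lattice size, threshold, transfer fraction
F = rng.uniform(0.0, Fc, (L, L))   # random initial forces

def avalanche(F):
    """Uniformly drive to threshold, relax, and return the avalanche size."""
    F += Fc - F.max()              # raise all sites until one reaches Fc
    size = 0
    while True:
        unstable = np.argwhere(F >= Fc - 1e-9)
        if len(unstable) == 0:
            return size
        for i, j in unstable:
            f = F[i, j]
            F[i, j] = 0.0          # toppling site resets; excess dissipates
            size += 1
            for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if 0 <= ni < L and 0 <= nj < L:   # open (dissipative) boundary
                    F[ni, nj] += alpha * f        # each neighbour gains alpha*f

sizes = [avalanche(F) for _ in range(200)]
print(f"mean avalanche size over 200 events: {np.mean(sizes):.2f}")
```

    The statistic of interest in such studies is the distribution of `sizes`, whose power-law exponent, per the abstract, varies continuously with the conservation level alpha and the boundary conditions.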

  14. Dynamical Response near Quantum Critical Points.

    Science.gov (United States)

    Lucas, Andrew; Gazit, Snir; Podolsky, Daniel; Witczak-Krempa, William

    2017-02-03

    We study high-frequency response functions, notably the optical conductivity, in the vicinity of quantum critical points (QCPs) by allowing for both detuning from the critical coupling and finite temperature. We consider general dimensions and dynamical exponents. This leads to a unified understanding of sum rules. In systems with emergent Lorentz invariance, powerful methods from quantum field theory allow us to fix the high-frequency response in terms of universal coefficients. We test our predictions analytically in the large-N O(N) model and using the gauge-gravity duality and numerically via quantum Monte Carlo simulations on a lattice model hosting the interacting superfluid-insulator QCP. In superfluid phases, interacting Goldstone bosons qualitatively change the high-frequency optical conductivity and the corresponding sum rule.

  15. Higgs inflation at the critical point

    CERN Document Server

    Bezrukov, Fedor

    2014-01-01

    Higgs inflation can occur if the Standard Model (SM) is a self-consistent effective field theory up to the inflationary scale. This leads to a lower bound on the Higgs boson mass, $M_h \geq M_{\text{crit}}$. If $M_h$ exceeds the critical value by more than a few hundred MeV, Higgs inflation predicts universal values of the inflationary indexes, $r \simeq 0.003$ and $n_s \simeq 0.97$, independently of the Standard Model parameters. We show that in the vicinity of the critical point $M_{\text{crit}}$ the inflationary indexes acquire an essential dependence on the mass of the top quark $m_t$ and on $M_h$. In particular, the amplitude of the gravitational waves can considerably exceed the universal value.

  16. Critical discharge of initially subcooled water through slits. [PWR; BWR

    Energy Technology Data Exchange (ETDEWEB)

    Amos, C N; Schrock, V E

    1983-09-01

    This report describes an experimental investigation into the critical flow of initially subcooled water through rectangular slits. The study of such flows is relevant to the prediction of leak flow rates from cracks in piping, or pressure vessels, which contain sufficient enthalpy that vaporization will occur if they are allowed to expand to the ambient pressure. Two new analytical models, which allow for the generation of a metastable liquid phase, are developed. Experimental results are compared with the predictions of both these new models and with a Fanno Homogeneous Equilibrium Model.

  17. Model output statistics applied to wind power prediction

    Energy Technology Data Exchange (ETDEWEB)

    Joensen, A; Giebel, G; Landberg, L [Risoe National Lab., Roskilde (Denmark); Madsen, H; Nielsen, H A [The Technical Univ. of Denmark, Dept. of Mathematical Modelling, Lyngby (Denmark)

    1999-03-01

    Being able to predict the output of a wind farm online for a day or two in advance has significant advantages for utilities, such as a better ability to schedule fossil-fuelled power plants and a better position on electricity spot markets. In this paper prediction methods based on Numerical Weather Prediction (NWP) models are considered. The spatial resolution used in NWP models implies that these predictions are not valid locally at a specific wind farm. Furthermore, due to the non-stationary nature and complexity of the processes in the atmosphere, and occasional changes of NWP models, the deviation between the predicted and the measured wind will be time dependent. If observational data are available, and if the deviation between the predictions and the observations exhibits systematic behavior, this should be corrected for; if statistical methods are used, this approach is usually referred to as MOS (Model Output Statistics). The influence of atmospheric turbulence intensity, topography, prediction horizon length and auto-correlation of wind speed and power is considered, and to take the time variations into account, adaptive estimation methods are applied. Three estimation techniques are considered and compared: extended Kalman filtering, recursive least squares, and a new modified recursive least squares algorithm. (au) EU-JOULE-3. 11 refs.
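
    The adaptive-estimation step can be sketched with a standard recursive least squares (RLS) filter with exponential forgetting, tracking a linear correction between NWP forecasts and observations. The synthetic data, the forgetting factor and the two-parameter correction are assumptions for illustration; the paper's modified RLS variant is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 0.98                        # forgetting factor (< 1 discounts old data)
theta = np.zeros(2)               # [offset, slope] of the MOS correction
P = np.eye(2) * 1000.0            # large initial covariance: weak prior

for t in range(500):
    nwp = rng.uniform(3.0, 15.0)                 # NWP wind-speed forecast, m/s
    obs = 1.0 + 0.8 * nwp + rng.normal(0, 0.3)   # "measured" wind with bias
    x = np.array([1.0, nwp])                     # regressor [1, forecast]
    # Standard RLS update with exponential forgetting
    k = P @ x / (lam + x @ P @ x)                # gain vector
    theta = theta + k * (obs - x @ theta)        # correct parameters
    P = (P - np.outer(k, x @ P)) / lam           # update covariance

print(f"estimated correction: offset={theta[0]:.2f}, slope={theta[1]:.2f}")
```

    The forgetting factor gives the filter an effective memory of roughly 1/(1 - lam) samples, which is what lets it follow the time-dependent forecast bias the abstract describes.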

  18. Predictive models of glucose control: roles for glucose-sensing neurones

    Science.gov (United States)

    Kosse, C.; Gonzalez, A.; Burdakov, D.

    2018-01-01

    The brain can be viewed as a sophisticated control module for stabilizing blood glucose. A review of classical behavioural evidence indicates that central circuits add predictive (feedforward/anticipatory) control to the reactive (feedback/compensatory) control by peripheral organs. The brain/cephalic control is constructed and engaged, via associative learning, by sensory cues predicting energy intake or expenditure (e.g. sight, smell, taste, sound). This allows rapidly measurable sensory information (rather than slowly generated internal feedback signals, e.g. digested nutrients) to control food selection, glucose supply for fight-or-flight responses or preparedness for digestion/absorption. Predictive control is therefore useful for preventing large glucose fluctuations. We review emerging roles in predictive control of two classes of widely projecting hypothalamic neurones, orexin/hypocretin (ORX) and melanin-concentrating hormone (MCH) cells. Evidence is cited that ORX neurones (i) are activated by sensory cues (e.g. taste, sound), (ii) drive hepatic production, and muscle uptake, of glucose, via sympathetic nerves, (iii) stimulate wakefulness and exploration via global brain projections and (iv) are glucose-inhibited. MCH neurones are (i) glucose-excited, (ii) innervate learning and reward centres to promote synaptic plasticity, learning and memory and (iii) are critical for learning associations useful for predictive control (e.g. using taste to predict nutrient value of food). This evidence is unified into a model for predictive glucose control. During associative learning, inputs from some glucose-excited neurones may promote connections between the ‘fast’ senses and reward circuits, constructing neural shortcuts for efficient action selection. In turn, glucose-inhibited neurones may engage locomotion/exploration and coordinate the required fuel supply. Feedback inhibition of the latter neurones by glucose would ensure that glucose fluxes they

  19. Predictive models of glucose control: roles for glucose-sensing neurones.

    Science.gov (United States)

    Kosse, C; Gonzalez, A; Burdakov, D

    2015-01-01

    The brain can be viewed as a sophisticated control module for stabilizing blood glucose. A review of classical behavioural evidence indicates that central circuits add predictive (feedforward/anticipatory) control to the reactive (feedback/compensatory) control by peripheral organs. The brain/cephalic control is constructed and engaged, via associative learning, by sensory cues predicting energy intake or expenditure (e.g. sight, smell, taste, sound). This allows rapidly measurable sensory information (rather than slowly generated internal feedback signals, e.g. digested nutrients) to control food selection, glucose supply for fight-or-flight responses or preparedness for digestion/absorption. Predictive control is therefore useful for preventing large glucose fluctuations. We review emerging roles in predictive control of two classes of widely projecting hypothalamic neurones, orexin/hypocretin (ORX) and melanin-concentrating hormone (MCH) cells. Evidence is cited that ORX neurones (i) are activated by sensory cues (e.g. taste, sound), (ii) drive hepatic production, and muscle uptake, of glucose, via sympathetic nerves, (iii) stimulate wakefulness and exploration via global brain projections and (iv) are glucose-inhibited. MCH neurones are (i) glucose-excited, (ii) innervate learning and reward centres to promote synaptic plasticity, learning and memory and (iii) are critical for learning associations useful for predictive control (e.g. using taste to predict nutrient value of food). This evidence is unified into a model for predictive glucose control. During associative learning, inputs from some glucose-excited neurones may promote connections between the 'fast' senses and reward circuits, constructing neural shortcuts for efficient action selection. In turn, glucose-inhibited neurones may engage locomotion/exploration and coordinate the required fuel supply. Feedback inhibition of the latter neurones by glucose would ensure that glucose fluxes they stimulate

  20. Development of a prediction model for the cost saving potentials in implementing the building energy efficiency rating certification

    International Nuclear Information System (INIS)

    Jeong, Jaewook; Hong, Taehoon; Ji, Changyoon; Kim, Jimin; Lee, Minhyun; Jeong, Kwangbok; Koo, Choongwan

    2017-01-01

    Highlights: • This study evaluates the building energy efficiency rating (BEER) certification. • A prediction model was developed for the cost saving potentials of the BEER certification. • The prediction model was developed using LCC analysis, ROV, and Monte Carlo simulation. • The cost saving potential was predicted to be 2.78–3.77% of the construction cost. • The cost saving potential can be used for estimating the investment value of BEER. - Abstract: Building energy efficiency rating (BEER) certification is an energy performance certificate (EPC) scheme in South Korea. It is critical to examine the cost saving potentials of the BEER-certification in advance. This study aimed to develop a prediction model for the cost saving potentials in implementing the BEER-certification, in which the cost saving potentials included the energy cost savings of the BEER-certification and the associated CO_2 emissions reduction, as well as the additional construction cost for the BEER-certification. The prediction model was developed using data mining, life cycle cost analysis, real option valuation, and Monte Carlo simulation. A database was established with 437 multi-family housing complexes (MFHCs), including 116 BEER-certified MFHCs and 321 non-certified MFHCs. A case study was conducted to validate the developed prediction model using the 321 non-certified MFHCs over a 20-year life cycle. As a result, compared to the additional construction cost, the average cost saving potentials of the 1st-BEER-certified MFHCs in Groups 1, 2, and 3 were predicted to be 3.77%, 2.78%, and 2.87%, respectively. The cost saving potentials can be used as a guideline for the additional construction cost of the BEER-certification in the early design phase.
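
    The Monte Carlo step of such a cost-saving analysis can be sketched as follows: simulate the 20-year present value of energy-cost savings under an uncertain energy-price escalation rate and express it as a share of the construction cost. All numbers are invented placeholders, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(42)
construction_cost = 10_000_000        # assumed base construction cost
annual_saving = 15_000                # assumed first-year energy-cost saving
discount = 0.03                       # assumed discount rate
years = 20                            # life cycle considered in the study

n = 10_000
escalation = rng.normal(0.02, 0.01, size=n)   # uncertain price escalation rate
t = np.arange(1, years + 1)
# Present value of escalating annual savings, one row per simulation run
pv = (annual_saving * (1 + escalation[:, None])**t / (1 + discount)**t).sum(axis=1)
ratio = pv / construction_cost * 100          # saving potential, % of cost

print(f"mean saving potential: {ratio.mean():.2f}% of construction cost")
```

    The full model in the paper layers real option valuation on top of such simulated cash flows; the sketch above only shows how a distribution of saving potentials, comparable to the reported 2.78–3.77% range, could be generated.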