WorldWideScience

Sample records for model predicts critical

  1. Critical exponents predicted by grouping of Feynman diagrams in φ4 model

    International Nuclear Information System (INIS)

    Kaupuzs, J.

    2001-01-01

    Different perturbation theory treatments of the Ginzburg-Landau phase transition model are discussed. This includes a criticism of the perturbative renormalization group (RG) approach and a proposal of a novel method providing critical exponents consistent with the known exact solutions in two dimensions. The usual perturbation theory is reorganized by appropriate grouping of Feynman diagrams of the φ4 model with O(n) symmetry. As a result, equations for the calculation of the two-point correlation function are obtained which allow one to predict possible exact values of critical exponents in two and three dimensions by proving relevant scaling properties of the asymptotic solution at (and near) criticality. The new values of critical exponents are discussed and compared to the results of numerical simulations and experiments. (orig.)

  2. A new risk prediction model for critical care: the Intensive Care National Audit & Research Centre (ICNARC) model.

    Science.gov (United States)

    Harrison, David A; Parry, Gareth J; Carpenter, James R; Short, Alasdair; Rowan, Kathy

    2007-04-01

    To develop a new model to improve risk prediction for admissions to adult critical care units in the UK. Prospective cohort study. The setting was 163 adult, general critical care units in England, Wales, and Northern Ireland, December 1995 to August 2003. Patients were 216,626 critical care admissions. None. The performance of different approaches to modeling physiologic measurements was evaluated, and the best methods were selected to produce a new physiology score. This physiology score was combined with other information relating to the critical care admission (age, diagnostic category, source of admission, and cardiopulmonary resuscitation before admission) to develop a risk prediction model. Modeling interactions between diagnostic category and physiology score enabled the inclusion of groups of admissions that are frequently excluded from risk prediction models. The new model showed good discrimination (mean c index 0.870) and fit (mean Shapiro's R 0.665, mean Brier's score 0.132) in 200 repeated validation samples and performed well when compared with recalibrated versions of existing published risk prediction models in the cohort of patients eligible for all models. The hypothesis of perfect fit was rejected for all models, including the Intensive Care National Audit & Research Centre (ICNARC) model, as is to be expected in such a large cohort. The ICNARC model demonstrated better discrimination and overall fit than existing risk prediction models, even following recalibration of these models. We recommend it be used to replace previously published models for risk adjustment in the UK.
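
    The record reports discrimination as a c index and overall fit via Brier's score. As a rough illustration of how such validation metrics are computed for a logistic risk model, the sketch below uses synthetic data and scikit-learn; the predictors and coefficients are stand-ins, not the ICNARC variables.

```python
# Illustrative sketch (not the ICNARC implementation): computing the discrimination
# (c index / ROC AUC) and Brier score reported for a risk prediction model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, brier_score_loss
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 4))                      # stand-ins for physiology score, age, etc.
logit = 0.9 * X[:, 0] + 0.5 * X[:, 1] - 1.5
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))    # synthetic hospital mortality outcome

X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_dev, y_dev)
p_val = model.predict_proba(X_val)[:, 1]

print("c index (AUC):", roc_auc_score(y_val, p_val))
print("Brier score:  ", brier_score_loss(y_val, p_val))
```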

  3. Method of critical power prediction based on film flow model coupled with subchannel analysis

    International Nuclear Information System (INIS)

    Tomiyama, Akio; Yokomizo, Osamu; Yoshimoto, Yuichiro; Sugawara, Satoshi.

    1988-01-01

    A new method was developed to predict critical powers for a wide variety of BWR fuel bundle designs. This method couples subchannel analysis with a liquid film flow model, instead of the conventional approach, which couples subchannel analysis with critical heat flux correlations. Flow and quality distributions in a bundle are estimated by the subchannel analysis. Using these distributions, film flow rates along fuel rods are then calculated with the film flow model. Dryout is assumed to occur where one of the film flows disappears. This method is expected to give much better adaptability to variations in geometry, heat flux, flow rate and quality distributions than the conventional methods. In order to verify the method, critical power data under BWR conditions were analyzed. Measured and calculated critical powers agreed to within ±7%. Furthermore, critical power data for a tight-latticed bundle obtained by LeTourneau et al. were compared with critical powers calculated by the present method and two conventional methods, the CISE correlation and subchannel analysis coupled with the CISE correlation. It was confirmed that the present method can predict critical powers more accurately than the conventional methods. (author)

  4. A Critical Plane-energy Model for Multiaxial Fatigue Life Prediction of Homogeneous and Heterogeneous Materials

    Science.gov (United States)

    Wei, Haoyang

    A new critical plane-energy model is proposed in this thesis for multiaxial fatigue life prediction of homogeneous and heterogeneous materials. A brief review of existing methods, especially critical plane-based and energy-based methods, is given first. Special focus is on one critical plane approach which has been shown to work for both brittle and ductile metals. The key idea is to automatically change the critical plane orientation with respect to different materials and stress states. One potential drawback of that model is that it needs an empirical calibration parameter for non-proportional multiaxial loadings, since only the strain terms are used and the out-of-phase hardening cannot be considered. The energy-based model using the critical plane concept is proposed with the help of the Mroz-Garud hardening rule to explicitly include the effect of non-proportional hardening under cyclic fatigue loadings. Thus, the empirical calibration for non-proportional loading is not needed, since the out-of-phase hardening is naturally included in the stress calculation. The model predictions are compared with experimental data from the open literature, and it is shown that the proposed model works for both proportional and non-proportional loadings without the empirical calibration. Next, the model is extended to the fatigue analysis of heterogeneous materials by integrating it with the finite element method. Fatigue crack initiation in a representative volume of heterogeneous material is analyzed using the developed critical plane-energy model, with special focus on the effect of microstructure on the multiaxial fatigue life predictions. Several conclusions are drawn and future work is outlined based on the proposed study.
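
    The critical plane idea described above can be illustrated with a small numeric sketch: resolve an in-plane strain history onto candidate plane orientations and keep the plane that maximizes a damage parameter. The strain history, the Poisson factor and the Fatemi-Socie-type parameter with k = 0.5 below are assumptions for illustration only, not the thesis model.

```python
# Minimal sketch of a critical plane search (illustrative, not the thesis model):
# scan candidate plane orientations for an in-plane strain history and pick the
# plane that maximizes a generic shear-based damage parameter.
import numpy as np

def plane_strains(eps_xx, eps_yy, gam_xy, theta):
    """Normal strain and half shear strain resolved on a plane rotated by theta (rad)."""
    c2, s2 = np.cos(2 * theta), np.sin(2 * theta)
    eps_n = 0.5 * (eps_xx + eps_yy) + 0.5 * (eps_xx - eps_yy) * c2 + 0.5 * gam_xy * s2
    gam_half = -0.5 * (eps_xx - eps_yy) * s2 + 0.5 * gam_xy * c2
    return eps_n, gam_half

# synthetic non-proportional strain history (90 deg out-of-phase axial/torsional)
t = np.linspace(0.0, 1.0, 361)
eps_xx = 0.004 * np.sin(2 * np.pi * t)
eps_yy = -0.3 * eps_xx                               # assumed effective Poisson contraction
gam_xy = 0.006 * np.sin(2 * np.pi * t + np.pi / 2)

best = (None, -np.inf)
for theta in np.radians(np.arange(0.0, 180.0, 1.0)):
    eps_n, gam_half = plane_strains(eps_xx, eps_yy, gam_xy, theta)
    gamma_amp = 0.5 * (gam_half.max() - gam_half.min())   # shear strain amplitude on the plane
    eps_n_max = eps_n.max()                               # maximum normal strain on the plane
    damage = gamma_amp * (1.0 + 0.5 * eps_n_max)          # generic FS-type parameter, k = 0.5 assumed
    if damage > best[1]:
        best = (np.degrees(theta), damage)

print(f"critical plane ~ {best[0]:.0f} deg, damage parameter = {best[1]:.4e}")
```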

  5. Nuclear criticality predictability

    International Nuclear Information System (INIS)

    Briggs, J.B.

    1999-01-01

    As a result of these efforts, a large portion of the tedious and redundant research and processing of critical experiment data has been eliminated. The necessary step in criticality safety analyses of validating computer codes with benchmark critical data is greatly streamlined, and valuable criticality safety experimental data are preserved. Criticality safety personnel in 31 different countries are now using the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments'. Much has been accomplished by the work of the ICSBEP. However, evaluation and documentation represent only one element of a successful Nuclear Criticality Safety Predictability Program, and this element only exists as a separate entity because this work was not completed in conjunction with the experimentation process. I believe, however, that the work of the ICSBEP has also served to unify the other elements of nuclear criticality predictability. All elements are interrelated, but for a time it seemed that communication between these elements was not adequate. The ICSBEP has highlighted gaps in data, has retrieved lost data, has helped to identify errors in cross section processing codes, and has helped bring the international criticality safety community together in a common cause as true friends and colleagues. It has been a privilege to associate with those who work so diligently to make the project a success. (J.P.N.)

  6. Criticality Model

    International Nuclear Information System (INIS)

    Alsaed, A.

    2004-01-01

    The ''Disposal Criticality Analysis Methodology Topical Report'' (YMP 2003) presents the methodology for evaluating potential criticality situations in the monitored geologic repository. As stated in the referenced Topical Report, the detailed methodology for performing the disposal criticality analyses will be documented in model reports. Many of the models developed in support of the Topical Report differ from the definition of models as given in the Office of Civilian Radioactive Waste Management procedure AP-SIII.10Q, ''Models'', in that they are procedural, rather than mathematical. These model reports document the detailed methodology necessary to implement the approach presented in the Disposal Criticality Analysis Methodology Topical Report and provide calculations utilizing the methodology. Thus, the governing procedure for this type of report is AP-3.12Q, ''Design Calculations and Analyses''. The ''Criticality Model'' is of this latter type, providing a process for evaluating the criticality potential of in-package and external configurations. The purpose of this analysis is to lay out the process for calculating the criticality potential for various in-package and external configurations and to calculate lower-bound tolerance limit (LBTL) values and determine range of applicability (ROA) parameters. The LBTL calculations and the ROA determinations are performed using selected benchmark experiments that are applicable to various waste forms and various in-package and external configurations. The waste forms considered in this calculation are pressurized water reactor (PWR), boiling water reactor (BWR), Fast Flux Test Facility (FFTF), Training Research Isotope General Atomic (TRIGA), Enrico Fermi, Shippingport pressurized water reactor, Shippingport light water breeder reactor (LWBR), N-Reactor, Melt and Dilute, and Fort Saint Vrain Reactor spent nuclear fuel (SNF). The scope of this analysis is to document the criticality computational method. The criticality

  7. Prediction model of critical weight loss in cancer patients during particle therapy.

    Science.gov (United States)

    Zhang, Zhihong; Zhu, Yu; Zhang, Lijuan; Wang, Ziying; Wan, Hongwei

    2018-01-01

    The objective of this study is to investigate the predictors of critical weight loss in cancer patients receiving particle therapy, and to build a prediction model based on these predictive factors. Patients receiving particle therapy were enrolled between June 2015 and June 2016. Body weight was measured at the start and end of particle therapy. Associations between critical weight loss (defined as >5%) during particle therapy and patients' demographic and clinical characteristics, pre-therapeutic nutritional risk screening (NRS 2002) and BMI were evaluated by logistic regression and decision tree analysis. Finally, 375 cancer patients receiving particle therapy were included. Mean weight loss was 0.55 kg, and 11.5% of patients experienced critical weight loss during particle therapy. The main predictors of critical weight loss during particle therapy were head and neck tumour location, total radiation dose ≥70 Gy on the primary tumour, and absence of prior surgery, as indicated by both logistic regression and decision tree analysis. The prediction model that included tumour location, total radiation dose and prior surgery had good predictive ability, with areas under the receiver operating characteristic curve of 0.79 (95% CI: 0.71-0.88) and 0.78 (95% CI: 0.69-0.86) for the decision tree and logistic regression models, respectively. Cancer patients with head and neck tumour location, total radiation dose ≥70 Gy and no prior surgery were at higher risk of critical weight loss during particle therapy, and early intensive nutrition counselling or intervention should be targeted at this population. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
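
    As a rough illustration of the two analysis methods named above (logistic regression and a decision tree evaluated by ROC AUC), the sketch below uses synthetic data built around the three reported risk factors; the prevalences and coefficients are assumed, not the study's.

```python
# Hedged sketch (synthetic data, not the study's records): comparing a logistic
# regression and a decision tree for predicting critical weight loss (>5%).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 375
head_neck = rng.binomial(1, 0.4, n)            # head and neck tumour location (assumed prevalence)
dose_ge_70 = rng.binomial(1, 0.3, n)           # total dose >= 70 Gy on the primary tumour
no_surgery = rng.binomial(1, 0.5, n)           # no prior surgery
logit = -3.5 + 1.2 * head_neck + 1.0 * dose_ge_70 + 0.8 * no_surgery
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # critical weight loss indicator

X = np.column_stack([head_neck, dose_ge_70, no_surgery])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

for name, clf in [("logistic", LogisticRegression()),
                  ("decision tree", DecisionTreeClassifier(max_depth=3))]:
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name:14s} AUC = {auc:.2f}")
```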

  8. Predicting critical transitions in dynamical systems from time series using nonstationary probability density modeling.

    Science.gov (United States)

    Kwasniok, Frank

    2013-11-01

    A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. A full systematic account of parameter uncertainty is taken. The technique is generic, independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to prediction of Arctic sea-ice extent.
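
    A minimal sketch of the general approach, under strong simplifying assumptions (a Gaussian density whose mean drifts linearly in time and a fixed variance, fitted by maximum likelihood and then extrapolated); the paper's actual parametric model and its treatment of parameter uncertainty are richer than this.

```python
# Illustrative sketch of nonstationary density modelling: fit a Gaussian with a
# linearly drifting mean by maximum likelihood, then extrapolate the parameters.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
t = np.arange(200.0)
x = 10.0 - 0.02 * t + rng.normal(0.0, 0.5, t.size)   # synthetic drifting time series

def neg_log_lik(params, t, x):
    a, b, log_sigma = params                 # mean mu(t) = a + b * t, fixed variance
    sigma = np.exp(log_sigma)
    mu = a + b * t
    return 0.5 * np.sum(((x - mu) / sigma) ** 2) + x.size * np.log(sigma)

res = minimize(neg_log_lik, x0=[x.mean(), 0.0, 0.0], args=(t, x))
a, b, log_sigma = res.x

# extrapolate the fitted density parameters beyond the observed record
t_future = 300.0
print("forecast mean at t=300:", a + b * t_future, "sigma:", np.exp(log_sigma))
```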

  9. Prediction of Critical Power and W' in Hypoxia: Application to Work-Balance Modelling.

    Science.gov (United States)

    Townsend, Nathan E; Nichols, David S; Skiba, Philip F; Racinais, Sebastien; Périard, Julien D

    2017-01-01

    Purpose: Develop a prediction equation for critical power (CP) and work above CP (W') in hypoxia for use in the work-balance (W'bal) model. Methods: Nine trained male cyclists completed cycling time trials (TT; 12, 7, and 3 min) to determine CP and W' at five altitudes (250, 1,250, 2,250, 3,250, and 4,250 m). Least squares regression was used to predict CP and W' at altitude. A high-intensity intermittent test (HIIT) was performed at 250 and 2,250 m. Actual and predicted CP and W' were used to compute W' balance during HIIT using the differential and integral forms of the W'bal model. Results: CP decreased at altitude. The prediction equations for CP and W' developed in this study are suitable for use with the W'bal model in acute hypoxia. This enables the application of W'bal modelling to training prescription and competition analysis at altitude.
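
    For readers unfamiliar with work-balance modelling, the sketch below implements one common differential (Skiba-type) formulation of W'bal; the CP and W' values and the power profile are illustrative assumptions, not the study's altitude-specific estimates.

```python
# Hedged sketch of a work-balance (W'bal) calculation using one common differential
# formulation; inputs are illustrative, not the study's regression estimates.
import numpy as np

def wbal_differential(power, cp, w_prime, dt=1.0):
    """Track remaining W' (J) over a power time series (W), sampled every dt seconds."""
    wbal = np.empty_like(power, dtype=float)
    w = w_prime
    for i, p in enumerate(power):
        if p >= cp:
            w -= (p - cp) * dt                               # expend W' when above CP
        else:
            w += (w_prime - w) * (cp - p) / w_prime * dt     # partial recovery below CP
        wbal[i] = w
    return wbal

# example: intermittent effort, with CP and W' lowered to mimic acute hypoxia (assumed values)
cp_alt, w_prime_alt = 230.0, 18000.0
power = np.r_[np.full(120, 300.0), np.full(180, 150.0), np.full(120, 320.0)]
print("minimum W'bal during the session:", wbal_differential(power, cp_alt, w_prime_alt).min())
```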

  10. [Establishment of comprehensive prediction model of acute gastrointestinal injury classification of critically ill patients].

    Science.gov (United States)

    Wang, Yan; Wang, Jianrong; Liu, Weiwei; Zhang, Guangliang

    2018-03-25

    To develop a comprehensive prediction model for acute gastrointestinal injury (AGI) grades of critically ill patients. From April 2015 to November 2015, the binary-channel gastrointestinal sounds (GIS) monitor system developed and verified by the research group was used to gather and analyze the GIS of 60 consecutive critically ill patients admitted to the Department of Critical Care Medicine of the Chinese PLA General Hospital. The AGI grades (grade I to IV; the higher the grade, the more severe the gastrointestinal dysfunction) were also evaluated. Meanwhile, the clinical data and physiological and biochemical indexes of the included patients were collected and recorded daily, including the illness severity score (APACHE II score, consisting of the acute physiology score, age grade and chronic health evaluation), sequential organ failure assessment (SOFA score, covering respiration, coagulation, liver, cardiovascular, central nervous system and kidney), Glasgow coma scale (GCS), body mass index, blood lactate and glucose, and treatment details (including mechanical ventilation, sedatives, vasoactive drugs, enteral nutrition, etc.). Then principal component analysis was performed on the significantly correlated GIS indexes (five indexes of gastrointestinal sounds were found to be negatively correlated with AGI grade: the number, percentage of time, mean power, maximum power and maximum time of the GIS wave from the channel located at the stomach) and clinical factors after standardization. The top 5 normalized principal components were selected for back-propagation (BP) neural network training, to establish a comprehensive AGI grade model of critically ill patients based on the neural network model. The 60 patients were aged 19 to 98 (mean 54.6) years and included 42 males (70.0%). There were 22 cases of multiple fractures, 15 cases of severe infection, 7 cases of cervical vertebral fracture, 7 cases of aortic repair, 5 cases of post-toxicosis and 4 cases of cerebral
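
    The described pipeline (standardize, keep the top five principal components, train a back-propagation neural network) can be sketched as follows with synthetic data; the feature count, network size and labels are assumptions rather than the study's implementation.

```python
# Rough sketch of the standardize -> PCA -> BP neural network pipeline (synthetic data).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n, n_features = 60, 12                       # ~60 patients, GIS plus clinical indexes (assumed)
X = rng.normal(size=(n, n_features))
agi_grade = rng.integers(1, 5, size=n)       # AGI grade I-IV encoded as labels 1..4

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=5),                     # top 5 principal components, as described
    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
)
print("cross-validated accuracy:", cross_val_score(model, X, agi_grade, cv=5).mean())
```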

  11. A mathematical model for predicting glucose levels in critically-ill patients: the PIGnOLI model

    Directory of Open Access Journals (Sweden)

    Zhongheng Zhang

    2015-06-01

    Background and Objectives. Glycemic control is of paramount importance in the intensive care unit. Presently, several BG control algorithms have been developed for clinical trials, but they are mostly based on experts’ opinion and consensus. There are no validated models predicting how glucose levels will change after initiation of insulin infusion in critically ill patients. The study aimed to develop an equation for initial insulin dose setting. Methods. A large critical care database was employed for the study. Linear regression model fitting was employed. Retested blood glucose was used as the independent variable. Insulin rate was forced into the model. Multivariable fractional polynomials and interaction terms were used to explore the complex relationships among covariates. The overall fit of the model was examined by using residuals and adjusted R-squared values. Regression diagnostics were used to explore the influence of outliers on the model. Main Results. A total of 6,487 ICU admissions requiring insulin pump therapy were identified. The dataset was randomly split into two subsets at a 7:3 ratio. The initial model comprised fractional polynomials and interaction terms. However, this model was not stable when several outliers were excluded. I fitted a simple linear model without interaction. The selected prediction model (Predicting Glucose Levels in ICU, PIGnOLI) included variables of initial blood glucose, insulin rate, PO volume, total parenteral nutrition, body mass index (BMI), lactate, congestive heart failure, renal failure, liver disease, time interval of BS recheck, and dextrose rate. Insulin rate was significantly associated with blood glucose reduction (coefficient: −0.52, 95% CI [−1.03, −0.01]). The parsimonious model was well validated with the validation subset, with an adjusted R-squared value of 0.8259. Conclusion. The study developed the PIGnOLI model for the initial insulin dose setting. Furthermore, experimental study is
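
    A minimal sketch of the modelling step with synthetic data: fit a linear model of retested blood glucose and invert it for an initial insulin-rate suggestion at a target glucose level. The covariates and coefficients are illustrative; the actual PIGnOLI model includes many more terms.

```python
# Illustrative linear-regression sketch (synthetic data, not the PIGnOLI fit).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n = 2000
bg0 = rng.normal(220, 40, n)                        # initial blood glucose (mg/dL)
insulin_rate = rng.uniform(0.5, 8.0, n)             # insulin infusion rate (units/h)
bmi = rng.normal(27, 4, n)
# synthetic outcome: retested BG falls with insulin rate (coefficients assumed)
bg_retest = 0.85 * bg0 - 6.0 * insulin_rate + 0.5 * bmi + rng.normal(0, 15, n)

X = np.column_stack([bg0, insulin_rate, bmi])
fit = LinearRegression().fit(X, bg_retest)
b0, (c_bg0, c_ins, c_bmi) = fit.intercept_, fit.coef_

# invert the fitted equation for an initial dose suggestion at a target retested BG
target, patient = 140.0, dict(bg0=260.0, bmi=30.0)
rate = (target - b0 - c_bg0 * patient["bg0"] - c_bmi * patient["bmi"]) / c_ins
print(f"suggested starting insulin rate ~ {rate:.1f} units/h")
```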

  12. Prediction model to predict critical weight loss in patients with head and neck cancer during (chemo)radiotherapy.

    Science.gov (United States)

    Langius, Jacqueline A E; Twisk, Jos; Kampman, Martine; Doornaert, Patricia; Kramer, Mark H H; Weijs, Peter J M; Leemans, C René

    2016-01-01

    Patients with head and neck cancer (HNC) frequently encounter weight loss with multiple negative outcomes as a consequence. Adequate treatment is best achieved by early identification of patients at risk for critical weight loss. The objective of this study was to detect predictive factors for critical weight loss in patients with HNC receiving (chemo)radiotherapy ((C)RT). In this cohort study, 910 patients with HNC receiving RT (±surgery/concurrent chemotherapy) with curative intent were included. Body weight was measured at the start and end of (C)RT. Logistic regression and classification and regression tree (CART) analyses were used to analyse predictive factors for critical weight loss (defined as >5%) during (C)RT. Possible predictors included gender, age, WHO performance status, tumour location, TNM classification, treatment modality, RT technique (three-dimensional conformal RT (3D-RT) vs intensity-modulated RT (IMRT)), total dose on the primary tumour and RT on the elective or macroscopic lymph nodes. At the end of (C)RT, mean weight loss was 5.1±4.9%. Fifty percent of patients had critical weight loss during (C)RT. The main predictors for critical weight loss during (C)RT by both logistic and CART analyses were RT on the lymph nodes, higher RT dose on the primary tumour, receiving 3D-RT instead of IMRT, and younger age. Critical weight loss during (C)RT was prevalent in half of HNC patients. To predict critical weight loss, a practical prediction tree for adequate nutritional advice was developed, including the risk factors RT to the neck, higher RT dose, 3D-RT, and younger age. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Thermal hydraulic test for reactor safety system - Critical heat flux experiment and development of prediction models

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Soon Heung; Baek, Won Pil; Yang, Soo Hyung; No, Chang Hyun [Korea Advanced Institute of Science and Technology, Taejon (Korea)

    2000-04-01

    Research was conducted to acquire CHF data through experiments and to develop prediction models. The final objectives of the research were as follows: 1) production of tube CHF data at low and middle pressure and mass flux, and flow boiling visualization; 2) modification and suggestion of tube CHF prediction models; 3) development of a fuel bundle CHF prediction methodology based on tube CHF prediction models. The major results of the research are as follows: 1) Production of CHF data at low and middle pressure and mass flux - acquisition of 764 CHF data points for low and middle pressure and flow conditions; analysis of CHF trends based on the CHF data; assessment of existing CHF prediction methods against the CHF data. 2) Modification and suggestion of tube CHF prediction models - development of a unified CHF model applicable over a wide parametric range; development of a threshold length correlation; improvement of the CHF look-up table using the threshold length correlation. 3) Development of a fuel bundle CHF prediction methodology based on tube CHF prediction models - development of a bundle CHF prediction methodology using a correction factor. 11 refs., 134 figs., 25 tabs. (Author)

  14. Constructing an everywhere and locally relevant predictive model of the West-African critical zone

    Science.gov (United States)

    Hector, B.; Cohard, J. M.; Pellarin, T.; Maxwell, R. M.; Cappelaere, B.; Demarty, J.; Grippa, M.; Kergoat, L.; Lebel, T.; Mamadou, O.; Mougin, E.; Panthou, G.; Peugeot, C.; Vandervaere, J. P.; Vischel, T.; Vouillamoz, J. M.

    2017-12-01

    Considering water resources and hydrologic hazards, West Africa is among the regions most vulnerable to both climatic changes (e.g. the observed intensification of precipitation) and anthropogenic changes. With a demographic growth rate of about +3% per year, the region experiences rapid land use changes and increased pressure on surface water and groundwater resources, with observed consequences for the hydrological cycle (water table rise resulting from the Sahelian paradox, increase in flood occurrence, etc.). Managing large hydrosystems (such as transboundary aquifers or river basins like the Niger) requires anticipating such changes. However, the region significantly lacks observations for constructing and validating critical zone (CZ) models able to predict future hydrologic regimes, and it also comprises hydrosystems which encompass strong environmental gradients (e.g. geological, climatic, ecological) with highly different dominating hydrological processes. We address these issues by constructing a high-resolution (1 km²) regional-scale physically-based model using ParFlow-CLM, which allows modeling a wide range of processes without prior knowledge of their relative dominance. Our approach combines multiple-scale modeling from local to meso and regional scales within the same theoretical framework. Local and meso-scale models are evaluated thanks to the rich AMMA-CATCH CZ observation database, which covers 3 supersites with contrasted environments in Benin (lat. 9.8°N), Niger (lat. 13.3°N) and Mali (lat. 15.3°N). At the regional scale, the lack of a relevant map of soil hydrodynamic parameters is addressed using remote sensing data assimilation. Our first results show the model's ability to reproduce the known dominant hydrological processes (runoff generation, ET, groundwater recharge…) across the major West-African regions and allow us to conduct virtual experiments to explore the impact of global changes on the hydrosystems. This approach is a first step toward the construction of

  15. Evaluating predictions of critical oxygen desaturation events

    International Nuclear Information System (INIS)

    ElMoaqet, Hisham; Tilbury, Dawn M; Ramachandran, Satya Krishna

    2014-01-01

    This paper presents a new approach for evaluating predictions of oxygen saturation levels in blood (SpO2). A performance metric based on a threshold is proposed to evaluate SpO2 predictions based on whether or not they are able to capture critical desaturations in the SpO2 time series of patients. We use linear auto-regressive models built using historical SpO2 data to predict critical desaturation events with the proposed metric. In 20 s prediction intervals, 88%–94% of the critical events were captured with positive predictive values (PPVs) between 90% and 99%. Increasing the prediction horizon to 60 s, 46%–71% of the critical events were detected with PPVs between 81% and 97%. In both prediction horizons, more than 97% of the non-critical events were correctly classified. The overall classification capabilities of the developed predictive models were also investigated. The areas under the ROC curves for 60 s predictions from the developed models are between 0.86 and 0.98. Furthermore, we investigate the effect of including pulse rate (PR) dynamics in the models and predictions. We show no improvement in the percentage of predicted critical desaturations if PR dynamics are incorporated into the SpO2 predictive models (p-value = 0.814). We also show that including the PR dynamics does not improve the earliest time at which critical SpO2 levels are predicted (p-value = 0.986). Our results indicate oxygen in blood is an effective input to the PR rather than vice versa. We demonstrate that the combination of predictive models with frequent pulse oximetry measurements can be used as a warning of critical oxygen desaturations that may have adverse effects on the health of patients. (paper)
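
    A toy sketch of the core idea (fit a linear auto-regressive model to an SpO2 series and forecast 20 s ahead to flag a predicted critical desaturation); the signal is synthetic and the lag order and threshold are assumptions.

```python
# Hedged sketch (synthetic signal, not patient data): AR-model forecast of SpO2
# and a threshold-based flag for a predicted critical desaturation.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(5)
n = 600                                           # 10 minutes of 1 Hz SpO2 samples
spo2 = 97 + np.cumsum(rng.normal(0, 0.05, n))     # slowly drifting synthetic baseline
spo2[400:460] -= np.linspace(0, 12, 60)           # an induced desaturation episode
spo2 = np.clip(spo2, 70, 100)

horizon, threshold = 20, 90.0                     # 20 s horizon, assumed critical SpO2 level
train = spo2[:420]                                # history available at prediction time
fit = AutoReg(train, lags=10).fit()
forecast = fit.predict(start=len(train), end=len(train) + horizon - 1)

print("critical desaturation predicted within 20 s:", bool((forecast < threshold).any()))
```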

  16. Comprehensive and critical review of the predictive properties of the various mass models

    International Nuclear Information System (INIS)

    Haustein, P.E.

    1984-01-01

    Since the publication of the 1975 Mass Predictions, approximately 300 new atomic masses have been reported. These data come from a variety of experimental studies using diverse techniques and they span a mass range from the lightest isotopes to the very heaviest. It is instructive to compare these data with the 1975 predictions and several others (Moeller and Nix, Monahan, Serduke, Uno and Yamada) which appeared later. Extensive numerical and graphical analyses have been performed to examine the quality of the mass predictions from the various models and to identify features in these models that require correction. In general, there is only a rough correlation between the ability of a particular model to reproduce the measured mass surface which had been used to refine its adjustable parameters and that model's ability to predict correctly the new masses. For some models, distinct systematic features appear when the new mass data are plotted as functions of relevant physical variables. Global intercomparisons of all the models are made first, followed by several examples of types of analysis performed with individual mass models.

  17. A dry-spot model for the prediction of critical heat flux in water boiling in bubbly flow regime

    International Nuclear Information System (INIS)

    Ha, Sang Jun; No, Hee Cheon

    1997-01-01

    This paper presents a prediction of critical heat flux (CHF) in bubbly flow regime using dry-spot model proposed recently by authors for pool and flow boiling CHF and existing correlations for forced convective heat transfer coefficient, active site density and bubble departure diameter in nucleate boiling region. Without any empirical constants always present in earlier models, comparisons of the model predictions with experimental data for upward flow of water in vertical, uniformly-heated round tubes are performed and show a good agreement. The parametric trends of CHF have been explored with respect to variation in pressure, tube diameter and length, mass flux and inlet subcooling

  18. Predicting the local impacts of energy development: a critical guide to forecasting methods and models

    Energy Technology Data Exchange (ETDEWEB)

    Sanderson, D.; O'Hare, M.

    1977-05-01

    Models forecasting second-order impacts from energy development vary in their methodology, output, assumptions, and quality. As a rough dichotomy, they either simulate community development over time or combine various submodels providing community snapshots at selected points in time. Using one or more methods - input/output models, gravity models, econometric models, cohort-survival models, or coefficient models - they estimate energy-development-stimulated employment, population, public and private service needs, and government revenues and expenditures at some future time (ranging from annual to average year predictions) and for different governmental jurisdictions (municipal, county, state, etc.). Underlying assumptions often conflict, reflecting their different sources - historical data, comparative data, surveys, and judgments about future conditions. Model quality, measured by special features, tests, exportability and usefulness to policy-makers, reveals careful and thorough work in some cases and hurried operations with insufficient in-depth analysis in others.

  19. Critical velocity and anaerobic paddling capacity determined by different mathematical models and number of predictive trials in canoe slalom.

    Science.gov (United States)

    Messias, Leonardo H D; Ferrari, Homero G; Reis, Ivan G M; Scariot, Pedro P M; Manchado-Gobatto, Fúlvia B

    2015-03-01

    The purpose of this study was to analyze whether different combinations of trials as well as different mathematical models can modify the aerobic and anaerobic estimates from the critical velocity protocol applied in canoe slalom. Fourteen male elite slalom kayakers from the Brazilian canoe slalom team (K1) were evaluated. Athletes were submitted to four predictive trials of 150, 300, 450 and 600 meters in a lake, and the time to complete each trial was recorded. Critical velocity (CV, aerobic parameter) and anaerobic paddling capacity (APC, anaerobic parameter) were obtained by three mathematical models (Linear 1 = distance-time; Linear 2 = velocity-1/time; Non-Linear = time-velocity). Linear 1 was chosen for the comparison of predictive trial combinations. The standard combination (SC) was considered as the four trials (150, 300, 450 and 600 m). High fits of regression were obtained from all mathematical models (range of R² = 0.96-1.00). Repeated measures ANOVA pointed out differences among the mathematical models for CV (p = 0.006) and APC (p = 0.016) as well as R² (p = 0.033). Estimates obtained from the first and the fourth predictive trials (150 m and 600 m, i.e. the shortest and longest, respectively) were similar to and highly correlated (r = 0.98 for CV and r = 0.96 for APC) with those from the SC. In summary, methodological aspects must be considered in critical velocity application in canoe slalom, since different combinations of trials as well as mathematical models resulted in different aerobic and anaerobic estimates. Key points: Great attention must be given to methodological concerns regarding the critical velocity protocol applied to canoe slalom, since different estimates were obtained depending on the mathematical model and the predictive trials used. Linear 1 showed the best fits of regression. Furthermore, to the best of our knowledge and considering practical applications, this model is the easiest one for calculating the estimates from the critical velocity protocol. Considering this, the abyss between science
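
    The three mathematical models named above can be fitted to illustrative trial data as follows; the distances match the protocol, but the times are made up, not the athletes' results.

```python
# Sketch of the three critical-velocity fitting procedures on illustrative data.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

distance = np.array([150.0, 300.0, 450.0, 600.0])   # predictive trial distances (m)
time = np.array([37.0, 78.0, 122.0, 167.0])         # assumed completion times (s)
velocity = distance / time

# Linear 1: distance = APC + CV * time
fit1 = linregress(time, distance)
cv1, apc1 = fit1.slope, fit1.intercept

# Linear 2: velocity = CV + APC * (1/time)
fit2 = linregress(1.0 / time, velocity)
cv2, apc2 = fit2.intercept, fit2.slope

# Non-linear: time = APC / (velocity - CV)
def hyperbolic(v, apc, cv):
    return apc / (v - cv)

(apc3, cv3), _ = curve_fit(hyperbolic, velocity, time, p0=[apc1, cv1],
                           bounds=([0.0, 0.0], [500.0, velocity.min() - 1e-3]))

for label, cv, apc in [("linear 1", cv1, apc1), ("linear 2", cv2, apc2), ("non-linear", cv3, apc3)]:
    print(f"{label:10s} CV = {cv:.2f} m/s, APC = {apc:.1f} m")
```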

  20. Evaluation of cloud prediction and determination of critical relative humidity for a mesoscale numerical weather prediction model

    Energy Technology Data Exchange (ETDEWEB)

    Seaman, N.L.; Guo, Z.; Ackerman, T.P. [Pennsylvania State Univ., University Park, PA (United States)

    1996-04-01

    Predictions of cloud occurrence and vertical location from the Pennsylvania State University/National Center for Atmospheric Research nonhydrostatic mesoscale model (MM5) were evaluated statistically using cloud observations obtained at Coffeyville, Kansas, as part of the Second International Satellite Cloud Climatology Project Regional Experiment campaign. Seventeen cases were selected for simulation during a November-December 1991 field study. MM5 was used to produce two sets of 36-km simulations, one with and one without four-dimensional data assimilation (FDDA), and a set of 12-km simulations without FDDA, but nested within the 36-km FDDA runs.

  1. Comparison of mortality prediction models and validation of SAPS II in critically ill burns patients.

    Science.gov (United States)

    Pantet, O; Faouzi, M; Brusselaers, N; Vernay, A; Berger, M M

    2016-06-30

    Specific burn outcome prediction scores such as the Abbreviated Burn Severity Index (ABSI), Ryan, Belgian Outcome of Burn Injury (BOBI) and revised Baux scores have been extensively studied. Validation studies of the critical care score SAPS II (Simplified Acute Physiology Score) have included burns patients but not addressed them as a cohort. The study aimed at comparing their performance in a Swiss burns intensive care unit (ICU) and at observing whether they were affected by a standardized definition of inhalation injury. We conducted a retrospective cohort study, including all consecutive ICU burn admissions (n=492) between 1996 and 2013; 5 epochs were defined by protocol changes. As required for SAPS II calculation, stays burned (TBSA) and inhalation injury (systematic standardized diagnosis since 2006). Study epochs were compared (χ2 test, ANOVA). Score performance was assessed by receiver operating characteristic curve analysis. SAPS II performed well (AUC 0.89), particularly in burns <40% TBSA. Ryan and BOBI scores were least accurate, as they heavily weight inhalation injury.

  2. Continuous Automated Model EvaluatiOn (CAMEO) complementing the critical assessment of structure prediction in CASP12.

    Science.gov (United States)

    Haas, Jürgen; Barbato, Alessandro; Behringer, Dario; Studer, Gabriel; Roth, Steven; Bertoni, Martino; Mostaguir, Khaled; Gumienny, Rafal; Schwede, Torsten

    2018-03-01

    Every second year, the community experiment "Critical Assessment of Techniques for Structure Prediction" (CASP) conducts an independent blind assessment of structure prediction methods, providing a framework for comparing the performance of different approaches and discussing the latest developments in the field. Yet, developers of automated computational modeling methods clearly benefit from more frequent evaluations based on larger sets of data. The "Continuous Automated Model EvaluatiOn (CAMEO)" platform complements the CASP experiment by conducting fully automated blind prediction assessments based on the weekly pre-release of sequences of those structures which are going to be published in the next release of the Protein Data Bank (PDB). CAMEO publishes weekly benchmarking results based on models collected during a 4-day prediction window, on average assessing ca. 100 targets during a time frame of 5 weeks. CAMEO benchmarking data are generated consistently for all participating methods at the same point in time, enabling developers to benchmark and cross-validate their method's performance, and to refer directly to the benchmarking results in publications. In order to facilitate server development and promote shorter release cycles, CAMEO sends weekly emails with submission statistics and low-performance warnings. Many participants of CASP have successfully employed CAMEO when preparing their methods for upcoming community experiments. CAMEO offers a variety of scores to allow benchmarking diverse aspects of structure prediction methods. By introducing new scoring schemes, CAMEO facilitates new development in areas of active research, for example, modeling quaternary structure, complexes, or ligand binding sites. © 2017 Wiley Periodicals, Inc.

  3. A dry-spot model for the prediction of critical heat flux in water boiling in bubbly flow regime

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Sang Jun; No, Hee Cheon [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1997-12-31

    This paper presents a prediction of critical heat flux (CHF) in bubbly flow regime using dry-spot model proposed recently by authors for pool and flow boiling CHF and existing correlations for forced convective heat transfer coefficient, active site density and bubble departure diameter in nucleate boiling region. Without any empirical constants always present in earlier models, comparisons of the model predictions with experimental data for upward flow of water in vertical, uniformly-heated round tubes are performed and show a good agreement. The parametric trends of CHF have been explored with respect to variations in pressure, tube diameter and length, mass flux and inlet subcooling. 16 refs., 6 figs., 1 tab. (Author)

  4. A dry-spot model for the prediction of critical heat flux in water boiling in bubbly flow regime

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Sang Jun; No, Hee Cheon [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1998-12-31

    This paper presents a prediction of critical heat flux (CHF) in bubbly flow regime using dry-spot model proposed recently by authors for pool and flow boiling CHF and existing correlations for forced convective heat transfer coefficient, active site density and bubble departure diameter in nucleate boiling region. Without any empirical constants always present in earlier models, comparisons of the model predictions with experimental data for upward flow of water in vertical, uniformly-heated round tubes are performed and show a good agreement. The parametric trends of CHF have been explored with respect to variations in pressure, tube diameter and length, mass flux and inlet subcooling. 16 refs., 6 figs., 1 tab. (Author)

  5. A study on the development of advanced models to predict the critical heat flux for water and liquid metals

    International Nuclear Information System (INIS)

    Lee, Yong Bum

    1994-02-01

    The critical heat flux (CHF) phenomenon in the two-phase convective flows has been an important issue in the fields of design and safety analysis of light water reactor (LWR) as well as sodium cooled liquid metal fast breeder reactor (LMFBR). Especially in the LWR application many physical aspects of the CHF phenomenon are understood and reliable correlations and mechanistic models to predict the CHF condition have been proposed. However, there are few correlations and models which are applicable to liquid metals. Compared with water, liquid metals show a divergent picture for boiling pattern. Therefore, the CHF conditions obtained from investigations with water cannot be applied to liquid metals. In this work a mechanistic model to predict the CHF of water and a correlation for liquid metals are developed. First, a mechanistic model to predict the CHF in flow boiling at low quality was developed based on the liquid sublayer dryout mechanism. In this approach the CHF is assumed to occur when a vapor blanket isolates the liquid sublayer from bulk liquid and then the liquid entering the sublayer falls short of balancing the rate of sublayer dryout by vaporization. Therefore, the vapor blanket velocity is the key parameter. In this work the vapor blanket velocity is theoretically determined based on mass, energy, and momentum balance and finally the mechanistic model to predict the CHF in flow boiling at low quality is developed. The accuracy of the present model is evaluated by comparing model predictions with the experimental data and tabular data of look-up tables. The predictions of the present model agree well with extensive CHF data. In the latter part a correlation to predict the CHF for liquid metals is developed based on the flow excursion mechanism. By using Baroczy two-phase frictional pressure drop correlation and Ledinegg instability criterion, the relationship between the CHF of liquid metals and the principal parameters is derived and finally the

  6. Prediction, Regression and Critical Realism

    DEFF Research Database (Denmark)

    Næss, Petter

    2004-01-01

    This paper considers the possibility of prediction in land use planning, and the use of statistical research methods in analyses of relationships between urban form and travel behaviour. Influential writers within the tradition of critical realism reject the possibility of predicting social phenomena. This position is fundamentally problematic to public planning. Without at least some ability to predict the likely consequences of different proposals, the justification for public sector intervention into market mechanisms will be frail. Statistical methods like regression analyses are commonly seen as necessary in order to identify aggregate level effects of policy measures, but are questioned by many advocates of critical realist ontology. Using research into the relationship between urban structure and travel as an example, the paper discusses relevant research methods and the kinds...

  7. Investigating Predictive Role of Critical Thinking on Metacognition with Structural Equation Modeling

    Science.gov (United States)

    Arslan, Serhat

    2015-01-01

    The purpose of this study is to examine the relationships between critical thinking and metacognition. The sample of study consists of 390 university students who were enrolled in different programs at Sakarya University, in Turkey. In this study, the Critical Thinking Disposition Scale and Metacognitive Thinking Scale were used. The relationships…

  8. Development and validation of a prediction model for insulin-associated hypoglycemia in non-critically ill hospitalized adults.

    Science.gov (United States)

    Mathioudakis, Nestoras Nicolas; Everett, Estelle; Routh, Shuvodra; Pronovost, Peter J; Yeh, Hsin-Chieh; Golden, Sherita Hill; Saria, Suchi

    2018-01-01

    To develop and validate a multivariable prediction model for insulin-associated hypoglycemia in non-critically ill hospitalized adults. We collected pharmacologic, demographic, laboratory, and diagnostic data from 128 657 inpatient days in which at least 1 unit of subcutaneous insulin was administered in the absence of intravenous insulin, total parenteral nutrition, or insulin pump use (index days). These data were used to develop multivariable prediction models for biochemical and clinically significant hypoglycemia (blood glucose (BG) of ≤70 mg/dL and model development and validation, respectively. Using predictors of age, weight, admitting service, insulin doses, mean BG, nadir BG, BG coefficient of variation (CV-BG), diet status, type 1 diabetes, type 2 diabetes, acute kidney injury, chronic kidney disease (CKD), liver disease, and digestive disease, our model achieved a c-statistic of 0.77 (95% CI 0.75 to 0.78), positive likelihood ratio (+LR) of 3.5 (95% CI 3.4 to 3.6) and negative likelihood ratio (-LR) of 0.32 (95% CI 0.30 to 0.35) for prediction of biochemical hypoglycemia. Using predictors of sex, weight, insulin doses, mean BG, nadir BG, CV-BG, diet status, type 1 diabetes, type 2 diabetes, CKD stage, and steroid use, our model achieved a c-statistic of 0.80 (95% CI 0.78 to 0.82), +LR of 3.8 (95% CI 3.7 to 4.0) and -LR of 0.2 (95% CI 0.2 to 0.3) for prediction of clinically significant hypoglycemia. Hospitalized patients at risk of insulin-associated hypoglycemia can be identified using validated prediction models, which may support the development of real-time preventive interventions.
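
    The reported metrics (c-statistic and positive/negative likelihood ratios) can be computed from predicted risks and a chosen cut-off as sketched below; the outcome prevalence, predictions and threshold are synthetic and purely illustrative.

```python
# Small sketch of the reported metrics: c-statistic plus positive and negative
# likelihood ratios at a probability cut-off (synthetic predictions only).
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(6)
y_true = rng.binomial(1, 0.05, 20000)                          # hypoglycemia events are rare
p_hat = np.clip(0.05 + 0.25 * y_true + rng.normal(0, 0.1, y_true.size), 0, 1)

threshold = 0.2
y_pred = (p_hat >= threshold).astype(int)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sens, spec = tp / (tp + fn), tn / (tn + fp)

print("c-statistic:", roc_auc_score(y_true, p_hat))
print("+LR:", sens / (1 - spec), " -LR:", (1 - sens) / spec)
```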

  9. External validation of the Intensive Care National Audit & Research Centre (ICNARC) risk prediction model in critical care units in Scotland.

    Science.gov (United States)

    Harrison, David A; Lone, Nazir I; Haddow, Catriona; MacGillivray, Moranne; Khan, Angela; Cook, Brian; Rowan, Kathryn M

    2014-01-01

    Risk prediction models are used in critical care for risk stratification, summarising and communicating risk, supporting clinical decision-making and benchmarking performance. However, they require validation before they can be used with confidence, ideally using independently collected data from a different source to that used to develop the model. The aim of this study was to validate the Intensive Care National Audit & Research Centre (ICNARC) model using independently collected data from critical care units in Scotland. Data were extracted from the Scottish Intensive Care Society Audit Group (SICSAG) database for the years 2007 to 2009. Recoding and mapping of variables was performed, as required, to apply the ICNARC model (2009 recalibration) to the SICSAG data using standard computer algorithms. The performance of the ICNARC model was assessed for discrimination, calibration and overall fit and compared with that of the Acute Physiology And Chronic Health Evaluation (APACHE) II model. There were 29,626 admissions to 24 adult, general critical care units in Scotland between 1 January 2007 and 31 December 2009. After exclusions, 23,269 admissions were included in the analysis. The ICNARC model outperformed APACHE II on measures of discrimination (c index 0.848 versus 0.806), calibration (Hosmer-Lemeshow chi-squared statistic 18.8 versus 214) and overall fit (Brier's score 0.140 versus 0.157; Shapiro's R 0.652 versus 0.621). Model performance was consistent across the three years studied. The ICNARC model performed well when validated in an external population to that in which it was developed, using independently collected data.
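
    A rough sketch of a Hosmer-Lemeshow-type calibration check by risk deciles, as referenced in the abstract; this is an illustrative implementation on synthetic predictions, not the audit's recalibration code.

```python
# Hosmer-Lemeshow-style calibration statistic: observed vs expected events per risk decile.
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y, p, groups=10):
    """Chi-squared statistic and p-value comparing observed and expected events per risk group."""
    order = np.argsort(p)
    y, p = y[order], p[order]
    stat = 0.0
    for idx in np.array_split(np.arange(y.size), groups):
        obs, exp, n = y[idx].sum(), p[idx].sum(), idx.size
        stat += (obs - exp) ** 2 / (exp * (1 - exp / n) + 1e-12)
    return stat, chi2.sf(stat, groups - 2)

rng = np.random.default_rng(7)
p = rng.uniform(0.01, 0.9, 5000)            # predicted risks from some model (synthetic)
y = rng.binomial(1, p)                      # outcomes drawn to be consistent with those risks
print("HL chi-squared, p-value:", hosmer_lemeshow(y, p))
```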

  10. Predictive modelling of survival and length of stay in critically ill patients using sequential organ failure scores.

    Science.gov (United States)

    Houthooft, Rein; Ruyssinck, Joeri; van der Herten, Joachim; Stijven, Sean; Couckuyt, Ivo; Gadeyne, Bram; Ongenae, Femke; Colpaert, Kirsten; Decruyenaere, Johan; Dhaene, Tom; De Turck, Filip

    2015-03-01

    The length of stay of critically ill patients in the intensive care unit (ICU) is an indication of patient ICU resource usage and varies considerably. Planning of postoperative ICU admissions is important as ICUs often have no nonoccupied beds available. Estimation of the ICU bed availability for the next coming days is entirely based on clinical judgement by intensivists and therefore too inaccurate. For this reason, predictive models have much potential for improving planning for ICU patient admission. Our goal is to develop and optimize models for patient survival and ICU length of stay (LOS) based on monitored ICU patient data. Furthermore, these models are compared on their use of sequential organ failure (SOFA) scores as well as underlying raw data as input features. Different machine learning techniques are trained, using a 14,480-patient dataset, both on SOFA scores as well as their underlying raw data values from the first five days after admission, in order to predict (i) the patient LOS, and (ii) the patient mortality. Furthermore, to help physicians in assessing the prediction credibility, a probabilistic model is tailored to the output of our best-performing model, assigning a belief to each patient status prediction. A two-by-two grid is built, using the classification outputs of the mortality and prolonged-stay predictors to improve the patient LOS regression models. For predicting patient mortality and a prolonged stay, the best performing model is a support vector machine (SVM) with GA,D=65.9% (area under the curve (AUC) of 0.77) and GS,L=73.2% (AUC of 0.82). In terms of LOS regression, the best performing model is support vector regression, achieving a mean absolute error of 1.79 days and a median absolute error of 1.22 days for those patients surviving a nonprolonged stay. Using a classification grid based on the predicted patient mortality and prolonged stay allows more accurate modeling of the patient LOS. The detailed models allow to support
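
    A condensed sketch of the two modelling tasks (an SVM classifier for mortality and support vector regression for length of stay) on synthetic data; the features, kernels and tuning are assumptions, not the paper's setup.

```python
# Illustrative SVM classification (mortality) and SVR regression (length of stay).
import numpy as np
from sklearn.svm import SVC, SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, mean_absolute_error

rng = np.random.default_rng(8)
n = 3000
X = rng.normal(size=(n, 6))                          # stand-ins for daily SOFA components
mortality = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1] - 1.5))))
los = np.exp(1.0 + 0.3 * X[:, 2] + rng.normal(0, 0.3, n))   # length of stay in days (synthetic)

X_tr, X_te, m_tr, m_te, l_tr, l_te = train_test_split(
    X, mortality, los, test_size=0.3, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(probability=True)).fit(X_tr, m_tr)
reg = make_pipeline(StandardScaler(), SVR()).fit(X_tr, l_tr)

print("mortality AUC:", roc_auc_score(m_te, clf.predict_proba(X_te)[:, 1]))
print("LOS MAE (days):", mean_absolute_error(l_te, reg.predict(X_te)))
```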

  11. Modeling and validation of a mechanistic tool (MEFISTO) for the prediction of critical power in BWR fuel assemblies

    International Nuclear Information System (INIS)

    Adamsson, Carl; Le Corre, Jean-Marie

    2011-01-01

    Highlights: → The MEFISTO code efficiently and accurately predicts the dryout event in a BWR fuel bundle, using a mechanistic model. → A hybrid approach between a fast and robust sub-channel analysis and a three-field two-phase analysis is adopted. → MEFISTO modeling approach, calibration, CPU usage, sensitivity, trend analysis and performance evaluation are presented. → The calibration parameters and process were carefully selected to preserve the mechanistic nature of the code. → The code dryout prediction performance is near the level of fuel-specific empirical dryout correlations. - Abstract: Westinghouse is currently developing the MEFISTO code with the main goal of achieving fast, robust, practical and reliable prediction of steady-state dryout Critical Power in Boiling Water Reactor (BWR) fuel bundles based on a mechanistic approach. A computationally efficient simulation scheme was used to achieve this goal, where the code resolves all relevant field (drop, steam and multi-film) mass balance equations, within the annular flow region, at the sub-channel level while relying on a fast and robust two-phase (liquid/steam) sub-channel solution to provide the cross-flow information. The MEFISTO code can hence provide a highly detailed solution of the multi-film flow in a BWR fuel bundle while enhancing flexibility and reducing the computer time by an order of magnitude as compared to a standard three-field sub-channel analysis approach. Models for the numerical computation of the one-dimensional field flowrate distributions in an open channel (e.g. a sub-channel), including the numerical treatment of field cross-flows, part-length rods, spacer grids and post-dryout conditions, are presented in this paper. The MEFISTO code is then applied to dryout prediction in a BWR fuel bundle using VIPRE-W as a fast and robust two-phase sub-channel driver code. The dryout power is numerically predicted by iterating on the bundle power so that the minimum film flowrate in the

  12. Prediction of critical heat flux using ANFIS

    Energy Technology Data Exchange (ETDEWEB)

    Zaferanlouei, Salman, E-mail: zaferanlouei@gmail.co [Nuclear Engineering and Physics Department, Faculty of Nuclear Engineering, Center of Excellence in Nuclear Engineering, Amirkabir University of Technology (Tehran Polytechnic), 424 Hafez Avenue, Tehran (Iran, Islamic Republic of); Rostamifard, Dariush; Setayeshi, Saeed [Nuclear Engineering and Physics Department, Faculty of Nuclear Engineering, Center of Excellence in Nuclear Engineering, Amirkabir University of Technology (Tehran Polytechnic), 424 Hafez Avenue, Tehran (Iran, Islamic Republic of)

    2010-06-15

    The prediction of Critical Heat Flux (CHF) is essential for water-cooled nuclear reactors since it is an important parameter for the economic efficiency and safety of nuclear power plants. Therefore, in this study, a new flexible tool based on an Adaptive Neuro-Fuzzy Inference System (ANFIS) is developed to predict CHF. Training and testing of this model are done using a set of available published field data. The CHF values predicted by the ANFIS model are acceptable compared with other prediction methods. We also improve the previously proposed ANN model to avoid overfitting. The new ANN test errors are then compared with the ANFIS model test errors. It is found that the ANFIS model, with root mean square (RMS) test errors of 4.79%, 5.04% and 11.39% in fixed inlet conditions, local conditions and fixed outlet conditions, respectively, outperforms the MLP neural network in fixed inlet and outlet conditions, and also gives acceptable results for predicting CHF in fixed local conditions.

  13. Prediction of critical heat flux using ANFIS

    International Nuclear Information System (INIS)

    Zaferanlouei, Salman; Rostamifard, Dariush; Setayeshi, Saeed

    2010-01-01

    The prediction of Critical Heat Flux (CHF) is essential for water-cooled nuclear reactors since it is an important parameter for the economic efficiency and safety of nuclear power plants. Therefore, in this study, a new flexible tool based on an Adaptive Neuro-Fuzzy Inference System (ANFIS) is developed to predict CHF. Training and testing of this model are done using a set of available published field data. The CHF values predicted by the ANFIS model are acceptable compared with other prediction methods. We also improve the previously proposed ANN model to avoid overfitting. The new ANN test errors are then compared with the ANFIS model test errors. It is found that the ANFIS model, with root mean square (RMS) test errors of 4.79%, 5.04% and 11.39% in fixed inlet conditions, local conditions and fixed outlet conditions, respectively, outperforms the MLP neural network in fixed inlet and outlet conditions, and also gives acceptable results for predicting CHF in fixed local conditions.

  14. Sensitivity of predictions in an effective model: Application to the chiral critical end point position in the Nambu-Jona-Lasinio model

    International Nuclear Information System (INIS)

    Biguet, Alexandre; Hansen, Hubert; Brugiere, Timothee; Costa, Pedro; Borgnat, Pierre

    2015-01-01

    The measurement of the position of the chiral critical end point (CEP) in the QCD phase diagram is under debate. While it is possible to predict its position by using effective models specifically built to reproduce some of the features of the underlying theory (QCD), the quality of the predictions (e.g., the CEP position) obtained by such effective models depends on whether solving the model equations constitutes a well- or ill-posed inverse problem. Considering these predictions as inverse problems provides tools to evaluate whether the problem is ill-conditioned, meaning that infinitesimal variations of the inputs of the model can cause comparatively large variations of the predictions. If it is ill-conditioned, this has major consequences because of the finite variations that could come from experimental and/or theoretical errors. In the following, we apply such reasoning to the predictions of a particular Nambu-Jona-Lasinio model within the mean field + ring approximations, with special attention to the prediction of the chiral CEP position in the (T-μ) plane. We find that the problem is ill-conditioned (i.e. very sensitive to input variations) for the T-coordinate of the CEP, whereas it is well-posed for the μ-coordinate of the CEP. As a consequence, when the chiral condensate varies in a 10 MeV range, μ_CEP varies far less. As an illustration of how problematic this could be, we show that the main consequence of taking into account finite variations of the inputs is that the existence of the CEP itself cannot be predicted anymore: for a deviation as low as 0.6% with respect to vacuum phenomenology (well within the estimation of the first correction to the ring approximation) the CEP may or may not exist. (orig.)

  15. Sensitivity of predictions in an effective model: Application to the chiral critical end point position in the Nambu-Jona-Lasinio model

    Energy Technology Data Exchange (ETDEWEB)

    Biguet, Alexandre; Hansen, Hubert; Brugiere, Timothee [Universite Claude Bernard de Lyon, Institut de Physique Nucleaire de Lyon, CNRS/IN2P3, Villeurbanne Cedex (France); Costa, Pedro [Universidade de Coimbra, Centro de Fisica Computacional, Departamento de Fisica, Coimbra (Portugal); Borgnat, Pierre [CNRS, l' Ecole normale superieure de Lyon, Laboratoire de Physique, Lyon Cedex 07 (France)

    2015-09-15

    The measurement of the position of the chiral critical end point (CEP) in the QCD phase diagram is under debate. While it is possible to predict its position by using effective models specifically built to reproduce some of the features of the underlying theory (QCD), the quality of the predictions (e.g., the CEP position) obtained by such effective models depends on whether solving the model equations constitutes a well- or ill-posed inverse problem. Considering these predictions as inverse problems provides tools to evaluate whether the problem is ill-conditioned, meaning that infinitesimal variations of the inputs of the model can cause comparatively large variations of the predictions. If it is ill-conditioned, this has major consequences because of the finite variations that could come from experimental and/or theoretical errors. In the following, we apply such reasoning to the predictions of a particular Nambu-Jona-Lasinio model within the mean field + ring approximations, with special attention to the prediction of the chiral CEP position in the (T-μ) plane. We find that the problem is ill-conditioned (i.e. very sensitive to input variations) for the T-coordinate of the CEP, whereas it is well-posed for the μ-coordinate of the CEP. As a consequence, when the chiral condensate varies in a 10 MeV range, μ_CEP varies far less. As an illustration of how problematic this could be, we show that the main consequence of taking into account finite variations of the inputs is that the existence of the CEP itself cannot be predicted anymore: for a deviation as low as 0.6% with respect to vacuum phenomenology (well within the estimation of the first correction to the ring approximation) the CEP may or may not exist. (orig.)

  16. Acute Pancreatitis as a Model to Predict Transition of Systemic Inflammation to Organ Failure in Trauma and Critical Illness

    Science.gov (United States)

    2017-10-01

    AWARD NUMBER: W81XWH-14-1-0376. TITLE: Acute Pancreatitis as a Model to Predict Transition of Systemic Inflammation to Organ Failure in Trauma and Critical Illness. REPORTING PERIOD COVERED: 22 Sep 2016 - 21 Sep 2017.

  17. A general unified non-equilibrium model for predicting saturated and subcooled critical two-phase flow rates through short and long tubes

    International Nuclear Information System (INIS)

    Fraser, D.W.H.; Abdelmessih, A.H.

    1995-01-01

    A general unified model is developed to predict one-component critical two-phase pipe flow. Modelling of the two-phase flow is accomplished by describing the evolution of the flow between the location of flashing inception and the exit (critical) plane. The model approximates the nonequilibrium phase change process via thermodynamic equilibrium paths. Included are the relative effects of varying the location of flashing inception, pipe geometry, fluid properties and length-to-diameter ratio. The model predicts that a range of critical mass fluxes exists, bounded by a maximum and a minimum value for a given thermodynamic state. This range is more pronounced at lower subcooled stagnation states and can be attributed to the variation in the location of flashing inception. The model is based on the results of an experimental study of the critical two-phase flow of saturated and subcooled water through long tubes. In that study, the location of flashing inception was accurately controlled and adjusted through the use of a new device. The data obtained revealed that for fixed stagnation conditions, the maximum critical mass flux occurred with flashing inception located near the pipe exit, while minimum critical mass fluxes occurred with the flashing front located further upstream. Available data since 1970 for both short and long tubes over a wide range of conditions are compared with the model predictions. This includes test section L/D ratios from 25 to 300 and covers a temperature and pressure range of 110 to 280 °C and 0.16 to 6.9 MPa, respectively. The predicted maximum and minimum critical mass fluxes show excellent agreement with the range observed in the experimental data.

  18. A general unified non-equilibrium model for predicting saturated and subcooled critical two-phase flow rates through short and long tubes

    Energy Technology Data Exchange (ETDEWEB)

    Fraser, D.W.H. [Univ. of British Columbia (Canada); Abdelmessih, A.H. [Univ. of Toronto, Ontario (Canada)

    1995-09-01

    A general unified model is developed to predict one-component critical two-phase pipe flow. Modelling of the two-phase flow is accomplished by describing the evolution of the flow between the location of flashing inception and the exit (critical) plane. The model approximates the nonequilibrium phase change process via thermodynamic equilibrium paths. Included are the relative effects of varying the location of flashing inception, pipe geometry, fluid properties and length-to-diameter ratio. The model predicts that a range of critical mass fluxes exists, bounded by a maximum and a minimum value for a given thermodynamic state. This range is more pronounced at lower subcooled stagnation states and can be attributed to the variation in the location of flashing inception. The model is based on the results of an experimental study of the critical two-phase flow of saturated and subcooled water through long tubes. In that study, the location of flashing inception was accurately controlled and adjusted through the use of a new device. The data obtained revealed that for fixed stagnation conditions, the maximum critical mass flux occurred with flashing inception located near the pipe exit, while minimum critical mass fluxes occurred with the flashing front located further upstream. Available data since 1970 for both short and long tubes over a wide range of conditions are compared with the model predictions. This includes test section L/D ratios from 25 to 300 and covers a temperature and pressure range of 110 to 280 °C and 0.16 to 6.9 MPa, respectively. The predicted maximum and minimum critical mass fluxes show excellent agreement with the range observed in the experimental data.

  19. External Validation and Recalibration of Risk Prediction Models for Acute Traumatic Brain Injury among Critically Ill Adult Patients in the United Kingdom.

    Science.gov (United States)

    Harrison, David A; Griggs, Kathryn A; Prabhu, Gita; Gomes, Manuel; Lecky, Fiona E; Hutchinson, Peter J A; Menon, David K; Rowan, Kathryn M

    2015-10-01

    This study validates risk prediction models for acute traumatic brain injury (TBI) in critical care units in the United Kingdom and recalibrates the models to this population. The Risk Adjustment In Neurocritical care (RAIN) Study was a prospective, observational cohort study in 67 adult critical care units. Adult patients admitted to critical care following acute TBI with a last pre-sedation Glasgow Coma Scale score of less than 15 were recruited. The primary outcomes were mortality and unfavorable outcome (death or severe disability, assessed using the Extended Glasgow Outcome Scale) at six months following TBI. Of 3626 critical care unit admissions, 2975 were analyzed. Following imputation of missing outcomes, mortality at six months was 25.7% and unfavorable outcome 57.4%. Ten risk prediction models were validated from Hukkelhoven and colleagues, the Medical Research Council (MRC) Corticosteroid Randomisation After Significant Head Injury (CRASH) Trial Collaborators, and the International Mission for Prognosis and Analysis of Clinical Trials in TBI (IMPACT) group. The model with the best discrimination was the IMPACT "Lab" model (C index, 0.779 for mortality and 0.713 for unfavorable outcome). This model was well calibrated for mortality at six months but substantially under-predicted the risk of unfavorable outcome. Recalibration of the models resulted in small improvements in discrimination and excellent calibration for all models. The risk prediction models demonstrated sufficient statistical performance to support their use in research and audit but fell below the level required to guide individual patient decision-making. The published models for unfavorable outcome at six months had poor calibration in the UK critical care setting and the models recalibrated to this setting should be used in future research.

  20. A critical review of predictive models for the onset of significant void in forced-convection subcooled boiling

    International Nuclear Information System (INIS)

    Dorra, H.; Lee, S.C.; Bankoff, S.G.

    1993-06-01

    Predictive models for the onset of significant void (OSV) in forced-convection subcooled boiling are reviewed and compared with extensive data. Three analytical models and seven empirical correlations are considered in this review. These models and correlations are put onto a common basis and are compared, again on a common basis, with a variety of data. Their range of validity and applicability under various operating conditions is discussed. The results show that the Saha-Zuber correlation seems to be the best model for predicting OSV in vertical subcooled boiling flow.
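
    As commonly quoted, the Saha-Zuber correlation switches between a thermally controlled regime (Nu = 455) at low Peclet number and a hydrodynamically controlled regime (St = 0.0065) above Pe of roughly 70,000. The sketch below assumes that standard form; the property values in the example call are illustrative, not taken from the review.

      def saha_zuber_dT_osv(q_flux, G, D_h, k_f, cp_f):
          """Subcooling at the onset of significant void (K) per the commonly
          quoted Saha-Zuber correlation. q_flux in W/m^2, G in kg/(m^2 s),
          D_h in m, k_f in W/(m K), cp_f in J/(kg K)."""
          Pe = G * D_h * cp_f / k_f              # Peclet number
          if Pe < 70000.0:                       # thermally controlled regime
              return q_flux * D_h / (455.0 * k_f)
          return q_flux / (0.0065 * G * cp_f)    # hydrodynamically controlled regime

      # Illustrative water-like properties at moderate pressure (assumed values)
      print(saha_zuber_dT_osv(q_flux=5.0e5, G=1000.0, D_h=0.01, k_f=0.6, cp_f=4500.0))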

  1. Acute Kidney Injury in Trauma Patients Admitted to Critical Care: Development and Validation of a Diagnostic Prediction Model.

    Science.gov (United States)

    Haines, Ryan W; Lin, Shih-Pin; Hewson, Russell; Kirwan, Christopher J; Torrance, Hew D; O'Dwyer, Michael J; West, Anita; Brohi, Karim; Pearse, Rupert M; Zolfaghari, Parjam; Prowle, John R

    2018-02-26

    Acute Kidney Injury (AKI) complicating major trauma is associated with increased mortality and morbidity. Traumatic AKI has specific risk factors and a predictable time-course, facilitating diagnostic modelling. In a single-centre, retrospective observational study, we developed risk prediction models for AKI after trauma based on data around intensive care admission. Models predicting AKI were developed using data from 830 patients, applying data reduction followed by logistic regression, and were independently validated in a further 564 patients. AKI occurred in 163/830 (19.6%) with 42 (5.1%) receiving renal replacement therapy (RRT). First serum creatinine and phosphate, units of blood transfused in the first 24 h, age and Charlson score discriminated the need for RRT and AKI early after trauma. For RRT, c-statistics were good to excellent: development: 0.92 (0.88-0.96), validation: 0.91 (0.86-0.97). Modelling AKI stage 2-3, c-statistics were also good, development: 0.81 (0.75-0.88) and validation: 0.83 (0.74-0.92). The model predicting AKI stage 1-3 performed moderately, development: c-statistic 0.77 (0.72-0.81), validation: 0.70 (0.64-0.77). Despite good discrimination of the need for RRT, positive predictive values (PPV) at the optimal cut-off were only 23.0% (13.7-42.7) in development. However, the PPV for the alternative endpoint of RRT and/or death improved to 41.2% (34.8-48.1), highlighting death as a clinically relevant endpoint alongside RRT.
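
    The development workflow described above, logistic regression on admission variables with discrimination summarised by a c-statistic, can be sketched as follows. The feature set and synthetic data are placeholders standing in for the study's dataset, not a reproduction of it.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      n = 830
      # Hypothetical admission predictors: creatinine, phosphate, units transfused,
      # age, Charlson score (synthetic values, for illustration only).
      X = np.column_stack([
          rng.normal(90, 30, n),    # first serum creatinine (umol/L)
          rng.normal(1.1, 0.3, n),  # phosphate (mmol/L)
          rng.poisson(2, n),        # units of blood in first 24 h
          rng.normal(45, 18, n),    # age
          rng.poisson(1, n),        # Charlson comorbidity score
      ])
      logit = -6 + 0.02 * X[:, 0] + 0.5 * X[:, 2] + 0.03 * X[:, 3]
      y = rng.random(n) < 1 / (1 + np.exp(-logit))   # synthetic AKI outcome

      model = LogisticRegression(max_iter=1000).fit(X, y)
      # c-statistic = area under the ROC curve of the fitted probabilities
      print("c-statistic:", roc_auc_score(y, model.predict_proba(X)[:, 1]))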

  2. Predicting recovery from acute kidney injury in critically ill patients

    DEFF Research Database (Denmark)

    Itenov, Theis S; Berthelsen, Rasmus Ehrenfried; Jensen, Jens-Ulrik

    2018-01-01

    these patients. DESIGN: Observational study with development and validation of a risk prediction model. SETTING: Nine academic ICUs in Denmark. PARTICIPANTS: Development cohort of critically ill patients with AKI at ICU admission from the Procalcitonin and Survival Study cohort (n = 568), validation cohort.......1%. CONCLUSION: We constructed and validated a simple model that can predict the chance of recovery from AKI in critically ill patients....

  3. System Predicts Critical Runway Performance Parameters

    Science.gov (United States)

    Millen, Ernest W.; Person, Lee H., Jr.

    1990-01-01

    Runway-navigation-monitor (RNM) and critical-distances-process electronic equipment designed to provide pilot with timely and reliable predictive navigation information relating to takeoff, landing and runway-turnoff operations. Enables pilot to make critical decisions about runway maneuvers with high confidence during emergencies. Utilizes ground-referenced position data only to drive purely navigational monitor system independent of statuses of systems in aircraft.

  4. Criticism and Counter-Criticism of Public Management: Strategy Models

    OpenAIRE

    Luis C. Ortigueira

    2007-01-01

    Critical control is very important in scientific management. This paper presents models of critical and counter-critical public-management strategies, focusing on the types of criticism and counter-criticism manifested in parliamentary political debates. The paper includes: (i) a normative model showing how rational criticism can be carried out; (ii) a normative model for oral critical intervention; and (iii) a general motivational strategy model for criticisms and counter-criticisms. The pap...

  5. Test of φ² model predictions near the ³He liquid-gas critical point

    Science.gov (United States)

    Barmatz, M.; Zhong, F.; Hahn, I.

    2000-01-01

    NASA is supporting the development of an experiment called MISTE (Microgravity Scaling Theory Experiment) for a future International Space Station mission. The main objective of this flight experiment is to perform in-situ PVT, heat capacity at constant volume (C_v), and χ_τ measurements in the asymptotic region near the ³He liquid-gas critical point.

  6. Numerical prediction of critical heat flux in nuclear fuel rod bundles with advanced three-fluid multidimensional porous media based model

    International Nuclear Information System (INIS)

    Zoran Stosic; Vladimir Stevanovic

    2005-01-01

    Full text of publication follows: The modern design of nuclear fuel rod bundles for Boiling Water Reactors (BWRs) is characterised by an increased number of rods in the bundle, the introduction of part-length fuel rods, and a water channel positioned along the bundle asymmetrically with respect to the centre of the bundle cross section. Such a design causes significant spatial differences in volumetric heat flux, steam void fraction distribution, mass flux rate and other thermal-hydraulic parameters important for efficient cooling of nuclear fuel rods during normal steady-state and transient conditions. The prediction of the Critical Heat Flux (CHF) under these complex thermal-hydraulic conditions is of prime importance for safe and economic BWR operation. An efficient numerical method for CHF prediction is developed based on the porous medium concept and multi-fluid two-phase flow models. The fuel rod bundle is treated as a porous medium with a two-phase flow through it. The coolant flow from the bundle entrance to the exit is characterised by a successive change of one-phase and several two-phase flow patterns. A one-fluid (one-phase) model is used for the prediction of liquid heat-up in the bundle entrance region. A two-fluid modelling approach is applied to the bubbly and churn-turbulent vapour and liquid flows. A three-fluid modelling approach is applied to the annular flow pattern: the liquid film on the rod walls, the steam flow and the droplets entrained in the steam stream. Every fluid stream in the applied multi-fluid models is described with mass, momentum and energy balance equations. Closure laws for the prediction of interfacial transfer processes are stated, with special emphasis on the prediction of the steam-water interface drag force, through the interface drag coefficient, and of the droplet entrainment and deposition rates for the three-fluid annular flow model. The model implies non-equilibrium thermal and flow conditions. A new mechanistic approach for the CHF prediction

  7. Advances in criticality predictions for EBR-II

    International Nuclear Information System (INIS)

    Schaefer, R.W.; Imel, G.R.

    1994-01-01

    Improvements to startup criticality predictions for the EBR-II reactor have been made. More exact calculational models, methods and data are now used, and better procedures for obtaining experimental data that enter into the prediction are in place. Accuracy improved by more than a factor of two and the largest ECP error observed since the changes is only 18 cents. An experimental method using subcritical counts is also being implemented

  8. Predictive modeling of complications.

    Science.gov (United States)

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.

  9. Critical Features of Fragment Libraries for Protein Structure Prediction.

    Science.gov (United States)

    Trevizani, Raphael; Custódio, Fábio Lima; Dos Santos, Karina Baptista; Dardenne, Laurent Emmanuel

    2017-01-01

    The use of fragment libraries is a popular approach among protein structure prediction methods and has proven to substantially improve the quality of predicted structures. However, some vital aspects of a fragment library that influence the accuracy of modeling a native structure remain to be determined. This study investigates some of these features. In particular, we analyze the effect of using secondary structure prediction to guide fragment selection, of different fragment sizes, and of structural clustering of fragments within libraries. To have a clearer view of how these factors affect protein structure prediction, we isolated the process of model building by fragment assembly from some common limitations associated with prediction methods, e.g., imprecise energy functions and optimization algorithms, by employing an exact structure-based objective function under a greedy algorithm. Our results indicate that shorter fragments reproduce the native structure more accurately than longer ones. Libraries composed of multiple fragment lengths generate even better structures, with longer fragments proving more useful at the beginning of the simulations. The use of many different fragment sizes shows little improvement when compared to predictions carried out with libraries that comprise only three different fragment sizes. Models obtained from libraries built using only sequence similarity are, on average, better than those built with a secondary structure prediction bias. However, we found that the use of secondary structure prediction allows greater reduction of the search space, which is invaluable for prediction methods. The results of this study can serve as critical guidelines for the use of fragment libraries in protein structure prediction.

  10. Archaeological predictive model set.

    Science.gov (United States)

    2015-03-01

    This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...

  11. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
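
    A minimal version of such a stochastic speed simulation can be built by generating an autocorrelated Gaussian series and mapping it onto a Weibull marginal distribution, then converting speed to power with the usual 0.5*rho*A*Cp*v^3 relation. The Weibull parameters, AR(1) coefficient and turbine figures below are illustrative assumptions, not the Goldstone values.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(1)
      hours, phi = 24 * 365, 0.8            # one year of hourly samples, AR(1) correlation
      k, c = 2.0, 7.0                        # assumed Weibull shape and scale (m/s)

      # AR(1) Gaussian process, then map it onto the Weibull marginal distribution
      z = np.empty(hours)
      z[0] = rng.standard_normal()
      for t in range(1, hours):
          z[t] = phi * z[t - 1] + np.sqrt(1.0 - phi**2) * rng.standard_normal()
      u = norm.cdf(z)                                  # correlated uniform(0,1) samples
      speeds = c * (-np.log1p(-u)) ** (1.0 / k)        # Weibull quantile transform

      rho, area, cp = 1.225, 2000.0, 0.4    # air density, swept area (m^2), power coefficient
      power_kw = 0.5 * rho * area * cp * speeds**3 / 1e3
      print(round(speeds.mean(), 2), "m/s mean;", round(power_kw.mean(), 1), "kW mean")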

  12. Prediction of critical heat flux in vertical pipe flow

    International Nuclear Information System (INIS)

    Levy, S.; Healzer, J.M.; Abdollahian, D.

    1981-01-01

    A previously developed semi-empirical model for adiabatic two-phase annular flow is extended to predict the critical heat flux (CHF) in a vertical pipe. The model exhibits a sharply declining curve of CHF versus steam quality (X) at low X, and is relatively independent of the heat flux distribution. In this region, vaporization of the liquid film controls. At high X, net deposition upon the liquid film becomes important and CHF versus X flattens considerably. In this zone, CHF is dependent upon the heat flux distribution. Model predictions are compared to test data and an empirical correlation. The agreement is generally good if one employs previously reported mass transfer coefficients. (orig.)
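
    The physical picture described, a liquid film thinned by evaporation and entrainment and replenished by droplet deposition, with dryout reached where the film flow vanishes, can be sketched as a one-dimensional mass balance marched along the tube. The deposition and entrainment rates below are purely illustrative constants, not the reported mass transfer coefficients.

      import numpy as np

      def film_dryout_position(q_flux, w_film_in, d=0.01, length=3.0, h_fg=1.5e6,
                               dep=0.05, ent=0.02, n=3000):
          """March a simple liquid-film mass balance along a uniformly heated tube.
          Returns the axial position (m) where the film flow rate vanishes, or None.
          dep/ent are illustrative deposition/entrainment rates in kg/(m^2 s)."""
          perimeter = np.pi * d
          dz = length / n
          w_film = w_film_in                              # film mass flow rate (kg/s)
          for i in range(n):
              evaporation = q_flux * perimeter / h_fg     # kg/(m s) removed by boiling
              dw = (dep - ent) * perimeter - evaporation  # net change per metre
              w_film += dw * dz
              if w_film <= 0.0:
                  return (i + 1) * dz
          return None

      # Illustrative sweep: a higher heat flux moves the dryout point upstream
      for q in (0.5e6, 1.0e6, 1.5e6):
          print(q, film_dryout_position(q, w_film_in=0.02))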

  13. Uncertainty Estimates in Cold Critical Eigenvalue Predictions

    International Nuclear Information System (INIS)

    Karve, Atul A.; Moore, Brian R.; Mills, Vernon W.; Marrotte, Gary N.

    2005-01-01

    A recent cycle of a General Electric boiling water reactor performed two beginning-of-cycle local cold criticals. The eigenvalues estimated by the core simulator were 0.99826 and 1.00610. The large spread in them (= 0.00784) is a source of concern, and it is studied here. An analysis process is developed using statistical techniques, where first a transfer function relating the core observable Y (eigenvalue) to various factors (X's) is established. Engineering judgment is used to recognize the best candidates for X's. They are identified as power-weighted assembly k∞'s of selected assemblies around the withdrawn rods. These are a small subset of many X's that could potentially influence Y. However, the intention here is not to do a comprehensive study by accounting for all the X's. Rather, the scope is to demonstrate that the process developed is reasonable and to show its applicability to performing detailed studies. Variability in X's is obtained by perturbing nodal k∞'s since they directly influence the buckling term in the quasi-two-group diffusion equation model of the core simulator. Any perturbations introduced in them are bounded by standard well-established uncertainties. The resulting perturbations in the X's may not necessarily be directly correlated to physical attributes, but they encompass numerous biases and uncertainties credited to input and modeling uncertainties. The 'vital few' from the 'unimportant many' X's are determined, and then they are subgrouped according to assembly type, location, exposure, and control rod insertion. The goal is to study how the subgroups influence Y in order to have a better understanding of the variability observed in it.

  14. Inverse and Predictive Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Syracuse, Ellen Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-27

    The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple (one-dimensional models constrained by a single dataset, which can be used for quick and efficient predictions) to the complex (multidimensional models constrained by several types of data, which result in more accurate predictions). While team members typically build models of geophysical characteristics of the Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics over an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  15. A novel modeling to predict the critical current behavior of Nb$_{3}$Sn PIT strand under transverse load based on a scaling law and Finite Element Analysis

    CERN Document Server

    Wang, Tiening; Takayasu, Makoto; Bordini, Bernardo

    2014-01-01

    Superconducting Nb$_{3}$Sn Powder-In-Tube (PIT) strands could be used for the superconducting magnets of the next generation Large Hadron Collider. The strands are cabled into the typical flat Rutherford cable configuration. During the assembly of a magnet and its operation the strands experience not only longitudinal but also transverse load due to the pre-compression applied during the assembly and the Lorentz load felt when the magnets are energized. To properly design the magnets and guarantee their safe operation, mechanical load effects on the strand superconducting properties are studied extensively; particularly, many scaling laws based on tensile load experiments have been established to predict the critical current dependence on strain. However, the dependence of the superconducting properties on transverse load has not been extensively studied so far. One of the reasons is that transverse loading experiments are difficult to conduct due to the small diameter of the strand (about 1 mm) and the data ...

  16. The critical thinking curriculum model

    Science.gov (United States)

    Robertson, William Haviland

    The Critical Thinking Curriculum Model (CTCM) utilizes a multidisciplinary approach that integrates effective learning and teaching practices with computer technology. The model is designed to be flexible within a curriculum, an example for teachers to follow, where they can plug in their own critical issue. This process engages students in collaborative research that can be shared in the classroom, across the country or around the globe. The CTCM features open-ended and collaborative activities that deal with current, real-world issues which leaders are attempting to solve. As implemented in the Critical Issues Forum (CIF), an educational program administered by Los Alamos National Laboratory (LANL), the CTCM encompasses the political, social/cultural, economic, and scientific realms in the context of a current global issue. In this way, students realize the importance of their schooling by applying their efforts to an endeavor that ultimately will affect their future. This study measures student attitudes toward science and technology and the changes that result from immersion in the CTCM. It also assesses the differences in student learning in science content and problem solving for students involved in the CTCM. A sample of 24 students participated in classrooms at two separate high schools in New Mexico. The evaluation results were analyzed using SPSS in a MANOVA format in order to determine the significance of the between- and within-subjects effects. A comparison ANOVA was done for each two-way MANOVA to see if the comparison groups were equal. Significant findings were validated using the Scheffe test in a Post Hoc analysis. Demographic information for the sample population was recorded and tracked, including self-assessments of computer use and availability. Overall, the results indicated that the CTCM did help to increase science content understanding and problem-solving skills for students, thereby positively affecting critical thinking. No matter if the

  17. Predictive Modelling and Time: An Experiment in Temporal Archaeological Predictive Models

    OpenAIRE

    David Ebert

    2006-01-01

    One of the most common criticisms of archaeological predictive modelling is that it fails to account for temporal or functional differences in sites. However, a practical solution to temporal or functional predictive modelling has proven to be elusive. This article discusses temporal predictive modelling, focusing on the difficulties of employing temporal variables, then introduces and tests a simple methodology for the implementation of temporal modelling. The temporal models thus created ar...

  18. Prediction of critical thinking disposition based on mentoring among ...

    African Journals Online (AJOL)

    The results of the study showed that there was a significant positive correlation between mentoring and critical thinking disposition among faculty members. The findings showed that 67% of the variance of critical thinking disposition was explained by the predictive variables. The faculty members evaluated themselves in all mentoring ...

  19. Cultural Resource Predictive Modeling

    Science.gov (United States)

    2017-10-01

    CR, cultural resource; CRM, cultural resource management; CRPM, Cultural Resource Predictive Modeling; DoD, Department of Defense; ESTCP, Environmental... resource management (CRM) legal obligations under NEPA and the NHPA, military installations need to demonstrate that CRM decisions are based on objective... maxim “one size does not fit all,” and demonstrate that DoD installations have many different CRM needs that can and should be met through a variety

  20. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...... the possibilities w.r.t. different numerical weather predictions actually available to the project....

  1. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  2. Safety prediction for basic components of safety critical software based on static testing

    International Nuclear Information System (INIS)

    Son, H.S.; Seong, P.H.

    2001-01-01

    The purpose of this work is to develop a safety prediction method, with which we can predict the risk of software components based on static testing results at the early development stage. The predictive model combines the major factor with the quality factor for the components, both of which are calculated based on the measures proposed in this work. The application to a safety-critical software system demonstrates the feasibility of the safety prediction method. (authors)

  3. Safety prediction for basic components of safety-critical software based on static testing

    International Nuclear Information System (INIS)

    Son, H.S.; Seong, P.H.

    2000-01-01

    The purpose of this work is to develop a safety prediction method, with which we can predict the risk of software components based on static testing results at the early development stage. The predictive model combines the major factor with the quality factor for the components, which are calculated based on the measures proposed in this work. The application to a safety-critical software system demonstrates the feasibility of the safety prediction method. (authors)

  4. Comparison of Critical Flow Models' Evaluations for SBLOCA Tests

    International Nuclear Information System (INIS)

    Kim, Yeon Sik; Park, Hyun Sik; Cho, Seok

    2016-01-01

    A comparison of critical flow models between the Trapp-Ransom and Henry-Fauske models for all SBLOCA (small break loss of coolant accident) scenarios of the ATLAS (Advanced thermal-hydraulic test loop for accident simulation) facility was performed using the MARS-KS code. For the comparison of the two critical models, the accumulated break mass was selected as the main parameter for the comparison between the analyses and tests. Four cases showed the same respective discharge coefficients between the two critical models, e.g., 6' CL (cold leg) break and 25%, 50%, and 100% DVI (direct vessel injection) breaks. In the case of the 4' CL break, no reasonable results were obtained with any possible Cd values. In addition, typical system behaviors, e.g., PZR (pressurizer) pressure and collapsed core water level, were also compared between the two critical models. From the comparison between the two critical models for the CL breaks, the Trapp-Ransom model predicted quite well with respect to the other model for the smallest and larger breaks, e.g., 2', 6', and 8.5' CL breaks. In addition, from the comparison between the two critical models for the DVI breaks, the Trapp-Ransom model predicted quite well with respect to the other model for the smallest and larger breaks, e.g., 5%, 50%, and 100% DVI breaks. In the case of the 50% and 100% breaks, the two critical models predicted the test data quite well.

  5. Predictions of the marviken subcooled critical mass flux using the critical flow scaling parameters

    Energy Technology Data Exchange (ETDEWEB)

    Park, Choon Kyung; Chun, Se Young; Cho, Seok; Yang, Sun Ku; Chung, Moon Ki [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1997-12-31

    A total of 386 critical flow data points from 19 of the 27 runs in the Marviken Test were selected and compared with the predictions by the correlations based on the critical flow scaling parameters. The results show that the critical mass flux in the very large diameter pipe can also be characterized by two scaling parameters, namely the discharge coefficient and the dimensionless subcooling (C_d,ref and ΔT*_sub). The agreement between the measured data and the predictions is excellent. 8 refs., 8 figs., 1 tab. (Author)

  6. Predictions of the marviken subcooled critical mass flux using the critical flow scaling parameters

    Energy Technology Data Exchange (ETDEWEB)

    Park, Choon Kyung; Chun, Se Young; Cho, Seok; Yang, Sun Ku; Chung, Moon Ki [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    A total of 386 critical flow data points from 19 of the 27 runs in the Marviken Test were selected and compared with the predictions by the correlations based on the critical flow scaling parameters. The results show that the critical mass flux in the very large diameter pipe can also be characterized by two scaling parameters, namely the discharge coefficient and the dimensionless subcooling (C_d,ref and ΔT*_sub). The agreement between the measured data and the predictions is excellent. 8 refs., 8 figs., 1 tab. (Author)

  7. Critical Issues in Modelling Lymph Node Physiology

    Directory of Open Access Journals (Sweden)

    Dmitry Grebennikov

    2016-12-01

    Full Text Available In this study, we discuss critical issues in modelling the structure and function of lymph nodes (LNs), with emphasis on how LN physiology is related to its multi-scale structural organization. In addition to macroscopic domains such as B-cell follicles and the T cell zone, there are vascular networks which play a key role in the delivery of information to the inner parts of the LN, i.e., the conduit and blood microvascular networks. We propose object-oriented computational algorithms to model the 3D geometry of the fibroblastic reticular cell (FRC) network and the microvasculature. Assuming that a conduit cylinder is densely packed with collagen fibers, the computational flow study predicted that diffusion, rather than convective flow, should be the dominant process in mass transport. The geometry models are used to analyze the lymph flow properties through the conduit network in unperturbed and damaged states of the LN. The analysis predicts that elimination of up to 60%–90% of edges is required to stop the lymph flux. This result suggests a high degree of functional robustness of the network.
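
    The robustness claim, that a large fraction of conduit edges must be removed before lymph flux stops, can be illustrated with a toy percolation experiment on a random graph: remove a growing fraction of edges and test whether a source-to-sink path survives. The graph below is a generic stand-in, not the authors' reconstructed FRC geometry.

      import random
      import networkx as nx

      random.seed(0)
      G = nx.gnm_random_graph(200, 1200, seed=0)   # toy stand-in for a conduit network
      source, sink = 0, 199

      def flux_survives(graph, removed_fraction):
          """Remove a random fraction of edges and test whether any path remains."""
          H = graph.copy()
          edges = list(H.edges())
          random.shuffle(edges)
          H.remove_edges_from(edges[: int(removed_fraction * len(edges))])
          return nx.has_path(H, source, sink)

      for frac in (0.3, 0.6, 0.9, 0.95):
          survives = [flux_survives(G, frac) for _ in range(50)]
          print(f"{frac:.2f} of edges removed -> path survives in {sum(survives)}/50 trials")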

  8. Improving Agent Based Modeling of Critical Incidents

    Directory of Open Access Journals (Sweden)

    Robert Till

    2010-04-01

    Full Text Available Agent Based Modeling (ABM) is a powerful method that has been used to simulate potential critical incidents in the infrastructure and built environments. This paper will discuss the modeling of some critical incidents currently simulated using ABM and how they may be expanded and improved by using better physiological modeling, psychological modeling, modeling the actions of interveners, introducing Geographic Information Systems (GIS) and open source models.

  9. Critical evidence for the prediction error theory in associative learning.

    Science.gov (United States)

    Terao, Kanta; Matsumoto, Yukihisa; Mizunami, Makoto

    2015-03-10

    In associative learning in mammals, it is widely accepted that the discrepancy, or error, between actual and predicted reward determines whether learning occurs. Complete evidence for the prediction error theory, however, has not been obtained in any learning system: prediction error theory stems from the finding of a blocking phenomenon, but blocking can also be accounted for by other theories, such as the attentional theory. We demonstrated blocking in classical conditioning in crickets and obtained evidence to reject the attentional theory. To obtain further evidence supporting the prediction error theory and rejecting alternative theories, we constructed a neural model to match the prediction error theory, by modifying our previous model of learning in crickets, and we tested a prediction from the model: the model predicts that pharmacological intervention in octopaminergic transmission during appetitive conditioning impairs learning but not the formation of the reward prediction itself, and it thus predicts no learning in subsequent training. We observed such an "auto-blocking", which could be accounted for by the prediction error theory but not by the other competing theories proposed to account for blocking. This study unambiguously demonstrates the validity of the prediction error theory in associative learning.
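
    Blocking is the canonical prediction of error-driven learning rules of the Rescorla-Wagner type, in which each update is proportional to the prediction error (lambda minus the summed associative strength of the cues present). The sketch below reproduces blocking with that textbook rule; it is not the authors' neural circuit model of the cricket.

      def rescorla_wagner(trials, alpha=0.3, lam=1.0):
          """Error-driven learning: dV = alpha * (lambda - sum of present-cue V)."""
          V = {"A": 0.0, "B": 0.0}
          for cues, reward in trials:
              total = sum(V[c] for c in cues)
              error = (lam if reward else 0.0) - total   # prediction error
              for c in cues:
                  V[c] += alpha * error
          return V

      # Phase 1: cue A alone predicts reward; Phase 2: compound A+B with reward.
      pretraining = [(("A",), True)] * 20
      compound = [(("A", "B"), True)] * 20
      print(rescorla_wagner(pretraining + compound))   # B stays near 0: blocking
      # Without pretraining on A, cue B acquires substantial associative strength.
      print(rescorla_wagner(compound))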

  10. Competition-induced criticality in a model of meme popularity.

    Science.gov (United States)

    Gleeson, James P; Ward, Jonathan A; O'Sullivan, Kevin P; Lee, William T

    2014-01-31

    Heavy-tailed distributions of meme popularity occur naturally in a model of meme diffusion on social networks. Competition between multiple memes for the limited resource of user attention is identified as the mechanism that poises the system at criticality. The popularity growth of each meme is described by a critical branching process, and asymptotic analysis predicts power-law distributions of popularity with very heavy tails (exponent α<2, unlike preferential-attachment models), similar to those seen in empirical data.
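
    A critical branching process (mean offspring number equal to one) is straightforward to simulate and produces the heavy-tailed cascade sizes referred to above. The Poisson offspring distribution below is an illustrative choice, not necessarily the one used in the paper.

      import numpy as np

      rng = np.random.default_rng(2)

      def cascade_size(mean_offspring=1.0, cap=10**6):
          """Total size of a branching cascade started by one seed event."""
          active, total = 1, 1
          while active and total < cap:
              offspring = rng.poisson(mean_offspring, size=active).sum()
              total += offspring
              active = offspring
          return total

      sizes = np.array([cascade_size() for _ in range(20000)])
      # At criticality the size distribution is heavy-tailed, so the sample mean
      # is dominated by rare, very large cascades.
      for s in (10, 100, 1000):
          print(s, (sizes >= s).mean())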

  11. Competition-Induced Criticality in a Model of Meme Popularity

    Science.gov (United States)

    Gleeson, James P.; Ward, Jonathan A.; O'Sullivan, Kevin P.; Lee, William T.

    2014-01-01

    Heavy-tailed distributions of meme popularity occur naturally in a model of meme diffusion on social networks. Competition between multiple memes for the limited resource of user attention is identified as the mechanism that poises the system at criticality. The popularity growth of each meme is described by a critical branching process, and asymptotic analysis predicts power-law distributions of popularity with very heavy tails (exponent α <2, unlike preferential-attachment models), similar to those seen in empirical data.

  12. Predictive information processing in music cognition. A critical review.

    Science.gov (United States)

    Rohrmeier, Martin A; Koelsch, Stefan

    2012-02-01

    Expectation and prediction constitute central mechanisms in the perception and cognition of music, which have been explored in theoretical and empirical accounts. We review the scope and limits of theoretical accounts of musical prediction with respect to feature-based and temporal prediction. While the concept of prediction is unproblematic for basic single-stream features such as melody, it is not straight-forward for polyphonic structures or higher-order features such as formal predictions. Behavioural results based on explicit and implicit (priming) paradigms provide evidence of priming in various domains that may reflect predictive behaviour. Computational learning models, including symbolic (fragment-based), probabilistic/graphical, or connectionist approaches, provide well-specified predictive models of specific features and feature combinations. While models match some experimental results, full-fledged music prediction cannot yet be modelled. Neuroscientific results regarding the early right-anterior negativity (ERAN) and mismatch negativity (MMN) reflect expectancy violations on different levels of processing complexity, and provide some neural evidence for different predictive mechanisms. At present, the combinations of neural and computational modelling methodologies are at early stages and require further research. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    In medical statistics, many alternative strategies are available for building a prediction model based on training data. Prediction models are routinely compared by means of their prediction performance in independent validation data. If only one data set is available for training and validation,...

  14. Critical review of precompound models

    International Nuclear Information System (INIS)

    Jahn, H.

    1984-01-01

    It is shown that the desired predictive capability of much of the commonly used precompound formalism to calculate nuclear reaction cross-sections is seriously reduced by too much arbitrariness of the choice of parameters. The origin of this arbitrariness is analysed in detail and improvements or alternatives are discussed. (author)

  15. Consideration of critical heat flux margin prediction by subcooled or low quality critical heat flux correlations

    International Nuclear Information System (INIS)

    Hejzlar, P.; Todreas, N.E.

    1996-01-01

    The accurate prediction of the critical heat flux (CHF) margin, which is a key design parameter in a variety of cooling and heating systems, is of high importance. These margins are, for the low-quality region, typically expressed in terms of critical heat flux ratios using the direct substitution method. Using a simple example of a heated tube, it is shown that CHF correlations of a certain type often used to predict CHF margins, expressed in this manner, may yield different results, strongly dependent on the correlation in use. It is argued that applying the heat balance method to such correlations, which leads to expressing the CHF margins in terms of the critical power ratio, may be more appropriate. (orig.)

  16. Critical power prediction by CATHARE2 of the OECD/NRC BFBT benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Lutsanych, Sergii, E-mail: s.lutsanych@ing.unipi.it [San Piero a Grado Nuclear Research Group (GRNSPG), University of Pisa, Via Livornese 1291, 56122, San Piero a Grado, Pisa (Italy); Sabotinov, Luben, E-mail: luben.sabotinov@irsn.fr [Institut for Radiological Protection and Nuclear Safety (IRSN), 31 avenue de la Division Leclerc, 92262 Fontenay-aux-Roses (France); D’Auria, Francesco, E-mail: francesco.dauria@dimnp.unipi.it [San Piero a Grado Nuclear Research Group (GRNSPG), University of Pisa, Via Livornese 1291, 56122, San Piero a Grado, Pisa (Italy)

    2015-03-15

    Highlights: • We used the CATHARE code to calculate the critical power exercises of the OECD/NRC BFBT benchmark. • We considered both steady-state and transient critical power tests of the benchmark. • We used both the 1D and 3D features of the CATHARE code to simulate the experiments. • Acceptable prediction of the critical power and its location in the bundle is obtained using appropriate modelling. - Abstract: This paper presents an application of the French best-estimate thermal-hydraulic code CATHARE 2 to calculate the critical power and departure from nucleate boiling (DNB) exercises of the International OECD/NRC BWR Fuel Bundle Test (BFBT) benchmark. The assessment activity is performed by comparing the code calculation results with the experimental data from the Japanese Nuclear Power Engineering Corporation (NUPEC) made available in the framework of the benchmark. Two-phase flow calculations for the prediction of the critical power have been carried out for both steady-state and transient cases, using one-dimensional and three-dimensional modelling. Results of the steady-state critical power test calculations have shown the ability of the CATHARE code to predict the critical power and its location reasonably well, provided appropriate modelling is used.

  17. PREDICTED PERCENTAGE DISSATISFIED (PPD) MODEL ...

    African Journals Online (AJOL)

    their low power requirements, are relatively cheap and are environmentally friendly. ... PREDICTED PERCENTAGE DISSATISFIED MODEL EVALUATION OF EVAPORATIVE COOLING ... The performance of direct evaporative coolers is a ...

  18. Prediction of critical flow rates through power-operated relief valves

    International Nuclear Information System (INIS)

    Abdollahian, D.; Singh, A.

    1983-01-01

    Existing single-phase and two-phase critical flow models are used to predict the flow rates through the power-operated relief valves tested in the EPRI Safety and Relief Valve test program. For liquid upstream conditions, the Homogeneous Equilibrium Model and the Moody, Henry-Fauske and Burnell two-phase critical flow models are used for comparison with data. Under steam upstream conditions, the flow rates are predicted either by the single-phase isentropic equations or by the Homogeneous Equilibrium Model, depending on the thermodynamic condition of the fluid at the choking plane. The results of the comparisons are used to specify discharge coefficients for different valves under steam and liquid upstream conditions and to evaluate the existing approximate critical flow relations for a wide range of subcooled water and steam conditions.
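
    The Homogeneous Equilibrium Model mentioned here amounts to expanding the fluid isentropically from the stagnation state and maximising G = rho(p, s0) * sqrt(2 * (h0 - h(p, s0))) over throat pressures. The sketch below uses the CoolProp property library for water as an assumed tool of this illustration; it is not the EPRI methodology or its discharge coefficients.

      import numpy as np
      from CoolProp.CoolProp import PropsSI

      def hem_critical_mass_flux(p0, t0, fluid="Water", n=400):
          """Homogeneous Equilibrium Model: maximise G over throat pressure for an
          isentropic expansion from stagnation conditions (p0 in Pa, t0 in K)."""
          h0 = PropsSI("H", "P", p0, "T", t0, fluid)
          s0 = PropsSI("S", "P", p0, "T", t0, fluid)
          best = 0.0
          for p in np.linspace(0.999 * p0, 0.2 * p0, n):
              h = PropsSI("H", "P", p, "S", s0, fluid)     # isentropic expansion
              rho = PropsSI("D", "P", p, "S", s0, fluid)
              if h0 > h:
                  best = max(best, rho * np.sqrt(2.0 * (h0 - h)))
          return best  # kg/(m^2 s)

      # Subcooled liquid upstream: 7 MPa, 280 C (illustrative stagnation state)
      print(hem_critical_mass_flux(7.0e6, 280.0 + 273.15))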

  19. Safety-critical Java on a time-predictable processor

    DEFF Research Database (Denmark)

    Korsholm, Stephan E.; Schoeberl, Martin; Puffitsch, Wolfgang

    2015-01-01

    For real-time systems the whole execution stack needs to be time-predictable and analyzable for the worst-case execution time (WCET). This paper presents a time-predictable platform for safety-critical Java. The platform consists of (1) the Patmos processor, which is a time-predictable processor......; (2) a C compiler for Patmos with support for WCET analysis; (3) the HVM, which is a Java-to-C compiler; (4) the HVM-SCJ implementation which supports SCJ Level 0, 1, and 2 (for both single and multicore platforms); and (5) a WCET analysis tool. We show that real-time Java programs translated to C...... and compiled to a Patmos binary can be analyzed by the AbsInt aiT WCET analysis tool. To the best of our knowledge the presented system is the second WCET analyzable real-time Java system; and the first one on top of a RISC processor....

  20. Mathematical modeling in biology: A critical assessment

    Energy Technology Data Exchange (ETDEWEB)

    Buiatti, M. [Florence, Univ. (Italy). Dipt. di Biologia Animale e Genetica

    1998-01-01

    The molecular revolution and the development of biology-derived industry have led in the last fifty years to an unprecedented 'leap forward' of the life sciences in terms of experimental data. Less success has been achieved in the organisation of such data and in the consequent development of adequate explanatory and predictive theories and models. After a brief historical excursus, the inborn difficulties of mathematising biological objects and processes, which derive from the complex dynamics of life, are discussed along with the logical tools (simplifications, choice of observation points, etc.) used to overcome them. 'Autistic', monodisciplinary attitudes towards biological modeling among mathematicians, physicists and biologists, aimed in each case at using the tools of other disciplines to solve 'selfish' problems, are also taken into account, and a warning against the resulting dangers (reification of monodisciplinary metaphors, lack of falsification, etc.) is given. Finally, 'top-down' (deductive) and 'bottom-up' (inductive) heuristic interactive approaches to mathematisation are critically discussed with the help of a series of examples.

  1. Mathematical modeling in biology: A critical assessment

    International Nuclear Information System (INIS)

    Buiatti, M.

    1998-01-01

    The molecular revolution and the development of biology-derived industry have led in the last fifty years to an unprecedented 'leap forward' of the life sciences in terms of experimental data. Less success has been achieved in the organisation of such data and in the consequent development of adequate explanatory and predictive theories and models. After a brief historical excursus, the inborn difficulties of mathematising biological objects and processes, which derive from the complex dynamics of life, are discussed along with the logical tools (simplifications, choice of observation points, etc.) used to overcome them. 'Autistic', monodisciplinary attitudes towards biological modeling among mathematicians, physicists and biologists, aimed in each case at using the tools of other disciplines to solve 'selfish' problems, are also taken into account, and a warning against the resulting dangers (reification of monodisciplinary metaphors, lack of falsification, etc.) is given. Finally, 'top-down' (deductive) and 'bottom-up' (inductive) heuristic interactive approaches to mathematisation are critically discussed with the help of a series of examples.

  2. Criticality predicts maximum irregularity in recurrent networks of excitatory nodes.

    Directory of Open Access Journals (Sweden)

    Yahya Karimipanah

    Full Text Available A rigorous understanding of brain dynamics and function requires a conceptual bridge between multiple levels of organization, including neural spiking and network-level population activity. Mounting evidence suggests that neural networks of cerebral cortex operate at a critical regime, which is defined as a transition point between two phases of short-lasting and chaotic activity. However, despite the fact that criticality brings about certain functional advantages for information processing, its supporting evidence is still far from conclusive, as it has been mostly based on power-law scaling of the sizes and durations of cascades of activity. Moreover, to what degree such a hypothesis could explain some fundamental features of neural activity is still largely unknown. One of the most prevalent features of cortical activity in vivo is known to be spike irregularity of spike trains, which is measured in terms of a coefficient of variation (CV) larger than one. Here, using a minimal computational model of excitatory nodes, we show that irregular spiking (CV > 1) naturally emerges in a recurrent network operating at criticality. More importantly, we show that even in the presence of other sources of spike irregularity, being at criticality maximizes the mean coefficient of variation of neurons, thereby maximizing their spike irregularity. Furthermore, we also show that such maximized irregularity results in maximum correlation between neuronal firing rates and their corresponding spike irregularity (measured in terms of CV). On the one hand, using a model in the universality class of directed percolation, we propose new hallmarks of criticality at the single-unit level, which could be applicable to any network of excitable nodes. On the other hand, given the controversy of the neural criticality hypothesis, we discuss the limitation of this approach to neural systems and to what degree they support the criticality hypothesis in real neural networks. Finally
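
    The irregularity measure used here is the coefficient of variation of interspike intervals, CV = std(ISI) / mean(ISI). The sketch below computes it for synthetic spike trains and is independent of any particular network model.

      import numpy as np

      rng = np.random.default_rng(3)

      def isi_cv(spike_times):
          """Coefficient of variation of interspike intervals (CV > 1: irregular)."""
          isi = np.diff(np.sort(spike_times))
          return isi.std() / isi.mean()

      # Poisson-like spiking gives CV close to 1; bursty trains push CV above 1.
      poisson_train = np.cumsum(rng.exponential(0.1, size=2000))
      bursty_train = np.cumsum(rng.exponential(0.02, size=2000) +
                               (rng.random(2000) < 0.05) * rng.exponential(1.0, size=2000))
      print(round(isi_cv(poisson_train), 2), round(isi_cv(bursty_train), 2))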

  3. Predictive validation of an influenza spread model.

    Directory of Open Access Journals (Sweden)

    Ayaz Hyder

    Full Text Available BACKGROUND: Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. METHODS AND FINDINGS: We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998-1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability, and this depended on the method of forecasting (static or dynamic). CONCLUSIONS: Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve

  4. Predictive Validation of an Influenza Spread Model

    Science.gov (United States)

    Hyder, Ayaz; Buckeridge, David L.; Leung, Brian

    2013-01-01

    Background Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. Methods and Findings We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998–1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability and depended on the method of forecasting-static or dynamic. Conclusions Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive
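
    The validation logic, fitting to one observed season, perturbing parameters to reflect real-world changes and comparing the forecast curve against later seasons, can be sketched with a simple compartmental SIR model standing in for the individual-based simulator used in the study. The parameter values below are assumptions for illustration only.

      import numpy as np
      from scipy.integrate import odeint

      def sir(y, t, beta, gamma):
          s, i, r = y
          return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

      def epidemic_curve(beta, gamma, days=180, i0=1e-4):
          y0 = [1.0 - i0, i0, 0.0]
          return odeint(sir, y0, np.arange(days), args=(beta, gamma))[:, 1]

      baseline = epidemic_curve(beta=0.30, gamma=0.14)   # "fitted" season (assumed values)
      # Perturbation mimicking higher vaccination coverage, approximated here by a
      # lower effective transmission rate.
      forecast = epidemic_curve(beta=0.25, gamma=0.14)

      peak_shift_days = int(np.argmax(forecast)) - int(np.argmax(baseline))
      print("peak shift:", peak_shift_days // 7, "weeks; peak intensity ratio:",
            round(forecast.max() / baseline.max(), 2))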

  5. Bootstrap prediction and Bayesian prediction under misspecified models

    OpenAIRE

    Fushiki, Tadayoshi

    2005-01-01

    We consider a statistical prediction problem under misspecified models. In a sense, Bayesian prediction is an optimal prediction method when an assumed model is true. Bootstrap prediction is obtained by applying Breiman's `bagging' method to a plug-in prediction. Bootstrap prediction can be considered to be an approximation to the Bayesian prediction under the assumption that the model is true. However, in applications, there are frequently deviations from the assumed model. In this paper, bo...

  6. Predicting critical heat flux in slug flow regime of uniformly heated ...

    African Journals Online (AJOL)

    A numerical computation code (PWR-DNBP) has been developed to predict Critical Heat Flux (CHF) of forced convective flow of water in a vertical heated channel. The code was based on the liquid sub-layer model, with the assumption that CHF occurred when the liquid film thickness between the heated surface and vapour ...

  7. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA) as well as more elaborated...... principles as, e.g., wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy based prediction models are discussed and critically reviewed. Special attention is placed...... on underlying basic assumptions, such as diffuse fields, high modal overlap, resonant field being dominant, etc., and the consequences of these in terms of limitations in the theory and in the practical use of the models....

  8. Causal explanation, intentionality, and prediction: Evaluating the Criticism of "Deductivism"

    DEFF Research Database (Denmark)

    Koch, Carsten Allan

    2001-01-01

    In a number of influential contributions, Tony Lawson has attacked a view of science that he refers to as deductivism, and criticized economists for implicitly using it in their research. Lawson argues that deductivism is simply the covering-law model, also known as the causal model of scientific...... criticizes the use of universal laws in social science, especially in economics. This view cannot be as easily dismissed as his general criticism of causal explanation. We argue that a number of arguments often used against the existence of (correct) universal laws in the social sciences can be put...... into question. First, it is argued that entities need not be identical, or even remotely alike, to be applicable to the same law. What is necessary is that they have common properties, e.g. mass in physics, and that the law relates to that property (section 6). Second, one might take the so-called model...

  9. MODEL PREDICTIVE CONTROL FUNDAMENTALS

    African Journals Online (AJOL)

    2012-07-02

    Jul 2, 2012 ... signal based on a process model, coping with constraints on inputs and ... paper, we will present an introduction to the theory and application of MPC with Matlab codes ... section 5 presents the simulation results and section 6.

  10. Transport critical current density in flux creep model

    International Nuclear Information System (INIS)

    Wang, J.; Taylor, K.N.R.; Russell, G.J.; Yue, Y.

    1992-01-01

    The magnetic flux creep model has been used to derive the temperature dependence of the critical current density in high temperature superconductors. The generally positive curvature of the J_c-T diagram is predicted in terms of two interdependent dimensionless fitting parameters. In this paper, the results are compared with both SIS and SNS junction models of these granular materials, neither of which provides a satisfactory prediction of the experimental data. A hybrid model combining the flux creep and SNS mechanisms is shown to be able to account for the linear regions of the J_c-T behavior which are observed in some materials.

  11. A critical review of clarifier modelling

    DEFF Research Database (Denmark)

    Plósz, Benedek; Nopens, Ingmar; Rieger, Leiv

    This outline paper aims to provide a critical review of secondary settling tank (SST) modelling approaches used in current wastewater engineering and develop tools not yet applied in practice. We address the development of different tier models and experimental techniques in the field...

  12. Melanoma Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing melanoma cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  13. Modelling bankruptcy prediction models in Slovak companies

    Directory of Open Access Journals (Sweden)

    Kovacova Maria

    2017-01-01

    Full Text Available Intensive research from academics and practitioners has been carried out regarding models for bankruptcy prediction and credit risk management. In spite of numerous studies focusing on forecasting bankruptcy using traditional statistical techniques (e.g. discriminant analysis and logistic regression) and early artificial intelligence models (e.g. artificial neural networks), there is a trend of transition to machine learning models (support vector machines, bagging, boosting, and random forest) to predict bankruptcy one year prior to the event. Comparing the performance of these unconventional approaches with results obtained by discriminant analysis, logistic regression, and neural networks, it has been found that bagging, boosting, and random forest models outperform the other techniques, and that prediction accuracy in the testing sample improves when additional variables are included. On the other hand, the prediction accuracy of old and well-known bankruptcy prediction models is quite high. Therefore, we aim to analyse these older models on a dataset of Slovak companies to validate their prediction ability under specific conditions. Furthermore, these models will be remodelled according to new trends by calculating the influence of eliminating selected variables on their overall prediction ability.
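
    As a minimal sketch of the kind of model comparison described above, and assuming scikit-learn is available, the snippet below scores one traditional technique (logistic regression) and one machine-learning technique (random forest) by cross-validated ROC AUC. The synthetic, imbalanced data stand in for financial-ratio predictors and a bankruptcy label; nothing here reproduces the Slovak dataset.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        # Synthetic stand-in for financial ratios and a (rare) bankruptcy label.
        X, y = make_classification(n_samples=1000, n_features=8, n_informative=5,
                                   weights=[0.85, 0.15], random_state=0)

        models = {
            "logistic regression": LogisticRegression(max_iter=1000),
            "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
        }

        for name, model in models.items():
            auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
            print(f"{name}: mean ROC AUC = {auc.mean():.3f}")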

  14. Critical assessment of nuclear mass models

    International Nuclear Information System (INIS)

    Moeller, P.; Nix, J.R.

    1992-01-01

    Some of the physical assumptions underlying various nuclear mass models are discussed. The ability of different mass models to predict new masses that were not taken into account when the models were formulated and their parameters determined is analyzed. The models are also compared with respect to their ability to describe nuclear-structure properties in general. The analysis suggests future directions for mass-model development

  15. Predictive models of moth development

    Science.gov (United States)

    Degree-day models link ambient temperature to insect life-stages, making such models valuable tools in integrated pest management. These models increase management efficacy by predicting pest phenology. In Wisconsin, the top insect pest of cranberry production is the cranberry fruitworm, Acrobasis v...

  16. Predictive Models and Computational Embryology

    Science.gov (United States)

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  17. Predictive Modeling in Race Walking

    Directory of Open Access Journals (Sweden)

    Krzysztof Wiktorowicz

    2015-01-01

    Full Text Available This paper presents the use of linear and nonlinear multivariable models as tools to support the training process of race walkers. These models are calculated using data collected from race walkers’ training events and they are used to predict the result over a 3 km race based on training loads. The material consists of 122 training plans for 21 athletes. In order to choose the best model, the leave-one-out cross-validation method is used. The main contribution of the paper is to propose nonlinear modifications for linear models in order to achieve a smaller prediction error. It is shown that the best model is a modified LASSO regression with quadratic terms in the nonlinear part. This model has the smallest prediction error and a simplified structure obtained by eliminating some of the predictors.
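
    A minimal sketch of the modelling approach described above, assuming scikit-learn: a LASSO regression augmented with quadratic terms, evaluated by leave-one-out cross-validation. The training-load matrix, race results, and regularization strength are synthetic placeholders, not the 122 real training plans.

        import numpy as np
        from sklearn.linear_model import Lasso
        from sklearn.model_selection import LeaveOneOut, cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import PolynomialFeatures, StandardScaler

        rng = np.random.default_rng(0)

        # Synthetic training loads (columns = hypothetical load variables) and 3 km results.
        X = rng.uniform(0.0, 1.0, size=(122, 4))
        y = 12.0 + 1.5 * X[:, 0] - 2.0 * X[:, 1] ** 2 + rng.normal(scale=0.2, size=122)

        # LASSO on linear plus quadratic terms, assessed by leave-one-out CV.
        model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                              StandardScaler(),
                              Lasso(alpha=0.01, max_iter=50000))
        scores = cross_val_score(model, X, y, cv=LeaveOneOut(),
                                 scoring="neg_mean_absolute_error")
        print(f"LOOCV mean absolute error: {-scores.mean():.3f}")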

  18. A New Energy-Critical Plane Damage Parameter for Multiaxial Fatigue Life Prediction of Turbine Blades

    Directory of Open Access Journals (Sweden)

    Zheng-Yong Yu

    2017-05-01

    Full Text Available As one of the fracture-critical components of an aircraft engine, accurate life prediction of a turbine blade to disk attachment is significant for ensuring the engine structural integrity and reliability. Fatigue failure of a turbine blade is often caused under multiaxial cyclic loadings at high temperatures. In this paper, considering different failure types, a new energy-critical plane damage parameter is proposed for multiaxial fatigue life prediction, and no extra fitted material constants will be needed for practical applications. Moreover, three multiaxial models with maximum damage parameters on the critical plane are evaluated under tension-compression and tension-torsion loadings. Experimental data of GH4169 under proportional and non-proportional fatigue loadings and a case study of a turbine disk-blade contact system are introduced for model validation. Results show that model predictions by the Wang-Brown (WB) and Fatemi-Socie (FS) models with maximum damage parameters are conservative and acceptable. For the turbine disk-blade contact system, both the proposed damage parameters and the Smith-Watson-Topper (SWT) model show reasonably acceptable correlations with its field number of flight cycles. However, life estimations of the turbine blade reveal that the definition of the maximum damage parameter is not reasonable for the WB model but effective for both the FS and SWT models.

  19. Prediction is difficult, preparation is critical and possible

    DEFF Research Database (Denmark)

    Zilli, Romano; Dalton, Luke; Ooms, Wim

    at the level of the source of infection, transmission pathways, and the outcomes. Changes to such challenges and uncertainties are inevitable and foresight in identifying strategies is required for us to prepare for a sustainable future. The EU-funded Global Network on Infectious Diseases of Animals...... and technological needs, including research capacity and support structures to prevent, control or mitigate animal health and zoonotic challenges for 2030 and beyond. While our ability to predict the future is often limited, being prepared to engage with whatever may happen is critical. Methods: Foresight workshops...... to give an overall list in which transnational data sharing, knowledge transfer, public-private partnerships, vaccinology/immunology, vector control, antimicrobial resistance, socioeconomics, genetics/bioinformatics and utilisation of big data rated highly. Conclusion: The outputs of the STAR...

  20. Critically Tapered Wedges and Critical State Soil Mechanics: Porosity-based Pressure Prediction in the Nankai Accretionary Prism.

    Science.gov (United States)

    Flemings, P. B.; Saffer, D. M.

    2016-12-01

    We predict pore pressure from porosity measurements at ODP Sites 1174 and 808 in the Nankai Accretionary prism, offshore Japan. For a range of friction angles (5-30 degrees), we estimate that the pore pressure ratio (λ*) ranges from 0.5 to 0.8: the pore pressure supports 50% to 80% of the overburden. Higher friction angles result in higher pressures. For the majority of the scenarios, pressures within the prism parallel the lithostat and are greater than the pressures beneath it. Our results support previous qualitative interpretations at Nankai and elsewhere suggesting that lower porosity above the décollement than below reflects higher mean effective stress there. By coupling a critical state soil model (Modified Cam Clay), which describes porosity as a function of mean and deviator stress, with a stress model that considers the difference in stress states above and below the décollement, we quantitatively show that the prism porosities record significant overpressure despite their lower porosity. As the soil is consumed by the advancing prism, changes in both mean and shear stress drive overpressure generation. Even in the extreme case where only the change in mean stress is considered (a vertical end cap model), significant overpressures are generated. The high pressures we predict require an effective friction coefficient (μ_b') at the décollement of 0.023-0.038. Assuming that the pore pressure at the décollement lies between the values we report for the wedge and the underthrusting sediments, these effective friction coefficients correspond to intrinsic friction coefficients of μ_b = 0.08-0.38 (friction angle φ = 4.6-21°). These values are comparable to friction coefficients of 0.1-0.4 reported for clay-dominated fault zones in a wide range of settings. By coupling the critical wedge model with an appropriate constitutive model, we present a systematic approach to predict pressure in thrust systems.
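
    For orientation, the sketch below evaluates the pore pressure ratio λ* as it is commonly defined (pore pressure in excess of hydrostatic, normalised by the difference between overburden and hydrostatic pressure). The densities, depths, and pressures are illustrative assumptions, not the Nankai site values or the authors' procedure.

        # Overpressure ratio lambda* = (u - u_h) / (sigma_v - u_h): the fraction of the
        # overburden in excess of hydrostatic that is carried by the pore fluid.
        # Illustrative numbers only; not the Nankai site values.

        RHO_WATER = 1024.0   # kg/m^3, seawater
        RHO_BULK = 2000.0    # kg/m^3, bulk sediment
        G = 9.81             # m/s^2

        def overpressure_ratio(pore_pressure, depth_below_seafloor, water_depth):
            """lambda* from pore pressure (Pa) and depths (m)."""
            hydrostatic = RHO_WATER * G * (water_depth + depth_below_seafloor)
            overburden = RHO_WATER * G * water_depth + RHO_BULK * G * depth_below_seafloor
            return (pore_pressure - hydrostatic) / (overburden - hydrostatic)

        # Example: 4500 m water depth, 600 m below seafloor, 55 MPa pore pressure.
        print(round(overpressure_ratio(55e6, 600.0, 4500.0), 2))  # ~0.66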

  1. Critically Important Object Security System Element Model

    Directory of Open Access Journals (Sweden)

    I. V. Khomyackov

    2012-03-01

    Full Text Available A stochastic model of a critically important object security system element has been developed. The model includes a mathematical description of the security system element properties and of external influences. The state evolution of the security system element is described by a semi-Markov process with a finite number of states, the semi-Markov matrix and the initial probability distribution of the semi-Markov process states. External influences are modelled as a Poisson stream with a given intensity.

  2. Critical Review of Membrane Bioreactor Models

    DEFF Research Database (Denmark)

    Naessens, W.; Maere, T.; Ratkovich, Nicolas Rios

    2012-01-01

    Membrane bioreactor technology has existed for a couple of decades, but has not yet overwhelmed the market due to some serious drawbacks, of which the operational cost due to fouling is the major contributor. Knowledge buildup and optimisation for such complex systems can heavily benefit from mathematical...... modelling. In this paper, the vast literature on hydrodynamic and integrated modelling in MBR is critically reviewed. Hydrodynamic models are used at different scales and focus mainly on fouling and only little on system design/optimisation. Integrated models also focus on fouling although the ones...

  3. On the criticality of inferred models

    Science.gov (United States)

    Mastromatteo, Iacopo; Marsili, Matteo

    2011-10-01

    Advanced inference techniques allow one to reconstruct a pattern of interaction from high dimensional data sets, from probing simultaneously thousands of units of extended systems—such as cells, neural tissues and financial markets. We focus here on the statistical properties of inferred models and argue that inference procedures are likely to yield models which are close to singular values of parameters, akin to critical points in physics where phase transitions occur. These are points where the response of physical systems to external perturbations, as measured by the susceptibility, is very large and diverges in the limit of infinite size. We show that the reparameterization invariant metrics in the space of probability distributions of these models (the Fisher information) are directly related to the susceptibility of the inferred model. As a result, distinguishable models tend to accumulate close to critical points, where the susceptibility diverges in infinite systems. This region is the one where the estimate of inferred parameters is most stable. In order to illustrate these points, we discuss inference of interacting point processes with application to financial data and show that sensible choices of observation time scales naturally yield models which are close to criticality.

  4. On the criticality of inferred models

    International Nuclear Information System (INIS)

    Mastromatteo, Iacopo; Marsili, Matteo

    2011-01-01

    Advanced inference techniques allow one to reconstruct a pattern of interaction from high dimensional data sets, from probing simultaneously thousands of units of extended systems—such as cells, neural tissues and financial markets. We focus here on the statistical properties of inferred models and argue that inference procedures are likely to yield models which are close to singular values of parameters, akin to critical points in physics where phase transitions occur. These are points where the response of physical systems to external perturbations, as measured by the susceptibility, is very large and diverges in the limit of infinite size. We show that the reparameterization invariant metrics in the space of probability distributions of these models (the Fisher information) are directly related to the susceptibility of the inferred model. As a result, distinguishable models tend to accumulate close to critical points, where the susceptibility diverges in infinite systems. This region is the one where the estimate of inferred parameters is most stable. In order to illustrate these points, we discuss inference of interacting point processes with application to financial data and show that sensible choices of observation time scales naturally yield models which are close to criticality
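
    A minimal numerical illustration of the link the paper draws between the Fisher information and the susceptibility, using an assumed toy model (independent spins in a field) rather than the interacting point processes studied in the paper: for this one-parameter exponential family the Fisher information equals the variance of the sufficient statistic, which is also the susceptibility, and both peak at h = 0.

        import numpy as np

        # Toy model: N independent spins s_i = +/-1 with P(s) proportional to exp(h * sum_i s_i).
        # Fisher information I(h) = Var(sum_i s_i) = N * (1 - tanh(h)**2), which equals the
        # susceptibility d<sum_i s_i>/dh and is largest at h = 0.

        def fisher_information(h, n_spins):
            return n_spins * (1.0 - np.tanh(h) ** 2)

        def susceptibility_numeric(h, n_spins, dh=1e-4):
            mean_total_spin = lambda field: n_spins * np.tanh(field)
            return (mean_total_spin(h + dh) - mean_total_spin(h - dh)) / (2.0 * dh)

        for h in (0.0, 0.5, 2.0):
            print(h, fisher_information(h, 100), round(susceptibility_numeric(h, 100), 3))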

  5. Critical-state model for the determination of critical currents in disk-shaped superconductors

    International Nuclear Information System (INIS)

    Frankel, D.J.

    1979-01-01

    A series of experiments has been carried out on the flux trapping and shielding capabilities of a flat strip of Nb-Ti/Cu composite material. A circular piece of material from the strip was tested in a uniform field directed perpendicularly to the surface of the sample. Profiles of the normal component of the field along the sample diameter were measured. The critical-state model was adapted for this geometry and proved capable of reproducing the measured field profiles. Model curves agreed well with experimental field profiles generated when the full sample was in the critical state, when only a portion of the sample was in the critical state, and when profiles were obtained after the direction of the rate change of the magnetic field was reversed. The adaptation of the critical-state model to disk geometry provides a possible method either to derive values of the critical current from measurements of field profiles above thin flat samples, or to predict the trapping and shielding behavior of such samples if the critical current is already known. This method of determining critical currents does not require that samples be formed into narrow strips or wires, as is required for direct measurements of J_c, or into tubes or cylinders, as is usually required for magnetization-type measurements. Only a relatively small approximately circular piece of material is needed. The method relies on induced currents, so there is no need to pass large currents into the sample. The field-profile measurements are easily performed with inexpensive Hall probes and do not require detection of the resistive transition of the superconductor.

  6. Prediction of critical heat flux by a new local condition hypothesis

    International Nuclear Information System (INIS)

    Im, J. H.; Jun, K. D.; Sim, J. W.; Deng, Zhijian

    1998-01-01

    Critical Heat Flux (CHF) was predicted for a uniformly heated vertical round tube by a new local condition hypothesis which incorporates a local true steam quality. This model successfully overcame the difficulties in predicting the subcooled and quality CHF with the thermodynamic equilibrium quality. The local true steam quality is a dependent variable of the thermodynamic equilibrium quality at the exit and the quality at the Onset of Significant Vaporization (OSV). The exit thermodynamic equilibrium quality was obtained from the heat balance, and the quality at OSV was obtained from the Saha-Zuber correlation. In the past, CHF has been predicted by experimental correlations based on local or non-local condition hypotheses. This preliminary study showed that all the available world data on uniform CHF could be predicted by the model based on the local condition hypothesis.

  7. Critical assessment of methods of protein structure prediction (CASP)-round IX

    KAUST Repository

    Moult, John; Fidelis, Krzysztof; Kryshtafovych, Andriy; Tramontano, Anna

    2011-01-01

    This article is an introduction to the special issue of the journal PROTEINS, dedicated to the ninth Critical Assessment of Structure Prediction (CASP) experiment to assess the state of the art in protein structure modeling. The article describes the conduct of the experiment, the categories of prediction included, and outlines the evaluation and assessment procedures. Methods for modeling protein structure continue to advance, although at a more modest pace than in the early CASP experiments. CASP developments of note are indications of improvement in model accuracy for some classes of target, an improved ability to choose the most accurate of a set of generated models, and evidence of improvement in accuracy for short "new fold" models. In addition, a new analysis of regions of models not derivable from the most obvious template structure has revealed better performance than expected.

  8. Predicting extinction rates in stochastic epidemic models

    International Nuclear Information System (INIS)

    Schwartz, Ira B; Billings, Lora; Dykman, Mark; Landsman, Alexandra

    2009-01-01

    We investigate the stochastic extinction processes in a class of epidemic models. Motivated by the process of natural disease extinction in epidemics, we examine the rate of extinction as a function of disease spread. We show that the effective entropic barrier for extinction in a susceptible–infected–susceptible epidemic model displays scaling with the distance to the bifurcation point, with an unusual critical exponent. We make a direct comparison between predictions and numerical simulations. We also consider the effect of non-Gaussian vaccine schedules, and show numerically how the extinction process may be enhanced when the vaccine schedules are Poisson distributed
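
    A minimal sketch of the kind of stochastic extinction calculation described above: a Gillespie simulation of the susceptible-infected-susceptible (SIS) model, with the mean extinction time estimated as a function of the reproductive number. Population size, rates, replicate counts, and the time cap are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(1)

        def sis_extinction_time(beta, gamma, n_pop, i0, t_max=1000.0):
            """Gillespie simulation of a stochastic SIS epidemic.

            Returns the time at which the infection dies out, capped at t_max.
            """
            t, infected = 0.0, i0
            while infected > 0 and t < t_max:
                rate_infection = beta * infected * (n_pop - infected) / n_pop
                rate_recovery = gamma * infected
                total = rate_infection + rate_recovery
                t += rng.exponential(1.0 / total)
                if rng.random() < rate_infection / total:
                    infected += 1
                else:
                    infected -= 1
            return t

        # Mean extinction time grows steeply with R0 = beta/gamma, reflecting the
        # entropic barrier to extinction.
        for beta in (1.2, 1.5, 2.0):
            times = [sis_extinction_time(beta, 1.0, n_pop=50, i0=5) for _ in range(20)]
            print(f"R0 = {beta:.1f}: mean extinction time ~ {np.mean(times):.1f}")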

  9. Prediction Models for Dynamic Demand Response

    Energy Technology Data Exchange (ETDEWEB)

    Aman, Saima; Frincu, Marc; Chelmis, Charalampos; Noor, Muhammad; Simmhan, Yogesh; Prasanna, Viktor K.

    2015-11-02

    As Smart Grids move closer to dynamic curtailment programs, Demand Response (DR) events will become necessary not only on fixed time intervals and weekdays predetermined by static policies, but also during changing decision periods and weekends to react to real-time demand signals. Unique challenges arise in this context vis-a-vis demand prediction and curtailment estimation and the transformation of such tasks into an automated, efficient dynamic demand response (D2R) process. While existing work has concentrated on increasing the accuracy of prediction models for DR, there is a lack of studies for prediction models for D2R, which we address in this paper. Our first contribution is the formal definition of D2R, and the description of its challenges and requirements. Our second contribution is a feasibility analysis of very-short-term prediction of electricity consumption for D2R over a diverse, large-scale dataset that includes both small residential customers and large buildings. Our third, and major, contribution is a set of insights into the predictability of electricity consumption in the context of D2R. Specifically, we focus on prediction models that can operate at a very small data granularity (here 15-min intervals), for both weekdays and weekends - all conditions that characterize scenarios for D2R. We find that short-term time series and simple averaging models used by Independent Service Operators and utilities achieve superior prediction accuracy. We also observe that workdays are more predictable than weekends and holidays. Also, smaller customers have large variation in consumption and are less predictable than larger buildings. Key implications of our findings are that better models are required for small customers and for non-workdays, both of which are critical for D2R. Also, prediction models require just a few days’ worth of data, indicating that small amounts of
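
    A minimal sketch of the simple averaging baseline reported above as effective for very-short-term prediction: each 15-minute interval is forecast as the mean of the same interval over recent days. The consumption data are synthetic placeholders, not the study's dataset.

        import numpy as np

        rng = np.random.default_rng(0)
        INTERVALS_PER_DAY = 96  # 15-minute intervals

        # Synthetic consumption: a smooth daily profile plus noise, 28 days of history.
        base_profile = 50.0 + 30.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, INTERVALS_PER_DAY))
        history = np.array([base_profile + rng.normal(scale=5.0, size=INTERVALS_PER_DAY)
                            for _ in range(28)])

        def averaging_forecast(history, n_days=7):
            """Forecast each 15-min interval as the mean of the last n_days at that interval."""
            return history[-n_days:].mean(axis=0)

        forecast = averaging_forecast(history)
        actual = base_profile + rng.normal(scale=5.0, size=INTERVALS_PER_DAY)
        mape = np.mean(np.abs(forecast - actual) / actual) * 100.0
        print(f"MAPE of the averaging baseline: {mape:.1f}%")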

  10. Critical analysis of algebraic collective models

    International Nuclear Information System (INIS)

    Moshinsky, M.

    1986-01-01

    The author shall understand by algebraic collective models all those based on specific Lie algebras, whether the latter are suggested through simple shell model considerations like in the case of the Interacting Boson Approximation (IBA), or have a detailed microscopic foundation like the symplectic model. To analyze these models critically, it is convenient to take a simple conceptual example of them in which all steps can be implemented analytically or through elementary numerical analysis. In this note he takes as an example the symplectic model in a two-dimensional space, i.e., based on a sp(4,R) Lie algebra, and shows how through its complete discussion we can get a clearer understanding of the structure of algebraic collective models of nuclei. In particular he discusses the association of Hamiltonians, related to maximal subalgebras of our basic Lie algebra, with specific types of spectra, and the connections between spectra and shapes.

  11. Critical review of glass performance modeling

    International Nuclear Information System (INIS)

    Bourcier, W.L.

    1994-07-01

    Borosilicate glass is to be used for permanent disposal of high-level nuclear waste in a geologic repository. Mechanistic chemical models are used to predict the rate at which radionuclides will be released from the glass under repository conditions. The most successful and useful of these models link reaction path geochemical modeling programs with a glass dissolution rate law that is consistent with transition state theory. These models have been used to simulate several types of short-term laboratory tests of glass dissolution and to predict the long-term performance of the glass in a repository. Although mechanistically based, the current models are limited by a lack of unambiguous experimental support for some of their assumptions. The most severe problem of this type is the lack of an existing validated mechanism that controls long-term glass dissolution rates. Current models can be improved by performing carefully designed experiments and using the experimental results to validate the rate-controlling mechanisms implicit in the models. These models should be supported with long-term experiments to be used for model validation. The mechanistic basis of the models should be explored by using modern molecular simulations such as molecular orbital and molecular dynamics to investigate both the glass structure and its dissolution process

  12. Critical review of glass performance modeling

    Energy Technology Data Exchange (ETDEWEB)

    Bourcier, W.L. [Lawrence Livermore National Lab., CA (United States)

    1994-07-01

    Borosilicate glass is to be used for permanent disposal of high-level nuclear waste in a geologic repository. Mechanistic chemical models are used to predict the rate at which radionuclides will be released from the glass under repository conditions. The most successful and useful of these models link reaction path geochemical modeling programs with a glass dissolution rate law that is consistent with transition state theory. These models have been used to simulate several types of short-term laboratory tests of glass dissolution and to predict the long-term performance of the glass in a repository. Although mechanistically based, the current models are limited by a lack of unambiguous experimental support for some of their assumptions. The most severe problem of this type is the lack of an existing validated mechanism that controls long-term glass dissolution rates. Current models can be improved by performing carefully designed experiments and using the experimental results to validate the rate-controlling mechanisms implicit in the models. These models should be supported with long-term experiments to be used for model validation. The mechanistic basis of the models should be explored by using modern molecular simulations such as molecular orbital and molecular dynamics to investigate both the glass structure and its dissolution process.

  13. Homogeneous nonequilibrium critical flashing flow with a cavity flooding model

    International Nuclear Information System (INIS)

    Lee, S.Y.; Schrock, V.E.

    1989-01-01

    The primary purpose of the work presented here is to describe the model for pressure undershoot at incipient flashing in the critical flow of straight channels (Fanno-type flow) for subcooled or saturated stagnation conditions on a more physical basis. In previous models, a modification of the pressure undershoot prediction of Alamgir and Lienhard was used. Their method assumed nucleation occurs on the bounding walls as a result of molecular fluctuations. Without modification it overpredicts the pressure undershoot. In the present work the authors develop a mechanistic model for nucleation from wall cavities. This physical concept is more consistent with experimental data

  14. Detecting, anticipating, and predicting critical transitions in spatially extended systems.

    Science.gov (United States)

    Kwasniok, Frank

    2018-03-01

    A data-driven linear framework for detecting, anticipating, and predicting incipient bifurcations in spatially extended systems based on principal oscillation pattern (POP) analysis is discussed. The dynamics are assumed to be governed by a system of linear stochastic differential equations which is estimated from the data. The principal modes of the system together with corresponding decay or growth rates and oscillation frequencies are extracted as the eigenvectors and eigenvalues of the system matrix. The method can be applied to stationary datasets to identify the least stable modes and assess the proximity to instability; it can also be applied to nonstationary datasets using a sliding window approach to track the changing eigenvalues and eigenvectors of the system. As a further step, a genuinely nonstationary POP analysis is introduced. Here, the system matrix of the linear stochastic model is time-dependent, allowing for extrapolation and prediction of instabilities beyond the learning data window. The methods are demonstrated and explored using the one-dimensional Swift-Hohenberg equation as an example, focusing on the dynamics of stochastic fluctuations around the homogeneous stable state prior to the first bifurcation. The POP-based techniques are able to extract and track the least stable eigenvalues and eigenvectors of the system; the nonstationary POP analysis successfully predicts the timing of the first instability and the unstable mode well beyond the learning data window.
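
    A minimal sketch of the stationary POP estimation step described above, under the usual assumption that the dynamics follow a linear stochastic model: the propagator is fitted from lag-0 and lag-1 covariances, and its eigenvalues yield decay/growth rates and oscillation frequencies. The two-variable damped oscillator is synthetic; this is not the Swift-Hohenberg application from the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def pop_analysis(data, dt=1.0):
            """Stationary principal oscillation pattern (POP) analysis.

            data has shape (n_time, n_vars). A linear propagator A with
            x_{t+1} ~ A x_t is fitted from lag-0 and lag-1 covariances; the
            eigenvalues of A give decay/growth rates and angular frequencies.
            """
            x = data - data.mean(axis=0)
            c0 = x[:-1].T @ x[:-1] / (len(x) - 1)  # lag-0 covariance
            c1 = x[1:].T @ x[:-1] / (len(x) - 1)   # lag-1 covariance
            propagator = c1 @ np.linalg.inv(c0)
            eigvals, eigvecs = np.linalg.eig(propagator)
            growth_rates = np.log(np.abs(eigvals)) / dt  # negative = decaying mode
            frequencies = np.angle(eigvals) / dt
            return growth_rates, frequencies, eigvecs

        # Synthetic example: a noise-driven damped oscillation in two variables.
        n_steps = 5000
        a_true = np.array([[0.98, -0.10], [0.10, 0.98]])
        x = np.zeros((n_steps, 2))
        for t in range(n_steps - 1):
            x[t + 1] = a_true @ x[t] + rng.normal(scale=0.1, size=2)

        rates, freqs, _ = pop_analysis(x)
        print("growth rates:", rates, "frequencies:", freqs)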

  15. The Biomantle-Critical Zone Model

    Science.gov (United States)

    Johnson, D. L.; Lin, H.

    2006-12-01

    It is a fact that established fields, like geomorphology, soil science, and pedology, which treat near surface and surface processes, are undergoing conceptual changes. Disciplinary self examinations are rife. New practitioners are joining these fields, bringing novel and interdisciplinary ideas. Such new names as "Earth's critical zone," "near surface geophysics," and "weathering engine" are being coined for research groups. Their agendas reflect an effort to integrate and reenergize established fields and break new ground. The new discipline "hydropedology" integrates soil science with hydrologic principles, and recent biodynamic investigations have spawned "biomantle" concepts and principles. One force behind these sea shifts may be retrospectives whereby disciplines periodically re-invent themselves to meet new challenges. Such retrospectives may be manifest in the recent Science issue on "Soils, The Final Frontier" (11 June, 2004), and in recent National Research Council reports that have set challenges to science for the next three decades (Basic Research Opportunities in Earth Science, and Grand Challenges for the Environmental Sciences, both published in 2001). In keeping with such changes, we advocate the integration of biomantle and critical zone concepts into a general model of Earth's soil. (The scope of the model automatically includes the domain of hydropedology.) Our justification is that the integration makes for a more appealing holistic, and realistic, model for the domain of Earth's soil at any scale. The focus is on the biodynamics of the biomantle and water flow within the critical zone. In this general model the biomantle is the epidermis of the critical zone, which extends to the base of the aquifer. We define soil as the outer layer of landforms on planets and similar bodies altered by biological, chemical, and/or physical agents. Because Earth is the only planet with biological agents, as far as we know, it is the only one that has all

  16. Modelling critical NDVI curves in perennial ryegrass

    DEFF Research Database (Denmark)

    Gislum, R; Boelt, B

    2010-01-01

    Optical sensors that measure canopy reflectance and calculate crop indices such as the normalized difference vegetation index (NDVI) are widely used in agricultural crops, but have so far not been implemented in herbage seed production. The purpose of the present study is to develop a critical...... NDVI curve, where the critical NDVI, defined as the minimum NDVI needed to achieve a high seed yield, is modelled during the growing season. NDVI measurements were made at different growing degree days (GDD) in a three-year field experiment in which different N application rates were applied....... There was a clear maximum in the correlation coefficient between seed yield and NDVI in the period from approximately 700 to 900 GDD. At this time there was an exponential relationship between NDVI and seed yield, with the highest seed yields at NDVI ~0.9. Theoretically the farmers should aim for an NDVI of 0...

  17. The effect of virtual mass on the prediction of critical flow

    International Nuclear Information System (INIS)

    Cheng, L.; Lahey, R.T.; Drew, D.A.

    1983-01-01

    By observing the results in Fig. 4 and Fig. 5 we can see that virtual mass effects are important in predicting critical flow. However, as seen in Fig. 7a, in which all three flows are predicted to be critical (Δ=0), it is difficult to distinguish one set of conditions from the other by just considering the pressure profile. Clearly more detailed data, such as the throat void fraction, is needed for discrimination between these calculations. Moreover, since the calculated critical flows have been found to be sensitive to initial mass flux and void fraction, careful measurements of those parameters are needed before accurate virtual mass parameters can be determined from these data. It can be concluded that the existing Moby Dick data is inadequate to allow one to deduce accurate values of the virtual mass parameters C_VM and λ. Nevertheless, more careful experiments of this type are uniquely suited for the determination of these important parameters. It appears that the use of a nine equation model, such as that discussed herein, coupled with more detailed accurate critical flow data is an effective means of determining the parameters in interfacial momentum transfer models, such as virtual mass effects, which are only important during strong spatial accelerations. Indeed, there are few other methods available which can be used for such determinations.

  18. Critical behavior in a microcanonical multifragmentation model

    International Nuclear Information System (INIS)

    Raduta, A.H.; Raduta, A.R.; Chomaz, Ph.; Raduta, A.H.; Raduta, A.R.; Gulminelli, F.

    2001-01-01

    Scaling properties of the fragment size distributions are studied in a microcanonical multifragmentation model. A new method based on the global quality of the scaling function is presented. Scaling is not washed out by the long range Coulomb interaction nor by secondary decays for a wide range of source masses, densities and deposited energies. However, the influence of these factors on the precise values of the critical exponents, as well as the finite-size corrections to scaling, are shown to be important and to affect the possible determination of a specific universality class. (authors)

  19. Two dimensional critical models on a torus

    International Nuclear Information System (INIS)

    Saleur, H.; Di Francesco, P.

    1987-01-01

    After the general developments of conformal invariance in two dimensions, it was realized that the study of critical models in finite geometries, in addition to the practical information it could provide through finite size scaling, was also of great conceptual interest. The simplest example is the case of the torus, a genus 1 surface which is thus not conformally equivalent to the plane. This geometry appears quite frequently in lattice calculations for systems with periodic boundary conditions, and is also very natural from the point of view of string theory. We will discuss briefly in these notes the main results obtained so far in this simple case

  20. Assessment of ASSERT-PV for prediction of critical heat flux in CANDU bundles

    International Nuclear Information System (INIS)

    Rao, Y.F.; Cheng, Z.; Waddington, G.M.

    2014-01-01

    Highlights: • Assessment of the new Canadian subchannel code ASSERT-PV 3.2 for CHF prediction. • CANDU 28-, 37- and 43-element bundle CHF experiments. • Prediction improvement of ASSERT-PV 3.2 over previous code versions. • Sensitivity study of the effect of CHF model options. - Abstract: Atomic Energy of Canada Limited (AECL) has developed the subchannel thermalhydraulics code ASSERT-PV for the Canadian nuclear industry. The recently released ASSERT-PV 3.2 provides enhanced models for improved predictions of flow distribution, critical heat flux (CHF), and post-dryout (PDO) heat transfer in horizontal CANDU fuel channels. This paper presents results of an assessment of the new code version against five full-scale CANDU bundle experiments conducted in the 1990s and in 2009 by Stern Laboratories (SL), using 28-, 37- and 43-element (CANFLEX) bundles. A total of 15 CHF test series with varying pressure-tube creep and/or bearing-pad height were analyzed. The SL experiments encompassed the bundle geometries and range of flow conditions for the intended ASSERT-PV applications for CANDU reactors. Code predictions of channel dryout power and axial and radial CHF locations were compared against measurements from the SL CHF tests to quantify the code prediction accuracy. The prediction statistics using the recommended model set of ASSERT-PV 3.2 were compared to those from previous code versions. Furthermore, the sensitivity studies evaluated the contribution of each CHF model change or enhancement to the improvement in CHF prediction. Overall, the assessment demonstrated significant improvement in prediction of channel dryout power and axial and radial CHF locations in horizontal fuel channels containing CANDU bundles.

  1. Saturated properties prediction in critical region by a quartic ...

    African Journals Online (AJOL)

    A diverse substance library containing extensive PVT data for 77 pure components was used to critically evaluate the performance of a quartic equation of state and four other well-known cubic equations of state in the critical region. The quartic EOS studied in this work was found to be significantly superior to the others in both vapor ...

  2. Critical phases in the raise and peel model

    Science.gov (United States)

    Jara, D. A. C.; Alcaraz, F. C.

    2018-05-01

    The raise and peel model (RPM) is a nonlocal stochastic model describing the space and time fluctuations of an evolving one-dimensional interface. Its relevant parameter u is the ratio between the rates of local adsorption and nonlocal desorption processes (avalanches). The model at u = 1 is the first example of a conformally invariant stochastic model. For small values u < u_0 the model is noncritical, while for u > u_0 it is critical. Although previous studies indicate that u_0 = 1, a determination of u_0 with reasonable precision is still missing. By calculating numerically the structure function of the height profiles in reciprocal space we confirm with good precision that indeed u_0 = 1. We establish that at the conformally invariant point u = 1 the RPM has a roughening transition with dynamical critical exponent z = 1 and a corresponding roughness exponent. For u > 1 the model is critical with a u-dependent dynamical critical exponent that tends towards zero as u → ∞. However, at 1/u = 0 the RPM is exactly mapped into the totally asymmetric exclusion problem. This last model is known to be noncritical (critical) for open (periodic) boundary conditions. Our numerical studies indicate that the RPM, as u → ∞, due to its nonlocal dynamical processes, has the same large-distance physics no matter what boundary condition we choose. For u > 1, our numerical analysis shows that, in contrast to previous predictions, the region is composed of two distinct critical phases: in one phase the height profiles are rough, and in the other the height profiles are flat at large distances. We also observed that in both critical phases (u > 1) the RPM at short length scales has an effective behavior in the Kardar–Parisi–Zhang critical universality class, which is not the true behavior of the system at large length scales.

  3. Model of designating the critical damages

    Directory of Open Access Journals (Sweden)

    Zwolińska Bożena

    2017-06-01

    Full Text Available The article consists of two parts which form an integral whole. This article presents a method of designating the critical damages in accordance with the lean maintenance approach. The author considers an exemplary production system (serial-parallel) in which, within a time Δt, damage appeared on three different objects. The article presents a mathematical model which enables determination of an indicator called the "prioritized digit of the device". The developed model considers several parameters: the production capabilities of devices, the existence of potential substitute devices, the position of the damage in the production stream based on the capacity of operational buffers, the time needed to remove the damage, and the influence of the damage on the completion of customers' orders (the CEF indicator).

  4. Evaluation of Accuracy of Calculational Prediction of Criticality Based on ICSBEP Handbook Experiments

    International Nuclear Information System (INIS)

    Golovko, Yury; Rozhikhin, Yevgeniy; Tsibulya, Anatoly; Koscheev, Vladimir

    2008-01-01

    Experiments with plutonium, low-enriched uranium and uranium-233 from the ICSBEP Handbook are considered in this paper. Among these experiments, only those that seem most relevant to evaluating the uncertainty of the critical mass of mixtures of plutonium, low-enriched uranium or uranium-233 with light water were selected. All selected experiments were examined, covariance matrices of criticality uncertainties were developed, and some uncertainties were revised. A statistical analysis of these experiments was performed and some contradictions were discovered and eliminated. The accuracy of criticality prediction was evaluated using the internally consistent set of experiments with plutonium, low-enriched uranium and uranium-233 that remained after the statistical analyses. The application objects for the evaluation of the calculational prediction of criticality were water-reflected spherical systems of homogeneous aqueous mixtures of plutonium, low-enriched uranium or uranium-233 of different concentrations, which are simplified models of apparatus of the external fuel cycle. It is shown that the procedure allows the uncertainty in k_eff caused by the uncertainties in neutron cross-sections to be reduced considerably. It is also shown that the results are practically independent of the initial covariance matrices of nuclear data uncertainties. (authors)

  5. Prediction of critical illness in elderly outpatients using elder risk assessment: a population-based study

    Directory of Open Access Journals (Sweden)

    Biehl M

    2016-06-01

    The area under the receiver operating characteristic curve was 0.75, which indicated good discrimination. Conclusion: A simple model based on easily obtainable administrative data predicted critical illness in the next 2 years in elderly outpatients, with up to 14% of the highest-risk population suffering from critical illness. This model can facilitate efficient enrollment of patients into clinical programs such as care transition programs and studies aimed at the prevention of critical illness. It also can serve as a reminder to initiate advance care planning for high-risk elderly patients. External validation of this tool in different populations may enhance its generalizability. Keywords: aged, prognostication, critical care, mortality, elder risk assessment

  6. Critical assessment of methods of protein structure prediction (CASP) - round x

    KAUST Repository

    Moult, John; Fidelis, Krzysztof; Kryshtafovych, Andriy; Schwede, Torsten; Tramontano, Anna

    2013-01-01

    This article is an introduction to the special issue of the journal PROTEINS, dedicated to the tenth Critical Assessment of Structure Prediction (CASP) experiment to assess the state of the art in protein structure modeling. The article describes the conduct of the experiment, the categories of prediction included, and outlines the evaluation and assessment procedures. The 10 CASP experiments span almost 20 years of progress in the field of protein structure modeling, and there have been enormous advances in methods and model accuracy in that period. Notable in this round is the first sustained improvement of models with refinement methods, using molecular dynamics. For the first time, we tested the ability of modeling methods to make use of sparse experimental three-dimensional contact information, such as may be obtained from new experimental techniques, with encouraging results. On the other hand, new contact prediction methods, though holding considerable promise, have yet to make an impact in CASP testing. The nature of CASP targets has been changing in recent CASPs, reflecting shifts in experimental structural biology, with more irregular structures, more multi-domain and multi-subunit structures, and less standard versions of known folds. When allowance is made for these factors, we continue to see steady progress in the overall accuracy of models, particularly resulting from improvement of non-template regions.

  7. Critical assessment of methods of protein structure prediction (CASP) - round x

    KAUST Repository

    Moult, John

    2013-12-17

    This article is an introduction to the special issue of the journal PROTEINS, dedicated to the tenth Critical Assessment of Structure Prediction (CASP) experiment to assess the state of the art in protein structure modeling. The article describes the conduct of the experiment, the categories of prediction included, and outlines the evaluation and assessment procedures. The 10 CASP experiments span almost 20 years of progress in the field of protein structure modeling, and there have been enormous advances in methods and model accuracy in that period. Notable in this round is the first sustained improvement of models with refinement methods, using molecular dynamics. For the first time, we tested the ability of modeling methods to make use of sparse experimental three-dimensional contact information, such as may be obtained from new experimental techniques, with encouraging results. On the other hand, new contact prediction methods, though holding considerable promise, have yet to make an impact in CASP testing. The nature of CASP targets has been changing in recent CASPs, reflecting shifts in experimental structural biology, with more irregular structures, more multi-domain and multi-subunit structures, and less standard versions of known folds. When allowance is made for these factors, we continue to see steady progress in the overall accuracy of models, particularly resulting from improvement of non-template regions.

  8. Model predictive control using fuzzy decision functions

    NARCIS (Netherlands)

    Kaymak, U.; Costa Sousa, da J.M.

    2001-01-01

    Fuzzy predictive control integrates conventional model predictive control with techniques from fuzzy multicriteria decision making, translating the goals and the constraints to predictive control in a transparent way. The information regarding the (fuzzy) goals and the (fuzzy) constraints of the

  9. A state-of-the-art report on two-phase critical flow modelling

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Jae Joon; Jang, Won Pyo; Kim, Dong Soo [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1993-09-01

    This report reviews and analyses two-phase, critical flow models. The purposes of the report are (1) to make a knowledge base for the full understanding and best-estimate of two-phase, critical flow, (2) to analyse the model development trend and to derive the direction of further studies. A wide range of critical flow models are reviewed. Each model, in general, predicts critical flow well only within specified conditions. The critical flow models of best-estimate codes are special process models included in the hydrodynamic model. The results of calculations depend on the nodalization, discharge coefficient, and other user's options. The following topics are recommended for continuing studies: improvement of two-fluid model, development of multidimensional model, data base setup and model error evaluation, and generalization of discharge coefficients. 24 figs., 5 tabs., 80 refs. (Author).

  10. A state-of-the-art report on two-phase critical flow modelling

    International Nuclear Information System (INIS)

    Jung, Jae Joon; Jang, Won Pyo; Kim, Dong Soo

    1993-09-01

    This report reviews and analyses two-phase, critical flow models. The purposes of the report are (1) to make a knowledge base for the full understanding and best-estimate of two-phase, critical flow, (2) to analyse the model development trend and to derive the direction of further studies. A wide range of critical flow models are reviewed. Each model, in general, predicts critical flow well only within specified conditions. The critical flow models of best-estimate codes are special process models included in the hydrodynamic model. The results of calculations depend on the nodalization, discharge coefficient, and other user's options. The following topics are recommended for continuing studies: improvement of two-fluid model, development of multidimensional model, data base setup and model error evaluation, and generalization of discharge coefficients. 24 figs., 5 tabs., 80 refs. (Author)

  11. Model of designating the critical damages

    Directory of Open Access Journals (Sweden)

    Zwolińska Bożena

    2017-06-01

    Full Text Available Managing a company in the lean way presumes no breakdowns and no reserves in the whole delivery chain. However, achieving such low indicators is impossible. That is why in some production plants it is extremely important to focus on preventive actions which can limit damages. This article presents a method of designating the critical damages in accordance with the lean maintenance approach. The article consists of two parts which form an integral whole. Part one presents the characteristics of a real object; it also contains a production-capability analysis of certain areas within the production structure. Part two presents a probabilistic model of the maximal time loss, based on the emptying and filling of interoperational buffers.

  12. Scalable Predictive Analysis in Critically Ill Patients Using a Visual Open Data Analysis Platform.

    Directory of Open Access Journals (Sweden)

    Sven Van Poucke

    Full Text Available With the accumulation of large amounts of health related data, predictive analytics could stimulate the transformation of reactive medicine towards Predictive, Preventive and Personalized (PPPM) Medicine, ultimately affecting both cost and quality of care. However, high-dimensionality and high-complexity of the data involved, prevents data-driven methods from easy translation into clinically relevant models. Additionally, the application of cutting edge predictive methods and data manipulation require substantial programming skills, limiting its direct exploitation by medical domain experts. This leaves a gap between potential and actual data usage. In this study, the authors address this problem by focusing on open, visual environments, suited to be applied by the medical community. Moreover, we review code free applications of big data technologies. As a showcase, a framework was developed for the meaningful use of data from critical care patients by integrating the MIMIC-II database in a data mining environment (RapidMiner) supporting scalable predictive analytics using visual tools (RapidMiner's Radoop extension). Guided by the CRoss-Industry Standard Process for Data Mining (CRISP-DM), the ETL process (Extract, Transform, Load) was initiated by retrieving data from the MIMIC-II tables of interest. As use case, correlation of platelet count and ICU survival was quantitatively assessed. Using visual tools for ETL on Hadoop and predictive modeling in RapidMiner, we developed robust processes for automatic building, parameter optimization and evaluation of various predictive models, under different feature selection schemes. Because these processes can be easily adopted in other projects, this environment is attractive for scalable predictive analytics in health research.

  13. Scalable Predictive Analysis in Critically Ill Patients Using a Visual Open Data Analysis Platform.

    Science.gov (United States)

    Van Poucke, Sven; Zhang, Zhongheng; Schmitz, Martin; Vukicevic, Milan; Laenen, Margot Vander; Celi, Leo Anthony; De Deyne, Cathy

    2016-01-01

    With the accumulation of large amounts of health related data, predictive analytics could stimulate the transformation of reactive medicine towards Predictive, Preventive and Personalized (PPPM) Medicine, ultimately affecting both cost and quality of care. However, high-dimensionality and high-complexity of the data involved, prevents data-driven methods from easy translation into clinically relevant models. Additionally, the application of cutting edge predictive methods and data manipulation require substantial programming skills, limiting its direct exploitation by medical domain experts. This leaves a gap between potential and actual data usage. In this study, the authors address this problem by focusing on open, visual environments, suited to be applied by the medical community. Moreover, we review code free applications of big data technologies. As a showcase, a framework was developed for the meaningful use of data from critical care patients by integrating the MIMIC-II database in a data mining environment (RapidMiner) supporting scalable predictive analytics using visual tools (RapidMiner's Radoop extension). Guided by the CRoss-Industry Standard Process for Data Mining (CRISP-DM), the ETL process (Extract, Transform, Load) was initiated by retrieving data from the MIMIC-II tables of interest. As use case, correlation of platelet count and ICU survival was quantitatively assessed. Using visual tools for ETL on Hadoop and predictive modeling in RapidMiner, we developed robust processes for automatic building, parameter optimization and evaluation of various predictive models, under different feature selection schemes. Because these processes can be easily adopted in other projects, this environment is attractive for scalable predictive analytics in health research.

  14. Modeling Resource Hotspots: Critical Linkages and Processes

    Science.gov (United States)

    Daher, B.; Mohtar, R.; Pistikopoulos, E.; McCarl, B. A.; Yang, Y.

    2017-12-01

    Growing demands for interconnected resources emerge in the form of hotspots of varying characteristics. The business as usual allocation model cannot address the current, let alone anticipated, complex and highly interconnected resource challenges we face. A new paradigm for resource allocation must be adopted: one that identifies cross-sectoral synergies and moves away from silos toward recognition and integration of the nexus. Doing so will result in new opportunities for business growth, economic development, and improved social well-being. Solutions and interventions must be multi-faceted; opportunities should be identified with holistic trade-offs in mind. No single solution fits all: different hotspots will require distinct interventions. Hotspots have varying resource constraints, stakeholders, goals and targets. The San Antonio region represents a complex resource hotspot with promising potential: its rapidly growing population, the Eagle Ford shale play, and the major agricultural activity there make it a hotspot with many competing demands. Stakeholders need tools to allow them to knowledgeably address impending resource challenges. This study will identify contemporary WEF nexus questions and critical system interlinkages that will inform the modeling of the tightly interconnected resource systems and stresses using the San Antonio Region as a base; it will conceptualize a WEF nexus modeling framework, and develop assessment criteria to inform integrative planning and decision making.

  15. Prediction of Chemical Function: Model Development and ...

    Science.gov (United States)

    The United States Environmental Protection Agency’s Exposure Forecaster (ExpoCast) project is developing both statistical and mechanism-based computational models for predicting exposures to thousands of chemicals, including those in consumer products. The high-throughput (HT) screening-level exposures developed under ExpoCast can be combined with HT screening (HTS) bioactivity data for the risk-based prioritization of chemicals for further evaluation. The functional role (e.g. solvent, plasticizer, fragrance) that a chemical performs can drive both the types of products in which it is found and the concentration in which it is present, thereby affecting exposure potential. However, critical chemical use information (including functional role) is lacking for the majority of commercial chemicals for which exposure estimates are needed. A suite of machine-learning-based models for classifying chemicals in terms of their likely functional roles in products, based on structure, was developed. This effort required the collection, curation, and harmonization of publicly available data sources of chemical functional use information from government and industry bodies. Physicochemical and structure descriptor data were generated for chemicals with function data. Machine-learning classifier models for function were then built in a cross-validated manner from the descriptor/function data using the method of random forests. The models were applied to: 1) predict chemi
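
    The modeling step described above (a cross-validated random forest that assigns a likely functional role from structure-derived descriptors) can be sketched as follows. The CSV layout, file name and descriptor column names are assumptions for illustration only, not the ExpoCast data.

    # Sketch of a cross-validated random forest classifier for chemical
    # functional role. "chemical_function_data.csv" and its columns are
    # hypothetical stand-ins for the curated descriptor/function dataset.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    data = pd.read_csv("chemical_function_data.csv")
    descriptor_cols = [c for c in data.columns if c not in ("chemical_id", "function")]

    X = data[descriptor_cols].values     # physicochemical / structural descriptors
    y = data["function"].values          # functional role label (solvent, plasticizer, ...)

    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    acc = cross_val_score(clf, X, y, cv=5)           # cross-validated accuracy
    print("Mean 5-fold accuracy: %.3f" % acc.mean())

    clf.fit(X, y)                                    # final model for new chemicals
    # clf.predict(new_descriptors) would assign a likely functional role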

  16. Prediction Approach of Critical Node Based on Multiple Attribute Decision Making for Opportunistic Sensor Networks

    Directory of Open Access Journals (Sweden)

    Qifan Chen

    2016-01-01

    Full Text Available Predicting critical nodes of an Opportunistic Sensor Network (OSN) can help us not only to improve network performance but also to decrease the cost of network maintenance. However, existing ways of predicting critical nodes in static networks are not suitable for OSN. In this paper, the concepts of critical nodes, region contribution (RC), and cut-vertex in a multiregion OSN are defined. We propose an approach to predicting critical nodes for OSN, which is based on multiple attribute decision making (MADM). It uses RC to represent the dependence of regions on Ferry nodes. The TOPSIS algorithm is employed to find the Ferry node with the maximum comprehensive contribution, which is a critical node. The experimental results show that, in different scenarios, this approach can predict the critical nodes of an OSN better.
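
    The ranking step named above is standard TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution). A generic sketch is shown below; the decision matrix, attribute choices and weights are made-up illustrations, not values from the paper.

    # Generic TOPSIS ranking, as used in the approach above to pick the Ferry
    # node with the highest comprehensive contribution.
    import numpy as np

    # rows = candidate Ferry nodes, columns = attributes (e.g. region
    # contribution, connectivity, remaining energy); all benefit criteria here.
    decision = np.array([[0.8, 12.0, 0.6],
                         [0.5, 20.0, 0.9],
                         [0.9,  8.0, 0.7]])
    weights = np.array([0.5, 0.3, 0.2])

    norm = decision / np.linalg.norm(decision, axis=0)   # vector normalization
    weighted = norm * weights

    ideal_best = weighted.max(axis=0)                    # positive ideal solution
    ideal_worst = weighted.min(axis=0)                   # negative ideal solution

    d_best = np.linalg.norm(weighted - ideal_best, axis=1)
    d_worst = np.linalg.norm(weighted - ideal_worst, axis=1)

    closeness = d_worst / (d_best + d_worst)             # comprehensive contribution score
    print("Critical (most relied-upon) node index:", int(np.argmax(closeness)))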

  17. Prediction of the Critical Curvature for LX-17 with the Time of Arrival Data from DNS

    Energy Technology Data Exchange (ETDEWEB)

    Yao, Jin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fried, Laurence E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Moss, William C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-01-10

    We extract the detonation shock front velocity, curvature and acceleration from time-of-arrival data measured at grid points in direct numerical simulations of a 50 mm rate stick lit by a disk source, with the ignition-and-growth reaction model and a JWL equation of state calibrated for LX-17. We compute the quasi-steady (D, κ) relation based on the extracted properties and predict the critical curvatures of LX-17. We also propose an explicit formula for the (D, κ) relation of LX-17 that contains the failure turning point, obtained from optimization.
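
    One common way to extract front speed and curvature from a gridded arrival-time field is through its gradients: D = 1/|∇t| and κ = ∇·n with n = ∇t/|∇t|. The sketch below applies this to a synthetic radially expanding front; it is only an illustration of that idea under assumed values, not the LLNL processing chain or LX-17 data.

    # Extracting front speed D and curvature kappa from a gridded time-of-arrival
    # field t(x, y). The synthetic field (a front expanding at constant speed
    # from a point) stands in for DNS output.
    import numpy as np

    dx = 0.1                                            # grid spacing, mm (assumed)
    x = np.arange(0.0, 50.0, dx)
    y = np.arange(0.0, 50.0, dx)
    X, Y = np.meshgrid(x, y, indexing="ij")

    D_true = 7.6                                        # assumed front speed, mm/us
    R = np.sqrt((X - 25.0) ** 2 + (Y - 25.0) ** 2) + 1e-9
    t = R / D_true                                      # synthetic arrival times

    tx, ty = np.gradient(t, dx)                         # components of grad t
    grad_mag = np.sqrt(tx ** 2 + ty ** 2)
    D = 1.0 / grad_mag                                  # local front normal speed

    nx, ny = tx / grad_mag, ty / grad_mag               # unit normal to the front
    kappa = np.gradient(nx, dx)[0] + np.gradient(ny, dx)[1]   # divergence of normal

    # For this synthetic case kappa should approach 1/R away from the source:
    i, j = 400, 250
    print("D ~ %.2f mm/us, kappa ~ %.4f 1/mm, 1/R = %.4f 1/mm"
          % (D[i, j], kappa[i, j], 1.0 / R[i, j]))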

  18. Critical manifold of the kagome-lattice Potts model

    International Nuclear Information System (INIS)

    Jacobsen, Jesper Lykke; Scullard, Christian R

    2012-01-01

    Any two-dimensional infinite regular lattice G can be produced by tiling the plane with a finite subgraph B ⊆ G; we call B a basis of G. We introduce a two-parameter graph polynomial P_B(q, v) that depends on B and its embedding in G. The algebraic curve P_B(q, v) = 0 is shown to provide an approximation to the critical manifold of the q-state Potts model, with coupling v = e^K − 1, defined on G. This curve predicts the phase diagram not only in the physical ferromagnetic regime (v > 0), but also in the antiferromagnetic regime (v < 0). We conjecture that P_B(q, v) = 0 provides the exact critical manifold in the limit of infinite B. Furthermore, for some lattices G—or for the Ising model (q = 2) on any G—the polynomial P_B(q, v) factorizes for any choice of B: the zero set of the recurrent factor then provides the exact critical manifold. In this sense, the computation of P_B(q, v) can be used to detect exact solvability of the Potts model on G. We illustrate the method for two choices of G: the square lattice, where the Potts model has been exactly solved, and the kagome lattice, where it has not. For the square lattice we correctly reproduce the known phase diagram, including the antiferromagnetic transition and the singularities in the Berker–Kadanoff phase at certain Beraha numbers. For the kagome lattice, taking the smallest basis with six edges we recover a well-known (but now refuted) conjecture of F Y Wu. Larger bases provide successive improvements on this formula, giving a natural extension of Wu’s approach. We perform large-scale numerical computations for comparison and find excellent agreement with the polynomial predictions. For v > 0 the accuracy of the predicted critical coupling v_c is of the order 10^−4 or 10^−5 for the six-edge basis, and improves to 10^−6 or 10^−7 for the largest basis studied (with 36 edges). This article is part of ‘Lattice models and integrability’, a special issue of Journal of Physics A: Mathematical and Theoretical in honour of

  19. Saturated properties prediction in critical region by a quartic equation of state

    Directory of Open Access Journals (Sweden)

    Yong Wang

    2011-08-01

    Full Text Available A diverse substance library containing extensive PVT data for 77 pure components was used to critically evaluate the performance of a quartic equation of state and four other well-known cubic equations of state in the critical region. The quartic EOS studied in this work was found to be significantly superior to the others in both vapor pressure prediction and saturated volume prediction in the vicinity of the critical point.

  20. A Dynamic Hydrology-Critical Zone Framework for Rainfall-triggered Landslide Hazard Prediction

    Science.gov (United States)

    Dialynas, Y. G.; Foufoula-Georgiou, E.; Dietrich, W. E.; Bras, R. L.

    2017-12-01

    Watershed-scale coupled hydrologic-stability models are still in their early stages, and are characterized by important limitations: (a) either they assume steady-state or quasi-dynamic watershed hydrology, or (b) they simulate landslide occurrence based on a simple one-dimensional stability criterion. Here we develop a three-dimensional landslide prediction framework, based on a coupled hydrologic-slope stability model and incorporation of the influence of deep critical zone processes (i.e., flow through weathered bedrock and exfiltration to the colluvium) for more accurate prediction of the timing, location, and extent of landslides. Specifically, a watershed-scale slope stability model that systematically accounts for the contribution of driving and resisting forces in three-dimensional hillslope segments was coupled with a spatially-explicit and physically-based hydrologic model. The landslide prediction framework considers critical zone processes and structure, and explicitly accounts for the spatial heterogeneity of surface and subsurface properties that control slope stability, including soil and weathered bedrock hydrological and mechanical characteristics, vegetation, and slope morphology. To test performance, the model was applied in landslide-prone sites in the US, the hydrology of which has been extensively studied. Results showed that both rainfall infiltration in the soil and groundwater exfiltration exert a strong control on the timing and magnitude of landslide occurrence. We demonstrate the extent to which three-dimensional slope destabilizing factors, which are modulated by dynamic hydrologic conditions in the soil-bedrock column, control landslide initiation at the watershed scale.

  1. Prediction of chronic critical illness in a general intensive care unit

    Directory of Open Access Journals (Sweden)

    Sérgio H. Loss

    2013-06-01

    Full Text Available OBJECTIVE: To assess the incidence, costs, and mortality associated with chronic critical illness (CCI), and to identify clinical predictors of CCI in a general intensive care unit. METHODS: This was a prospective observational cohort study. All patients receiving supportive treatment for over 20 days were considered chronically critically ill and eligible for the study. After applying the exclusion criteria, 453 patients were analyzed. RESULTS: There was an 11% incidence of CCI. Total length of hospital stay, costs, and mortality were significantly higher among patients with CCI. Mechanical ventilation, sepsis, Glasgow score < 15, inadequate calorie intake, and higher body mass index were independent predictors of CCI in the multivariate logistic regression model. CONCLUSIONS: CCI affects a distinctive population in intensive care units with higher mortality, costs, and prolonged hospitalization. Factors identifiable at the time of admission or during the first week in the intensive care unit can be used to predict CCI.

  2. A critical review of lexical analysis and Big Five model

    Directory of Open Access Journals (Sweden)

    María Cristina Richaud de Minzi

    2002-06-01

    Full Text Available In recent years the idea has resurfaced that traits can be measured in a reliable and valid manner and that this can be useful in the prediction of human behavior. The five-factor model appears to represent a conceptual and empirical advance in the field of personality theory. The number of orthogonal factors necessary (Goldberg, 1992, p. 26) to show the relationships between the trait descriptors in English is five, and their nature can be summarized through the broad concepts of Surgency, Agreeableness, Responsibility, Emotional Stability versus Neuroticism, and Openness to Experience (John, 1990, p. 96). Furthermore, despite the criticisms that have been directed at the model, it represents a breakthrough in the field of personality assessment. This approach is a contribution to the study of personality, without being the integrative model of personality.

  3. Teaching for Art Criticism: Incorporating Feldman's Critical Analysis Learning Model in Students' Studio Practice

    Science.gov (United States)

    Subramaniam, Maithreyi; Hanafi, Jaffri; Putih, Abu Talib

    2016-01-01

    This study adopted 30 first year graphic design students' artwork, with critical analysis using Feldman's model of art criticism. Data were analyzed quantitatively; descriptive statistical techniques were employed. The scores were viewed in the form of mean score and frequencies to determine students' performances in their critical ability.…

  4. Homogeneous non-equilibrium two-phase critical flow model

    International Nuclear Information System (INIS)

    Schroeder, J.J.; Vuxuan, N.

    1987-01-01

    An important aspect of nuclear and chemical reactor safety is the ability to predict the maximum, or critical, mass flow rate from a break or leak in a pipe system. At the beginning of such a blowdown, if the stagnation condition of the fluid is subcooled or slightly saturated, thermodynamic non-equilibrium exists downstream, e.g. the fluid becomes superheated to a degree determined by the liquid pressure. A simplified non-equilibrium model, explained in this report, is valid for rapidly decreasing pressure along the flow path. It presumes that the fluid has to be superheated by an amount governed by physical principles before it starts to flash into steam. The flow is assumed to be homogeneous, i.e. the steam and liquid velocities are equal. An adiabatic flow calculation mode (Fanno lines) is employed to evaluate the critical flow rate for long pipes. The model is found to describe critical flow tests satisfactorily. Good agreement is obtained with the large-scale Marviken tests as well as with small-scale experiments. (orig.)

  5. A theoretical prediction of critical heat flux in saturated pool boiling during power transients

    International Nuclear Information System (INIS)

    Pasamehmetoglu, K.O.; Nelson, R.A.; Gunnerson, F.S.

    1987-01-01

    Understanding and predicting critical heat flux (CHF) behavior during steady-state and transient conditions is of fundamental interest in the design, operation, and safety of boiling and two-phase flow devices. Presented within this paper are the results of a comprehensive theoretical study specifically conducted to model transient CHF behavior in saturated pool boiling. Thermal energy conduction within a heating element and its influence on the CHF are also discussed. The resultant theory provides new insight into the basic physics of the CHF phenomenon and indicates favorable agreement with the experimental data from cylindrical heaters with small radii. However, the flat-ribbon heater data compared poorly with the present theory, although the general trend was predicted. Finally, various factors that affect the discrepancy between the data and the theory are listed.

  6. Teaching For Art Criticism: Incorporating Feldman’s Critical Analysis Learning Model In Students’ Studio Practice

    OpenAIRE

    Maithreyi Subramaniam; Jaffri Hanafi; Abu Talib Putih

    2016-01-01

    This study adopted 30 first year graphic design students’ artwork, with critical analysis using Feldman’s model of art criticism. Data were analyzed quantitatively; descriptive statistical techniques were employed. The scores were viewed in the form of mean score and frequencies to determine students’ performances in their critical ability. Pearson Correlation Coefficient was used to find out the correlation between students’ studio practice and art critical ability scores. The...

  7. Factors predicting labor induction success: a critical analysis.

    Science.gov (United States)

    Crane, Joan M G

    2006-09-01

    Because of the risk of failed induction of labor, a variety of maternal and fetal factors as well as screening tests have been suggested to predict labor induction success. Certain characteristics of the woman (including parity, age, weight, height and body mass index) and of the fetus (including birth weight and gestational age) are associated with the success of labor induction, with parous, young women who are taller and weigh less having a higher rate of induction success. Fetuses with a lower birth weight or greater gestational age are also associated with increased induction success. The condition of the cervix at the start of induction is an important predictor, with the modified Bishop score being a widely used scoring system. The most important element of the Bishop score is dilatation. Other predictors, including transvaginal ultrasound (TVUS) and biochemical markers [including fetal fibronectin (fFN)], have been suggested. Meta-analyses of studies identified from MEDLINE, PubMed, and EMBASE and published from 1990 to October 2005 were performed evaluating the use of TVUS and fFN in predicting labor induction success in women at term with singleton gestations. Both TVUS and Bishop score predicted successful induction [likelihood ratio (LR)=1.82, 95% confidence interval (CI)=1.51-2.20 and LR=2.10, 95%CI=1.67-2.64, respectively]. Likewise, fFN and Bishop score predicted successful induction (LR=1.49, 95%CI=1.20-1.85, and LR=2.62, 95%CI=1.88-3.64, respectively). Although TVUS and fFN predicted successful labor induction, neither has been shown to be superior to the Bishop score. Further research is needed to evaluate these potential predictors and insulin-like growth factor binding protein-1 (IGFBP-1), another potential biochemical marker.
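
    The likelihood ratios quoted above translate into updated probabilities of successful induction via Bayes' rule on the odds scale. The short sketch below works through that arithmetic; the 70% pretest probability is an arbitrary illustration, not a figure from the study.

    # post-test odds = pretest odds * LR; LRs taken from the meta-analysis above.
    def post_test_probability(pretest_prob, likelihood_ratio):
        pretest_odds = pretest_prob / (1.0 - pretest_prob)
        post_odds = pretest_odds * likelihood_ratio
        return post_odds / (1.0 + post_odds)

    pretest = 0.70                          # assumed baseline induction success rate
    for name, lr in [("TVUS", 1.82), ("Bishop score", 2.10), ("fFN", 1.49)]:
        print("%-12s LR=%.2f -> post-test probability %.2f"
              % (name, lr, post_test_probability(pretest, lr)))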

  8. Distributed model predictive control made easy

    CERN Document Server

    Negenborn, Rudy

    2014-01-01

    The rapid evolution of computer science, communication, and information technology has enabled the application of control techniques to systems beyond the possibilities of control theory just a decade ago. Critical infrastructures such as electricity, water, traffic and intermodal transport networks are now in the scope of control engineers. The sheer size of such large-scale systems requires the adoption of advanced distributed control approaches. Distributed model predictive control (MPC) is one of the promising control methodologies for control of such systems.   This book provides a state-of-the-art overview of distributed MPC approaches, while at the same time making clear directions of research that deserve more attention. The core and rationale of 35 approaches are carefully explained. Moreover, detailed step-by-step algorithmic descriptions of each approach are provided. These features make the book a comprehensive guide both for those seeking an introduction to distributed MPC as well as for those ...

  9. Prediction of safety critical software operational reliability from test reliability using testing environment factors

    International Nuclear Information System (INIS)

    Jung, Hoan Sung; Seong, Poong Hyun

    1999-01-01

    It has been a critical issue to predict the reliability of safety-critical software in the nuclear engineering area. For many years, much research has focused on the quantification of software reliability, and many models have been developed to quantify it. Most software reliability models estimate the reliability with the failure data collected during testing, assuming that the test environments represent the operational profile well. The user's interest, however, is in the operational reliability rather than the test reliability. Experience shows that the operational reliability is higher than the test reliability. With the assumption that the difference in reliability results from the change of environment from testing to operation, testing environment factors comprising the aging factor and the coverage factor are developed in this paper and used to predict the ultimate operational reliability from the failure data in the testing phase. This is done by incorporating test environments applied beyond the operational profile into the testing environment factors. The application results show that the proposed method can estimate the operational reliability accurately. (Author). 14 refs., 1 tab., 1 fig

  10. Prediction of Critical Heat Flux under Rolling Motion

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Jinseok; Lee, Yeongun; Park, Gooncherl [Seoul National Univ., Seoul (Korea, Republic of)

    2013-05-15

    The aims of this paper may be summarized as follows: identify the flow regime by comparing existing void-quality relationships with the void fraction at OAF derived from the vapor superficial velocity obtained by the churn-to-annular flow criterion, and develop and evaluate a correlation for accurate prediction of the CHF ratio under rolling motion. Experimentally measured CHF results from the previous study were not well predicted by existing CHF correlations developed for a wide range of pressures under rolling motion in a vertical tube. Specifically, existing correlations do not account for dynamic motion parameters, such as tangential and centrifugal forces. This study reviewed existing correlations and experimental studies related to the reduction and enhancement of CHF, heat transfer, and flow behavior under heaving and rolling motion, and developed a CHF ratio correlation for upward flow in a vertical tube under rolling motion. Based upon dimensionless groups, equations and an interpolation factor, an empirical CHF correlation has been developed which is consistent with experimental data for uniformly heated tubes internally cooled by R-134 under rolling motion. The flow regime was determined through the prediction method for annular flow. The non-dimensional numbers and functions were decided by the CHF mechanism of each region. The interaction of the LFD and DNB regions is taken into account by means of a power interpolation which reflects the void fraction at OAF. The suggested correlation predicted the CHF ratio with reasonable accuracy, showing an average error of -0.59 and an RMS error of 2.51%. Rolling motion can affect bubble motion and liquid film behavior in a more complex way than heaving motion, through the combination of tangential and centrifugal forces and mass flow. Through a search of the literature and a comparison of previous CHF ratio results, this work can contribute to the study of boiling heat transfer and CHF for the purpose of enhancing or reducing the CHF of dynamic motion systems, such as marine reactors.

  11. System for prediction and determination of the sub critic multiplication

    International Nuclear Information System (INIS)

    Martinez, Aquilino S.; Pereira, Valmir; Silva, Fernando C. da

    1997-01-01

    A concept is presented for a system which may be used to calculate and anticipate the subcritical multiplication of a PWR nuclear power plant. The system is divided into two different modules. The first module allows the theoretical prediction of the subcritical multiplication factor through the solution of the multigroup diffusion equation. The second module determines this factor based on the data acquired from the neutron detectors of an NPP external nuclear detection system. (author). 3 refs., 3 figs., 2 tabs
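
    For context, the quantity such a system predicts and measures is commonly related to the effective multiplication factor by M = 1/(1 − k_eff). The generic sketch below shows only this textbook relation; it is not the paper's multigroup diffusion or detector-processing module.

    # Subcritical multiplication: a constant external source in a subcritical
    # core with multiplication factor k_eff is amplified by M = 1 / (1 - k_eff).
    def subcritical_multiplication(k_eff):
        if not 0.0 <= k_eff < 1.0:
            raise ValueError("k_eff must be in [0, 1) for a subcritical system")
        return 1.0 / (1.0 - k_eff)

    for k in (0.90, 0.95, 0.99, 0.999):
        print("k_eff = %.3f  ->  M = %7.1f" % (k, subcritical_multiplication(k)))
    # As k_eff approaches 1, M diverges, which is why an inverse count rate (1/M)
    # extrapolating toward zero is used to anticipate the approach to criticality.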

  12. Critical thinking in clinical nurse education: application of Paul's model of critical thinking.

    Science.gov (United States)

    Andrea Sullivan, E

    2012-11-01

    Nurse educators recognize that many nursing students have difficulty in making decisions in clinical practice. The ability to make effective, informed decisions in clinical practice requires that nursing students know and apply the processes of critical thinking. Critical thinking is a skill that develops over time and requires the conscious application of this process. There are a number of models in the nursing literature to assist students in the critical thinking process; however, these models tend to focus solely on decision making in hospital settings and are often complex to actualize. In this paper, Paul's Model of Critical Thinking is examined for its application to nursing education. I will demonstrate how the model can be used by clinical nurse educators to assist students to develop critical thinking skills in all health care settings in a way that makes critical thinking skills accessible to students. Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. Model Prediction Control For Water Management Using Adaptive Prediction Accuracy

    NARCIS (Netherlands)

    Tian, X.; Negenborn, R.R.; Van Overloop, P.J.A.T.M.; Mostert, E.

    2014-01-01

    In the field of operational water management, Model Predictive Control (MPC) has gained popularity owing to its versatility and flexibility. The MPC controller, which takes predictions, time delay and uncertainties into account, can be designed for multi-objective management problems and for

  14. Using a Prediction Model to Manage Cyber Security Threats

    Directory of Open Access Journals (Sweden)

    Venkatesh Jaganathan

    2015-01-01

    Full Text Available Cyber-attacks are an important issue faced by all organizations. Securing information systems is critical. Organizations should be able to understand the ecosystem and predict attacks. Predicting attacks quantitatively should be part of risk management. The cost impact due to worms, viruses, or other malicious software is significant. This paper proposes a mathematical model to predict the impact of an attack based on significant factors that influence cyber security. This model also considers the environmental information required. It is generalized and can be customized to the needs of the individual organization.

  15. Using a Prediction Model to Manage Cyber Security Threats.

    Science.gov (United States)

    Jaganathan, Venkatesh; Cherurveettil, Priyesh; Muthu Sivashanmugam, Premapriya

    2015-01-01

    Cyber-attacks are an important issue faced by all organizations. Securing information systems is critical. Organizations should be able to understand the ecosystem and predict attacks. Predicting attacks quantitatively should be part of risk management. The cost impact due to worms, viruses, or other malicious software is significant. This paper proposes a mathematical model to predict the impact of an attack based on significant factors that influence cyber security. This model also considers the environmental information required. It is generalized and can be customized to the needs of the individual organization.

  16. Using a Prediction Model to Manage Cyber Security Threats

    Science.gov (United States)

    Muthu Sivashanmugam, Premapriya

    2015-01-01

    Cyber-attacks are an important issue faced by all organizations. Securing information systems is critical. Organizations should be able to understand the ecosystem and predict attacks. Predicting attacks quantitatively should be part of risk management. The cost impact due to worms, viruses, or other malicious software is significant. This paper proposes a mathematical model to predict the impact of an attack based on significant factors that influence cyber security. This model also considers the environmental information required. It is generalized and can be customized to the needs of the individual organization. PMID:26065024

  17. Simple Model for Identifying Critical Regions in Atrial Fibrillation

    Science.gov (United States)

    Christensen, Kim; Manani, Kishan A.; Peters, Nicholas S.

    2015-01-01

    Atrial fibrillation (AF) is the most common abnormal heart rhythm and the single biggest cause of stroke. Ablation, destroying regions of the atria, is applied largely empirically and can be curative but with a disappointing clinical success rate. We design a simple model of activation wave front propagation on an anisotropic structure mimicking the branching network of heart muscle cells. This integration of phenomenological dynamics and pertinent structure shows how AF emerges spontaneously when the transverse cell-to-cell coupling decreases, as occurs with age, beyond a threshold value. We identify critical regions responsible for the initiation and maintenance of AF, the ablation of which terminates AF. The simplicity of the model allows us to calculate analytically the risk of arrhythmia and express the threshold value of transversal cell-to-cell coupling as a function of the model parameters. This threshold value decreases with increasing refractory period by reducing the number of critical regions which can initiate and sustain microreentrant circuits. These biologically testable predictions might inform ablation therapies and arrhythmic risk assessment.

  18. Pulsatile fluidic pump demonstration and predictive model application

    International Nuclear Information System (INIS)

    Morgan, J.G.; Holland, W.D.

    1986-04-01

    Pulsatile fluidic pumps were developed as a remotely controlled method of transferring or mixing feed solutions. A test in the Integrated Equipment Test facility demonstrated the performance of a critically safe geometry pump suitable for use in a 0.1-ton/d heavy metal (HM) fuel reprocessing plant. A predictive model was developed to calculate output flows under a wide range of external system conditions. Predictive and experimental flow rates are compared for both submerged and unsubmerged fluidic pump cases

  19. Iowa calibration of MEPDG performance prediction models.

    Science.gov (United States)

    2013-06-01

    This study aims to improve the accuracy of AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG) pavement performance predictions for Iowa pavement systems through local calibration of MEPDG prediction models. A total of 130 representative p...

  20. A theoretical prediction of critical heat flux in subcooled pool boiling during power transients

    International Nuclear Information System (INIS)

    Pasamehmetoglu, K.O.; Nelson, R.A.; Gunnerson, F.S.

    1988-01-01

    Understanding and predicting critical heat flux (CHF) behavior during steady-state and transient conditions are of fundamental interest in the design, operation, and safety of boiling and two-phase flow devices. This paper discusses the results of a comprehensive theoretical study made specifically to model transient CHF behavior in subcooled pool boiling. This study is based upon a simplified steady-state CHF model expressed in terms of the vapor mass growth period. The results obtained from this theory indicate favorable agreement with the experimental data from cylindrical heaters with small radii. The statistical nature of the vapor mass behavior in transient boiling is also considered, and upper and lower limits for the current theory are established. Various factors that affect the discrepancy between the data and the theory are discussed.

  1. Particle swarm optimization-based least squares support vector regression for critical heat flux prediction

    International Nuclear Information System (INIS)

    Jiang, B.T.; Zhao, F.Y.

    2013-01-01

    Highlights: ► CHF data are collected from the published literature. ► Less training data are used to train the LSSVR model. ► PSO is adopted to optimize the key parameters to improve the model precision. ► The reliability of LSSVR is proved through parametric trends analysis. - Abstract: In view of the practical importance of critical heat flux (CHF) for the design and safety of nuclear reactors, accurate prediction of CHF is of utmost significance. This paper presents a novel approach using least squares support vector regression (LSSVR) and particle swarm optimization (PSO) to predict CHF. Two available published datasets are used to train and test the proposed algorithm, in which PSO is employed to search for the best parameters involved in the LSSVR model. The CHF values obtained by the LSSVR model are compared with the corresponding experimental values and those of a previous method, the adaptive neuro-fuzzy inference system (ANFIS). This comparison is also carried out in the investigation of parametric trends of CHF. It is found that the proposed method can achieve the desired performance and yields a more satisfactory fit with experimental results than ANFIS. Therefore, the LSSVR method is likely to be suitable for processing other parameters besides CHF
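
    The idea of PSO searching the key hyperparameters of a kernel regressor, scored by cross-validation, can be sketched as below. KernelRidge with an RBF kernel is used as a readily available stand-in for LSSVR (the two are closely related), and the data are synthetic, not the published CHF datasets.

    # Basic global-best PSO over the (regularization, kernel width) parameters of
    # an RBF kernel regressor, with cross-validated R^2 as the fitness.
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, size=(200, 3))          # stand-in inputs (e.g. pressure, G, quality)
    y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 - X[:, 2] + rng.normal(0, 0.05, 200)

    def fitness(params):
        log_alpha, log_gamma = params
        model = KernelRidge(kernel="rbf", alpha=10.0 ** log_alpha, gamma=10.0 ** log_gamma)
        return cross_val_score(model, X, y, cv=5, scoring="r2").mean()

    n_particles, n_iter = 15, 30
    pos = rng.uniform([-6, -3], [1, 2], size=(n_particles, 2))    # (log10 alpha, log10 gamma)
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmax(pbest_val)].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, [-6, -3], [1, 2])
        vals = np.array([fitness(p) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmax(pbest_val)].copy()

    print("Best (log10 alpha, log10 gamma):", gbest, " CV R^2: %.3f" % fitness(gbest))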

  2. Model complexity control for hydrologic prediction

    NARCIS (Netherlands)

    Schoups, G.; Van de Giesen, N.C.; Savenije, H.H.G.

    2008-01-01

    A common concern in hydrologic modeling is overparameterization of complex models given limited and noisy data. This leads to problems of parameter nonuniqueness and equifinality, which may negatively affect prediction uncertainties. A systematic way of controlling model complexity is therefore

  3. Predictive maintenance of critical equipment in industrial processes

    Science.gov (United States)

    Hashemian, Hashem M.

    This dissertation is an account of present and past research and development (R&D) efforts conducted by the author to develop and implement new technology for predictive maintenance and equipment condition monitoring in industrial processes. In particular, this dissertation presents the design of an integrated condition-monitoring system that incorporates the results of three current R&D projects with a combined funding of $2.8 million awarded to the author by the U.S. Department of Energy (DOE). This system will improve the state of the art in equipment condition monitoring and has applications in numerous industries including chemical and petrochemical plants, aviation and aerospace, electric power production and distribution, and a variety of manufacturing processes. The work that is presented in this dissertation is unique in that it introduces a new class of condition-monitoring methods that depend predominantly on the normal output of existing process sensors. It also describes current R&D efforts to develop data acquisition systems and data analysis algorithms and software packages that use the output of these sensors to determine the condition and health of industrial processes and their equipment. For example, the output of a pressure sensor in an operating plant can be used not only to indicate the pressure, but also to verify the calibration and response time of the sensor itself and identify anomalies in the process such as blockages, voids, and leaks that can interfere with accurate measurement of process parameters or disturb the plant's operation, safety, or reliability. Today, process data are typically collected at a rate of one sample per second (1 Hz) or slower. If this sampling rate is increased to 100 samples per second or higher, much more information can be extracted from the normal output of a process sensor and then used for condition monitoring, equipment performance measurements, and predictive maintenance. A fast analog-to-digital (A

  4. Modeling the prediction of business intelligence system effectiveness.

    Science.gov (United States)

    Weng, Sung-Shun; Yang, Ming-Hsien; Koo, Tian-Lih; Hsiao, Pei-I

    2016-01-01

    Although business intelligence (BI) technologies are continually evolving, the capability to apply BI technologies has become an indispensable resource for enterprises running in today's complex, uncertain and dynamic business environment. This study performed pioneering work by constructing models and rules for the prediction of business intelligence system effectiveness (BISE) in relation to the implementation of BI solutions. For enterprises, effectively managing the critical attributes that determine BISE, and developing prediction models with a set of rules for self-evaluation of the effectiveness of BI solutions, is necessary to improve BI implementation and ensure its success. The main study findings identified the critical prediction indicators of BISE that are important to forecasting BI performance and highlighted five classification and prediction rules of BISE derived from decision tree structures, as well as a refined regression prediction model with four critical prediction indicators constructed by logistic regression analysis, which can enable enterprises to improve BISE while effectively managing BI solution implementation and catering to academics to whom theory is important.

  5. Prediction of Oil Critical Rate in Vertical Wells using Meyer-Gardner ...

    African Journals Online (AJOL)

    PROF HORSFALL

    2018-04-14

    Apr 14, 2018 ... Department of Petroleum and Gas Engineering, Faculty of Engineering, Delta State University, Abraka, Delta State, ..... impermeable barrier, extending radially from the ... useful aid to field engineers for predicting critical rate.

  6. Nonlinear chaotic model for predicting storm surges

    Directory of Open Access Journals (Sweden)

    M. Siek

    2010-09-01

    Full Text Available This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables. We implemented univariate and multivariate chaotic models with direct and multi-step prediction techniques and optimized these models using an exhaustive search method. The built models were tested for predicting storm surge dynamics for different stormy conditions in the North Sea, and are compared to neural network models. The results show that the chaotic models can generally provide reliable and accurate short-term storm surge predictions.
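
    A minimal sketch of the general approach described above (time-delay embedding followed by a local prediction from nearest dynamical neighbours) is shown below. The signal is a synthetic stand-in for a surge record, and the embedding parameters and neighbour count are arbitrary choices, not those optimized in the paper.

    # Delay embedding + nearest-neighbour (zeroth-order local model) prediction.
    import numpy as np

    rng = np.random.default_rng(1)
    t = np.arange(0, 200, 0.1)
    series = np.sin(0.7 * t) + 0.3 * np.sin(2.3 * t) + 0.05 * rng.normal(size=t.size)

    m, tau = 4, 5                                   # embedding dimension and delay (assumed)
    def embed(x, m, tau):
        n = len(x) - (m - 1) * tau
        return np.column_stack([x[i * tau: i * tau + n] for i in range(m)])

    train = series[:-1]                             # hold out the last value
    vectors_all = embed(train, m, tau)              # reconstructed phase-space points
    targets = train[(m - 1) * tau + 1:]             # value one time step after each point
    query = vectors_all[-1]                         # current state; its successor is series[-1]
    vectors = vectors_all[:-1]                      # states with a known successor in train

    dist = np.linalg.norm(vectors - query, axis=1)
    k = 10
    neighbours = np.argsort(dist)[:k]               # k nearest dynamical neighbours
    prediction = targets[neighbours].mean()         # local (zeroth-order) model

    print("predicted next value: %.3f, actual: %.3f" % (prediction, series[-1]))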

  7. Staying Power of Churn Prediction Models

    NARCIS (Netherlands)

    Risselada, Hans; Verhoef, Peter C.; Bijmolt, Tammo H. A.

    In this paper, we study the staying power of various churn prediction models. Staying power is defined as the predictive performance of a model in a number of periods after the estimation period. We examine two methods, logit models and classification trees, both with and without applying a bagging

  8. Predictive user modeling with actionable attributes

    NARCIS (Netherlands)

    Zliobaite, I.; Pechenizkiy, M.

    2013-01-01

    Different machine learning techniques have been proposed and used for modeling individual and group user needs, interests and preferences. In the traditional predictive modeling instances are described by observable variables, called attributes. The goal is to learn a model for predicting the target

  9. Teaching For Art Criticism: Incorporating Feldman’s Critical Analysis Learning Model In Students’ Studio Practice

    Directory of Open Access Journals (Sweden)

    Maithreyi Subramaniam

    2016-01-01

    Full Text Available This study adopted 30 first year graphic design students’ artwork, with critical analysis using Feldman’s model of art criticism. Data were analyzed quantitatively; descriptive statistical techniques were employed. The scores were viewed in the form of mean score and frequencies to determine students’ performances in their critical ability. Pearson Correlation Coefficient was used to find out the correlation between students’ studio practice and art critical ability scores. The findings showed most students performed slightly better than average in the critical analyses and performed best in selecting analysis among the four dimensions assessed. In the context of the students’ studio practice and critical ability, findings showed there are some connections between the students’ art critical ability and studio practice.

  10. EFFICIENT PREDICTIVE MODELLING FOR ARCHAEOLOGICAL RESEARCH

    OpenAIRE

    Balla, A.; Pavlogeorgatos, G.; Tsiafakis, D.; Pavlidis, G.

    2014-01-01

    The study presents a general methodology for designing, developing and implementing predictive modelling for identifying areas of archaeological interest. The methodology is based on documented archaeological data and geographical factors, geospatial analysis and predictive modelling, and has been applied to the identification of possible Macedonian tombs’ locations in Northern Greece. The model was tested extensively and the results were validated using a commonly used predictive gain, which...

  11. A quenched c = 1 critical matrix model

    International Nuclear Information System (INIS)

    Qiu, Zongan; Rey, Soo-Jong.

    1990-12-01

    We study a variant of the Penner-Distler-Vafa model, proposed as a c = 1 quantum gravity: 'quenched' matrix model with logarithmic potential. The model is exactly soluble, and exhibits a two-cut branching as observed in multicritical unitary matrix models and multicut Hermitian matrix models. Using analytic continuation of the power in the conventional polynomial potential, we also show that both the Penner-Distler-Vafa model and our 'quenched' matrix model satisfy Virasoro algebra constraints

  12. Robust predictions of the interacting boson model

    International Nuclear Information System (INIS)

    Casten, R.F.; Koeln Univ.

    1994-01-01

    While most recognized for its symmetries and algebraic structure, the IBA model has other less-well-known but equally intrinsic properties which give unavoidable, parameter-free predictions. These predictions concern central aspects of low-energy nuclear collective structure. This paper outlines these ''robust'' predictions and compares them with the data

  13. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...
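
    As a minimal scalar illustration of the one-step prediction errors (innovations) that such criteria are built on, the sketch below runs a Kalman predictor for a simple linear state space model. The model, parameter values and simulated data are arbitrary assumptions; this is not the multi-step comparison carried out in the paper.

    # One-step Kalman predictor for x[k+1] = a x[k] + w[k], y[k] = x[k] + v[k],
    # collecting the prediction errors that a prediction-error criterion would
    # minimize over the model parameters.
    import numpy as np

    rng = np.random.default_rng(0)
    a, q, r = 0.9, 0.1, 0.5          # state transition, process and measurement noise variances
    n = 300

    x = np.zeros(n); y = np.zeros(n)
    for k in range(1, n):            # simulate the system
        x[k] = a * x[k - 1] + rng.normal(0, np.sqrt(q))
        y[k] = x[k] + rng.normal(0, np.sqrt(r))

    x_pred, P = 0.0, 1.0             # prior mean and variance
    innovations = []
    for k in range(n):
        e = y[k] - x_pred            # one-step prediction error (innovation)
        innovations.append(e)
        S = P + r                    # innovation variance
        K = P / S                    # Kalman gain
        x_filt = x_pred + K * e      # measurement update
        P_filt = (1.0 - K) * P
        x_pred = a * x_filt          # time update (prediction for k+1)
        P = a * a * P_filt + q

    innovations = np.array(innovations)
    print("sum of squared one-step prediction errors:", float(np.sum(innovations ** 2)))
    # A least-squares criterion minimizes this sum; a maximum-likelihood
    # criterion additionally accounts for the innovation variances S.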

  14. Evaluations of the CCFL and critical flow models in TRACE for PWR LBLOCA analysis

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Jung-Hua; Lin, Hao Tzu [National Tsing Hua Univ., HsinChu, Taiwan (China). Dept. of Engineering and System Science; Wang, Jong-Rong [Atomic Energy Council, Taoyuan County, Taiwan (China). Inst. of Nuclear Energy Research; Shih, Chunkuan [National Tsing Hua Univ., HsinChu, Taiwan (China). Inst. of Nuclear Engineering and Science

    2012-12-15

    This study aims to develop a Maanshan Pressurized Water Reactor (PWR) analysis model using the TRACE (TRAC/RELAP Advanced Computational Engine) code. By analyzing the Large Break Loss of Coolant Accident (LBLOCA) sequence, the results are compared with the Maanshan Final Safety Analysis Report (FSAR) data. The critical flow and Counter-Current Flow Limitation (CCFL) models play an important role in the overall performance of the TRACE LBLOCA prediction. Therefore, a sensitivity study on the discharge coefficients of the critical flow model and on CCFL modeling in different regions is also discussed. The current conclusions show that modeling CCFL in the downcomer has a more significant impact on the peak cladding temperature than modeling CCFL in the hot legs does. No CCFL phenomena occurred in the pressurizer surge line. The best value for the multipliers of the critical flow model would be 0.5, with which TRACE could consistently predict the break flow rate in the LBLOCA analysis as shown in the FSAR. (orig.)

  15. A Critical Analysis and Validation of the Accuracy of Wave Overtopping Prediction Formulae for OWECs

    Directory of Open Access Journals (Sweden)

    David Gallach-Sánchez

    2018-01-01

    Full Text Available The development of wave energy devices has grown in recent years. One type of device is the overtopping wave energy converter (OWEC), for which knowledge of the wave overtopping rates is a basic and crucial aspect of the design. In particular, the most interesting range to study is OWECs with steep slopes up to vertical walls and with very small or zero freeboards, where the overtopping rate is maximized; these can be generalized as steep low-crested structures. Recently, wave overtopping prediction formulae have been published for this type of structure, although their accuracy has not been fully assessed, as the overtopping data available in this range are scarce. We performed a critical analysis of the overtopping prediction formulae for steep low-crested structures and a validation of the accuracy of these formulae, based on new overtopping data for steep low-crested structures obtained at Ghent University. This paper summarizes the existing knowledge about average wave overtopping, describes the physical model tests performed, analyses the results and compares them to existing prediction formulae. The new dataset extends the wave overtopping data towards vertical walls and zero freeboard structures. In general, the new dataset validated the more recent overtopping formulae focused on steep slopes with small freeboards, although the formulae underpredict the average overtopping rates for very small and zero relative crest freeboards.
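
    Average overtopping formulae of the kind discussed above generally take an exponential form in the relative crest freeboard, q / sqrt(g·Hm0³) = a·exp(−b·Rc/Hm0). The sketch below shows only the shape of that relation; the coefficients a and b are placeholders, not the calibrated values from this study or from any particular published formula.

    # Generic exponential overtopping relation; a and b are illustrative only.
    import math

    def mean_overtopping_rate(Hm0, Rc, a=0.05, b=2.8, g=9.81):
        """Mean overtopping discharge q in m^3/s per m of crest (illustrative)."""
        q_star = a * math.exp(-b * Rc / Hm0)          # dimensionless discharge
        return q_star * math.sqrt(g * Hm0 ** 3)

    Hm0 = 2.0                                         # significant wave height, m (assumed)
    for Rc in (0.0, 0.5, 1.0, 2.0):                   # crest freeboard, m (0 = zero freeboard)
        print("Rc/Hm0 = %.2f  ->  q = %.3f m^3/s per m"
              % (Rc / Hm0, mean_overtopping_rate(Hm0, Rc)))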

  16. Outcome evaluation of a new model of critical care orientation.

    Science.gov (United States)

    Morris, Linda L; Pfeifer, Pamela; Catalano, Rene; Fortney, Robert; Nelson, Greta; Rabito, Robb; Harap, Rebecca

    2009-05-01

    The shortage of critical care nurses and the service expansion of 2 intensive care units provided a unique opportunity to create a new model of critical care orientation. The goal was to design a program that assessed critical thinking, validated competence, and provided learning pathways that accommodated diverse experience. To determine the effect of a new model of critical care orientation on satisfaction, retention, turnover, vacancy, preparedness to manage patient care assignment, length of orientation, and cost of orientation. A prospective, quasi-experimental design with both quantitative and qualitative methods. The new model improved satisfaction scores, retention rates, and recruitment of critical care nurses. Length of orientation was unchanged. Cost was increased, primarily because a full-time education consultant was added. A new model for nurse orientation that was focused on critical thinking and competence validation improved retention and satisfaction and serves as a template for orientation of nurses throughout the medical center.

  17. Questioning the Faith - Models and Prediction in Stream Restoration (Invited)

    Science.gov (United States)

    Wilcock, P.

    2013-12-01

    River management and restoration demand prediction at and beyond our present ability. Management questions, framed appropriately, can motivate fundamental advances in science, although the connection between research and application is not always easy, useful, or robust. Why is that? This presentation considers the connection between models and management, a connection that requires critical and creative thought on both sides. Essential challenges for managers include clearly defining project objectives and accommodating uncertainty in any model prediction. Essential challenges for the research community include matching the appropriate model to project duration, space, funding, information, and social constraints and clearly presenting answers that are actually useful to managers. Better models do not lead to better management decisions or better designs if the predictions are not relevant to and accepted by managers. In fact, any prediction may be irrelevant if the need for prediction is not recognized. The predictive target must be developed in an active dialog between managers and modelers. This relationship, like any other, can take time to develop. For example, large segments of stream restoration practice have remained resistant to models and prediction because the foundational tenet - that channels built to a certain template will be able to transport the supplied sediment with the available flow - has no essential physical connection between cause and effect. Stream restoration practice can be steered in a predictive direction in which project objectives are defined as predictable attributes and testable hypotheses. If stream restoration design is defined in terms of the desired performance of the channel (static or dynamic, sediment surplus or deficit), then channel properties that provide these attributes can be predicted and a basis exists for testing approximations, models, and predictions.

  18. Adverse Condition and Critical Event Prediction in Cranfield Multiphase Flow Facility

    DEFF Research Database (Denmark)

    Egedorf, Søren; Shaker, Hamid Reza

    2017-01-01

    , or even to the environment. To cope with these, adverse condition and critical event prediction plays an important role. Adverse Condition and Critical Event Prediction Toolbox (ACCEPT) is a tool which has been recently developed by NASA to allow for a timely prediction of an adverse event, with low false...... alarm and missed detection rates. While ACCEPT has shown to be an effective tool in some applications, its performance has not yet been evaluated on practical well-known benchmark examples. In this paper, ACCEPT is used for adverse condition and critical event prediction in a multiphase flow facility....... Cranfield multiphase flow facility is known to be an interesting benchmark which has been used to evaluate different methods from statistical process monitoring. In order to allow for the data from the flow facility to be used in ACCEPT, methods such as Kernel Density Estimation (KDE), PCA-and CVA...

  19. Extracting falsifiable predictions from sloppy models.

    Science.gov (United States)

    Gutenkunst, Ryan N; Casey, Fergal P; Waterfall, Joshua J; Myers, Christopher R; Sethna, James P

    2007-12-01

    Successful predictions are among the most compelling validations of any model. Extracting falsifiable predictions from nonlinear multiparameter models is complicated by the fact that such models are commonly sloppy, possessing sensitivities to different parameter combinations that range over many decades. Here we discuss how sloppiness affects the sorts of data that best constrain model predictions, makes linear uncertainty approximations dangerous, and introduces computational difficulties in Monte-Carlo uncertainty analysis. We also present a useful test problem and suggest refinements to the standards by which models are communicated.

  20. The prediction of epidemics through mathematical modeling.

    Science.gov (United States)

    Schaus, Catherine

    2014-01-01

    Mathematical models may be used in an endeavor to predict the development of epidemics. The SIR model is one such application. Still too approximate, the use of statistics awaits more data in order to come closer to reality.
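
    The SIR model mentioned above, in its simplest compartmental form, can be sketched as follows. The transmission and recovery rates and the population size are illustrative assumptions, not values from the record.

    # Susceptible-Infected-Recovered (SIR) model integrated over 180 days.
    import numpy as np
    from scipy.integrate import solve_ivp

    beta, gamma = 0.3, 0.1            # per-day transmission and recovery rates (assumed)
    N = 1_000_000                     # population size (assumed)

    def sir(t, y):
        S, I, R = y
        dS = -beta * S * I / N
        dI = beta * S * I / N - gamma * I
        dR = gamma * I
        return [dS, dI, dR]

    sol = solve_ivp(sir, (0, 180), [N - 10, 10, 0], t_eval=np.linspace(0, 180, 181))
    peak_day = sol.t[np.argmax(sol.y[1])]
    print("Predicted epidemic peak around day %.0f with %.0f infected"
          % (peak_day, sol.y[1].max()))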

  1. Calibration of PMIS pavement performance prediction models.

    Science.gov (United States)

    2012-02-01

    Improve the accuracy of TxDOT's existing pavement performance prediction models through calibrating these models using actual field data obtained from the Pavement Management Information System (PMIS). Ensure logical performance superiority patte...

  2. Role of criticality models in ANSI standards for nuclear criticality safety

    International Nuclear Information System (INIS)

    Thomas, J.T.

    1976-01-01

    Two methods used in nuclear criticality safety evaluations in the area of neutron interaction among subcritical components of fissile materials are the solid angle and surface density techniques. The accuracy and use of these models are briefly discussed

  3. Evaluating Predictive Uncertainty of Hyporheic Exchange Modelling

    Science.gov (United States)

    Chow, R.; Bennett, J.; Dugge, J.; Wöhling, T.; Nowak, W.

    2017-12-01

    Hyporheic exchange is the interaction of water between rivers and groundwater, and is difficult to predict. One of the largest contributions to predictive uncertainty for hyporheic fluxes has been attributed to the representation of heterogeneous subsurface properties. This research aims to evaluate which aspect of the subsurface representation - the spatial distribution of hydrofacies or the model for local-scale (within-facies) heterogeneity - most influences the predictive uncertainty. Also, we seek to identify data types that best help reduce this uncertainty. For this investigation, we conduct a modelling study of the Steinlach River meander, in Southwest Germany. The Steinlach River meander is an experimental site established in 2010 to monitor hyporheic exchange at the meander scale. We use HydroGeoSphere, a fully integrated surface water-groundwater model, to model hyporheic exchange and to assess the predictive uncertainty of hyporheic exchange transit times (HETT). A highly parameterized complex model is built and treated as `virtual reality', which is in turn modelled with simpler subsurface parameterization schemes (Figure). Then, we conduct Monte-Carlo simulations with these models to estimate the predictive uncertainty. Results indicate that: uncertainty in HETT is relatively small for early times and increases with transit times; uncertainty from local-scale heterogeneity is negligible compared to uncertainty in the hydrofacies distribution; introducing more data to a poor model structure may reduce predictive variance, but does not reduce predictive bias; and hydraulic head observations alone cannot constrain the uncertainty of HETT, whereas an estimate of hyporheic exchange flux proves to be more effective at reducing this uncertainty. Figure: Approach for evaluating predictive model uncertainty. A conceptual model is first developed from the field investigations. A complex model (`virtual reality') is then developed based on that conceptual model

  4. Driver's mental workload prediction model based on physiological indices.

    Science.gov (United States)

    Yan, Shengyuan; Tran, Cong Chi; Wei, Yingying; Habiyaremye, Jean Luc

    2017-09-15

    Developing an early warning model to predict the driver's mental workload (MWL) is critical and helpful, especially for new or less experienced drivers. The present study aims to investigate the correlation between new drivers' MWL and their work performance, in terms of the number of errors. Additionally, the group method of data handling is used to establish a predictive model of the driver's MWL based on a subjective rating (the NASA task load index [NASA-TLX]) and six physiological indices. The results indicate that the NASA-TLX and the number of errors are positively correlated, and the predictive model shows the validity of the proposed approach with an R^2 value of 0.745. The proposed model is expected to provide new drivers with a reference value of their MWL from their physiological indices, and driving lesson plans can be proposed to sustain an appropriate MWL as well as improve the driver's work performance.

  5. Prediction of the critical heat flux for saturated upward flow boiling water in vertical narrow rectangular channels

    International Nuclear Information System (INIS)

    Choi, Gil Sik; Chang, Soon Heung; Jeong, Yong Hoon

    2016-01-01

    A study on a theoretical method to predict the critical heat flux (CHF) of saturated upward flow boiling water in vertical narrow rectangular channels has been conducted. For the assessment of this CHF prediction method, 608 experimental data points were selected from previous research, in which the heated sections were uniformly heated from both wide surfaces under high-pressure conditions above 41 bar. For this purpose, representative previous liquid film dryout (LFD) models for circular channels were reviewed by using 6058 points from the KAIST CHF data bank. This shows that it is reasonable to define the initial condition of quality and entrainment fraction at onset of annular flow (OAF) as the transition to the annular flow regime and the equilibrium value, respectively, and that the prediction error of an LFD model depends on the accuracy of the constitutive equations for droplet deposition and entrainment. In the modified Levy model, the CHF data are predicted with a standard deviation (SD) of 14.0% and a root mean square error (RMSE) of 14.1%. Meanwhile, in the present LFD model, which is based on the constitutive equations developed by Okawa et al., the entire dataset is calculated with an SD of 17.1% and an RMSE of 17.3%. Because of its qualitative prediction trend and universal calculation convergence, the present model was finally selected as the best LFD model to predict the CHF for narrow rectangular channels. For the assessment of the present LFD model for narrow rectangular channels, 284 effective data points were selected. Using the present LFD model, these data are predicted with an RMSE of 22.9% with the dryout criterion of zero liquid film flow, but an RMSE of 18.7% with the rivulet formation model. This shows that the prediction error of the present LFD model for narrow rectangular channels is similar to that for circular channels.

  6. Case studies in archaeological predictive modelling

    NARCIS (Netherlands)

    Verhagen, Jacobus Wilhelmus Hermanus Philippus

    2007-01-01

    In this thesis, a collection of papers is put together dealing with various quantitative aspects of predictive modelling and archaeological prospection. Among the issues covered are the effects of survey bias on the archaeological data used for predictive modelling, and the complexities of testing

  7. Clinical Prediction Models for Cardiovascular Disease: Tufts Predictive Analytics and Comparative Effectiveness Clinical Prediction Model Database.

    Science.gov (United States)

    Wessler, Benjamin S; Lai Yh, Lana; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S; Kent, David M

    2015-07-01

    Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease, there are numerous CPMs available although the extent of this literature is not well described. We conducted a systematic review for articles containing CPMs for cardiovascular disease published between January 1990 and May 2012. Cardiovascular disease includes coronary heart disease, heart failure, arrhythmias, stroke, venous thromboembolism, and peripheral vascular disease. We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year is increasing steadily over time. Seven hundred seventeen (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions, including 215 CPMs for patients with coronary artery disease, 168 CPMs for population samples, and 79 models for patients with heart failure. There are 77 distinct index/outcome pairings. Of the de novo models in this database, 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. There is an abundance of CPMs available for a wide assortment of cardiovascular disease conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models and the actual and potential clinical impact of this body of literature is poorly understood. © 2015 American Heart Association, Inc.

  8. Systems modeling and simulation applications for critical care medicine

    Science.gov (United States)

    2012-01-01

    Critical care delivery is a complex, expensive, error prone, medical specialty and remains the focal point of major improvement efforts in healthcare delivery. Various modeling and simulation techniques offer unique opportunities to better understand the interactions between clinical physiology and care delivery. The novel insights gained from the systems perspective can then be used to develop and test new treatment strategies and make critical care delivery more efficient and effective. However, modeling and simulation applications in critical care remain underutilized. This article provides an overview of major computer-based simulation techniques as applied to critical care medicine. We provide three application examples of different simulation techniques, including a) pathophysiological model of acute lung injury, b) process modeling of critical care delivery, and c) an agent-based model to study interaction between pathophysiology and healthcare delivery. Finally, we identify certain challenges to, and opportunities for, future research in the area. PMID:22703718

  9. Systems modeling and simulation applications for critical care medicine.

    Science.gov (United States)

    Dong, Yue; Chbat, Nicolas W; Gupta, Ashish; Hadzikadic, Mirsad; Gajic, Ognjen

    2012-06-15

    Critical care delivery is a complex, expensive, error prone, medical specialty and remains the focal point of major improvement efforts in healthcare delivery. Various modeling and simulation techniques offer unique opportunities to better understand the interactions between clinical physiology and care delivery. The novel insights gained from the systems perspective can then be used to develop and test new treatment strategies and make critical care delivery more efficient and effective. However, modeling and simulation applications in critical care remain underutilized. This article provides an overview of major computer-based simulation techniques as applied to critical care medicine. We provide three application examples of different simulation techniques, including a) pathophysiological model of acute lung injury, b) process modeling of critical care delivery, and c) an agent-based model to study interaction between pathophysiology and healthcare delivery. Finally, we identify certain challenges to, and opportunities for, future research in the area.

  10. Prediction of sodium critical heat flux (CHF) in annular channel using grey systems theory

    International Nuclear Information System (INIS)

    Zhou Tao; Su Guanghui; Zhang Weizhong; Qiu Suizheng; Jia Dounan

    2001-01-01

    Using grey systems theory and experimental data obtained from a sodium boiling test loop in China, a grey mutual analysis of the parameters influencing sodium CHF is carried out, and the CHF values are predicted with a GM(1, 1) model. A GM(1, h) model is also established for CHF prediction, and the predicted CHF values are in good agreement with the experimental data
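
    For reference, a minimal sketch of the standard GM(1, 1) grey prediction model is given below, using the usual accumulated-generating-operation and least-squares formulation; the short CHF-like series is invented for illustration and is not the experimental data of the study.

```python
import numpy as np

def gm11_forecast(x0, n_ahead=1):
    """Fit a GM(1,1) grey model to series x0 and forecast n_ahead further points."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                              # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                   # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    Y = x0[1:]
    (a, b), *_ = np.linalg.lstsq(B, Y, rcond=None)  # development coefficient a, grey input b
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.r_[x1_hat[0], np.diff(x1_hat)]      # restore to the original series
    return x0_hat[len(x0):]

# Invented CHF-like series, for illustration only
chf = [1.82, 1.75, 1.69, 1.66, 1.61]
print("Next predicted value:", gm11_forecast(chf, n_ahead=1))
```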

  11. Incorporating uncertainty in predictive species distribution modelling.

    Science.gov (United States)

    Beale, Colin M; Lennon, Jack J

    2012-01-19

    Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.

  12. Model Predictive Control for Smart Energy Systems

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus

    ... pumps, heat tanks, electrical vehicle battery charging/discharging, wind farms, power plants). 2. Embed forecasting methodologies for the weather (e.g. temperature, solar radiation), the electricity consumption, and the electricity price in a predictive control system. 3. Develop optimization algorithms. ... Chapter 3 introduces Model Predictive Control (MPC) including state estimation, filtering and prediction for linear models. Chapter 4 simulates the models from Chapter 2 with the certainty equivalent MPC from Chapter 3. An economic MPC minimizes the costs of consumption based on real electricity prices ... that determined the flexibility of the units. A predictive control system easily handles constraints, e.g. limitations in power consumption, and predicts the future behavior of a unit by integrating predictions of electricity prices, consumption, and weather variables. The simulations demonstrate the expected ...

  13. Evaluating the Predictive Value of Growth Prediction Models

    Science.gov (United States)

    Murphy, Daniel L.; Gaertner, Matthew N.

    2014-01-01

    This study evaluates four growth prediction models--projection, student growth percentile, trajectory, and transition table--commonly used to forecast (and give schools credit for) middle school students' future proficiency. Analyses focused on vertically scaled summative mathematics assessments, and two performance standards conditions (high…

  14. A Model for Critical Games Literacy

    Science.gov (United States)

    Apperley, Tom; Beavis, Catherine

    2013-01-01

    This article outlines a model for teaching both computer games and videogames in the classroom for teachers. The model illustrates the connections between in-game actions and youth gaming culture. The article explains how the out-of-school knowledge building, creation and collaboration that occurs in gaming and gaming culture has an impact on…

  15. Model for the resistive critical current transition in composite superconductors

    International Nuclear Information System (INIS)

    Warnes, W.H.

    1988-01-01

    Much of the research investigating technological type-II superconducting composites relies on the measurement of the resistive critical current transition. We have developed a model for the resistive transition which improves on older models by allowing for the very different nature of monofilamentary and multifilamentary composite structures. The monofilamentary model allows for axial current flow around critical current weak links in the superconducting filament. The multifilamentary model incorporates an additional radial current transfer between neighboring filaments. The development of both models is presented. It is shown that the models are useful for extracting more information from the experimental data than was formerly possible. Specific information obtainable from the experimental voltage-current characteristic includes the distribution of critical currents in the composite, the average critical current of the distribution, the range of critical currents in the composite, the field and temperature dependence of the distribution, and the fraction of the composite dissipating energy in flux flow at any current. This additional information about the distribution of critical currents may be helpful in leading toward a better understanding of flux pinning in technological superconductors. Comparison of the models with several experiments is given and shown to be in reasonable agreement. Implications of the models for the measurement of critical currents in technological composites is presented and discussed with reference to basic flux pinning studies in such composites
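
    One practical consequence of such models is that, in the simplest flux-flow picture, the distribution of filament critical currents can be recovered from the measured voltage-current characteristic, since the second derivative of voltage with respect to current is proportional to that distribution. The sketch below demonstrates this on synthetic, noise-free V-I data with a constant flux-flow resistance; real measurements would require smoothing before numerical differentiation, and the parameter values are invented.

```python
import numpy as np

# Synthetic V-I characteristic: filaments with normally distributed critical currents
rng = np.random.default_rng(0)
ic_true = rng.normal(loc=100.0, scale=5.0, size=2000)   # per-filament Ic [A], illustrative
r_ff = 1.0e-6                                            # flux-flow resistance per filament [ohm]

I = np.linspace(80, 120, 400)
# V(I) = r_ff * sum over filaments of (I - Ic) for Ic < I
V = r_ff * np.array([np.clip(i - ic_true, 0.0, None).sum() for i in I])

# In this simple picture d2V/dI2 is proportional to the Ic distribution
dV_dI = np.gradient(V, I)
d2V_dI2 = np.gradient(dV_dI, I)
f_ic = d2V_dI2 / np.trapz(d2V_dI2, I)                    # normalize to a probability density

print("Estimated mean Ic [A]:", np.trapz(I * f_ic, I))
```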

  16. Model predictive control classical, robust and stochastic

    CERN Document Server

    Kouvaritakis, Basil

    2016-01-01

    For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered and the state of the art in computationally tractable methods based on uncertainty tubes presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...

  17. Modeling, robust and distributed model predictive control for freeway networks

    NARCIS (Netherlands)

    Liu, S.

    2016-01-01

    In Model Predictive Control (MPC) for traffic networks, traffic models are crucial since they are used as prediction models for determining the optimal control actions. In order to reduce the computational complexity of MPC for traffic networks, macroscopic traffic models are often used instead of

  18. Deep Predictive Models in Interactive Music

    OpenAIRE

    Martin, Charles P.; Ellefsen, Kai Olav; Torresen, Jim

    2018-01-01

    Automatic music generation is a compelling task where much recent progress has been made with deep learning models. In this paper, we ask how these models can be integrated into interactive music systems; how can they encourage or enhance the music making of human users? Musical performance requires prediction to operate instruments, and perform in groups. We argue that predictive models could help interactive systems to understand their temporal context, and ensemble behaviour. Deep learning...

  19. Software reliability models for critical applications

    Energy Technology Data Exchange (ETDEWEB)

    Pham, H.; Pham, M.

    1991-12-01

    This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program is studying the existing software reliability models and proposes a state-of-the-art software reliability model that is relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault tolerant software reliability models and their related issues, (2) proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second phase. 407 refs., 4 figs., 2 tabs.

  20. Software reliability models for critical applications

    Energy Technology Data Exchange (ETDEWEB)

    Pham, H.; Pham, M.

    1991-12-01

    This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program is studying the existing software reliability models and proposes a state-of-the-art software reliability model that is relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault tolerant software reliability models and their related issues, (2) proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second phase. 407 refs., 4 figs., 2 tabs.

  1. Risk Prediction Models for Incident Heart Failure: A Systematic Review of Methodology and Model Performance.

    Science.gov (United States)

    Sahle, Berhe W; Owen, Alice J; Chin, Ken Lee; Reid, Christopher M

    2017-09-01

    Numerous models predicting the risk of incident heart failure (HF) have been developed; however, evidence of their methodological rigor and reporting remains unclear. This study critically appraises the methods underpinning incident HF risk prediction models. EMBASE and PubMed were searched for articles published between 1990 and June 2016 that reported at least 1 multivariable model for prediction of HF. Model development information, including study design, variable coding, missing data, and predictor selection, was extracted. Nineteen studies reporting 40 risk prediction models were included. Existing models have acceptable discriminative ability (C-statistics > 0.70), although only 6 models were externally validated. Candidate variable selection was based on statistical significance from a univariate screening in 11 models, whereas it was unclear in 12 models. Continuous predictors were retained in 16 models, whereas it was unclear how continuous variables were handled in 16 models. Missing values were excluded in 19 of 23 models that reported missing data, and the number of events per variable was below the recommended minimum in several models. Only 2 models presented recommended regression equations. There was significant heterogeneity in the discriminative ability of models with respect to age. This review identified prediction models with sufficient discriminative ability, although few have been externally validated. Methods not recommended for the conduct and reporting of risk prediction modeling were frequently used, and the resulting algorithms should be applied with caution. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Unreachable Setpoints in Model Predictive Control

    DEFF Research Database (Denmark)

    Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp

    2008-01-01

    In this work, a new model predictive controller is developed that handles unreachable setpoints better than traditional model predictive control methods. The new controller induces an interesting fast/slow asymmetry in the tracking response of the system. Nominal asymptotic stability of the optimal steady state is established for terminal constraint model predictive control (MPC). The region of attraction is the steerable set. Existing analysis methods for closed-loop properties of MPC are not applicable to this new formulation, and a new analysis method is developed. It is shown how to extend...

  3. Bayesian Predictive Models for Rayleigh Wind Speed

    DEFF Research Database (Denmark)

    Shahirinia, Amir; Hajizadeh, Amin; Yu, David C

    2017-01-01

    The Bayesian predictive model of the wind speed aggregates the non-homogeneous distributions into a single continuous distribution. Therefore, the result is able to capture the variation among the probability distributions of the wind speeds at the turbines’ locations in a wind farm. More specifically, instead of using a wind speed distribution whose parameters are known or estimated, the parameters are considered as random, with variations that follow probability distributions. A Bayesian predictive model for a Rayleigh distribution, which has only a single scale parameter, has been proposed. Closed-form posterior and predictive inferences under different reasonable choices of prior distribution have also been presented in a sensitivity analysis.
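
    As a sketch of how such a predictive distribution can be formed, the snippet below assumes an inverse-gamma conjugate prior on the squared Rayleigh scale parameter and approximates the posterior predictive by Monte Carlo; the prior hyperparameters and the sample wind-speed data are invented and are not taken from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Illustrative wind-speed observations [m/s] at one turbine location
wind = rng.rayleigh(scale=7.0, size=200)

# Conjugate inverse-gamma prior on theta = sigma^2 (assumed, for illustration)
alpha0, beta0 = 2.0, 50.0
alpha_n = alpha0 + len(wind)
beta_n = beta0 + 0.5 * np.sum(wind ** 2)

# Posterior predictive by Monte Carlo: draw theta, then draw a future wind speed
theta = stats.invgamma.rvs(a=alpha_n, scale=beta_n, size=10_000, random_state=rng)
wind_pred = rng.rayleigh(scale=np.sqrt(theta))

print("Posterior predictive mean [m/s]:", wind_pred.mean())
print("90% predictive interval:", np.percentile(wind_pred, [5, 95]))
```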

  4. CRITICAL ANALYSIS OF EVALUATION MODEL LOMCE

    Directory of Open Access Journals (Sweden)

    José Luis Bernal Agudo

    2015-06-01

    The evaluation model proposed by the LOMCE is rooted in neoliberal beliefs, reflecting a specific way of understanding the world. What matters is not the process but the results, with evaluation at the centre of the teaching-learning processes. The model reflects poor planning, since the theory that justifies it is not developed into coherent proposals; there is an excessive concern for excellence, while diversity is left out. A comprehensive way of understanding education should be recovered.

  5. Role of Personality Traits, Learning Styles and Metacognition in Predicting Critical Thinking of Undergraduate Students

    Directory of Open Access Journals (Sweden)

    Soliemanifar O

    2015-04-01

    The aim of this study was to investigate the role of personality traits, learning styles and metacognition in predicting critical thinking. Instrument & Methods: In this descriptive correlational study, 240 students (130 girls and 110 boys) of Ahvaz Shahid Chamran University were selected by a multi-stage random sampling method. The instruments for collecting data were the NEO Five-Factor Inventory, the learning style inventory of Kolb (LSI), the metacognitive assessment inventory (MAI) of Schraw & Dennison (1994) and the California Critical Thinking Skills Test (CCTST). The data were analyzed using Pearson correlation coefficients, stepwise regression analysis and canonical correlation analysis. Findings: Openness to experience (b=0.41), conscientiousness (b=0.28), abstract conceptualization (b=0.39), active experimentation (b=0.22), reflective observation (b=0.12), knowledge of cognition (b=0.47) and regulation of cognition (b=0.29) were effective in predicting critical thinking. Openness to experience and conscientiousness (r2=0.25), the active experimentation, abstract conceptualization and reflective observation learning styles (r2=0.21) and the knowledge and regulation of cognition metacognitions (r2=0.3) had an important role in explaining critical thinking. The linear combination of critical thinking skills (evaluation, analysis, inference) was predictable by a linear combination of dispositional-cognitive factors (openness, conscientiousness, abstract conceptualization, active experimentation, knowledge of cognition and regulation of cognition). Conclusion: Personality traits, learning styles and metacognition, as dispositional-cognitive factors, play a significant role in students' critical thinking.

  6. Fingerprint verification prediction model in hand dermatitis.

    Science.gov (United States)

    Lee, Chew K; Chang, Choong C; Johor, Asmah; Othman, Puwira; Baba, Roshidah

    2015-07-01

    Hand dermatitis-associated fingerprint changes are a significant problem and affect fingerprint verification processes. This study was done to develop a clinically useful prediction model for fingerprint verification in patients with hand dermatitis. A case-control study involving 100 patients with hand dermatitis. All patients verified their thumbprints against their identity card. Registered fingerprints were randomized into a model derivation and a model validation group. The predictive model was derived using multiple logistic regression. Validation was done using the goodness-of-fit test. The fingerprint verification prediction model consists of a major criterion (fingerprint dystrophy area of ≥ 25%) and two minor criteria (long horizontal lines and long vertical lines). The presence of the major criterion predicts that verification will almost always fail, while the presence of both minor criteria and the presence of one minor criterion predict high and low risk of fingerprint verification failure, respectively. When none of the criteria are met, the fingerprint almost always passes the verification. The area under the receiver operating characteristic curve was 0.937, and the goodness-of-fit test showed agreement between the observed and expected number (P = 0.26). The derived fingerprint verification failure prediction model is validated and highly discriminatory in predicting the risk of fingerprint verification failure in patients with hand dermatitis. © 2014 The International Society of Dermatology.
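
    The decision rule described above is simple enough to express directly in code. The sketch below is a hypothetical encoding of that rule; the function name, inputs, and wording of the risk categories are illustrative, while the thresholds follow the abstract.

```python
def fingerprint_verification_risk(dystrophy_pct, long_horizontal_lines, long_vertical_lines):
    """Return a qualitative risk of fingerprint verification failure.

    Encodes the criteria described above: one major criterion
    (dystrophy area >= 25%) and two minor criteria (long horizontal
    and long vertical lines). Wording of the categories is illustrative.
    """
    if dystrophy_pct >= 25:
        return "almost always fails verification"
    minors = int(bool(long_horizontal_lines)) + int(bool(long_vertical_lines))
    if minors == 2:
        return "high risk of verification failure"
    if minors == 1:
        return "low risk of verification failure"
    return "almost always passes verification"

print(fingerprint_verification_risk(10, True, False))
```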

  7. Massive Predictive Modeling using Oracle R Enterprise

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...

  8. Causal Measurement Models: Can Criticism Stimulate Clarification?

    Science.gov (United States)

    Markus, Keith A.

    2016-01-01

    In their 2016 work, Aguirre-Urreta et al. provided a contribution to the literature on causal measurement models that enhances clarity and stimulates further thinking. Aguirre-Urreta et al. presented a form of statistical identity involving mapping onto the portion of the parameter space involving the nomological net, relationships between the…

  9. Prediction calculation of HTR-10 fuel loading for the first criticality

    International Nuclear Information System (INIS)

    Jing Xingqing; Yang Yongwei; Gu Yuxiang; Shan Wenzhi

    2001-01-01

    The 10 MW high temperature gas cooled reactor (HTR-10) was built at the Institute of Nuclear Energy Technology, Tsinghua University, and the first criticality was attained in Dec. 2000. The high temperature gas cooled reactor physics simulation code VSOP was used for the prediction of the fuel loading for the HTR-10 first criticality. The numbers of fuel elements and graphite elements were predicted to provide a reference for the first criticality experiment. The prediction calculations took into account factors including the double heterogeneity of the fuel element, buckling feedback for the spectrum calculation, the effect of the mixture of the graphite and the fuel elements, and the correction of the diffusion coefficients near the upper cavity based on transport theory. The effects of impurities in the fuel and graphite elements in the core and in the reflector graphite on the reactivity of the reactor were considered in detail. The first criticality experiment showed that the predicted values and the experimental results were in good agreement, with a relative error of less than 1%, which means the prediction was successful

  10. Multi-model analysis in hydrological prediction

    Science.gov (United States)

    Lanthier, M.; Arsenault, R.; Brissette, F.

    2017-12-01

    Hydrologic modelling, by nature, is a simplification of the real-world hydrologic system. Ensemble hydrological predictions thus obtained therefore do not present the full range of possible streamflow outcomes, producing ensembles that exhibit errors in variance such as under-dispersion. Past studies show that lumped models used in prediction mode can return satisfactory results, especially when there is not enough information available on the watershed to run a distributed model. But all lumped models greatly simplify the complex processes of the hydrologic cycle. To generate more spread in the hydrologic ensemble predictions, multi-model ensembles have been considered. In this study, the aim is to propose and analyse a method that gives an ensemble streamflow prediction that properly represents the forecast probabilities and reduces ensemble bias. To achieve this, three simple lumped models are used to generate an ensemble. These are also combined using multi-model averaging techniques, which generally produce a more accurate hydrograph than the best of the individual models in simulation mode. This new combined predictive hydrograph is added to the ensemble, thus creating a large ensemble which may improve the variability while also improving the ensemble mean bias. The quality of the predictions is then assessed over different periods: 2 weeks, 1 month, 3 months and 6 months, using a PIT histogram of the percentiles of the observed volumes with respect to the volumes of the ensemble members. Initially, the models were run using historical weather data to generate synthetic flows. This worked for the individual models, but not for the multi-model and the large ensemble. Consequently, by performing data assimilation at each prediction period and thus adjusting the initial states of the models, the PIT histogram could be constructed using the observed flows while allowing the use of the multi-model predictions. The under-dispersion has been

  11. Prostate Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing prostate cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  12. Colorectal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  13. Esophageal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing esophageal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  14. Bladder Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing bladder cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  15. Lung Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing lung cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  16. Breast Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing breast cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  17. Pancreatic Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing pancreatic cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  18. Ovarian Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing ovarian cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  19. Liver Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing liver cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  20. Testicular Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing testicular cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  1. Cervical Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing cervical cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  2. Modeling and Prediction Using Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp

    2016-01-01

    Pharmacokinetic/pharmacodynamic (PK/PD) modeling for a single subject is most often performed using nonlinear models based on deterministic ordinary differential equations (ODEs), and the variation between subjects in a population of subjects is described using a population (mixed effects) setup. Such ODE models are deterministic and can predict the future perfectly. A more realistic approach is to allow for randomness in the model due to, e.g., the model being too simple or errors in input. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs)...
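
    As a minimal illustration of the difference between the two viewpoints, the sketch below simulates a one-compartment elimination model both as an ODE prediction and as one SDE realization via the Euler-Maruyama scheme; the rate constant, diffusion magnitude and initial concentration are invented, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# One-compartment elimination: dC = -k*C dt + sigma*C dW (illustrative parameters)
k, sigma = 0.5, 0.1        # elimination rate [1/h], diffusion magnitude
C0, T, dt = 10.0, 12.0, 0.01
n = int(T / dt)

t = np.linspace(0.0, T, n + 1)
C = np.empty(n + 1)
C[0] = C0
for i in range(n):
    dW = rng.normal(0.0, np.sqrt(dt))                    # Wiener increment
    C[i + 1] = C[i] - k * C[i] * dt + sigma * C[i] * dW  # Euler-Maruyama step

print("Deterministic (ODE) prediction at T:", C0 * np.exp(-k * T))
print("One stochastic (SDE) realization at T:", C[-1])
```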

  3. Predictive Model of Systemic Toxicity (SOT)

    Science.gov (United States)

    In an effort to ensure chemical safety in light of regulatory advances away from reliance on animal testing, USEPA and L’Oréal have collaborated to develop a quantitative systemic toxicity prediction model. Prediction of human systemic toxicity has proved difficult and remains a ...

  4. Nonconvex model predictive control for commercial refrigeration

    Science.gov (United States)

    Gybel Hovgaard, Tobias; Boyd, Stephen; Larsen, Lars F. S.; Bagterp Jørgensen, John

    2013-08-01

    We consider the control of a commercial multi-zone refrigeration system that consists of several cooling units sharing a common compressor and is used to cool multiple areas or rooms. In each time period we choose the cooling capacity of each unit and a common evaporation temperature. The goal is to minimise the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear, and the constraints are convex. The cost function, however, is nonconvex due to the temperature dependence of thermodynamic efficiency. To handle this nonconvexity we propose a sequential convex optimisation method, which typically converges in about five iterations or fewer. We employ a fast convex quadratic programming solver to carry out the iterations, which is more than fast enough to run in real time. We demonstrate our method on a realistic model, with a full year simulation and 15-minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost savings, on the order of 30%, compared to a standard thermostat-based control system. Perhaps more important, we see that the method exhibits sophisticated response to real-time variations in electricity prices. This demand response is critical to help balance real-time uncertainties in generation capacity associated with large penetration of intermittent renewable energy sources in a future smart grid.
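
    A heavily simplified, single-zone, convex analogue of this formulation is sketched below using cvxpy (the real problem is multi-zone and nonconvex, and is solved by sequential convex optimisation). The thermal-model coefficients, the price series, and the temperature limits are invented for illustration only.

```python
import numpy as np
import cvxpy as cp

# Illustrative single-zone discrete-time thermal model (parameters invented)
N = 96                        # horizon: one day of 15-minute periods
a, b = 0.95, 0.2              # thermal inertia and cooling gain
T_out = 28 + 4 * np.sin(np.linspace(0, 2 * np.pi, N))            # outdoor temperature [C]
price = 0.2 + 0.15 * (np.sin(np.linspace(0, 4 * np.pi, N)) > 0)  # electricity price [$/kWh]

T = cp.Variable(N + 1)        # zone temperature [C]
u = cp.Variable(N)            # cooling power [kW]

constraints = [T[0] == 5.0, u >= 0, u <= 10, T[1:] >= 2.0, T[1:] <= 6.0]
for t in range(N):
    # Heat ingress from outdoors balanced against the cooling effort
    constraints.append(T[t + 1] == a * T[t] - b * u[t] + (1 - a) * T_out[t])

objective = cp.Minimize(price @ u * 0.25)   # energy cost over the horizon (0.25 h periods)
cp.Problem(objective, constraints).solve()

print("Total cooling cost over the horizon [$]:", objective.value)
```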

  5. Spent fuel: prediction model development

    International Nuclear Information System (INIS)

    Almassy, M.Y.; Bosi, D.M.; Cantley, D.A.

    1979-07-01

    The need for spent fuel disposal performance modeling stems from a requirement to assess the risks involved with deep geologic disposal of spent fuel, and to support licensing and public acceptance of spent fuel repositories. Through the balanced program of analysis, diagnostic testing, and disposal demonstration tests, highlighted in this presentation, the goal of defining risks and of quantifying fuel performance during long-term disposal can be attained

  6. Navy Recruit Attrition Prediction Modeling

    Science.gov (United States)

    2014-09-01

    Factors such as age, job characteristics, command climate, marital status, and behavioral issues prior to recruitment show high correlation with attrition. The additive model was fitted as glm(formula = Outcome ~ Age + Gender + Marital + AFQTCat + Pay + Ed + Dep, family = binomial, data = ltraining), a binomial (logistic) regression with a null deviance of 105441 on 85221 degrees of freedom.

  7. Predicting and Modeling RNA Architecture

    Science.gov (United States)

    Westhof, Eric; Masquida, Benoît; Jossinet, Fabrice

    2011-01-01

    SUMMARY A general approach for modeling the architecture of large and structured RNA molecules is described. The method exploits the modularity and the hierarchical folding of RNA architecture that is viewed as the assembly of preformed double-stranded helices defined by Watson-Crick base pairs and RNA modules maintained by non-Watson-Crick base pairs. Despite the extensive molecular neutrality observed in RNA structures, specificity in RNA folding is achieved through global constraints like lengths of helices, coaxiality of helical stacks, and structures adopted at the junctions of helices. The Assemble integrated suite of computer tools allows for sequence and structure analysis as well as interactive modeling by homology or ab initio assembly with possibilities for fitting within electronic density maps. The local key role of non-Watson-Crick pairs guides RNA architecture formation and offers metrics for assessing the accuracy of three-dimensional models in a more useful way than usual root mean square deviation (RMSD) values. PMID:20504963

  8. The U(1)-Higgs model: critical behaviour in the confining-Higgs region

    International Nuclear Information System (INIS)

    Alonso, J.L.; Azcoiti, V.; Campos, I.; Ciria, J.C.; Cruz, A.; Iniguez, D.; Lesmes, F.; Piedrafita, C.; Rivero, A.; Tarancon, A.; Badoni, D.; Fernandez, L.A.; Munoz Sudupe, A.; Ruiz-Lorenzo, J.J.; Gonzalez-Arroyo, A.; Martinez, P.; Pech, J.; Tellez, P.

    1993-01-01

    We study numerically the critical properties of the U(1)-Higgs lattice model, with fixed Higgs modulus, in the region of small gauge coupling where the Higgs and confining phases merge. We find evidence for a first-order transition line that ends in a second-order point. By means of a rotation in parameter space we introduce thermodynamic magnitudes and critical exponents in close resemblance with simple models that show analogous critical behaviour. The measured data allow us to fit the critical exponents finding values in agreement with the mean-field prediction. The location of the critical point and the slope of the first-order line are accurately measured. (orig.)

  9. Predictive Models and Computational Toxicology (II IBAMTOX)

    Science.gov (United States)

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  10. Finding furfural hydrogenation catalysts via predictive modelling

    NARCIS (Netherlands)

    Strassberger, Z.; Mooijman, M.; Ruijter, E.; Alberts, A.H.; Maldonado, A.G.; Orru, R.V.A.; Rothenberg, G.

    2010-01-01

    We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes were synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol complexes

  11. FINITE ELEMENT MODEL FOR PREDICTING RESIDUAL ...

    African Journals Online (AJOL)

    FINITE ELEMENT MODEL FOR PREDICTING RESIDUAL STRESSES IN ... the transverse residual stress in the x-direction (σx) had a maximum value of 375MPa ... the finite element method are in fair agreement with the experimental results.

  12. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico; Kryshtafovych, Andriy; Tramontano, Anna

    2009-01-01

    established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic

  13. A critical flow model for the Cathena thermalhydraulic code

    International Nuclear Information System (INIS)

    Popov, N.K.; Hanna, B.N.

    1990-01-01

    The calculation of the critical flow rate, e.g., of choked flow through a break, is required for simulating a loss of coolant transient in a reactor or reactor-like experimental facility. A model was developed to calculate the flow rate through the break for given geometrical parameters near the break and fluid parameters upstream of the break, for ordinary water as well as heavy water, with or without non-condensible gases. This model has been incorporated in CATHENA, a one-dimensional, two-fluid thermalhydraulic code. In the CATHENA code a standard staggered-mesh, finite-difference representation is used to solve the thermalhydraulic equations. The model compares the fluid mixture velocity, calculated using the CATHENA momentum equations, with a critical velocity. When the mixture velocity is smaller than the critical velocity, the flow is assumed to be subcritical, and the model remains passive. When the fluid mixture velocity is higher than the critical velocity, the model sets the fluid mixture velocity equal to the critical velocity. In this paper the critical velocity at a link (momentum cell) is first estimated separately for single-phase liquid, two-phase, or single-phase gas flow conditions at the upstream node (mass/energy cell). In all three regimes non-condensible gas can be present in the flow. For single-phase liquid flow, the critical velocity is estimated using a Bernoulli-type equation, and the pressure at the link is estimated by the pressure undershoot method
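
    The choking logic described above amounts to a per-link comparison between the computed mixture velocity and a regime-dependent critical velocity. The sketch below shows only that comparison for the single-phase liquid case with a Bernoulli-type estimate; the discharge coefficient, the throat pressure (which the code would obtain from its pressure-undershoot model), and the fluid state are illustrative placeholders, not the CATHENA correlations.

```python
import math

def bernoulli_critical_velocity(p_upstream, p_throat, rho, discharge_coeff=0.61):
    """Bernoulli-type critical (choking) velocity for single-phase liquid [m/s].

    p_throat would come from a pressure-undershoot (flashing) model in the
    real code; here it is passed in directly for illustration.
    """
    return discharge_coeff * math.sqrt(2.0 * max(p_upstream - p_throat, 0.0) / rho)

def apply_choking(mixture_velocity, critical_velocity):
    """Limit the link velocity to the critical value when the flow is choked."""
    if mixture_velocity > critical_velocity:
        return critical_velocity, True    # choked: velocity clamped
    return mixture_velocity, False        # subcritical: model stays passive

v_crit = bernoulli_critical_velocity(p_upstream=7.0e6, p_throat=2.0e6, rho=740.0)
v, choked = apply_choking(mixture_velocity=45.0, critical_velocity=v_crit)
print("link velocity [m/s]:", round(v, 1), "choked:", choked)
```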

  14. Critical fluctuations in cortical models near instability

    Directory of Open Access Journals (Sweden)

    Matthew J. Aburn

    2012-08-01

    Computational studies often proceed from the premise that cortical dynamics operate in a linearly stable domain, where fluctuations dissipate quickly and show only short memory. Studies of human EEG, however, have shown significant autocorrelation at time lags on the scale of minutes, indicating the need to consider regimes where nonlinearities influence the dynamics. Statistical properties such as increased autocorrelation length, increased variance, power-law scaling and bistable switching have been suggested as generic indicators of the approach to bifurcation in nonlinear dynamical systems. We study temporal fluctuations in a widely-employed computational model of cortical activity (the Jansen-Rit model), examining the statistical signatures that accompany bifurcations. Approaching supercritical Hopf bifurcations through tuning of the background excitatory input, we find a dramatic increase in the autocorrelation length that depends sensitively on the direction in phase space of the input fluctuations and hence on which neuronal subpopulation is stochastically perturbed. Similar dependence on the input direction is found in the distribution of fluctuation size and duration, which show power-law scaling that extends over four orders of magnitude at the Hopf bifurcation. We conjecture that the alignment in phase space between the input noise vector and the center manifold of the Hopf bifurcation is directly linked to these changes. These results are consistent with the possibility of statistical indicators of linear instability being detectable in real EEG time series. However, even in a simple cortical model, we find that these indicators may not necessarily be visible even when bifurcations are present because their expression can depend sensitively on the neuronal pathway of incoming fluctuations.

  15. Predicting soil acidification trends at Plynlimon using the SAFE model

    Directory of Open Access Journals (Sweden)

    B. Reynolds

    1997-01-01

    The SAFE model has been applied to an acid grassland site located on base-poor stagnopodzol soils derived from Lower Palaeozoic greywackes. The model predicts that acidification of the soil has occurred in response to increased acid deposition following the industrial revolution. Limited recovery is predicted following the decline in sulphur deposition during the mid to late 1970s. Reducing excess sulphur and NOx deposition in 1998 to 40% and 70% of 1980 levels results in further recovery, but soil chemical conditions (base saturation, soil water pH and ANC) do not return to values predicted in pre-industrial times. The SAFE model predicts that critical loads (expressed in terms of the (Ca+Mg+K):Al critical ratio) for six vegetation species found in acid grassland communities are not exceeded despite the increase in deposited acidity following the industrial revolution. The relative growth response of selected vegetation species characteristic of acid grassland swards has been predicted using a damage function linking growth to the soil solution base cation to aluminium ratio. The results show that very small growth reductions can be expected for 'acid tolerant' plants growing in acid upland soils. For more sensitive species such as Holcus lanatus, SAFE predicts that growth would have been reduced by about 20% between 1951 and 1983, when acid inputs were greatest. Recovery to c. 90% of normal growth (under laboratory conditions) is predicted as acidic inputs decline.

  16. Modeling and prediction of flotation performance using support vector regression

    Directory of Open Access Journals (Sweden)

    Despotović Vladimir

    2017-01-01

    Continuous efforts have been made in recent years to improve the process of paper recycling, as it is of critical importance for saving wood, water and energy resources. Flotation deinking is considered to be one of the key methods for the separation of ink particles from cellulose fibres. Attempts to model the flotation deinking process have often resulted in complex models that are difficult to implement and use. In this paper a model for the prediction of flotation performance based on Support Vector Regression (SVR) is presented. Representative data samples were created in the laboratory under a variety of practical control variables for the flotation deinking process, including different reagents, pH values and flotation residence times. A predictive model trained on these data samples was created, and the flotation performance was assessed, showing that Support Vector Regression is a promising method even when the dataset used for training the model is limited.
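
    A minimal sketch of this kind of model is shown below using scikit-learn's SVR with an RBF kernel; the three control variables, their ranges, and the synthetic response are invented for illustration and do not reproduce the laboratory data.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(3)

# Invented laboratory-style samples: [reagent dose, pH, residence time [min]]
X = rng.uniform([0.1, 7.0, 2.0], [1.0, 11.0, 12.0], size=(60, 3))
# Invented flotation performance response (e.g., brightness gain), for illustration
y = 20 * X[:, 0] + 1.5 * (X[:, 1] - 9) ** 2 + 0.8 * X[:, 2] + rng.normal(0, 1, 60)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X, y)

print("Predicted performance for dose=0.5, pH=9.5, time=8 min:",
      model.predict([[0.5, 9.5, 8.0]])[0])
```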

  17. Mental models accurately predict emotion transitions.

    Science.gov (United States)

    Thornton, Mark A; Tamir, Diana I

    2017-06-06

    Successful social interactions depend on people's ability to predict others' future actions and emotions. People possess many mechanisms for perceiving others' current emotional states, but how might they use this information to predict others' future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others' emotional dynamics. People could then use these mental models of emotion transitions to predict others' future emotions from currently observable emotions. To test this hypothesis, studies 1-3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants' ratings of emotion transitions predicted others' experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation-valence, social impact, rationality, and human mind-inform participants' mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants' accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.

  18. Mental models accurately predict emotion transitions

    Science.gov (United States)

    Thornton, Mark A.; Tamir, Diana I.

    2017-01-01

    Successful social interactions depend on people’s ability to predict others’ future actions and emotions. People possess many mechanisms for perceiving others’ current emotional states, but how might they use this information to predict others’ future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others’ emotional dynamics. People could then use these mental models of emotion transitions to predict others’ future emotions from currently observable emotions. To test this hypothesis, studies 1–3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants’ ratings of emotion transitions predicted others’ experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation—valence, social impact, rationality, and human mind—inform participants’ mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants’ accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone. PMID:28533373

  19. Comparison of the models of financial distress prediction

    Directory of Open Access Journals (Sweden)

    Jiří Omelka

    2013-01-01

    Prediction of financial distress generally aims to assess whether a business entity is close to bankruptcy, or at least to serious financial problems. Financial distress is defined as a situation in which a company is not able to satisfy its liabilities in any form, or in which its liabilities exceed its assets. Classification of the financial situation of business entities is a multidisciplinary scientific issue that draws not only on economic theory but also on statistical and econometric approaches. The first financial distress prediction models originated in the 1960s. One of the best known is Altman's model, followed by a range of others constructed on more or less comparable bases. In many existing models it is possible to find common elements which could be regarded as elementary indicators of a company's potential financial distress. The objective of this article is, based on a comparison of existing models of financial distress prediction, to define the set of basic indicators of a company's financial distress and to identify their critical aspects. The set defined in this way will be a background for future research focused on the determination of a one-dimensional model of financial distress prediction, which would subsequently become the basis for the construction of a multi-dimensional prediction model.
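
    As a concrete point of reference for the models being compared, the sketch below computes the widely cited original Altman (1968) Z-score for public manufacturing firms; the balance-sheet figures in the example are invented, and the coefficients and cut-offs should be checked against the original source before any real use.

```python
def altman_z(working_capital, retained_earnings, ebit, market_equity,
             total_liabilities, sales, total_assets):
    """Original Altman (1968) Z-score for public manufacturing firms."""
    x1 = working_capital / total_assets
    x2 = retained_earnings / total_assets
    x3 = ebit / total_assets
    x4 = market_equity / total_liabilities
    x5 = sales / total_assets
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

# Invented balance-sheet figures, for illustration only
z = altman_z(working_capital=50, retained_earnings=120, ebit=60,
             market_equity=400, total_liabilities=300, sales=800, total_assets=1000)
# Commonly cited cut-offs: Z > 2.99 "safe", Z < 1.81 "distress", otherwise grey zone
print("Z-score:", round(z, 2))
```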

  20. Durability and life prediction modeling in polyimide composites

    Science.gov (United States)

    Binienda, Wieslaw K.

    1995-01-01

    Sudden appearance of cracks on a macroscopically smooth surface of brittle materials due to cooling or drying shrinkage is a phenomenon related to many engineering problems. Although conventional strength theories can be used to predict the necessary condition for crack appearance, they are unable to predict crack spacing and depth. On the other hand, fracture mechanics theory can only study the behavior of existing cracks. The theory of crack initiation can be summarized into three conditions, which is a combination of a strength criterion and laws of energy conservation, the average crack spacing and depth can thus be determined. The problem of crack initiation from the surface of an elastic half plane is solved and compares quite well with available experimental evidence. The theory of crack initiation is also applied to concrete pavements. The influence of cracking is modeled by the additional compliance according to Okamura's method. The theoretical prediction by this structural mechanics type of model correlates very well with the field observation. The model may serve as a theoretical foundation for future pavement joint design. The initiation of interactive cracks of quasi-brittle material is studied based on a theory of cohesive crack model. These cracks may grow simultaneously, or some of them may close during certain stages. The concept of crack unloading of cohesive crack model is proposed. The critical behavior (crack bifurcation, maximum loads) of the cohesive crack model are characterized by rate equations. The post-critical behavior of crack initiation is also studied.

  1. Return Predictability, Model Uncertainty, and Robust Investment

    DEFF Research Database (Denmark)

    Lukas, Manuel

    Stock return predictability is subject to great uncertainty. In this paper we use the model confidence set approach to quantify uncertainty about expected utility from investment, accounting for potential return predictability. For monthly US data and six representative return prediction models, we find that confidence sets are very wide, change significantly with the predictor variables, and frequently include expected utilities for which the investor prefers not to invest. The latter motivates a robust investment strategy maximizing the minimal element of the confidence set. The robust investor allocates a much lower share of wealth to stocks compared to a standard investor.

  2. Model predictive Controller for Mobile Robot

    OpenAIRE

    Alireza Rezaee

    2017-01-01

    This paper proposes a Model Predictive Controller (MPC) for the control of a P2AT mobile robot. MPC refers to a group of controllers that employ an explicit model of the process to predict its future behavior over an extended prediction horizon. The design of an MPC is formulated as an optimal control problem. This problem is then cast as a linear quadratic regulator (LQR) problem and solved by making use of the Riccati equation. To show the effectiveness of the proposed method this controller is...
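
    The LQR/Riccati step mentioned above can be illustrated with a few lines of numerical code. The sketch below iterates the discrete-time Riccati recursion to obtain a state-feedback gain and applies it to a toy double-integrator model; the system matrices and weights are invented and are not the P2AT robot model.

```python
import numpy as np

def dlqr_gain(A, B, Q, R, iterations=500):
    """Discrete-time LQR gain via backward iteration of the Riccati recursion."""
    P = Q.copy()
    for _ in range(iterations):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Illustrative double-integrator tracking-error model (values invented)
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.diag([1.0, 0.1])
R = np.array([[0.01]])

K = dlqr_gain(A, B, Q, R)
x = np.array([1.0, 0.0])            # initial tracking error
for _ in range(50):
    u = -K @ x                      # state feedback, applied in receding-horizon fashion
    x = A @ x + B @ u
print("Final error norm:", np.linalg.norm(x))
```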

  3. Critical Comments on the General Model of Instructional Communication

    Science.gov (United States)

    Walton, Justin D.

    2014-01-01

    This essay presents a critical commentary on McCroskey et al.'s (2004) general model of instructional communication. In particular, five points are examined which make explicit and problematize the meta-theoretical assumptions of the model. Comments call attention to the limitations of the model and argue for a broader approach to…

  4. Spatial Economics Model Predicting Transport Volume

    Directory of Open Access Journals (Sweden)

    Lu Bo

    2016-10-01

    It is extremely important to predict logistics requirements in a scientific and rational way. However, in recent years the improvement of prediction methods has not been significant: traditional statistical prediction methods suffer from low precision and poor interpretability, and can neither guarantee the generalization ability of the prediction model theoretically nor explain the models effectively. Therefore, in combination with theories from spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, this study identifies the leading industries that generate large cargo volumes and further predicts the static logistics generation of Zhuanghe and its hinterland. By integrating various factors that affect regional logistics requirements, the study establishes a logistics requirements potential model based on spatial economic principles, and expands logistics requirements prediction from purely statistical principles to the new area of spatial and regional economics.

  5. Accuracy assessment of landslide prediction models

    International Nuclear Information System (INIS)

    Othman, A N; Mohd, W M N W; Noraini, S

    2014-01-01

    The increasing population and the expansion of settlements over hilly areas have greatly increased the impact of natural disasters such as landslides. Therefore, it is important to develop models which can accurately predict landslide hazard zones. Over the years, various techniques and models have been developed to predict landslide hazard zones. The aim of this paper is to assess the accuracy of landslide prediction models developed by the authors. The methodology involved the selection of the study area, data acquisition, data processing, model development and data analysis. The development of these models is based on nine different landslide-inducing parameters, i.e. slope, land use, lithology, soil properties, geomorphology, flow accumulation, aspect, proximity to river and proximity to road. Rank sum, rating, pairwise comparison and AHP techniques are used to determine the weights for each of the parameters used. Four (4) different models which consider different parameter combinations are developed by the authors. Results obtained are compared to landslide history, and the accuracies for Model 1, Model 2, Model 3 and Model 4 are 66.7%, 66.7%, 60% and 22.9%, respectively. From the results, rank sum, rating and pairwise comparison can be useful techniques to predict landslide hazard zones
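
    Of the weighting techniques mentioned, pairwise comparison in the AHP sense reduces to extracting the principal eigenvector of a judgment matrix and checking its consistency. The sketch below shows that computation; the four factors and the judgment values are invented for illustration, not the weights used in the study.

```python
import numpy as np

# Illustrative pairwise comparison matrix for 4 landslide factors
# (slope, lithology, land use, proximity to river); judgments invented
A = np.array([
    [1.0, 3.0, 5.0, 7.0],
    [1/3, 1.0, 3.0, 5.0],
    [1/5, 1/3, 1.0, 3.0],
    [1/7, 1/5, 1/3, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                       # AHP priority weights

# Saaty consistency index; RI = 0.90 is the usual random index for n = 4
ci = (eigvals.real[k] - len(A)) / (len(A) - 1)
print("weights:", np.round(weights, 3), "consistency ratio:", round(ci / 0.90, 3))
```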

  6. ACCEPT: Introduction of the Adverse Condition and Critical Event Prediction Toolbox

    Science.gov (United States)

    Martin, Rodney A.; Santanu, Das; Janakiraman, Vijay Manikandan; Hosein, Stefan

    2015-01-01

    The prediction of anomalies or adverse events is a challenging task, and there are a variety of methods which can be used to address the problem. In this paper, we introduce a generic framework developed in MATLAB® called ACCEPT (Adverse Condition and Critical Event Prediction Toolbox). ACCEPT is an architectural framework designed to compare and contrast the performance of a variety of machine learning and early warning algorithms, and tests the capability of these algorithms to robustly predict the onset of adverse events in any time-series data generating systems or processes.

  7. Finding Furfural Hydrogenation Catalysts via Predictive Modelling.

    Science.gov (United States)

    Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi

    2010-09-10

    We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes was synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (k(H):k(D)=1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R(2)=0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model's predictions, demonstrating the validity and value of predictive modelling in catalyst optimization.

  8. Corporate prediction models, ratios or regression analysis?

    NARCIS (Netherlands)

    Bijnen, E.J.; Wijn, M.F.C.M.

    1994-01-01

    The models developed in the literature with respect to the prediction of a company's failure are based on ratios. It has been shown before that these models should be rejected on theoretical grounds. Our study of industrial companies in the Netherlands shows that the ratios which are used in …

  9. Predicting Protein Secondary Structure with Markov Models

    DEFF Research Database (Denmark)

    Fischer, Paul; Larsen, Simon; Thomsen, Claus

    2004-01-01

    The problem we are considering here is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained in the Markov model for this task. Classifications that are purely based on statistical models might not always be biologically meaningful. We present combinatorial methods to incorporate biological background knowledge to enhance the prediction performance.
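
    As a toy illustration of the statistical part of such an approach (not the authors' code), one can train one first-order Markov chain per structural class on labelled residue sequences and label a window by whichever chain gives it the highest log-likelihood; the alphabet handling is standard, while the training fragments below are placeholders.

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids
idx = {a: i for i, a in enumerate(AA)}

def train_markov(sequences, alpha=1.0):
    """Estimate first-order transition probabilities with add-alpha smoothing."""
    counts = np.full((20, 20), alpha)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[idx[a], idx[b]] += 1
    return np.log(counts / counts.sum(axis=1, keepdims=True))

def log_likelihood(seq, log_trans):
    return sum(log_trans[idx[a], idx[b]] for a, b in zip(seq, seq[1:]))

# Placeholder training fragments per class (helix H, sheet E, coil C).
train = {"H": ["ALAKELLE", "EELKKALE"], "E": ["VTVSVIV", "IVKVETV"], "C": ["GPGSGGN", "NGSPGDG"]}
models = {label: train_markov(seqs) for label, seqs in train.items()}

def classify(window):
    """Assign the class whose Markov chain scores the window highest."""
    return max(models, key=lambda label: log_likelihood(window, models[label]))

print(classify("ALKELAE"))  # expected to lean towards 'H' with these toy data
```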

  10. Predicting the unpredictable: Critical analysis and practical implications of predictive anticipatory activity

    Directory of Open Access Journals (Sweden)

    Julia eMossbridge

    2014-03-01

    Full Text Available A recent meta-analysis of experiments from seven independent laboratories (n=26) published since 1978 indicates that the human body can apparently detect randomly delivered stimuli occurring 1-10 seconds in the future (Mossbridge, Tressoldi, & Utts, 2012). The key observation in these studies is that human physiology appears to be able to distinguish between unpredictable dichotomous future stimuli, such as emotional vs. neutral images or sound vs. silence. This phenomenon has been called presentiment (as in feeling the future). In this paper we call it predictive anticipatory activity or PAA. The phenomenon is predictive because it can distinguish between upcoming stimuli; it is anticipatory because the physiological changes occur before a future event; and it is an activity because it involves changes in the cardiopulmonary, skin, and/or nervous systems. PAA is an unconscious phenomenon that seems to be a time-reversed reflection of the usual physiological response to a stimulus. It appears to resemble precognition (consciously knowing something is going to happen before it does), but PAA specifically refers to unconscious physiological reactions as opposed to conscious premonitions. Though it is possible that PAA underlies the conscious experience of precognition, experiments testing this idea have not produced clear results. The first part of this paper reviews the evidence for PAA and examines the two most difficult challenges for obtaining valid evidence for it: expectation bias and multiple analyses. The second part speculates on possible mechanisms and the theoretical implications of PAA for understanding physiology and consciousness. The third part examines potential practical applications.

  11. Comparative Study of Bankruptcy Prediction Models

    Directory of Open Access Journals (Sweden)

    Isye Arieshanti

    2013-09-01

    Full Text Available Early indication of bankruptcy is important for a company. If companies are aware of the potential for bankruptcy, they can take preventive action to anticipate it. In order to detect this potential, a company can utilize a bankruptcy prediction model. The prediction model can be built using machine learning methods. However, the choice of machine learning method should be made carefully, because the suitability of a model depends on the specific problem. Therefore, in this paper we perform a comparative study of several machine learning methods for bankruptcy prediction. According to the comparative study of the performance of several models based on machine learning methods (k-NN, fuzzy k-NN, SVM, Bagging Nearest Neighbour SVM, Multilayer Perceptron (MLP), and a hybrid of MLP + Multiple Linear Regression), the fuzzy k-NN method achieves the best performance, with an accuracy of 77.5%.

  12. Reduced functional measure of cardiovascular reserve predicts admission to critical care unit following kidney transplantation.

    Directory of Open Access Journals (Sweden)

    Stephen M S Ting

    Full Text Available There is currently no effective preoperative assessment for patients undergoing kidney transplantation that is able to identify those at high perioperative risk requiring admission to the critical care unit (CCU). We sought to determine if functional measures of cardiovascular reserve, in particular the anaerobic threshold (VO₂AT), could identify these patients. Adult patients were assessed within 4 weeks prior to kidney transplantation in a University hospital with a 37-bed CCU, between April 2010 and June 2012. Cardiopulmonary exercise testing (CPET), echocardiography and arterial applanation tonometry were performed. There were 70 participants (age 41.7±14.5 years, 60% male, 91.4% living donor kidney recipients, 23.4% desensitized). 14 patients (20%) required escalation of care from the ward to the CCU following transplantation. Reduced anaerobic threshold (VO₂AT) was the most significant predictor, both independently (OR = 0.43; 95% CI 0.27-0.68; p<0.001) and in the multivariate logistic regression analysis (adjusted OR = 0.26; 95% CI 0.12-0.59; p = 0.001). The area under the receiver-operating-characteristic curve was 0.93, based on a risk prediction model that incorporated VO₂AT, body mass index and desensitization status. Neither echocardiographic nor aortic compliance measures were significantly associated with CCU admission. To our knowledge, this is the first prospective observational study to demonstrate the usefulness of CPET as a preoperative risk stratification tool for patients undergoing kidney transplantation. The study suggests that VO₂AT has the potential to predict perioperative morbidity in kidney transplant recipients.

  13. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.

  14. Finding Furfural Hydrogenation Catalysts via Predictive Modelling

    Science.gov (United States)

    Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi

    2010-01-01

    Abstract We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes was synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (kH:kD=1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R2=0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model's predictions, demonstrating the validity and value of predictive modelling in catalyst optimization. PMID:23193388

  15. Critical percolation in the slow cooling of the bi-dimensional ferromagnetic Ising model

    Science.gov (United States)

    Ricateau, Hugo; Cugliandolo, Leticia F.; Picco, Marco

    2018-01-01

    We study, with numerical methods, the fractal properties of the domain walls found in slow quenches of the kinetic Ising model to its critical temperature. We show that the equilibrium interfaces in the disordered phase have critical percolation fractal dimension over a wide range of length scales. We confirm that the system falls out of equilibrium at a temperature that depends on the cooling rate as predicted by the Kibble-Zurek argument and we prove that the dynamic growing length once the cooling reaches the critical point satisfies the same scaling. We determine the dynamic scaling properties of the interface winding angle variance and we show that the crossover between critical Ising and critical percolation properties is determined by the growing length reached when the system fell out of equilibrium.

  16. A study of critical two-phase flow models

    International Nuclear Information System (INIS)

    Siikonen, T.

    1982-01-01

    The existing computer codes use different boundary conditions in the calculation of critical two-phase flow. In the present study these boundary conditions are compared. It is shown that the boundary condition should be determined from the hydraulic model used in the computer code; the use of a correlation which is not based on that hydraulic model often leads to poor results. A good agreement with data is usually obtained for the critical mass flux, but the agreement is not as good for the pressure profiles. The reason is suggested to be mainly inadequate modelling of non-equilibrium effects. (orig.)

  17. Wind farm production prediction - The Zephyr model

    Energy Technology Data Exchange (ETDEWEB)

    Landberg, L. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark); Giebel, G. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark); Madsen, H. [IMM (DTU), Kgs. Lyngby (Denmark); Nielsen, T.S. [IMM (DTU), Kgs. Lyngby (Denmark); Joergensen, J.U. [Danish Meteorologisk Inst., Copenhagen (Denmark); Lauersen, L. [Danish Meteorologisk Inst., Copenhagen (Denmark); Toefting, J. [Elsam, Fredericia (DK); Christensen, H.S. [Eltra, Fredericia (Denmark); Bjerge, C. [SEAS, Haslev (Denmark)

    2002-06-01

    This report describes a project - funded by the Danish Ministry of Energy and the Environment - which developed a next generation prediction system called Zephyr. The Zephyr system is a merging between two state-of-the-art prediction systems: Prediktor of Risoe National Laboratory and WPPT of IMM at the Danish Technical University. The numerical weather predictions were generated by DMI's HIRLAM model. Due to technical difficulties programming the system, only the computational core and a very simple version of the originally very complex system were developed. The project partners were: Risoe, DMU, DMI, Elsam, Eltra, Elkraft System, SEAS and E2. (au)

  18. Interpreting Disruption Prediction Models to Improve Plasma Control

    Science.gov (United States)

    Parsons, Matthew

    2017-10-01

    In order for the tokamak to be a feasible design for a fusion reactor, it is necessary to minimize damage to the machine caused by plasma disruptions. Accurately predicting disruptions is a critical capability for triggering any mitigative actions, and a modest amount of attention has been given to efforts that employ machine learning techniques to make these predictions. By monitoring diagnostic signals during a discharge, such predictive models look for signs that the plasma is about to disrupt. Typically these predictive models are interpreted simply to give a `yes' or `no' response as to whether a disruption is approaching. However, it is possible to extract further information from these models to indicate which input signals are more strongly correlated with the plasma approaching a disruption. If highly accurate predictive models can be developed, this information could be used in plasma control schemes to make better decisions about disruption avoidance. This work was supported by a Grant from the 2016-2017 Fulbright U.S. Student Program, administered by the Franco-American Fulbright Commission in France.

  19. Model predictive controller design of hydrocracker reactors

    OpenAIRE

    GÖKÇE, Dila

    2011-01-01

    This study summarizes the design of a Model Predictive Controller (MPC) for the Tüpraş İzmit Refinery Hydrocracker Unit reactors. The hydrocracking process, in which heavy vacuum gasoil is converted into lighter and more valuable products at high temperature and pressure, is described briefly. The controller design, identification and modeling studies are examined and the model variables are presented. WABT (Weighted Average Bed Temperature) equalization and conversion increase are simulated…

  20. Multi-Model Ensemble Wake Vortex Prediction

    Science.gov (United States)

    Koerner, Stephan; Holzaepfel, Frank; Ahmad, Nash'at N.

    2015-01-01

    Several multi-model ensemble methods are investigated for predicting wake vortex transport and decay. This study is a joint effort between National Aeronautics and Space Administration and Deutsches Zentrum fuer Luft- und Raumfahrt to develop a multi-model ensemble capability using their wake models. An overview of different multi-model ensemble methods and their feasibility for wake applications is presented. The methods include Reliability Ensemble Averaging, Bayesian Model Averaging, and Monte Carlo Simulations. The methodologies are evaluated using data from wake vortex field experiments.

  1. Classification and prediction of the critical heat flux using fuzzy theory and artificial neural networks

    International Nuclear Information System (INIS)

    Moon, Sang Ki; Chang, Soon Heung

    1994-01-01

    A new method to predict the critical heat flux (CHF) is proposed, based on fuzzy clustering and an artificial neural network. The fuzzy clustering classifies the experimental CHF data into a few clusters (data groups) according to the data characteristics. After classification of the experimental data, the characteristics of the resulting clusters are discussed, with emphasis on the distribution of the experimental conditions and the physical mechanism. The CHF data in each group are then used to train an artificial neural network to predict the CHF. The artificial neural network adjusts its weights so as to minimize the prediction error within the corresponding cluster. Application of the proposed method to the KAIST CHF data bank shows good prediction capability of the CHF, better than other existing methods. ((orig.))
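
    The cluster-then-regress idea can be sketched as follows; this is an illustration under assumed data shapes, not the KAIST implementation, and ordinary k-means stands in for fuzzy clustering.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

# Placeholder data: rows are experiments, columns stand for e.g. pressure,
# mass flux, quality, hydraulic diameter; y is the measured CHF.
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 4))
y = X @ np.array([2.0, -1.0, 0.5, 1.5]) + 0.1 * rng.normal(size=500)

# Step 1: partition the data bank into a few groups (stand-in for fuzzy clustering).
clusterer = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Step 2: train one neural network per cluster, so each net only has to fit
# the local CHF behaviour of its own data group.
nets = {}
for c in range(3):
    mask = clusterer.labels_ == c
    nets[c] = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                           random_state=0).fit(X[mask], y[mask])

def predict_chf(x):
    """Route a new condition to its cluster's network and predict the CHF."""
    c = int(clusterer.predict(x.reshape(1, -1))[0])
    return float(nets[c].predict(x.reshape(1, -1))[0])

print(predict_chf(X[0]))
```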

  2. Risk terrain modeling predicts child maltreatment.

    Science.gov (United States)

    Daley, Dyann; Bachmann, Michael; Bachmann, Brittany A; Pedigo, Christian; Bui, Minh-Thuy; Coffman, Jamye

    2016-12-01

    As indicated by research on the long-term effects of adverse childhood experiences (ACEs), maltreatment has far-reaching consequences for affected children. Effective prevention measures have been elusive, partly due to difficulty in identifying vulnerable children before they are harmed. This study employs Risk Terrain Modeling (RTM), an analysis of the cumulative effect of environmental factors thought to be conducive for child maltreatment, to create a highly accurate prediction model for future substantiated child maltreatment cases in the City of Fort Worth, Texas. The model is superior to commonly used hotspot predictions and more beneficial in aiding prevention efforts in a number of ways: 1) it identifies the highest risk areas for future instances of child maltreatment with improved precision and accuracy; 2) it aids the prioritization of risk-mitigating efforts by informing about the relative importance of the most significant contributing risk factors; 3) since predictions are modeled as a function of easily obtainable data, practitioners do not have to undergo the difficult process of obtaining official child maltreatment data to apply it; 4) the inclusion of a multitude of environmental risk factors creates a more robust model with higher predictive validity; and, 5) the model does not rely on a retrospective examination of past instances of child maltreatment, but adapts predictions to changing environmental conditions. The present study introduces and examines the predictive power of this new tool to aid prevention efforts seeking to improve the safety, health, and wellbeing of vulnerable children. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  3. Soft-Cliff Retreat, Self-Organized Critical Phenomena in the Limit of Predictability?

    Science.gov (United States)

    Paredes, Carlos; Godoy, Clara; Castedo, Ricardo

    2015-03-01

    Coastal erosion along the world's coastlines is a natural process that occurs through the action of marine and subaerial physico-chemical phenomena, waves, tides, and currents. The development of cliff-erosion predictive models is limited by the complex interactions between environmental processes and material properties over a wide range of temporal and spatial scales. As a result of this erosive action, gravity-driven mass movements occur and the coastline moves inland. Like other natural and synthetically modelled Earth phenomena characterized as self-organized critical (SOC), the recession of a cliff has a seemingly random, sporadic behaviour, with a wide range of yearly recession-rate values probabilistically distributed by a power law. Usually, SOC systems are defined by a number of scaling features in the size distribution of their parameters and in their spatial and/or temporal pattern. In particular, previous studies of parameters derived from slope-movement catalogues have detected certain SOC features in this phenomenon, features that cliff recession also shares. Owing to the complexity of the phenomenon, and as for other natural processes, there is no definitive model of the recession of coastal cliffs. In this work, various analysis techniques have been applied to identify SOC features in the distribution and pattern of a particular case: the Holderness shoreline. This coast is an excellent case study for examining coastal processes and the structures associated with them. It is one of the world's fastest-eroding coastlines (2 m/yr on average, with a maximum observed rate of 22 m/yr). The coast is mainly composed of cliffs, ranging from 2 m up to 35 m in height, made up of glacial tills. It is this soft boulder clay that is being rapidly eroded, and it is here that coastline recession measurements have been recorded by the Cliff Erosion Monitoring Program (East Riding of Yorkshire Council, UK). The original database has been filtered by grouping contiguous…

  4. Higher spin currents in the critical O(N) vector model at 1/N²

    International Nuclear Information System (INIS)

    Manashov, A.N.; Strohmaier, M.

    2017-06-01

    We calculate the anomalous dimensions of higher spin singlet currents in the critical O(N) vector model at order 1/N². The results are shown to be in agreement with the four-loop perturbative computation in φ⁴ theory in 4−2ε dimensions. It is known that the order-1/N anomalous dimensions of higher-spin currents happen to be the same in the Gross-Neveu and the critical vector model. On the contrary, the order-1/N² corrections are different. The results can also be interpreted as a prediction for the two-loop computation in the dual higher-spin gravity.

  5. PREDICTIVE CAPACITY OF ARCH FAMILY MODELS

    Directory of Open Access Journals (Sweden)

    Raphael Silveira Amaro

    2016-03-01

    Full Text Available In recent decades, a remarkable number of models, variants of the Autoregressive Conditional Heteroscedastic family, have been developed and empirically tested, making the process of choosing a particular model extremely complex. This research aims to compare the predictive capacity, using the Model Confidence Set procedure, of five conditional heteroskedasticity models under eight different statistical probability distributions. The financial series used refer to the log-return series of the Bovespa index and the Dow Jones Industrial Index in the period between 27 October 2008 and 30 December 2014. The empirical evidence showed that, in general, the competing models have a great homogeneity in making predictions, whether for a stock market of a developed country or for a stock market of a developing country. An equivalent result can be inferred for the statistical probability distributions that were used.

  6. Alcator C-Mod predictive modeling

    International Nuclear Information System (INIS)

    Pankin, Alexei; Bateman, Glenn; Kritz, Arnold; Greenwald, Martin; Snipes, Joseph; Fredian, Thomas

    2001-01-01

    Predictive simulations for the Alcator C-mod tokamak [I. Hutchinson et al., Phys. Plasmas 1, 1511 (1994)] are carried out using the BALDUR integrated modeling code [C. E. Singer et al., Comput. Phys. Commun. 49, 275 (1988)]. The results are obtained for temperature and density profiles using the Multi-Mode transport model [G. Bateman et al., Phys. Plasmas 5, 1793 (1998)] as well as the mixed-Bohm/gyro-Bohm transport model [M. Erba et al., Plasma Phys. Controlled Fusion 39, 261 (1997)]. The simulated discharges are characterized by very high plasma density in both low and high modes of confinement. The predicted profiles for each of the transport models match the experimental data about equally well in spite of the fact that the two models have different dimensionless scalings. Average relative rms deviations are less than 8% for the electron density profiles and 16% for the electron and ion temperature profiles

  7. A Simple Predictive Method of Critical Flicker Detection for Human Healthy Precaution

    Directory of Open Access Journals (Sweden)

    Goh Zai Peng

    2015-01-01

    Full Text Available Interharmonics and flicker are interrelated. Based on the International Electrotechnical Commission (IEC) flicker standard, the critical flicker frequency for the human eye is located at 8.8 Hz. Eye strain, headaches, and, in the worst case, seizures may result from critical flicker. This paper therefore addresses a worthwhile research gap by investigating the interrelationship between the amplitudes of the interharmonics and the critical flicker for a 50 Hz power system. The significant finding of this paper is that the amplitudes of two particular interharmonics are able to detect the critical flicker. In this paper, the aforementioned amplitudes are estimated by an adaptive linear neuron (ADALINE). The critical flicker is then detected by substituting these amplitudes into the formulas derived in the paper. Simulation and experimental work were conducted, and the accuracy of the proposed ADALINE-based algorithm is comparable to that of a typical Fluke power analyzer. In a nutshell, this simple predictive method for critical flicker detection has strong potential to be applied in crowded places (such as offices, shopping complexes, and stadiums) for health precaution purposes, due to its simplicity.
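
    For illustration only (under assumed signal parameters, not the authors' exact formulation), an ADALINE amplitude tracker is simply an LMS-adapted linear combiner whose inputs are sine/cosine references at the frequencies of interest; the interharmonic frequencies and learning rate below are placeholders.

```python
import numpy as np

fs = 3200.0                      # sampling rate (Hz), assumed
t = np.arange(0, 1.0, 1.0 / fs)
freqs = [50.0, 41.2, 58.8]       # fundamental plus two example interharmonics

# Synthetic measured voltage: fundamental plus small interharmonic components.
v = (np.sin(2 * np.pi * 50.0 * t)
     + 0.03 * np.sin(2 * np.pi * 41.2 * t)
     + 0.03 * np.sin(2 * np.pi * 58.8 * t))

# Reference inputs: sine and cosine at each tracked frequency.
X = np.column_stack([f(2 * np.pi * fr * t) for fr in freqs for f in (np.sin, np.cos)])

# ADALINE / LMS: adapt the weights sample by sample to reproduce the measurement.
w = np.zeros(X.shape[1])
mu = 0.02                        # learning rate, assumed
for x_k, d_k in zip(X, v):
    e = d_k - w @ x_k
    w += 2 * mu * e * x_k

# Amplitude of each tracked component = norm of its (sin, cos) weight pair.
amps = np.hypot(w[0::2], w[1::2])
for fr, a in zip(freqs, amps):
    print(f"{fr:5.1f} Hz -> amplitude ~ {a:.3f}")
```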

  8. Critical heat flux predictions for the Sandia Annular Core Research Reactor

    International Nuclear Information System (INIS)

    Rao, D.V.; El-Genk, M.S.

    1994-08-01

    This study provides best-estimate predictions of the Critical Heat Flux (CHF) and the Critical Heat Flux Ratio (CHFR) to support the proposed upgrade of the Annular Core Research Reactor (ACRR) at Sandia National Laboratories (SNL) from its present power of 2 MWt to 4 MWt. These predictions are based on the University of New Mexico (UNM) CHF correlation, originally developed for uniformly heated vertical annuli. The UNM-CHF correlation is applicable to low-flow and low-pressure conditions, which are typical of those in the ACRR. The three hypotheses that examine the effect of the nonuniform axial heat flux distribution in the ACRR core are (1) the local conditions hypothesis, (2) the total power hypothesis, and (3) the global conditions hypothesis. These hypotheses, in conjunction with the UNM-CHF correlation, are used to estimate the CHF and CHFR in the ACRR. Because the total power hypothesis predictions of power per rod at CHF are approximately 15%-20% lower than those corresponding to saturation exit conditions, it can be concluded that the total power hypothesis considerably underestimates the CHF for nonuniformly heated geometries. This conclusion is in agreement with previous experimental results. The global conditions hypothesis, which is more conservative and more accurate than the other two, provides the most reliable predictions of CHF/CHFR for the ACRR. The global conditions hypothesis predictions of CHFR varied between 2.1 and 3.9, with the higher value corresponding to the lower water inlet temperature of 20 degrees C.

  9. A self-organized criticality model for plasma transport

    International Nuclear Information System (INIS)

    Carreras, B.A.; Newman, D.; Lynch, V.E.

    1996-01-01

    Many models of natural phenomena manifest the basic hypothesis of self-organized criticality (SOC). The SOC concept brings together the self-similarity on space and time scales that is common to many of these phenomena. The application of the SOC modelling concept to the plasma dynamics near marginal stability opens new possibilities of understanding issues such as Bohm scaling, profile consistency, broad band fluctuation spectra with universal characteristics and fast time scales. A model realization of self-organized criticality for plasma transport in a magnetic confinement device is presented. The model is based on subcritical resistive pressure-gradient-driven turbulence. Three-dimensional nonlinear calculations based on this model show the existence of transport under subcritical conditions. This model that includes fluctuation dynamics leads to results very similar to the running sandpile paradigm

  10. Modelling the predictive performance of credit scoring

    Directory of Open Access Journals (Sweden)

    Shi-Wei Shen

    2013-07-01

    Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A goodness-of-fit test demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI) and micro- as well as macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality of the study is that three models were developed to predict corporate defaults based on different microeconomic and macroeconomic factors such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit measures and receiver operating characteristics in examining the robustness of the predictive power of these factors.
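
    A minimal sketch of this kind of comparison (synthetic data and assumed covariate names, not the paper's dataset): fit logistic default models with and without macroeconomic covariates and compare their discrimination.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
# Hypothetical covariates: firm-level score and asset growth (micro),
# stock index return and GDP growth (macro).
micro = rng.normal(size=(n, 2))
macro = rng.normal(size=(n, 2))
logit = -2.0 + 1.2 * micro[:, 0] - 0.8 * micro[:, 1] - 0.6 * macro[:, 1]
default = rng.binomial(1, 1 / (1 + np.exp(-logit)))

for name, X in [("micro only", micro), ("micro + macro", np.hstack([micro, macro]))]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, default, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name:14s} AUC = {auc:.3f}")
```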

  11. The critical boundary RSOS M(3,5) model

    Science.gov (United States)

    El Deeb, O.

    2017-12-01

    We consider the critical nonunitary minimal model M(3, 5) with integrable boundaries, analyze the patterns of zeros of the eigenvalues of the transfer matrix, and then determine the spectrum of the critical theory using the thermodynamic Bethe ansatz (TBA) equations. Solving the TBA functional equation satisfied by the transfer matrices of the associated A₄ restricted solid-on-solid Forrester-Baxter lattice model in regime III in the continuum scaling limit, we derive the integral TBA equations for all excitations in the (r, s) = (1, 1) sector and then determine their corresponding energies. We classify the excitations in terms of (m, n) systems.

  12. Predicting turns in proteins with a unified model.

    Directory of Open Access Journals (Sweden)

    Qi Song

    Full Text Available MOTIVATION: Turns are a critical element of the structure of a protein; turns play a crucial role in loops, folds, and interactions. Current prediction methods are well developed for the prediction of individual turn types, including α-turn, β-turn, and γ-turn, etc. However, for further protein structure and function prediction it is necessary to develop a uniform model that can accurately predict all types of turns simultaneously. RESULTS: In this study, we present a novel approach, TurnP, which offers the ability to investigate all the turns in a protein based on a unified model. The main characteristics of TurnP are: (i) the use of newly exploited features of structural evolution information (secondary structure and shape string of the protein) based on structure homologies, (ii) consideration of all types of turns in a unified model, and (iii) the practical capability of accurately predicting all turns simultaneously for a query. TurnP utilizes predicted secondary structures and predicted shape strings, both of which have high accuracy, based on innovative technologies developed by our group. Then, sequence and structural evolution features, namely the profile of the sequence, the profile of secondary structures and the profile of shape strings, are generated by sequence and structure alignment. When TurnP was validated on a non-redundant dataset (4,107 entries) by five-fold cross-validation, we achieved an accuracy of 88.8% and a sensitivity of 71.8%, which exceeded the most state-of-the-art predictors of particular turn types. Newly determined sequences and the EVA and CASP9 datasets were used as independent tests, and the results we achieved were outstanding for turn prediction, confirming the good performance of TurnP for practical applications.

  13. Comparison of two ordinal prediction models

    DEFF Research Database (Denmark)

    Kattan, Michael W; Gerds, Thomas A

    2015-01-01

    …system (i.e. old or new), such as the level of evidence for one or more factors included in the system or the general opinions of expert clinicians. However, given the major objective of estimating prognosis on an ordinal scale, we argue that the rival staging system candidates should be compared on their ability to predict outcome. We sought to outline an algorithm that would compare two rival ordinal systems on their predictive ability. RESULTS: We devised an algorithm based largely on the concordance index, which is appropriate for comparing two models in their ability to rank observations. We demonstrate our algorithm with a prostate cancer staging system example. CONCLUSION: We have provided an algorithm for selecting the preferred staging system based on prognostic accuracy. It appears to be useful for the purpose of selecting between two ordinal prediction models.
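
    A minimal sketch of the concordance index such an algorithm rests on (synthetic data, not the paper's prostate cancer example): the c-index is the fraction of usable pairs whose predicted ordering agrees with the observed ordering, so two rival ordinal systems can be compared by computing it for each.

```python
import numpy as np

def concordance_index(predicted, observed):
    """Fraction of comparable pairs ranked in the same order by prediction and outcome.
    Pairs tied on the observed outcome are skipped; ties in prediction count as 0.5."""
    concordant, usable = 0.0, 0
    n = len(observed)
    for i in range(n):
        for j in range(i + 1, n):
            if observed[i] == observed[j]:
                continue
            usable += 1
            same_order = (predicted[i] - predicted[j]) * (observed[i] - observed[j])
            if same_order > 0:
                concordant += 1.0
            elif same_order == 0:
                concordant += 0.5
    return concordant / usable

# Hypothetical ordinal outcome and the stages assigned by two rival systems.
outcome = np.array([0, 1, 1, 2, 2, 3, 0, 2, 3, 1])
system_a = np.array([0, 1, 2, 2, 1, 3, 0, 2, 3, 0])
system_b = np.array([1, 1, 0, 2, 2, 2, 0, 1, 3, 1])

print("c-index system A:", round(concordance_index(system_a, outcome), 3))
print("c-index system B:", round(concordance_index(system_b, outcome), 3))
```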

  14. A critical pressure based panel method for prediction of unsteady loading of marine propellers under cavitation

    International Nuclear Information System (INIS)

    Liu, P.; Bose, N.; Colbourne, B.

    2002-01-01

    A simple numerical procedure is established and implemented into a time domain panel method to predict hydrodynamic performance of marine propellers with sheet cavitation. This paper describes the numerical formulations and procedures to construct this integration. Predicted hydrodynamic loads were compared with both a previous numerical model and experimental measurements for a propeller in steady flow. The current method gives a substantial improvement in thrust and torque coefficient prediction over a previous numerical method at low cavitation numbers of less than 2.0, where severe cavitation occurs. Predicted pressure coefficient distributions are also presented. (author)

  15. Critical review of hydraulic modeling on atmospheric heat dissipation

    International Nuclear Information System (INIS)

    Onishi, Y.; Brown, S.M.

    1977-01-01

    Objectives of this study were: to define the useful roles of hydraulic modeling in understanding and predicting the atmospheric effects of heat dissipation systems; to assess the state of the art of hydraulic modeling of atmospheric phenomena; to inventory potentially useful existing hydraulic modeling facilities, both in the United States and abroad; and to scope hydraulic model studies to assist the assessment of atmospheric effects of nuclear energy centers.

  16. Predictive analytics can support the ACO model.

    Science.gov (United States)

    Bradley, Paul

    2012-04-01

    Predictive analytics can be used to rapidly spot hard-to-identify opportunities to better manage care--a key tool in accountable care. When considering analytics models, healthcare providers should: Make value-based care a priority and act on information from analytics models. Create a road map that includes achievable steps, rather than major endeavors. Set long-term expectations and recognize that the effectiveness of an analytics program takes time, unlike revenue cycle initiatives that may show a quick return.

  17. Predictive performance models and multiple task performance

    Science.gov (United States)

    Wickens, Christopher D.; Larish, Inge; Contorer, Aaron

    1989-01-01

    Five models that predict how performance of multiple tasks will interact in complex task scenarios are discussed. The models are shown in terms of the assumptions they make about human operator divided attention. The different assumptions about attention are then empirically validated in a multitask helicopter flight simulation. It is concluded from this simulation that the most important assumption relates to the coding of demand level of different component tasks.

  18. Model Predictive Control of Sewer Networks

    DEFF Research Database (Denmark)

    Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik

    2016-01-01

    The development of solutions for the management of urban drainage is of vital importance, as the amount of sewer water from urban areas continues to increase due to the growth of the world's population and changing climate conditions. How a sewer network is structured, monitored and controlled … benchmark model. Due to the inherent constraints, the applied approach is based on Model Predictive Control.

  19. A critical review of principal traffic noise models: Strategies and implications

    Energy Technology Data Exchange (ETDEWEB)

    Garg, Naveen, E-mail: ngarg@mail.nplindia.ernet.in [Apex Level Standards and Industrial Metrology Division, CSIR-National Physical Laboratory, New Delhi 110012 (India); Department of Mechanical, Production and Industrial Engineering, Delhi Technological University, Delhi 110042 (India); Maji, Sagar [Department of Mechanical, Production and Industrial Engineering, Delhi Technological University, Delhi 110042 (India)

    2014-04-01

    The paper presents an exhaustive comparison of the principal traffic noise models adopted in recent years in developed nations. The comparison is drawn on the basis of technical attributes, including source modelling and sound propagation algorithms. Although characterizing the source in terms of rolling and propulsion noise, in conjunction with advanced numerical methods for sound propagation, has significantly reduced the uncertainty in traffic noise predictions, the approach is quite complex and requires specialized mathematical skills, which can be cumbersome for town planners. It is also sometimes difficult to identify the best approach when a variety of solutions have been proposed. This paper critically reviews these aspects of the models recently developed and adopted in several countries, and discusses the strategies followed and the implications of these models. - Highlights: • Principal traffic noise models developed are reviewed. • Sound propagation algorithms used in traffic noise models are compared. • Implications of the models are discussed.

  20. Distributed Model Predictive Control via Dual Decomposition

    DEFF Research Database (Denmark)

    Biegel, Benjamin; Stoustrup, Jakob; Andersen, Palle

    2014-01-01

    This chapter presents dual decomposition as a means to coordinate a number of subsystems coupled by state and input constraints. Each subsystem is equipped with a local model predictive controller while a centralized entity manages the subsystems via prices associated with the coupling constraints...

  1. Quantum critical scaling of fidelity in BCS-like model

    International Nuclear Information System (INIS)

    Adamski, Mariusz; Jedrzejewski, Janusz; Krokhmalskii, Taras

    2013-01-01

    We study scaling of the ground-state fidelity in neighborhoods of quantum critical points in a model of interacting spinful fermions—a BCS-like model. Due to the exact diagonalizability of the model, in one and higher dimensions, scaling of the ground-state fidelity can be analyzed numerically with great accuracy, not only for small systems but also for macroscopic ones, together with the crossover region between them. Additionally, in the one-dimensional case we have been able to derive a number of analytical formulas for fidelity and show that they accurately fit our numerical results; these results are reported in the paper. Besides regular critical points and their neighborhoods, where well-known scaling laws are obeyed, there is the multicritical point and critical points in its proximity where anomalous scaling behavior is found. We also consider scaling of fidelity in neighborhoods of critical points where fidelity oscillates strongly as the system size or the chemical potential is varied. Our results for a one-dimensional version of a BCS-like model are compared with those obtained recently by Rams and Damski in similar studies of a quantum spin chain—an anisotropic XY model in a transverse magnetic field. (paper)
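
    For orientation (standard textbook definitions, not results specific to this paper), the quantities tracked in such fidelity-scaling studies are commonly written as:

```latex
% Ground-state fidelity between nearby Hamiltonian parameters \lambda and \lambda + \delta
F(\lambda,\delta) \;=\; \bigl|\langle \psi_0(\lambda) \mid \psi_0(\lambda+\delta)\rangle\bigr|,
\qquad
\ln F(\lambda,\delta) \;\simeq\; -\tfrac{1}{2}\,\delta^{2}\,\chi_F(\lambda)
\quad (\delta \to 0),
```

    where the fidelity susceptibility χ_F develops a characteristic peak, with its own finite-size scaling, at a quantum critical point; this is the behaviour probed numerically in the abstract above.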

  2. Formability prediction for AHSS materials using damage models

    Science.gov (United States)

    Amaral, R.; Santos, Abel D.; José, César de Sá; Miranda, Sara

    2017-05-01

    Advanced high strength steels (AHSS) are seeing increased use, mostly due to lightweight design in the automobile industry and strict regulations on safety and greenhouse gas emissions. However, the use of these materials, characterized by a high strength-to-weight ratio, high stiffness and high work hardening at early stages of plastic deformation, has imposed many challenges on the sheet metal industry, mainly their low formability and their different behaviour compared to traditional steels, which can make it a demanding task both to obtain a successful component and to use numerical simulation to predict material behaviour and its fracture limits. Although the numerical prediction of critical strains in sheet metal forming processes is still very often based on classic forming limit diagrams, alternative approaches can use damage models, which rely on stress states to predict failure during the forming process and can be classified as empirical, physics-based or phenomenological models. In the present paper a comparative analysis of different ductile damage models is carried out, in order to numerically evaluate two isotropic coupled damage models, proposed by Johnson-Cook and Gurson-Tvergaard-Needleman (GTN), corresponding to the first two groups of the previous classification. Finite element analysis is used with these damage mechanics approaches and the results are compared with experimental Nakajima tests, making it possible to evaluate and validate their ability to predict damage and formability limits.

  3. Formability prediction for AHSS materials using damage models

    International Nuclear Information System (INIS)

    Amaral, R.; Miranda, Sara; Santos, Abel D.; José, César de Sá

    2017-01-01

    Advanced high strength steels (AHSS) are seeing increased use, mostly due to lightweight design in the automobile industry and strict regulations on safety and greenhouse gas emissions. However, the use of these materials, characterized by a high strength-to-weight ratio, high stiffness and high work hardening at early stages of plastic deformation, has imposed many challenges on the sheet metal industry, mainly their low formability and their different behaviour compared to traditional steels, which can make it a demanding task both to obtain a successful component and to use numerical simulation to predict material behaviour and its fracture limits. Although the numerical prediction of critical strains in sheet metal forming processes is still very often based on classic forming limit diagrams, alternative approaches can use damage models, which rely on stress states to predict failure during the forming process and can be classified as empirical, physics-based or phenomenological models. In the present paper a comparative analysis of different ductile damage models is carried out, in order to numerically evaluate two isotropic coupled damage models, proposed by Johnson-Cook and Gurson-Tvergaard-Needleman (GTN), corresponding to the first two groups of the previous classification. Finite element analysis is used with these damage mechanics approaches and the results are compared with experimental Nakajima tests, making it possible to evaluate and validate their ability to predict damage and formability limits. (paper)

  4. A stepwise model to predict monthly streamflow

    Science.gov (United States)

    Mahmood Al-Juboori, Anas; Guven, Aytac

    2016-12-01

    In this study, a stepwise model empowered with genetic programming is developed to predict the monthly flows of the Hurman River in Turkey and the Diyalah and Lesser Zab Rivers in Iraq. The model divides the monthly flow data into twelve intervals representing the months of the year. The flow of a month t is considered a function of the antecedent month's flow (t − 1) and is predicted by multiplying the antecedent monthly flow by a constant value called K. The optimum value of K is obtained by a stepwise procedure which employs Gene Expression Programming (GEP) and Nonlinear Generalized Reduced Gradient Optimization (NGRGO) as alternatives to the traditional nonlinear regression technique. The coefficient of determination and the root mean squared error are used to evaluate the performance of the proposed models. The results of the proposed model are compared with the conventional Markovian and Auto Regressive Integrated Moving Average (ARIMA) models based on observed monthly flow data. The comparison, based on five different statistical measures, shows that the proposed stepwise model performed better than the Markovian and ARIMA models. The R2 values of the proposed model range between 0.81 and 0.92 for the three rivers in this study.
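
    A bare-bones version of the monthly-multiplier idea (illustrative only; the actual study obtains K via GEP and NGRGO, whereas here K is fitted per calendar month by simple least squares on synthetic data):

```python
import numpy as np

rng = np.random.default_rng(42)
years = 30
# Synthetic monthly flows, shaped (years, 12); stands in for observed data.
seasonal = 50 + 40 * np.sin(2 * np.pi * np.arange(12) / 12)
flows = seasonal * rng.lognormal(mean=0.0, sigma=0.2, size=(years, 12))

# For each calendar month m, fit K_m so that Q[m] ~ K_m * Q[m-1]
# (least-squares K_m = sum(Q_prev * Q_curr) / sum(Q_prev^2)).
K = np.zeros(12)
for m in range(12):
    q_prev = flows[:, m - 1] if m > 0 else np.roll(flows, 1, axis=0)[:, -1]
    q_curr = flows[:, m]
    K[m] = np.dot(q_prev, q_curr) / np.dot(q_prev, q_prev)

def predict_next(q_prev, month):
    """Predict the flow of calendar `month` (0-11) from the previous month's flow."""
    return K[month] * q_prev

print("fitted monthly multipliers K:", np.round(K, 2))
print("March prediction from a February flow of 80.0:", round(predict_next(80.0, 2), 1))
```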

  5. An Adaptive Critic Approach to Reference Model Adaptation

    Science.gov (United States)

    Krishnakumar, K.; Limes, G.; Gundy-Burlet, K.; Bryant, D.

    2003-01-01

    Neural networks have been successfully used for implementing control architectures for different applications. In this work, we examine a neural network augmented adaptive critic as a Level 2 intelligent controller for a C-17 aircraft. This intelligent control architecture utilizes an adaptive critic to tune the parameters of a reference model, which is then used to define the angular rate command for a Level 1 intelligent controller. The present architecture is implemented on a high-fidelity non-linear model of a C-17 aircraft. The goal of this research is to improve the performance of the C-17 under degraded conditions such as control failures and battle damage. Pilot ratings using a motion based simulation facility are included in this paper. The benefits of using an adaptive critic are documented using time response comparisons for severe damage situations.

  6. Predictive Control, Competitive Model Business Planning, and Innovation ERP

    DEFF Research Database (Denmark)

    Nourani, Cyrus F.; Lauth, Codrina

    2015-01-01

    New optimality principles are put forth based on competitive model business planning. A Generalized MinMax local optimum dynamic programming algorithm is presented and applied to business model computing, where predictive techniques can determine local optima. Based on a systems model, an enterprise is not viewed as the sum of its component elements but as the product of their interactions. The paper starts by introducing a systems approach to business modeling. A competitive business modeling technique, based on the author's planning techniques, is applied. Systemic decisions are based on common …

  7. A formal approach for the prediction of the critical heat flux in subcooled water

    Energy Technology Data Exchange (ETDEWEB)

    Lombardi, C. [Polytechnic of Milan (Italy)]

    1995-09-01

    The critical heat flux (CHF) in subcooled water at high mass fluxes is not yet satisfactorily correlated. To this end, a formal approach is followed here, based on an extension of the parameters and the correlation used for dryout prediction in medium-to-high quality mixtures. The resulting correlation, in spite of its simplicity and its explicit form, yields satisfactory predictions, also when applied to more conventional CHF data at low-to-medium mass fluxes and high pressures. Further improvements are possible if a more complete data bank becomes available. The main open item is the definition of a criterion, depending only on independent parameters such as mass flux, pressure, inlet subcooling and geometry, to predict whether the heat transfer crisis will occur as a DNB or a dryout phenomenon.

  8. A formal approach for the prediction of the critical heat flux in subcooled water

    International Nuclear Information System (INIS)

    Lombardi, C.

    1995-01-01

    The critical heat flux (CHF) in subcooled water at high mass fluxes is not yet satisfactorily correlated. To this end, a formal approach is followed here, based on an extension of the parameters and the correlation used for dryout prediction in medium-to-high quality mixtures. The resulting correlation, in spite of its simplicity and its explicit form, yields satisfactory predictions, also when applied to more conventional CHF data at low-to-medium mass fluxes and high pressures. Further improvements are possible if a more complete data bank becomes available. The main open item is the definition of a criterion, depending only on independent parameters such as mass flux, pressure, inlet subcooling and geometry, to predict whether the heat transfer crisis will occur as a DNB or a dryout phenomenon.

  9. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    International Nuclear Information System (INIS)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-01-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but a significant step forward for the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance makes it possible to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta-parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
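
    As a small illustration of the likelihood-based evaluation argued for here (a generic Gaussian predictive model on synthetic data, not the authors' gas distribution algorithm), held-out measurements can be scored against a model's predictive mean and variance:

```python
import numpy as np

def gaussian_log_likelihood(y, mean, var):
    """Log-likelihood of held-out measurements y under a Gaussian predictive
    distribution with per-point predictive mean and variance."""
    var = np.maximum(var, 1e-12)  # numerical guard against zero variance
    return np.sum(-0.5 * (np.log(2 * np.pi * var) + (y - mean) ** 2 / var))

# Synthetic held-out concentrations and two hypothetical models' predictions.
rng = np.random.default_rng(3)
y = rng.gamma(shape=2.0, scale=1.0, size=200)

mean_a, var_a = np.full_like(y, y.mean()), np.full_like(y, y.var())  # mean + variance model
mean_b, var_b = np.full_like(y, y.mean()), np.full_like(y, 0.1)      # overconfident model

print("model A log-likelihood:", round(gaussian_log_likelihood(y, mean_a, var_a), 1))
print("model B log-likelihood:", round(gaussian_log_likelihood(y, mean_b, var_b), 1))
```

    The point of the comparison is that a model whose variance estimate matches the data earns a higher likelihood than an overconfident one, even when both predict the same mean.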

  10. relevance of information warfare models to critical infrastructure

    African Journals Online (AJOL)

    ismith

    Critical infrastructure models, strategies and policies should take information … gain an advantage over a competitor or adversary through the use of one's own … digital communications system, where the vehicles are analogous to bits or packets, … performance degraded, causing an increase in traffic finding a new route.

  11. Interpretive and Critical Phenomenological Crime Studies: A Model Design

    Science.gov (United States)

    Miner-Romanoff, Karen

    2012-01-01

    The critical and interpretive phenomenological approach is underutilized in the study of crime. This commentary describes this approach, guided by the question, "Why are interpretive phenomenological methods appropriate for qualitative research in criminology?" Therefore, the purpose of this paper is to describe a model of the interpretive…

  12. A Model for Teaching Critical Thinking through Online Searching.

    Science.gov (United States)

    Crane, Beverley; Markowitz, Nancy Lourie

    1994-01-01

    Presents a model that uses online searching to teach critical thinking skills in elementary and secondary education based on Bloom's taxonomy. Three levels of activity are described: analyzing a search statement; defining and clarifying a problem; and focusing an information need. (Contains 13 references.) (LRW)

  13. Electrostatic ion thrusters - towards predictive modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)

    2014-02-15

    The development of electrostatic ion thrusters so far has mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This resulted in considerable effort regarding prototype design, construction and testing and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and space craft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules a new quality in the description of electrostatic thrusters can be reached. These open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  14. An Intelligent Model for Stock Market Prediction

    Directory of Open Access Journals (Sweden)

    IbrahimM. Hamed

    2012-08-01

    Full Text Available This paper presents an intelligent model for stock market signal prediction using Multi-Layer Perceptron (MLP) Artificial Neural Networks (ANN). A blind source separation technique from signal processing is integrated with the learning phase of the constructed baseline MLP ANN to overcome the problems of prediction accuracy and lack of generalization. Kullback-Leibler Divergence (KLD) is used as a learning algorithm because it converges quickly and provides generalization in the learning mechanism. Both the accuracy and the efficiency of the proposed model were confirmed through the Microsoft stock, from the Wall Street market, and various data sets from different sectors of the Egyptian stock market. In addition, sensitivity analysis was conducted on the various parameters of the model to ensure coverage of the generalization issue. Finally, statistical significance was examined using an ANOVA test.

  15. Efficient model learning methods for actor-critic control.

    Science.gov (United States)

    Grondman, Ivo; Vaandrager, Maarten; Buşoniu, Lucian; Babuska, Robert; Schuitema, Erik

    2012-06-01

    We propose two new actor-critic algorithms for reinforcement learning. Both algorithms use local linear regression (LLR) to learn approximations of the functions involved. A crucial feature of the algorithms is that they also learn a process model, and this, in combination with LLR, provides an efficient policy update for faster learning. The first algorithm uses a novel model-based update rule for the actor parameters. The second algorithm does not use an explicit actor but learns a reference model which represents a desired behavior, from which desired control actions can be calculated using the inverse of the learned process model. The two novel methods and a standard actor-critic algorithm are applied to the pendulum swing-up problem, in which the novel methods achieve faster learning than the standard algorithm.

  16. Predictive Models, How good are they?

    DEFF Research Database (Denmark)

    Kasch, Helge

    The WAD grading system has been used for more than 20 years by now. It has shown long-term viability, but with both strengths and limitations. New bio-psychosocial assessment of the acutely whiplash-injured subject may provide better prediction of long-term disability and pain. Furthermore, the emerging … -up. It is important to obtain prospective identification of the relevant risk … underreported disability could, if we were able to expose these hidden "risk factors" during our consultations, provide us with better predictive models. New data from large clinical studies will present exciting new genetic risk markers...

  17. NONLINEAR MODEL PREDICTIVE CONTROL OF CHEMICAL PROCESSES

    Directory of Open Access Journals (Sweden)

    SILVA R. G.

    1999-01-01

    Full Text Available A new algorithm for model predictive control is presented. The algorithm utilizes a simultaneous solution and optimization strategy to solve the model's differential equations. The equations are discretized by equidistant collocation and, along with the algebraic model equations, are included as constraints in a nonlinear programming (NLP) problem. This algorithm is compared with the algorithm that uses orthogonal collocation on finite elements. The equidistant collocation algorithm results in simpler equations, providing a decrease in computation time for the control moves. Simulation results are presented and show a satisfactory performance of this algorithm.
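
    As a rough illustration of the idea of turning discretized model equations into NLP constraints, the following sketch sets up a small predictive-control problem with defect constraints enforced at equidistant nodes and solves it with SciPy's SLSQP solver. It uses simple trapezoidal defects as a stand-in for the paper's equidistant collocation, and the toy process model is assumed for illustration only.

```python
# Minimal sketch (illustrative assumptions, not the paper's algorithm): a
# one-horizon model predictive controller that discretises dx/dt = f(x, u)
# at equidistant nodes and includes the resulting defect equations as
# equality constraints of an NLP solved with SciPy's SLSQP method.
import numpy as np
from scipy.optimize import minimize

def f(x, u):                      # toy scalar process model
    return -x + u

N, dt, x0, x_ref = 10, 0.1, 1.0, 0.0

def unpack(z):                    # decision vector: states x_1..x_N, inputs u_0..u_{N-1}
    return z[:N], z[N:]

def defects(z):                   # trapezoidal defect at each equidistant node
    x, u = unpack(z)
    xs = np.concatenate([[x0], x])
    return np.array([xs[k+1] - xs[k] - 0.5*dt*(f(xs[k], u[k]) + f(xs[k+1], u[k]))
                     for k in range(N)])

def cost(z):
    x, u = unpack(z)
    return np.sum((x - x_ref)**2) + 0.1*np.sum(u**2)

z0 = np.zeros(2*N)
sol = minimize(cost, z0, constraints={'type': 'eq', 'fun': defects},
               bounds=[(-5, 5)]*N + [(-2, 2)]*N, method='SLSQP')
u_opt = unpack(sol.x)[1]
print("first control move:", u_opt[0])
```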

  18. A statistical model for predicting muscle performance

    Science.gov (United States)

    Byerly, Diane Leslie De Caix

    The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th order Autoregressive (AR) model and statistical regression analysis. It was determined that an AR derived parameter, the mean average magnitude of AR poles, significantly correlated with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-space flight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned enhancing astronaut performance and safety. Potential ground-based, medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing safety guidelines in the industry for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing
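
    The key quantity in the abstract, the mean magnitude of the AR poles, is straightforward to compute once a 5th order AR model has been fitted to an SEMG window. The sketch below shows one way to do this on hypothetical data; the least-squares fitting route and the random stand-in signal are assumptions, not the study's actual pipeline.

```python
# Minimal sketch (hypothetical data): fit a 5th order autoregressive model to
# an SEMG window by least squares and compute the mean magnitude of the AR
# poles, the parameter reported to correlate with repetitions-to-failure.
import numpy as np

def ar_pole_mean_magnitude(x, order=5):
    x = np.asarray(x, dtype=float)
    # build the regression x[n] = a1*x[n-1] + ... + a_p*x[n-p]
    X = np.column_stack([x[order - i - 1:len(x) - i - 1] for i in range(order)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    # poles are the roots of z^p - a1*z^(p-1) - ... - a_p
    poles = np.roots(np.concatenate([[1.0], -a]))
    return np.mean(np.abs(poles))

rng = np.random.default_rng(0)
window = rng.normal(size=1000)          # stand-in for one SEMG window
print(ar_pole_mean_magnitude(window))
```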

  19. Critical Business Requirements Model and Metrics for Intranet ROI

    OpenAIRE

    Luqi; Jacoby, Grant A.

    2005-01-01

    Journal of Electronic Commerce Research, Vol. 6, No. 1, pp. 1-30. This research provides the first theoretical model, the Intranet Efficiency and Effectiveness Model (IEEM), to measure intranet overall value contributions based on a corporation’s critical business requirements by applying a balanced baseline of metrics and conversion ratios linked to key business processes of knowledge workers, IT managers and business decision makers -- in effect, closing the gap of understanding...

  20. A 3-D CFD approach to the mechanistic prediction of forced convective critical heat flux at low quality

    International Nuclear Information System (INIS)

    Jean-Marie Le Corre; Cristina H Amon; Shi-Chune Yao

    2005-01-01

    Full text of publication follows: The prediction of the Critical Heat Flux (CHF) in a heat flux controlled boiling heat exchanger is important to assess the maximal thermal capability of the system. In the case of a nuclear reactor, CHF margin gain (using an improved mixing vane grid design, for instance) can allow power uprates and enhanced operating flexibility. In general, current nuclear core design procedures use a quasi-1D approach to model the coolant thermal-hydraulic conditions within the fuel bundles, coupled with fully empirical CHF prediction methods. In addition, several CHF mechanistic models have been developed in the past and coupled with 1D and quasi-1D thermal-hydraulic codes. These mechanistic models have demonstrated reasonable CHF prediction characteristics and, more remarkably, correct parametric trends over a wide range of fluid conditions. However, since the phenomena leading to CHF are localized near the heater, models are needed to relate local quantities of interest to area-averaged quantities. As a consequence, large CHF prediction uncertainties may be introduced and 3D fluid characteristics (such as swirling flow) cannot be properly accounted for. Therefore, a fully mechanistic approach to CHF prediction is, in general, not possible using the current approach. The development of CHF-enhanced fuel assembly designs requires the use of more advanced 3D coolant properties computations coupled with CHF mechanistic modeling. In the present work, the commercial CFD code CFX-5 is used to compute 3D coolant conditions in a vertical heated tube with upward flow. Several CHF mechanistic models at low quality available in the literature are coupled with the CFD code by developing adequate models between local coolant properties and local parameters of interest to predict CHF. The prediction performances of these models are assessed using CHF databases available in the open literature and the 1995 CHF look-up table. Since CFD can reasonably capture 3D fluid

  1. An improved mechanistic critical heat flux model for subcooled flow boiling

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Young Min [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of); Chang, Soon Heung [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1997-12-31

    Based on the bubble coalescence adjacent to the heated wall as a flow structure for the CHF condition, Chang and Lee developed a mechanistic critical heat flux (CHF) model for subcooled flow boiling. In this paper, improvements of the Chang-Lee model are implemented with more solid theoretical bases for subcooled and low-quality flow boiling in tubes. Nedderman-Shearer's equations for the skin friction factor and universal velocity profile models are employed. The slip effect of the movable bubbly layer is implemented to improve the predictability at low mass flow. Also, a mechanistic subcooled flow boiling model is used to predict the flow quality and void fraction. The performance of the present model is verified using the KAIST CHF database of water in uniformly heated tubes. It is found that the present model can give a satisfactory agreement with experimental data within less than 9% RMS error. 9 refs., 5 figs. (Author)

  2. An improved mechanistic critical heat flux model for subcooled flow boiling

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Young Min [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of); Chang, Soon Heung [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1998-12-31

    Based on the bubble coalescence adjacent to the heated wall as a flow structure for the CHF condition, Chang and Lee developed a mechanistic critical heat flux (CHF) model for subcooled flow boiling. In this paper, improvements of the Chang-Lee model are implemented with more solid theoretical bases for subcooled and low-quality flow boiling in tubes. Nedderman-Shearer's equations for the skin friction factor and universal velocity profile models are employed. The slip effect of the movable bubbly layer is implemented to improve the predictability at low mass flow. Also, a mechanistic subcooled flow boiling model is used to predict the flow quality and void fraction. The performance of the present model is verified using the KAIST CHF database of water in uniformly heated tubes. It is found that the present model can give a satisfactory agreement with experimental data within less than 9% RMS error. 9 refs., 5 figs. (Author)

  3. Self-organized Criticality Model for Ocean Internal Waves

    International Nuclear Information System (INIS)

    Wang Gang; Hou Yijun; Lin Min; Qiao Fangli

    2009-01-01

    In this paper, we present a simple spring-block model for ocean internal waves based on self-organized criticality (SOC). The oscillations of the water blocks in the model display power-law behavior with an exponent of -2 in the frequency domain, which is similar to the current and sea water temperature spectra observed in the actual ocean and to the universal Garrett and Munk deep ocean internal wave model [Geophysical Fluid Dynamics 2 (1972) 225; J. Geophys. Res. 80 (1975) 291]. The influence of the ratio of the driving force to the spring coefficient on the SOC behavior of the model is also discussed. (general)
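
    The reported exponent of -2 can be checked on any simulated block-oscillation record by fitting a line to the log-log power spectrum. The sketch below illustrates the procedure on a synthetic Brownian-motion-like signal (whose spectrum also falls off as f^-2); it is not the paper's spring-block simulation.

```python
# Minimal sketch (synthetic signal, not the paper's spring-block model):
# estimate the spectral exponent of a time series by fitting a straight line
# to the log-log power spectrum; a slope near -2 would match the behaviour
# reported above.
import numpy as np

rng = np.random.default_rng(0)
# Brownian-motion-like signal, whose power spectrum falls off roughly as f^-2
x = np.cumsum(rng.normal(size=2**14))

freqs = np.fft.rfftfreq(x.size, d=1.0)[1:]        # drop the DC bin
power = np.abs(np.fft.rfft(x))[1:] ** 2
slope, _ = np.polyfit(np.log(freqs), np.log(power), 1)
print("estimated spectral exponent:", round(slope, 2))   # expect ~ -2
```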

  4. Evaluating Models of Human Performance: Safety-Critical Systems Applications

    Science.gov (United States)

    Feary, Michael S.

    2012-01-01

    This presentation is part of a panel discussion on Evaluating Models of Human Performance. The purpose of this panel is to discuss the increasing use of models in the world today and specifically focus on how to describe and evaluate models of human performance. My presentation will focus on discussions of generating distributions of performance, and on the evaluation of different strategies for humans performing tasks with mixed-initiative (Human-Automation) systems. I will also discuss issues with how to provide Human Performance modeling data to support decisions on acceptability and tradeoffs in the design of safety-critical systems. I will conclude with challenges for the future.

  5. Current algebra of WZNW models at and away from criticality

    International Nuclear Information System (INIS)

    Abdalla, E.; Forger, M.

    1992-01-01

    In this paper, the authors derive the current algebra of principal chiral models with a Wess-Zumino term. At the critical coupling where the model becomes conformally invariant (Wess-Zumino-Novikov-Witten theory), this algebra reduces to two commuting Kac-Moody algebras, while in the limit where the coupling constant is taken to zero (ordinary chiral model), we recover the current algebra of that model. In this way, the latter is explicitly realized as a deformation of the former, with the coupling constant as the deformation parameter

  6. Prediction models : the right tool for the right problem

    NARCIS (Netherlands)

    Kappen, Teus H.; Peelen, Linda M.

    2016-01-01

    PURPOSE OF REVIEW: Perioperative prediction models can help to improve personalized patient care by providing individual risk predictions to both patients and providers. However, the scientific literature on prediction model development and validation can be quite technical and challenging to

  7. Neuro-fuzzy modeling in bankruptcy prediction

    Directory of Open Access Journals (Sweden)

    Vlachos D.

    2003-01-01

    Full Text Available For the past 30 years the problem of bankruptcy prediction has been thoroughly studied. From the paper of Altman in 1968 to the recent papers in the '90s, the progress in prediction accuracy was not satisfactory. This paper investigates an alternative modeling of the system (firm), combining neural networks and fuzzy controllers, i.e. using neuro-fuzzy models. Classical modeling is based on mathematical models that describe the behavior of the firm under consideration. The main idea of fuzzy control, on the other hand, is to build a model of a human control expert who is capable of controlling the process without thinking in terms of a mathematical model. This control expert specifies his control action in the form of linguistic rules. These control rules are translated into the framework of fuzzy set theory, providing a calculus which can simulate the behavior of the control expert and enhance its performance. The accuracy of the model is studied using datasets from previous research papers.

  8. Logistic regression modelling: procedures and pitfalls in developing and interpreting prediction models

    Directory of Open Access Journals (Sweden)

    Nataša Šarlija

    2017-01-01

    Full Text Available This study sheds light on the most common issues related to applying logistic regression in prediction models for company growth. The purpose of the paper is (1) to provide a detailed demonstration of the steps in developing a growth prediction model based on logistic regression analysis, (2) to discuss common pitfalls and methodological errors in developing a model, and (3) to provide solutions and possible ways of overcoming these issues. Special attention is devoted to the question of satisfying logistic regression assumptions, selecting and defining dependent and independent variables, using classification tables and ROC curves for reporting model strength, interpreting odds ratios as effect measures and evaluating the performance of the prediction model. Development of a logistic regression model in this paper focuses on a prediction model of company growth. The analysis is based on predominantly financial data from a sample of 1471 small and medium-sized Croatian companies active between 2009 and 2014. The financial data are presented in the form of financial ratios divided into nine main groups depicting the following areas of business: liquidity, leverage, activity, profitability, research and development, investing and export. The growth prediction model indicates aspects of a business critical for achieving high growth. In that respect, the contribution of this paper is twofold: first, methodological, in terms of pointing out pitfalls and potential solutions in logistic regression modelling, and secondly, theoretical, in terms of identifying factors responsible for high growth of small and medium-sized companies.
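
    The workflow described above (fit a logistic regression, report odds ratios, evaluate discrimination with a ROC curve and AUC) can be summarized in a few lines. The sketch below uses synthetic stand-ins for the financial ratios and scikit-learn defaults; only the sample size is taken from the abstract, everything else is assumed for illustration.

```python
# Minimal sketch (synthetic data, assumed feature stand-ins): the basic
# workflow the paper walks through: fit a logistic regression growth model,
# report odds ratios, and evaluate discrimination with ROC/AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 1471                                     # sample size quoted in the abstract
X = rng.normal(size=(n, 4))                  # stand-ins for financial ratios
logit = 0.8*X[:, 0] - 0.5*X[:, 1] + 0.3*X[:, 2]
y = rng.binomial(1, 1/(1 + np.exp(-logit)))  # 1 = high-growth company

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

odds_ratios = np.exp(model.coef_.ravel())    # effect measures per ratio
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print("odds ratios:", odds_ratios.round(2), "AUC:", round(auc, 3))
```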

  9. Critical behavior of the Schwinger model with Wilson fermions

    International Nuclear Information System (INIS)

    Azcoiti, V.; Laliena, V.

    1995-09-01

    A detailed analysis, in the framework of the MFA approach, of the critical behaviour of the lattice Schwinger model with Wilson fermions on lattices up to 24², through the study of the Lee-Yang zeros and the specific heat, is presented. Compelling evidence is found for a critical line ending at k = 0.25 at large β. Finite size scaling analysis on lattices 8², 12², 16², 20² and 24² indicates a continuous transition. The hyperscaling relation is verified in the explored β region

  10. Database and prediction model for CANDU pressure tube diameter

    Energy Technology Data Exchange (ETDEWEB)

    Jung, J.Y.; Park, J.H. [Korea Atomic Energy Research Inst., Daejeon (Korea, Republic of)

    2014-07-01

    The pressure tube (PT) diameter is basic input data for evaluating the CCP (critical channel power) of a CANDU reactor. Since the CCP affects the operational margin directly, an accurate prediction of the PT diameter is important to assess the operational margin. However, the PT diameter increases by creep owing to the effects of irradiation by neutron flux, stress, and reactor operating temperatures during the plant service period. Thus, it has been necessary to collect the measured PT diameter data, establish a database (DB), and develop a prediction model for the PT diameter. Accordingly, in this study, a DB for the measured PT diameter data was established and a neural network (NN) based diameter prediction model was developed. The established DB included not only the measured diameter data but also operating conditions such as the temperature, pressure, flux, and effective full power date. The currently developed NN based diameter prediction model considers only extrinsic variables such as the operating conditions, and will be enhanced to consider the effect of intrinsic variables such as the micro-structure of the PT material. (author)

  11. Critical heat flux prediction by using radial basis function and multilayer perceptron neural networks: A comparison study

    International Nuclear Information System (INIS)

    Vaziri, Nima; Hojabri, Alireza; Erfani, Ali; Monsefi, Mehrdad; Nilforooshan, Behnam

    2007-01-01

    Critical heat flux (CHF) is an important parameter for the design of nuclear reactors. Although many experimental and theoretical studies have been performed, there is no single correlation to predict CHF because it is influenced by many parameters. These parameters are based on fixed inlet, local and fixed outlet conditions. Artificial neural networks (ANNs) have been applied to a wide variety of areas such as prediction, approximation, modeling and classification. In this study, two types of neural networks, radial basis function (RBF) and multilayer perceptron (MLP), are trained with the experimental CHF data and their performances are compared. RBF predicts CHF with root mean square (RMS) errors of 0.24%, 7.9% and 0.16%, and MLP predicts CHF with RMS errors of 1.29%, 8.31% and 2.71%, in fixed inlet conditions, local conditions and fixed outlet conditions, respectively. The results show that neural networks with an RBF structure have superior performance in CHF data prediction over MLP neural networks. The parametric trends of CHF obtained by the trained ANNs are also evaluated and the results are reported
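
    To make the RBF-versus-MLP comparison concrete, the sketch below fits a simple Gaussian radial basis function network (k-means centres plus a linear readout) and an MLP regressor to the same synthetic data and reports their RMS errors. The data, network sizes and kernel width are illustrative assumptions, not the CHF database or the networks used in the study.

```python
# Minimal sketch (synthetic stand-in data, not the CHF database): compare a
# simple Gaussian RBF network (k-means centres + least-squares readout)
# against an MLP regressor on the same regression task and report RMS errors.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(size=(600, 3))                       # e.g. scaled pressure, mass flux, quality
y = np.sin(3*X[:, 0]) + X[:, 1]**2 - 0.5*X[:, 2]     # stand-in for CHF
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

def rbf_fit_predict(X_tr, y_tr, X_te, n_centers=30, gamma=10.0):
    centers = KMeans(n_clusters=n_centers, n_init=10, random_state=1).fit(X_tr).cluster_centers_
    def design(X):
        d2 = ((X[:, None, :] - centers[None, :, :])**2).sum(-1)
        return np.exp(-gamma * d2)
    w, *_ = np.linalg.lstsq(design(X_tr), y_tr, rcond=None)
    return design(X_te) @ w

rms = lambda e: np.sqrt(np.mean(e**2))
pred_rbf = rbf_fit_predict(X_tr, y_tr, X_te)
pred_mlp = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000,
                        random_state=1).fit(X_tr, y_tr).predict(X_te)
print("RBF RMS:", rms(pred_rbf - y_te), "MLP RMS:", rms(pred_mlp - y_te))
```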

  12. SLE in self-dual critical Z(N) spin systems: CFT predictions

    International Nuclear Information System (INIS)

    Santachiara, Raoul

    2008-01-01

    The Schramm-Loewner evolution (SLE) describes the continuum limit of domain walls at phase transitions in two-dimensional statistical systems. We consider here the SLE in Z(N) spin models at their self-dual critical point. For N=2 and N=3 these models correspond to the Ising and three-state Potts model. For N≥4 the critical self-dual Z(N) spin models are described in the continuum limit by non-minimal conformal field theories with central charge c≥1. By studying the representations of the corresponding chiral algebra, we show that two particular operators satisfy a two-level null vector condition which, for N≥4, presents an additional term coming from the action of the extra symmetry currents. For N=2,3 these operators correspond to the boundary condition changing operators associated to the SLE_{16/3} (Ising model) and to the SLE_{24/5} and SLE_{10/3} (three-state Potts model). We suggest a definition of the interfaces within the Z(N) lattice models. The scaling limit of these interfaces is expected to be described at the self-dual critical point and for N≥4 by the SLE_{4(N+1)/(N+2)} and SLE_{4(N+2)/(N+1)} processes

  13. General correlation for prediction of critical heat flux ratio in water cooled channels

    Energy Technology Data Exchange (ETDEWEB)

    Pernica, R.; Cizek, J.

    1995-09-01

    The paper presents a general empirical Critical Heat Flux Ratio (CHFR) correlation which is valid for vertical water upflow through tubes, internally heated concentric annuli and rod bundle geometries with both wide and very tight square and triangular rod lattices. The proposed general PG correlation directly predicts the CHFR; it comprises axially and radially non-uniform heating, and is valid over a wider range of thermal-hydraulic conditions than previously published critical heat flux correlations. The PG correlation has been developed using the Czech critical heat flux data bank, which includes more than 9500 experimental data points on tubes, 7600 on rod bundles and 713 on internally heated concentric annuli. The accuracy of the CHFR prediction, statistically assessed by the constant dryout conditions approach, is characterized by a mean value nearing 1.00 and a standard deviation of less than 0.06. Moreover, a subchannel form of the PG correlation has been statistically verified on the Westinghouse and Combustion Engineering rod bundle data bases, i.e. more than 7000 experimental CHF points of the Columbia University data bank were used.

  14. Prediction of shear resistance factor in flat slabs design using critical

    African Journals Online (AJOL)

    user

    The provisions of the American, Canadian, European and Model codes, regarding the ... is the applied shear stress. W1 .... perimeter implies a smaller stress while a smaller critical .... should be compared with values provided in this work to validate ... [1] American Concrete Institute : Building code requirement for structural ...

  15. Predictive Models for Carcinogenicity and Mutagenicity ...

    Science.gov (United States)

    Mutagenicity and carcinogenicity are endpoints of major environmental and regulatory concern. These endpoints are also important targets for development of alternative methods for screening and prediction due to the large number of chemicals of potential concern and the tremendous cost (in time, money, animals) of rodent carcinogenicity bioassays. Both mutagenicity and carcinogenicity involve complex, cellular processes that are only partially understood. Advances in technologies and generation of new data will permit a much deeper understanding. In silico methods for predicting mutagenicity and rodent carcinogenicity based on chemical structural features, along with current mutagenicity and carcinogenicity data sets, have performed well for local prediction (i.e., within specific chemical classes), but are less successful for global prediction (i.e., for a broad range of chemicals). The predictivity of in silico methods can be improved by improving the quality of the data base and endpoints used for modelling. In particular, in vitro assays for clastogenicity need to be improved to reduce false positives (relative to rodent carcinogenicity) and to detect compounds that do not interact directly with DNA or have epigenetic activities. New assays emerging to complement or replace some of the standard assays include VitotoxTM, GreenScreenGC, and RadarScreen. The needs of industry and regulators to assess thousands of compounds necessitate the development of high-t

  16. The critical domain size of stochastic population models.

    Science.gov (United States)

    Reimer, Jody R; Bonsall, Michael B; Maini, Philip K

    2017-02-01

    Identifying the critical domain size necessary for a population to persist is an important question in ecology. Both demographic and environmental stochasticity impact a population's ability to persist. Here we explore ways of including this variability. We study populations with distinct dispersal and sedentary stages, which have traditionally been modelled using a deterministic integrodifference equation (IDE) framework. Individual-based models (IBMs) are the most intuitive stochastic analogues to IDEs but yield few analytic insights. We explore two alternate approaches; one is a scaling up to the population level using the Central Limit Theorem, and the other a variation on both Galton-Watson branching processes and branching processes in random environments. These branching process models closely approximate the IBM and yield insight into the factors determining the critical domain size for a given population subject to stochasticity.
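
    A minimal version of the individual-based / branching-process picture described above can be simulated directly: individuals reproduce with Poisson offspring numbers, offspring disperse by a Gaussian kernel, and anything landing outside the domain [0, L] is lost. The parameters below are illustrative assumptions; the sketch only shows how persistence probability can be traced against domain size.

```python
# Minimal sketch (illustrative parameters): an individual-based branching
# simulation with Poisson reproduction and Gaussian dispersal on a domain
# [0, L]; offspring landing outside the domain are lost. Repeating over L
# traces how persistence depends on domain size.
import numpy as np

rng = np.random.default_rng(0)

def persists(L, mean_offspring=1.5, sigma=1.0, n0=20, generations=50):
    x = rng.uniform(0, L, size=n0)                 # initial positions
    for _ in range(generations):
        if x.size == 0:
            return False
        kids = rng.poisson(mean_offspring, size=x.size)
        parents = np.repeat(x, kids)
        x = parents + rng.normal(0, sigma, size=parents.size)
        x = x[(x >= 0) & (x <= L)]                 # dispersal mortality at the boundary
        if x.size > 5000:                          # cap to keep the toy run cheap
            x = rng.choice(x, 5000, replace=False)
    return x.size > 0

for L in (1, 2, 4, 8):
    p = np.mean([persists(L) for _ in range(50)])
    print(f"L = {L}: persistence probability ~ {p:.2f}")
```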

  17. New relation for critical exponents in the Ising model

    International Nuclear Information System (INIS)

    Pishtshev, A.

    2007-01-01

    The Ising model in a transverse field is considered at T=0. From the analysis of the power-law behaviors of the energy gap and the order parameter as functions of the field, a new relation between the respective critical exponents, β ≥ 1/(8s²), is derived. By using the Suzuki equivalence, from this inequality a new relation for critical exponents in the Ising model, β ≥ 1/(8ν²), is obtained. A number of numerical examples for different cases illustrates the generality and validity of the relation. By applying this relation the estimation ν = (1/4)^(1/3) ≈ 0.62996 for the 3D Ising model is proposed

  18. Prediction models in in vitro fertilization; where are we? A mini review

    Directory of Open Access Journals (Sweden)

    Laura van Loendersloot

    2014-05-01

    Full Text Available Since the introduction of in vitro fertilization (IVF) in 1978, over five million babies have been born worldwide using IVF. Contrary to the perception of many, IVF does not guarantee success. Almost 50% of couples that start IVF will remain childless, even if they undergo multiple IVF cycles. The decision to start or continue with IVF is challenging due to the high cost, the burden of the treatment, and the uncertain outcome. In optimal counseling on the chances of a pregnancy with IVF, prediction models may play a role, since doctors are not able to correctly predict pregnancy chances. There are three phases of prediction model development: model derivation, model validation, and impact analysis. This review provides an overview of predictive factors in IVF and the available prediction models in IVF, and provides key principles that can be used to critically appraise the literature on prediction models in IVF. We will address these points by the three phases of model development.

  19. Modeling of criticality accidents and their environmental consequences

    International Nuclear Information System (INIS)

    Thomas, W.; Gmal, B.

    1987-01-01

    In the Federal Republic of Germany, potential radiological consequences of accidental nuclear criticality have to be evaluated in the licensing procedure for fuel cycle facilities. A prerequisite to this evaluation is to establish conceivable accident scenarios. First, possibilities for a criticality exceeding the generally applied double contingency principle of safety are identified by screening the equipment and operation of the facility. Identification of undetected accumulations of fissile material or incorrect transfer of fissile solution to unfavorable geometry normally are most important. Second, relevant and credible scenarios causing the most severe consequences are derived from these possibilities. For the identified relevant scenarios, time-dependent fission rates and reasonable numbers for peak power and total fissions must be determined. Experience from real accidents and experiments (KEWB, SPERT, CRAC, SILENE) has been evaluated using empirical formulas. To model the time-dependent behavior of criticality excursions in fissile solutions, a computer program FELIX has been developed

  20. Validated predictive modelling of the environmental resistome.

    Science.gov (United States)

    Amos, Gregory C A; Gozzard, Emma; Carter, Charlotte E; Mead, Andrew; Bowes, Mike J; Hawkey, Peter M; Zhang, Lihong; Singer, Andrew C; Gaze, William H; Wellington, Elizabeth M H

    2015-06-01

    Multi-drug-resistant bacteria pose a significant threat to public health. The role of the environment in the overall rise in antibiotic-resistant infections and risk to humans is largely unknown. This study aimed to evaluate drivers of antibiotic-resistance levels across the River Thames catchment, model key biotic, spatial and chemical variables and produce predictive models for future risk assessment. Sediment samples from 13 sites across the River Thames basin were taken at four time points across 2011 and 2012. Samples were analysed for class 1 integron prevalence and enumeration of third-generation cephalosporin-resistant bacteria. Class 1 integron prevalence was validated as a molecular marker of antibiotic resistance; levels of resistance showed significant geospatial and temporal variation. The main explanatory variables of resistance levels at each sample site were the number, proximity, size and type of surrounding wastewater-treatment plants. Model 1 revealed treatment plants accounted for 49.5% of the variance in resistance levels. Other contributing factors were extent of different surrounding land cover types (for example, Neutral Grassland), temporal patterns and prior rainfall; when modelling all variables the resulting model (Model 2) could explain 82.9% of variations in resistance levels in the whole catchment. Chemical analyses correlated with key indicators of treatment plant effluent and a model (Model 3) was generated based on water quality parameters (contaminant and macro- and micro-nutrient levels). Model 2 was beta tested on independent sites and explained over 78% of the variation in integron prevalence showing a significant predictive ability. We believe all models in this study are highly useful tools for informing and prioritising mitigation strategies to reduce the environmental resistome.

  1. Nonlinear model predictive control theory and algorithms

    CERN Document Server

    Grüne, Lars

    2017-01-01

    This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...

  2. Baryogenesis model predicting antimatter in the Universe

    International Nuclear Information System (INIS)

    Kirilova, D.

    2003-01-01

    Cosmic ray and gamma-ray data do not rule out antimatter domains in the Universe, separated at distances bigger than 10 Mpc from us. Hence, it is interesting to analyze the possible generation of vast antimatter structures during the early Universe evolution. We discuss a SUSY-condensate baryogenesis model, predicting large separated regions of matter and antimatter. The model provides generation of the small locally observed baryon asymmetry for natural initial conditions, and it predicts vast antimatter domains, separated from the matter ones by baryonically empty voids. The characteristic scale of antimatter regions and their distance from the matter ones is in accordance with observational constraints from cosmic ray, gamma-ray and cosmic microwave background anisotropy data

  3. Computational neurorehabilitation: modeling plasticity and learning to predict recovery.

    Science.gov (United States)

    Reinkensmeyer, David J; Burdet, Etienne; Casadio, Maura; Krakauer, John W; Kwakkel, Gert; Lang, Catherine E; Swinnen, Stephan P; Ward, Nick S; Schweighofer, Nicolas

    2016-04-30

    Despite progress in using computational approaches to inform medicine and neuroscience in the last 30 years, there have been few attempts to model the mechanisms underlying sensorimotor rehabilitation. We argue that a fundamental understanding of neurologic recovery, and as a result accurate predictions at the individual level, will be facilitated by developing computational models of the salient neural processes, including plasticity and learning systems of the brain, and integrating them into a context specific to rehabilitation. Here, we therefore discuss Computational Neurorehabilitation, a newly emerging field aimed at modeling plasticity and motor learning to understand and improve movement recovery of individuals with neurologic impairment. We first explain how the emergence of robotics and wearable sensors for rehabilitation is providing data that make development and testing of such models increasingly feasible. We then review key aspects of plasticity and motor learning that such models will incorporate. We proceed by discussing how computational neurorehabilitation models relate to the current benchmark in rehabilitation modeling - regression-based, prognostic modeling. We then critically discuss the first computational neurorehabilitation models, which have primarily focused on modeling rehabilitation of the upper extremity after stroke, and show how even simple models have produced novel ideas for future investigation. Finally, we conclude with key directions for future research, anticipating that soon we will see the emergence of mechanistic models of motor recovery that are informed by clinical imaging results and driven by the actual movement content of rehabilitation therapy as well as wearable sensor-based records of daily activity.

  4. Predictive modeling capabilities from incident powder and laser to mechanical properties for laser directed energy deposition

    Science.gov (United States)

    Shin, Yung C.; Bailey, Neil; Katinas, Christopher; Tan, Wenda

    2018-01-01

    This paper presents an overview of vertically integrated comprehensive predictive modeling capabilities for directed energy deposition processes, which have been developed at Purdue University. The overall predictive models consist of several vertically integrated modules, including a powder flow model, molten pool model, microstructure prediction model and residual stress model, which can be used for predicting the mechanical properties of additively manufactured parts by directed energy deposition processes with blown powder as well as other additive manufacturing processes. The critical governing equations of each model and how the various modules are connected are illustrated. Various illustrative results along with corresponding experimental validation results are presented to illustrate the capabilities and fidelity of the models. The good correlations with experimental results show that the integrated models can be used to design metal additive manufacturing processes and predict the resultant microstructure and mechanical properties.

  5. Predictive modeling capabilities from incident powder and laser to mechanical properties for laser directed energy deposition

    Science.gov (United States)

    Shin, Yung C.; Bailey, Neil; Katinas, Christopher; Tan, Wenda

    2018-05-01

    This paper presents an overview of vertically integrated comprehensive predictive modeling capabilities for directed energy deposition processes, which have been developed at Purdue University. The overall predictive models consist of several vertically integrated modules, including a powder flow model, molten pool model, microstructure prediction model and residual stress model, which can be used for predicting the mechanical properties of additively manufactured parts by directed energy deposition processes with blown powder as well as other additive manufacturing processes. The critical governing equations of each model and how the various modules are connected are illustrated. Various illustrative results along with corresponding experimental validation results are presented to illustrate the capabilities and fidelity of the models. The good correlations with experimental results show that the integrated models can be used to design metal additive manufacturing processes and predict the resultant microstructure and mechanical properties.

  6. Predicting field weed emergence with empirical models and soft computing techniques

    Science.gov (United States)

    Seedling emergence is the most important phenological process that influences the success of weed species; therefore, predicting weed emergence timing plays a critical role in scheduling weed management measures. Important efforts have been made in the attempt to develop models to predict seedling e...

  7. Finding Furfural Hydrogenation Catalysts via Predictive Modelling

    OpenAIRE

    Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi

    2010-01-01

    Abstract We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes were synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre t...

  8. Predictive Modeling in Actinide Chemistry and Catalysis

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Ping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-16

    These are slides from a presentation on predictive modeling in actinide chemistry and catalysis. The following topics are covered in these slides: Structures, bonding, and reactivity (bonding can be quantified by optical probes and theory, and electronic structures and reaction mechanisms of actinide complexes); Magnetic resonance properties (transition metal catalysts with multi-nuclear centers, and NMR/EPR parameters); Moving to more complex systems (surface chemistry of nanomaterials, and interactions of ligands with nanoparticles); Path forward and conclusions.

  9. Tectonic predictions with mantle convection models

    Science.gov (United States)

    Coltice, Nicolas; Shephard, Grace E.

    2018-04-01

    Over the past 15 yr, numerical models of convection in Earth's mantle have made a leap forward: they can now produce self-consistent plate-like behaviour at the surface together with deep mantle circulation. These digital tools provide a new window into the intimate connections between plate tectonics and mantle dynamics, and can therefore be used for tectonic predictions, in principle. This contribution explores this assumption. First, initial conditions at 30, 20, 10 and 0 Ma are generated by driving a convective flow with imposed plate velocities at the surface. We then compute instantaneous mantle flows in response to the guessed temperature fields without imposing any boundary conditions. Plate boundaries self-consistently emerge at correct locations with respect to reconstructions, except for small plates close to subduction zones. As already observed for other types of instantaneous flow calculations, the structure of the top boundary layer and upper-mantle slab is the dominant character that leads to accurate predictions of surface velocities. Perturbations of the rheological parameters have little impact on the resulting surface velocities. We then compute fully dynamic model evolution from 30 and 10 to 0 Ma, without imposing plate boundaries or plate velocities. Contrary to instantaneous calculations, errors in kinematic predictions are substantial, although the plate layout and kinematics in several areas remain consistent with the expectations for the Earth. For these calculations, varying the rheological parameters makes a difference for plate boundary evolution. Also, identified errors in initial conditions contribute to first-order kinematic errors. This experiment shows that the tectonic predictions of dynamic models over 10 My are highly sensitive to uncertainties of rheological parameters and initial temperature field in comparison to instantaneous flow calculations. Indeed, the initial conditions and the rheological parameters can be good enough

  10. Breast cancer risks and risk prediction models.

    Science.gov (United States)

    Engel, Christoph; Fischer, Christine

    2015-02-01

    BRCA1/2 mutation carriers have a considerably increased risk to develop breast and ovarian cancer. The personalized clinical management of carriers and other at-risk individuals depends on precise knowledge of the cancer risks. In this report, we give an overview of the present literature on empirical cancer risks, and we describe risk prediction models that are currently used for individual risk assessment in clinical practice. Cancer risks show large variability between studies. Breast cancer risks are at 40-87% for BRCA1 mutation carriers and 18-88% for BRCA2 mutation carriers. For ovarian cancer, the risk estimates are in the range of 22-65% for BRCA1 and 10-35% for BRCA2. The contralateral breast cancer risk is high (10-year risk after first cancer 27% for BRCA1 and 19% for BRCA2). Risk prediction models have been proposed to provide more individualized risk prediction, using additional knowledge on family history, mode of inheritance of major genes, and other genetic and non-genetic risk factors. User-friendly software tools have been developed that serve as basis for decision-making in family counseling units. In conclusion, further assessment of cancer risks and model validation is needed, ideally based on prospective cohort studies. To obtain such data, clinical management of carriers and other at-risk individuals should always be accompanied by standardized scientific documentation.

  11. Two-phase flow model with nonequilibrium and critical flow

    International Nuclear Information System (INIS)

    Sureau, H.; Houdayer, G.

    1976-01-01

    The model proposed includes the three conservation equations (mass, momentum, energy) applied to two-phase flows and a fourth partial differential equation which takes into account the nonequilibrium effects and describes the mass transfer process. With this model, the two-phase critical flow tests performed on the Moby-Dick loop (CENG) with several geometries are interpreted by a single law. Extrapolations to industrial-scale problems show that geometry and size effects are different from those obtained with earlier models (Zaloudek, Moody, Fauske)

  12. A predictive model for dimensional errors in fused deposition modeling

    DEFF Research Database (Denmark)

    Stolfi, A.

    2015-01-01

    This work concerns the effect of deposition angle (a) and layer thickness (L) on the dimensional performance of FDM parts using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole a range from 0° to 177° at 3° steps and two...... values of L (0.254 mm, 0.330 mm) was produced by comparing predicted values with external face-to-face measurements. After removing outliers, the results show that the developed two-parameter model can serve as tool for modeling the FDM dimensional behavior in a wide range of deposition angles....

  13. Two stage neural network modelling for robust model predictive control.

    Science.gov (United States)

    Patan, Krzysztof

    2018-01-01

    The paper proposes a novel robust model predictive control scheme realized by means of artificial neural networks. The neural networks are used twofold: to design the so-called fundamental model of a plant and to catch uncertainty associated with the plant model. In order to simplify the optimization process carried out within the framework of predictive control an instantaneous linearization is applied which renders it possible to define the optimization problem in the form of constrained quadratic programming. Stability of the proposed control system is also investigated by showing that a cost function is monotonically decreasing with respect to time. Derived robust model predictive control is tested and validated on the example of a pneumatic servomechanism working at different operating regimes. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  14. Establishing Decision Trees for Predicting Successful Postpyloric Nasoenteric Tube Placement in Critically Ill Patients.

    Science.gov (United States)

    Chen, Weisheng; Sun, Cheng; Wei, Ru; Zhang, Yanlin; Ye, Heng; Chi, Ruibin; Zhang, Yichen; Hu, Bei; Lv, Bo; Chen, Lifang; Zhang, Xiunong; Lan, Huilan; Chen, Chunbo

    2018-01-01

    Despite the use of prokinetic agents, the overall success rate for postpyloric placement via a self-propelled spiral nasoenteric tube is quite low. This retrospective study was conducted in the intensive care units of 11 university hospitals from 2006 to 2016 among adult patients who underwent self-propelled spiral nasoenteric tube insertion. Success was defined as postpyloric nasoenteric tube placement confirmed by abdominal x-ray scan 24 hours after tube insertion. Chi-square automatic interaction detection (CHAID), simple classification and regression trees (SimpleCart), and J48 methodologies were used to develop decision tree models, and multiple logistic regression (LR) methodology was used to develop an LR model for predicting successful postpyloric nasoenteric tube placement. The area under the receiver operating characteristic curve (AUC) was used to evaluate the performance of these models. Successful postpyloric nasoenteric tube placement was confirmed in 427 of 939 patients enrolled. For predicting successful postpyloric nasoenteric tube placement, the performance of the 3 decision trees was similar in terms of the AUCs: 0.715 for the CHAID model, 0.682 for the SimpleCart model, and 0.671 for the J48 model. The AUC of the LR model was 0.729, which outperformed the J48 model. Both the CHAID and LR models achieved an acceptable discrimination for predicting successful postpyloric nasoenteric tube placement and were useful for intensivists in the setting of self-propelled spiral nasoenteric tube insertion. © 2016 American Society for Parenteral and Enteral Nutrition.
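
    The comparison reported above (tree-based models versus logistic regression, judged by AUC) can be reproduced in outline with standard tooling. The sketch below uses a CART-style tree as a stand-in for the CHAID, SimpleCart and J48 models named in the abstract, and synthetic stand-ins for the clinical predictors; only the cohort size is borrowed from the abstract.

```python
# Minimal sketch (synthetic data; CART as a stand-in for the CHAID /
# SimpleCart / J48 trees named above): fit a decision tree and a logistic
# regression on the same predictors and compare AUCs for predicting
# successful postpyloric tube placement.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 939                                           # cohort size quoted in the abstract
X = rng.normal(size=(n, 5))                       # stand-ins for clinical predictors
y = rng.binomial(1, 1/(1 + np.exp(-(0.6*X[:, 0] - 0.4*X[:, 3]))))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
lr = LogisticRegression().fit(X_tr, y_tr)

print("tree AUC:", round(roc_auc_score(y_te, tree.predict_proba(X_te)[:, 1]), 3))
print("LR   AUC:", round(roc_auc_score(y_te, lr.predict_proba(X_te)[:, 1]), 3))
```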

  15. Predictive Modeling of the CDRA 4BMS

    Science.gov (United States)

    Coker, Robert F.; Knox, James C.

    2016-01-01

    As part of NASA's Advanced Exploration Systems (AES) program and the Life Support Systems Project (LSSP), fully predictive models of the Four Bed Molecular Sieve (4BMS) of the Carbon Dioxide Removal Assembly (CDRA) on the International Space Station (ISS) are being developed. This virtual laboratory will be used to help reduce mass, power, and volume requirements for future missions. In this paper we describe current and planned modeling developments in the area of carbon dioxide removal to support future crewed Mars missions as well as the resolution of anomalies observed in the ISS CDRA.

  16. Critical behavior in a stochastic model of vector mediated epidemics

    Science.gov (United States)

    Alfinito, E.; Beccaria, M.; Macorini, G.

    2016-06-01

    The extreme vulnerability of humans to new and old pathogens is constantly highlighted by unbounded outbreaks of epidemics. This vulnerability is both direct, producing illness in humans (dengue, malaria), and indirect, affecting their supplies (bird and swine flu, Pierce's disease, and olive quick decline syndrome). In most cases, the pathogens responsible for an illness spread through vectors. In general, disease evolution may be an uncontrollable propagation or a transient outbreak with limited diffusion. This depends on the physiological parameters of hosts and vectors (susceptibility to the illness, virulence, chronicity of the disease, lifetime of the vectors, etc.). In this perspective and with these motivations, we analyzed a stochastic lattice model able to capture the critical behavior of such epidemics over a limited time horizon and with a finite amount of resources. The model exhibits a critical line of transition that separates spreading and non-spreading phases. The critical line is studied with new analytical methods and direct simulations. Critical exponents are found to be the same as those of dynamical percolation.

  17. Critical elements on fitting the Bayesian multivariate Poisson Lognormal model

    Science.gov (United States)

    Zamzuri, Zamira Hasanah binti

    2015-10-01

    Motivated by a problem of fitting multivariate models to traffic accident data, a detailed discussion of the Multivariate Poisson Lognormal (MPL) model is presented. This paper reveals three critical elements in fitting the MPL model: the setting of initial estimates, hyperparameters and tuning parameters. These issues have not been highlighted in the literature. Based on the simulation studies conducted, we have shown that, when the Univariate Poisson Model (UPM) estimates are used as starting values, at least 20,000 iterations are needed to obtain reliable final estimates. We also illustrated the sensitivity of a specific hyperparameter which, if not given extra attention, may affect the final estimates. The last issue concerns the tuning parameters, which depend on the acceptance rate. Finally, a heuristic algorithm to fit the MPL model is presented. This acts as a guide to ensure that the model works satisfactorily for any given data set.

  18. Data Driven Economic Model Predictive Control

    Directory of Open Access Journals (Sweden)

    Masoud Kheradmandi

    2018-04-01

    Full Text Available This manuscript addresses the problem of data-driven, model-based economic model predictive control (MPC) design. To this end, first, a data-driven Lyapunov-based MPC is designed and shown to be capable of stabilizing a system at an unstable equilibrium point. The data-driven Lyapunov-based MPC utilizes a linear time invariant (LTI) model, cognizant of the fact that the training data, owing to the unstable nature of the equilibrium point, has to be obtained from closed-loop operation or experiments. Simulation results are first presented demonstrating closed-loop stability under the proposed data-driven Lyapunov-based MPC. The underlying data-driven model is then utilized as the basis to design an economic MPC. The economic improvements yielded by the proposed method are illustrated through simulations on a nonlinear chemical process system example.
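
    The starting point of such a design is a data-driven LTI model identified from closed-loop data. The sketch below illustrates that step on an invented two-state system: excite a stabilized plant, then recover A and B by least squares. It is an assumption-laden illustration of the model-identification step only, not the paper's Lyapunov-based or economic MPC formulation.

```python
# Minimal sketch (toy system, not the paper's example): identify a discrete
# LTI model x_{k+1} = A x_k + B u_k from input/state data by least squares,
# the kind of data-driven model the MPC described above is built on.
import numpy as np

rng = np.random.default_rng(3)
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
B_true = np.array([[0.0], [0.5]])

# collect closed-loop style data: a stabilising feedback plus small excitation
K = np.array([[0.2, 0.4]])
x = np.array([1.0, -1.0])
X, U, Xn = [], [], []
for _ in range(200):
    u = -K @ x + 0.1 * rng.normal(size=1)
    x_next = A_true @ x + B_true @ u + 0.01 * rng.normal(size=2)
    X.append(x); U.append(u); Xn.append(x_next)
    x = x_next

Z = np.hstack([np.array(X), np.array(U)])        # regressors [x_k, u_k]
Theta, *_ = np.linalg.lstsq(Z, np.array(Xn), rcond=None)
A_hat, B_hat = Theta[:2].T, Theta[2:].T
print("identified A:\n", A_hat.round(3), "\nidentified B:\n", B_hat.round(3))
```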

  19. A Comprehensive Assessment Model for Critical Infrastructure Protection

    Directory of Open Access Journals (Sweden)

    Häyhtiö Markus

    2017-12-01

    Full Text Available International business demands seamless service and IT infrastructure throughout the entire supply chain. However, dependencies between different parts of this vulnerable ecosystem form a fragile web. Assessment of the financial effects of any abnormality in any part of the network is required in order to protect this network in a financially viable way. The contractual environment between the actors in a supply chain, and the different business domains and functions, requires a management model which enables network-wide protection of critical infrastructure. In this paper the authors introduce such a model. It can be used to assess financial differences between centralized and decentralized protection of critical infrastructure. As an end result of this assessment, business resilience to unknown threats can be improved across the entire supply chain.

  20. Critical exponents for the Reggeon quantum spin model

    International Nuclear Information System (INIS)

    Brower, R.C.; Furman, M.A.

    1978-01-01

    The Reggeon quantum spin (RQS) model on the transverse lattice in D-dimensional impact parameter space has been conjectured to have the same critical behaviour as the Reggeon field theory (RFT). Thus from a high 'temperature' series of ten (D=2) and twenty (D=1) terms for the RQS model the authors extrapolate to the critical temperature T = T_c by Padé approximants to obtain the exponents η = 0.238 ± 0.008, z = 1.16 ± 0.01, γ = 1.271 ± 0.007 for D=2 and η = 0.317 ± 0.002, z = 1.272 ± 0.007, γ = 1.736 ± 0.001, λ = 0.57 ± 0.03 for D=1. These exponents naturally interpolate between the D=0 and D=4-ε results for RFT as expected on the basis of the universality conjecture. (Auth.)

  1. From Safety Critical Java Programs to Timed Process Models

    DEFF Research Database (Denmark)

    Thomsen, Bent; Luckow, Kasper Søe; Thomsen, Lone Leth

    2015-01-01

    frameworks, we have in recent years pursued an agenda of translating hard-real-time embedded safety critical programs written in the Safety Critical Java Profile [33] into networks of timed automata [4] and subjecting those to automated analysis using the UPPAAL model checker [10]. Several tools have been...... built and the tools have been used to analyse a number of systems for properties such as worst case execution time, schedulability and energy optimization [12–14,19,34,36,38]. In this paper we will elaborate on the theoretical underpinning of the translation from Java programs to timed automata models...... and briefly summarize some of the results based on this translation. Furthermore, we discuss future work, especially relations to the work in [16,24] as Java recently has adopted first class higher order functions in the form of lambda abstractions....

  2. Plant control using embedded predictive models

    International Nuclear Information System (INIS)

    Godbole, S.S.; Gabler, W.E.; Eschbach, S.L.

    1990-01-01

    B and W recently undertook the design of an advanced light water reactor control system. A concept new to nuclear steam system (NSS) control was developed. The concept, which is called the Predictor-Corrector, uses mathematical models of portions of the controlled NSS to calculate, at various levels within the system, demand and control element position signals necessary to satisfy electrical demand. The models give the control system the ability to reduce overcooling and undercooling of the reactor coolant system during transients and upsets. Two types of mathematical models were developed for use in designing and testing the control system. One model was a conventional, comprehensive NSS model that responds to control system outputs and calculates the resultant changes in plant variables that are then used as inputs to the control system. Two other models, embedded in the control system, were less conventional, inverse models. These models accept as inputs plant variables, equipment states, and demand signals and predict plant operating conditions and control element states that will satisfy the demands. This paper reports preliminary results of closed-loop Reactor Coolant (RC) pump trip and normal load reduction testing of the advanced concept. Results of additional transient testing, and of open and closed loop stability analyses will be reported as they are available

  3. The IntFOLD server: an integrated web resource for protein fold recognition, 3D model quality assessment, intrinsic disorder prediction, domain prediction and ligand binding site prediction.

    Science.gov (United States)

    Roche, Daniel B; Buenavista, Maria T; Tetchner, Stuart J; McGuffin, Liam J

    2011-07-01

    The IntFOLD server is a novel independent server that integrates several cutting edge methods for the prediction of structure and function from sequence. Our guiding principles behind the server development were as follows: (i) to provide a simple unified resource that makes our prediction software accessible to all and (ii) to produce integrated output for predictions that can be easily interpreted. The output for predictions is presented as a simple table that summarizes all results graphically via plots and annotated 3D models. The raw machine readable data files for each set of predictions are also provided for developers, which comply with the Critical Assessment of Methods for Protein Structure Prediction (CASP) data standards. The server comprises an integrated suite of five novel methods: nFOLD4, for tertiary structure prediction; ModFOLD 3.0, for model quality assessment; DISOclust 2.0, for disorder prediction; DomFOLD 2.0 for domain prediction; and FunFOLD 1.0, for ligand binding site prediction. Predictions from the IntFOLD server were found to be competitive in several categories in the recent CASP9 experiment. The IntFOLD server is available at the following web site: http://www.reading.ac.uk/bioinf/IntFOLD/.

  4. Prediction of critical heat flux in fuel assemblies using a CHF table method

    Energy Technology Data Exchange (ETDEWEB)

    Chun, Tae Hyun; Hwang, Dae Hyun; Bang, Je Geon [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of); Baek, Won Pil; Chang, Soon Heung [Korea Advance Institute of Science and Technology, Taejon (Korea, Republic of)

    1997-12-31

    A CHF table method has been assessed in this study for rod bundle CHF predictions. At the conceptual design stage for a new reactor, a general critical heat flux (CHF) prediction method with a wide applicable range and reasonable accuracy is essential to the thermal-hydraulic design and safety analysis. In many aspects, a CHF table method (i.e., the use of a round tube CHF table with appropriate bundle correction factors) can be a promising way to fulfill this need. So the assessment of the CHF table method has been performed with the bundle CHF data relevant to pressurized water reactors (PWRs). For comparison purposes, W-3R and EPRI-1 were also applied to the same data base. Data analysis has been conducted with the subchannel code COBRA-IV-I. The CHF table method shows the best predictions based on the direct substitution method. Improvements of the bundle correction factors, especially for the spacer grid and cold wall effects, are desirable for better predictions. Though the present assessment is somewhat limited in both fuel geometries and operating conditions, the CHF table method clearly shows potential to be a general CHF predictor. 8 refs., 3 figs., 3 tabs. (Author)

  5. Prediction of critical heat flux in fuel assemblies using a CHF table method

    Energy Technology Data Exchange (ETDEWEB)

    Chun, Tae Hyun; Hwang, Dae Hyun; Bang, Je Geon [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of); Baek, Won Pil; Chang, Soon Heung [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1998-12-31

    A CHF table method has been assessed in this study for rod bundle CHF predictions. At the conceptual design stage for a new reactor, a general critical heat flux (CHF) prediction method with a wide applicable range and reasonable accuracy is essential to the thermal-hydraulic design and safety analysis. In many aspects, a CHF table method (i.e., the use of a round tube CHF table with appropriate bundle correction factors) can be a promising way to fulfill this need. So the assessment of the CHF table method has been performed with the bundle CHF data relevant to pressurized water reactors (PWRs). For comparison purposes, W-3R and EPRI-1 were also applied to the same data base. Data analysis has been conducted with the subchannel code COBRA-IV-I. The CHF table method shows the best predictions based on the direct substitution method. Improvements of the bundle correction factors, especially for the spacer grid and cold wall effects, are desirable for better predictions. Though the present assessment is somewhat limited in both fuel geometries and operating conditions, the CHF table method clearly shows potential to be a general CHF predictor. 8 refs., 3 figs., 3 tabs. (Author)

  6. Ground Motion Prediction Models for Caucasus Region

    Science.gov (United States)

    Jorjiashvili, Nato; Godoladze, Tea; Tvaradze, Nino; Tumanova, Nino

    2016-04-01

    Ground motion prediction models (GMPMs) relate ground motion intensity measures to variables describing earthquake source, path, and site effects. Estimation of expected ground motion is fundamental to earthquake hazard assessment. The most commonly used parameters in attenuation relations are peak ground acceleration and spectral acceleration, because these parameters provide the information needed for seismic hazard assessment. Development of the Georgian Digital Seismic Network began in 2003. In this study, new GMP models are obtained from new data recorded by the Georgian seismic network and by networks in neighboring countries. The models are estimated in the classical statistical way, by regression analysis. Site ground conditions are additionally considered, because the same earthquake recorded at the same distance may cause different damage depending on ground conditions. Empirical ground-motion prediction models (GMPMs) require adjustment to make them appropriate for site-specific scenarios; however, the process of making such adjustments remains a challenge. This work presents a holistic framework for the development of a peak ground acceleration (PGA) or spectral acceleration (SA) GMPE that is easily adjustable to different seismological conditions and does not suffer from the practical problems associated with adjustments in the response spectral domain.
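
    As a minimal sketch of the kind of regression involved (not the functional form or coefficients derived in this study), the example below fits an assumed model ln(PGA) = a + b*M + c*ln(R) + d*S by ordinary least squares to synthetic records; the magnitudes, distances, site flag, and scatter are invented.

        # Minimal GMPE-style least-squares fit, assuming the illustrative form
        #   ln(PGA) = a + b*M + c*ln(R) + d*S
        # with M magnitude, R distance (km), S a binary site-class flag.
        # The synthetic "records" below stand in for real network data.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 200
        M = rng.uniform(4.0, 7.0, n)                 # magnitude
        R = rng.uniform(5.0, 200.0, n)               # distance, km
        S = rng.integers(0, 2, n)                    # 0 = rock, 1 = soil (assumed classes)
        ln_pga = (-2.0 + 1.1 * M - 1.4 * np.log(R) + 0.3 * S
                  + rng.normal(0.0, 0.5, n))         # synthetic data with aleatory scatter

        X = np.column_stack([np.ones(n), M, np.log(R), S])
        coef, *_ = np.linalg.lstsq(X, ln_pga, rcond=None)
        a, b, c, d = coef
        sigma = np.std(ln_pga - X @ coef)
        print(f"a={a:.2f} b={b:.2f} c={c:.2f} d={d:.2f} sigma={sigma:.2f}")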

  7. Modeling and Prediction of Krueger Device Noise

    Science.gov (United States)

    Guo, Yueping; Burley, Casey L.; Thomas, Russell H.

    2016-01-01

    This paper presents the development of a noise prediction model for aircraft Krueger flap devices that are considered as alternatives to leading edge slotted slats. The prediction model decomposes the total Krueger noise into four components, generated by the unsteady flows, respectively, in the cove under the pressure side surface of the Krueger, in the gap between the Krueger trailing edge and the main wing, around the brackets supporting the Krueger device, and around the cavity on the lower side of the main wing. For each noise component, the modeling follows a physics-based approach that aims at capturing the dominant noise-generating features in the flow and developing correlations between the noise and the flow parameters that control the noise generation processes. The far field noise is modeled using each of the four noise components' respective spectral functions, far field directivities, Mach number dependencies, component amplitudes, and other parametric trends. Preliminary validations are carried out by using small-scale experimental data, and two applications are discussed; one for conventional aircraft and the other for advanced configurations. The former focuses on the parametric trends of Krueger noise on design parameters, while the latter reveals its importance in relation to other airframe noise components.

  8. Evaluating Predictive Models of Software Quality

    Science.gov (United States)

    Ciaschini, V.; Canaparo, M.; Ronchieri, E.; Salomoni, D.

    2014-06-01

    Applications from the High Energy Physics scientific community are constantly growing and are implemented by a large number of developers. This implies a high churn rate in the code and an associated risk of faults, which is unavoidable as long as the software undergoes active evolution. However, the needs of production systems run counter to this: stability and predictability are of paramount importance, and a short turnaround time for the defect discovery-correction-deployment cycle is required. A way to reconcile these opposing goals is to use a software quality model to obtain an approximation of the risk before releasing a program, so that only software with a risk lower than an agreed threshold is delivered. In this article we evaluated two quality predictive models to identify the operational risk and the quality of some software products. We applied these models to the development history of several EMI packages with the intent of discovering the risk factor of each product and comparing it with its real history. We attempted to determine whether the models reasonably map reality for the applications under evaluation, and we conclude by suggesting directions for further studies.

  9. Predicting FLDs Using a Multiscale Modeling Scheme

    Science.gov (United States)

    Wu, Z.; Loy, C.; Wang, E.; Hegadekatte, V.

    2017-09-01

    The measurement of a single forming limit diagram (FLD) requires significant resources and is time consuming. We have developed a multiscale modeling scheme to predict FLDs using a combination of limited laboratory testing, crystal plasticity (VPSC) modeling, and dual sequential-stage finite element (ABAQUS/Explicit) modeling with the Marciniak-Kuczynski (M-K) criterion to determine the limit strain. We have established a means to work around existing limitations in ABAQUS/Explicit by using an anisotropic yield locus (e.g., BBC2008) in combination with the M-K criterion. We further apply a VPSC model to reduce the number of laboratory tests required to characterize the anisotropic yield locus. In the present work, we show that the predicted FLD is in excellent agreement with the measured FLD for AA5182 in the O temper. Instead of 13 different tests as for a traditional FLD determination within Novelis, our technique uses just four measurements: tensile properties in three orientations; plane strain tension; biaxial bulge; and the sheet crystallographic texture. The turnaround time is consequently far less than for the traditional laboratory measurement of the FLD.

  10. PREDICTION MODELS OF GRAIN YIELD AND CHARACTERIZATION

    Directory of Open Access Journals (Sweden)

    Narciso Ysac Avila Serrano

    2009-06-01

    Full Text Available With the objective of characterizing the grain yield of five cowpea cultivars and finding linear regression models to predict it, a study was conducted in La Paz, Baja California Sur, Mexico. A complete randomized block design was used. Simple and multivariate analyses of variance were carried out, using the canonical variables to characterize the cultivars. The variables clusters per plant, pods per plant, pods per cluster, seed weight per plant, seed hectoliter weight, 100-seed weight, seed length, seed width, seed thickness, pod length, pod width, pod weight, seeds per pod, and seed weight per pod showed significant differences (P ≤ 0.05) among cultivars. The Paceño and IT90K-277-2 cultivars showed the highest seed weight per plant. The linear regression models showed correlation coefficients ≥ 0.92. In these models, seed weight per plant, pods per cluster, pods per plant, clusters per plant, and pod length showed significant correlations (P ≤ 0.05). In conclusion, the results showed that grain yield differs among cultivars and that, for its estimation, the prediction models showed highly dependable determination coefficients.
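
    A generic sketch of this kind of linear prediction model is shown below; the yield components, coefficients, and data are synthetic placeholders rather than the study's cowpea measurements.

        # Generic multiple linear regression of grain yield on a few yield components
        # (illustrative synthetic data, not the study's measurements).
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(42)
        n = 60
        pods_per_plant = rng.uniform(5.0, 25.0, n)
        clusters_per_plant = rng.uniform(2.0, 10.0, n)
        pod_length = rng.uniform(10.0, 20.0, n)                    # cm
        yield_g = (1.5 * pods_per_plant + 2.0 * clusters_per_plant
                   + 0.8 * pod_length + rng.normal(0.0, 2.0, n))   # g per plant

        X = np.column_stack([pods_per_plant, clusters_per_plant, pod_length])
        model = LinearRegression().fit(X, yield_g)
        print("R^2 =", round(model.score(X, yield_g), 3))
        print("coefficients:", np.round(model.coef_, 3))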

  11. Evaluating predictive models of software quality

    International Nuclear Information System (INIS)

    Ciaschini, V; Canaparo, M; Ronchieri, E; Salomoni, D

    2014-01-01

    Applications from the High Energy Physics scientific community are constantly growing and are implemented by a large number of developers. This implies a high churn rate in the code and an associated risk of faults, which is unavoidable as long as the software undergoes active evolution. However, the needs of production systems run counter to this: stability and predictability are of paramount importance, and a short turnaround time for the defect discovery-correction-deployment cycle is required. A way to reconcile these opposing goals is to use a software quality model to obtain an approximation of the risk before releasing a program, so that only software with a risk lower than an agreed threshold is delivered. In this article we evaluated two quality predictive models to identify the operational risk and the quality of some software products. We applied these models to the development history of several EMI packages with the intent of discovering the risk factor of each product and comparing it with its real history. We attempted to determine whether the models reasonably map reality for the applications under evaluation, and we conclude by suggesting directions for further studies.

  12. Gamma-Ray Pulsars Models and Predictions

    CERN Document Server

    Harding, A K

    2001-01-01

    Pulsed emission from gamma-ray pulsars originates inside the magnetosphere, from radiation by charged particles accelerated near the magnetic poles or in the outer gaps. In polar cap models, the high energy spectrum is cut off by magnetic pair production above an energy that is dependent on the local magnetic field strength. While most young pulsars with surface fields in the range B = 10^{12} - 10^{13} G are expected to have high energy cutoffs around several GeV, the gamma-ray spectra of old pulsars having lower surface fields may extend to 50 GeV. Although the gamma-ray emission of older pulsars is weaker, detecting pulsed emission at high energies from nearby sources would be an important confirmation of polar cap models. Outer gap models predict more gradual high-energy turnovers at around 10 GeV, but also predict an inverse Compton component extending to TeV energies. Detection of pulsed TeV emission, which would not survive attenuation at the polar caps, is thus an important test of outer gap models. N...

  13. Artificial Neural Network Model for Predicting Compressive

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

    Full Text Available Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing of the model with unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20% and that 88% of the output results have absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results showed that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
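
    A minimal sketch of a back-propagation network for this kind of strength prediction is given below, assuming synthetic mix-proportion data; it is not the network or data set used in the study.

        # Minimal back-propagation network sketch for early-age strength prediction.
        # The mix-proportion inputs and target strengths are synthetic placeholders.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(7)
        n = 300
        cement = rng.uniform(250.0, 500.0, n)        # kg/m^3
        water = rng.uniform(140.0, 220.0, n)         # kg/m^3
        mas = rng.choice([10.0, 20.0, 40.0], n)      # max aggregate size, mm
        slump = rng.uniform(50.0, 200.0, n)          # mm
        strength = (90.0 - 80.0 * water / cement + 0.05 * mas
                    - 0.02 * slump + rng.normal(0.0, 2.0, n))   # MPa, synthetic

        X = np.column_stack([cement, water, mas, slump])
        X_train, X_test, y_train, y_test = train_test_split(X, strength, random_state=0)

        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                           random_state=0))
        model.fit(X_train, y_train)
        print("held-out R^2:", round(model.score(X_test, y_test), 3))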

  14. Clinical Predictive Modeling Development and Deployment through FHIR Web Services.

    Science.gov (United States)

    Khalilia, Mohammed; Choi, Myung; Henderson, Amelia; Iyengar, Sneha; Braunstein, Mark; Sun, Jimeng

    2015-01-01

    Clinical predictive modeling involves two challenging tasks: model development and model deployment. In this paper we demonstrate a software architecture for developing and deploying clinical predictive models using web services via the Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standard. The services enable model development using electronic health records (EHRs) stored in OMOP CDM databases and model deployment for scoring individual patients through FHIR resources. The MIMIC2 ICU dataset and a synthetic outpatient dataset were transformed into OMOP CDM databases for predictive model development. The resulting predictive models are deployed as FHIR resources, which receive requests of patient information, perform prediction against the deployed predictive model and respond with prediction scores. To assess the practicality of this approach we evaluated the response and prediction time of the FHIR modeling web services. We found the system to be reasonably fast with one second total response time per patient prediction.

  15. Theoretical modeling of critical temperature increase in metamaterial superconductors

    Science.gov (United States)

    Smolyaninov, Igor; Smolyaninova, Vera

    Recent experiments have demonstrated that the metamaterial approach is capable of a drastic increase of the critical temperature Tc of epsilon-near-zero (ENZ) metamaterial superconductors. For example, tripling of the critical temperature has been observed in Al-Al2O3 ENZ core-shell metamaterials. Here, we perform theoretical modelling of the Tc increase in metamaterial superconductors based on the Maxwell-Garnett approximation of their dielectric response function. Good agreement is demonstrated between theoretical modelling and experimental results in both aluminum- and tin-based metamaterials. Taking advantage of the demonstrated success of this model, the critical temperature of hypothetical niobium-, MgB2- and H2S-based metamaterial superconductors is evaluated. The MgB2-based metamaterial superconductors are projected to reach the liquid nitrogen temperature range. In the case of an H2S-based metamaterial, Tc appears to reach 250 K. This work was supported in part by NSF Grant DMR-1104676 and the School of Emerging Technologies at Towson University.
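
    For reference, the Maxwell-Garnett effective-medium formula that underlies this kind of dielectric-response modelling can be evaluated as in the sketch below; the permittivities and fill fractions are placeholders, not values fitted to the materials discussed in the abstract.

        # Maxwell-Garnett effective-medium sketch: effective permittivity of spherical
        # inclusions (volume fraction f, permittivity eps_i) embedded in a host eps_m.
        # The numbers below are placeholders.
        def maxwell_garnett(eps_i: complex, eps_m: complex, f: float) -> complex:
            """Effective permittivity of an inclusion/host composite (MG approximation)."""
            num = eps_i + 2 * eps_m + 2 * f * (eps_i - eps_m)
            den = eps_i + 2 * eps_m - f * (eps_i - eps_m)
            return eps_m * num / den

        # Example: metallic inclusions (negative real permittivity) in a dielectric
        # host, scanning the metal fill fraction to look for the epsilon-near-zero
        # (ENZ) regime exploited by metamaterial superconductors.
        eps_metal = -30.0 + 1.0j      # placeholder metal permittivity
        eps_host = 3.0 + 0.0j         # placeholder dielectric permittivity
        for f in (0.05, 0.10, 0.15, 0.20):
            print(f, maxwell_garnett(eps_metal, eps_host, f))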

  16. Critical Source Area Delineation: The representation of hydrology in effective erosion modeling.

    Science.gov (United States)

    Fowler, A.; Boll, J.; Brooks, E. S.; Boylan, R. D.

    2017-12-01

    Despite decades of conservation and millions of conservation dollars, nonpoint source sediment loading associated with agricultural disturbance continues to be a significant problem in many parts of the world. Local and national conservation organizations are interested in targeting critical source areas for control strategy implementation. Currently, conservation practices are selected and located based on Revised Universal Soil Loss Equation (RUSLE) hillslope erosion modeling, and the Natural Resources Conservation Service will soon be transitioning to the Water Erosion Prediction Project (WEPP) model for the same purpose. We present an assessment of critical source areas targeted with RUSLE, WEPP, and a regionally validated hydrology model, the Soil Moisture Routing (SMR) model, to compare the location of critical areas for sediment loading and the effectiveness of control strategies. The three models are compared for the Palouse dryland cropping region of the inland northwest, with un-calibrated analyses of the Kamiache watershed using publicly available soils, land-use, and long-term simulated climate data. Critical source areas were mapped, and the side-by-side comparison exposes the differences in the location and timing of runoff and erosion predictions. RUSLE results appear most sensitive to slope-driven processes associated with infiltration excess. SMR captured saturation-excess-driven runoff events located at the toe slope position, while WEPP was able to capture both infiltration excess and saturation excess processes depending on soil type and management. A methodology is presented for down-scaling basin-level screening to the hillslope management scale for local control strategies. Information on the location of runoff and erosion, driven by the runoff mechanism, is critical for effective treatment and conservation.

  17. Critical properties of the Kitaev-Heisenberg Model

    Science.gov (United States)

    Sizyuk, Yuriy; Price, Craig; Perkins, Natalia

    2013-03-01

    Collective behavior of local moments in Mott insulators in the presence of strong spin-orbit coupling is one of the most interesting questions in modern condensed matter physics. Here we study the finite-temperature properties of the Kitaev-Heisenberg model, which describes the interactions between the pseudospin J = 1/2 iridium moments on the honeycomb lattice. This model has been suggested as a possible model to explain the low-energy physics of A2IrO3 compounds. In our study we show that the Kitaev-Heisenberg model may be mapped onto the six-state clock model with an intermediate power-law phase at finite temperatures. In the framework of Ginzburg-Landau theory, we provide an analysis of the critical properties of the finite-temperature ordering transitions. NSF grant DMR-1005932

  18. A new mechanistic model of critical heat flux in forced-convection subcooled boiling

    International Nuclear Information System (INIS)

    Alajbegovic, A.; Kurul, N.; Podowski, M.Z.; Drew, D.A.; Lahey, R.T. Jr.

    1997-10-01

    Because of its practical importance and various industrial applications, the process of subcooled flow boiling has attracted a lot of attention in the research community in the past. However, the existing models are primarily phenomenological and are based on correlating experimental data rather than on a first-principles analysis of the governing physical phenomena. Even though the mechanisms leading to critical heat flux (CHF) are very complex, the recent progress in the understanding of local phenomena of multiphase flow and heat transfer, combined with the development of mathematical models and advanced Computational Fluid Dynamics (CFD) methods, makes analytical predictions of CHF quite feasible. Various mechanisms leading to CHF in subcooled boiling have been investigated. A new model for the prediction of the onset of CHF has been developed. This new model has been coupled with the overall boiling channel model, numerically implemented in the CFX 4 computer code, and tested and validated against the experimental data of Hino and Ueda. The predicted critical heat flux for various channel operating conditions shows good agreement with the measurements using the aforementioned closure laws for the various local phenomena governing nucleation and bubble departure from the wall. The observed differences are consistent with typical uncertainties associated with CHF data.

  19. Capacity Prediction Model Based on Limited Priority Gap-Acceptance Theory at Multilane Roundabouts

    Directory of Open Access Journals (Sweden)

    Zhaowei Qu

    2014-01-01

    Full Text Available Capacity is an important design parameter for roundabouts, and it is the premise for computing their delay and queue lengths. Roundabout capacity has been studied for decades, and empirical regression models and gap-acceptance models are the two main methods used to predict it. Based on gap-acceptance theory, and by considering the effect of limited priority, especially the relationship between the limited priority factor and the critical gap, a modified model was built to predict roundabout capacity. We then compare the results of Raff’s method and the maximum likelihood estimation (MLE) method, and the MLE method was used to estimate the critical gaps. Finally, the capacities predicted by the different models were compared with the capacity observed in field surveys, which verifies the performance of the proposed model.
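
    For orientation, the classical gap-acceptance entry-capacity formula (without the limited-priority modification developed in the paper) can be evaluated as below; the critical gap and follow-up time values are illustrative only.

        # Classical gap-acceptance entry capacity (negative-exponential headways),
        #   C = q * exp(-q * t_c) / (1 - exp(-q * t_f)),
        # shown here only as the unmodified baseline; the paper's limited-priority
        # correction is not reproduced. Parameter values are illustrative.
        import math

        def entry_capacity_vph(circulating_vph: float, t_c: float = 4.1,
                               t_f: float = 2.9) -> float:
            """Entry capacity (veh/h) for a circulating flow (veh/h), with critical
            gap t_c and follow-up time t_f in seconds."""
            q = circulating_vph / 3600.0                       # veh/s
            if q == 0.0:
                return 3600.0 / t_f                            # limit as q -> 0
            cap = q * math.exp(-q * t_c) / (1.0 - math.exp(-q * t_f))
            return cap * 3600.0                                # back to veh/h

        for flow in (200, 600, 1000, 1400):
            print(flow, round(entry_capacity_vph(flow)))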

  20. Specifications, Pre-Experimental Predictions, and Test Plate Characterization Information for the Prometheus Critical Experiments

    International Nuclear Information System (INIS)

    ML Zerkle; ME Meyers; SM Tarves; JJ Powers

    2006-01-01

    This report provides specifications, pre-experimental predictions, and test plate characterization information for a series of molybdenum (Mo), niobium (Nb), rhenium (Re), tantalum (Ta), and baseline critical experiments that were developed by the Naval Reactors Prime Contractor Team (NRPCT) for the Prometheus space reactor development project. In March 2004, the Naval Reactors program was assigned the responsibility to develop, design, deliver, and operationally support civilian space nuclear reactors for NASA's Project Prometheus. The NRPCT was formed to perform this work and consisted of engineers and scientists from the Naval Reactors (NR) Program prime contractors: Bettis Atomic Power Laboratory, Knolls Atomic Power Laboratory (KAPL), and Bechtel Plant Machinery Inc (BPMI). The NRPCT developed a series of clean benchmark critical experiments to address fundamental uncertainties in the neutron cross section data for Mo, Nb, Re, and Ta in fast, intermediate, and mixed neutron energy spectra. These experiments were to be performed by Los Alamos National Laboratory (LANL) using the Planet vertical lift critical assembly machine and were designed with a simple, geometrically clean, cylindrical configuration consisting of alternating layers of test, moderator/reflector, and fuel materials. Based on reprioritization of missions and funding within NASA, Naval Reactors and NASA discontinued their collaboration on Project Prometheus in September 2005. One critical experiment and eighteen subcritical handstacking experiments were completed prior to the termination of work in September 2005. Information on the Prometheus critical experiments and the test plates produced for these experiments are expected to be of value to future space reactor development programs and to integral experiments designed to address the fundamental neutron cross section uncertainties for these refractory metals. This information is being provided as an orderly closeout of NRPCT work on Project

  1. An analytical model for climatic predictions

    International Nuclear Information System (INIS)

    Njau, E.C.

    1990-12-01

    A climatic model based upon analytical expressions is presented. This model is capable of making long-range predictions of heat energy variations on regional or global scales. These variations can then be transformed into corresponding variations of some other key climatic parameters, since weather and climatic changes are basically driven by differential heating and cooling around the earth. On the basis of the mathematical expressions upon which the model is based, it is shown that the global heat energy structure (and hence the associated climatic system) is characterized by zonally as well as latitudinally propagating fluctuations at frequencies downward of 0.5 day⁻¹. We have calculated the propagation speeds for those particular frequencies that are well documented in the literature. The calculated speeds are in excellent agreement with the measured speeds. (author). 13 refs

  2. An Anisotropic Hardening Model for Springback Prediction

    Science.gov (United States)

    Zeng, Danielle; Xia, Z. Cedric

    2005-08-01

    As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closure panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture realistic Bauschinger effect at reverse loading, such as when material passes through die radii or drawbead during sheet metal forming process. This model accounts for material anisotropic yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent Bauschinger effect. The effectiveness of the model is demonstrated by comparison of numerical and experimental springback results for a DP600 straight U-channel test.

  3. An Anisotropic Hardening Model for Springback Prediction

    International Nuclear Information System (INIS)

    Zeng, Danielle; Xia, Z. Cedric

    2005-01-01

    As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closure panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture realistic Bauschinger effect at reverse loading, such as when material passes through die radii or drawbead during sheet metal forming process. This model accounts for material anisotropic yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent Bauschinger effect. The effectiveness of the model is demonstrated by comparison of numerical and experimental springback results for a DP600 straight U-channel test

  4. Brittle Creep Failure, Critical Behavior, and Time-to-Failure Prediction of Concrete under Uniaxial Compression

    Directory of Open Access Journals (Sweden)

    Yingchong Wang

    2015-01-01

    Full Text Available Understanding the time-dependent brittle deformation behavior of concrete as a main building material is fundamental for lifetime prediction and engineering design. Herein, we present experimental measurements of brittle creep failure, critical behavior, and the dependence of time-to-failure on the secondary creep rate of concrete under sustained uniaxial compression. A complete evolution process of creep failure is achieved. Three typical creep stages are observed: the primary (decelerating), secondary (steady-state), and tertiary (accelerating) creep stages. The time-to-failure shows sample-specificity, although all samples exhibit a similar creep process. All specimens exhibit a critical power-law behavior with an exponent of −0.51 ± 0.06, approximately equal to the theoretical value of −1/2. All samples have a long-term secondary stage characterized by a constant strain rate that dominates the lifetime of a sample. The average creep rate, expressed as the total creep strain over the lifetime (tf − t0) of each specimen, shows a power-law dependence on the secondary creep rate with an exponent of −1. This could provide a clue to the prediction of the time-to-failure of concrete, based on monitoring of the creep behavior at the steady stage.
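
    A generic sketch of the kind of power-law fit used for time-to-failure prediction from the steady-state creep rate is shown below, with synthetic placeholder data rather than the concrete creep measurements reported in the paper.

        # Generic Monkman-Grant-type power-law fit between the secondary (steady-state)
        # creep rate and the creep lifetime, of the kind used for time-to-failure
        # prediction. The data below are synthetic placeholders.
        import numpy as np

        rng = np.random.default_rng(3)
        secondary_rate = 10 ** rng.uniform(-8.0, -5.0, 25)                 # 1/s
        lifetime = 0.01 / secondary_rate * 10 ** rng.normal(0.0, 0.1, 25)  # s

        # Fit log(lifetime) = a + b*log(rate); b near -1 indicates an inverse relation.
        b, a = np.polyfit(np.log10(secondary_rate), np.log10(lifetime), 1)
        print("fitted exponent b =", round(b, 2))

        # Predict time-to-failure from a monitored steady-state creep rate:
        monitored_rate = 3e-7
        print("predicted lifetime [s]:", round(10 ** (a + b * np.log10(monitored_rate))))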

  5. Predicting Plant-Accessible Water in the Critical Zone: Mountain Ecosystems in a Mediterranean Climate

    Science.gov (United States)

    Klos, P. Z.; Goulden, M.; Riebe, C. S.; Tague, C.; O'Geen, A. T.; Flinchum, B. A.; Safeeq, M.; Conklin, M. H.; Hart, S. C.; Asefaw Berhe, A.; Hartsough, P. C.; Holbrook, S.; Bales, R. C.

    2017-12-01

    Enhanced understanding of subsurface water storage, and the below-ground architecture and processes that create it, will advance our ability to predict how the impacts of climate change - including drought, forest mortality, wildland fire, and strained water security - will take form in the decades to come. Previous research has examined the importance of plant-accessible water in soil, but in upland landscapes within Mediterranean climates the soil is often only the upper extent of subsurface water storage. We draw insights from both this previous research and a case study of the Southern Sierra Critical Zone Observatory to: define attributes of subsurface storage, review observed patterns in its distribution, highlight nested methods for its estimation across scales, and showcase the fundamental processes controlling its formation. We observe that forest ecosystems at our sites subsist on lasting plant-accessible stores of subsurface water during the summer dry period and during multi-year droughts. This indicates that trees in these forest ecosystems are rooted deeply in the weathered, highly porous saprolite, which reaches up to 10-20 m beneath the surface. This confirms the importance of large volumes of subsurface water in supporting ecosystem resistance to climate and landscape change across a range of spatiotemporal scales. This research enhances the ability to predict the extent of deep subsurface storage across landscapes; aiding in the advancement of both critical zone science and the management of natural resources emanating from similar mountain ecosystems worldwide.

  6. Using plural modeling for predicting decisions made by adaptive adversaries

    International Nuclear Information System (INIS)

    Buede, Dennis M.; Mahoney, Suzanne; Ezell, Barry; Lathrop, John

    2012-01-01

    Incorporating an appropriate representation of the likelihood of terrorist decision outcomes into risk assessments associated with weapons of mass destruction attacks has been a significant problem for countries around the world. Developing these likelihoods gets at the heart of the most difficult predictive problems: human decision making, adaptive adversaries, and adversaries about which very little is known. A plural modeling approach is proposed that incorporates estimates of all critical uncertainties: who is the adversary and what skills and resources are available to him, what information is known to the adversary and what perceptions of the important facts are held by this group or individual, what does the adversary know about the countermeasure actions taken by the government in question, what are the adversary's objectives and the priorities of those objectives, what would trigger the adversary to start an attack and what kind of success does the adversary desire, how realistic is the adversary in estimating the success of an attack, how does the adversary make a decision and what type of model best predicts this decision-making process. A computational framework is defined to aggregate the predictions from a suite of models, based on this broad array of uncertainties. A validation approach is described that deals with a significant scarcity of data.

  7. Risk assessment and remedial policy evaluation using predictive modeling

    International Nuclear Information System (INIS)

    Linkov, L.; Schell, W.R.

    1996-01-01

    As a result of nuclear industry operation and accidents, large areas of natural ecosystems have been contaminated by radionuclides and toxic metals. Extensive societal pressure has been exerted to decrease the radiation dose to the population and to the environment. Thus, in making abatement and remediation policy decisions, not only economic costs but also human and environmental risk assessments are desired. This paper introduces a general framework for risk assessment and remedial policy evaluation using predictive modeling. Ecological risk assessment requires evaluation of the radionuclide distribution in ecosystems. The FORESTPATH model is used for predicting the radionuclide fate in forest compartments after deposition as well as for evaluating the efficiency of remedial policies. The time of intervention and the radionuclide deposition profile were predicted to be crucial for remediation efficiency. A risk assessment conducted for a critical group of forest users in Belarus shows that consumption of forest products (berries and mushrooms) leads to about a 0.004% risk of a fatal cancer annually. Cost-benefit analysis for forest cleanup suggests that complete removal of the organic layer is too expensive for application in Belarus and that a better methodology is required. In conclusion, the FORESTPATH modeling framework could have wide applications in environmental remediation of radionuclides and toxic metals as well as in dose reconstruction and risk assessment.

  8. Model-based automated testing of critical PLC programs.

    CERN Document Server

    Fernández Adiego, B; Tournier, J-C; González Suárez, V M; Bliudze, S

    2014-01-01

    Testing of critical PLC (Programmable Logic Controller) programs remains a challenging task for control system engineers as it can rarely be automated. This paper proposes a model-based approach which uses the BIP (Behavior, Interactions and Priorities) framework to perform automated testing of PLC programs developed with the UNICOS (UNified Industrial COntrol System) framework. This paper defines the translation procedure and rules from UNICOS to BIP which can be fully automated in order to hide the complexity of the underlying model from the control engineers. The approach is illustrated and validated through the study of a water treatment process.

  9. Development of a digital reactivity meter for criticality prediction and control rod worth evaluation in pressurized water reactors

    International Nuclear Information System (INIS)

    Kuramoto, Renato Y.R.; Miranda, Anselmo F.; Valladares, Gastao Lommez; Prado, Adelk C.

    2009-01-01

    In this work, we have proposed the development of a digital reactivity meter in order to monitor subcriticality continuously during the criticality approach in a PWR. A subcritical reactivity meter can provide an easy prediction of the estimated critical point prior to reactor criticality, without complicated hand calculation. Moreover, in order to reduce the interval of the Physics Tests from an economic point of view, a subcritical reactivity meter can evaluate the control rod worth from direct subcriticality measurement. In other words, the count rate of the Source Range (SR) detector recorded during the criticality approach could be used for subcriticality evaluation or control rod worth evaluation. Basically, a digital reactivity meter is based on the inverse solution of the kinetic equations of a reactor with an external neutron source in the one-point reactor model. There are some difficulties in the direct application of a digital reactivity meter to subcriticality measurement. When the inverse kinetics method is applied at a sufficiently high power level or to a core without an external neutron source, the neutron source term may be neglected. When applied at a lower power level or in the subcritical domain, however, the source effects must be taken into account. Furthermore, some treatment is needed when using the count rate of the Source Range (SR) detector as the input signal to the digital reactivity meter. To overcome these difficulties, we have proposed a digital reactivity meter combined with a methodology based on the modified Neutron Source Multiplication (NSM) method with correction factors for subcriticality measurements in PWRs. (author)
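
    A bare-bones sketch of the inverse point kinetics calculation underlying such a reactivity meter is given below, using a single effective delayed-neutron group and omitting the external-source term and the NSM corrections discussed in the paper; the kinetic parameters and the count-rate trace are placeholders.

        # Bare-bones inverse point kinetics (one effective delayed-neutron group):
        #   rho(t) = beta + (Lambda/n) * dn/dt - (lam * Lambda * C) / n
        # with the precursor concentration C integrated from
        #   dC/dt = (beta/Lambda) * n - lam * C.
        # Kinetic parameters and the "detector signal" n(t) are placeholders; the
        # external-source term is omitted.
        import numpy as np

        beta, lam, Lambda = 0.0065, 0.0767, 2.0e-5   # placeholder kinetic parameters
        dt = 0.01
        t = np.arange(0.0, 60.0, dt)
        n = 1.0 + 0.2 * (1.0 - np.exp(-t / 20.0))    # placeholder count-rate trace

        C = beta / (Lambda * lam) * n[0]             # equilibrium precursors at t = 0
        dndt = np.gradient(n, dt)
        rho = np.empty_like(n)
        for i in range(len(t)):
            rho[i] = beta + Lambda * dndt[i] / n[i] - lam * Lambda * C / n[i]
            C += dt * (beta / Lambda * n[i] - lam * C)

        print("reactivity estimate at t = 60 s: %.1f pcm" % (rho[-1] * 1e5))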

  10. Development of a digital reactivity meter for criticality prediction and control rod worth evaluation in pressurized water reactors

    Energy Technology Data Exchange (ETDEWEB)

    Kuramoto, Renato Y.R.; Miranda, Anselmo F.; Valladares, Gastao Lommez; Prado, Adelk C. [Eletrobras Termonuclear S.A. - ELETRONUCLEAR, Angra dos Reis, RJ (Brazil). Central Nuclear Almirante Alvaro Alberto], e-mail: kuramot@eletronuclear.gov.br

    2009-07-01

    In this work, we have proposed the development of a digital reactivity meter in order to monitor subcriticality continuously during the criticality approach in a PWR. A subcritical reactivity meter can provide an easy prediction of the estimated critical point prior to reactor criticality, without complicated hand calculation. Moreover, in order to reduce the interval of the Physics Tests from an economic point of view, a subcritical reactivity meter can evaluate the control rod worth from direct subcriticality measurement. In other words, the count rate of the Source Range (SR) detector recorded during the criticality approach could be used for subcriticality evaluation or control rod worth evaluation. Basically, a digital reactivity meter is based on the inverse solution of the kinetic equations of a reactor with an external neutron source in the one-point reactor model. There are some difficulties in the direct application of a digital reactivity meter to subcriticality measurement. When the inverse kinetics method is applied at a sufficiently high power level or to a core without an external neutron source, the neutron source term may be neglected. When applied at a lower power level or in the subcritical domain, however, the source effects must be taken into account. Furthermore, some treatment is needed when using the count rate of the Source Range (SR) detector as the input signal to the digital reactivity meter. To overcome these difficulties, we have proposed a digital reactivity meter combined with a methodology based on the modified Neutron Source Multiplication (NSM) method with correction factors for subcriticality measurements in PWRs. (author)

  11. Critical groups vs. representative person: dose calculations due to predicted releases from USEXA

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, N.L.D., E-mail: nelson.luiz@ctmsp.mar.mil.br [Centro Tecnologico da Marinha (CTM/SP), Sao Paulo, SP (Brazil); Rochedo, E.R.R., E-mail: elainerochedo@gmail.com [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Mazzilli, B.P., E-mail: mazzilli@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2013-07-01

    The critical group of the Centro Experimental Aramar (CEA) site was previously defined based on the effluent releases to the environment resulting from the facilities already operational at CEA. In this work, effective doses are calculated for members of the critical group considering the predicted potential uranium releases from the Uranium Hexafluoride Production Plant (USEXA). Basically, this work studies the behavior of the resulting doses in relation to the type of habit data used in the analysis, and two distinct situations are considered: (a) the utilization of average values obtained from official institutions (IBGE, IEA-SP, CNEN, IAEA) and from the literature; and (b) the utilization of the 95th percentile of the values derived from distributions fitted to the obtained habit data. The first option corresponds to the way data were used for the definition of the critical group of CEA in former assessments, while the second corresponds to the use of data in deterministic assessments, as recommended by ICRP for estimating doses to the so-called 'representative person'. (author)

  12. Critical groups vs. representative person: dose calculations due to predicted releases from USEXA

    International Nuclear Information System (INIS)

    Ferreira, N.L.D.; Rochedo, E.R.R.; Mazzilli, B.P.

    2013-01-01

    The critical group of the Centro Experimental Aramar (CEA) site was previously defined based on the effluent releases to the environment resulting from the facilities already operational at CEA. In this work, effective doses are calculated for members of the critical group considering the predicted potential uranium releases from the Uranium Hexafluoride Production Plant (USEXA). Basically, this work studies the behavior of the resulting doses in relation to the type of habit data used in the analysis, and two distinct situations are considered: (a) the utilization of average values obtained from official institutions (IBGE, IEA-SP, CNEN, IAEA) and from the literature; and (b) the utilization of the 95th percentile of the values derived from distributions fitted to the obtained habit data. The first option corresponds to the way data were used for the definition of the critical group of CEA in former assessments, while the second corresponds to the use of data in deterministic assessments, as recommended by ICRP for estimating doses to the so-called 'representative person'. (author)

  13. Web tools for predictive toxicology model building.

    Science.gov (United States)

    Jeliazkova, Nina

    2012-07-01

    The development and use of web tools in chemistry has accumulated more than 15 years of history already. Powered by the advances in the Internet technologies, the current generation of web systems are starting to expand into areas, traditional for desktop applications. The web platforms integrate data storage, cheminformatics and data analysis tools. The ease of use and the collaborative potential of the web is compelling, despite the challenges. The topic of this review is a set of recently published web tools that facilitate predictive toxicology model building. The focus is on software platforms, offering web access to chemical structure-based methods, although some of the frameworks could also provide bioinformatics or hybrid data analysis functionalities. A number of historical and current developments are cited. In order to provide comparable assessment, the following characteristics are considered: support for workflows, descriptor calculations, visualization, modeling algorithms, data management and data sharing capabilities, availability of GUI or programmatic access and implementation details. The success of the Web is largely due to its highly decentralized, yet sufficiently interoperable model for information access. The expected future convergence between cheminformatics and bioinformatics databases provides new challenges toward management and analysis of large data sets. The web tools in predictive toxicology will likely continue to evolve toward the right mix of flexibility, performance, scalability, interoperability, sets of unique features offered, friendly user interfaces, programmatic access for advanced users, platform independence, results reproducibility, curation and crowdsourcing utilities, collaborative sharing and secure access.

  14. Computational multi-fluid dynamics predictions of critical heat flux in boiling flow

    International Nuclear Information System (INIS)

    Mimouni, S.; Baudry, C.; Guingo, M.; Lavieville, J.; Merigoux, N.; Mechitoua, N.

    2016-01-01

    Highlights: • A new mechanistic model dedicated to DNB has been implemented in the Neptune_CFD code. • The model has been validated against 150 tests. • Neptune_CFD code is a CFD tool dedicated to boiling flows. - Abstract: Extensive efforts have been made in the last five decades to evaluate the boiling heat transfer coefficient and the critical heat flux in particular. Boiling crisis remains a major limiting phenomenon for the analysis of operation and safety of both nuclear reactors and conventional thermal power systems. As a consequence, models dedicated to boiling flows have been improved. For example, Reynolds Stress Transport Model, polydispersion and two-phase flow wall law have been recently implemented. In a previous work, we have evaluated computational fluid dynamics results against single-phase liquid water tests equipped with a mixing vane and against two-phase boiling cases. The objective of this paper is to propose a new mechanistic model in a computational multi-fluid dynamics tool leading to wall temperature excursion and onset of boiling crisis. Critical heat flux is calculated against 150 tests and the mean relative error between calculations and experimental values is equal to 8.3%. The model tested covers a large physics scope in terms of mass flux, pressure, quality and channel diameter. Water and R12 refrigerant fluid are considered. Furthermore, it was found that the sensitivity to the grid refinement was acceptable.

  15. Computational multi-fluid dynamics predictions of critical heat flux in boiling flow

    Energy Technology Data Exchange (ETDEWEB)

    Mimouni, S., E-mail: stephane.mimouni@edf.fr; Baudry, C.; Guingo, M.; Lavieville, J.; Merigoux, N.; Mechitoua, N.

    2016-04-01

    Highlights: • A new mechanistic model dedicated to DNB has been implemented in the Neptune-CFD code. • The model has been validated against 150 tests. • Neptune-CFD code is a CFD tool dedicated to boiling flows. - Abstract: Extensive efforts have been made in the last five decades to evaluate the boiling heat transfer coefficient and the critical heat flux in particular. Boiling crisis remains a major limiting phenomenon for the analysis of operation and safety of both nuclear reactors and conventional thermal power systems. As a consequence, models dedicated to boiling flows have been improved. For example, Reynolds Stress Transport Model, polydispersion and two-phase flow wall law have been recently implemented. In a previous work, we have evaluated computational fluid dynamics results against single-phase liquid water tests equipped with a mixing vane and against two-phase boiling cases. The objective of this paper is to propose a new mechanistic model in a computational multi-fluid dynamics tool leading to wall temperature excursion and onset of boiling crisis. Critical heat flux is calculated against 150 tests and the mean relative error between calculations and experimental values is equal to 8.3%. The model tested covers a large physics scope in terms of mass flux, pressure, quality and channel diameter. Water and R12 refrigerant fluid are considered. Furthermore, it was found that the sensitivity to the grid refinement was acceptable.

  16. Predictions of models for environmental radiological assessment

    International Nuclear Information System (INIS)

    Peres, Sueli da Silva; Lauria, Dejanira da Costa; Mahler, Claudio Fernando

    2011-01-01

    In the field of environmental impact assessment, models are used for estimating the source term, environmental dispersion and transfer of radionuclides, exposure pathways, radiation dose, and the risk to human beings. Although it is recognized that specific local data are important for improving the quality of dose assessment results, in practice obtaining such data can be very difficult and expensive. Sources of uncertainty are numerous, among which we can cite: the subjectivity of modelers, exposure scenarios and pathways, the codes used, and general parameters. The various models available use different mathematical approaches with different complexities that can result in different predictions. Thus, for the same inputs, different models can produce very different outputs. This paper briefly presents the main advances in the field of environmental radiological assessment that aim to improve the reliability of the models used in the assessment of environmental radiological impact. A model intercomparison exercise supplied incompatible results for 137Cs and 60Co, reinforcing the need to develop reference methodologies for environmental radiological assessment that allow dose estimates to be compared on a common basis. The results of the intercomparison exercise are presented briefly. (author)

  17. A Predictive Maintenance Model for Railway Tracks

    DEFF Research Database (Denmark)

    Li, Rui; Wen, Min; Salling, Kim Bang

    2015-01-01

    presents a mathematical model based on Mixed Integer Programming (MIP) which is designed to optimize the predictive railway tamping activities for ballasted track over a time horizon of up to four years. The objective function is set up to minimize the actual costs for the tamping machine (measured by time ...). Five technical and economic aspects are taken into account to schedule tamping: (1) track degradation of the standard deviation of the longitudinal level over time; (2) track geometrical alignment; (3) track quality thresholds based on the train speed limits; (4) the dependency of the track quality...

  18. Lyapunov exponent and criticality in the Hamiltonian mean field model

    Science.gov (United States)

    Filho, L. H. Miranda; Amato, M. A.; Rocha Filho, T. M.

    2018-03-01

    We investigate the dependence of the largest Lyapunov exponent (LLE) of an N-particle self-gravitating ring model at equilibrium on the number of particles and on the energy. This model has a continuous phase transition from a ferromagnetic to a homogeneous phase, and we numerically confirm with large-scale simulations the existence of a critical exponent associated with the LLE, although at variance with the theoretical estimate. The existence of strong chaos in the magnetized state, evidenced by a positive Lyapunov exponent, is explained by the coupling of individual particle oscillations to the diffusive motion of the center of mass of the system, and it also results in a change of the scaling of the LLE with the number of particles. We also discuss thoroughly the validity and limits of the approximations made by a geometrical model for the analytic estimate of the LLE.
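
    As a generic illustration of how a largest Lyapunov exponent is estimated numerically (using the one-dimensional logistic map rather than the paper's N-particle Hamiltonian model), the sketch below averages the logarithm of the local stretching rate along an orbit.

        # Generic LLE estimate for the fully chaotic logistic map x -> r*x*(1 - x):
        # average log|f'(x_n)| along the orbit. This is an illustration of the
        # numerical idea only, not the simulation method used in the paper.
        import math

        r = 4.0
        x = 0.2
        total, n_steps = 0.0, 100000
        for _ in range(n_steps):
            total += math.log(abs(r * (1.0 - 2.0 * x)))   # local stretching rate
            x = r * x * (1.0 - x)                         # iterate the map
        print("estimated LLE:", total / n_steps)          # approaches ln(2) ≈ 0.693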

  19. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based both on the authors' experience and on their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution of partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.

  20. Effective modelling for predictive analytics in data science ...

    African Journals Online (AJOL)

    Effective modelling for predictive analytics in data science. ... the near-absence of empirical or factual predictive analytics in the mainstream research going on ... Keywords: Predictive Analytics, Big Data, Business Intelligence, Project Planning.

  1. Modeling critical episodes of air pollution by PM10 in Santiago, Chile: Comparison of the predictive efficiency of parametric and non-parametric statistical models

    Directory of Open Access Journals (Sweden)

    Sergio A. Alvarado

    2010-12-01

    Full Text Available Objective: To evaluate the predictive efficiency of two statistical models (one parametric and the other non-parametric) for predicting next-day critical episodes of air pollution by particulate matter (PM10) that exceed the daily air quality standard in Santiago, Chile, using the next-day PM10 maximum 24 h value. Accurate prediction of such episodes would allow the health authority to decree restrictive measures that lessen the severity of the episode and, consequently, protect the community's health. Methods: We used the PM10 concentrations registered by a station of the MACAM-2 air quality monitoring network, considering 152 daily observations of 14 variables, together with meteorological information recorded during the years 2001 to 2004. Parametric Gamma models were fitted using the statistical package STATA v11, and non-parametric models were fitted using a demo version of the statistical software MARS v2.0 distributed by Salford Systems. Results: Both modeling approaches show a high correlation between observed and predicted values. The Gamma models achieve better hits than MARS for PM10 concentrations with values ...
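
    A sketch of a parametric Gamma regression of the general kind compared in the study is shown below, using synthetic predictors rather than the MACAM-2 data; the non-parametric MARS counterpart is not reproduced here.

        # Sketch of a Gamma GLM for next-day maximum PM10 (synthetic placeholder
        # predictors; the Gamma family's default link is used).
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(10)
        n = 300
        pm10_today = rng.gamma(shape=4.0, scale=30.0, size=n)   # ug/m^3, placeholder
        wind_speed = rng.uniform(0.5, 5.0, n)                   # m/s, placeholder
        inversion = rng.integers(0, 2, n)                       # thermal-inversion flag
        mu = np.clip(30.0 + 0.6 * pm10_today - 10.0 * wind_speed + 25.0 * inversion,
                     10.0, None)
        pm10_tomorrow = rng.gamma(shape=8.0, scale=mu / 8.0)    # Gamma-distributed target

        X = sm.add_constant(np.column_stack([pm10_today, wind_speed, inversion]))
        fit = sm.GLM(pm10_tomorrow, X, family=sm.families.Gamma()).fit()
        print(fit.params)
        print("days flagged above 150 ug/m^3:", int((fit.predict(X) > 150.0).sum()))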

  2. Integrating artificial neural networks and empirical correlations for the prediction of water-subcooled critical heat flux

    International Nuclear Information System (INIS)

    Mazzola, A.

    1997-01-01

    The critical heat flux (CHF) is an important parameter for the design of nuclear reactors, heat exchangers, and other boiling heat transfer units. Recently, the CHF in water-subcooled flow boiling at high mass flux and subcooling has been thoroughly studied in relation to the cooling of high-heat-flux components in thermonuclear fusion reactors. Due to the specific thermal-hydraulic situation, very few of the existing correlations, originally developed for operating conditions typical of pressurized water reactors, are able to provide consistent predictions of water-subcooled-flow-boiling CHF at high heat fluxes. Therefore, alternative prediction techniques are being investigated. Among these, artificial neural networks (ANNs) have the advantage of not requiring a formal model structure to fit the experimental data; however, their main drawbacks are the loss of model transparency ('black-box' character) and the lack of any indicator for evaluating the accuracy and reliability of the ANN answer when 'never-seen' patterns are presented. In the present work, the prediction of CHF is approached by a hybrid system which couples a heuristic correlation with a neural network. The ANN's role is to predict a datum-dependent parameter required by the analytical correlation; this parameter was instead set to a constant value obtained by usual best-fitting techniques when a pure analytical approach was adopted. Upper and lower boundaries can be assigned to the parameter value, thus avoiding the case of unexpected and unpredictable answer failure. The present approach maintains the advantage of the analytical model analysis, and it partially overcomes the 'black-box' character typical of the straight application of ANNs because the neural network role is limited to the correlation tuning. The proposed methodology allows us to achieve accurate results, and it is likely to be suitable for thermal-hydraulic and heat transfer data processing. (author)

  3. A hybrid model to predict the onset of gas entrainment with surface tension effects

    International Nuclear Information System (INIS)

    Saleh, W.; Bowden, R.C.; Hassan, I.G.; Kadem, L.

    2008-01-01

    The onset of gas entrainment in a single downward-oriented discharge from a stratified gas-liquid region was modeled. The assumptions made in the development of the model reduced the problem to that of a potential flow. The discharge was modeled as a point sink. Through use of the Kelvin-Laplace equation, the model included the effects of surface tension. The resulting model required further knowledge of the flow field, specifically the dip radius of curvature prior to the onset of gas entrainment. The dip shape and size were investigated experimentally, and correlations were provided to characterize the dip in terms of the discharge Froude number. The experimental correlation was used in conjunction with the theoretical model to predict the critical height. The results showed that by including surface tension effects the predicted critical height shows excellent agreement with experimental data. Surface tension reduces the critical height through the Bond number.

  4. Predicting fatigue and psychophysiological test performance from speech for safety critical environments

    Directory of Open Access Journals (Sweden)

    Khan Richard Baykaner

    2015-08-01

    Full Text Available Automatic systems for estimating operator fatigue have application in safety-critical environments. A system which could estimate level of fatigue from speech would have application in domains where operators engage in regular verbal communication as part of their duties. Previous studies on the prediction of fatigue from speech have been limited because of their reliance on subjective ratings and because they lack comparison to other methods for assessing fatigue. In this paper we present an analysis of voice recordings and psychophysiological test scores collected from seven aerospace personnel during a training task in which they remained awake for 60 hours. We show that voice features and test scores are affected by both the total time spent awake and the time position within each subject’s circadian cycle. However, we show that time spent awake and time of day information are poor predictors of the test results; while voice features can give good predictions of the psychophysiological test scores and sleep latency. Mean absolute errors of prediction are possible within about 17.5% for sleep latency and 5-12% for test scores. We discuss the implications for the use of voice as a means to monitor the effects of fatigue on cognitive performance in practical applications.
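
    A hedged sketch of the kind of analysis implied above: regress a psychophysiological test score on acoustic voice features and report the mean absolute error of out-of-sample predictions. The feature set, model choice, and data are assumptions for illustration, not the study's actual pipeline.

```python
# Illustrative sketch only: predict a test score from voice features and
# report mean absolute error, echoing the style of evaluation described above.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
n = 120
X = rng.normal(size=(n, 6))               # assumed acoustic features (F0, jitter, shimmer, ...)
score = 50 + 5 * X[:, 0] - 3 * X[:, 1] + rng.normal(scale=2.0, size=n)   # synthetic test score

model = Ridge(alpha=1.0)
pred = cross_val_predict(model, X, score, cv=5)       # out-of-sample predictions
mae = mean_absolute_error(score, pred)
print(f"MAE = {mae:.2f} ({100 * mae / score.mean():.1f}% of the mean score)")
```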

  5. Review on modeling and simulation of interdependent critical infrastructure systems

    International Nuclear Information System (INIS)

    Ouyang, Min

    2014-01-01

    Modern societies are becoming increasingly dependent on critical infrastructure systems (CISs) to provide essential services that support economic prosperity, governance, and quality of life. These systems are not alone but interdependent at multiple levels to enhance their overall performance. However, recent worldwide events such as the 9/11 terrorist attack, Gulf Coast hurricanes, the Chile and Japanese earthquakes, and even heat waves have highlighted that interdependencies among CISs increase the potential for cascading failures and amplify the impact of both large and small scale initial failures into events of catastrophic proportions. To better understand CISs to support planning, maintenance and emergency decision making, modeling and simulation of interdependencies across CISs has recently become a key field of study. This paper reviews the studies in the field and broadly groups the existing modeling and simulation approaches into six types: empirical approaches, agent based approaches, system dynamics based approaches, economic theory based approaches, network based approaches, and others. Different studies for each type of the approaches are categorized and reviewed in terms of fundamental principles, such as research focus, modeling rationale, and the analysis method, while different types of approaches are further compared according to several criteria, such as the notion of resilience. Finally, this paper offers future research directions and identifies critical challenges in the field. - Highlights: • Modeling approaches on interdependent critical infrastructure systems are reviewed. • I mainly review empirical, agent-based, system-dynamics, economic, network approaches. • Studies by each approach are sorted out in terms of fundamental principles. • Different approaches are further compared with resilience as the main criterion

  6. Setting the vision: applied patient-reported outcomes and smart, connected digital healthcare systems to improve patient-centered outcomes prediction in critical illness.

    Science.gov (United States)

    Wysham, Nicholas G; Abernethy, Amy P; Cox, Christopher E

    2014-10-01

    Prediction models in critical illness are generally limited to short-term mortality and uncommonly include patient-centered outcomes. Current outcome prediction tools are also insensitive to individual context or evolution in healthcare practice, potentially limiting their value over time. Improved prognostication of patient-centered outcomes in critical illness could enhance decision-making quality in the ICU. Patient-reported outcomes have emerged as precise methodological measures of patient-centered variables and have been successfully employed using diverse platforms and technologies, enhancing the value of research in critical illness survivorship and in direct patient care. The learning health system is an emerging ideal characterized by integration of multiple data sources into a smart and interconnected health information technology infrastructure with the goal of rapidly optimizing patient care. We propose a vision of a smart, interconnected learning health system with integrated electronic patient-reported outcomes to optimize patient-centered care, including critical care outcome prediction. A learning health system infrastructure integrating electronic patient-reported outcomes may aid in the management of critical illness-associated conditions and yield tools to improve prognostication of patient-centered outcomes in critical illness.

  7. Combining GPS measurements and IRI model predictions

    International Nuclear Information System (INIS)

    Hernandez-Pajares, M.; Juan, J.M.; Sanz, J.; Bilitza, D.

    2002-01-01

    The free electrons distributed in the ionosphere (between one hundred and thousands of km in height) produce a frequency-dependent effect on Global Positioning System (GPS) signals: a delay in the pseudo-range and an advance in the carrier phase. These effects are proportional to the columnar electron density between the satellite and receiver, i.e. the integrated electron density along the ray path. Global ionospheric TEC (total electron content) maps can be obtained with GPS data from a network of ground IGS (International GPS Service) reference stations with an accuracy of a few TEC units. The comparison with the TOPEX TEC, mainly measured over the oceans far from the IGS stations, shows a mean bias and standard deviation of about 2 and 5 TECUs respectively. The discrepancies between the STEC predictions and the observed values show an RMS typically below 5 TECUs (which also includes the alignment code noise). The existence of a growing database of 2-hourly global TEC maps with a resolution of 5x2.5 degrees in longitude and latitude can be used to improve the IRI prediction capability of the TEC. When the IRI predictions and the GPS estimations are compared for a three-month period around the solar maximum, they are in good agreement for middle latitudes. An overestimation of IRI TEC has been found at the extreme latitudes, the IRI predictions being typically two times higher than the GPS estimations. Finally, local fits of the IRI model can be done by tuning the SSN from STEC GPS observations

  8. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference making as few assumptions about restrictive parametric models as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models by one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.

  9. Mathematical models for indoor radon prediction

    International Nuclear Information System (INIS)

    Malanca, A.; Pessina, V.; Dallara, G.

    1995-01-01

    It is known that the indoor radon (Rn) concentration can be predicted by means of mathematical models. The simplest model relies on two variables only: the Rn source strength and the air exchange rate. In the Lawrence Berkeley Laboratory (LBL) model several environmental parameters are combined into a complex equation; in addition, a correlation between the ventilation rate and the Rn entry rate from the soil is admitted. The measurements were carried out using activated carbon canisters. Seventy-five measurements of Rn concentration were made inside two rooms on the second floor of a building block. One of the rooms had a single-glazed window whereas the other had a double-pane window. During three different experimental protocols, the mean Rn concentration was always higher in the room with the double-glazed window. That behavior can be accounted for by the simplest model. A further set of 450 Rn measurements was collected inside a ground-floor room with a grounding well in it. This trend may be accounted for by the LBL model
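
    A short sketch of the simplest two-variable model mentioned above, assuming the usual well-mixed single-zone mass balance in which the steady-state indoor concentration equals the radon entry rate divided by the effective removal rate (air exchange plus decay). The numerical values are illustrative, not measurements from the study; the example only shows why a tighter (double-glazed) room ends up with a higher concentration.

```python
# Simplest indoor radon model: well-mixed single zone,
#   dC/dt = S/V - (lambda_v + lambda_Rn) * C
# so at steady state C = S / (V * (lambda_v + lambda_Rn)).
# Values below are illustrative assumptions, not data from the study.

LAMBDA_RN = 7.56e-3          # radon decay constant, 1/h

def steady_state_radon(entry_rate_bq_per_h, volume_m3, air_exchange_per_h):
    """Steady-state indoor radon concentration in Bq/m^3."""
    return entry_rate_bq_per_h / (volume_m3 * (air_exchange_per_h + LAMBDA_RN))

# Example: same source strength, single- vs double-glazed room (lower ventilation)
print(steady_state_radon(2000.0, 50.0, 0.8))   # leakier, single-glazed window
print(steady_state_radon(2000.0, 50.0, 0.3))   # tighter, double-glazed window
```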

  10. Towards predictive models for transitionally rough surfaces

    Science.gov (United States)

    Abderrahaman-Elena, Nabil; Garcia-Mayoral, Ricardo

    2017-11-01

    We analyze and model the previously presented decomposition for flow variables in DNS of turbulence over transitionally rough surfaces. The flow is decomposed into two contributions: one produced by the overlying turbulence, which has no footprint of the surface texture, and one induced by the roughness, which is essentially the time-averaged flow around the surface obstacles, but modulated in amplitude by the first component. The roughness-induced component closely resembles the laminar steady flow around the roughness elements at the same non-dimensional roughness size. For small - yet transitionally rough - textures, the roughness-free component is essentially the same as over a smooth wall. Based on these findings, we propose predictive models for the onset of the transitionally rough regime. Project supported by the Engineering and Physical Sciences Research Council (EPSRC).

  11. Predicting future glacial lakes in Austria using different modelling approaches

    Science.gov (United States)

    Otto, Jan-Christoph; Helfricht, Kay; Prasicek, Günther; Buckel, Johannes; Keuschnig, Markus

    2017-04-01

    Glacier retreat is one of the most apparent consequences of temperature rise in the 20th and 21st centuries in the European Alps. In Austria, more than 240 new lakes have formed in glacier forefields since the Little Ice Age. A similar signal is reported from many mountain areas worldwide. Glacial lakes can constitute important environmental and socio-economic impacts on high mountain systems including water resource management, sediment delivery, natural hazards, energy production and tourism. Their development significantly modifies the landscape configuration and visual appearance of high mountain areas. Knowledge of the location, number and extent of these future lakes can be used to assess potential impacts on high mountain geo-ecosystems and upland-lowland interactions. Information on new lakes is critical to appraise emerging threats and potentials for society. The recent development of regional ice thickness models and their combination with high resolution glacier surface data allows predicting the topography below current glaciers by subtracting ice thickness from glacier surface. Analyzing these modelled glacier bed surfaces reveals overdeepenings that represent potential locations for future lakes. In order to predict the location of future glacial lakes below recent glaciers in the Austrian Alps we apply different ice thickness models using high resolution terrain data and glacier outlines. The results are compared and validated with ice thickness data from geophysical surveys. Additionally, we run the models on three different glacier extents provided by the Austrian Glacier Inventories from 1969, 1998 and 2006. Results of this historical glacier extent modelling are compared to existing glacier lakes and discussed focusing on geomorphological impacts on lake evolution. We discuss model performance and observed differences in the results in order to assess the approach for a realistic prediction of future lake locations. The presentation delivers

  12. Resource-estimation models and predicted discovery

    International Nuclear Information System (INIS)

    Hill, G.W.

    1982-01-01

    Resources have been estimated by predictive extrapolation from past discovery experience, by analogy with better explored regions, or by inference from evidence of depletion of targets for exploration. Changes in technology and new insights into geological mechanisms have occurred sufficiently often in the long run to form part of the pattern of mature discovery experience. The criterion, that a meaningful resource estimate needs an objective measure of its precision or degree of uncertainty, excludes 'estimates' based solely on expert opinion. This is illustrated by development of error measures for several persuasive models of discovery and production of oil and gas in USA, both annually and in terms of increasing exploration effort. Appropriate generalizations of the models resolve many points of controversy. This is illustrated using two USA data sets describing discovery of oil and of U3O8; the latter set highlights an inadequacy of available official data. Review of the oil-discovery data set provides a warrant for adjusting the time-series prediction to a higher resource figure for USA petroleum. (author)

  13. Prediction of pipeline corrosion rate based on grey Markov models

    International Nuclear Information System (INIS)

    Chen Yonghong; Zhang Dafa; Peng Guichu; Wang Yuemin

    2009-01-01

    Based on a model combining a grey model with a Markov model, the prediction of the corrosion rate of nuclear power pipelines was studied. The grey model was improved to obtain an optimized unbiased grey model. This new model was used to predict the trend of the corrosion rate, and the Markov model was used to predict the residual errors. In order to improve the prediction precision, a rolling operation method was used in these prediction processes. The results indicate that the improvement to the grey model is effective, that the prediction precision of the new model combining the optimized unbiased grey model and the Markov model is better, and that the rolling operation method may improve the prediction precision further. (authors)
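
    A hedged sketch of a plain GM(1,1) grey forecast with a crude residual-mean correction, to illustrate the grey-plus-residual-correction idea described above; the optimization that yields the unbiased grey model and the full Markov state-transition treatment of the residuals are not reproduced here.

```python
# Illustrative GM(1,1) grey forecast with a crude residual correction.
# This is a generic sketch of the grey + residual-correction idea, not the
# optimized unbiased grey / Markov model of the cited study.
import numpy as np

def gm11_forecast(x, n_ahead=1):
    """Fit GM(1,1) to a positive series x and forecast n_ahead further points."""
    x = np.asarray(x, dtype=float)
    x1 = np.cumsum(x)                                   # accumulated generating operation
    z = 0.5 * (x1[1:] + x1[:-1])                        # background values
    B = np.column_stack([-z, np.ones_like(z)])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]     # grey development / control coefficients
    k = np.arange(len(x) + n_ahead)
    x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a    # prediction in the accumulated domain
    return np.diff(x1_hat, prepend=0.0)                 # inverse accumulation -> original domain

corrosion = np.array([0.12, 0.13, 0.15, 0.14, 0.17, 0.18])   # hypothetical corrosion rates
fit = gm11_forecast(corrosion, n_ahead=1)
residuals = corrosion - fit[:len(corrosion)]
next_value = fit[-1] + residuals.mean()                 # crude stand-in for the Markov residual step
print(next_value)
```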

  14. An Operational Model for the Prediction of Jet Blast

    Science.gov (United States)

    2012-01-09

    This paper presents an operational model for the prediction of jet blast. The model was developed based upon three modules: a jet exhaust model, a jet centerline decay model, and an aircraft motion model. The final analysis was compared with d...

  15. New model for burnout prediction in channels of various cross-section

    Energy Technology Data Exchange (ETDEWEB)

    Bobkov, V.P.; Kozina, N.V.; Vinogrado, V.N.; Zyatnina, O.A. [Institute of Physics and Power Engineering, Kaluga (Russian Federation)

    1995-09-01

    The model developed to predict the critical heat flux (CHF) in various channels is presented together with the results of a data analysis. The model is a realization of a relative method of describing CHF, based on data for a round tube and on a system of correction factors. The results of the data description presented here are for rectangular and triangular channels, annuli, and rod bundles.

  16. A model to predict stream water temperature across the conterminous USA

    Science.gov (United States)

    Catalina Segura; Peter Caldwell; Ge Sun; Steve McNulty; Yang Zhang

    2014-01-01

    Stream water temperature (ts) is a critical water quality parameter for aquatic ecosystems. However, ts records are sparse or nonexistent in many river systems. In this work, we present an empirical model to predict ts at the site scale across the USA. The model, derived using data from 171 reference sites selected from the Geospatial Attributes of Gages for Evaluating...

  17. Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics

    Directory of Open Access Journals (Sweden)

    Cecilia Noecker

    2015-03-01

    Full Text Available Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1–2 weeks post-infection. Several interventions against HIV including vaccines and antiretroviral prophylaxis target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because predictions of such models have been rarely compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the “standard” mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to provide the observed rate of SIV load increase in the blood of infected animals. Interestingly and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode of virus production. Second, this mathematical model was not able to accurately describe the change in experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral

  18. Self-Organized Criticality in an Anisotropic Earthquake Model

    Science.gov (United States)

    Li, Bin-Quan; Wang, Sheng-Jun

    2018-03-01

    We have made an extensive numerical study of a modified model proposed by Olami, Feder, and Christensen to describe earthquake behavior. Two situations were considered in this paper. In the first, the energy of the unstable site is redistributed to its nearest neighbors randomly, not equally, and the site itself is reset to zero. In the second, the energy of the unstable site is redistributed to its nearest neighbors randomly and the site keeps some energy for itself instead of being reset to zero. Different boundary conditions were considered as well. By analyzing the distribution of earthquake sizes, we found that self-organized criticality can be excited only in the conservative case or the approximately conservative case in the above situations. Some evidence indicated that the critical exponents of both of the above situations and of the original OFC model tend to the same result in the conservative case. The only difference is that the avalanche size in the original model is bigger. This result may be closer to the real world; after all, every crustal plate has a different size. Supported by National Natural Science Foundation of China under Grant Nos. 11675096 and 11305098, the Fundamental Research Funds for the Central Universities under Grant No. GK201702001, FPALAB-SNNU under Grant No. 16QNGG007, and Interdisciplinary Incubation Project of SNU under Grant No. 5
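
    A hedged sketch of the first variant described above: the energy of an unstable site is split randomly (not equally) among its four nearest neighbours and the site is reset to zero, on a small lattice with open boundaries and conservative redistribution. Lattice size, threshold, and drive are illustrative assumptions, not the parameters of the study.

```python
# Illustrative OFC-type earthquake model with random (not equal) redistribution
# of the unstable site's energy to its nearest neighbours; lattice size, drive
# and threshold below are assumptions for demonstration only.
import numpy as np

L, E_CRIT, ALPHA = 20, 1.0, 1.0          # ALPHA = 1.0 -> conservative redistribution
rng = np.random.default_rng(0)
energy = rng.uniform(0.0, E_CRIT, size=(L, L))

def relax(energy):
    """Topple unstable sites until none remain; return the avalanche size."""
    size = 0
    while True:
        unstable = np.argwhere(energy >= E_CRIT)
        if len(unstable) == 0:
            return size
        for i, j in unstable:
            e = energy[i, j]
            energy[i, j] = 0.0                        # reset the toppled site
            neigh = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
            shares = rng.dirichlet(np.ones(4))        # random, not equal, split
            for (ni, nj), s in zip(neigh, shares):
                if 0 <= ni < L and 0 <= nj < L:       # open boundaries lose energy
                    energy[ni, nj] += ALPHA * e * s
            size += 1

sizes = []
for _ in range(2000):
    energy += E_CRIT - energy.max()                   # uniform slow drive to threshold
    sizes.append(relax(energy))
print(np.bincount(sizes)[:10])                        # avalanche-size statistics
```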

  19. Reliable critical sized defect rodent model for cleft palate research.

    Science.gov (United States)

    Mostafa, Nesrine Z; Doschak, Michael R; Major, Paul W; Talwar, Reena

    2014-12-01

    Suitable animal models are necessary to test the efficacy of new bone grafting therapies in cleft palate surgery. Rodent models of cleft palate are available but have limitations. This study compared and modified mid-palate cleft (MPC) and alveolar cleft (AC) models to determine the most reliable and reproducible model for bone grafting studies. Published MPC model (9 × 5 × 3 mm³) lacked sufficient information for tested rats. Our initial studies utilizing AC model (7 × 4 × 3 mm³) in 8 and 16 weeks old Sprague Dawley (SD) rats revealed injury to adjacent structures. After comparing anteroposterior and transverse maxillary dimensions in 16 weeks old SD and Wistar rats, virtual planning was performed to modify MPC and AC defects dimensions, taking the adjacent structures into consideration. Modified MPC (7 × 2.5 × 1 mm³) and AC (5 × 2.5 × 1 mm³) defects were employed in 16 weeks old Wistar rats and healing was monitored by micro-computed tomography and histology. Maxillary dimensions in SD and Wistar rats were not significantly different. Preoperative virtual planning enhanced postoperative surgical outcomes. Bone healing occurred at defect margin leaving central bone void confirming the critical size nature of the modified MPC and AC defects. Presented modifications for MPC and AC models created clinically relevant and reproducible defects. Copyright © 2014 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  20. Curing critical links in oscillator networks as power flow models

    International Nuclear Information System (INIS)

    Rohden, Martin; Meyer-Ortmanns, Hildegard; Witthaut, Dirk; Timme, Marc

    2017-01-01

    Modern societies crucially depend on the robust supply with electric energy so that blackouts of power grids can have far reaching consequences. Typically, large scale blackouts take place after a cascade of failures: the failure of a single infrastructure component, such as a critical transmission line, results in several subsequent failures that spread across large parts of the network. Improving the robustness of a network to prevent such secondary failures is thus key for assuring a reliable power supply. In this article we analyze the nonlocal rerouting of power flows after transmission line failures for a simplified AC power grid model and compare different strategies to improve network robustness. We identify critical links in the grid and compute alternative pathways to quantify the grid’s redundant capacity and to find bottlenecks along the pathways. Different strategies are developed and tested to increase transmission capacities to restore stability with respect to transmission line failures. We show that local and nonlocal strategies typically perform alike: one can equally well cure critical links by providing backup capacities locally or by extending the capacities of bottleneck links at remote locations. (paper)

  1. Data driven propulsion system weight prediction model

    Science.gov (United States)

    Gerth, Richard J.

    1994-10-01

    The objective of the research was to develop a method to predict the weight of paper engines, i.e., engines that are in the early stages of development. The impetus for the project was the Single Stage To Orbit (SSTO) project, where engineers need to evaluate alternative engine designs. Since the SSTO is a performance-driven project, the performance models for alternative designs were well understood. The next tradeoff is weight. Since it is known that engine weight varies with thrust levels, a model is required that would allow discrimination between engines that produce the same thrust. Above all, the model had to be rooted in data with assumptions that could be justified based on the data. The general approach was to collect data on as many existing engines as possible and build a statistical model of the engine's weight as a function of various component performance parameters. This was considered a reasonable level to begin the project because the data would be readily available, and it would be at the level of most paper engines, prior to detailed component design.
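
    A hedged sketch of the general approach described above: regress the logarithm of engine weight on log-transformed component performance parameters across a set of existing engines, so the fitted coefficients act as power-law exponents. The feature names and the data are hypothetical stand-ins, not the study's engine database.

```python
# Illustrative sketch of a data-driven weight model: regress log(weight) on
# log-transformed performance parameters for a set of existing engines.
# Feature names and data are hypothetical, not the study's actual database.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 40
thrust = rng.uniform(1e5, 2e6, n)            # N
chamber_p = rng.uniform(5e6, 2e7, n)         # Pa
expansion = rng.uniform(10, 80, n)           # nozzle expansion ratio
weight = 0.002 * thrust**0.9 * (chamber_p / 1e7)**-0.2 * expansion**0.1 \
         * rng.lognormal(sigma=0.1, size=n)  # synthetic "true" relation plus scatter

X = np.log(np.column_stack([thrust, chamber_p, expansion]))
model = LinearRegression().fit(X, np.log(weight))
print(model.coef_, model.intercept_)          # exponents of the fitted power law
```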

  2. Predictive modeling of emergency cesarean delivery.

    Directory of Open Access Journals (Sweden)

    Carlos Campillo-Artero

    Full Text Available To increase discriminatory accuracy (DA) for emergency cesarean sections (ECSs). We prospectively collected data on and studied all 6,157 births occurring in 2014 at four public hospitals located in three different autonomous communities of Spain. To identify risk factors (RFs) for ECS, we used likelihood ratios and logistic regression, fitted a classification tree (CTREE), and analyzed a random forest model (RFM). We used the areas under the receiver-operating-characteristic (ROC) curves (AUCs) to assess their DA. The magnitude of the LR+ for all putative individual RFs and ORs in the logistic regression models was low to moderate. Except for parity, all putative RFs were positively associated with ECS, including hospital fixed-effects and night-shift delivery. The DA of all logistic models ranged from 0.74 to 0.81. The most relevant RFs (pH, induction, and previous C-section) in the CTREEs showed the highest ORs in the logistic models. The DA of the RFM and its most relevant interaction terms was even higher (AUC = 0.94; 95% CI: 0.93-0.95). Putative fetal, maternal, and contextual RFs alone fail to achieve reasonable DA for ECS. It is the combination of these RFs and the interactions between them at each hospital that make it possible to improve the DA for the type of delivery and tailor interventions through prediction to improve the appropriateness of ECS indications.

  3. Ventromedial Frontal Cortex Is Critical for Guiding Attention to Reward-Predictive Visual Features in Humans.

    Science.gov (United States)

    Vaidya, Avinash R; Fellows, Lesley K

    2015-09-16

    Adaptively interacting with our environment requires extracting information that will allow us to successfully predict reward. This can be a challenge, particularly when there are many candidate cues, and when rewards are probabilistic. Recent work has demonstrated that visual attention is allocated to stimulus features that have been associated with reward on previous trials. The ventromedial frontal lobe (VMF) has been implicated in learning in dynamic environments of this kind, but the mechanism by which this region influences this process is not clear. Here, we hypothesized that the VMF plays a critical role in guiding attention to reward-predictive stimulus features based on feedback. We tested the effects of VMF damage in human subjects on a visual search task in which subjects were primed to attend to task-irrelevant colors associated with different levels of reward, incidental to the search task. Consistent with previous work, we found that distractors had a greater influence on reaction time when they appeared in colors associated with high reward in the previous trial compared with colors associated with low reward in healthy control subjects and patients with prefrontal damage sparing the VMF. However, this reward modulation of attentional priming was absent in patients with VMF damage. Thus, an intact VMF is necessary for directing attention based on experience with cue-reward associations. We suggest that this region plays a role in selecting reward-predictive cues to facilitate future learning. There has been a swell of interest recently in the ventromedial frontal cortex (VMF), a brain region critical to associative learning. However, the underlying mechanism by which this region guides learning is not well understood. Here, we tested the effects of damage to this region in humans on a task in which rewards were linked incidentally to visual features, resulting in trial-by-trial attentional priming. Controls and subjects with prefrontal damage

  4. Model Predictive Control based on Finite Impulse Response Models

    DEFF Research Database (Denmark)

    Prasath, Guru; Jørgensen, John Bagterp

    2008-01-01

    We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations and related to the uncertainty of the impulse response coefficients. The simulations can be used to benchmark l2 MPC against FIR-based robust MPC as well as to estimate the maximum performance improvements achievable by robust MPC.

  5. Importance of critical micellar concentration for the prediction of solubility enhancement in biorelevant media.

    Science.gov (United States)

    Ottaviani, G; Wendelspiess, S; Alvarez-Sánchez, R

    2015-04-06

    This study evaluated if the intrinsic surface properties of compounds are related to the solubility enhancement (SE) typically observed in biorelevant media like fasted state simulated intestinal fluids (FaSSIF). The solubility of 51 chemically diverse compounds was measured in FaSSIF and in phosphate buffer and the surface activity parameters were determined. This study showed that the compound critical micellar concentration parameter (CMC) correlates strongly with the solubility enhancement (SE) observed in FaSSIF compared to phosphate buffer. Thus, the intrinsic capacity of molecules to form micelles is also a determinant for each compound's affinity to the micelles of biorelevant surfactants. CMC correlated better with SE than lipophilicity (logD), especially over the logD range typically covered by drugs (2 < logD < 4). CMC can become useful to guide drug discovery scientists to better diagnose, improve, and predict solubility in biorelevant media, thereby enhancing oral bioavailability of drug candidates.

  6. Infarct volume predicts critical care needs in stroke patients treated with intravenous thrombolysis

    Energy Technology Data Exchange (ETDEWEB)

    Faigle, Roland; Marsh, Elisabeth B.; Llinas, Rafael H.; Urrutia, Victor C. [Johns Hopkins University School of Medicine, Department of Neurology, Baltimore, MD (United States); Wozniak, Amy W. [Johns Hopkins University, Department of Biostatistics, Bloomberg School of Public Health, Baltimore, MD (United States)

    2014-10-26

    Patients receiving intravenous thrombolysis with recombinant tissue plasminogen activator (IVT) for ischemic stroke are monitored in an intensive care unit (ICU) or a comparable unit capable of ICU interventions due to the high frequency of standardized neurological exams and vital sign checks. The present study evaluates quantitative infarct volume on early post-IVT MRI as a predictor of critical care needs and aims to identify patients who may not require resource intense monitoring. We identified 46 patients who underwent MRI within 6 h of IVT. Infarct volume was measured using semiautomated software. Logistic regression and receiver operating characteristics (ROC) analysis were used to determine factors associated with ICU needs. Infarct volume was an independent predictor of ICU need after adjusting for age, sex, race, systolic blood pressure, NIH Stroke Scale (NIHSS), and coronary artery disease (odds ratio 1.031 per cm³ increase in volume, 95 % confidence interval [CI] 1.004-1.058, p = 0.024). The ROC curve with infarct volume alone achieved an area under the curve (AUC) of 0.766 (95 % CI 0.605-0.927), while the AUC was 0.906 (95 % CI 0.814-0.998) after adjusting for race, systolic blood pressure, and NIHSS. Maximum Youden index calculations identified an optimal infarct volume cut point of 6.8 cm³ (sensitivity 75.0 %, specificity 76.7 %). Infarct volume greater than 3 cm³ predicted need for critical care interventions with 81.3 % sensitivity and 66.7 % specificity. Infarct volume may predict needs for ICU monitoring and interventions in stroke patients treated with IVT. (orig.)
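
    A short sketch of the cut-point analysis described above: compute the ROC curve for infarct volume against a binary ICU-needs outcome and take the threshold that maximizes the Youden index. The data are synthetic stand-ins, and the covariate-adjusted logistic regression of the study is not reproduced.

```python
# Illustrative Youden-index cut-point analysis: infarct volume as the sole
# predictor of a binary ICU-needs outcome. Data are synthetic stand-ins.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(3)
n = 200
volume = rng.gamma(shape=2.0, scale=5.0, size=n)                  # cm^3, synthetic
p_icu = 1.0 / (1.0 + np.exp(-(volume - 8.0) / 3.0))               # assumed true risk curve
icu = (rng.random(n) < p_icu).astype(int)

fpr, tpr, thresholds = roc_curve(icu, volume)                     # volume itself as the score
youden = tpr - fpr                                                # J = sensitivity + specificity - 1
best = int(np.argmax(youden))
print("AUC:", round(roc_auc_score(icu, volume), 3))
print("Optimal cut point (cm^3):", round(float(thresholds[best]), 2))
print("Sensitivity:", round(float(tpr[best]), 3), "Specificity:", round(float(1 - fpr[best]), 3))
```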

  7. Infarct volume predicts critical care needs in stroke patients treated with intravenous thrombolysis

    International Nuclear Information System (INIS)

    Faigle, Roland; Marsh, Elisabeth B.; Llinas, Rafael H.; Urrutia, Victor C.; Wozniak, Amy W.

    2015-01-01

    Patients receiving intravenous thrombolysis with recombinant tissue plasminogen activator (IVT) for ischemic stroke are monitored in an intensive care unit (ICU) or a comparable unit capable of ICU interventions due to the high frequency of standardized neurological exams and vital sign checks. The present study evaluates quantitative infarct volume on early post-IVT MRI as a predictor of critical care needs and aims to identify patients who may not require resource intense monitoring. We identified 46 patients who underwent MRI within 6 h of IVT. Infarct volume was measured using semiautomated software. Logistic regression and receiver operating characteristics (ROC) analysis were used to determine factors associated with ICU needs. Infarct volume was an independent predictor of ICU need after adjusting for age, sex, race, systolic blood pressure, NIH Stroke Scale (NIHSS), and coronary artery disease (odds ratio 1.031 per cm³ increase in volume, 95 % confidence interval [CI] 1.004-1.058, p = 0.024). The ROC curve with infarct volume alone achieved an area under the curve (AUC) of 0.766 (95 % CI 0.605-0.927), while the AUC was 0.906 (95 % CI 0.814-0.998) after adjusting for race, systolic blood pressure, and NIHSS. Maximum Youden index calculations identified an optimal infarct volume cut point of 6.8 cm³ (sensitivity 75.0 %, specificity 76.7 %). Infarct volume greater than 3 cm³ predicted need for critical care interventions with 81.3 % sensitivity and 66.7 % specificity. Infarct volume may predict needs for ICU monitoring and interventions in stroke patients treated with IVT. (orig.)

  8. Methodology for Designing Models Predicting Success of Infertility Treatment

    OpenAIRE

    Alireza Zarinara; Mohammad Mahdi Akhondi; Hojjat Zeraati; Koorsh Kamali; Kazem Mohammad

    2016-01-01

    Abstract Background: Prediction models for infertility treatment success have been presented for some 25 years. There are scientific principles for designing and applying prediction models, which are also used to predict the success rate of infertility treatment. The purpose of this study is to provide basic principles for designing models to predict infertility treatment success. Materials and Methods: In this paper, the principles for developing predictive models are explained and...

  9. Monte Carlo method for critical systems in infinite volume: The planar Ising model.

    Science.gov (United States)

    Herdeiro, Victor; Doyon, Benjamin

    2016-10-01

    In this paper we propose a Monte Carlo method for generating finite-domain marginals of critical distributions of statistical models in infinite volume. The algorithm corrects the problem of the long-range effects of boundaries associated with generating critical distributions on finite lattices. It uses the advantage of scale invariance combined with ideas of the renormalization group in order to construct a type of "holographic" boundary condition that encodes the presence of an infinite volume beyond it. We check the quality of the distribution obtained in the case of the planar Ising model by comparing various observables with their infinite-plane prediction. We accurately reproduce planar two-, three-, and four-point functions of spin and energy operators. We also define a lattice stress-energy tensor, and numerically obtain the associated conformal Ward identities and the Ising central charge.

  10. A computation method for mass flowrate predictions in critical flows of initially subcooled liquid in long channels

    International Nuclear Information System (INIS)

    Celata, G.P.; D'Annibale, F.; Farello, G.E.

    1985-01-01

    A fast and accurate computation method is suggested for predicting the mass flowrate in critical flows of initially subcooled liquid from ''long'' discharge channels (high L/D values). Starting from a previous very simple correlation proposed by the authors, further improvements in the model widen the method's reliability up to initial saturation conditions. A comparison of computed values with 145 experimental data regarding several investigations carried out at the Heat Transfer Laboratory (TERM/ISP, ENEA Casaccia) shows an excellent agreement. The deviation of computed values from experimental ones is within ±10% for almost all data, with a slight increase towards low inlet subcoolings. The average error, for all the considered data, is 4.6%

  11. Self-organised criticality in the evolution of a thermodynamic model of rodent thermoregulatory huddling.

    Directory of Open Access Journals (Sweden)

    Stuart P Wilson

    2017-01-01

    Full Text Available A thermodynamic model of thermoregulatory huddling interactions between endotherms is developed. The model is presented as a Monte Carlo algorithm in which animals are iteratively exchanged between groups, with a probability of exchanging groups defined in terms of the temperature of the environment and the body temperatures of the animals. The temperature-dependent exchange of animals between groups is shown to reproduce a second-order critical phase transition, i.e., a smooth switch to huddling when the environment gets colder, as measured in recent experiments. A peak in the rate at which group sizes change, referred to as pup flow, is predicted at the critical temperature of the phase transition, consistent with a thermodynamic description of huddling, and with a description of the huddle as a self-organising system. The model was subjected to a simple evolutionary procedure, by iteratively substituting the physiologies of individuals that fail to balance the costs of thermoregulation (by huddling in groups) with the costs of thermogenesis (by contributing heat). The resulting tension between cooperative and competitive interactions was found to generate a phenomenon called self-organised criticality, as evidenced by the emergence of avalanches in fitness that propagate across many generations. The emergence of avalanches reveals how huddling can introduce correlations in fitness between individuals and thereby constrain evolutionary dynamics. Finally, a full agent-based model of huddling interactions is also shown to generate criticality when subjected to the same evolutionary pressures. The agent-based model is related to the Monte Carlo model in the way that a Vicsek model is related to an Ising model in statistical physics. Huddling therefore presents an opportunity to use thermodynamic theory to study an emergent adaptive animal behaviour. In more general terms, huddling is proposed as an ideal system for investigating the interaction
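
    A heavily hedged sketch of the Monte Carlo exchange step described above: animals are moved between groups with a probability that depends on the ambient temperature and on the warmth gained by joining a larger huddle. The logistic acceptance rule and all parameter values are assumptions for illustration, not the paper's actual exchange probability.

```python
# Hedged sketch of a thermoregulatory-huddling Monte Carlo step: each animal may
# leave its group for another, with a cold-dependent probability. The logistic
# acceptance rule and all parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
N_ANIMALS, N_GROUPS, T_BODY = 12, 4, 37.0

def exchange_step(groups, t_env, beta=0.5):
    """One Monte Carlo sweep: animals tend to join larger groups when it is cold."""
    for a in range(len(groups)):
        target = rng.integers(N_GROUPS)
        # Effective warmth rises with group size (more huddling partners)
        current_size = np.sum(groups == groups[a])
        target_size = np.sum(groups == target) + 1
        gain = (target_size - current_size) * (T_BODY - t_env) / T_BODY
        if rng.random() < 1.0 / (1.0 + np.exp(-beta * gain)):   # assumed logistic rule
            groups[a] = target
    return groups

groups = rng.integers(N_GROUPS, size=N_ANIMALS)
for t_env in (35.0, 25.0, 15.0, 5.0):                           # cooling environment
    for _ in range(200):
        groups = exchange_step(groups, t_env)
    sizes = np.bincount(groups, minlength=N_GROUPS)
    print(f"T_env = {t_env:4.1f}  group sizes = {sizes}")
```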

  12. Finite Unification: Theory, Models and Predictions

    CERN Document Server

    Heinemeyer, S; Zoupanos, G

    2011-01-01

    All-loop Finite Unified Theories (FUTs) are very interesting N=1 supersymmetric Grand Unified Theories (GUTs) realising an old field theory dream, and moreover have a remarkable predictive power due to the required reduction of couplings. The reduction of the dimensionless couplings in N=1 GUTs is achieved by searching for renormalization group invariant (RGI) relations among them holding beyond the unification scale. Finiteness results from the fact that there exist RGI relations among dimensional couplings that guarantee the vanishing of all beta-functions in certain N=1 GUTs even to all orders. Furthermore developments in the soft supersymmetry breaking sector of N=1 GUTs and FUTs lead to exact RGI relations, i.e. reduction of couplings, in this dimensionful sector of the theory, too. Based on the above theoretical framework phenomenologically consistent FUTs have been constructed. Here we review FUT models based on the SU(5) and SU(3)^3 gauge groups and their predictions. Of particular interest is the Hig...

  13. Revised predictive equations for salt intrusion modelling in estuaries

    NARCIS (Netherlands)

    Gisen, J.I.A.; Savenije, H.H.G.; Nijzink, R.C.

    2015-01-01

    For one-dimensional salt intrusion models to be predictive, we need predictive equations to link model parameters to observable hydraulic and geometric variables. The one-dimensional model of Savenije (1993b) made use of predictive equations for the Van der Burgh coefficient $K$ and the dispersion

  14. Neutrino nucleosynthesis in supernovae: Shell model predictions

    International Nuclear Information System (INIS)

    Haxton, W.C.

    1989-01-01

    Almost all of the 3·10^53 ergs liberated in a core collapse supernova is radiated as neutrinos by the cooling neutron star. I will argue that these neutrinos interact with nuclei in the ejected shells of the supernovae to produce new elements. It appears that this nucleosynthesis mechanism is responsible for the galactic abundances of 7Li, 11B, 19F, 138La, and 180Ta, and contributes significantly to the abundances of about 15 other light nuclei. I discuss shell model predictions for the charged and neutral current allowed and first-forbidden responses of the parent nuclei, as well as the spallation processes that produce the new elements. 18 refs., 1 fig., 1 tab

  15. Hierarchical Model Predictive Control for Resource Distribution

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob

    2010-01-01

    This paper deals with hierarchical model predictive control (MPC) of distributed systems. A three-level hierarchical approach is proposed, consisting of a high-level MPC controller, a second level of so-called aggregators controlled by an online MPC-like algorithm, and a lower level of autonomous units. The approach is inspired by smart-grid electric power production and consumption systems, where the flexibility of a large number of power producing and/or power consuming units can be exploited in a smart-grid solution. The objective is to accommodate the load variation on the grid, arising on one hand from varying consumption and on the other hand from natural variations in power production, e.g. from wind turbines. The approach presented is based on quadratic optimization and possesses the properties of low algorithmic complexity and of scalability. In particular, the proposed design methodology...

  16. Critical rotation of general-relativistic polytropic models revisited

    Science.gov (United States)

    Geroyannis, V.; Karageorgopoulos, V.

    2013-09-01

    We develop a perturbation method for computing the critical rotational parameter as a function of the equatorial radius of a rigidly rotating polytropic model in the "post-Newtonian approximation" (PNA). We treat our models as "initial value problems" (IVP) of ordinary differential equations in the complex plane. The computations are carried out by the code dcrkf54.f95 (Geroyannis and Valvi 2012 [P1]; modified Runge-Kutta-Fehlberg code of fourth and fifth order for solving initial value problems in the complex plane). Such a complex-plane treatment removes the syndromes appearing in this particular family of IVPs (see e.g. P1, Sec. 3) and allows continuation of the numerical integrations beyond the surface of the star. Thus all the required values of the Lane-Emden function(s) in the post-Newtonian approximation are calculated by interpolation (so avoiding any extrapolation). An interesting point is that, in our computations, we take into account the complete correction due to the gravitational term, and this issue is a remarkable difference compared to the classical PNA. We solve the generalized density as a function of the equatorial radius and find the critical rotational parameter. Our computations are extended to certain other physical characteristics (like mass, angular momentum, rotational kinetic energy, etc). We find that our method yields results comparable with those of other reliable methods. REFERENCE: V.S. Geroyannis and F.N. Valvi 2012, International Journal of Modern Physics C, 23, No 5, 1250038:1-15.

  17. Exponential critical-state model for magnetization of hard superconductors

    International Nuclear Information System (INIS)

    Chen, D.; Sanchez, A.; Munoz, J.S.

    1990-01-01

    We have calculated the initial magnetization curves and hysteresis loops for hard type-II superconductors based on the exponential-law model, J_c(H_i) = k exp(-|H_i|/H_0), where k and H_0 are constants. After discussing the general behavior of penetrated supercurrents in an infinitely long column specimen, we define a general cross-sectional shape based on two equal circles of radius a, which can be rendered into a circle, a rectangle, or many other shapes. With increasing parameter p (= ka/H_0), the computed M-H curves show obvious differences with those computed from Kim's model and approach the results of a simple infinitely narrow square pulse J_c(H_i). For high-T_c superconductors, our results can be applied to the study of the magnetic properties and the critical-current density of single crystals, as well as to the determination of the intergranular critical-current density from magnetic measurements
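
    A small numerical sketch of the exponential critical-state law quoted above: inside a slab, the flux-penetration profile follows dH/dx = -J_c(H) = -k exp(-|H|/H_0), which can be integrated inward from the applied surface field. The parameter values are illustrative assumptions, not fits to any measured sample.

```python
# Sketch: field penetration profile in a semi-infinite slab for the exponential
# critical-state law J_c(H) = k * exp(-|H|/H0), obtained by integrating
# dH/dx = -J_c(H) inward from the surface. Parameter values are illustrative.
import numpy as np

k, H0 = 1.0e9, 2.0e4          # A/m^2 and A/m, assumed values
Ha = 5.0e4                    # applied surface field, A/m

def profile(Ha, dx=1.0e-7, x_max=1.0e-3):
    """Return arrays (x, H) for the penetrating field, stopping where H reaches 0."""
    xs, Hs = [0.0], [Ha]
    H, x = Ha, 0.0
    while H > 0.0 and x < x_max:
        H -= k * np.exp(-abs(H) / H0) * dx     # Euler step of dH/dx = -J_c(H)
        x += dx
        xs.append(x)
        Hs.append(max(H, 0.0))
    return np.array(xs), np.array(Hs)

x, H = profile(Ha)
print("Penetration depth (m):", x[-1])
```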

  18. A review of critical heat flux prediction technique and its application in CANDU reactor

    International Nuclear Information System (INIS)

    Park, Jee Won; Roh, Gyu Hong

    1997-09-01

    The CHF prediction method being used for CANDU reactors has been critically reviewed. AECL's CHF prediction depends entirely on a look-up table that has been developed from a large CHF databank. This databank includes not only water-cooled bundle CHF data but also freon-cooled bundle CHF data. The CHF look-up tables have been developed by smoothing and interpolating (with some extrapolation) the experimental data to construct a practically useful CHF table. Therefore, the table look-up method has the advantages of accuracy and consistency over a wide range of thermal-hydraulic parameters. It seems, however, that since the existing look-up table is constructed through many steps of modification of the original experimental data (e.g., the look-up table is constructed not only from horizontal flow data but also from vertical flow data), one should be very careful when trying to generate a look-up table for other fuel geometries. In other words, a reliable look-up table can be constructed by performing experiments for the new fuel geometry. Finally, it should be noted that the modifications to the original experimental data have a simple form with many modification parameters to take into account different geometrical effects. This report presents the backbone and the validity of the AECL CHF look-up table. (author). 22 refs., 2 tabs., 2 figs

  19. Model predictive control of a wind turbine modelled in Simpack

    International Nuclear Information System (INIS)

    Jassmann, U; Matzke, D; Reiter, M; Abel, D; Berroth, J; Schelenz, R; Jacobs, G

    2014-01-01

    Wind turbines (WT) are steadily growing in size to increase their power production, which also causes increasing loads acting on the turbine's components. At the same time large structures, such as the blades and the tower, get more flexible. To minimize this impact, the classical control loops for keeping the power production in an optimum state are more and more extended by load alleviation strategies. These additional control loops can be unified by a multiple-input multiple-output (MIMO) controller to achieve better balancing of tuning parameters. An example for MIMO control, which has been paid more attention to recently by wind industry, is Model Predictive Control (MPC). In a MPC framework a simplified model of the WT is used to predict its controlled outputs. Based on a user-defined cost function an online optimization calculates the optimal control sequence. Thereby MPC can intrinsically incorporate constraints e.g. of actuators. Turbine models used for calculation within the MPC are typically simplified. For testing and verification usually multi body simulations, such as FAST, BLADED or FLEX5 are used to model system dynamics, but they are still limited in the number of degrees of freedom (DOF). Detailed information about load distribution (e.g. inside the gearbox) cannot be provided by such models. In this paper a Model Predictive Controller is presented and tested in a co-simulation with SIMPACK, a multi body system (MBS) simulation framework used for detailed load analysis. The analyses are performed on the basis of the IME6.0 MBS WT model, described in this paper. It is based on the rotor of the NREL 5MW WT and consists of a detailed representation of the drive train. This takes into account a flexible main shaft and its main bearings with a planetary gearbox, where all components are modelled flexible, as well as a supporting flexible main frame. The wind loads are simulated using the NREL AERODYN v13 code which has been implemented as a routine

  20. Model predictive control of a wind turbine modelled in Simpack

    Science.gov (United States)

    Jassmann, U.; Berroth, J.; Matzke, D.; Schelenz, R.; Reiter, M.; Jacobs, G.; Abel, D.

    2014-06-01

    Wind turbines (WT) are steadily growing in size to increase their power production, which also causes increasing loads acting on the turbine's components. At the same time large structures, such as the blades and the tower, get more flexible. To minimize this impact, the classical control loops for keeping the power production in an optimum state are more and more extended by load alleviation strategies. These additional control loops can be unified by a multiple-input multiple-output (MIMO) controller to achieve better balancing of tuning parameters. An example for MIMO control, which has been paid more attention to recently by wind industry, is Model Predictive Control (MPC). In a MPC framework a simplified model of the WT is used to predict its controlled outputs. Based on a user-defined cost function an online optimization calculates the optimal control sequence. Thereby MPC can intrinsically incorporate constraints e.g. of actuators. Turbine models used for calculation within the MPC are typically simplified. For testing and verification usually multi body simulations, such as FAST, BLADED or FLEX5 are used to model system dynamics, but they are still limited in the number of degrees of freedom (DOF). Detailed information about load distribution (e.g. inside the gearbox) cannot be provided by such models. In this paper a Model Predictive Controller is presented and tested in a co-simulation with SIMPACK, a multi body system (MBS) simulation framework used for detailed load analysis. The analyses are performed on the basis of the IME6.0 MBS WT model, described in this paper. It is based on the rotor of the NREL 5MW WT and consists of a detailed representation of the drive train. This takes into account a flexible main shaft and its main bearings with a planetary gearbox, where all components are modelled flexible, as well as a supporting flexible main frame. The wind loads are simulated using the NREL AERODYN v13 code which has been implemented as a routine to

  1. Self-Organized Criticality Theory Model of Thermal Sandpile

    International Nuclear Information System (INIS)

    Peng Xiao-Dong; Qu Hong-Peng; Xu Jian-Qiang; Han Zui-Jiao

    2015-01-01

    A self-organized criticality model of a thermal sandpile is formulated for the first time to simulate the dynamic process with interaction between avalanche events on the fast time scale and diffusive transports on the slow time scale. The main characteristic of the model is that both particle and energy avalanches of sand grains are considered simultaneously. Properties of intermittent transport and improved confinement are analyzed in detail. The results imply that intermittent phenomena such as blobs in the low-confinement mode, as well as edge localized modes in the high-confinement mode, observed in tokamak experiments are not only determined by the edge plasma physics but are also affected by the core plasma dynamics. (paper)

  2. Model of critical diagnostic reasoning: achieving expert clinician performance.

    Science.gov (United States)

    Harjai, Prashant Kumar; Tiwari, Ruby

    2009-01-01

    Diagnostic reasoning refers to the analytical processes used to determine patient health problems. While the education curriculum and health care system focus on training nurse clinicians to accurately recognize and rescue clinical situations, assessments of non-expert nurses have yielded less than satisfactory data on diagnostic competency. The contrast between the expert and non-expert nurse clinician raises the important question of how differences in thinking may contribute to a large divergence in accurate diagnostic reasoning. This article recognizes superior organization of one's knowledge base, using prototypes, and quick retrieval of pertinent information, using similarity recognition as two reasons for the expert's superior diagnostic performance. A model of critical diagnostic reasoning, using prototypes and similarity recognition, is proposed and elucidated using case studies. This model serves as a starting point toward bridging the gap between clinical data and accurate problem identification, verification, and management while providing a structure for a knowledge exchange between expert and non-expert clinicians.

  3. Modeling financial markets by self-organized criticality

    Science.gov (United States)

    Biondo, Alessio Emanuele; Pluchino, Alessandro; Rapisarda, Andrea

    2015-10-01

    We present a financial market model, characterized by self-organized criticality, that is able to generate endogenously a realistic price dynamics and to reproduce well-known stylized facts. We consider a community of heterogeneous traders, composed by chartists and fundamentalists, and focus on the role of informative pressure on market participants, showing how the spreading of information, based on a realistic imitative behavior, drives contagion and causes market fragility. In this model imitation is not intended as a change in the agent's group of origin, but is referred only to the price formation process. We introduce in the community also a variable number of random traders in order to study their possible beneficial role in stabilizing the market, as found in other studies. Finally, we also suggest some counterintuitive policy strategies able to dampen fluctuations by means of a partial reduction of information.

  4. A CRITICAL REVIEW OF THE MODEL MINORITY STEREOTYPE SHIBBOLETH

    Directory of Open Access Journals (Sweden)

    Nicholas Daniel Hartlep

    2012-04-01

    Full Text Available The author conducted a thematic review of the literature on the model minority stereotype (MMS). MMS writings (n = 246) included peer-reviewed and non-peer-reviewed materials spanning from the 1960s to present. Writings were reviewed if their title included "model minority." The purpose was to review the MMS critically. Six major themes were found to recurrently appear in the MMS literature: (1) critiquing colorblindness, (2) countering meritocracy, (3) demystifying Asian American exceptionalism, (4) uncovering divide-and-conquer stratagems, (5) problematizing Asian American homogenization, and (6) unmasking the "yellow peril" stereotype. Implications for the education of Asian students in America and abroad are shared.

  5. Poisson Mixture Regression Models for Heart Disease Prediction.

    Science.gov (United States)

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a Zero Inflated Poisson Mixture Regression model turned out to be the best model for heart disease prediction over all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using a Poisson mixture regression model.

  6. Same admissions tools, different outcomes: a critical perspective on predictive validity in three undergraduate medical schools.

    Science.gov (United States)

    Edwards, Daniel; Friedman, Tim; Pearce, Jacob

    2013-12-27

    Admission to medical school is one of the most highly competitive entry points in higher education. Considerable investment is made by universities to develop selection processes that aim to identify the most appropriate candidates for their medical programs. This paper explores data from three undergraduate medical schools to offer a critical perspective of predictive validity in medical admissions. This study examined 650 undergraduate medical students from three Australian universities as they progressed through the initial years of medical school (accounting for approximately 25 per cent of all commencing undergraduate medical students in Australia in 2006 and 2007). Admissions criteria (aptitude test score based on UMAT, school result and interview score) were correlated with GPA over four years of study. Standard regression of each of the three admissions variables on GPA, for each institution at each year level was also conducted. Overall, the data found positive correlations between performance in medical school, school achievement and UMAT, but not interview. However, there were substantial differences between schools, across year levels, and within sections of UMAT exposed. Despite this, each admission variable was shown to add towards explaining course performance, net of other variables. The findings suggest the strength of multiple admissions tools in predicting outcomes of medical students. However, they also highlight the large differences in outcomes achieved by different schools, thus emphasising the pitfalls of generalising results from predictive validity studies without recognising the diverse ways in which they are designed and the variation in the institutional contexts in which they are administered. The assumption that high-positive correlations are desirable (or even expected) in these studies is also problematised.

  7. A Fisher’s Criterion-Based Linear Discriminant Analysis for Predicting the Critical Values of Coal and Gas Outbursts Using the Initial Gas Flow in a Borehole

    Directory of Open Access Journals (Sweden)

    Xiaowei Li

    2017-01-01

    Full Text Available The risk of coal and gas outbursts can be predicted using a method that is linear and continuous and based on the initial gas flow in the borehole (IGFB); this method is significantly superior to the traditional point prediction method. Acquiring accurate critical values is the key to ensuring accurate predictions. Based on an ideal rock cross-cut coal uncovering model, the IGFB measurement device was developed. The present study measured the initial gas flow over 3 min in a 1 m long borehole with a diameter of 42 mm in the laboratory. A total of 48 sets of data were obtained. These data were fuzzy and chaotic. Fisher’s discrimination method was able to transform these spatial data, which were multidimensional due to the factors influencing the IGFB, into a one-dimensional function and determine its critical value. Then, by processing the data into a normal distribution, the critical values of the outbursts were analyzed using linear discriminant analysis with Fisher’s criterion. The weak and strong outbursts had critical values of 36.63 L and 80.85 L, respectively, and the accuracy of the back-discriminant analysis for the weak and strong outbursts was 94.74% and 92.86%, respectively. Eight outburst tests were simulated in the laboratory, the reverse verification accuracy was 100%, and the accuracy of the critical values was verified.
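
    A minimal sketch of the discriminant-analysis step, using scikit-learn's LinearDiscriminantAnalysis on synthetic flow measurements; the feature values, class layout, and the query point are invented for illustration and do not reproduce the 48-set laboratory data or the quoted critical values.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical training data: each borehole test is described by two features
# derived from the initial gas flow (total 3-minute flow in litres, peak flow
# rate in L/min), labelled 0 = no outburst, 1 = weak outburst, 2 = strong outburst.
rng = np.random.default_rng(1)
X = np.vstack([
    rng.normal([20.0, 5.0], 3.0, size=(16, 2)),   # no outburst
    rng.normal([50.0, 12.0], 5.0, size=(16, 2)),  # weak outburst
    rng.normal([95.0, 25.0], 8.0, size=(16, 2)),  # strong outburst
])
y = np.repeat([0, 1, 2], 16)

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
print("re-substitution accuracy:", lda.score(X, y))
print("predicted class for a new test:", lda.predict([[70.0, 15.0]]))
```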

  8. Predictive integrated modelling for ITER scenarios

    International Nuclear Information System (INIS)

    Artaud, J.F.; Imbeaux, F.; Aniel, T.; Basiuk, V.; Eriksson, L.G.; Giruzzi, G.; Hoang, G.T.; Huysmans, G.; Joffrin, E.; Peysson, Y.; Schneider, M.; Thomas, P.

    2005-01-01

    The uncertainty on the prediction of ITER scenarios is evaluated. Two transport models, which have been extensively validated against the multi-machine database, are used for the computation of the transport coefficients. The first model is GLF23; the second, called Kiauto, is a model in which the profile of the dilution coefficient is a gyro-Bohm-like analytical function, renormalized in order to get profiles consistent with a given global energy confinement scaling. The CRONOS package of codes is used; it gives access to the dynamics of the discharge and allows the study of the interplay between heat transport, current diffusion and sources. The main motivation of this work is to study the influence of parameters such as plasma current, heat, density, impurities and toroidal momentum transport. We can draw the following conclusions: 1) the target Q = 10 can be obtained in the ITER hybrid scenario at I_p = 13 MA, using either the DS03 two-term scaling or the GLF23 model based on the same pedestal; 2) at I_p = 11.3 MA, Q = 10 can be reached only by assuming a very peaked pressure profile and a low pedestal; 3) at fixed Greenwald fraction, Q increases with density peaking; 4) achieving a stationary q-profile with q > 1 requires a large non-inductive current fraction (80%) that could be provided by 20 to 40 MW of LHCD; and 5) owing to the high temperature, the q-profile penetration is delayed and q = 1 is reached at about 600 s in the ITER hybrid scenario at I_p = 13 MA, in the absence of active q-profile control. (A.C.)

  9. Nuclear criticality safety: general. 3. Tokaimura Criticality Accident: Point Model Stochastic Neutronic Interpretation

    International Nuclear Information System (INIS)

    Mechitoua, Boukhmes

    2001-01-01

    step is based on the knowledge of the reactivity insertion. 2. Initiation probability for one neutron P(t). 3. Initiation probability with the neutron source P_S(t). Japanese specialists told us that the accident happened during the seventh batch pouring. They estimated the k_eff before and at the end of this operation: after the sixth batch, K = 0.981, and at the end of the seventh batch, K = 1.030. When the accident happened (neutron burst), 3 $ was inserted in 15 s, so if we suppose a linear insertion, we have a slope equal to 20 c/s. We may write K(t) = 1 + wt with w = 0.2 β/s = 0.00160/s. During the accident, there was between 14 and 16 kg of uranium with an enrichment of 18.8%. We have calculated P_S(t) and we have taken into account six internal source levels: 1. spontaneous fission: 150 to 170 to 200 n/s; 2. (α, n) reactions and others of this type, and amplification of the internal source during the delayed critical phase: 500 to 1000 to 2000 n/s. In Fig. 2, we can see that the initiation occurred almost surely before 7 s and with a probability close to 0.46 before 2 s with a source of 200 n/s. With a source of 2000 n/s, we have higher initiation probabilities; for example, the initiation occurred almost surely before 2 s and with a probability close to 0.77 before 1 s after the critical time. These results are interesting because they show that a supercritical system does not lead immediately to initiation. One may have a short supercritical excursion with no neutron production. The point model approach is useful for gaining a good understanding of what can be the stochastic neutronic contribution for the interpretation of criticality accidents. The results described in this paper may be useful for the interpretation of the time delay between the critical state time and the neutron burst. The thought process we have described may be used in the 'real world', that is, with multigroup or continuous-energy simulations
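
    The reactivity ramp quoted above can be written out explicitly; note that the delayed-neutron fraction β ≈ 0.008 is inferred here from the quoted numbers rather than stated in the record.

```latex
k(t) = 1 + w\,t, \qquad
w = \underbrace{\frac{3\,\$}{15\ \mathrm{s}}}_{0.2\,\$/\mathrm{s}} \times \beta
  = 0.2\,\beta\ \mathrm{s}^{-1} = 0.00160\ \mathrm{s}^{-1}
\;\Rightarrow\; \beta \approx 0.0080 .
```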

  10. Comparison of the CATHENA model of Gentilly-2 end shield cooling system predictions to station data

    Energy Technology Data Exchange (ETDEWEB)

    Zagre, G.; Sabourin, G. [Candu Energy Inc., Montreal, Quebec (Canada); Chapados, S. [Hydro-Quebec, Montreal, Quebec (Canada)

    2012-07-01

    As part of the Gentilly-2 Refurbishment Project, Hydro-Quebec has elected to perform the End Shield Cooling Safety Analysis. A CATHENA model of Gentilly-2 End Shield Cooling System was developed for this purpose. This model includes new elements compared to other CANDU6 End Shield Cooling models such as a detailed heat exchanger and control logic model. In order to test the model robustness and accuracy, the model predictions were compared with plant measurements. This paper summarizes this comparison between the model predictions and the station measurements. It is shown that the CATHENA model is flexible and accurate enough to predict station measurements for critical parameters, and the detailed heat exchanger model allows reproducing station transients. (author)

  11. Model Predictive Control for Load Frequency Control with Wind Turbines

    Directory of Open Access Journals (Sweden)

    Yi Zhang

    2015-01-01

    Full Text Available Reliable load frequency control (LFC) is crucial to the operation and design of modern electric power systems. Considering the LFC problem of a four-area interconnected power system with wind turbines, this paper presents a distributed model predictive control (DMPC) based on a coordination scheme. The proposed algorithm solves a series of local optimization problems to minimize a performance objective for each control area. The scheme incorporates two critical nonlinear constraints, namely the generation rate constraint (GRC) and the valve limit, into the convex optimization problems. Furthermore, the algorithm effectively reduces the impact of the randomness and intermittency of the wind turbines. A performance comparison between the proposed controller with and without the participation of the wind turbines is carried out. Good performance is obtained in the presence of power system nonlinearities due to the governor and turbine constraints and load change disturbances.
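
    The sketch below illustrates the flavour of such an MPC formulation for a single control area only, with a hypothetical two-state model and made-up valve and generation-rate limits; it uses cvxpy and is not the four-area distributed scheme of the paper.

```python
import numpy as np
import cvxpy as cp

# Hypothetical discrete-time two-state model: frequency deviation and mechanical power.
A = np.array([[0.95, 0.30],
              [0.00, 0.80]])
B = np.array([[0.0],
              [0.1]])
x0 = np.array([0.2, 0.0])       # initial frequency deviation (p.u.)

N = 20                          # prediction horizon
x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))

cost = 0
constraints = [x[:, 0] == x0]
for t in range(N):
    cost += cp.sum_squares(x[:, t + 1]) + 0.1 * cp.sum_squares(u[:, t])
    constraints += [x[:, t + 1] == A @ x[:, t] + B @ u[:, t],
                    cp.abs(u[:, t]) <= 0.3]                      # valve limit
    if t > 0:
        constraints += [cp.abs(u[:, t] - u[:, t - 1]) <= 0.05]   # generation rate constraint

prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print("first control move:", u.value[:, 0])
```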

  12. Net Reclassification Indices for Evaluating Risk-Prediction Instruments: A Critical Review

    Science.gov (United States)

    Kerr, Kathleen F.; Wang, Zheyu; Janes, Holly; McClelland, Robyn L.; Psaty, Bruce M.; Pepe, Margaret S.

    2014-01-01

    Net reclassification indices have recently become popular statistics for measuring the prediction increment of new biomarkers. We review the various types of net reclassification indices and their correct interpretations. We evaluate the advantages and disadvantages of quantifying the prediction increment with these indices. For pre-defined risk categories, we relate net reclassification indices to existing measures of the prediction increment. We also consider statistical methodology for constructing confidence intervals for net reclassification indices and evaluate the merits of hypothesis testing based on such indices. We recommend that investigators using net reclassification indices should report them separately for events (cases) and nonevents (controls). When there are two risk categories, the components of net reclassification indices are the same as the changes in the true-positive and false-positive rates. We advocate use of true- and false-positive rates and suggest it is more useful for investigators to retain the existing, descriptive terms. When there are three or more risk categories, we recommend against net reclassification indices because they do not adequately account for clinically important differences in shifts among risk categories. The category-free net reclassification index is a new descriptive device designed to avoid pre-defined risk categories. However, it suffers from many of the same problems as other measures such as the area under the receiver operating characteristic curve. In addition, the category-free index can mislead investigators by overstating the incremental value of a biomarker, even in independent validation data. When investigators want to test a null hypothesis of no prediction increment, the well-established tests for coefficients in the regression model are superior to the net reclassification index. If investigators want to use net reclassification indices, confidence intervals should be calculated using bootstrap
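
    To make the two-category case concrete, here is a small sketch (with invented risks and a 10% threshold) showing that the event and non-event components of the NRI are just the changes in the true-positive and false-positive rates.

```python
import numpy as np

def two_category_nri(risk_old, risk_new, events, threshold=0.1):
    """Event/non-event net reclassification with two risk categories.
    With a single threshold these reduce to the change in the true-positive
    rate and minus the change in the false-positive rate."""
    events = events.astype(bool)
    up_old = risk_old >= threshold
    up_new = risk_new >= threshold
    nri_events = up_new[events].mean() - up_old[events].mean()        # delta TPR
    nri_nonevents = up_old[~events].mean() - up_new[~events].mean()   # -delta FPR
    return nri_events, nri_nonevents

# toy illustration with hypothetical predicted risks before/after adding a biomarker
rng = np.random.default_rng(2)
y = rng.integers(0, 2, 500).astype(float)
risk_old = np.clip(0.1 + 0.2 * y + rng.normal(0, 0.1, 500), 0, 1)
risk_new = np.clip(0.1 + 0.3 * y + rng.normal(0, 0.1, 500), 0, 1)
print(two_category_nri(risk_old, risk_new, y))
```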

  13. The Critical Point Entanglement and Chaos in the Dicke Model

    Directory of Open Access Journals (Sweden)

    Lina Bao

    2015-07-01

    Full Text Available Ground state properties and level statistics of the Dicke model for a finite number of atoms are investigated based on a progressive diagonalization scheme (PDS. Particle number statistics, the entanglement measure and the Shannon information entropy at the resonance point in cases with a finite number of atoms as functions of the coupling parameter are calculated. It is shown that the entanglement measure defined in terms of the normalized von Neumann entropy of the reduced density matrix of the atoms reaches its maximum value at the critical point of the quantum phase transition where the system is most chaotic. Noticeable change in the Shannon information entropy near or at the critical point of the quantum phase transition is also observed. In addition, the quantum phase transition may be observed not only in the ground state mean photon number and the ground state atomic inversion as shown previously, but also in fluctuations of these two quantities in the ground state, especially in the atomic inversion fluctuation.

  14. A review of logistic regression models used to predict post-fire tree mortality of western North American conifers

    Science.gov (United States)

    Travis Woolley; David C. Shaw; Lisa M. Ganio; Stephen. Fitzgerald

    2012-01-01

    Logistic regression models used to predict tree mortality are critical to post-fire management, planning prescribed burns and understanding disturbance ecology. We review literature concerning post-fire mortality prediction using logistic regression models for coniferous tree species in the western USA. We include synthesis and review of: methods to develop, evaluate...
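
    As a schematic of the kind of model being reviewed, the sketch below fits a logistic regression of post-fire mortality on three commonly used predictors; the predictor names, coefficients, and data are all hypothetical, not values from the reviewed studies.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical predictors: percent crown scorch, bole char height (m), stem diameter (cm).
rng = np.random.default_rng(3)
n = 400
scorch = rng.uniform(0, 100, n)
char = rng.uniform(0, 3, n)
dbh = rng.uniform(10, 80, n)
logit = -4.0 + 0.06 * scorch + 0.8 * char - 0.03 * dbh   # assumed "true" relationship
died = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([scorch, char, dbh])
model = LogisticRegression().fit(X, died)
print("coefficients:", model.coef_)
print("AUC:", roc_auc_score(died, model.predict_proba(X)[:, 1]))
```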

  15. Uncertainties in modelling and scaling of critical flows and pump model in TRAC-PF1/MOD1

    International Nuclear Information System (INIS)

    Rohatgi, U.S.; Yu, Wen-Shi.

    1987-01-01

    The USNRC has established a Code Scalability, Applicability and Uncertainty (CSAU) evaluation methodology to quantify the uncertainty in the prediction of safety parameters by the best estimate codes. These codes can then be applied to evaluate the Emergency Core Cooling System (ECCS). The TRAC-PF1/MOD1 version was selected as the first code to undergo the CSAU analysis for LBLOCA applications. It was established through this methodology that break flow and pump models are among the top ranked models in the code affecting the peak clad temperature (PCT) prediction for LBLOCA. The break flow model bias or discrepancy and the uncertainty were determined by modelling the test section near the break for 12 Marviken tests. It was observed that the TRAC-PF1/MOD1 code consistently underpredicts the break flow rate and that the prediction improved with increasing pipe length (larger L/D). This is true for both subcooled and two-phase critical flows. A pump model was developed from Westinghouse (1/3 scale) data. The data represent the largest available test pump relevant to Westinghouse PWRs. It was then shown through the analysis of CE and CREARE pump data that larger pumps degrade less and also that pumps degrade less at higher pressures. Since the model developed here is based on the 1/3 scale pump and on low pressure data, it is conservative and will overpredict the degradation when applied to PWRs

  16. Burnout and posttraumatic stress in paediatric critical care personnel: Prediction from resilience and coping styles.

    Science.gov (United States)

    Rodríguez-Rey, Rocío; Palacios, Alba; Alonso-Tapia, Jesús; Pérez, Elena; Álvarez, Elena; Coca, Ana; Mencía, Santiago; Marcos, Ana; Mayordomo-Colunga, Juan; Fernández, Francisco; Gómez, Fernando; Cruz, Jaime; Ordóñez, Olga; Llorente, Ana

    2018-03-28

    Our aims were (1) to explore the prevalence of burnout syndrome (BOS) and posttraumatic stress disorder (PTSD) in a sample of Spanish staff working in the paediatric intensive care unit (PICU) and compare these rates with a sample of general paediatric staff and (2) to explore how resilience, coping strategies, and professional and demographic variables influence BOS and PTSD. This is a multicentre, cross-sectional study. Data were collected in the PICU and in other paediatric wards of nine hospitals. Participants consisted of 298 PICU staff members (57 physicians, 177 nurses, and 64 nursing assistants) and 189 professionals working in non-critical paediatric units (53 physicians, 104 nurses, and 32 nursing assistants). They completed the Brief Resilience Scale, the Coping Strategies Questionnaire for healthcare providers, the Maslach Burnout Inventory, and the Trauma Screening Questionnaire. Fifty-six percent of PICU working staff reported burnout in at least one dimension (36.20% scored over the cut-off for emotional exhaustion, 27.20% for depersonalisation, and 20.10% for low personal accomplishment), and 20.1% reported PTSD. There were no differences in burnout and PTSD scores between PICU and non-PICU staff members, either among physicians, nurses, or nursing assistants. Higher burnout and PTSD rates emerged after the death of a child and/or conflicts with patients/families or colleagues. Around 30% of the variance in BOS and PTSD is predicted by a frequent usage of the emotion-focused coping style and an infrequent usage of the problem-focused coping style. Interventions to prevent and treat distress among paediatric staff members are needed and should be focused on: (i) promoting active emotional processing of traumatic events and encouraging positive thinking; (ii) developing a sense of detached concern; (iii) improving the ability to solve interpersonal conflicts, and (iv) providing adequate training in end-of-life care.

  17. Integrating geophysics and hydrology for reducing the uncertainty of groundwater model predictions and improved prediction performance

    DEFF Research Database (Denmark)

    Christensen, Nikolaj Kruse; Christensen, Steen; Ferre, Ty

    A major purpose of groundwater modeling is to help decision-makers in efforts to manage the natural environment. Increasingly, it is recognized that both the predictions of interest and their associated uncertainties should be quantified to support robust decision making. In particular, decision… the integration of geophysical data in the construction of a groundwater model increases the prediction performance. We suggest that modelers should perform a hydrogeophysical “test-bench” analysis of the likely value of geophysics data for improving groundwater model prediction performance before actually… and the resulting predictions can be compared with predictions from the ‘true’ model. By performing this analysis we expect to give the modeler insight into how the uncertainty of model-based prediction can be reduced.

  18. A porous flow model for the geometrical form of volcanoes - Critical comments

    Science.gov (United States)

    Wadge, G.; Francis, P.

    1982-01-01

    A critical evaluation is presented of the assumptions on which the mathematical model for the geometrical form of a volcano arising from the flow of magma in a porous medium of Lacey et al. (1981) is based. The lack of evidence for an equipotential surface or its equivalent in volcanoes prior to eruption is pointed out, and the preference of volcanic eruptions for low ground is attributed to the local stress field produced by topographic loading rather than a rising magma table. Other difficulties with the model involve the neglect of the surface flow of lava under gravity away from the vent, and the use of the Dupuit approximation for unconfined flow and the assumption of essentially horizontal magma flow. Comparisons of model predictions with the shapes of actual volcanoes reveal the model not to fit lava shield volcanoes, for which the cone represents the solidification of small lava flows, and to provide a poor fit to composite central volcanoes.

  19. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    Full Text Available We propose a weather prediction model in this article based on a neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the “fuzzy rule-based neural network”, which simulates sequential relations among fuzzy sets using an artificial neural network; and the second part is the “neural fuzzy inference system”, which is based on the first part, but can learn new fuzzy rules from the previous ones according to the algorithm we propose. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. It is well known that the need for accurate weather prediction is apparent when considering the benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the “accurate” prediction results meaningless, and the numerical prediction model is often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we make the predicted outcomes of precipitation more accurate and the prediction methods simpler than the complex numerical forecasting model, which occupies large computation resources, is time-consuming and has a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than by using traditional artificial neural networks that have low predictive accuracy.

  20. Critical, statistical, and thermodynamical properties of lattice models

    Energy Technology Data Exchange (ETDEWEB)

    Varma, Vipin Kerala

    2013-10-15

    In this thesis we investigate zero temperature and low temperature properties - critical, statistical and thermodynamical - of lattice models in the contexts of bosonic cold atom systems, magnetic materials, and non-interacting particles on various lattice geometries. We study quantum phase transitions in the Bose-Hubbard model with higher body interactions, as relevant for optical lattice experiments of strongly interacting bosons, in one and two dimensions; the universality of the Mott insulator to superfluid transition is found to remain unchanged for even large three body interaction strengths. A systematic renormalization procedure is formulated to fully re-sum these higher (three and four) body interactions into the two body terms. In the strongly repulsive limit, we analyse the zero and low temperature physics of interacting hard-core bosons on the kagome lattice at various fillings. Evidence for a disordered phase in the Ising limit of the model is presented; in the strong coupling limit, the transition between the valence bond solid and the superfluid is argued to be first order at the tip of the solid lobe.

  2. Pressure and Stress Prediction in the Nankai Accretionary Prism: A Critical State Soil Mechanics Porosity-Based Approach

    Science.gov (United States)

    Flemings, Peter B.; Saffer, Demian M.

    2018-02-01

    We predict pressure and stress from porosity in the Nankai accretionary prism with a critical state soil model that describes porosity as a function of mean stress and maximum shear stress, and assumes Coulomb failure within the wedge and uniaxial burial beneath it. At Ocean Drilling Program Sites 1174 and 808, we find that pore pressure in the prism supports 70% to 90% of the overburden (λu = 0.7 to 0.9), for a range of assumed friction angles (5-30°). The prism pore pressure is equal to or greater than that in the underthrust sediments even though the porosity is lower within the prism. The high pore pressures lead to a mechanically weak wedge that supports low maximum shear stress, and this in turn requires very low basal traction to remain consistent with the observed narrowly tapered wedge geometry. We estimate the décollement friction coefficient (μb) to be 0.08-0.38 (ϕb' = 4.6°-21°). Our approach defines a pathway to predict pressure in a wide range of environments from readily observed quantities (e.g., porosity and seismic velocity). Pressure and stress control the form of the Earth's collisional continental margins and play a key role in its greatest earthquakes. However, heretofore, there has been no systematic approach to relate material state (e.g., porosity), pore pressure, and stress in these systems.

  3. Foundation Settlement Prediction Based on a Novel NGM Model

    Directory of Open Access Journals (Sweden)

    Peng-Yu Chen

    2014-01-01

    Full Text Available Prediction of foundation or subgrade settlement is very important during engineering construction. According to the fact that there are lots of settlement-time sequences with a nonhomogeneous index trend, a novel grey forecasting model called the NGM (1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM (1,1,k,c) model has the property of white exponential law coincidence and can predict a pure nonhomogeneous index sequence precisely. We used two case studies to verify the predictive effect of the NGM (1,1,k,c) model for settlement prediction. The results show that this model can achieve excellent prediction accuracy; thus, the model is quite suitable for simulation and prediction of approximate nonhomogeneous index sequences and has excellent application value in settlement prediction.
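
    For orientation, here is a sketch of the classical GM(1,1) grey forecast that the NGM (1,1,k,c) model extends; the optimized whitenization equation and the k, c terms of the paper are not reproduced, and the settlement increments are invented.

```python
import numpy as np

def gm11_forecast(x0, horizon=3):
    """Classical GM(1,1) grey forecast (baseline for the NGM(1,1,k,c) variant)."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                              # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                   # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # develop and grey-input coefficients
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])  # restore increments
    return x0_hat

# hypothetical monthly settlement increments (mm)
settlement = [6.1, 6.8, 7.9, 9.2, 10.8]
print(gm11_forecast(settlement))
```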

  4. Spherical and cylindrical cavity expansion models based prediction of penetration depths of concrete targets.

    Directory of Open Access Journals (Sweden)

    Xiaochao Jin

    Full Text Available The cavity expansion theory is most widely used to predict the depth of penetration of concrete targets. The main purpose of this work is to clarify the differences between the spherical and cylindrical cavity expansion models and their scope of application in predicting the penetration depths of concrete targets. The factors that influence the dynamic cavity expansion process of concrete materials were first examined. Based on numerical results, the relationship between expansion pressure and velocity was established. Then the parameters in the Forrestal's formula were fitted to have a convenient and effective prediction of the penetration depth. Results showed that both the spherical and cylindrical cavity expansion models can accurately predict the depth of penetration when the initial velocity is lower than 800 m/s. However, the prediction accuracy decreases with the increasing of the initial velocity and diameters of the projectiles. Based on our results, it can be concluded that when the initial velocity is higher than the critical velocity, the cylindrical cavity expansion model performs better than the spherical cavity expansion model in predicting the penetration depth, while when the initial velocity is lower than the critical velocity the conclusion is quite the contrary. This work provides a basic principle for selecting the spherical or cylindrical cavity expansion model to predict the penetration depth of concrete targets.

  5. Temperature modelling and prediction for activated sludge systems.

    Science.gov (United States)

    Lippi, S; Rosso, D; Lubello, C; Canziani, R; Stenstrom, M K

    2009-01-01

    Temperature is an important factor affecting biomass activity, which is critical to maintaining efficient biological wastewater treatment, and also the physicochemical properties of mixed liquor, such as dissolved oxygen saturation and settling velocity. Controlling temperature is not normally possible for treatment systems, but incorporating factors impacting temperature in the design process, such as the aeration system, surface-to-volume ratio, and tank geometry, can reduce the range of temperature extremes and improve the overall process performance. Determining how much these design or upgrade options affect the tank temperature requires a temperature model that can be used with existing design methodologies. This paper presents a new steady-state temperature model developed by incorporating the best aspects of previously published models, introducing new functions for selected heat exchange paths and improving the method for predicting the effects of covering aeration tanks. Numerical improvements with embedded reference data provide simpler formulation, faster execution, and easier sensitivity analyses using an ordinary spreadsheet. The paper presents several cases to validate the model.

  6. A Multiscale Modeling System: Developments, Applications, and Critical Issues

    Science.gov (United States)

    Tao, Wei-Kuo; Lau, William; Simpson, Joanne; Chern, Jiun-Dar; Atlas, Robert; Khairoutdinov, Marat; Randall, David; Li, Jui-Lin; Waliser, Duane E.; Jiang, Jonathan; Hou, Arthur

    2009-01-01

    The foremost challenge in parameterizing convective clouds and cloud systems in large-scale models is the many coupled dynamical and physical processes that interact over a wide range of scales, from microphysical scales to the synoptic and planetary scales. This makes the comprehension and representation of convective clouds and cloud systems one of the most complex scientific problems in Earth science. During the past decade, the Global Energy and Water Cycle Experiment (GEWEX) Cloud System Study (GCSS) has pioneered the use of single-column models (SCMs) and cloud-resolving models (CRMs) for the evaluation of the cloud and radiation parameterizations in general circulation models (GCMs; e.g., GEWEX Cloud System Science Team 1993). These activities have uncovered many systematic biases in the radiation, cloud and convection parameterizations of GCMs and have led to the development of new schemes (e.g., Zhang 2002; Pincus et al. 2003; Zhang and Wu 2003; Wu et al. 2003; Liang and Wu 2005; Wu and Liang 2005, and others). Comparisons between SCMs and CRMs using the same large-scale forcing derived from field campaigns have demonstrated that CRMs are superior to SCMs in the prediction of temperature and moisture tendencies (e.g., Das et al. 1999; Randall et al. 2003b; Xie et al. 2005).

  7. Predictive Modelling of Heavy Metals in Urban Lakes

    OpenAIRE

    Lindström, Martin

    2000-01-01

    Heavy metals are well-known environmental pollutants. In this thesis predictive models for heavy metals in urban lakes are discussed and new models presented. The base of predictive modelling is empirical data from field investigations of many ecosystems covering a wide range of ecosystem characteristics. Predictive models focus on the variabilities among lakes and processes controlling the major metal fluxes. Sediment and water data for this study were collected from ten small lakes in the ...

  8. A critical discussion on the applicability of Compound Topographic Index (CTI) for predicting ephemeral gully erosion

    Science.gov (United States)

    Casalí, Javier; Chahor, Youssef; Giménez, Rafael; Campo-Bescós, Miguel

    2016-04-01

    The so-called Compound Topographic Index (CTI) can be calculated for each grid cell in a DEM and used to identify potential locations for ephemeral gullies (EGs) based on land topography (CTI = A·S·PLANC, where A is the upstream drainage area, S is the local slope and PLANC is the planform curvature, a measure of landscape convergence) (Parker et al., 2007). It can be shown that CTI represents stream power per unit bed area, and it considers the major parameters controlling the pattern and intensity of concentrated surface runoff in the field (Parker et al., 2007). However, other key variables controlling EG erosion, such as soil characteristics, land use and management, are not taken into consideration. The critical CTI value (CTIc) "represents the intensity of concentrated overland flow necessary to initiate erosion and channelised flow under a given set of circumstances" (Parker et al., 2007). The AnnAGNPS (Annualized Agricultural Non-Point Source) pollution model is an important management tool developed by the USDA and uses CTI to locate potential ephemeral gullies. Then, depending on the rainfall characteristics of the period simulated by AnnAGNPS, potential EGs can become "actual" and be simulated by the model accordingly. This paper presents preliminary results and a number of considerations after evaluating the CTI tool in Navarre. The CTIc values found are similar to those cited by other authors, and the EG networks that on average occur in the area have been located reasonably well. After our experience we believe that it is necessary to distinguish between the CTIc corresponding to the location of headcuts whose migration originates the EGs (CTIc1), and the CTIc necessary to represent the location of the gully networks in the watershed (CTIc2), where gully headcuts are located at the upstream end of the gullies. Most scientists only consider one CTIc value, although, from our point of view, the two situations are different. CTIc1 would represent the
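
    A toy sketch of the CTI calculation itself (CTI = A·S·PLANC) on small raster grids; the grid values and the critical threshold are placeholders, not values from the Navarre study or from Parker et al. (2007).

```python
import numpy as np

def cti_grid(area, slope, planc, cti_c=20.0):
    """Flag cells whose Compound Topographic Index exceeds a critical value.
    area: upstream drainage area per cell, slope: local slope, planc: planform
    curvature; cti_c is a hypothetical critical threshold (CTIc)."""
    cti = area * slope * planc
    return cti, cti >= cti_c

# toy 3x3 grids standing in for DEM-derived rasters
area = np.array([[50., 80., 200.], [60., 300., 900.], [70., 400., 2500.]])
slope = np.full((3, 3), 0.05)
planc = np.full((3, 3), 0.8)
cti, potential_gully = cti_grid(area, slope, planc)
print(cti)
print(potential_gully)   # True where an ephemeral gully could potentially initiate
```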

  9. Critical Thinking Training for Army Officers. Volume 2: A Model of Critical Thinking

    Science.gov (United States)

    2009-02-01

    construing CT as applied logic and reasoning, by far the most common theme emerging from the literature, has come under criticism from “feminists, critical…”

  10. Prediction-error variance in Bayesian model updating: a comparative study

    Science.gov (United States)

    Asadollahi, Parisa; Li, Jian; Huang, Yong

    2017-04-01

    In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of the prediction error variances plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. Therefore, it is critical for the robustness of the updating of the structural model, especially in the presence of modeling errors. To date, three ways of considering prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of different strategies to deal with the prediction error variances on the model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction variances. The different levels of modeling uncertainty and complexity are modeled through three FE models, including a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on the model updating performance is also examined in the study. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model

  11. Critical-Inquiry-Based-Learning: Model of Learning to Promote Critical Thinking Ability of Pre-service Teachers

    Science.gov (United States)

    Prayogi, S.; Yuanita, L.; Wasis

    2018-01-01

    This study aimed to develop the Critical-Inquiry-Based-Learning (CIBL) learning model to promote the critical thinking (CT) ability of preservice teachers. The CIBL learning model was developed by meeting the criteria of validity, practicality, and effectiveness. Validation of the model involved four expert validators through the mechanism of a focus group discussion (FGD). The CIBL learning model was declared valid for promoting CT ability, with a validity level (Va) of 4.20 and a reliability (r) of 90.1% (very reliable). The practicality of the model was evaluated when it was implemented with 17 preservice teachers. The CIBL learning model was declared practical, as measured by learning feasibility (LF) with very good criteria (LF score = 4.75). The effectiveness of the model was evaluated from the improvement in CT ability after the implementation of the model. CT ability was evaluated using the scoring technique adapted from the Ennis-Weir Critical Thinking Essay Test. The average score of CT ability on the pretest is -1.53 (uncritical criteria), whereas on the posttest it is 8.76 (critical criteria), with an N-gain score of 0.76 (high criteria). Based on the results of this study, it can be concluded that the developed CIBL learning model is feasible for promoting the CT ability of preservice teachers.

  12. Integrated Modeling for Road Condition Prediction

    Science.gov (United States)

    2017-12-31

    Transportation Systems Management and Operations (TSMO) is at a critical point in its development due to an explosion in data availability and analytics. Intelligent transportation systems (ITS) gathering data about weather and traffic conditions cou...

  13. Validation of the LOD score compared with APACHE II score in prediction of the hospital outcome in critically ill patients.

    Science.gov (United States)

    Khwannimit, Bodin

    2008-01-01

    The Logistic Organ Dysfunction score (LOD) is an organ dysfunction score that can predict hospital mortality. The aim of this study was to validate the performance of the LOD score compared with the Acute Physiology and Chronic Health Evaluation II (APACHE II) score in a mixed intensive care unit (ICU) at a tertiary referral university hospital in Thailand. The data were collected prospectively on consecutive ICU admissions over a 24 month period from July 1, 2004, until June 30, 2006. Discrimination was evaluated by the area under the receiver operating characteristic curve (AUROC). Calibration was assessed by the Hosmer-Lemeshow goodness-of-fit H statistic. The overall fit of the model was evaluated by Brier's score. Overall, 1,429 patients were enrolled during the study period. The mortality in the ICU was 20.9% and in the hospital was 27.9%. The median ICU and hospital lengths of stay were 3 and 18 days, respectively, for all patients. Both models showed excellent discrimination. The AUROC for the LOD and APACHE II were 0.860 [95% confidence interval (CI) = 0.838-0.882] and 0.898 (95% CI = 0.879-0.917), respectively. The LOD score had perfect calibration, with a Hosmer-Lemeshow goodness-of-fit H statistic of 10 (p = 0.44). However, the APACHE II had poor calibration, with a Hosmer-Lemeshow goodness-of-fit H statistic of 75.69 (p < 0.001). Brier's score showed that the overall fit was 0.123 (95% CI = 0.107-0.141) and 0.114 (95% CI = 0.098-0.132) for the LOD and APACHE II, respectively. Thus, the LOD score was found to be accurate for predicting hospital mortality for general critically ill patients in Thailand.
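
    The three performance measures used in this validation can be computed as in the sketch below; the predicted risks are simulated rather than taken from the study, and grouping into risk deciles is one common convention for the Hosmer-Lemeshow statistic.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, brier_score_loss
from scipy.stats import chi2

def hosmer_lemeshow(y, p, groups=10):
    """Hosmer-Lemeshow H statistic: compare observed vs expected deaths per risk decile."""
    order = np.argsort(p)
    y, p = y[order], p[order]
    h = 0.0
    for idx in np.array_split(np.arange(len(y)), groups):
        obs, exp, n = y[idx].sum(), p[idx].sum(), len(idx)
        h += (obs - exp) ** 2 / (exp * (1 - exp / n) + 1e-12)
    return h, 1 - chi2.cdf(h, groups - 2)

# hypothetical predicted hospital-mortality risks from a severity score
rng = np.random.default_rng(4)
p = rng.uniform(0.02, 0.9, 1000)
y = rng.binomial(1, p).astype(float)
print("AUROC:", roc_auc_score(y, p))
print("Brier:", brier_score_loss(y, p))
print("Hosmer-Lemeshow H, p-value:", hosmer_lemeshow(y, p))
```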

  14. Risk-Based Predictive Maintenance for Safety-Critical Systems by Using Probabilistic Inference

    Directory of Open Access Journals (Sweden)

    Tianhua Xu

    2013-01-01

    Full Text Available Risk-based maintenance (RBM) aims to improve maintenance planning and decision making by reducing the probability and consequences of equipment failure. A new predictive maintenance strategy that integrates a dynamic evolution model and risk assessment is proposed, which can be used to calculate the optimal maintenance time with minimal cost and safety constraints. The dynamic evolution model provides qualified risks by using probabilistic inference with bucket elimination and gives the prospective degradation trend of a complex system. Based on the degradation trend, an optimal maintenance time can be determined by minimizing the expected maintenance cost per time unit. The effectiveness of the proposed method is validated and demonstrated by a collision accident of high-speed trains with obstacles in the presence of safety and cost constraints.

  15. Prediction of critical heat flux in narrow rectangular channels using an artificial neural network

    International Nuclear Information System (INIS)

    Zhou Lei; Yan Xiao; Huang Yanping; Xiao Zejun; Yu Jiyang

    2011-01-01

    The concept of critical heat flux (CHF) and its importance are introduced, and the significance of studying CHF in narrow rectangular channels (NRCs) independently is emphasized. This paper is the first effort to predict CHF in NRCs using an artificial neural network. The mathematical structure of the artificial neural network and the error back-propagation algorithm are introduced. To predict CHF, four dimensionless groups are input to the neural network and the output is the dimensionless CHF. As the number of hidden nodes increases, the training error decreases, while the testing error first decreases and then a transition occurs. Based on this, the number of hidden nodes is set to 5, and the trained network predicts all of the training and testing data points with RMS = 0.0016 and μ = 1.0003, which is better than several well-known existing correlations. Based on the trained network, the effects of several parameters on CHF are simulated and discussed. CHF increases almost linearly as the inlet subcooling increases, and a larger mass flux enhances the effect of the inlet subcooling. CHF increases with increasing mass flux, and the effect seems to be a little stronger for relatively low system pressure. CHF decreases almost linearly as the system pressure increases for a fixed inlet condition. The slope of the curve also increases with higher mass flux. This observation is limited to the ranges of the experimental database. CHF decreases as the heated length is increased, and the gradients of the curves become very sharp for relatively short channels. CHF increases slightly with increasing diameter when the gap is varied within 1 to 3 mm. For relatively low mass flux, the effect of the equivalent diameter on CHF is insignificant. When the width of the channel is large enough, the effect of the gap is quite the same as that of the equivalent diameter. A BPNN is successfully trained based on nearly 500 CHF data points in NRCs, which has much better performance than the
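
    A minimal stand-in for the described network, using scikit-learn's MLPRegressor with one hidden layer of five nodes on four dimensionless inputs; the synthetic data generator below is an assumption and does not represent the actual CHF database or its dimensionless groups.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for the ~500-point database: four dimensionless inputs
# and a dimensionless CHF target produced by an arbitrary smooth function.
rng = np.random.default_rng(5)
X = rng.uniform(0.0, 1.0, size=(500, 4))
y = 0.5 + 0.8 * X[:, 0] - 0.6 * X[:, 1] + 0.3 * X[:, 2] * X[:, 3]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(5,), activation="logistic",
                   solver="lbfgs", max_iter=5000, random_state=0)
net.fit(X_tr, y_tr)
print("train R^2:", net.score(X_tr, y_tr), "test R^2:", net.score(X_te, y_te))
```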

  16. Seasonal predictability of Kiremt rainfall in coupled general circulation models

    Science.gov (United States)

    Gleixner, Stephanie; Keenlyside, Noel S.; Demissie, Teferi D.; Counillon, François; Wang, Yiguo; Viste, Ellen

    2017-11-01

    The Ethiopian economy and population is strongly dependent on rainfall. Operational seasonal predictions for the main rainy season (Kiremt, June-September) are based on statistical approaches with Pacific sea surface temperatures (SST) as the main predictor. Here we analyse dynamical predictions from 11 coupled general circulation models for the Kiremt seasons from 1985-2005 with the forecasts starting from the beginning of May. We find skillful predictions from three of the 11 models, but no model beats a simple linear prediction model based on the predicted Niño3.4 indices. The skill of the individual models for dynamically predicting Kiremt rainfall depends on the strength of the teleconnection between Kiremt rainfall and concurrent Pacific SST in the models. Models that do not simulate this teleconnection fail to capture the observed relationship between Kiremt rainfall and the large-scale Walker circulation.

  17. MODELLING OF DYNAMIC SPEED LIMITS USING THE MODEL PREDICTIVE CONTROL

    Directory of Open Access Journals (Sweden)

    Andrey Borisovich Nikolaev

    2017-09-01

    Full Text Available The article considers the issues of traffic management using the intelligent “Car-Road” system (IVHS), which consists of interacting intelligent vehicles (IVs) and intelligent roadside controllers. Vehicles are organized in convoys with small distances between them. All vehicles are assumed to be fully automated (throttle control, braking, steering). Approaches are proposed for determining speed limits for vehicles on the motorway using model predictive control (MPC). The article proposes an approach to dynamic speed limits to minimize the downtime of vehicles in traffic.

  18. Assessment of critical flow models of RELAP5-MOD2 and CATHARE codes

    International Nuclear Information System (INIS)

    Hao Laomi; Zhu Zhanchuan

    1992-01-01

    The critical flow tests for the long and short nozzles conducted on the SUPER MOBY-DICK facility were analyzed using the RELAP5-MOD2 and CATHARE 1.3 codes to assess the critical flow models of the two codes. The critical mass fluxes calculated for the two nozzles are given. The CATHARE code uses the thermodynamic nonequilibrium sound velocity of the two-phase fluid as the critical flow criterion, has better interphase transfer models, and calculates the critical flow velocities with a completely implicit solution. Therefore, it can calculate the critical flowrate well and can describe the effect of the geometry L/D on the critical flowrate

  19. CRITICAL DIFFERENCES OF ASYMMETRIC MAGNETIC RECONNECTION FROM STANDARD MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Nitta, S. [Hinode Science Project, National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo, 181-8588 (Japan); Wada, T. [Tsukuba University of Technology, 4-3-15 Amakubo, Tsukuba, 305-8520 (Japan); Fuchida, T. [Graduate School of Science and Engineering, Ehime University, 2-5 Bunkyo-cho, Matsuyama, Ehime, 790-8577 (Japan); Kondoh, K., E-mail: nittasn@yahoo.co.jp, E-mail: tomohide.wada@gmail.com, E-mail: fuchida@sp.cosmos.ehime-u.ac.jp, E-mail: kondo@cosmos.ehime-u.ac.jp [Research Center for Space and Cosmic Evolution, Ehime University, 2-5 Bunkyo-cho, Matsuyama, Ehime, 790-8577 (Japan)

    2016-09-01

    We have clarified the structure of asymmetric magnetic reconnection in detail as the result of the spontaneous evolutionary process. The asymmetry is imposed as the ratio k of the magnetic field strength on the two sides of the initial current sheet (CS) in the isothermal equilibrium. The MHD simulation is carried out by the HLLD code for the long-term temporal evolution with very high spatial resolution. The resultant structure is drastically different from the symmetric case (e.g., the Petschek model) even for a slight asymmetry of k = 2. (1) The velocity distribution in the reconnection jet clearly shows a two-layered structure, i.e., the high-speed sub-layer in which the flow is almost field aligned and the acceleration sub-layer. (2) Higher beta side (HBS) plasma is caught in a lower beta side plasmoid. This suggests a new plasma mixing process in reconnection events. (3) A new large strong fast shock forms in front of the plasmoid in the HBS. This can be a new particle acceleration site in the reconnection system. These critical properties, which have not been reported in previous works, contribute to a better and more detailed knowledge of reconnection beyond the standard model for the symmetric magnetic reconnection system.

  20. MJO prediction skill of the subseasonal-to-seasonal (S2S) prediction models

    Science.gov (United States)

    Son, S. W.; Lim, Y.; Kim, D.

    2017-12-01

    The Madden-Julian Oscillation (MJO), the dominant mode of tropical intraseasonal variability, provides the primary source of tropical and extratropical predictability on subseasonal to seasonal timescales. To better understand its predictability, this study conducts a quantitative evaluation of MJO prediction skill in the state-of-the-art operational models participating in the subseasonal-to-seasonal (S2S) prediction project. Based on a bivariate correlation coefficient of 0.5, the S2S models exhibit MJO prediction skill ranging from 12 to 36 days. These prediction skills are affected by both the MJO amplitude and phase errors, the latter becoming more important with forecast lead time. Consistent with previous studies, MJO events with stronger initial amplitude are typically better predicted. However, essentially no sensitivity to the initial MJO phase is observed. Overall MJO prediction skill and its inter-model spread are further related to the model mean biases in moisture fields and longwave cloud-radiation feedbacks. In most models, a dry bias quickly builds up in the deep tropics, especially across the Maritime Continent, weakening the horizontal moisture gradient. This likely dampens the organization and propagation of the MJO. Most S2S models also underestimate the longwave cloud-radiation feedbacks in the tropics, which may affect the maintenance of the MJO convective envelope. In general, models with a smaller bias in the horizontal moisture gradient and longwave cloud-radiation feedbacks show higher MJO prediction skill, suggesting that improving those processes would enhance MJO prediction skill.
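
    The bivariate (RMM1, RMM2) correlation used as the skill measure can be computed as in this sketch; the observed and forecast index series here are synthetic, not S2S model output.

```python
import numpy as np

def bivariate_correlation(rmm_obs, rmm_fcst):
    """Bivariate correlation between observed and forecast (RMM1, RMM2) pairs,
    the skill measure for which 0.5 is the usual cut-off."""
    a1, a2 = rmm_obs[:, 0], rmm_obs[:, 1]
    f1, f2 = rmm_fcst[:, 0], rmm_fcst[:, 1]
    num = np.sum(a1 * f1 + a2 * f2)
    den = np.sqrt(np.sum(a1**2 + a2**2)) * np.sqrt(np.sum(f1**2 + f2**2))
    return num / den

# toy example: forecasts equal to observations plus noise
rng = np.random.default_rng(6)
obs = rng.normal(size=(90, 2))
fcst = obs + rng.normal(scale=0.7, size=(90, 2))
print(bivariate_correlation(obs, fcst))
```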

  1. A Personalized Predictive Framework for Multivariate Clinical Time Series via Adaptive Model Selection.

    Science.gov (United States)

    Liu, Zitao; Hauskrecht, Milos

    2017-11-01

    Building an accurate predictive model of clinical time series for a patient is critical for understanding the patient's condition, its dynamics, and optimal patient management. Unfortunately, this process is not straightforward. First, patient-specific variations are typically large, and population-based models derived or learned from many different patients are often unable to support accurate predictions for each individual patient. Moreover, the time series observed for one patient at any point in time may be too short and insufficient to learn a high-quality patient-specific model just from the patient's own data. To address these problems we propose, develop, and experiment with a new adaptive forecasting framework for building multivariate clinical time series models for a patient and for supporting patient-specific predictions. The framework relies on an adaptive model switching approach that at any point in time selects the most promising time series model out of a pool of many possible models and, consequently, combines the advantages of population, patient-specific, and short-term individualized predictive models. We demonstrate that the adaptive model switching framework is a very promising approach to support personalized time series prediction and that it is able to outperform predictions based on pure population and patient-specific models, as well as other patient-specific model adaptation strategies.
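
    A highly simplified sketch of the model-switching idea: at each forecast time, score a pool of candidate forecasters on their recent one-step errors and use the current best. The candidate models, window length, and data are illustrative assumptions, not the framework of the paper.

```python
import numpy as np

def switching_forecast(history, models, window=8):
    """Score every candidate by its recent one-step-ahead squared error and
    return the forecast of the currently best model."""
    scores = {}
    for name, f in models.items():
        errs = [(history[t] - f(history[:t])) ** 2
                for t in range(len(history) - window, len(history))]
        scores[name] = float(np.mean(errs))
    best = min(scores, key=scores.get)
    return best, models[best](history)

# candidate forecasters: a fixed population-level value vs. a patient-specific recent mean
models = {
    "population": lambda h: 5.0,
    "patient_recent": lambda h: float(np.mean(h[-5:])),
}
history = np.array([4.8, 5.1, 6.0, 6.4, 6.9, 7.1, 7.4, 7.8, 8.0, 8.3, 8.5, 8.8])
print(switching_forecast(history, models))
```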

  2. Bayesian Poisson hierarchical models for crash data analysis: Investigating the impact of model choice on site-specific predictions.

    Science.gov (United States)

    Khazraee, S Hadi; Johnson, Valen; Lord, Dominique

    2018-08-01

    The Poisson-gamma (PG) and Poisson-lognormal (PLN) regression models are among the most popular means for motor vehicle crash data analysis. Both models belong to the Poisson-hierarchical family of models. While numerous studies have compared the overall performance of alternative Bayesian Poisson-hierarchical models, little research has addressed the impact of model choice on the expected crash frequency prediction at individual sites. This paper sought to examine whether there are any trends among candidate models' predictions, e.g., whether an alternative model's prediction for sites with certain conditions tends to be higher (or lower) than that from another model. In addition to the PG and PLN models, this research formulated a new member of the Poisson-hierarchical family of models: the Poisson-inverse gamma (PIGam). Three field datasets (from Texas, Michigan and Indiana) covering a wide range of over-dispersion characteristics were selected for analysis. This study demonstrated that the model choice can be critical when the calibrated models are used for prediction at new sites, especially when the data are highly over-dispersed. For all three datasets, the PIGam model would predict higher expected crash frequencies than would the PLN and PG models, in order, indicating a clear link between the models' predictions and the shape of their mixing distributions (i.e., gamma, lognormal, and inverse gamma, respectively). The thicker tail of the PIGam and PLN models (in order) may provide an advantage when the data are highly over-dispersed. The analysis results also illustrated a major deficiency of the Deviance Information Criterion (DIC) in comparing the goodness-of-fit of hierarchical models; models with drastically different sets of coefficients (and thus predictions for new sites) may yield similar DIC values, because the DIC only accounts for the parameters in the lowest (observation) level of the hierarchy and ignores the higher levels (regression coefficients

  3. Theoretical Models of Deliberative Democracy: A Critical Analysis

    Directory of Open Access Journals (Sweden)

    Tutui Viorel

    2015-07-01

    Full Text Available Abstract: My paper focuses on presenting and analyzing some of the most important theoretical models of deliberative democracy and on emphasizing their limits. Firstly, I will mention James Fishkin's account of deliberative democracy and its relations with other democratic models. He differentiates between four democratic theories: competitive democracy, elite deliberation, participatory democracy and deliberative democracy. Each of these theories makes an explicit commitment to two of the following four “principles”: political equality, participation, deliberation, nontyranny. Deliberative democracy is committed to political equality and deliberation. Secondly, I will present Philip Pettit's view concerning the main constraints of deliberative democracy: the inclusion constraint, the judgmental constraint and the dialogical constraint. Thirdly, I will refer to Amy Gutmann and Dennis Thompson's conception regarding the “requirements” or characteristics of deliberative democracy: the reason-giving requirement, the accessibility of reasons, the binding character of the decisions and the dynamic nature of the deliberative process. Finally, I will discuss Joshua Cohen's “ideal deliberative procedure”, which has the following features: it is free, reasoned, the parties are substantively equal and the procedure aims to arrive at a rationally motivated consensus. After presenting these models I will provide a critical analysis of each one of them with the purpose of revealing their virtues and limits. I will make some suggestions in order to combine the virtues of these models, to transcend their limitations and to offer a more systematic account of deliberative democracy. In the next four sections I will take into consideration four main strategies for combining political and epistemic values (“optimistic”, “deliberative”, “democratic” and “pragmatic”) and the main objections they have to face. In the concluding section

  4. Modeling critical zone processes in intensively managed environments

    Science.gov (United States)

    Kumar, Praveen; Le, Phong; Woo, Dong; Yan, Qina

    2017-04-01

    Processes in the Critical Zone (CZ), which sustain terrestrial life, are tightly coupled across hydrological, physical, biochemical, and many other domains over both short and long timescales. In addition, vegetation acclimation resulting from elevated atmospheric CO2 concentration, along with response to increased temperature and altered rainfall pattern, is expected to result in emergent behaviors in ecologic and hydrologic functions, subsequently controlling CZ processes. We hypothesize that the interplay between micro-topographic variability and these emergent behaviors will shape complex responses of a range of ecosystem dynamics within the CZ. Here, we develop a modeling framework ('Dhara') that explicitly incorporates micro-topographic variability based on lidar topographic data with coupling of multi-layer modeling of the soil-vegetation continuum and 3-D surface-subsurface transport processes to study ecological and biogeochemical dynamics. We further couple a C-N model with a physically based hydro-geomorphologic model to quantify (i) how topographic variability controls the spatial distribution of soil moisture, temperature, and biogeochemical processes, and (ii) how farming activities modify the interaction between soil erosion and soil organic carbon (SOC) dynamics. To address the intensive computational demand from high-resolution modeling at lidar data scale, we use a hybrid CPU-GPU parallel computing architecture run over large supercomputing systems for simulations. Our findings indicate that rising CO2 concentration and air temperature have opposing effects on soil moisture, surface water and ponding in topographic depressions. Further, the relatively higher soil moisture and lower soil temperature contribute to decreased soil microbial activities in the low-lying areas due to anaerobic conditions and reduced temperatures. The decreased microbial relevant processes cause the reduction of nitrification rates, resulting in relatively lower nitrate

  5. Butterfly, Recurrence, and Predictability in Lorenz Models

    Science.gov (United States)

    Shen, B. W.

    2017-12-01

    Over the span of 50 years, the original three-dimensional Lorenz model (3DLM; Lorenz, 1963) and its high-dimensional versions (e.g., Shen 2014a and references therein) have been used for improving our understanding of the predictability of weather and climate with a focus on chaotic responses. Although the Lorenz studies focus on nonlinear processes and chaotic dynamics, people often apply a "linear" conceptual model to understand the nonlinear processes in the 3DLM. In this talk, we present examples to illustrate the common misunderstandings regarding the butterfly effect and discuss the importance of solutions' recurrence and boundedness in the 3DLM and high-dimensional LMs. The first example is discussed with the following folklore that has been widely used as an analogy of the butterfly effect: "For want of a nail, the shoe was lost. For want of a shoe, the horse was lost. For want of a horse, the rider was lost. For want of a rider, the battle was lost. For want of a battle, the kingdom was lost. And all for the want of a horseshoe nail." However, in 2008, Prof. Lorenz stated that he did not feel that this verse described true chaos but that it better illustrated the simpler phenomenon of instability, and that the verse implicitly suggests that subsequent small events will not reverse the outcome (Lorenz, 2008). Lorenz's comments suggest that the verse neither describes negative (nonlinear) feedback nor indicates recurrence, the latter of which is required for the appearance of a butterfly pattern. The second example illustrates that the divergence of two nearby trajectories should be bounded and recurrent, as shown in Figure 1. Furthermore, we will discuss how high-dimensional LMs were derived to illustrate (1) negative nonlinear feedback that stabilizes the system within the five- and seven-dimensional LMs (5D and 7D LMs; Shen 2014a; 2015a; 2016); (2) positive nonlinear feedback that destabilizes the system within the 6D and 8D LMs (Shen 2015b; 2017); and (3
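
    For readers unfamiliar with the 3DLM, the sketch below (standard Lorenz parameters σ = 10, ρ = 28, β = 8/3; not code from the talk) integrates two trajectories from nearly identical initial states and reports their separation, illustrating that the divergence is bounded by the size of the attractor:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = s
        return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

    t_span = (0.0, 40.0)
    t_eval = np.linspace(*t_span, 4000)

    # Two trajectories starting a tiny distance apart
    sol_a = solve_ivp(lorenz, t_span, [1.0, 1.0, 1.0], t_eval=t_eval, rtol=1e-9, atol=1e-12)
    sol_b = solve_ivp(lorenz, t_span, [1.0 + 1e-8, 1.0, 1.0], t_eval=t_eval, rtol=1e-9, atol=1e-12)

    sep = np.linalg.norm(sol_a.y - sol_b.y, axis=0)
    print(f"initial separation: {sep[0]:.1e}")
    print(f"max separation:     {sep.max():.2f}  (bounded by the attractor size)")
    print(f"final separation:   {sep[-1]:.2f}")
    ```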

  6. Auditing predictive models : a case study in crop growth

    NARCIS (Netherlands)

    Metselaar, K.

    1999-01-01

    Methods were developed to assess and quantify the predictive quality of simulation models, with the intent to contribute to evaluation of model studies by non-scientists. In a case study, two models of different complexity, LINTUL and SUCROS87, were used to predict yield of forage maize

  7. Models for predicting compressive strength and water absorption of ...

    African Journals Online (AJOL)

    This work presents a mathematical model for predicting the compressive strength and water absorption of laterite-quarry dust cement block using augmented Scheffe's simplex lattice design. The statistical models developed can predict the mix proportion that will yield the desired property. The models were tested for lack of ...

  8. Statistical and Machine Learning Models to Predict Programming Performance

    OpenAIRE

    Bergin, Susan

    2006-01-01

    This thesis details a longitudinal study on factors that influence introductory programming success and on the development of machine learning models to predict incoming student performance. Although numerous studies have developed models to predict programming success, the models struggled to achieve high accuracy in predicting the likely performance of incoming students. Our approach overcomes this by providing a machine learning technique, using a set of three significant...

  9. Probabilistic Modeling and Visualization for Bankruptcy Prediction

    DEFF Research Database (Denmark)

    Antunes, Francisco; Ribeiro, Bernardete; Pereira, Francisco Camara

    2017-01-01

    In accounting and finance domains, bankruptcy prediction is of great utility for all of the economic stakeholders. The challenge of accurately predicting business failure, especially under scenarios of financial crisis, is known to be complicated. Although there have been many successful studies on bankruptcy detection, probabilistic approaches have seldom been carried out. In this paper we assume a probabilistic point of view by applying Gaussian Processes (GP) in the context of bankruptcy prediction, comparing it against Support Vector Machines (SVM) and Logistic Regression (LR). Using real-world bankruptcy data, an in-depth analysis is conducted showing that, in addition to a probabilistic interpretation, the GP can effectively improve the bankruptcy prediction performance with high accuracy when compared to the other approaches. We additionally generate a complete graphical...
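
    A rough sketch of the comparison described, using scikit-learn on synthetic data as a stand-in for the real bankruptcy records (which are not reproduced here):

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.gaussian_process import GaussianProcessClassifier
    from sklearn.gaussian_process.kernels import RBF
    from sklearn.svm import SVC
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    # Synthetic stand-in for firm-level financial ratios (the paper uses real bankruptcy data)
    X, y = make_classification(n_samples=600, n_features=10, n_informative=6, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    models = {
        "GP": GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0), random_state=0),
        "SVM": SVC(kernel="rbf", probability=True, random_state=0),
        "LR": LogisticRegression(max_iter=1000),
    }

    for name, model in models.items():
        model.fit(X_tr, y_tr)
        acc = accuracy_score(y_te, model.predict(X_te))
        # All three provide class probabilities here, useful for ranking failure risk
        p_fail = model.predict_proba(X_te)[:, 1]
        print(f"{name}: accuracy={acc:.3f}, mean predicted failure prob={p_fail.mean():.3f}")
    ```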

  10. Accurate and dynamic predictive model for better prediction in medicine and healthcare.

    Science.gov (United States)

    Alanazi, H O; Abdullah, A H; Qureshi, K N; Ismail, A S

    2018-05-01

    Information and communication technologies (ICTs) have changed the trend toward new integrated operations and methods in all fields of life. The health sector has also adopted new technologies to improve its systems and provide better services to customers. Predictive models in health care are likewise influenced by new technologies to predict different disease outcomes. However, existing predictive models still suffer from limitations in predictive performance. To improve performance, this paper proposes a predictive model that classifies disease predictions into different categories, using traumatic brain injury (TBI) datasets. TBI is a serious disease worldwide and needs attention due to its severe impact on human life. The proposed predictive model improves the predictive performance for TBI. The TBI dataset was developed and its features approved by neurologists. The experimental results show that the proposed model achieves significant results in accuracy, sensitivity, and specificity.

  11. Predicting critical temperatures of iron(II) spin crossover materials: Density functional theory plus U approach

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Yachao, E-mail: yczhang@nano.gznc.edu.cn [Guizhou Provincial Key Laboratory of Computational Nano-Material Science, Guizhou Normal College, Guiyang 550018, Guizhou (China)

    2014-12-07

    A first-principles study of critical temperatures (T_c) of spin crossover (SCO) materials requires accurate description of the strongly correlated 3d electrons as well as much computational effort. This task is still a challenge for the widely used local density or generalized gradient approximations (LDA/GGA) and hybrid functionals. One remedy, termed the density functional theory plus U (DFT+U) approach, introduces a Hubbard U term to deal with the localized electrons at marginal computational cost, while treating the delocalized electrons with LDA/GGA. Here, we employ the DFT+U approach to investigate the T_c of a pair of iron(II) SCO molecular crystals (α and β phase), where identical constituent molecules are packed in different ways. We first calculate the adiabatic high spin-low spin energy splitting ΔE_HL and molecular vibrational frequencies in both spin states, then obtain the temperature-dependent enthalpy and entropy changes (ΔH and ΔS), and finally extract T_c by exploiting the ΔH/T − T and ΔS − T relationships. The results are in agreement with experiment. Analysis of geometries and electronic structures shows that the local ligand field in the α phase is slightly weakened by the H-bondings involving the ligand atoms and the specific crystal packing style. We find that this effect is largely responsible for the difference in T_c of the two phases. This study shows the applicability of the DFT+U approach for predicting T_c of SCO materials, and provides a clear insight into the subtle influence of the crystal packing effects on SCO behavior.
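
    The T_c extraction step amounts to finding the temperature at which the high-spin/low-spin free-energy difference ΔG(T) = ΔH(T) − TΔS(T) vanishes. Below is a minimal sketch with made-up ΔH and ΔS functions; in the paper these quantities come from DFT+U energies and vibrational frequencies, not from the toy expressions used here:

    ```python
    import numpy as np
    from scipy.optimize import brentq

    # Illustrative (invented) temperature dependence of the HS-LS enthalpy and
    # entropy differences; the paper computes these from first principles.
    def delta_H(T):          # J/mol
        return 12_000.0 + 2.0 * T

    def delta_S(T):          # J/(mol K)
        return 50.0 + 0.01 * T

    def delta_G(T):
        return delta_H(T) - T * delta_S(T)

    # T_c is the temperature where the two spin phases have equal free energy
    T_c = brentq(delta_G, 50.0, 1000.0)
    print(f"estimated T_c = {T_c:.1f} K")
    ```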

  12. A new ensemble model for short term wind power prediction

    DEFF Research Database (Denmark)

    Madsen, Henrik; Albu, Razvan-Daniel; Felea, Ioan

    2012-01-01

    The objective of this study is to use a non-linear ensemble system to develop a new model for predicting wind speed on a short-term time scale. Short-term wind power prediction has become an extremely important field of research for the energy sector. Despite recent advances in research on prediction models, it has been observed that different models have different capabilities and that no single model is suitable in all situations. The idea behind EPS (ensemble prediction systems) is to take advantage of the unique features of each subsystem to capture the diverse patterns that exist in the dataset...

  13. Testing the predictive power of nuclear mass models

    International Nuclear Information System (INIS)

    Mendoza-Temis, J.; Morales, I.; Barea, J.; Frank, A.; Hirsch, J.G.; Vieyra, J.C. Lopez; Van Isacker, P.; Velazquez, V.

    2008-01-01

    A number of tests are introduced which probe the ability of nuclear mass models to extrapolate. Three models are analyzed in detail: the liquid drop model, the liquid drop model plus empirical shell corrections, and the Duflo-Zuker mass formula. If predicted nuclei are close to the fitted ones, average errors in predicted and fitted masses are similar. However, predicting nuclear masses in a region stabilized by shell effects (e.g., the lead region) is far more difficult. The Duflo-Zuker mass formula emerges as a powerful predictive tool

  14. Tuning critical failure with viscoelasticity: How aftershocks inhibit criticality in an analytical mean field model of fracture.

    Science.gov (United States)

    Baro Urbea, J.; Davidsen, J.

    2017-12-01

    The hypothesis of critical failure relates the presence of an ultimate stability point in the structural constitutive equation of materials to a divergence of characteristic scales in the microscopic dynamics responsible for deformation. Avalanche models involving critical failure have determined universality classes in different systems: from slip events in crystalline and amorphous materials to the jamming of granular media or the fracture of brittle materials. However, not all empirical failure processes exhibit the trademarks of critical failure. As an example, the statistical properties of ultrasonic acoustic events recorded during the failure of porous brittle materials are stationary, except for variations in the activity rate that can be interpreted in terms of aftershock and foreshock activity (J. Baró et al., PRL 2013). The rheological properties of materials introduce dissipation, usually reproduced in atomistic models as a hardening of the coarse-grained elements of the system. If the hardening is associated with a relaxation process, the same mechanism is able to generate temporal correlations. We report the analytic solution of a mean field fracture model exemplifying how criticality and temporal correlations are tuned by transient hardening. We provide a physical meaning to the conceptual model by deriving the constitutive equation from the explicit representation of the transient hardening in terms of a generalized viscoelasticity model. The rate of 'aftershocks' is controlled by the temporal evolution of the viscoelastic creep. In the quasistatic limit, the moment release is invariant to rheology. Therefore, the lack of criticality is explained by the increase of the activity rate close to failure, i.e. 'foreshocks'. Finally, the avalanche propagation can be reinterpreted as a pure mathematical problem in terms of a stochastic counting process. The statistical properties depend only on the distance to a critical point, which is universal for any

  15. From Predictive Models to Instructional Policies

    Science.gov (United States)

    Rollinson, Joseph; Brunskill, Emma

    2015-01-01

    At their core, Intelligent Tutoring Systems consist of a student model and a policy. The student model captures the state of the student and the policy uses the student model to individualize instruction. Policies require different properties from the student model. For example, a mastery threshold policy requires the student model to have a way…

  16. Improving Critical Thinking Skills of College Students through RMS Model for Learning Basic Concepts in Science

    Science.gov (United States)

    Muhlisin, Ahmad; Susilo, Herawati; Amin, Mohamad; Rohman, Fatchur

    2016-01-01

    The purposes of this study were to: 1) examine the effect of the RMS learning model on critical thinking skills; 2) examine the effect of different academic abilities on critical thinking skills; and 3) examine the effect of the interaction between the RMS learning model and different academic abilities on critical thinking skills. The research…

  17. Development of Critical Thinking with Metacognitive Regulation and Toulmin Model

    Science.gov (United States)

    Gotoh, Yasushi

    2017-01-01

    Developing critical thinking is an important factor in education. In this study, the author defines critical thinking as the set of skills and dispositions which enable one to solve problems logically and to attempt to reflect autonomously by means of metacognitive regulation of one's own problem-solving processes. To identify the validity and…

  18. The Complexity of Developmental Predictions from Dual Process Models

    Science.gov (United States)

    Stanovich, Keith E.; West, Richard F.; Toplak, Maggie E.

    2011-01-01

    Drawing developmental predictions from dual-process theories is more complex than is commonly realized. Overly simplified predictions drawn from such models may lead to premature rejection of the dual process approach as one of many tools for understanding cognitive development. Misleading predictions can be avoided by paying attention to several…

  19. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    analyses and hypothesis tests as a part of the validation step to provide feedback to analysts and modelers. Decisions on how to proceed in making model-based predictions are made based on these analyses together with the application requirements. Updating, modifying and understanding the boundaries associated with the model are also assisted through this feedback. (4) We include a "model supplement term" when model problems are indicated. This term provides a (bias) correction to the model so that it will better match the experimental results and more accurately account for uncertainty. Presumably, as the models continue to develop and are used for future applications, the causes for these apparent biases will be identified and the need for this supplementary modeling will diminish. (5) We use a response-modeling approach for our predictions that allows for general types of prediction and for assessment of prediction uncertainty. This approach is demonstrated through a case study supporting the assessment of a weapon's response when subjected to a hydrocarbon fuel fire. The foam decomposition model provides an important element of the response of a weapon system in this abnormal thermal environment. Rigid foam is used to encapsulate critical components in the weapon system, providing the needed mechanical support as well as thermal isolation. Because the foam begins to decompose at temperatures above 250 C, modeling the decomposition is critical to assessing a weapon's response. In the validation analysis it is indicated that the model tends to "exaggerate" the effect of temperature changes when compared to the experimental results. The data, however, are too few and too restricted in terms of experimental design to make confident statements regarding modeling problems. For illustration, we assume these indications are correct and compensate for this apparent bias by constructing a model supplement term for use in the model

  20. Sweat loss prediction using a multi-model approach.

    Science.gov (United States)

    Xu, Xiaojiang; Santee, William R

    2011-07-01

    A new multi-model approach (MMA) for sweat loss prediction is proposed to improve prediction accuracy. MMA was computed as the average of sweat loss predicted by two existing thermoregulation models, i.e., the rational model SCENARIO and the empirical model Heat Strain Decision Aid (HSDA). Three independent physiological datasets, a total of 44 trials, were used to compare predictions by MMA, SCENARIO, and HSDA. The observed sweat losses were collected under different combinations of uniform ensembles, environmental conditions (15-40°C, RH 25-75%), and exercise intensities (250-600 W). Root mean square deviation (RMSD), residual plots, and paired t tests were used to compare predictions with observations. Overall, MMA reduced RMSD by 30-39% in comparison with either SCENARIO or HSDA, and increased the prediction accuracy to 66% from 34% or 55%. Of the MMA predictions, 70% fell within the range of mean observed value ± SD, while only 43% of SCENARIO and 50% of HSDA predictions fell within the same range. Paired t tests showed that differences between observations and MMA predictions were not significant, but differences between observations and SCENARIO or HSDA predictions were significant for two datasets. Thus, MMA predicted sweat loss more accurately than either of the two single models for the three datasets used. Future work will be to evaluate MMA using additional physiological data to expand the scope of populations and conditions.
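
    A minimal sketch of the multi-model averaging and RMSD comparison; the observed and single-model values below are invented placeholders, not data from the study:

    ```python
    import numpy as np

    # Stand-in sweat loss values (g) for a few trials; in the study these come
    # from observations and from the SCENARIO and HSDA models, not from here.
    observed = np.array([410., 530., 620., 350., 480.])
    scenario = np.array([460., 490., 700., 300., 520.])
    hsda     = np.array([380., 580., 560., 400., 430.])

    mma = (scenario + hsda) / 2.0   # multi-model approach: simple average

    def rmsd(pred, obs):
        return np.sqrt(np.mean((pred - obs) ** 2))

    for name, pred in [("SCENARIO", scenario), ("HSDA", hsda), ("MMA", mma)]:
        print(f"{name:8s} RMSD = {rmsd(pred, observed):6.1f} g")
    ```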

  1. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Full Text Available Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with the problem of low prediction accuracy, which causes costly maintenance. Although many researchers have developed performance prediction models, the accuracy of prediction has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Three models, including the multivariate nonlinear regression (MNLR) model, the artificial neural network (ANN) model, and the Markov chain (MC) model, are then tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems to be a good tool for pavement performance prediction when the data are limited, but it is based on visual inspections and is not explicitly related to quantitative physical parameters. This paper then suggests that the further direction for developing performance prediction models is to incorporate the respective advantages and disadvantages of different models to obtain better accuracy.
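
    A sketch of the Markov-chain idea for condition prediction; the states and one-year transition probabilities below are illustrative assumptions, not values from the paper:

    ```python
    import numpy as np

    # Illustrative faulting-condition states (e.g., from visual inspection ratings)
    states = ["good", "fair", "poor"]

    # Illustrative one-year transition probabilities (rows sum to 1); in practice
    # these would be estimated from repeated condition surveys.
    P = np.array([
        [0.85, 0.13, 0.02],
        [0.00, 0.80, 0.20],
        [0.00, 0.00, 1.00],
    ])

    state = np.array([1.0, 0.0, 0.0])   # all sections start in "good" condition
    for year in range(1, 11):
        state = state @ P
        if year in (1, 5, 10):
            dist = ", ".join(f"{s}={p:.2f}" for s, p in zip(states, state))
            print(f"year {year:2d}: {dist}")
    ```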

  2. a model for the determination of the critical buckling load of self

    African Journals Online (AJOL)

    HP

    Considering the widespread use of this type of structure and the critical role it ... proposed by the model for the critical buckling load of self- supporting lattice tower, whose equivalent solid beam- ... stiffness, both material and geometric, [5, 6].

  3. Knowledge to Predict Pathogens: Legionella pneumophila Lifecycle Critical Review Part I Uptake into Host Cells

    Directory of Open Access Journals (Sweden)

    Alexis L. Mraz

    2018-01-01

    Full Text Available Legionella pneumophila (L. pneumophila) is an infectious disease agent of increasing concern due to its ability to cause Legionnaires’ Disease, a severe community pneumonia, and the difficulty in controlling it within water systems. L. pneumophila thrives within the biofilm of premise plumbing systems, utilizing protozoan hosts for protection from disinfectants and other environmental stressors. While there is a great deal of information regarding how L. pneumophila interacts with protozoa and human macrophages (the host for human infection), the ability to use these data in a model to attempt to predict a concentration of L. pneumophila in a water system is not known. The lifecycle of L. pneumophila within host cells involves three processes: uptake, growth, and egression from the host cell. The complexity of these three processes would risk conflation of the concepts; therefore, this review details the available information regarding how L. pneumophila invades host cells (uptake) within the context of data needed to model this process, while a second review will focus on growth and egression. The overall intent of both reviews is to detail how the steps in L. pneumophila’s lifecycle in drinking water systems affect human infectivity, as opposed to detailing just its growth and persistence in drinking water systems.

  4. Quantifying and modelling the carbon sequestration capacity of seagrass meadows--a critical assessment.

    Science.gov (United States)

    Macreadie, P I; Baird, M E; Trevathan-Tackett, S M; Larkum, A W D; Ralph, P J

    2014-06-30

    Seagrasses are among the planet's most effective natural ecosystems for sequestering (capturing and storing) carbon (C); but if degraded, they could leak stored C into the atmosphere and accelerate global warming. Quantifying and modelling the C sequestration capacity is therefore critical for successfully managing seagrass ecosystems to maintain their substantial abatement potential. At present, there is no mechanism to support carbon financing linked to seagrass. For seagrasses to be recognised by the IPCC and the voluntary C market, standard stock assessment methodologies and inventories of seagrass C stocks are required. Developing accurate C budgets for seagrass meadows is indeed complex; we discuss these complexities, and, in addition, we review techniques and methodologies that will aid development of C budgets. We also consider a simple process-based data assimilation model for predicting how seagrasses will respond to future change, accompanied by a practical list of research priorities. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Modelling critical degrees of saturation of porous building materials subjected to freezing

    DEFF Research Database (Denmark)

    Hansen, Ernst Jan De Place

    1996-01-01

    Frost resistance of porous materials can be characterized by the critical degree of saturation, SCR, and the actual degree of saturation, SACT. An experimental determination of SCR is very laborious and therefore only seldom used when testing frost resistance. A theoretical model for prediction of SCR based on fracture mechanics and phase geometry of two-phase materials has been developed. The degradation is modelled as being caused by different eigenstrains of the pore phase and the solid phase when freezing, leading to stress concentrations and crack propagation. Simplifications are made to describe the development of stresses and the pore structure, because a mathematical description of the physical theories explaining the process of freezing of water in porous materials is lacking. Calculations are based on porosity, modulus of elasticity and tensile strength, and parameters characterizing...

  6. Modeling of Complex Life Cycle Prediction Based on Cell Division

    Directory of Open Access Journals (Sweden)

    Fucheng Zhang

    2017-01-01

    Full Text Available Effective fault diagnosis and reasonable life expectancy prediction are of great significance and practical engineering value for the safety, reliability, and maintenance cost of equipment and working environments. At present, equipment life prediction methods include life prediction based on condition monitoring, combined forecasting models, and data-driven approaches. Most of them require a large amount of data to address the problem. For this issue, we propose learning from the mechanism of cell division in organisms. We have established a life prediction model of moderate complexity by studying the complex multifactor correlation life model. In this paper, we model life prediction based on cell division. Experiments show that our model can effectively simulate the state of cell division. Using this model as a reference, we apply it to complex equipment life prediction.

  7. Risk prediction model: Statistical and artificial neural network approach

    Science.gov (United States)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand the suitable approach, development, and validation process of risk prediction models. A qualitative review of the aims, methods, and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was done. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, the artificial neural network approach to developing prediction models was more accurate than the statistical approach. However, only limited published literature currently discusses which approach is more accurate for risk prediction model development.

  8. Predictive modeling and reducing cyclic variability in autoignition engines

    Science.gov (United States)

    Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob

    2016-08-30

    Methods and systems are provided for controlling a vehicle engine to reduce cycle-to-cycle combustion variation. A predictive model is applied to predict cycle-to-cycle combustion behavior of an engine based on observed engine performance variables. Conditions are identified, based on the predicted cycle-to-cycle combustion behavior, that indicate high cycle-to-cycle combustion variation. Corrective measures are then applied to prevent the predicted high cycle-to-cycle combustion variation.

  9. Dynamic Simulation of Human Gait Model With Predictive Capability.

    Science.gov (United States)

    Sun, Jinming; Wu, Shaoli; Voglewede, Philip A

    2018-03-01

    In this paper, it is proposed that the central nervous system (CNS) controls human gait using a predictive control approach in conjunction with classical feedback control, rather than exclusively using classical feedback control, which acts based on past error. To validate this proposition, a dynamic model of human gait is developed using a novel predictive approach to investigate the principles of the CNS. The model developed includes two parts: a plant model that represents the dynamics of human gait and a controller that represents the CNS. The plant model is a seven-segment, six-joint model that has nine degrees-of-freedom (DOF). The plant model is validated using data collected from able-bodied human subjects. The proposed controller utilizes model predictive control (MPC). MPC uses an internal model to predict the output in advance, compare the predicted output to the reference, and optimize the control input so that the predicted error is minimal. To decrease the complexity of the model, two joints are controlled using a proportional-derivative (PD) controller. The developed predictive human gait model is validated by simulating able-bodied human gait. The simulation results show that the developed model is able to simulate kinematic output close to experimental data.
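
    The MPC loop the abstract describes (predict over a horizon, compare with the reference, optimize, apply the first input) can be illustrated on a far simpler linear plant than the nine-DOF gait model. The sketch below uses a toy double-integrator and an unconstrained quadratic cost; all of it is an assumption for illustration, not the paper's controller:

    ```python
    import numpy as np

    dt, N, r = 0.1, 20, 0.01              # time step, prediction horizon, input penalty
    A = np.array([[1.0, dt], [0.0, 1.0]])  # double-integrator plant
    B = np.array([[0.5 * dt**2], [dt]])
    C = np.array([[1.0, 0.0]])             # we track position only

    # Prediction matrices: predicted positions = F @ x0 + G @ u over the horizon
    F = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(1, N + 1)])
    G = np.zeros((N, N))
    for k in range(1, N + 1):
        for j in range(k):
            G[k - 1, j] = (C @ np.linalg.matrix_power(A, k - 1 - j) @ B)[0, 0]

    def mpc_step(x, ref):
        """Optimize the input sequence over the horizon, return only the first input."""
        rhs = G.T @ (ref - F @ x)
        u = np.linalg.solve(G.T @ G + r * np.eye(N), rhs)
        return u[0]

    x = np.array([0.0, 0.0])               # initial position and velocity
    ref = np.ones(N)                        # desired position over the horizon
    for step in range(50):
        u0 = mpc_step(x, ref)
        x = A @ x + B[:, 0] * u0            # apply the first input, advance the plant
    print(f"position after 5 s: {x[0]:.3f} (target 1.0)")
    ```

    Only the first optimized input is applied at each step, then the horizon slides forward; this receding-horizon structure is what distinguishes MPC from feedback acting purely on past error.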

  10. Accurate Holdup Calculations with Predictive Modeling & Data Integration

    Energy Technology Data Exchange (ETDEWEB)

    Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States). Dept. of Nuclear Engineering; Cacuci, Dan [Univ. of South Carolina, Columbia, SC (United States). Dept. of Mechanical Engineering

    2017-04-03

    In facilities that process special nuclear material (SNM) it is important to account accurately for the fissile material that enters and leaves the plant. Although there are many stages and processes through which materials must be traced and measured, the focus of this project is material that is “held-up” in equipment, pipes, and ducts during normal operation and that can accumulate over time into significant quantities. Accurately estimating the holdup is essential for proper SNM accounting (vis-à-vis nuclear non-proliferation), criticality and radiation safety, waste management, and efficient plant operation. Usually it is not possible to directly measure the holdup quantity and location, so these must be inferred from measured radiation fields, primarily gamma and less frequently neutrons. Current methods to quantify holdup, i.e. Generalized Geometry Holdup (GGH), primarily rely on simple source configurations and crude radiation transport models aided by ad hoc correction factors. This project seeks an alternate method of performing measurement-based holdup calculations using a predictive model that employs state-of-the-art radiation transport codes capable of accurately simulating such situations. Inverse and data assimilation methods use the forward transport model to search for a source configuration that best matches the measured data and simultaneously provide an estimate of the level of confidence in the correctness of such configuration. In this work the holdup problem is re-interpreted as an inverse problem that is under-determined, hence may permit multiple solutions. A probabilistic approach is applied to solving the resulting inverse problem. This approach rates possible solutions according to their plausibility given the measurements and initial information. This is accomplished through the use of Bayes’ Theorem that resolves the issue of multiple solutions by giving an estimate of the probability of observing each possible solution. To use
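
    A toy sketch of the Bayes'-theorem step described above: each candidate holdup configuration is scored by the Poisson likelihood of the measured counts given its forward-model prediction, and the posterior is normalized. The configurations, forward responses, and measurements below are all invented for illustration:

    ```python
    import numpy as np
    from scipy.stats import poisson

    # Hypothetical candidate holdup configurations and the detector counts a
    # forward transport model would predict for each (both invented here).
    candidates = {
        "A: 50 g in pipe":           np.array([120.0,  40.0]),
        "B: 120 g in duct":          np.array([ 60.0, 150.0]),
        "C: 30 g pipe + 60 g duct":  np.array([ 90.0, 100.0]),
    }
    prior = {name: 1.0 / len(candidates) for name in candidates}   # uniform prior

    measured = np.array([95, 105])   # invented gamma counts at two detector positions

    # Bayes' theorem: P(config | data) is proportional to P(data | config) * P(config)
    unnorm = {name: prior[name] * poisson.pmf(measured, mu).prod()
              for name, mu in candidates.items()}
    total = sum(unnorm.values())
    for name, w in unnorm.items():
        print(f"{name:26s} posterior = {w / total:.3f}")
    ```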

  11. A critical review of cell culture strategies for modelling intracortical brain implant material reactions.

    Science.gov (United States)

    Gilmour, A D; Woolley, A J; Poole-Warren, L A; Thomson, C E; Green, R A

    2016-06-01

    The capacity to predict in vivo responses to medical devices in humans currently relies greatly on implantation in animal models. Researchers have been striving to develop in vitro techniques that can overcome the limitations associated with in vivo approaches. This review focuses on a critical analysis of the major in vitro strategies being utilized in laboratories around the world to improve understanding of the biological performance of intracortical, brain-implanted microdevices. Of particular interest to the current review are in vitro models for studying cell responses to penetrating intracortical devices and their materials, such as electrode arrays used for brain computer interface (BCI) and deep brain stimulation electrode probes implanted through the cortex. A background on the neural interface challenge is presented, followed by discussion of relevant in vitro culture strategies and their advantages and disadvantages. Future development of 2D culture models that exhibit developmental changes capable of mimicking normal, postnatal development will form the basis for more complex accurate predictive models in the future. Although not within the scope of this review, innovations in 3D scaffold technologies and microfluidic constructs will further improve the utility of in vitro approaches. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Comparative Evaluation of Some Crop Yield Prediction Models ...

    African Journals Online (AJOL)

    A computer program was adopted from the work of Hill et al. (1982) to calibrate and test three of the existing yield prediction models using tropical cowpea yieldÐweather data. The models tested were Hanks Model (first and second versions). Stewart Model (first and second versions) and HallÐButcher Model. Three sets of ...

  13. Square-lattice random Potts model: criticality and pitchfork bifurcation

    International Nuclear Information System (INIS)

    Costa, U.M.S.; Tsallis, C.

    1983-01-01

    Within a real-space renormalization group framework based on self-dual clusters, the criticality of the quenched bond-mixed q-state Potts ferromagnet on the square lattice is discussed. On qualitative grounds it is shown that the crossover from the pure fixed point to the random one occurs, as q increases, through a pitchfork bifurcation; the relationship with the Harris criterion is analyzed. On quantitative grounds, high-precision numerical values are presented for the critical temperatures corresponding to various concentrations of the coupling constants J_1 and J_2, and various ratios J_1/J_2. The pure, random and crossover critical exponents are discussed as well. (Author)

  14. A model to predict the power output from wind farms

    Energy Technology Data Exchange (ETDEWEB)

    Landberg, L. [Riso National Lab., Roskilde (Denmark)

    1997-12-31

    This paper describes a model that can predict the power output from wind farms. To give examples of input, the model is applied to a wind farm in Texas. The predictions are generated from forecasts from the NGM model of NCEP. These predictions are made valid at individual sites (wind farms) by applying a matrix calculated by the sub-models of WAsP (Wind Atlas Analysis and Application Program). The actual wind farm production is calculated using the Riso PARK model. Because of the preliminary nature of the results, they will not be given. However, similar results from Europe will be given.

  15. Modelling microbial interactions and food structure in predictive microbiology

    NARCIS (Netherlands)

    Malakar, P.K.

    2002-01-01

    Keywords: modelling, dynamic models, microbial interactions, diffusion, microgradients, colony growth, predictive microbiology.

    Growth response of microorganisms in foods is a complex process. Innovations in food production and preservation techniques have resulted in adoption of

  16. Ocean wave prediction using numerical and neural network models

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Prabaharan, N.

    This paper presents an overview of the development of the numerical wave prediction models and recently used neural networks for ocean wave hindcasting and forecasting. The numerical wave models express the physical concepts of the phenomena...

  17. Predictive analytics technology review: Similarity-based modeling and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Herzog, James; Doan, Don; Gandhi, Devang; Nieman, Bill

    2010-09-15

    Over 11 years ago, SmartSignal introduced Predictive Analytics for eliminating equipment failures, using its patented SBM technology. SmartSignal continues to lead and dominate the market and, in 2010, went one step further and introduced Predictive Diagnostics. Now, SmartSignal is combining Predictive Diagnostics with RCM methodology and industry expertise. FMEA logic reengineers maintenance work management, eliminates unneeded inspections, and focuses efforts on the real issues. This integrated solution significantly lowers maintenance costs, protects against critical asset failures, improves commercial availability, and reduces work orders by 20-40%.

  18. Quantitative research on critical thinking and predicting nursing students' NCLEX-RN performance.

    Science.gov (United States)

    Romeo, Elizabeth M

    2010-07-01

    The concept of critical thinking has been influential in several disciplines. Both education and nursing in general have been attempting to define, teach, and measure this concept for decades. Nurse educators realize that critical thinking is the cornerstone of the objectives and goals for nursing students. The purpose of this article is to review and analyze quantitative research findings relevant to the measurement of critical thinking abilities and skills in undergraduate nursing students and the usefulness of critical thinking as a predictor of National Council Licensure Examination-Registered Nurse (NCLEX-RN) performance. The specific issues that this integrative review examined include assessment and analysis of the theoretical and operational definitions of critical thinking, theoretical frameworks used to guide the studies, instruments used to evaluate critical thinking skills and abilities, and the role of critical thinking as a predictor of NCLEX-RN outcomes. A list of key assumptions related to critical thinking was formulated. The limitations and gaps in the literature were identified, as well as the types of future research needed in this arena. Copyright 2010, SLACK Incorporated.

  19. A mathematical model for predicting earthquake occurrence ...

    African Journals Online (AJOL)

    We consider the continental crust under damage. We use the observed results of microseism in many seismic stations of the world which was established to study the time series of the activities of the continental crust with a view to predicting possible time of occurrence of earthquake. We consider microseism time series ...

  20. Model for predicting the injury severity score.

    Science.gov (United States)

    Hagiwara, Shuichi; Oshima, Kiyohiro; Murata, Masato; Kaneko, Minoru; Aoki, Makoto; Kanbe, Masahiko; Nakamura, Takuro; Ohyama, Yoshio; Tamura, Jun'ichi

    2015-07-01

    To determine a formula that predicts the injury severity score from parameters obtained in the emergency department on arrival. We reviewed the medical records of trauma patients who were transferred to the emergency department of Gunma University Hospital between January 2010 and December 2010. The injury severity score, age, mean blood pressure, heart rate, Glasgow coma scale, hemoglobin, hematocrit, red blood cell count, platelet count, fibrinogen, international normalized ratio of prothrombin time, activated partial thromboplastin time, and fibrin degradation products were examined in those patients on arrival. To determine the formula that predicts the injury severity score, multiple linear regression analysis was carried out. The injury severity score was set as the dependent variable, and the other parameters were set as candidate objective variables. IBM SPSS Statistics 20 was used for the statistical analysis. Statistical significance was set at P ... The Durbin-Watson ratio was 2.200. A formula for predicting the injury severity score in trauma patients was developed with ordinary parameters such as fibrin degradation products and mean blood pressure. This formula is useful because we can predict the injury severity score easily in the emergency department.
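
    A minimal sketch of the multiple linear regression step, using synthetic data standing in for the Gunma records; the variable ranges and the "true" relationship below are invented so the fit has something to recover:

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 150

    # Synthetic stand-ins for two of the admission parameters the abstract highlights:
    # fibrin degradation products (FDP) and mean blood pressure (MAP).
    fdp = rng.gamma(shape=2.0, scale=30.0, size=n)
    map_bp = rng.normal(loc=85.0, scale=15.0, size=n)

    # Invented relationship plus noise, used only to generate example data
    iss = 5.0 + 0.12 * fdp - 0.05 * (map_bp - 85.0) + rng.normal(0.0, 4.0, size=n)

    X = np.column_stack([fdp, map_bp])
    model = LinearRegression().fit(X, iss)
    print("coefficients (FDP, MAP):", np.round(model.coef_, 3))
    print("intercept:", round(model.intercept_, 2))
    print("predicted ISS for FDP=80, MAP=70:", round(model.predict([[80.0, 70.0]])[0], 1))
    ```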