WorldWideScience

Sample records for model predicts critical

  1. Critical appraisal and data extraction for systematic reviews of prediction modelling studies: the CHARMS checklist.

    Directory of Open Access Journals (Sweden)

    Karel G M Moons

    2014-10-01

    Full Text Available Karel Moons and colleagues provide a checklist and background explanation for critically appraising and extracting data from systematic reviews of prognostic and diagnostic prediction modelling studies. Please see later in the article for the Editors' Summary.

  2. Development of a neural network model for predicting glucose levels in a surgical critical care setting

    Directory of Open Access Journals (Sweden)

    Pappada Scott M

    2010-09-01

    Full Text Available Abstract Development of neural network models for the prediction of glucose levels in critically ill patients through the application of continuous glucose monitoring may provide enhanced patient outcomes. Here we demonstrate the utilization of a predictive model in real-time bedside monitoring. Such modeling may provide intelligent/directed therapy recommendations, guidance, and ultimately automation, in the near future as a means of providing optimal patient safety and care in the provision of insulin drips to prevent hyperglycemia and hypoglycemia.
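
    The modeling approach described above can be sketched generically. The toy example below (not the authors' architecture or data) trains a one-hidden-layer network in pure Python to predict the next glucose reading of a synthetic trace from the previous three readings; the sinusoidal trace, network size, and learning rate are all invented for illustration.

    ```python
    import math
    import random

    # Toy stand-in for the study's model: a one-hidden-layer network trained by
    # stochastic gradient descent to predict the next glucose reading from the
    # previous three. The trace, network size and learning rate are invented.
    random.seed(1)
    trace = [120 + 30 * math.sin(t / 10) for t in range(200)]  # synthetic trace (mg/dL)

    # Normalise readings to roughly [0, 1] and build 3-step input windows.
    norm = [(v - 90) / 60 for v in trace]
    xs = [norm[t:t + 3] for t in range(len(norm) - 3)]
    ys = [norm[t + 3] for t in range(len(norm) - 3)]

    H, lr = 8, 0.05
    w1 = [[random.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(H)]
    b1 = [0.0] * H
    w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]
    b2 = 0.0

    def forward(x):
        h = [math.tanh(sum(w * xi for w, xi in zip(ws, x)) + b)
             for ws, b in zip(w1, b1)]
        return h, sum(w * hi for w, hi in zip(w2, h)) + b2

    for _ in range(150):                      # plain SGD over the whole trace
        for x, y in zip(xs, ys):
            h, out = forward(x)
            err = out - y                     # d(0.5*err^2)/d(out)
            for j in range(H):
                grad_h = err * w2[j] * (1 - h[j] ** 2)
                w2[j] -= lr * err * h[j]
                for i in range(3):
                    w1[j][i] -= lr * grad_h * x[i]
                b1[j] -= lr * grad_h
            b2 -= lr * err

    # Mean absolute one-step prediction error, converted back to mg/dL.
    mae = sum(abs(forward(x)[1] - y) for x, y in zip(xs, ys)) / len(xs) * 60
    print(round(mae, 2))
    ```

    A real bedside system would of course train on patient data and validate prospectively; the sketch only shows the mechanics of the prediction loop.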

  3. Nuclear criticality predictability

    International Nuclear Information System (INIS)

    Briggs, J.B.

    1999-01-01

    As a result of considerable effort, a large portion of the tedious and redundant research and processing of critical experiment data has been eliminated. The necessary step in criticality safety analyses of validating computer codes against benchmark critical data is greatly streamlined, and valuable criticality safety experimental data are preserved. Criticality safety personnel in 31 different countries are now using the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments'. Much has been accomplished by the work of the ICSBEP. However, evaluation and documentation represent only one element of a successful Nuclear Criticality Safety Predictability Program, and this element exists as a separate entity only because the work was not completed in conjunction with the experimentation process. I believe, however, that the work of the ICSBEP has also served to unify the other elements of nuclear criticality predictability. All elements are interrelated, but for a time it seemed that communication between them was inadequate. The ICSBEP has highlighted gaps in data, retrieved lost data, helped to identify errors in cross-section processing codes, and helped bring the international criticality safety community together in a common cause as true friends and colleagues. It has been a privilege to associate with those who work so diligently to make the project a success. (J.P.N.)

  4. Estimating long-term survival of critically ill patients: the PREDICT model.

    Directory of Open Access Journals (Sweden)

    Kwok M Ho

    Full Text Available BACKGROUND: Long-term survival outcome of critically ill patients is important in assessing the effectiveness of new treatments and making treatment decisions. We developed a prognostic model for estimating the long-term survival of critically ill patients. METHODOLOGY AND PRINCIPAL FINDINGS: This was a retrospective linked-data cohort study involving 11,930 critically ill patients who survived more than 5 days in a university teaching hospital in Western Australia. Older age, male gender, co-morbidities, severe acute illness as measured by Acute Physiology and Chronic Health Evaluation II predicted mortality, and more days of vasopressor or inotropic support, mechanical ventilation, and hemofiltration within the first 5 days of intensive care unit admission were associated with worse long-term survival up to 15 years after the onset of critical illness. Among these seven pre-selected predictors, age (explained 50% of the variability of the model; hazard ratio [HR] between 80 and 60 years old = 1.95) and co-morbidity (explained 27% of the variability; HR between Charlson co-morbidity index 5 and 0 = 2.15) were the most important determinants. A nomogram based on the pre-selected predictors is provided to allow estimation of the median survival time as well as the 1-year, 3-year, 5-year, 10-year, and 15-year survival probabilities for a patient. The discrimination (adjusted c-index = 0.757, 95% confidence interval 0.745-0.769) and calibration of this prognostic model were acceptable. SIGNIFICANCE: Age, gender, co-morbidities, severity of acute illness, and the intensity and duration of intensive care therapy can be used to estimate the long-term survival of critically ill patients. Age and co-morbidity are the most important determinants of long-term prognosis of critically ill patients.
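
    Under the proportional-hazards assumption implied by the hazard ratios quoted above, an HR scales a baseline survival curve as S1(t) = S0(t)**HR. The sketch below illustrates this with an invented exponential baseline; only the HR of 1.95 is taken from the abstract.

    ```python
    import math

    def survival(t, hr=1.0, base_rate=0.05):
        """Exponential baseline S0(t) = exp(-rate*t), scaled by a hazard ratio.

        The baseline rate of 0.05/year is invented for illustration; it is not
        the PREDICT model's baseline hazard.
        """
        return math.exp(-base_rate * t) ** hr

    s60 = survival(10)            # hypothetical reference patient, 10-year survival
    s80 = survival(10, hr=1.95)   # hypothetical older patient (HR = 1.95)
    print(round(s60, 3), round(s80, 3))
    ```

    The point of the sketch is only the scaling rule: a single HR shifts the whole curve, which is what a nomogram built on such a model encodes graphically.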

  5. A Critical Review for Developing Accurate and Dynamic Predictive Models Using Machine Learning Methods in Medicine and Health Care.

    Science.gov (United States)

    Alanazi, Hamdan O; Abdullah, Abdul Hanan; Qureshi, Kashif Naseer

    2017-04-01

    Recently, Artificial Intelligence (AI) has been widely used in the medicine and health care sector. Within AI, classification and prediction are major fields of machine learning. Today, the study of existing predictive models based on machine learning methods is extremely active. Doctors need accurate predictions of the outcomes of their patients' diseases. In addition to accuracy, timing is another significant factor that influences treatment decisions. In this paper, existing predictive models in medicine and health care are critically reviewed. Furthermore, the most prominent machine learning methods are explained, and the confusion between statistical approaches and machine learning is clarified. A review of the related literature reveals that the predictions of existing predictive models differ even when the same dataset is used. Therefore, existing predictive models are essential, and current methods must be improved.

  6. A mathematical model for predicting glucose levels in critically-ill patients: the PIGnOLI model

    Directory of Open Access Journals (Sweden)

    Zhongheng Zhang

    2015-06-01

    Full Text Available Background and Objectives. Glycemic control is of paramount importance in the intensive care unit. Presently, several BG control algorithms have been developed for clinical trials, but they are mostly based on experts' opinion and consensus. There are no validated models predicting how glucose levels will change after initiation of insulin infusion in critically ill patients. The study aimed to develop an equation for initial insulin dose setting. Methods. A large critical care database was employed for the study. Linear regression model fitting was employed. Retested blood glucose was used as the independent variable. Insulin rate was forced into the model. Multivariable fractional polynomials and interaction terms were used to explore the complex relationships among covariates. The overall fit of the model was examined using residuals and adjusted R-squared values. Regression diagnostics were used to explore the influence of outliers on the model. Main Results. A total of 6,487 ICU admissions requiring insulin pump therapy were identified. The dataset was randomly split into two subsets at a 7:3 ratio. The initial model comprised fractional polynomials and interaction terms. However, this model was not stable when several outliers were excluded, so a simple linear model without interactions was fitted. The selected prediction model (Predicting Glucose Levels in ICU, PIGnOLI) included the variables initial blood glucose, insulin rate, PO volume, total parenteral nutrition, body mass index (BMI), lactate, congestive heart failure, renal failure, liver disease, time interval of BS recheck, and dextrose rate. Insulin rate was significantly associated with blood glucose reduction (coefficient: −0.52, 95% CI [−1.03, −0.01]). The parsimonious model was well validated on the validation subset, with an adjusted R-squared value of 0.8259. Conclusion. The study developed the PIGnOLI model for initial insulin dose setting. Furthermore, experimental study is
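
    The fitting procedure described (linear regression on a 7:3 development/validation split, judged by adjusted R-squared) can be sketched as follows. The data are synthetic; only the reported insulin-rate coefficient of −0.52 is borrowed as the "true" slope of the simulation.

    ```python
    import random

    def ols_fit(xs, ys):
        """Closed-form simple linear regression: y = a + b*x."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
        return my - b * mx, b

    def adjusted_r2(xs, ys, a, b, n_params=1):
        """R-squared penalised for the number of predictors."""
        n = len(ys)
        my = sum(ys) / n
        ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
        ss_tot = sum((y - my) ** 2 for y in ys)
        r2 = 1 - ss_res / ss_tot
        return 1 - (1 - r2) * (n - 1) / (n - n_params - 1)

    random.seed(0)
    # Hypothetical relationship: glucose reduction scales with insulin rate.
    rates = [random.uniform(0.5, 10) for _ in range(1000)]
    drops = [-0.52 * r + random.gauss(0, 0.5) for r in rates]

    split = int(0.7 * len(rates))             # 7:3 development/validation split
    a, b = ols_fit(rates[:split], drops[:split])
    r2_val = adjusted_r2(rates[split:], drops[split:], a, b)
    print(round(b, 2), round(r2_val, 3))
    ```

    The published model is multivariable, so in practice the normal equations are solved for a full design matrix; the single-predictor closed form above just shows the shape of the development/validation workflow.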

  7. Constructing an everywhere and locally relevant predictive model of the West-African critical zone

    Science.gov (United States)

    Hector, B.; Cohard, J. M.; Pellarin, T.; Maxwell, R. M.; Cappelaere, B.; Demarty, J.; Grippa, M.; Kergoat, L.; Lebel, T.; Mamadou, O.; Mougin, E.; Panthou, G.; Peugeot, C.; Vandervaere, J. P.; Vischel, T.; Vouillamoz, J. M.

    2017-12-01

    Considering water resources and hydrologic hazards, West Africa is among the regions most vulnerable to both climatic change (e.g. the observed intensification of precipitation) and anthropogenic change. With a demographic growth rate of about +3% per year, the region is experiencing rapid land-use change and increased pressure on surface water and groundwater resources, with observed consequences for the hydrological cycle (the water-table rise of the Sahelian paradox, an increase in flood occurrence, etc.). Managing large hydrosystems (such as transboundary aquifers or river basins like the Niger) requires anticipating such changes. However, the region significantly lacks the observations needed to construct and validate critical zone (CZ) models able to predict the future hydrologic regime, and it comprises hydrosystems spanning strong environmental gradients (e.g. geological, climatic, ecological) with very different dominant hydrological processes. We address these issues by constructing a high-resolution (1 km²) regional-scale, physically based model using ParFlow-CLM, which allows modeling a wide range of processes without prior knowledge of their relative dominance. Our approach combines modeling at multiple scales, from local to meso and regional, within the same theoretical framework. The local- and meso-scale models are evaluated against the rich AMMA-CATCH CZ observation database, which covers 3 supersites with contrasting environments in Benin (lat. 9.8°N), Niger (lat. 13.3°N) and Mali (lat. 15.3°N). At the regional scale, the lack of a relevant map of soil hydrodynamic parameters is addressed using remote sensing data assimilation. Our first results show the model's ability to reproduce the known dominant hydrological processes (runoff generation, ET, groundwater recharge…) across the major West African regions and allow us to conduct virtual experiments to explore the impact of global changes on the hydrosystems. This approach is a first step toward the construction of

  8. A dry-spot model for the prediction of critical heat flux in water boiling in bubbly flow regime

    International Nuclear Information System (INIS)

    Ha, Sang Jun; No, Hee Cheon

    1997-01-01

    This paper presents a prediction of critical heat flux (CHF) in the bubbly flow regime using the dry-spot model recently proposed by the authors for pool and flow boiling CHF, together with existing correlations for the forced convective heat transfer coefficient, active site density and bubble departure diameter in the nucleate boiling region. Without any of the empirical constants that are always present in earlier models, comparisons of the model predictions with experimental data for upward flow of water in vertical, uniformly heated round tubes are performed and show good agreement. The parametric trends of CHF have been explored with respect to variations in pressure, tube diameter and length, mass flux and inlet subcooling

  9. Comprehensive and critical review of the predictive properties of the various mass models

    International Nuclear Information System (INIS)

    Haustein, P.E.

    1984-01-01

    Since the publication of the 1975 Mass Predictions, approximately 300 new atomic masses have been reported. These data come from a variety of experimental studies using diverse techniques, and they span a mass range from the lightest isotopes to the very heaviest. It is instructive to compare these data with the 1975 predictions and several others (Moeller and Nix; Monahan; Serduke; Uno and Yamada) which appeared later. Extensive numerical and graphical analyses have been performed to examine the quality of the mass predictions from the various models and to identify features in these models that require correction. In general, there is only a rough correlation between the ability of a particular model to reproduce the measured mass surface used to refine its adjustable parameters and that model's ability to predict the new masses correctly. For some models, distinct systematic features appear when the new mass data are plotted as functions of relevant physical variables. Global intercomparisons of all the models are made first, followed by several examples of the types of analysis performed with individual mass models

  10. Predicting the local impacts of energy development: a critical guide to forecasting methods and models

    Energy Technology Data Exchange (ETDEWEB)

    Sanderson, D.; O' Hare, M.

    1977-05-01

    Models forecasting second-order impacts from energy development vary in their methodology, output, assumptions, and quality. As a rough dichotomy, they either simulate community development over time or combine various submodels providing community snapshots at selected points in time. Using one or more methods - input/output models, gravity models, econometric models, cohort-survival models, or coefficient models - they estimate energy-development-stimulated employment, population, public and private service needs, and government revenues and expenditures at some future time (ranging from annual to average year predictions) and for different governmental jurisdictions (municipal, county, state, etc.). Underlying assumptions often conflict, reflecting their different sources - historical data, comparative data, surveys, and judgments about future conditions. Model quality, measured by special features, tests, exportability and usefulness to policy-makers, reveals careful and thorough work in some cases and hurried operations with insufficient in-depth analysis in others.
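
    A minimal sketch of the "coefficient model" family mentioned above: projected direct employment is converted into population and public-service needs through fixed multipliers. Every coefficient below is invented for illustration.

    ```python
    def impact(direct_jobs, job_multiplier=1.6, household_size=2.8,
               workers_per_household=1.2, teachers_per_1000=18):
        """Hypothetical coefficient model for second-order impacts.

        All multipliers are illustrative placeholders, not values from any
        of the models surveyed in the guide.
        """
        total_jobs = direct_jobs * job_multiplier      # direct + indirect/induced
        households = total_jobs / workers_per_household
        population = households * household_size
        teachers = population / 1000 * teachers_per_1000  # one service need
        return round(population), round(teachers)

    # e.g. a hypothetical energy project creating 500 direct jobs:
    pop, teachers = impact(500)
    print(pop, teachers)
    ```

    The critique in the abstract applies directly: the output is only as good as the assumed coefficients, which is why the guide stresses their sources (historical data, surveys, judgment).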

  11. Modelling the hare and the tortoise: predicting the range of in-vehicle task times using critical path analysis.

    Science.gov (United States)

    Harvey, Catherine; Stanton, Neville A

    2013-01-01

    Analytic models can enable predictions about important aspects of the usability of in-vehicle information systems (IVIS) to be made at an early stage of the product development process. Task times provide a quantitative measure of user performance and are therefore important in the evaluation of IVIS usability. In this study, critical path analysis (CPA) was used to model IVIS task times in a stationary vehicle, and the technique was extended to produce predictions for slowperson and fastperson performance, as well as average user (middleperson) performance. The CPA-predicted task times were compared to task times recorded in an empirical simulator study of IVIS interaction, and the predicted times were, on average, within acceptable precision limits. This work forms the foundation for extension of the CPA model to predict IVIS task times in a moving vehicle, to reflect the demands of the dual-task driving scenario. The CPA method was extended for the prediction of slowperson and fastperson IVIS task times. Comparison of the model predictions with empirical data demonstrated acceptable precision. The CPA model can be used in early IVIS evaluation; however, there is a need to extend it to represent the dual-task driving scenario.
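
    The core computation behind CPA is the longest path through a directed acyclic graph of activities. A minimal sketch, with invented IVIS activity names and durations:

    ```python
    def critical_path(durations, deps):
        """Longest-path duration (in seconds) through a DAG of activities."""
        finish = {}

        def f(task):
            # Earliest finish time = own duration + latest prerequisite finish.
            if task not in finish:
                finish[task] = durations[task] + max(
                    (f(d) for d in deps.get(task, ())), default=0)
            return finish[task]

        return max(f(t) for t in durations)

    # Hypothetical IVIS interaction: the press must wait for both the hand
    # movement and the (glance-dependent) decision.
    durations = {"glance": 0.3, "reach": 0.4, "decide": 0.5, "press": 0.2}
    deps = {"decide": ["glance"], "press": ["reach", "decide"]}
    cpa_time = critical_path(durations, deps)
    print(cpa_time)
    ```

    Slowperson/fastperson predictions follow by substituting percentile durations for each activity before recomputing the longest path.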

  12. Evaluation of cloud prediction and determination of critical relative humidity for a mesoscale numerical weather prediction model

    Energy Technology Data Exchange (ETDEWEB)

    Seaman, N.L.; Guo, Z.; Ackerman, T.P. [Pennsylvania State Univ., University Park, PA (United States)

    1996-04-01

    Predictions of cloud occurrence and vertical location from the Pennsylvania State University/National Center for Atmospheric Research nonhydrostatic mesoscale model (MM5) were evaluated statistically using cloud observations obtained at Coffeyville, Kansas, as part of the Second International Satellite Cloud Climatology Project Regional Experiment campaign. Seventeen cases were selected for simulation during a November-December 1991 field study. MM5 was used to produce two sets of 36-km simulations, one with and one without four-dimensional data assimilation (FDDA), and a set of 12-km simulations without FDDA, but nested within the 36-km FDDA runs.

  13. Continuous Automated Model EvaluatiOn (CAMEO) complementing the critical assessment of structure prediction in CASP12.

    Science.gov (United States)

    Haas, Jürgen; Barbato, Alessandro; Behringer, Dario; Studer, Gabriel; Roth, Steven; Bertoni, Martino; Mostaguir, Khaled; Gumienny, Rafal; Schwede, Torsten

    2018-03-01

    Every second year, the community experiment "Critical Assessment of Techniques for Structure Prediction" (CASP) conducts an independent blind assessment of structure prediction methods, providing a framework for comparing the performance of different approaches and discussing the latest developments in the field. Yet, developers of automated computational modeling methods clearly benefit from more frequent evaluations based on larger sets of data. The "Continuous Automated Model EvaluatiOn (CAMEO)" platform complements the CASP experiment by conducting fully automated blind prediction assessments based on the weekly pre-release of sequences of those structures which are going to be published in the next release of the Protein Data Bank (PDB). CAMEO publishes weekly benchmarking results based on models collected during a 4-day prediction window, on average assessing ca. 100 targets during a time frame of 5 weeks. CAMEO benchmarking data are generated consistently for all participating methods at the same point in time, enabling developers to benchmark and cross-validate their method's performance, and to refer directly to the benchmarking results in publications. In order to facilitate server development and promote shorter release cycles, CAMEO sends weekly emails with submission statistics and low-performance warnings. Many participants of CASP have successfully employed CAMEO when preparing their methods for upcoming community experiments. CAMEO offers a variety of scores to allow benchmarking diverse aspects of structure prediction methods. By introducing new scoring schemes, CAMEO facilitates new development in areas of active research, for example, modeling quaternary structure, complexes, or ligand binding sites. © 2017 Wiley Periodicals, Inc.

  14. A dry-spot model for the prediction of critical heat flux in water boiling in bubbly flow regime

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Sang Jun; No, Hee Cheon [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1997-12-31

    This paper presents a prediction of critical heat flux (CHF) in the bubbly flow regime using the dry-spot model recently proposed by the authors for pool and flow boiling CHF, together with existing correlations for the forced convective heat transfer coefficient, active site density and bubble departure diameter in the nucleate boiling region. Without any of the empirical constants that are always present in earlier models, comparisons of the model predictions with experimental data for upward flow of water in vertical, uniformly heated round tubes are performed and show good agreement. The parametric trends of CHF have been explored with respect to variations in pressure, tube diameter and length, mass flux and inlet subcooling. 16 refs., 6 figs., 1 tab. (Author)

  15. A study on the development of advanced models to predict the critical heat flux for water and liquid metals

    International Nuclear Information System (INIS)

    Lee, Yong Bum

    1994-02-01

    The critical heat flux (CHF) phenomenon in two-phase convective flows has been an important issue in the design and safety analysis of light water reactors (LWRs) as well as sodium-cooled liquid metal fast breeder reactors (LMFBRs). In the LWR application especially, many physical aspects of the CHF phenomenon are understood, and reliable correlations and mechanistic models to predict the CHF condition have been proposed. However, there are few correlations and models applicable to liquid metals. Compared with water, liquid metals show a divergent picture of the boiling pattern. Therefore, CHF conditions obtained from investigations with water cannot be applied to liquid metals. In this work, a mechanistic model to predict the CHF of water and a correlation for liquid metals are developed. First, a mechanistic model to predict the CHF in flow boiling at low quality was developed based on the liquid-sublayer dryout mechanism. In this approach the CHF is assumed to occur when a vapor blanket isolates the liquid sublayer from the bulk liquid and the liquid entering the sublayer falls short of balancing the rate of sublayer dryout by vaporization. The vapor blanket velocity is therefore the key parameter. In this work the vapor blanket velocity is determined theoretically from mass, energy, and momentum balances, and from it the mechanistic model for the CHF in flow boiling at low quality is developed. The accuracy of the present model is evaluated by comparing its predictions with experimental data and the tabular data of look-up tables. The predictions of the present model agree well with extensive CHF data. In the latter part, a correlation to predict the CHF for liquid metals is developed based on the flow excursion mechanism. Using the Baroczy two-phase frictional pressure drop correlation and the Ledinegg instability criterion, the relationship between the CHF of liquid metals and the principal parameters is derived and finally the

  16. Prediction, Regression and Critical Realism

    DEFF Research Database (Denmark)

    Næss, Petter

    2004-01-01

    This paper considers the possibility of prediction in land use planning, and the use of statistical research methods in analyses of relationships between urban form and travel behaviour. Influential writers within the tradition of critical realism reject the possibility of predicting social phenomena. This position is fundamentally problematic to public planning. Without at least some ability to predict the likely consequences of different proposals, the justification for public sector intervention into market mechanisms will be frail. Statistical methods like regression analyses are commonly seen as necessary in order to identify aggregate level effects of policy measures, but are questioned by many advocates of critical realist ontology. Using research into the relationship between urban structure and travel as an example, the paper discusses relevant research methods and the kinds...

  17. Development and validation of a prediction model for insulin-associated hypoglycemia in non-critically ill hospitalized adults.

    Science.gov (United States)

    Mathioudakis, Nestoras Nicolas; Everett, Estelle; Routh, Shuvodra; Pronovost, Peter J; Yeh, Hsin-Chieh; Golden, Sherita Hill; Saria, Suchi

    2018-01-01

    To develop and validate a multivariable prediction model for insulin-associated hypoglycemia in non-critically ill hospitalized adults. We collected pharmacologic, demographic, laboratory, and diagnostic data from 128 657 inpatient days on which at least 1 unit of subcutaneous insulin was administered in the absence of intravenous insulin, total parenteral nutrition, or insulin pump use (index days). These data were used to develop multivariable prediction models for biochemical and clinically significant hypoglycemia (blood glucose (BG) of ≤70 mg/dL and model development and validation, respectively. Using predictors of age, weight, admitting service, insulin doses, mean BG, nadir BG, BG coefficient of variation (CV-BG), diet status, type 1 diabetes, type 2 diabetes, acute kidney injury, chronic kidney disease (CKD), liver disease, and digestive disease, our model achieved a c-statistic of 0.77 (95% CI 0.75 to 0.78), a positive likelihood ratio (+LR) of 3.5 (95% CI 3.4 to 3.6) and a negative likelihood ratio (-LR) of 0.32 (95% CI 0.30 to 0.35) for prediction of biochemical hypoglycemia. Using predictors of sex, weight, insulin doses, mean BG, nadir BG, CV-BG, diet status, type 1 diabetes, type 2 diabetes, CKD stage, and steroid use, our model achieved a c-statistic of 0.80 (95% CI 0.78 to 0.82), +LR of 3.8 (95% CI 3.7 to 4.0) and -LR of 0.2 (95% CI 0.2 to 0.3) for prediction of clinically significant hypoglycemia. Hospitalized patients at risk of insulin-associated hypoglycemia can be identified using validated prediction models, which may support the development of real-time preventive interventions.
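
    The reported metrics (c-statistic, +LR, −LR) are straightforward to compute from predicted risks and observed outcomes; a sketch on toy data (values unrelated to the published model):

    ```python
    def c_statistic(scores, labels):
        """Probability a random positive outscores a random negative (ties = 0.5)."""
        pos = [s for s, y in zip(scores, labels) if y == 1]
        neg = [s for s, y in zip(scores, labels) if y == 0]
        wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    def likelihood_ratios(scores, labels, threshold):
        """(+LR, -LR) at a given decision threshold."""
        tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
        tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
        sens, spec = tp / (tp + fn), tn / (tn + fp)
        return sens / (1 - spec), (1 - sens) / spec

    # Toy predicted risks and observed hypoglycemia outcomes.
    scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
    labels = [1,   1,   0,   1,   0,   0,   1,   0]
    auc = c_statistic(scores, labels)
    plr, nlr = likelihood_ratios(scores, labels, 0.5)
    print(round(auc, 3), round(plr, 2), round(nlr, 2))
    ```

    The c-statistic summarises ranking across all thresholds, while the likelihood ratios depend on the single threshold chosen for the alert, which is why both are reported.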

  18. Investigating Predictive Role of Critical Thinking on Metacognition with Structural Equation Modeling

    Science.gov (United States)

    Arslan, Serhat

    2015-01-01

    The purpose of this study is to examine the relationships between critical thinking and metacognition. The sample of study consists of 390 university students who were enrolled in different programs at Sakarya University, in Turkey. In this study, the Critical Thinking Disposition Scale and Metacognitive Thinking Scale were used. The relationships…

  19. External validation of the Intensive Care National Audit & Research Centre (ICNARC) risk prediction model in critical care units in Scotland.

    Science.gov (United States)

    Harrison, David A; Lone, Nazir I; Haddow, Catriona; MacGillivray, Moranne; Khan, Angela; Cook, Brian; Rowan, Kathryn M

    2014-01-01

    Risk prediction models are used in critical care for risk stratification, summarising and communicating risk, supporting clinical decision-making and benchmarking performance. However, they require validation before they can be used with confidence, ideally using independently collected data from a different source to that used to develop the model. The aim of this study was to validate the Intensive Care National Audit & Research Centre (ICNARC) model using independently collected data from critical care units in Scotland. Data were extracted from the Scottish Intensive Care Society Audit Group (SICSAG) database for the years 2007 to 2009. Recoding and mapping of variables was performed, as required, to apply the ICNARC model (2009 recalibration) to the SICSAG data using standard computer algorithms. The performance of the ICNARC model was assessed for discrimination, calibration and overall fit and compared with that of the Acute Physiology And Chronic Health Evaluation (APACHE) II model. There were 29,626 admissions to 24 adult, general critical care units in Scotland between 1 January 2007 and 31 December 2009. After exclusions, 23,269 admissions were included in the analysis. The ICNARC model outperformed APACHE II on measures of discrimination (c index 0.848 versus 0.806), calibration (Hosmer-Lemeshow chi-squared statistic 18.8 versus 214) and overall fit (Brier's score 0.140 versus 0.157; Shapiro's R 0.652 versus 0.621). Model performance was consistent across the three years studied. The ICNARC model performed well when validated in an external population to that in which it was developed, using independently collected data.
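
    Two of the validation measures used above can be sketched on toy data: the Brier score (overall fit) and a Hosmer-Lemeshow-style chi-squared (calibration), here with 2 risk groups instead of the usual 10. All numbers are illustrative.

    ```python
    def brier(probs, outcomes):
        """Mean squared difference between predicted risk and observed outcome."""
        return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

    def hosmer_lemeshow(probs, outcomes, groups=2):
        """Chi-squared over risk-sorted groups: observed vs expected events."""
        pairs = sorted(zip(probs, outcomes))
        size = len(pairs) // groups
        chi2 = 0.0
        for g in range(groups):
            chunk = pairs[g * size:(g + 1) * size] if g < groups - 1 \
                else pairs[g * size:]
            exp = sum(p for p, _ in chunk)     # expected events in group
            obs = sum(y for _, y in chunk)     # observed events in group
            n = len(chunk)
            chi2 += (obs - exp) ** 2 / exp \
                  + ((n - obs) - (n - exp)) ** 2 / (n - exp)
        return chi2

    probs    = [0.1, 0.2, 0.2, 0.3, 0.6, 0.7, 0.8, 0.9]
    outcomes = [0,   0,   1,   0,   1,   0,   1,   1]
    bs = brier(probs, outcomes)
    hl = hosmer_lemeshow(probs, outcomes)
    print(round(bs, 3), round(hl, 4))
    ```

    As in the study, a lower Brier score and a lower Hosmer-Lemeshow statistic both favour the better-fitting model.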

  20. Predictive modelling of survival and length of stay in critically ill patients using sequential organ failure scores.

    Science.gov (United States)

    Houthooft, Rein; Ruyssinck, Joeri; van der Herten, Joachim; Stijven, Sean; Couckuyt, Ivo; Gadeyne, Bram; Ongenae, Femke; Colpaert, Kirsten; Decruyenaere, Johan; Dhaene, Tom; De Turck, Filip

    2015-03-01

    The length of stay (LOS) of critically ill patients in the intensive care unit (ICU) is an indication of patient ICU resource usage and varies considerably. Planning of postoperative ICU admissions is important, as ICUs often have no unoccupied beds available. Estimation of ICU bed availability for the coming days is entirely based on the clinical judgement of intensivists and is therefore too inaccurate. For this reason, predictive models have much potential for improving planning of ICU patient admission. Our goal is to develop and optimize models for patient survival and ICU LOS based on monitored ICU patient data. Furthermore, these models are compared on their use of sequential organ failure assessment (SOFA) scores as well as the underlying raw data as input features. Different machine learning techniques are trained, using a 14,480-patient dataset, both on SOFA scores and on their underlying raw data values from the first five days after admission, in order to predict (i) the patient LOS, and (ii) patient mortality. Furthermore, to help physicians assess the prediction credibility, a probabilistic model is tailored to the output of our best-performing model, assigning a belief to each patient status prediction. A two-by-two grid is built, using the classification outputs of the mortality and prolonged-stay predictors, to improve the patient LOS regression models. For predicting patient mortality and a prolonged stay, the best performing model is a support vector machine (SVM) with GA,D=65.9% (area under the curve (AUC) of 0.77) and GS,L=73.2% (AUC of 0.82). In terms of LOS regression, the best performing model is support vector regression, achieving a mean absolute error of 1.79 days and a median absolute error of 1.22 days for patients surviving a non-prolonged stay. Using a classification grid based on the predicted patient mortality and prolonged stay allows more accurate modeling of the patient LOS. The detailed models allow to support
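
    The two-by-two grid idea can be sketched as routing logic: the binary mortality and prolonged-stay predictions select one of four specialised LOS estimators. The stub estimators and their outputs below stand in for the paper's SVM/SVR models and are purely illustrative.

    ```python
    def los_estimate(patient, dies, prolonged):
        """Route a patient to one of four LOS estimators via a 2x2 grid.

        `dies` and `prolonged` would come from upstream classifiers; the
        lambdas are hypothetical stand-ins for per-cell regression models.
        """
        grid = {
            (False, False): lambda p: 3.0,    # survivor, short stay
            (False, True):  lambda p: 12.0,   # survivor, prolonged stay
            (True,  False): lambda p: 2.0,    # non-survivor, short stay
            (True,  True):  lambda p: 15.0,   # non-survivor, prolonged stay
        }
        return grid[(dies, prolonged)](patient)

    est = los_estimate({}, dies=False, prolonged=True)
    print(est)
    ```

    Training a separate regressor per grid cell lets each one specialise on a more homogeneous subpopulation, which is the mechanism behind the accuracy gain the abstract reports.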

  1. Sensitivity of predictions in an effective model: Application to the chiral critical end point position in the Nambu-Jona-Lasinio model

    International Nuclear Information System (INIS)

    Biguet, Alexandre; Hansen, Hubert; Brugiere, Timothee; Costa, Pedro; Borgnat, Pierre

    2015-01-01

    The measurement of the position of the chiral critical end point (CEP) in the QCD phase diagram is under debate. While it is possible to predict its position by using effective models specifically built to reproduce some of the features of the underlying theory (QCD), the quality of the predictions (e.g., the CEP position) obtained by such effective models depends on whether solving the model equations constitutes a well- or ill-posed inverse problem. Considering these predictions as inverse problems provides tools to evaluate whether the problem is ill-conditioned, meaning that infinitesimal variations of the inputs of the model can cause comparatively large variations of the predictions. If it is ill-conditioned, this has major consequences for the finite variations that could come from experimental and/or theoretical errors. In the following, we apply such reasoning to the predictions of a particular Nambu-Jona-Lasinio model within the mean field + ring approximations, with special attention to the prediction of the chiral CEP position in the (T-μ) plane. We find that the problem is ill-conditioned (i.e. very sensitive to input variations) for the T-coordinate of the CEP, whereas it is well-posed for the μ-coordinate. As a consequence, when the chiral condensate varies in a 10 MeV range, μ_CEP varies far less. As an illustration of how problematic this could be, we show that the main consequence of taking into account finite variation of the inputs is that the existence of the CEP itself can no longer be predicted: for a deviation as low as 0.6% with respect to vacuum phenomenology (well within the estimation of the first correction to the ring approximation) the CEP may or may not exist. (orig.)
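
    The notion of ill-conditioning invoked above can be illustrated generically with a relative condition number estimated by finite differences; the toy functions below are illustrative stand-ins, not the NJL gap equations.

    ```python
    def relative_condition(f, x, eps=1e-6):
        """Relative condition number |x*f'(x)/f(x)|, f' by central differences.

        Large values mean a small relative change in the input x produces a
        large relative change in the output f(x), i.e. the prediction is
        ill-conditioned at that point.
        """
        df = (f(x * (1 + eps)) - f(x * (1 - eps))) / (2 * eps * x)
        return abs(x * df / f(x))

    # A benign output: doubling sensitivity, well-conditioned.
    well = relative_condition(lambda x: x ** 2, 1.0)
    # A toy output sitting near a threshold: tiny input shifts swing it wildly.
    ill = relative_condition(lambda x: (x - 1) ** 2 + 1e-6, 1.001)
    print(round(well, 2), round(ill, 1))
    ```

    The paper's finding maps onto this picture: the T-coordinate of the CEP behaves like the second case (large condition number), the μ-coordinate like the first.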

  2. A general unified non-equilibrium model for predicting saturated and subcooled critical two-phase flow rates through short and long tubes

    Energy Technology Data Exchange (ETDEWEB)

    Fraser, D.W.H. [Univ. of British Columbia (Canada); Abdelmessih, A.H. [Univ. of Toronto, Ontario (Canada)

    1995-09-01

A general unified model is developed to predict one-component critical two-phase pipe flow. Modelling of the two-phase flow is accomplished by describing the evolution of the flow between the location of flashing inception and the exit (critical) plane. The model approximates the nonequilibrium phase change process via thermodynamic equilibrium paths. Included are the relative effects of varying the location of flashing inception, pipe geometry, fluid properties and length to diameter ratio. The model predicts that a range of critical mass fluxes exists, bounded by maximum and minimum values for a given thermodynamic state. This range is more pronounced at lower subcooled stagnation states and can be attributed to the variation in the location of flashing inception. The model is based on the results of an experimental study of the critical two-phase flow of saturated and subcooled water through long tubes. In that study, the location of flashing inception was accurately controlled and adjusted through the use of a new device. The data obtained revealed that for fixed stagnation conditions, the maximum critical mass flux occurred with flashing inception located near the pipe exit, while minimum critical mass fluxes occurred with the flashing front located further upstream. Available data since 1970 for both short and long tubes over a wide range of conditions are compared with the model predictions. This includes test section L/D ratios from 25 to 300 and covers a temperature and pressure range of 110 to 280°C and 0.16 to 6.9 MPa, respectively. The predicted maximum and minimum critical mass fluxes show excellent agreement with the range observed in the experimental data.

  3. Acute Pancreatitis as a Model to Predict Transition of Systemic Inflammation to Organ Failure in Trauma and Critical Illness

    Science.gov (United States)

    2017-10-01

AWARD NUMBER: W81XWH-14-1-0376. TITLE: Acute Pancreatitis as a Model to Predict Transition of Systemic Inflammation to Organ Failure in Trauma and Critical Illness. DATES COVERED: 22 Sep 2016 - 21 Sep 2017. DISTRIBUTION: Unlimited. ABSTRACT: Trauma, extensive burns, bacterial infections, and acute pancreatitis (AP) are common

  4. Using discharge data to reduce structural deficits in a hydrological model with a Bayesian inference approach and the implications for the prediction of critical source areas

    Science.gov (United States)

    Frey, M. P.; Stamm, C.; Schneider, M. K.; Reichert, P.

    2011-12-01

A distributed hydrological model was used to simulate the distribution of fast runoff formation as a proxy for critical source areas for herbicide pollution in a small agricultural catchment in Switzerland. We tested to what degree predictions based on prior knowledge without local measurements could be improved by relying on observed discharge. This learning process consisted of five steps: For the prior prediction (step 1), knowledge of the model parameters was coarse and predictions were fairly uncertain. In the second step, discharge data were used to update the prior parameter distribution. Effects of uncertainty in input data and model structure were accounted for by an autoregressive error model. This step decreased the width of the marginal distributions of parameters describing the lower boundary (percolation rates) but hardly affected soil hydraulic parameters. Residual analysis (step 3) revealed model structure deficits. We modified the model, and in the subsequent Bayesian updating (step 4) the widths of the posterior marginal distributions were reduced for most parameters compared to those of the prior. This incremental procedure led to a strong reduction in the uncertainty of the spatial prediction. Thus, despite only using spatially integrated data (discharge), the spatially distributed effect of the improved model structure can be expected to improve the spatially distributed predictions as well. The fifth step consisted of a test with independent spatial data on herbicide losses and revealed ambiguous results. The comparison depended critically on the ratio of event to pre-event water that was discharged. This ratio cannot be estimated from hydrological data only. The results demonstrate that the value of local data is strongly dependent on a correct model structure. An iterative procedure of Bayesian updating, model testing, and model modification is suggested.
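The updating step described above (conditioning parameters on discharge through an autoregressive error model) can be sketched on a deliberately trivial stand-in: a one-parameter "model" Q = k·P with synthetic rainfall, AR(1) residuals, and a random-walk Metropolis sampler. Everything here (the model form, parameter values, priors) is an invented illustration, not the catchment model of the study.

```python
import numpy as np

# Toy Bayesian updating of a rainfall-runoff parameter k from discharge data,
# with an AR(1) error model absorbing input/structure errors (a sketch, not
# the distributed model of the paper).
rng = np.random.default_rng(0)
P = rng.gamma(2.0, 2.0, size=200)                # synthetic rainfall input
k_true, phi, sigma = 0.4, 0.6, 0.2
eps = np.zeros(200)
for t in range(1, 200):                          # AR(1) residual process
    eps[t] = phi * eps[t - 1] + rng.normal(0, sigma)
Q_obs = k_true * P + eps                         # "observed" discharge

def log_post(k):
    if k <= 0:
        return -np.inf                           # flat prior on k > 0
    r = Q_obs - k * P
    innov = r[1:] - phi * r[:-1]                 # whiten residuals with AR(1)
    return -0.5 * np.sum(innov**2) / sigma**2

# Random-walk Metropolis: the posterior for k narrows around the true value.
k, lp, chain = 1.0, log_post(1.0), []
for _ in range(5000):
    k_new = k + rng.normal(0, 0.05)
    lp_new = log_post(k_new)
    if np.log(rng.uniform()) < lp_new - lp:
        k, lp = k_new, lp_new
    chain.append(k)
post = np.array(chain[1000:])                    # discard burn-in
print(f"posterior k: {post.mean():.3f} +/- {post.std():.3f}")
```

The same loop structure (update, inspect residuals, modify the model, update again) is what the five-step procedure iterates at full scale.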

  5. SATURATED PROPERTIES PREDICTION IN CRITICAL REGION ...

    African Journals Online (AJOL)

    Preferred Customer

vapor pressure prediction and saturated volume prediction in the vicinity of the critical point. KEY WORDS: Equation of state, Saturated properties, ... The AARD between experimental and calculated saturated vapor molar volumes given by Trebble [18] were 5.81, 5.34, 5.08, and 10.62 for SRK, PR, CCOR, and PT, ...

  6. System Predicts Critical Runway Performance Parameters

    Science.gov (United States)

    Millen, Ernest W.; Person, Lee H., Jr.

    1990-01-01

    Runway-navigation-monitor (RNM) and critical-distances-process electronic equipment designed to provide pilot with timely and reliable predictive navigation information relating to takeoff, landing and runway-turnoff operations. Enables pilot to make critical decisions about runway maneuvers with high confidence during emergencies. Utilizes ground-referenced position data only to drive purely navigational monitor system independent of statuses of systems in aircraft.

  7. Criticism and Counter-Criticism of Public Management: Strategy Models

    OpenAIRE

    Luis C. Ortigueira

    2007-01-01

    Critical control is very important in scientific management. This paper presents models of critical and counter-critical public-management strategies, focusing on the types of criticism and counter-criticism manifested in parliamentary political debates. The paper includes: (i) a normative model showing how rational criticism can be carried out; (ii) a normative model for oral critical intervention; and (iii) a general motivational strategy model for criticisms and counter-criticisms. The pap...

  8. Numerical prediction of critical heat flux in nuclear fuel rod bundles with advanced three-fluid multidimensional porous media based model

    International Nuclear Information System (INIS)

    Zoran Stosic; Vladimir Stevanovic

    2005-01-01

Full text of publication follows: The modern design of nuclear fuel rod bundles for Boiling Water Reactors (BWRs) is characterised by an increased number of rods in the bundle, part-length fuel rods, and a water channel positioned asymmetrically with regard to the centre of the bundle cross section. Such a design causes significant spatial differences in volumetric heat flux, steam void fraction distribution, mass flux rate and other thermal-hydraulic parameters important for efficient cooling of the fuel rods during normal steady-state and transient conditions. The prediction of the Critical Heat Flux (CHF) under these complex thermal-hydraulic conditions is of prime importance for safe and economic BWR operation. An efficient numerical method for CHF prediction is developed based on the porous medium concept and multi-fluid two-phase flow models. The fuel rod bundle is treated as a porous medium with a two-phase flow through it. Coolant flow from the bundle entrance to the exit is characterised by a succession of one-phase and several two-phase flow patterns. A one-fluid (one-phase) model is used for the prediction of liquid heating in the bundle entrance region. A two-fluid modelling approach is applied to the bubbly and churn-turbulent vapour and liquid flows. A three-fluid modelling approach is applied to the annular flow pattern: liquid film on the rod walls, steam flow, and droplets entrained in the steam stream. Every fluid stream in the applied multi-fluid models is described by mass, momentum and energy balance equations. Closure laws for the prediction of interfacial transfer processes are stated, with special emphasis on the prediction of the steam-water interface drag force, through the interface drag coefficient, and the droplet entrainment and deposition rates for the three-fluid annular flow model. The model allows non-equilibrium thermal and flow conditions.
A new mechanistic approach for the CHF prediction

  9. The Teacher as Model Critic

    Science.gov (United States)

    Feldman, Edmund B.

    1973-01-01

Article is organized around five questions: what is art criticism, what is its relationship to education, what is its connection with art education, can criticism be taught, and what does a teacher need in order to function as a model of criticism? (Author/RK)

  10. Advances in criticality predictions for EBR-II

    International Nuclear Information System (INIS)

    Schaefer, R.W.; Imel, G.R.

    1994-01-01

    Improvements to startup criticality predictions for the EBR-II reactor have been made. More exact calculational models, methods and data are now used, and better procedures for obtaining experimental data that enter into the prediction are in place. Accuracy improved by more than a factor of two and the largest ECP error observed since the changes is only 18 cents. An experimental method using subcritical counts is also being implemented

  11. Predicting recovery from acute kidney injury in critically ill patients

    DEFF Research Database (Denmark)

    Itenov, Theis S; Berthelsen, Rasmus Ehrenfried; Jensen, Jens-Ulrik

    2018-01-01

    these patients. DESIGN: Observational study with development and validation of a risk prediction model. SETTING: Nine academic ICUs in Denmark. PARTICIPANTS: Development cohort of critically ill patients with AKI at ICU admission from the Procalcitonin and Survival Study cohort (n = 568), validation cohort...

  12. Cultural Resource Predictive Modeling

    Science.gov (United States)

    2017-10-01

refining formal, inductive predictive models is the quality of the archaeological and environmental data. To build models efficiently, relevant...geomorphology, and historic information. Lessons Learned: The original model was focused on the identification of prehistoric resources. This...system but uses predictive modeling informally. For example, there is no probability for buried archaeological deposits on the Burton Mesa, but there is

  13. Predictive modeling of complications.

    Science.gov (United States)

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.

  14. Archaeological predictive model set.

    Science.gov (United States)

    2015-03-01

    This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to : develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...

  15. Critical Features of Fragment Libraries for Protein Structure Prediction

    Science.gov (United States)

    dos Santos, Karina Baptista

    2017-01-01

    The use of fragment libraries is a popular approach among protein structure prediction methods and has proven to substantially improve the quality of predicted structures. However, some vital aspects of a fragment library that influence the accuracy of modeling a native structure remain to be determined. This study investigates some of these features. Particularly, we analyze the effect of using secondary structure prediction guiding fragments selection, different fragments sizes and the effect of structural clustering of fragments within libraries. To have a clearer view of how these factors affect protein structure prediction, we isolated the process of model building by fragment assembly from some common limitations associated with prediction methods, e.g., imprecise energy functions and optimization algorithms, by employing an exact structure-based objective function under a greedy algorithm. Our results indicate that shorter fragments reproduce the native structure more accurately than the longer. Libraries composed of multiple fragment lengths generate even better structures, where longer fragments show to be more useful at the beginning of the simulations. The use of many different fragment sizes shows little improvement when compared to predictions carried out with libraries that comprise only three different fragment sizes. Models obtained from libraries built using only sequence similarity are, on average, better than those built with a secondary structure prediction bias. However, we found that the use of secondary structure prediction allows greater reduction of the search space, which is invaluable for prediction methods. The results of this study can be critical guidelines for the use of fragment libraries in protein structure prediction. PMID:28085928
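The isolation experiment described above (greedy assembly against an exact native-structure objective) can be mimicked with a toy stand-in in which "structures" are strings over a 4-letter backbone alphabet, the library holds random candidate fragments per position, and the objective is identity to a known native string. All of this is invented for illustration; it probes only the fragment-length effect the study reports.

```python
import random

# Toy greedy fragment assembly with an exact structure-based objective:
# shorter fragments recover the "native" string better than longer ones,
# mirroring the study's finding. Alphabet, lengths, and library size are
# arbitrary illustrative choices.
random.seed(1)
ALPHABET = "HETC"                                   # helix/strand/turn/coil states
NATIVE = "".join(random.choice(ALPHABET) for _ in range(36))

def random_library(frag_len, n_frags=20):
    """A library of random candidate fragments for every start position."""
    return {i: ["".join(random.choice(ALPHABET) for _ in range(frag_len))
                for _ in range(n_frags)]
            for i in range(len(NATIVE) - frag_len + 1)}

def greedy_assemble(frag_len):
    """Tile the chain left to right, keeping the best-scoring fragment each time."""
    lib = random_library(frag_len)
    model = list("?" * len(NATIVE))
    for i in range(0, len(NATIVE), frag_len):
        best = max(lib[i], key=lambda f: sum(a == b for a, b in zip(f, NATIVE[i:])))
        model[i:i + frag_len] = best
    return sum(a == b for a, b in zip(model, NATIVE)) / len(NATIVE)

def avg_recovery(frag_len, reps=10):
    return sum(greedy_assemble(frag_len) for _ in range(reps)) / reps

short, long_ = avg_recovery(3), avg_recovery(9)
print(f"native recovery: 3-mers {short:.2f}, 9-mers {long_:.2f}")
```

The intuition carries over: per position, the best of a fixed number of candidates matches a short native stretch far more often than a long one, so short fragments reproduce the target more faithfully under the same search budget.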

  16. Critical Features of Fragment Libraries for Protein Structure Prediction.

    Directory of Open Access Journals (Sweden)

    Raphael Trevizani


  17. Critical Features of Fragment Libraries for Protein Structure Prediction.

    Science.gov (United States)

    Trevizani, Raphael; Custódio, Fábio Lima; Dos Santos, Karina Baptista; Dardenne, Laurent Emmanuel

    2017-01-01


  18. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
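A stochastic simulator of the kind the abstract describes (samples that honour both the statistical speed distribution and the correlations) can be sketched with a latent AR(1) Gaussian process mapped through an inverse-CDF transform onto a Weibull marginal. The Weibull parameters and the hourly autocorrelation below are invented placeholders, not the Goldstone statistics.

```python
import numpy as np
from math import erf, sqrt

# Sketch of a correlated wind-speed simulator: AR(1) Gaussian latent process,
# then inverse-CDF mapping to a Weibull speed distribution. Parameter values
# are illustrative assumptions.
rng = np.random.default_rng(42)
k, c, rho = 2.0, 7.0, 0.85        # Weibull shape, scale (m/s), hourly lag-1 corr.

n = 24 * 365                      # one year of hourly samples
z = np.empty(n)
z[0] = rng.normal()
for t in range(1, n):
    z[t] = rho * z[t - 1] + sqrt(1 - rho**2) * rng.normal()

# Gaussian quantiles -> Weibull-distributed speeds (inverse-CDF transform).
u = np.array([0.5 * (1 + erf(v / sqrt(2))) for v in z])
u = np.clip(u, 1e-12, 1 - 1e-12)          # guard against u == 1 in float
speeds = c * (-np.log1p(-u)) ** (1 / k)   # Weibull inverse CDF

lag1 = np.corrcoef(speeds[:-1], speeds[1:])[0, 1]
print(f"mean speed {speeds.mean():.2f} m/s, hourly lag-1 correlation {lag1:.2f}")
```

The first (interim) model in the abstract corresponds to dropping the AR(1) step and drawing `u` independently each hour; the full stochastic model keeps the correlation.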

  19. Zephyr - the prediction models

    DEFF Research Database (Denmark)

    Nielsen, Torben Skov; Madsen, Henrik; Nielsen, Henrik Aalborg

    2001-01-01

This paper briefly describes new models and methods for predicting the wind power output from wind farms. The system is being developed in a project which has the research organization Risø and the department of Informatics and Mathematical Modelling (IMM) as the modelling team and all the Danish utilities as partners and users. The new models are evaluated for five wind farms in Denmark as well as one wind farm in Spain. It is shown that the predictions based on conditional parametric models are superior to the predictions obtained by state-of-the-art parametric models.

  20. Inverse and Predictive Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Syracuse, Ellen Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-27

The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple – one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions – to the complex – multidimensional models that are constrained by several types of data and result in more accurate predictions. While team members typically build models of geophysical characteristics of Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  1. A novel modeling to predict the critical current behavior of Nb3Sn PIT strand under transverse load based on a scaling law and Finite Element Analysis

    CERN Document Server

    Wang, Tiening; Takayasu, Makoto; Bordini, Bernardo

    2014-01-01

    Superconducting Nb3Sn Powder-In-Tube (PIT) strands could be used for the superconducting magnets of the next generation Large Hadron Collider. The strands are cabled into the typical flat Rutherford cable configuration. During the assembly of a magnet and its operation the strands experience not only longitudinal but also transverse load due to the pre-compression applied during the assembly and the Lorentz load felt when the magnets are energized. To properly design the magnets and guarantee their safe operation, mechanical load effects on the strand superconducting properties are studied extensively; particularly, many scaling laws based on tensile load experiments have been established to predict the critical current dependence on strain. However, the dependence of the superconducting properties on transverse load has not been extensively studied so far. One of the reasons is that transverse loading experiments are difficult to conduct due to the small diameter of the strand (about 1 mm) and the data curre...

  2. Predicting speech intelligibility in noise for hearing-critical jobs

    Science.gov (United States)

    Soli, Sigfrid D.; Laroche, Chantal; Giguere, Christian

    2003-10-01

Many jobs require auditory abilities such as speech communication, sound localization, and sound detection. An employee for whom these abilities are impaired may constitute a safety risk for himself or herself, for fellow workers, and possibly for the general public. A number of methods have been used to predict these abilities from diagnostic measures of hearing (e.g., the pure-tone audiogram); however, these methods have not proved to be sufficiently accurate for predicting performance in the noise environments where hearing-critical jobs are performed. We have taken an alternative and potentially more accurate approach. A direct measure of speech intelligibility in noise, the Hearing in Noise Test (HINT), is instead used to screen individuals. The screening criteria are validated by establishing the empirical relationship between the HINT score and the auditory abilities of the individual, as measured in laboratory recreations of real-world workplace noise environments. The psychometric properties of the HINT enable screening of individuals with an acceptable amount of error. In this presentation, we will describe the predictive model and report the results of field measurements and laboratory studies used to provide empirical validation of the model. [Work supported by Fisheries and Oceans Canada.]

  3. Prediction of critical thinking disposition based on mentoring among ...

    African Journals Online (AJOL)

    The results of study showed that there was a significantly positive correlation between Mentoring and Critical thinking disposition among faculty members. The findings showed that 67% of variance of critical thinking disposition was defined by predictive variables. The faculty members evaluated themselves in all mentoring ...

  4. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  5. To develop a regional ICU mortality prediction model during the first 24 h of ICU admission utilizing MODS and NEMS with six other independent variables from the Critical Care Information System (CCIS) Ontario, Canada.

    Science.gov (United States)

    Kao, Raymond; Priestap, Fran; Donner, Allan

    2016-01-01

Intensive care unit (ICU) scoring systems or prediction models evolved to meet the desire of clinical and administrative leaders to assess the quality of care provided by their ICUs. The Critical Care Information System (CCIS) is province-wide data information for all Ontario, Canada level 3 and level 2 ICUs collected for this purpose. With the dataset, we developed a multivariable logistic regression ICU mortality prediction model for the first 24 h of ICU admission utilizing explanatory variables including the two validated scores, Multiple Organ Dysfunction Score (MODS) and Nine Equivalents of Nursing Manpower Use Score (NEMS), followed by the variables age, sex, readmission to the ICU during the same hospital stay, admission diagnosis, source of admission, and the modified Charlson Co-morbidity Index (CCI) collected through the hospital health records. This study is a single-center retrospective cohort review of 8822 records from the Critical Care Trauma Centre (CCTC) and Medical-Surgical Intensive Care Unit (MSICU) of London Health Sciences Centre (LHSC), Ontario, Canada between 1 Jan 2009 and 30 Nov 2012. Multivariable logistic regression on the training dataset (n = 4321) was used to develop the model, which was validated by bootstrapping on the testing dataset (n = 4501). Discrimination, calibration, and overall model performance were also assessed. The predictors significantly associated with ICU mortality included: age (p  0.31). The overall optimism of the estimation between the training and testing dataset, ΔAUC = 0.003, indicates a stable prediction model. This study demonstrates that CCIS data available after the first 24 h of ICU admission at LHSC can be used to create a robust mortality prediction model with acceptable fit statistics and internal validity for valid benchmarking and monitoring of ICU performance.
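The modelling recipe above (multivariable logistic regression, apparent discrimination, then bootstrap-estimated optimism) can be sketched on synthetic data. The covariates ("MODS"-like, "NEMS"-like, age) and all coefficients below are invented stand-ins, not CCIS values.

```python
import numpy as np

# Sketch: fit a logistic mortality model, measure apparent AUC, and estimate
# optimism by refitting on bootstrap resamples. All data are synthetic.
rng = np.random.default_rng(7)
n = 2000
X = np.column_stack([rng.normal(8, 3, n),     # severity score (illustrative)
                     rng.normal(30, 8, n),    # workload score (illustrative)
                     rng.normal(62, 15, n)])  # age
logit = X @ np.array([0.25, 0.05, 0.03]) - 6.0
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(float)

def fit(X, y, iters=25):
    """Newton-Raphson maximum likelihood for logistic regression."""
    Xd = np.column_stack([np.ones(len(X)), X])
    b = np.zeros(Xd.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-Xd @ b))
        H = (Xd * (p * (1 - p))[:, None]).T @ Xd
        b = b + np.linalg.solve(H, Xd.T @ (y - p))
    return b

def auc(b, X, y):
    """Rank-based (Mann-Whitney) estimate of the c-statistic."""
    s = np.column_stack([np.ones(len(X)), X]) @ b
    r = s.argsort().argsort() + 1.0
    n1 = y.sum()
    return (r[y == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * (len(y) - n1))

b = fit(X, y)
apparent = auc(b, X, y)
opt = []
for _ in range(30):                            # bootstrap optimism estimate
    i = rng.integers(0, n, n)
    bb = fit(X[i], y[i])
    opt.append(auc(bb, X[i], y[i]) - auc(bb, X, y))
optimism = float(np.mean(opt))
print(f"apparent AUC {apparent:.3f}, optimism {optimism:.4f}, "
      f"corrected {apparent - optimism:.3f}")
```

A small optimism (like the ΔAUC = 0.003 reported) is what signals a stable model: performance barely degrades when the fitted model is scored off its own resample.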

  6. Predictive models for arteriovenous fistula maturation.

    Science.gov (United States)

    Al Shakarchi, Julien; McGrogan, Damian; Van der Veer, Sabine; Sperrin, Matthew; Inston, Nicholas

    2016-05-07

Haemodialysis (HD) is a lifeline therapy for patients with end-stage renal disease (ESRD). A critical factor in the survival of renal dialysis patients is the surgical creation of vascular access, and international guidelines recommend arteriovenous fistulas (AVF) as the gold standard of vascular access for haemodialysis. Despite this, AVFs have been associated with high failure rates. Although risk factors for AVF failure have been identified, their utility for predicting AVF failure through predictive models remains unclear. The objectives of this review are to systematically and critically assess the methodology and reporting of studies developing prognostic predictive models for AVF outcomes and to assess their suitability for clinical practice. Electronic databases were searched for studies reporting prognostic predictive models for AVF outcomes. Dual review was conducted to identify studies that reported on the development or validation of a model constructed to predict AVF outcome following creation. Data were extracted on study characteristics, risk predictors, statistical methodology, model type, and validation process. We included four different studies reporting five different predictive models. Parameters common to all scoring systems were age and cardiovascular disease. This review has found a small number of predictive models in vascular access. The disparity between the studies limits the development of a unified predictive model.

  7. Safety prediction for basic components of safety-critical software based on static testing

    International Nuclear Information System (INIS)

    Son, H.S.; Seong, P.H.

    2000-01-01

    The purpose of this work is to develop a safety prediction method, with which we can predict the risk of software components based on static testing results at the early development stage. The predictive model combines the major factor with the quality factor for the components, which are calculated based on the measures proposed in this work. The application to a safety-critical software system demonstrates the feasibility of the safety prediction method. (authors)

  8. Safety prediction for basic components of safety critical software based on static testing

    International Nuclear Information System (INIS)

    Son, H.S.; Seong, P.H.

    2001-01-01

    The purpose of this work is to develop a safety prediction method, with which we can predict the risk of software components based on static testing results at the early development stage. The predictive model combines the major factor with the quality factor for the components, both of which are calculated based on the measures proposed in this work. The application to a safety-critical software system demonstrates the feasibility of the safety prediction method. (authors)

  9. Predictions of the marviken subcooled critical mass flux using the critical flow scaling parameters

    Energy Technology Data Exchange (ETDEWEB)

    Park, Choon Kyung; Chun, Se Young; Cho, Seok; Yang, Sun Ku; Chung, Moon Ki [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1997-12-31

A total of 386 critical flow data points from 19 of the 27 runs in the Marviken Test were selected and compared with the predictions by the correlations based on the critical flow scaling parameters. The results show that the critical mass flux in a very large diameter pipe can also be characterized by two scaling parameters: the discharge coefficient and the dimensionless subcooling (C_d,ref and ΔT*_sub). The agreement between the measured data and the predictions is excellent. 8 refs., 8 figs., 1 tab. (Author)

  10. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...

  11. Melanoma risk prediction models

    Directory of Open Access Journals (Sweden)

    Nikolić Jelena

    2014-01-01

Full Text Available Background/Aim. The lack of effective therapy for advanced stages of melanoma emphasizes the importance of preventive measures and screening of populations at risk. Identifying individuals at high risk should allow targeted screening and follow-up involving those who would benefit most. The aim of this study was to identify the most significant factors for melanoma prediction in our population and to create prognostic models for identification and differentiation of individuals at risk. Methods. This case-control study included 697 participants (341 patients and 356 controls) who underwent an extensive interview and skin examination in order to check risk factors for melanoma. Pairwise univariate statistical comparison was used for the coarse selection of the most significant risk factors. These factors were fed into logistic regression (LR) and alternating decision tree (ADT) prognostic models that were assessed for their usefulness in identification of patients at risk of developing melanoma. Validation of the LR model was done by the Hosmer-Lemeshow test, whereas the ADT was validated by 10-fold cross-validation. The achieved sensitivity, specificity, accuracy and AUC for both models were calculated. A melanoma risk score (MRS) based on the outcome of the LR model is presented. Results. The LR model showed that the following risk factors were associated with melanoma: sunbeds (OR = 4.018; 95% CI 1.724-9.366 for those that sometimes used sunbeds); solar damage of the skin (OR = 8.274; 95% CI 2.661-25.730 for those with severe solar damage); hair color (OR = 3.222; 95% CI 1.984-5.231 for light brown/blond hair); the number of common naevi (OR = 3.57; 95% CI 1.427-8.931 for over 100 naevi); the number of dysplastic naevi (OR = 2.672; 95% CI 1.572-4.540 for 1 to 10 dysplastic naevi; OR = 6.487; 95% CI 1.993-21.119 for more than 10); Fitzpatrick's phototype and the presence of congenital naevi.
Red hair, phototype I and large congenital naevi were
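A minimal sketch of how the reported odds ratios could be combined into a relative risk score. The study's intercept and reference-category coefficients are not given in the abstract, so this produces only an illustrative relative score (sum of log-odds-ratios), not the published MRS:

```python
import math

# Odds ratios reported in the abstract's logistic regression model.
# Reference categories are assumed to contribute zero to the score.
ODDS_RATIOS = {
    "sunbed_use_sometimes": 4.018,
    "severe_solar_damage": 8.274,
    "light_brown_or_blond_hair": 3.222,
    "over_100_common_naevi": 3.570,
    "1_to_10_dysplastic_naevi": 2.672,
    "over_10_dysplastic_naevi": 6.487,
}

def relative_risk_score(factors):
    """Sum of log odds ratios for the factors present (illustrative only)."""
    return sum(math.log(ODDS_RATIOS[f]) for f in factors)

low = relative_risk_score([])
high = relative_risk_score(["severe_solar_damage", "over_10_dysplastic_naevi"])
print(low, high)  # the more risk factors present, the higher the score
```

An absolute probability would additionally require the fitted intercept, which the abstract does not report.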

  12. Critical evidence for the prediction error theory in associative learning.

    Science.gov (United States)

    Terao, Kanta; Matsumoto, Yukihisa; Mizunami, Makoto

    2015-03-10

    In associative learning in mammals, it is widely accepted that the discrepancy, or error, between actual and predicted reward determines whether learning occurs. Complete evidence for the prediction error theory, however, has not been obtained in any learning systems: Prediction error theory stems from the finding of a blocking phenomenon, but blocking can also be accounted for by other theories, such as the attentional theory. We demonstrated blocking in classical conditioning in crickets and obtained evidence to reject the attentional theory. To obtain further evidence supporting the prediction error theory and rejecting alternative theories, we constructed a neural model to match the prediction error theory, by modifying our previous model of learning in crickets, and we tested a prediction from the model: the model predicts that pharmacological intervention of octopaminergic transmission during appetitive conditioning impairs learning but not formation of reward prediction itself, and it thus predicts no learning in subsequent training. We observed such an "auto-blocking", which could be accounted for by the prediction error theory but not by other competing theories of blocking. This study unambiguously demonstrates the validity of the prediction error theory in associative learning.

  13. Critical review of precompound models

    International Nuclear Information System (INIS)

    Jahn, H.

    1984-01-01

    It is shown that the desired predictive capability of much of the commonly used precompound formalism to calculate nuclear reaction cross-sections is seriously reduced by excessive arbitrariness in the choice of parameters. The origin of this arbitrariness is analysed in detail and improvements or alternatives are discussed. (author)

  14. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    In medical statistics, many alternative strategies are available for building a prediction model based on training data. Prediction models are routinely compared by means of their prediction performance in independent validation data. If only one data set is available for training and validation......, then rival strategies can still be compared based on repeated bootstraps of the same data. Often, however, the overall performance of rival strategies is similar and it is thus difficult to decide for one model. Here, we investigate the variability of the prediction models that results when the same...... to distinguish rival prediction models with similar prediction performances. Furthermore, on the subject level a confidence score may provide useful supplementary information for new patients who want to base a medical decision on predicted risk. The ideas are illustrated and discussed using data from cancer...
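The repeated-bootstrap comparison of rival model-building strategies described above can be sketched as follows. The data, the two toy "strategies" (a constant mean predictor versus a simple linear fit), and all names are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: y depends weakly linearly on x.
n = 200
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)

def strategy_mean(xtr, ytr):
    m = ytr.mean()
    return lambda xx: np.full_like(xx, m)

def strategy_linear(xtr, ytr):
    slope, intercept = np.polyfit(xtr, ytr, 1)
    return lambda xx: slope * xx + intercept

def bootstrap_mse(strategy, B=200):
    """Out-of-bag prediction error over repeated bootstraps of the same data."""
    errs = []
    for _ in range(B):
        idx = rng.integers(0, n, n)                 # bootstrap sample
        oob = np.setdiff1d(np.arange(n), idx)       # left-out observations
        model = strategy(x[idx], y[idx])
        errs.append(np.mean((y[oob] - model(x[oob])) ** 2))
    return np.array(errs)

e_mean, e_lin = bootstrap_mse(strategy_mean), bootstrap_mse(strategy_linear)
print(e_mean.mean(), e_lin.mean())  # the better strategy has lower average error
```

The spread of the per-bootstrap errors, not only their mean, is what makes similar strategies hard to distinguish, which is the paper's motivation for a confidence score.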

  15. Consideration of critical heat flux margin prediction by subcooled or low quality critical heat flux correlations

    International Nuclear Information System (INIS)

    Hejzlar, P.; Todreas, N.E.

    1996-01-01

    Accurate prediction of the critical heat flux (CHF) margin, a key design parameter in a variety of cooling and heating systems, is of high importance. These margins are, for the low quality region, typically expressed in terms of critical heat flux ratios using the direct substitution method. Using a simple example of a heated tube, it is shown that CHF correlations of a certain type often used to predict CHF margins, expressed in this manner, may yield different results, strongly dependent on the correlation in use. It is argued that the application of the heat balance method to such correlations, which leads to expressing the CHF margins in terms of the critical power ratio, may be more appropriate. (orig.)

  16. Developing neuronal networks: self-organized criticality predicts the future.

    Science.gov (United States)

    Pu, Jiangbo; Gong, Hui; Li, Xiangning; Luo, Qingming

    2013-01-01

    Self-organized criticality emerging in neural activity is one of the key concepts used to describe the formation and function of developing neuronal networks. The relationship between critical dynamics and neural development is both theoretically and experimentally appealing. However, whereas it is well known that cortical networks exhibit a rich repertoire of activity patterns at different stages during in vitro maturation, the dynamics of activity patterns across the entire course of neural development still remains unclear. Here we show that a series of metastable network states emerged in the developing and "aging" process of hippocampal networks cultured from dissociated rat neurons. The unidirectional sequence of state transitions could only be observed in networks showing power-law scaling of distributed neuronal avalanches. Our data suggest that self-organized criticality may guide spontaneous activity into a sequential succession of homeostatically regulated transient patterns during development, which may help to predict the tendency of neural development at early ages.

  17. Critical shoulder angle combined with age predict five shoulder pathologies: a retrospective analysis of 1000 cases.

    Science.gov (United States)

    Heuberer, Philipp R; Plachel, Fabian; Willinger, Lukas; Moroder, Philipp; Laky, Brenda; Pauzenberger, Leo; Lomoschitz, Fritz; Anderl, Werner

    2017-06-15

    Acromial morphology has previously been defined as a risk factor for some shoulder pathologies. Yet, study results are inconclusive and not all major shoulder diseases have been sufficiently investigated. Thus, the aim of the present study was to analyze the predictive value of three radiological parameters, the critical shoulder angle, acromion index, and lateral acromion angle, in symptomatic patients with cuff tear arthropathy, glenohumeral osteoarthritis, rotator cuff tear, impingement, or tendinitis calcarea. A total of 1000 patients' standardized true-anteroposterior radiographs were retrospectively assessed. Receiver-operating curve analyses and multinomial logistic regression were used to examine the association between shoulder pathologies and acromion morphology. The prediction model was derived from a development cohort and applied to a validation cohort, and its performance was statistically evaluated. The majority of radiological measurements differed significantly between shoulder pathologies, but the critical shoulder angle was overall a better parameter for predicting and distinguishing between the different pathologies than the acromion index or lateral acromion angle. Typical critical shoulder angle-age patterns for the different shoulder pathologies could be detected. Patients diagnosed with rotator cuff tears had the highest, whereas patients with osteoarthritis had the lowest, critical shoulder angle. The youngest patients were in the tendinitis calcarea group and the oldest in the cuff tear arthropathy group. The present study showed that the critical shoulder angle and age, two easily assessable variables, adequately predict different shoulder pathologies in patients with shoulder complaints.
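For a single threshold-free summary, a receiver-operating-curve analysis like the one described reduces to the Mann-Whitney form of the AUC. A sketch with hypothetical critical-shoulder-angle values (not the study's data; the direction, higher angles in cuff tears than in osteoarthritis, follows the abstract):

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """Probability that a random positive case outranks a random negative
    case: the Mann-Whitney formulation of ROC AUC (ties count one half)."""
    pos = np.asarray(scores_pos, float)[:, None]
    neg = np.asarray(scores_neg, float)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return wins / (pos.size * neg.size)

# Hypothetical critical shoulder angles in degrees:
csa_cuff_tear = [38, 39, 36, 40, 37]
csa_osteoarthritis = [28, 30, 27, 31, 29]
print(auc(csa_cuff_tear, csa_osteoarthritis))  # → 1.0 on this perfectly separated toy data
```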

  18. Safety-Critical Java on a Time-predictable Processor

    DEFF Research Database (Denmark)

    Korsholm, Stephan Erbs; Schoeberl, Martin; Puffitsch, Wolfgang

    2015-01-01

    For real-time systems the whole execution stack needs to be time-predictable and analyzable for the worst-case execution time (WCET). This paper presents a time-predictable platform for safety-critical Java. The platform consists of (1) the Patmos processor, which is a time-predictable processor......; (2) a C compiler for Patmos with support for WCET analysis; (3) the HVM, which is a Java-to-C compiler; (4) the HVM-SCJ implementation which supports SCJ Level 0, 1, and 2 (for both single and multicore platforms); and (5) a WCET analysis tool. We show that real-time Java programs translated to C...... and compiled to a Patmos binary can be analyzed by the AbsInt aiT WCET analysis tool. To the best of our knowledge the presented system is the second WCET analyzable real-time Java system; and the first one on top of a RISC processor....

  1. Predictive Validation of an Influenza Spread Model

    Science.gov (United States)

    Hyder, Ayaz; Buckeridge, David L.; Leung, Brian

    2013-01-01

    Background Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. Methods and Findings We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998–1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and the epidemic peak week several weeks in advance with reasonable reliability, depending on the method of forecasting (static or dynamic). Conclusions Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive

  2. Bootstrap prediction and Bayesian prediction under misspecified models

    OpenAIRE

    Fushiki, Tadayoshi

    2005-01-01

    We consider a statistical prediction problem under misspecified models. In a sense, Bayesian prediction is an optimal prediction method when an assumed model is true. Bootstrap prediction is obtained by applying Breiman's `bagging' method to a plug-in prediction. Bootstrap prediction can be considered to be an approximation to the Bayesian prediction under the assumption that the model is true. However, in applications, there are frequently deviations from the assumed model. In this paper, bo...

  3. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA) as well as more elaborated...... principles as, e.g., wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy based prediction models are discussed and critically reviewed. Special attention is placed...... on underlying basic assumptions, such as diffuse fields, high modal overlap, resonant field being dominant, etc., and the consequences of these in terms of limitations in the theory and in the practical use of the models....

  4. Prediction models in complex terrain

    DEFF Research Database (Denmark)

    Marti, I.; Nielsen, Torben Skov; Madsen, Henrik

    2001-01-01

    The objective of the work is to investigate the performance of HIRLAM in complex terrain when used as input to energy production forecasting models, and to develop a statistical model to adapt HIRLAM predictions to the wind farm. The features of the terrain, especially the topography, influence...... the performance of HIRLAM, in particular with respect to wind predictions. To estimate the performance of the model, two spatial resolutions (0.5 Deg. and 0.2 Deg.) and different sets of HIRLAM variables were used to predict wind speed and energy production. The predictions of energy production for the wind farms...... are calculated using on-line measurements of power production as well as HIRLAM predictions as input, thus taking advantage of the auto-correlation which is present in the power production for shorter prediction horizons. Statistical models are used to describe the relationship between observed energy production...

  5. Predicting critical heat flux in slug flow regime of uniformly heated ...

    African Journals Online (AJOL)

    Numerical computation code (PWR-DNBP) has been developed to predict Critical Heat Flux (CHF) of forced convective flow of water in a vertical heated channel. The code was based on the liquid sub-layer model, with the assumption that CHF occurred when the liquid film thickness between the heated surface and vapour ...

  6. Are animal models predictive for humans?

    Directory of Open Access Journals (Sweden)

    Greek Ray

    2009-01-01

    Full Text Available Abstract It is one of the central aims of the philosophy of science to elucidate the meanings of scientific terms and also to think critically about their application. The focus of this essay is the scientific term predict and whether there is credible evidence that animal models, especially in toxicology and pathophysiology, can be used to predict human outcomes. Whether animals can be used to predict human response to drugs and other chemicals is apparently a contentious issue. However, when one empirically analyzes animal models using scientific tools, they fall far short of being able to predict human responses. This is not surprising considering what we have learned from fields such as evolutionary and developmental biology, gene regulation and expression, epigenetics, complexity theory, and comparative genomics.
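The essay's empirical point, that sensitivity and specificity alone do not guarantee predictive value, can be illustrated with Bayes' rule. The numbers below are hypothetical, not taken from the essay:

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Bayes' rule: P(outcome truly occurs | model predicts it occurs)."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical numbers: even a model with 90% sensitivity and 90% specificity
# has weak positive predictive value when the outcome is rare (1% prevalence).
ppv = positive_predictive_value(0.9, 0.9, 0.01)
print(round(ppv, 3))  # → 0.083, i.e. most positive predictions are false alarms
```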

  7. MODEL PREDICTIVE CONTROL FUNDAMENTALS

    African Journals Online (AJOL)

    2012-07-02

    Linear MPC: (1) uses a linear model ẋ = Ax + Bu; (2) a quadratic cost function F = xᵀQx + uᵀRu; (3) linear constraints Hx + Gu < 0; (4) solved as a quadratic program. Nonlinear MPC: (1) a nonlinear model ẋ = f(x, u); (2) a possibly nonquadratic cost function F(x, u); (3) nonlinear constraints h(x, u) < 0; (4) solved as a nonlinear program.
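The quadratic program behind linear MPC can be sketched in condensed form. The system (a toy discretised double integrator), weights, and horizon below are illustrative, and constraints are omitted so the QP has a closed-form solution:

```python
import numpy as np

# Toy double integrator, discretised with dt = 0.1 (illustrative system).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
Q = np.eye(2)            # state weight
R = np.array([[0.01]])   # input weight
N = 20                   # prediction horizon
n, m = 2, 1

# Condensed prediction matrices: x_k = A^k x0 + sum_{j<k} A^(k-1-j) B u_j.
Phi = np.vstack([np.linalg.matrix_power(A, k) for k in range(1, N + 1)])
Gam = np.zeros((N * n, N * m))
for k in range(N):
    for j in range(k + 1):
        Gam[k * n:(k + 1) * n, j * m:(j + 1) * m] = (
            np.linalg.matrix_power(A, k - j) @ B)

Qbar = np.kron(np.eye(N), Q)
Rbar = np.kron(np.eye(N), R)

def mpc_step(x0):
    """Solve the unconstrained QP and apply only the first input
    (receding-horizon principle)."""
    H = Gam.T @ Qbar @ Gam + Rbar
    f = Gam.T @ Qbar @ Phi @ x0
    u = np.linalg.solve(H, -f)
    return u[:m]

x = np.array([1.0, 0.0])
for _ in range(50):
    x = A @ x + B @ mpc_step(x)
print(np.linalg.norm(x))  # the controller regulates the state toward the origin
```

With inequality constraints included, the same H and f would be handed to a QP solver instead of `np.linalg.solve`.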

  8. A critical review of clarifier modelling

    DEFF Research Database (Denmark)

    Plósz, Benedek; Nopens, Ingmar; Rieger, Leiv

    This outline paper aims to provide a critical review of secondary settling tank (SST) modelling approaches used in current wastewater engineering and develop tools not yet applied in practice. We address the development of different tier models and experimental techniques in the field with a part...

  9. Modelling bankruptcy prediction models in Slovak companies

    Directory of Open Access Journals (Sweden)

    Kovacova Maria

    2017-01-01

    Full Text Available Intensive research by academics and practitioners has addressed models for bankruptcy prediction and credit risk management. In spite of numerous studies on forecasting bankruptcy using traditional statistical techniques (e.g. discriminant analysis and logistic regression) and early artificial intelligence models (e.g. artificial neural networks), there is a trend of transition to machine learning models (support vector machines, bagging, boosting, and random forest) to predict bankruptcy one year prior to the event. Comparing the performance of this unconventional approach with results obtained by discriminant analysis, logistic regression, and neural network application, it has been found that bagging, boosting, and random forest models outperform the other techniques, and that prediction accuracy in the testing sample improves when additional variables are included. On the other hand, the prediction accuracy of old and well-known bankruptcy prediction models is quite high. Therefore, we aim to analyse these older models on a dataset of Slovak companies to validate their prediction ability in specific conditions. Furthermore, these models will be remodelled according to new trends by calculating the influence of the elimination of selected variables on their overall prediction ability.
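As an example of the "old and well known" class of models the abstract refers to (the abstract does not name a specific one), Altman's 1968 Z-score for public manufacturing firms can be sketched as follows; the firm's figures are hypothetical:

```python
def altman_z(working_capital, retained_earnings, ebit, equity_market_value,
             sales, total_assets, total_liabilities):
    """Altman's (1968) Z-score for public manufacturing firms."""
    x1 = working_capital / total_assets
    x2 = retained_earnings / total_assets
    x3 = ebit / total_assets
    x4 = equity_market_value / total_liabilities
    x5 = sales / total_assets
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

def zone(z):
    # Classical cut-offs: distress below 1.81, safe above 2.99.
    if z < 1.81:
        return "distress"
    if z > 2.99:
        return "safe"
    return "grey"

# Hypothetical healthy firm (figures in thousands of EUR):
z = altman_z(working_capital=300, retained_earnings=400, ebit=400,
             equity_market_value=1800, sales=1500,
             total_assets=2000, total_liabilities=600)
print(round(z, 2), zone(z))  # → 3.67 safe
```

Eliminating a variable, as the study proposes, amounts to dropping one of the x-terms and observing the change in classification accuracy.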

  10. Melanoma Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing melanoma cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  11. Predictive models of moth development

    Science.gov (United States)

    Degree-day models link ambient temperature to insect life-stages, making such models valuable tools in integrated pest management. These models increase management efficacy by predicting pest phenology. In Wisconsin, the top insect pest of cranberry production is the cranberry fruitworm, Acrobasis v...
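A degree-day model of the kind described can be sketched with the simple averaging method. The base temperature and degree-day threshold below are illustrative, not the published cranberry fruitworm parameters:

```python
def degree_days(tmax, tmin, base):
    """Daily degree-days above a base temperature (simple averaging method)."""
    return max(0.0, (tmax + tmin) / 2.0 - base)

def predict_event_day(daily_temps, base, threshold):
    """Return the day index on which accumulated degree-days first reach
    the threshold (e.g. a predicted life-stage transition), or None."""
    total = 0.0
    for day, (tmax, tmin) in enumerate(daily_temps):
        total += degree_days(tmax, tmin, base)
        if total >= threshold:
            return day
    return None

# Hypothetical spring warm-up, (tmax, tmin) in degrees C:
temps = [(18, 8), (20, 10), (22, 12), (24, 14), (25, 15)]
print(predict_event_day(temps, base=10.0, threshold=30.0))  # → 4
```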

  12. Predictive Models and Computational Embryology

    Science.gov (United States)

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  13. Can Student Nurse Critical Thinking Be Predicted from Perceptions of Structural Empowerment within the Undergraduate, Pre-Licensure Learning Environment?

    Science.gov (United States)

    Caswell-Moore, Shelley P.

    2013-01-01

    The purpose of this study was to test a model using Rosabeth Kanter's theory (1977; 1993) of structural empowerment to determine if this model can predict student nurses' level of critical thinking. Major goals of nursing education are to cultivate graduates who can think critically with a keen sense of clinical judgment, and who can perform…

  14. A New Energy-Critical Plane Damage Parameter for Multiaxial Fatigue Life Prediction of Turbine Blades

    Directory of Open Access Journals (Sweden)

    Zheng-Yong Yu

    2017-05-01

    Full Text Available As one of the fracture-critical components of an aircraft engine, accurate life prediction of the turbine blade to disk attachment is significant for ensuring engine structural integrity and reliability. Fatigue failure of a turbine blade is often caused under multiaxial cyclic loadings at high temperatures. In this paper, considering different failure types, a new energy-critical plane damage parameter is proposed for multiaxial fatigue life prediction, and no extra fitted material constants are needed for practical applications. Moreover, three multiaxial models with maximum damage parameters on the critical plane are evaluated under tension-compression and tension-torsion loadings. Experimental data of GH4169 under proportional and non-proportional fatigue loadings and a case study of a turbine disk-blade contact system are introduced for model validation. Results show that model predictions by the Wang-Brown (WB) and Fatemi-Socie (FS) models with maximum damage parameters are conservative and acceptable. For the turbine disk-blade contact system, both of the proposed damage parameters and the Smith-Watson-Topper (SWT) model show reasonably acceptable correlations with its field number of flight cycles. However, life estimations of the turbine blade reveal that the definition of the maximum damage parameter is not reasonable for the WB model but effective for both the FS and SWT models.

  15. Predictions models with neural nets

    Directory of Open Access Journals (Sweden)

    Vladimír Konečný

    2008-01-01

    Full Text Available The contribution addresses the prediction of basic trends in economic indicators using neural networks. The problems involved include the choice of a suitable model and, consequently, the configuration of the neural nets, the choice of the neurons' computational functions, and the manner of prediction learning. The contribution presents two basic models that use the structure of multilayer neural nets and a way of determining their configuration. A simple rule is postulated for the training period of the neural net in order to obtain the most credible prediction. Experiments are carried out on real data on the evolution of the Kč/Euro exchange rate. The main reason for choosing this time series is its availability over a sufficiently long period. In the experiments, both basic kinds of prediction models with the most frequently used neuron functions are verified. The achieved prediction results are presented in both numerical and graphical form.
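A minimal sketch of such a multilayer-net predictor, trained by plain gradient descent to forecast one step ahead. The series is synthetic (standing in for exchange-rate data), and the lag, layer size, and learning rate are illustrative choices, not the paper's configuration rule:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "exchange-rate-like" series: smooth oscillation plus noise.
t = np.arange(300)
series = 25 + 2 * np.sin(2 * np.pi * t / 50) + 0.05 * rng.normal(size=t.size)

# Supervised pairs: predict x[t] from the previous `lag` values.
lag = 5
X = np.stack([series[i:i + lag] for i in range(len(series) - lag)])
y = series[lag:]

# Normalise for stable training.
mu, sd = series.mean(), series.std()
Xn, yn = (X - mu) / sd, (y - mu) / sd

# One hidden layer with tanh activation.
h = 8
W1 = 0.5 * rng.normal(size=(lag, h)); b1 = np.zeros(h)
W2 = 0.5 * rng.normal(size=(h, 1));  b2 = np.zeros(1)

def forward(inputs):
    a = np.tanh(inputs @ W1 + b1)
    return a, (a @ W2 + b2).ravel()

losses, lr = [], 0.05
for _ in range(500):
    a, pred = forward(Xn)
    err = pred - yn
    losses.append(np.mean(err ** 2))
    # Backpropagation (factor 2 from the squared error folded into lr).
    g2 = a.T @ err[:, None] / len(yn)
    da = err[:, None] @ W2.T * (1 - a ** 2)
    g1 = Xn.T @ da / len(yn)
    W2 -= lr * g2; b2 -= lr * err.mean()
    W1 -= lr * g1; b1 -= lr * da.mean(axis=0)

print(losses[0], losses[-1])  # training error should drop markedly
```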

  16. Posterior predictive checking of multiple imputation models.

    Science.gov (United States)

    Nguyen, Cattram D; Lee, Katherine J; Carlin, John B

    2015-07-01

    Multiple imputation is gaining popularity as a strategy for handling missing data, but there is a scarcity of tools for checking imputation models, a critical step in model fitting. Posterior predictive checking (PPC) has been recommended as an imputation diagnostic. PPC involves simulating "replicated" data from the posterior predictive distribution of the model under scrutiny. Model fit is assessed by examining whether the analysis from the observed data appears typical of results obtained from the replicates produced by the model. A proposed diagnostic measure is the posterior predictive "p-value", an extreme value of which (i.e., a value close to 0 or 1) suggests a misfit between the model and the data. The aim of this study was to evaluate the performance of the posterior predictive p-value as an imputation diagnostic. Using simulation methods, we deliberately misspecified imputation models to determine whether posterior predictive p-values were effective in identifying these problems. When estimating the regression parameter of interest, we found that more extreme p-values were associated with poorer imputation model performance, although the results highlighted that traditional thresholds for classical p-values do not apply in this context. A shortcoming of the PPC method was its reduced ability to detect misspecified models with increasing amounts of missing data. Despite the limitations of posterior predictive p-values, they appear to have a valuable place in the imputer's toolkit. In addition to automated checking using p-values, we recommend imputers perform graphical checks and examine other summaries of the test quantity distribution. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
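The posterior predictive check described above can be sketched as follows. The example deliberately misspecifies a Normal model for heavy-tailed data, and bootstrap plug-in estimates stand in for proper posterior draws, a simplifying assumption made here for brevity:

```python
import numpy as np

rng = np.random.default_rng(2)

# Observed data actually come from a heavy-tailed distribution...
observed = rng.standard_t(df=2, size=100)

# ...but the (misspecified) analysis model assumes a Normal.
def posterior_draws(data, ndraws):
    """Crude stand-in for posterior draws of (mu, sigma): bootstrap plug-ins."""
    for _ in range(ndraws):
        boot = rng.choice(data, size=data.size, replace=True)
        yield boot.mean(), boot.std()

# Test quantity sensitive to the tails: the sample maximum.
T_obs = observed.max()
ndraws, exceed = 500, 0
for m, s in posterior_draws(observed, ndraws):
    replicate = rng.normal(m, s, size=observed.size)  # replicated data
    exceed += replicate.max() >= T_obs
ppp = exceed / ndraws
print(ppp)  # a value near 0 or 1 flags a misfit between model and data
```

As the abstract notes, such p-values should be read as a screening diagnostic, alongside graphical checks of the replicated test-quantity distribution, rather than against classical significance thresholds.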

  17. Critically Important Object Security System Element Model

    Directory of Open Access Journals (Sweden)

    I. V. Khomyackov

    2012-03-01

    Full Text Available A stochastic model of a critically important object's security system element has been developed. The model includes a mathematical description of the security system element's properties and of the external influences. The state evolution of the security system element is described by a semi-Markov process with a finite number of states, defined by the semi-Markov matrix and the initial probability distribution of the semi-Markov process states. External influences are modelled as a Poisson stream with a given intensity.

  18. The high temperature Ising model is a critical percolation model

    NARCIS (Netherlands)

    Meester, R.W.J.; Camia, F.; Balint, A.

    2010-01-01

    We define a new percolation model by generalising the FK representation of the Ising model, and show that on the triangular lattice and at high temperatures, the critical point in the new model corresponds to the Ising model. Since the new model can be viewed as Bernoulli percolation on a random

  19. Critical Review of Membrane Bioreactor Models

    DEFF Research Database (Denmark)

    Naessens, W.; Maere, T.; Ratkovich, Nicolas Rios

    2012-01-01

    Membrane bioreactor technology has existed for a couple of decades but has not yet overwhelmed the market due to some serious drawbacks, of which operational cost due to fouling is the major contributor. Knowledge buildup and optimisation for such complex systems can heavily benefit from mathematical modelling. In this paper, the vast literature on hydrodynamic and integrated modelling in MBR is critically reviewed. Hydrodynamic models are used at different scales and focus mainly on fouling and only little on system design/optimisation. Integrated models also focus on fouling, although the ones...

  20. On the criticality of inferred models

    International Nuclear Information System (INIS)

    Mastromatteo, Iacopo; Marsili, Matteo

    2011-01-01

    Advanced inference techniques allow one to reconstruct a pattern of interaction from high dimensional data sets, from probing simultaneously thousands of units of extended systems—such as cells, neural tissues and financial markets. We focus here on the statistical properties of inferred models and argue that inference procedures are likely to yield models which are close to singular values of parameters, akin to critical points in physics where phase transitions occur. These are points where the response of physical systems to external perturbations, as measured by the susceptibility, is very large and diverges in the limit of infinite size. We show that the reparameterization invariant metrics in the space of probability distributions of these models (the Fisher information) are directly related to the susceptibility of the inferred model. As a result, distinguishable models tend to accumulate close to critical points, where the susceptibility diverges in infinite systems. This region is the one where the estimate of inferred parameters is most stable. In order to illustrate these points, we discuss inference of interacting point processes with application to financial data and show that sensible choices of observation time scales naturally yield models which are close to criticality

  1. Critically Tapered Wedges and Critical State Soil Mechanics: Porosity-based Pressure Prediction in the Nankai Accretionary Prism.

    Science.gov (United States)

    Flemings, P. B.; Saffer, D. M.

    2016-12-01

    We predict pore pressure from porosity measurements at ODP Sites 1174 and 808 in the Nankai Accretionary prism, offshore Japan. For a range of friction angles (5-30 degrees), we estimate that the pore pressure ratio (λ*) ranges from 0.5 to 0.8: the pore pressure supports 50% to 80% of the overburden. Higher friction angles result in higher pressures. For the majority of the scenarios, pressures within the prism parallel the lithostat and are greater than the pressures beneath it. Our results support previous qualitative interpretations at Nankai and elsewhere suggesting that lower porosity above the décollement than below reflects higher mean effective stress there. By coupling a critical state soil model (Modified Cam Clay), which describes porosity as a function of mean and deviator stress, with a stress model that considers the difference in stress states above and below the décollement, we quantitatively show that the prism porosities record significant overpressure despite their lower porosity. As the soil is consumed by the advancing prism, changes in both mean and shear stress drive overpressure generation. Even in the extreme case where only change in mean stress is considered (a vertical end cap model), significant overpressures are generated. The high pressures we predict require an effective friction coefficient (µb') at the décollement of 0.023-0.038. Assuming that the pore pressure at the décollement lies between the values we report for the wedge and the underthrusting sediments, these effective friction coefficients correspond to intrinsic friction coefficients of µb= 0.08-0.38 (f = 4.6 - 21°). These values are comparable to friction coefficients of 0.1-0.4 reported for clay-dominated fault zones in a wide range of settings. By coupling the critical wedge model with an appropriate constitutive model, we present a systematic approach to predict pressure in thrust systems.
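The pore pressure ratio λ* quoted above can be illustrated with the standard normalised-overpressure definition (assumed here, since the abstract does not spell it out); the values are hypothetical, not the Nankai measurements:

```python
def pore_pressure_ratio(u, sigma_v, u_h):
    """Normalised overpressure ratio:
    lambda* = (u - u_h) / (sigma_v - u_h),
    where u is pore pressure, sigma_v the total vertical (overburden) stress,
    and u_h the hydrostatic pressure."""
    return (u - u_h) / (sigma_v - u_h)

# Hypothetical values in MPa for a point below the seafloor:
sigma_v = 40.0   # overburden stress
u_h = 20.0       # hydrostatic pressure
u = 33.0         # predicted pore pressure
lam = pore_pressure_ratio(u, sigma_v, u_h)
print(lam)  # → 0.65: the fluid supports 65% of the overburden in excess of hydrostatic
```

Under this definition, λ* = 0 corresponds to hydrostatic conditions and λ* = 1 to lithostatic pore pressure, bracketing the 0.5-0.8 range the abstract reports.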

  2. Differential scanning calorimetry predicts the critical quality attributes of amorphous glibenclamide

    DEFF Research Database (Denmark)

    Mah, Pei T; Laaksonen, Timo; Rades, Thomas

    2015-01-01

    Selection of a crystallinity detection tool that is able to predict the critical quality attributes of amorphous formulations is imperative for the development of process control strategies. The main aim of this study was to determine the crystallinity detection tool that best predicts the critical...... quality attributes (i.e. physical stability and dissolution behaviour) of amorphous material. Glibenclamide (model drug) was milled for various durations using a planetary mill and characterised using Raman spectroscopy and differential scanning calorimetry (DSC). Physical stability studies upon storage...... and plateaus were reached after milling for certain periods of time (physical stability - 150min; dissolution - 120min). The residual crystallinity which was detectable with DSC (onset of crystallisation), but not with Raman spectroscopy, adversely affected the critical quality attributes of milled...

  3. Prediction of critical heat flux by a new local condition hypothesis

    International Nuclear Information System (INIS)

    Im, J. H.; Jun, K. D.; Sim, J. W.; Deng, Zhijian

    1998-01-01

    Critical Heat Flux (CHF) was predicted for a uniformly heated vertical round tube by a new local condition hypothesis which incorporates a local true steam quality. This model successfully overcame the difficulties in predicting the subcooled and quality CHF by the thermodynamic equilibrium quality. The local true steam quality is a dependent variable of the thermodynamic equilibrium quality at the exit and the quality at the Onset of Significant Vaporization (OSV). The exit thermodynamic equilibrium quality was obtained from the heat balance, and the quality at OSV was obtained from the Saha-Zuber correlation. In the past, CHF has been predicted by experimental correlations based on local or non-local condition hypotheses. This preliminary study showed that all the available world data on uniform CHF could be predicted by the model based on the local condition hypothesis.
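The abstract does not give the functional form linking the true quality to the exit equilibrium quality and the OSV quality. A commonly used profile-fit relation of this kind (due to Levy), with the Saha-Zuber correlation supplying the OSV point, is sketched below; the constants and correlation limits come from the open literature, not from this paper.

```python
import math

def saha_zuber_xd(q_flux, G, D, k, cp, h_fg):
    """Equilibrium quality at the onset of significant vaporization (Saha-Zuber).

    Pe < 70000: thermally controlled, Nu = 455; Pe >= 70000: St = 0.0065.
    All SI units; the returned x_d is negative (subcooled)."""
    Pe = G * D * cp / k
    if Pe < 70000.0:
        dT_sub = q_flux * D / (455.0 * k)
    else:
        dT_sub = q_flux / (0.0065 * G * cp)
    return -cp * dT_sub / h_fg

def true_quality(x_eq, x_d):
    """Levy-type profile fit: zero at OSV, approaching x_eq far downstream."""
    return x_eq - x_d * math.exp(x_eq / x_d - 1.0)
```

Note the two limits: `true_quality(x_d, x_d)` is exactly zero, and for `x_eq` well above zero the exponential term vanishes and the true quality approaches the equilibrium quality.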

  4. An improved statistical analysis for predicting the critical temperature and critical density with Gibbs ensemble Monte Carlo simulation.

    Science.gov (United States)

    Messerly, Richard A; Rowley, Richard L; Knotts, Thomas A; Wilding, W Vincent

    2015-09-14

    A rigorous statistical analysis is presented for Gibbs ensemble Monte Carlo simulations. This analysis reduces the uncertainty in the critical point estimate when compared with traditional methods found in the literature. Two different improvements are recommended due to the following results. First, the traditional propagation of error approach for estimating the standard deviations used in regression improperly weighs the terms in the objective function due to the inherent interdependence of the vapor and liquid densities. For this reason, an error model is developed to predict the standard deviations. Second, and most importantly, a rigorous algorithm for nonlinear regression is compared to the traditional approach of linearizing the equations and propagating the error in the slope and the intercept. The traditional regression approach can yield nonphysical confidence intervals for the critical constants. By contrast, the rigorous algorithm restricts the confidence regions to values that are physically sensible. To demonstrate the effect of these conclusions, a case study is performed to enhance the reliability of molecular simulations to resolve the n-alkane family trend for the critical temperature and critical density.
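The two relations being regressed in this kind of analysis are the density scaling law and the law of rectilinear diameters. The sketch below makes them concrete with a brute-force search over trial Tc values; it is a minimal illustration of the fitting problem, not the rigorous nonlinear-regression algorithm the paper develops.

```python
BETA = 0.326  # 3D Ising critical exponent for the density order parameter

def _linfit(x, y):
    """Simple least squares; returns (slope, intercept, sse)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    sse = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    return slope, intercept, sse

def fit_critical_point(T, rho_l, rho_v, Tc_grid):
    """Estimate (Tc, rho_c) from vapor-liquid coexistence densities using
    the scaling law              rho_l - rho_v = A * (1 - T/Tc)**BETA
    and the law of rectilinear diameters
                                 (rho_l + rho_v)/2 = rho_c + B * (Tc - T).
    For each trial Tc both relations become linear fits; the trial with
    the smallest total residual wins."""
    best = None
    for Tc in Tc_grid:
        if Tc <= max(T):
            continue
        s = [(1.0 - t / Tc) ** BETA for t in T]
        diff = [l - v for l, v in zip(rho_l, rho_v)]
        _, _, sse1 = _linfit(s, diff)
        mean_rho = [(l + v) / 2.0 for l, v in zip(rho_l, rho_v)]
        _, rho_c, sse2 = _linfit([Tc - t for t in T], mean_rho)
        if best is None or sse1 + sse2 < best[0]:
            best = (sse1 + sse2, Tc, rho_c)
    return best[1], best[2]
```

With synthetic coexistence data generated from known constants, the search recovers the critical temperature and density.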

  5. Critical assessment of methods of protein structure prediction (CASP)-round IX

    KAUST Repository

    Moult, John

    2011-01-01

    This article is an introduction to the special issue of the journal PROTEINS, dedicated to the ninth Critical Assessment of Structure Prediction (CASP) experiment to assess the state of the art in protein structure modeling. The article describes the conduct of the experiment, the categories of prediction included, and outlines the evaluation and assessment procedures. Methods for modeling protein structure continue to advance, although at a more modest pace than in the early CASP experiments. CASP developments of note are indications of improvement in model accuracy for some classes of target, an improved ability to choose the most accurate of a set of generated models, and evidence of improvement in accuracy for short "new fold" models. In addition, a new analysis of regions of models not derivable from the most obvious template structure has revealed better performance than expected.

  6. Critical analysis of algebraic collective models

    International Nuclear Information System (INIS)

    Moshinsky, M.

    1986-01-01

    The author understands by algebraic collective models all those based on specific Lie algebras, whether the latter are suggested through simple shell model considerations, as in the case of the Interacting Boson Approximation (IBA), or have a detailed microscopic foundation like the symplectic model. To analyze these models critically, it is convenient to take a simple conceptual example of them in which all steps can be implemented analytically or through elementary numerical analysis. In this note he takes as an example the symplectic model in a two-dimensional space, i.e., based on an sp(4,R) Lie algebra, and shows how through its complete discussion we can get a clearer understanding of the structure of algebraic collective models of nuclei. In particular he discusses the association of Hamiltonians, related to maximal subalgebras of our basic Lie algebra, with specific types of spectra, and the connections between spectra and shapes.

  7. Prediction Models for Dynamic Demand Response

    Energy Technology Data Exchange (ETDEWEB)

    Aman, Saima; Frincu, Marc; Chelmis, Charalampos; Noor, Muhammad; Simmhan, Yogesh; Prasanna, Viktor K.

    2015-11-02

    As Smart Grids move closer to dynamic curtailment programs, Demand Response (DR) events will become necessary not only on fixed time intervals and weekdays predetermined by static policies, but also during changing decision periods and weekends to react to real-time demand signals. Unique challenges arise in this context vis-a-vis demand prediction and curtailment estimation and the transformation of such tasks into an automated, efficient dynamic demand response (D2R) process. While existing work has concentrated on increasing the accuracy of prediction models for DR, there is a lack of studies for prediction models for D2R, which we address in this paper. Our first contribution is the formal definition of D2R, and the description of its challenges and requirements. Our second contribution is a feasibility analysis of very-short-term prediction of electricity consumption for D2R over a diverse, large-scale dataset that includes both small residential customers and large buildings. Our third, and major contribution is a set of insights into the predictability of electricity consumption in the context of D2R. Specifically, we focus on prediction models that can operate at a very small data granularity (here 15-min intervals), for both weekdays and weekends - all conditions that characterize scenarios for D2R. We find that short-term time series and simple averaging models used by Independent Service Operators and utilities achieve superior prediction accuracy. We also observe that workdays are more predictable than weekends and holidays. Also, smaller customers have large variation in consumption and are less predictable than larger buildings. Key implications of our findings are that better models are required for small customers and for non-workdays, both of which are critical for D2R. Also, prediction models require just a few days' worth of data indicating that small amounts of
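A "simple averaging model" of the kind this record finds competitive can be sketched as: forecast each 15-min interval as the mean of the same interval over the last few days. This is our own minimal version of the idea, not the authors' code.

```python
from statistics import mean

INTERVALS_PER_DAY = 96  # 24 h at 15-min granularity

def averaging_forecast(history, n_days=3):
    """Forecast the next day's 96-interval load profile as the per-interval
    mean of the last n_days days in `history` (a flat list of 15-min readings)."""
    assert len(history) >= n_days * INTERVALS_PER_DAY, "need more history"
    days = [history[len(history) - (d + 1) * INTERVALS_PER_DAY:
                    len(history) - d * INTERVALS_PER_DAY]
            for d in range(n_days)]
    return [mean(day[i] for day in days) for i in range(INTERVALS_PER_DAY)]
```

A refinement consistent with the paper's workday/non-workday finding would be to average only over comparable day types.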

  8. What do saliency models predict?

    Science.gov (United States)

    Koehler, Kathryn; Guo, Fei; Zhang, Sheng; Eckstein, Miguel P.

    2014-01-01

    Saliency models have been frequently used to predict eye movements made during image viewing without a specified task (free viewing). Use of a single image set to systematically compare free viewing to other tasks has never been performed. We investigated the effect of task differences on the ability of three models of saliency to predict the performance of humans viewing a novel database of 800 natural images. We introduced a novel task where 100 observers made explicit perceptual judgments about the most salient image region. Other groups of observers performed a free viewing task, saliency search task, or cued object search task. Behavior on the popular free viewing task was not best predicted by standard saliency models. Instead, the models most accurately predicted the explicit saliency selections and eye movements made while performing saliency judgments. Observers' fixations varied similarly across images for the saliency and free viewing tasks, suggesting that these two tasks are related. The variability of observers' eye movements was modulated by the task (lowest for the object search task and greatest for the free viewing and saliency search tasks) as well as the clutter content of the images. Eye movement variability in saliency search and free viewing might be also limited by inherent variation of what observers consider salient. Our results contribute to understanding the tasks and behavioral measures for which saliency models are best suited as predictors of human behavior, the relationship across various perceptual tasks, and the factors contributing to observer variability in fixational eye movements. PMID:24618107

  9. Critical review of glass performance modeling

    International Nuclear Information System (INIS)

    Bourcier, W.L.

    1994-07-01

    Borosilicate glass is to be used for permanent disposal of high-level nuclear waste in a geologic repository. Mechanistic chemical models are used to predict the rate at which radionuclides will be released from the glass under repository conditions. The most successful and useful of these models link reaction path geochemical modeling programs with a glass dissolution rate law that is consistent with transition state theory. These models have been used to simulate several types of short-term laboratory tests of glass dissolution and to predict the long-term performance of the glass in a repository. Although mechanistically based, the current models are limited by a lack of unambiguous experimental support for some of their assumptions. The most severe problem of this type is the lack of an existing validated mechanism that controls long-term glass dissolution rates. Current models can be improved by performing carefully designed experiments and using the experimental results to validate the rate-controlling mechanisms implicit in the models. These models should be supported with long-term experiments to be used for model validation. The mechanistic basis of the models should be explored by using modern molecular simulations such as molecular orbital and molecular dynamics to investigate both the glass structure and its dissolution process

  10. Critical review of glass performance modeling

    Energy Technology Data Exchange (ETDEWEB)

    Bourcier, W.L. [Lawrence Livermore National Lab., CA (United States)

    1994-07-01

    Borosilicate glass is to be used for permanent disposal of high-level nuclear waste in a geologic repository. Mechanistic chemical models are used to predict the rate at which radionuclides will be released from the glass under repository conditions. The most successful and useful of these models link reaction path geochemical modeling programs with a glass dissolution rate law that is consistent with transition state theory. These models have been used to simulate several types of short-term laboratory tests of glass dissolution and to predict the long-term performance of the glass in a repository. Although mechanistically based, the current models are limited by a lack of unambiguous experimental support for some of their assumptions. The most severe problem of this type is the lack of an existing validated mechanism that controls long-term glass dissolution rates. Current models can be improved by performing carefully designed experiments and using the experimental results to validate the rate-controlling mechanisms implicit in the models. These models should be supported with long-term experiments to be used for model validation. The mechanistic basis of the models should be explored by using modern molecular simulations such as molecular orbital and molecular dynamics to investigate both the glass structure and its dissolution process.

  11. Posterior Predictive Model Checking for Multidimensionality in Item Response Theory

    Science.gov (United States)

    Levy, Roy; Mislevy, Robert J.; Sinharay, Sandip

    2009-01-01

    If data exhibit multidimensionality, key conditional independence assumptions of unidimensional models do not hold. The current work pursues posterior predictive model checking, a flexible family of model-checking procedures, as a tool for criticizing models due to unaccounted for dimensions in the context of item response theory. Factors…

  12. Lung Injury Prediction Score Is Useful in Predicting Acute Respiratory Distress Syndrome and Mortality in Surgical Critical Care Patients

    Directory of Open Access Journals (Sweden)

    Zachary M. Bauman

    2015-01-01

    Full Text Available Background. Lung injury prediction score (LIPS is valuable for early recognition of ventilated patients at high risk for developing acute respiratory distress syndrome (ARDS. This study analyzes the value of LIPS in predicting ARDS and mortality among ventilated surgical patients. Methods. IRB approved, prospective observational study including all ventilated patients admitted to the surgical intensive care unit at a single tertiary center over 6 months. ARDS was defined using the Berlin criteria. LIPS were calculated for all patients and analyzed. Logistic regression models evaluated the ability of LIPS to predict development of ARDS and mortality. A receiver operator characteristic (ROC curve demonstrated the optimal LIPS value to statistically predict development of ARDS. Results. 268 ventilated patients were observed; 141 developed ARDS and 127 did not. The average LIPS for patients who developed ARDS was 8.8±2.8 versus 5.4±2.8 for those who did not (p<0.001. An ROC area under the curve of 0.79 demonstrates LIPS is statistically powerful for predicting ARDS development. Furthermore, for every 1-unit increase in LIPS, the odds of developing ARDS increase by 1.50 (p<0.001 and odds of ICU mortality increase by 1.22 (p<0.001. Conclusion. LIPS is reliable for predicting development of ARDS and predicting mortality in critically ill surgical patients.
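The per-unit odds ratios reported above can be turned into predicted probabilities once a baseline odds is assumed; in the sketch below only the 1.50-per-point odds ratio comes from the study, while the baseline odds is an invented illustration.

```python
def prob_from_odds(odds):
    """Convert odds to probability."""
    return odds / (1.0 + odds)

def ards_odds(lips, baseline_odds, or_per_unit=1.50):
    """Odds of ARDS at a given LIPS, scaling a baseline odds at LIPS = 0
    by the reported odds ratio of 1.50 per LIPS point."""
    return baseline_odds * or_per_unit ** lips

# baseline_odds = 0.05 is an assumed illustration, not a value from the study
for lips in (5, 9):
    o = ards_odds(lips, 0.05)
    print(f"LIPS {lips}: P(ARDS) = {prob_from_odds(o):.2f}")
```

Each additional LIPS point multiplies the odds (not the probability) by 1.50, which is why the probability curve flattens as it approaches 1.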

  13. A Novel Method for the Prediction of Critical Inclusion Size Leading to Fatigue Failure

    Science.gov (United States)

    Saberifar, S.; Mashreghi, A. R.

    2012-06-01

    The fatigue behavior of two commercial 30MnVS6 steels with similar microstructure and mechanical properties containing inclusions of different sizes were studied in the 107 cycles fatigue regime. The scanning electron microscopy (SEM) investigations of the fracture surfaces revealed that the nonmetallic inclusions are the main sources of fatigue crack initiation. Calculated according to the Murakami's model, the stress intensity factors were found to be suitable for the assessment of fatigue behavior. In this article, a new method is proposed for the prediction of the critical inclusion size, using Murakami's model. According to this method, a critical stress intensity factor was determined for the estimation of the critical inclusion size causing the fatigue failure.
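A common form of Murakami's √area model for an interior defect is ΔK = 0.5·σ·√(π·√area), with 0.65 replacing 0.5 for surface defects; inverting it at a critical stress intensity gives a critical inclusion size, which is the spirit of the method proposed here. The geometry factors and example values below are from the general literature, not from this paper.

```python
import math

def delta_k(sigma, sqrt_area_m, Y=0.5):
    """Stress intensity range (Pa*sqrt(m)) at a small defect of size sqrt(area),
    per Murakami's sqrt-area model; Y = 0.5 interior, 0.65 surface."""
    return Y * sigma * math.sqrt(math.pi * sqrt_area_m)

def critical_sqrt_area(sigma, delta_k_crit, Y=0.5):
    """Largest tolerable sqrt(area) (m) for a given stress amplitude,
    obtained by inverting the sqrt-area model at the critical ΔK."""
    return (delta_k_crit / (Y * sigma)) ** 2 / math.pi

# Illustration with assumed values: 500 MPa stress, 4 MPa*sqrt(m) critical ΔK
size = critical_sqrt_area(500e6, 4e6)
print(f"critical sqrt(area) = {size * 1e6:.0f} um")  # prints 81 um
```

Inclusions larger than this size would be predicted to initiate fatigue failure at the assumed stress level.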

  14. Detecting, anticipating, and predicting critical transitions in spatially extended systems.

    Science.gov (United States)

    Kwasniok, Frank

    2018-03-01

    A data-driven linear framework for detecting, anticipating, and predicting incipient bifurcations in spatially extended systems based on principal oscillation pattern (POP) analysis is discussed. The dynamics are assumed to be governed by a system of linear stochastic differential equations which is estimated from the data. The principal modes of the system together with corresponding decay or growth rates and oscillation frequencies are extracted as the eigenvectors and eigenvalues of the system matrix. The method can be applied to stationary datasets to identify the least stable modes and assess the proximity to instability; it can also be applied to nonstationary datasets using a sliding window approach to track the changing eigenvalues and eigenvectors of the system. As a further step, a genuinely nonstationary POP analysis is introduced. Here, the system matrix of the linear stochastic model is time-dependent, allowing for extrapolation and prediction of instabilities beyond the learning data window. The methods are demonstrated and explored using the one-dimensional Swift-Hohenberg equation as an example, focusing on the dynamics of stochastic fluctuations around the homogeneous stable state prior to the first bifurcation. The POP-based techniques are able to extract and track the least stable eigenvalues and eigenvectors of the system; the nonstationary POP analysis successfully predicts the timing of the first instability and the unstable mode well beyond the learning data window.
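The core of a POP analysis is the eigendecomposition of the estimated system matrix: the real part of each eigenvalue gives a decay (or growth) rate and the imaginary part an oscillation frequency. A 2×2 sketch with the eigenvalues computed analytically (a real application would estimate the matrix from data and use a numerical eigensolver):

```python
import cmath, math

def pop_eigen_2x2(A):
    """Eigenvalues of a 2x2 system matrix for dx/dt = A x.
    Re(lambda) < 0 -> decaying mode; Im(lambda) -> oscillation frequency."""
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(complex(tr * tr - 4 * det))
    return (tr + disc) / 2, (tr - disc) / 2

# Damped oscillator: decay rate 0.1, angular frequency 2.0
A = [[-0.1, -2.0],
     [ 2.0, -0.1]]
lam1, lam2 = pop_eigen_2x2(A)
e_fold = -1.0 / lam1.real              # e-folding time of the mode
period = 2 * math.pi / abs(lam1.imag)  # oscillation period
```

Tracking how `lam1.real` approaches zero in a sliding window is the "proximity to instability" diagnostic described in the abstract.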

  15. Modelling critical NDVI curves in perennial ryegrass

    DEFF Research Database (Denmark)

    Gislum, R; Boelt, B

    2010-01-01

    The use of optical sensors to measure canopy reflectance and calculate a crop index such as the normalized difference vegetation index (NDVI) is widespread in agricultural crops, but has so far not been implemented in herbage seed production. The purpose of the present study is to develop a critical...... NDVI curve where the critical NDVI, defined as the minimum NDVI obtained to achieve a high seed yield, will be modelled during the growing season. NDVI measurements were made at different growing degree days (GDD) in a three-year field experiment where different N application rates were applied....... There was a clear maximum in the correlation coefficient between seed yield and NDVI in the period from approximately 700 to 900 GDD. At this time there was an exponential relationship between NDVI and seed yield where the highest seed yields occurred at NDVI ~0.9. Theoretically the farmers should aim for an NDVI of 0...
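NDVI itself is a simple band ratio, and the critical-NDVI idea amounts to testing it against a yield-calibrated threshold (the ~0.9 value this record reports for 700-900 GDD). A minimal sketch; the reflectance values in the example are invented:

```python
def ndvi(nir, red):
    """Normalized difference vegetation index from near-infrared and red reflectance."""
    return (nir - red) / (nir + red)

def meets_critical_ndvi(nir, red, critical=0.9):
    """True if canopy reflectance reaches the critical NDVI for high seed yield
    (the ~0.9 value this study reports for perennial ryegrass at 700-900 GDD)."""
    return ndvi(nir, red) >= critical

# Invented reflectance readings for two canopies
print(meets_critical_ndvi(0.50, 0.05))  # sparse canopy, NDVI ~0.82 -> False
print(meets_critical_ndvi(0.95, 0.02))  # dense canopy, NDVI ~0.96 -> True
```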

  16. The Biomantle-Critical Zone Model

    Science.gov (United States)

    Johnson, D. L.; Lin, H.

    2006-12-01

    It is a fact that established fields, like geomorphology, soil science, and pedology, which treat near surface and surface processes, are undergoing conceptual changes. Disciplinary self examinations are rife. New practitioners are joining these fields, bringing novel and interdisciplinary ideas. Such new names as "Earth's critical zone," "near surface geophysics," and "weathering engine" are being coined for research groups. Their agendas reflect an effort to integrate and reenergize established fields and break new ground. The new discipline "hydropedology" integrates soil science with hydrologic principles, and recent biodynamic investigations have spawned "biomantle" concepts and principles. One force behind these sea shifts may be retrospectives whereby disciplines periodically re-invent themselves to meet new challenges. Such retrospectives may be manifest in the recent Science issue on "Soils, The Final Frontier" (11 June, 2004), and in recent National Research Council reports that have set challenges to science for the next three decades (Basic Research Opportunities in Earth Science, and Grand Challenges for the Environmental Sciences, both published in 2001). In keeping with such changes, we advocate the integration of biomantle and critical zone concepts into a general model of Earth's soil. (The scope of the model automatically includes the domain of hydropedology.) Our justification is that the integration makes for a more appealing holistic, and realistic, model for the domain of Earth's soil at any scale. The focus is on the biodynamics of the biomantle and water flow within the critical zone. In this general model the biomantle is the epidermis of the critical zone, which extends to the base of the aquifer. We define soil as the outer layer of landforms on planets and similar bodies altered by biological, chemical, and/or physical agents. Because Earth is the only planet with biological agents, as far as we know, it is the only one that has all

  17. The effect of virtual mass on the prediction of critical flow

    International Nuclear Information System (INIS)

    Cheng, L.; Lahey, R.T.; Drew, D.A.

    1983-01-01

    By observing the results in Fig. 4 and Fig. 5 we can see that virtual mass effects are important in predicting critical flow. However, as seen in Fig. 7a, in which all three flows are predicted to be critical (Δ=0), it is difficult to distinguish one set of conditions from the other by just considering the pressure profile. Clearly more detailed data, such as the throat void fraction, is needed for discrimination between these calculations. Moreover, since the calculated critical flows have been found to be sensitive to initial mass flux and void fraction, careful measurements of those parameters are needed before accurate virtual mass parameters can be determined from these data. It can be concluded that the existing Moby Dick data is inadequate to allow one to deduce accurate values of the virtual mass parameters C_VM and λ. Nevertheless, more careful experiments of this type are uniquely suited for the determination of these important parameters. It appears that the use of a nine-equation model, such as that discussed herein, coupled with more detailed, accurate critical flow data is an effective means of determining the parameters in interfacial momentum transfer models, such as virtual mass effects, which are only important during strong spatial accelerations. Indeed, there are few other methods available which can be used for such determinations.

  18. Prediction of aqueous solubility, vapor pressure and critical micelle concentration for aquatic partitioning of perfluorinated chemicals.

    Science.gov (United States)

    Bhhatarai, Barun; Gramatica, Paola

    2011-10-01

    The majority of perfluorinated chemicals (PFCs) are of increasing risk to biota and environment due to their physicochemical stability, wide transport in the environment and difficulty in biodegradation. It is necessary to identify and prioritize these harmful PFCs and to characterize their physicochemical properties that govern the solubility, distribution and fate of these chemicals in an aquatic ecosystem. Therefore, available experimental data (10-35 compounds) of three important properties: aqueous solubility (AqS), vapor pressure (VP) and critical micelle concentration (CMC) on per- and polyfluorinated compounds were collected for quantitative structure-property relationship (QSPR) modeling. Simple and robust models based on theoretical molecular descriptors were developed and externally validated for predictivity. Model predictions on selected PFCs were compared with available experimental data and other published in silico predictions. The structural applicability domains (AD) of the models were verified on a bigger data set of 221 compounds. The predicted properties of the chemicals that are within the AD, are reliable, and they help to reduce the wide data gap that exists. Moreover, the predictions of AqS, VP, and CMC of most common PFCs were evaluated to understand the aquatic partitioning and to derive a relation with the available experimental data of bioconcentration factor (BCF).
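The structural applicability domain (AD) mentioned above is commonly checked in QSPR work with the leverage approach, flagging predictions whose leverage exceeds h* = 3p/n; for a one-descriptor model the leverage has a closed form. This is a generic illustration of the technique, not the authors' exact procedure.

```python
def leverages(x_train):
    """Leverage of each training compound for a one-descriptor linear model:
    h_i = 1/n + (x_i - xbar)^2 / sum_j (x_j - xbar)^2."""
    n = len(x_train)
    xbar = sum(x_train) / n
    sxx = sum((x - xbar) ** 2 for x in x_train)
    return [1.0 / n + (x - xbar) ** 2 / sxx for x in x_train]

def in_domain(x_new, x_train, p=2):
    """Applicability-domain check: leverage of a new compound vs. h* = 3p/n
    (p = number of model parameters, here slope + intercept)."""
    n = len(x_train)
    xbar = sum(x_train) / n
    sxx = sum((x - xbar) ** 2 for x in x_train)
    h = 1.0 / n + (x_new - xbar) ** 2 / sxx
    return h <= 3.0 * p / n
```

Predictions for compounds inside the AD are considered reliable interpolations; compounds outside it (high leverage) are extrapolations and should be flagged, which is how the reliable subset of the 221-compound set would be identified.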

  19. Assessment of ASSERT-PV for prediction of critical heat flux in CANDU bundles

    International Nuclear Information System (INIS)

    Rao, Y.F.; Cheng, Z.; Waddington, G.M.

    2014-01-01

    Highlights: • Assessment of the new Canadian subchannel code ASSERT-PV 3.2 for CHF prediction. • CANDU 28-, 37- and 43-element bundle CHF experiments. • Prediction improvement of ASSERT-PV 3.2 over previous code versions. • Sensitivity study of the effect of CHF model options. - Abstract: Atomic Energy of Canada Limited (AECL) has developed the subchannel thermalhydraulics code ASSERT-PV for the Canadian nuclear industry. The recently released ASSERT-PV 3.2 provides enhanced models for improved predictions of flow distribution, critical heat flux (CHF), and post-dryout (PDO) heat transfer in horizontal CANDU fuel channels. This paper presents results of an assessment of the new code version against five full-scale CANDU bundle experiments conducted in the 1990s and in 2009 by Stern Laboratories (SL), using 28-, 37- and 43-element (CANFLEX) bundles. A total of 15 CHF test series with varying pressure-tube creep and/or bearing-pad height were analyzed. The SL experiments encompassed the bundle geometries and range of flow conditions for the intended ASSERT-PV applications for CANDU reactors. Code predictions of channel dryout power and axial and radial CHF locations were compared against measurements from the SL CHF tests to quantify the code prediction accuracy. The prediction statistics using the recommended model set of ASSERT-PV 3.2 were compared to those from previous code versions. Furthermore, the sensitivity studies evaluated the contribution of each CHF model change or enhancement to the improvement in CHF prediction. Overall, the assessment demonstrated significant improvement in prediction of channel dryout power and axial and radial CHF locations in horizontal fuel channels containing CANDU bundles.

  20. Model of designating the critical damages

    Directory of Open Access Journals (Sweden)

    Zwolińska Bożena

    2017-06-01

    Full Text Available The article consists of two parts which make for an integral body. It depicts a method of designating critical damages in accordance with the lean maintenance method. The author considered an exemplary serial-parallel production system in which, during time Δt, damage appeared on three different objects. The article presents a mathematical model which enables determination of an indicator called the "prioritized digit of the device". The developed model takes into account several parameters: the production capabilities of the devices, the existence of potential substitute devices, the position of the damage in the production stream based on the capacity of operational buffers, the time needed to remove the damage, and the influence of the damage on the finalization of customers' orders (the CEF indicator).

  1. Comparison of APACHE II and SAPS II Scoring Systems in Prediction of Critically ill Patients’ Outcome

    Directory of Open Access Journals (Sweden)

    Hamed Aminiahidashti

    2017-01-01

    Full Text Available Introduction: Using physiologic scoring systems for identifying high-risk patients for mortality has been considered recently. This study was designed to evaluate the value of the Acute Physiology and Chronic Health Evaluation II (APACHE II) and Simplified Acute Physiologic Score (SAPS II) models in prediction of 1-month mortality of critically ill patients. Methods: The present prospective cross-sectional study was performed on critically ill patients presented to the emergency department during 6 months. Data required for calculation of the scores were gathered and the performance of the models in prediction of 1-month mortality was assessed using STATA software 11.0. Results: 82 critically ill patients with a mean age of 53.45 ± 20.37 years were included (65.9% male). Their mortality rate was 48%. Mean SAPS II (p < 0.0001) and APACHE II (p = 0.0007) scores were significantly higher in dead patients. The areas under the ROC curve of SAPS II and APACHE II for prediction of mortality were 0.75 (95% CI: 0.64-0.86) and 0.72 (95% CI: 0.60-0.83), respectively (p = 0.24). The slope and intercept of SAPS II were 1.02 and 0.04, respectively. In addition, these values were 0.92 and 0.09 for APACHE II, respectively. Conclusion: The findings of the present study showed that APACHE II and SAPS II had similar value in predicting 1-month mortality of patients. The discriminatory powers of both models were acceptable, but their calibration showed some lack of fit, indicating that APACHE II and SAPS II are only partially accurate.
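Areas under ROC curves like those reported above can be computed nonparametrically as the Mann-Whitney probability that a randomly chosen non-survivor scores higher than a randomly chosen survivor. A sketch with invented scores (not the study's data):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney statistic:
    P(score_pos > score_neg) + 0.5 * P(tie)."""
    wins = ties = 0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

# Invented illustrative scores: higher SAPS II in non-survivors
died     = [45, 52, 38, 60]
survived = [30, 41, 28, 35]
print(f"AUC = {auc(died, survived):.2f}")  # prints AUC = 0.94
```

An AUC of 0.5 means no discrimination and 1.0 perfect discrimination; the 0.72-0.75 values in the record sit in the "acceptable" band, as the conclusion states.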

  2. Saturated properties prediction in critical region by a quartic ...

    African Journals Online (AJOL)

    A diverse substance library containing extensive PVT data for 77 pure components was used to critically evaluate the performance of a quartic equation of state and four other well-known cubic equations of state in the critical region. The quartic EOS studied in this work was found to be significantly superior to the others in both vapor ...

  3. Disease Prediction Models and Operational Readiness

    Energy Technology Data Exchange (ETDEWEB)

    Corley, Courtney D.; Pullum, Laura L.; Hartley, David M.; Benedum, Corey M.; Noonan, Christine F.; Rabinowitz, Peter M.; Lancaster, Mary J.

    2014-03-19

    INTRODUCTION: The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. One of the primary goals of this research was to characterize the viability of biosurveillance models to provide operationally relevant information for decision makers and to identify areas for future research. Two critical characteristics differentiate this work from other infectious disease modeling reviews. First, we reviewed models that attempted to predict the disease event, not merely its transmission dynamics. Second, we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). Methods: We searched dozens of commercial and government databases and harvested Google search results for eligible models utilizing terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche-modeling. The publication dates of the search results returned are bounded by the dates of coverage of each database and the date on which the search was performed; however, all searching was completed by December 31, 2010. This returned 13,767 webpages and 12,152 citations. After de-duplication and removal of extraneous material, a core collection of 6,503 items was established and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. Next, PNNL's IN-SPIRE visual analytics software was used to cross-correlate these publications with the definition of a biosurveillance model, resulting in the selection of 54 documents that matched the criteria. Ten of these documents, however, dealt purely with disease spread models, inactivation of bacteria, or the modeling of human immune system responses to pathogens rather than predicting disease events. As a result, we systematically reviewed 44 papers and the

  4. Prediction of critical illness in elderly outpatients using elder risk assessment: a population-based study

    Directory of Open Access Journals (Sweden)

    Biehl M

    2016-06-01

    The area under the receiver operating characteristic curve was 0.75, which indicated good discrimination. Conclusion: A simple model based on easily obtainable administrative data predicted critical illness in the next 2 years in elderly outpatients, with up to 14% of the highest-risk population suffering from critical illness. This model can facilitate efficient enrollment of patients into clinical programs such as care transition programs and studies aimed at the prevention of critical illness. It can also serve as a reminder to initiate advance care planning for high-risk elderly patients. External validation of this tool in different populations may enhance its generalizability. Keywords: aged, prognostication, critical care, mortality, elder risk assessment

  5. Critical assessment of methods of protein structure prediction (CASP) - round x

    KAUST Repository

    Moult, John

    2013-12-17

    This article is an introduction to the special issue of the journal PROTEINS, dedicated to the tenth Critical Assessment of Structure Prediction (CASP) experiment to assess the state of the art in protein structure modeling. The article describes the conduct of the experiment, the categories of prediction included, and outlines the evaluation and assessment procedures. The 10 CASP experiments span almost 20 years of progress in the field of protein structure modeling, and there have been enormous advances in methods and model accuracy in that period. Notable in this round is the first sustained improvement of models with refinement methods, using molecular dynamics. For the first time, we tested the ability of modeling methods to make use of sparse experimental three-dimensional contact information, such as may be obtained from new experimental techniques, with encouraging results. On the other hand, new contact prediction methods, though holding considerable promise, have yet to make an impact in CASP testing. The nature of CASP targets has been changing in recent CASPs, reflecting shifts in experimental structural biology, with more irregular structures, more multi-domain and multi-subunit structures, and less standard versions of known folds. When allowance is made for these factors, we continue to see steady progress in the overall accuracy of models, particularly resulting from improvement of non-template regions.

  6. Model of designating the critical damages

    Directory of Open Access Journals (Sweden)

    Zwolińska Bożena

    2017-06-01

    Full Text Available Managing a company in the lean way presumes neither breakdowns nor reserves anywhere in the delivery chain. However, achieving such low indicators is impossible. That is why in some production plants it is extremely important to focus on preventive actions which can limit damage. This article presents a method of designating critical damages in accordance with the lean maintenance approach. The article consists of two parts which form an integral whole. The first part characterizes a real object and analyzes the production capabilities of certain areas within the production structure. The second part presents a probabilistic model of maximal time loss based on the emptying and filling of interoperational buffers.

  7. Modeling Resource Hotspots: Critical Linkages and Processes

    Science.gov (United States)

    Daher, B.; Mohtar, R.; Pistikopoulos, E.; McCarl, B. A.; Yang, Y.

    2017-12-01

    Growing demands for interconnected resources emerge in the form of hotspots of varying characteristics. The business as usual allocation model cannot address the current, let alone anticipated, complex and highly interconnected resource challenges we face. A new paradigm for resource allocation must be adopted: one that identifies cross-sectoral synergies and, that moves away from silos to recognition of the nexus and integration of it. Doing so will result in new opportunities for business growth, economic development, and improved social well-being. Solutions and interventions must be multi-faceted; opportunities should be identified with holistic trade-offs in mind. No single solution fits all: different hotspots will require distinct interventions. Hotspots have varying resource constraints, stakeholders, goals and targets. The San Antonio region represents a complex resource hotspot with promising potential: its rapidly growing population, the Eagle Ford shale play, and the major agricultural activity there makes it a hotspot with many competing demands. Stakeholders need tools to allow them to knowledgeably address impending resource challenges. This study will identify contemporary WEF nexus questions and critical system interlinkages that will inform the modeling of the tightly interconnected resource systems and stresses using the San Antonio Region as a base; it will conceptualize a WEF nexus modeling framework, and develop assessment criteria to inform integrative planning and decision making.

  8. Prediction of Chemical Function: Model Development and ...

    Science.gov (United States)

    The United States Environmental Protection Agency’s Exposure Forecaster (ExpoCast) project is developing both statistical and mechanism-based computational models for predicting exposures to thousands of chemicals, including those in consumer products. The high-throughput (HT) screening-level exposures developed under ExpoCast can be combined with HT screening (HTS) bioactivity data for the risk-based prioritization of chemicals for further evaluation. The functional role (e.g. solvent, plasticizer, fragrance) that a chemical performs can drive both the types of products in which it is found and the concentration in which it is present, thereby impacting exposure potential. However, critical chemical use information (including functional role) is lacking for the majority of commercial chemicals for which exposure estimates are needed. A suite of machine-learning based models for classifying chemicals in terms of their likely functional roles in products, based on structure, was developed. This effort required collection, curation, and harmonization of publicly available data sources of chemical functional use information from government and industry bodies. Physicochemical and structure descriptor data were generated for chemicals with function data. Machine-learning classifier models for function were then built in a cross-validated manner from the descriptor/function data using the method of random forests. The models were applied to: 1) predict chemi
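The modeling step can be sketched with a miniature version of the random-forest idea described above: bootstrap resampling plus a majority vote over weak trees (here, one-split stumps). The descriptors (logP, molecular weight) and functional-role labels below are invented for illustration and are far simpler than the curated ExpoCast data:

```python
import random
from collections import Counter

def stump_fit(X, y):
    """Find the single (feature, threshold) split with best training accuracy."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            left = [y[i] for i, row in enumerate(X) if row[f] <= t]
            right = [y[i] for i, row in enumerate(X) if row[f] > t]
            if not left or not right:
                continue
            lm = Counter(left).most_common(1)[0][0]   # majority label, left side
            rm = Counter(right).most_common(1)[0][0]  # majority label, right side
            acc = (sum(1 for v in left if v == lm)
                   + sum(1 for v in right if v == rm)) / len(y)
            if best is None or acc > best[0]:
                best = (acc, f, t, lm, rm)
    return best[1:] if best else (0, X[0][0], y[0], y[0])

def forest_fit(X, y, n_trees=25, seed=0):
    """Train each stump on a bootstrap resample of the data."""
    rng = random.Random(seed)
    trees = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        trees.append(stump_fit([X[i] for i in idx], [y[i] for i in idx]))
    return trees

def forest_predict(trees, row):
    """Majority vote over all stumps."""
    votes = [(lm if row[f] <= t else rm) for f, t, lm, rm in trees]
    return Counter(votes).most_common(1)[0][0]

# Toy descriptors: [logP, molecular weight]; labels are hypothetical functional roles.
X = [[0.5, 80], [0.7, 95], [0.6, 110], [3.9, 300], [4.2, 350], [3.7, 280]]
y = ["solvent", "solvent", "solvent", "plasticizer", "plasticizer", "plasticizer"]
trees = forest_fit(X, y)
print(forest_predict(trees, [0.4, 90]))   # solvent
print(forest_predict(trees, [4.0, 320]))  # plasticizer
```

A production version would of course use a full decision-tree ensemble with per-split feature subsampling and cross-validation, as the abstract describes.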

  9. Scalable Predictive Analysis in Critically Ill Patients Using a Visual Open Data Analysis Platform.

    Science.gov (United States)

    Van Poucke, Sven; Zhang, Zhongheng; Schmitz, Martin; Vukicevic, Milan; Laenen, Margot Vander; Celi, Leo Anthony; De Deyne, Cathy

    2016-01-01

    With the accumulation of large amounts of health related data, predictive analytics could stimulate the transformation of reactive medicine towards Predictive, Preventive and Personalized (PPPM) Medicine, ultimately affecting both cost and quality of care. However, high-dimensionality and high-complexity of the data involved, prevents data-driven methods from easy translation into clinically relevant models. Additionally, the application of cutting edge predictive methods and data manipulation require substantial programming skills, limiting its direct exploitation by medical domain experts. This leaves a gap between potential and actual data usage. In this study, the authors address this problem by focusing on open, visual environments, suited to be applied by the medical community. Moreover, we review code free applications of big data technologies. As a showcase, a framework was developed for the meaningful use of data from critical care patients by integrating the MIMIC-II database in a data mining environment (RapidMiner) supporting scalable predictive analytics using visual tools (RapidMiner's Radoop extension). Guided by the CRoss-Industry Standard Process for Data Mining (CRISP-DM), the ETL process (Extract, Transform, Load) was initiated by retrieving data from the MIMIC-II tables of interest. As use case, correlation of platelet count and ICU survival was quantitatively assessed. Using visual tools for ETL on Hadoop and predictive modeling in RapidMiner, we developed robust processes for automatic building, parameter optimization and evaluation of various predictive models, under different feature selection schemes. Because these processes can be easily adopted in other projects, this environment is attractive for scalable predictive analytics in health research.

  10. Scalable Predictive Analysis in Critically Ill Patients Using a Visual Open Data Analysis Platform.

    Directory of Open Access Journals (Sweden)

    Sven Van Poucke

    Full Text Available With the accumulation of large amounts of health related data, predictive analytics could stimulate the transformation of reactive medicine towards Predictive, Preventive and Personalized (PPPM) Medicine, ultimately affecting both cost and quality of care. However, high-dimensionality and high-complexity of the data involved, prevents data-driven methods from easy translation into clinically relevant models. Additionally, the application of cutting edge predictive methods and data manipulation require substantial programming skills, limiting its direct exploitation by medical domain experts. This leaves a gap between potential and actual data usage. In this study, the authors address this problem by focusing on open, visual environments, suited to be applied by the medical community. Moreover, we review code free applications of big data technologies. As a showcase, a framework was developed for the meaningful use of data from critical care patients by integrating the MIMIC-II database in a data mining environment (RapidMiner) supporting scalable predictive analytics using visual tools (RapidMiner's Radoop extension). Guided by the CRoss-Industry Standard Process for Data Mining (CRISP-DM), the ETL process (Extract, Transform, Load) was initiated by retrieving data from the MIMIC-II tables of interest. As use case, correlation of platelet count and ICU survival was quantitatively assessed. Using visual tools for ETL on Hadoop and predictive modeling in RapidMiner, we developed robust processes for automatic building, parameter optimization and evaluation of various predictive models, under different feature selection schemes. Because these processes can be easily adopted in other projects, this environment is attractive for scalable predictive analytics in health research.

  11. Critical manifold of the kagome-lattice Potts model

    Science.gov (United States)

    Lykke Jacobsen, Jesper; Scullard, Christian R.

    2012-12-01

    Any two-dimensional infinite regular lattice G can be produced by tiling the plane with a finite subgraph B⊆G; we call B a basis of G. We introduce a two-parameter graph polynomial PB(q, v) that depends on B and its embedding in G. The algebraic curve PB(q, v) = 0 is shown to provide an approximation to the critical manifold of the q-state Potts model, with coupling v = e^K - 1, defined on G. This curve predicts the phase diagram not only in the physical ferromagnetic regime (v > 0), but also in the antiferromagnetic regime (v < 0). The computation of PB(q, v) can also be used to detect exact solvability of the Potts model on G. We illustrate the method for two choices of G: the square lattice, where the Potts model has been exactly solved, and the kagome lattice, where it has not. For the square lattice we correctly reproduce the known phase diagram, including the antiferromagnetic transition and the singularities in the Berker-Kadanoff phase at certain Beraha numbers. For the kagome lattice, taking the smallest basis with six edges we recover a well-known (but now refuted) conjecture of F Y Wu. Larger bases provide successive improvements on this formula, giving a natural extension of Wu’s approach. We perform large-scale numerical computations for comparison and find excellent agreement with the polynomial predictions. For v > 0 the accuracy of the predicted critical coupling vc is of the order 10^-4 or 10^-5 for the six-edge basis, and improves to 10^-6 or 10^-7 for the largest basis studied (with 36 edges). This article is part of ‘Lattice models and integrability’, a special issue of Journal of Physics A: Mathematical and Theoretical in honour of F Y Wu's 80th birthday.
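For the square lattice, the exactly solved case mentioned above, the ferromagnetic critical manifold is the known curve v² = q, which the basis polynomial reproduces. A minimal numerical check of that curve (the general polynomial PB(q, v) for an arbitrary basis is not reproduced here):

```python
import math

# Square-lattice Potts ferromagnet: critical manifold v^2 = q, with v = e^K - 1,
# so the critical coupling is K_c = ln(1 + sqrt(q)).
for q in (2, 3, 4):
    v_c = math.sqrt(q)
    K_c = math.log(1.0 + v_c)
    print(f"q={q}: v_c={v_c:.4f}, K_c={K_c:.4f}")
# q=2 recovers the Ising result K_c = ln(1 + sqrt(2)) ≈ 0.8814 (in Potts units).
```

For the kagome lattice no such closed form is known, which is why the successive polynomial approximations discussed in the record are of interest.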

  12. Prediction of the Critical Curvature for LX-17 with the Time of Arrival Data from DNS

    Energy Technology Data Exchange (ETDEWEB)

    Yao, Jin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fried, Laurence E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Moss, William C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-01-10

    We extract the detonation shock front velocity, curvature, and acceleration from time of arrival data measured at grid points from direct numerical simulations of a 50 mm rate-stick lit by a disk source, with the ignition and growth reaction model and a JWL equation of state calibrated for LX-17. We compute the quasi-steady (D, κ) relation based on the extracted properties and predict the critical curvature of LX-17. We also propose an explicit formula that contains the failure turning point, obtained by optimization of the (D, κ) relation for LX-17.

  13. Prediction Approach of Critical Node Based on Multiple Attribute Decision Making for Opportunistic Sensor Networks

    Directory of Open Access Journals (Sweden)

    Qifan Chen

    2016-01-01

    Full Text Available Predicting the critical nodes of an Opportunistic Sensor Network (OSN) can help us not only to improve network performance but also to decrease the cost of network maintenance. However, existing ways of predicting critical nodes in static networks are not suitable for OSN. In this paper, the conceptions of critical nodes, region contribution (RC), and cut-vertex in a multiregion OSN are defined. We propose an approach to predict critical nodes for OSN based on multiple attribute decision making (MADM). It uses RC to represent the dependence of regions on Ferry nodes. The TOPSIS algorithm is employed to find the Ferry node with the maximum comprehensive contribution, which is a critical node. The experimental results show that, in different scenarios, this approach can predict the critical nodes of OSN well.
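The TOPSIS step can be sketched directly: normalize the decision matrix, apply the criterion weights, and rank alternatives by their relative closeness to the ideal point. The ferry-node attributes below (region contribution, degree, residual energy), the weights, and the node names are hypothetical, not the paper's experimental setup:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) by relative closeness to the ideal solution.
    benefit[j] is True when larger values of criterion j are better."""
    m, n = len(matrix), len(matrix[0])
    # 1. vector-normalize each column, 2. apply the weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    V = [[weights[j] * row[j] / norms[j] for j in range(n)] for row in matrix]
    # 3. ideal and anti-ideal points per criterion
    ideal = [(max if benefit[j] else min)(v[j] for v in V) for j in range(n)]
    anti = [(min if benefit[j] else max)(v[j] for v in V) for j in range(n)]
    # 4. closeness coefficient in [0, 1]; higher is better
    d_pos = [math.dist(v, ideal) for v in V]
    d_neg = [math.dist(v, anti) for v in V]
    return [d_neg[i] / (d_pos[i] + d_neg[i]) for i in range(m)]

# Hypothetical ferry nodes: [region contribution, degree, residual energy].
nodes = {"F1": [0.9, 5, 0.3], "F2": [0.4, 8, 0.9], "F3": [0.7, 6, 0.6]}
scores = topsis(list(nodes.values()), [0.5, 0.3, 0.2], [True, True, True])
best = max(zip(nodes, scores), key=lambda kv: kv[1])[0]
print(best)  # F1 — highest comprehensive contribution under these weights
```

With region contribution weighted most heavily, the node serving its regions most (F1) ranks as the critical node; different weightings can change the ranking, which is the point of the MADM formulation.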

  14. A critical review of lexical analysis and Big Five model

    Directory of Open Access Journals (Sweden)

    María Cristina Richaud de Minzi

    2002-06-01

    Full Text Available In recent years the idea has resurfaced that traits can be measured in a reliable and valid way, and that this can be useful in the prediction of human behavior. The five-factor model appears to represent a conceptual and empirical advance in the field of personality theory. The number of orthogonal factors necessary (Goldberg, 1992, p. 26) to show the relationships between the descriptors of the traits in English is five, and their nature can be summarized through the broad concepts of Surgency, Agreeableness, Responsibility, Emotional Stability versus Neuroticism, and Openness to Experience (John, 1990, p. 96). Furthermore, despite the criticisms that have been leveled at the model, it represents a breakthrough in the field of personality assessment. This approach constitutes a contribution to the study of personality, without being the integrative model of personality.

  15. Saturated properties prediction in critical region by a quartic equation of state

    Directory of Open Access Journals (Sweden)

    Yong Wang

    2011-08-01

    Full Text Available A diverse substance library containing extensive PVT data for 77 pure components was used to critically evaluate the performance of a quartic equation of state and four other well-known cubic equations of state in the critical region. The quartic EOS studied in this work was found to be significantly superior to the others in both vapor pressure prediction and saturated volume prediction in the vicinity of the critical point.

  16. Tethered 3-min all-out test did not predict the traditional critical force parameters in inexperienced swimmers.

    Science.gov (United States)

    Kalva-Filho, Carlos A; Zagatto, Alessandro M; da Silva, Adelino S; Castanho de Araújo, Monique Y; de Almeida, Pablo B; Papoti, Marcelo

    2017-09-01

    The critical power model can be applied in tethered swimming (i.e. the critical force model). Although critical force can be used to prescribe aerobic training, its determination depends on at least three exhaustive efforts performed on different days. In this context, previous studies have demonstrated that the critical power model can be estimated from a single 3-min all-out test (3MT), which had not yet been investigated in swimming. Thus, the aim of this study was to compare the parameters obtained during a tethered swimming 3MT to those obtained with the traditional critical force model. Seven swimmers (four female and three male) underwent a tethered swimming 3MT and three exhaustive efforts to determine the traditional critical force parameters (i.e. critical force [CF] and anaerobic impulse capacity [AIC]). The critical force (CF3MIN) and the force-time integral above CF3MIN (AIC3MIN) determined during the tethered 3MT were not different from CF and AIC, respectively (P value >0.55). However, these parameters were not correlated (P value >0.45). In addition, we verified large limits of agreement between CF3MIN and CF (±19.7 N), which was also observed between AIC3MIN and AIC (±0.84 Log[N·min]). These findings demonstrate that the tethered 3MT should not be used to predict traditional critical force parameters, at least when the swimmers are inexperienced in long tethered all-out efforts.

  17. Prediction of chronic critical illness in a general intensive care unit

    Directory of Open Access Journals (Sweden)

    Sérgio H. Loss

    2013-06-01

    Full Text Available OBJECTIVE: To assess the incidence, costs, and mortality associated with chronic critical illness (CCI), and to identify clinical predictors of CCI in a general intensive care unit. METHODS: This was a prospective observational cohort study. All patients receiving supportive treatment for over 20 days were considered chronically critically ill and eligible for the study. After applying the exclusion criteria, 453 patients were analyzed. RESULTS: There was an 11% incidence of CCI. Total length of hospital stay, costs, and mortality were significantly higher among patients with CCI. Mechanical ventilation, sepsis, Glasgow score < 15, inadequate calorie intake, and higher body mass index were independent predictors of CCI in the multivariate logistic regression model. CONCLUSIONS: CCI affects a distinctive population in intensive care units with higher mortality, costs, and prolonged hospitalization. Factors identifiable at the time of admission or during the first week in the intensive care unit can be used to predict CCI.

  18. Teaching for Art Criticism: Incorporating Feldman's Critical Analysis Learning Model in Students' Studio Practice

    Science.gov (United States)

    Subramaniam, Maithreyi; Hanafi, Jaffri; Putih, Abu Talib

    2016-01-01

    This study adopted 30 first year graphic design students' artwork, with critical analysis using Feldman's model of art criticism. Data were analyzed quantitatively; descriptive statistical techniques were employed. The scores were viewed in the form of mean score and frequencies to determine students' performances in their critical ability.…

  19. Homogeneous non-equilibrium two-phase critical flow model

    International Nuclear Information System (INIS)

    Schroeder, J.J.; Vuxuan, N.

    1987-01-01

    An important aspect of nuclear and chemical reactor safety is the ability to predict the maximum or critical mass flow rate from a break or leak in a pipe system. At the beginning of such a blowdown, if the stagnation condition of the fluid is subcooled or slightly saturated, thermodynamic non-equilibrium exists downstream, e.g. the fluid becomes superheated to a degree determined by the liquid pressure. A simplified non-equilibrium model, explained in this report, is valid for rapidly decreasing pressure along the flow path. It presumes that the fluid has to be superheated by an amount governed by physical principles before it starts to flash into steam. The flow is assumed to be homogeneous, i.e. the steam and liquid velocities are equal. An adiabatic flow calculation mode (Fanno lines) is employed to evaluate the critical flow rate for long pipes. The model is found to describe critical flow tests satisfactorily. Good agreement is obtained with the large scale Marviken tests as well as with small scale experiments. (orig.)

  20. Teaching For Art Criticism: Incorporating Feldman’s Critical Analysis Learning Model In Students’ Studio Practice

    OpenAIRE

    Maithreyi Subramaniam; Jaffri Hanafi; Abu Talib Putih

    2016-01-01

    This study adopted 30 first year graphic design students’ artwork, with critical analysis using Feldman’s model of art criticism. Data were analyzed quantitatively; descriptive statistical techniques were employed. The scores were viewed in the form of mean score and frequencies to determine students’ performances in their critical ability. Pearson Correlation Coefficient was used to find out the correlation between students’ studio practice and art critical ability scores. The...

  1. Distributed model predictive control made easy

    CERN Document Server

    Negenborn, Rudy

    2014-01-01

    The rapid evolution of computer science, communication, and information technology has enabled the application of control techniques to systems beyond the possibilities of control theory just a decade ago. Critical infrastructures such as electricity, water, traffic and intermodal transport networks are now in the scope of control engineers. The sheer size of such large-scale systems requires the adoption of advanced distributed control approaches. Distributed model predictive control (MPC) is one of the promising control methodologies for control of such systems.   This book provides a state-of-the-art overview of distributed MPC approaches, while at the same time making clear directions of research that deserve more attention. The core and rationale of 35 approaches are carefully explained. Moreover, detailed step-by-step algorithmic descriptions of each approach are provided. These features make the book a comprehensive guide both for those seeking an introduction to distributed MPC as well as for those ...

  2. Risk prediction of Critical Infrastructures against extreme natural hazards: local and regional scale analysis

    Science.gov (United States)

    Rosato, Vittorio; Hounjet, Micheline; Burzel, Andreas; Di Pietro, Antonio; Tofani, Alberto; Pollino, Maurizio; Giovinazzi, Sonia

    2016-04-01

    Natural hazard events can induce severe impacts on the built environment; they can hit wide and densely populated areas, where there is a large number of (inter)dependent technological systems whose damages could cause the failure or malfunctioning of further different services, spreading the impacts over wider geographical areas. The EU project CIPRNet (Critical Infrastructures Preparedness and Resilience Research Network) is realizing an unprecedented Decision Support System (DSS) which enables operational risk prediction on Critical Infrastructures (CI) by predicting the occurrence of natural events (from long-term weather to short-term nowcast predictions), correlating intrinsic vulnerabilities of CI elements with the strengths of the different events' manifestations, and analysing the resulting Damage Scenario. The Damage Scenario is then transformed into an Impact Scenario, where punctual CI element damages are transformed into micro (local area) or meso (regional) scale Service Outages. At the smaller scale, the DSS simulates detailed city models (where CI dependencies are explicitly accounted for) that provide important input for crisis management organizations, whereas at the regional scale, by using an approximate System-of-Systems model describing systemic interactions, the focus is on raising awareness. The DSS has allowed the development of a novel simulation framework for predicting earthquake shake maps originating from a given seismic event, considering the shock wave propagation in inhomogeneous media and the subsequently produced damages by estimating building vulnerabilities on the basis of a phenomenological model [1, 2]. Moreover, in the presence of areas containing river basins, when abundant precipitation is expected, the DSS solves the hydrodynamic 1D/2D models of the river basins to predict the flux runoff and the corresponding flood dynamics. This calculation allows the estimation of the Damage Scenario and triggers the evaluation of the Impact Scenario.

  3. A Model for Teaching Critical Thinking

    Science.gov (United States)

    Emerson, Marnice K.

    2013-01-01

    In an age in which information is available almost instantly and in quantities unimagined just a few decades ago, most educators would agree that teaching adult learners to think critically about what they are reading, seeing, and hearing has never been more important. But just what is critical thinking? Do adult learners agree with educators that…

  4. Critical thinking in clinical nurse education: application of Paul's model of critical thinking.

    Science.gov (United States)

    Andrea Sullivan, E

    2012-11-01

    Nurse educators recognize that many nursing students have difficulty in making decisions in clinical practice. The ability to make effective, informed decisions in clinical practice requires that nursing students know and apply the processes of critical thinking. Critical thinking is a skill that develops over time and requires the conscious application of this process. There are a number of models in the nursing literature to assist students in the critical thinking process; however, these models tend to focus solely on decision making in hospital settings and are often complex to actualize. In this paper, Paul's Model of Critical Thinking is examined for its application to nursing education. I will demonstrate how the model can be used by clinical nurse educators to assist students to develop critical thinking skills in all health care settings in a way that makes critical thinking skills accessible to students. Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. Prediction of safety critical software operational reliability from test reliability using testing environment factors

    International Nuclear Information System (INIS)

    Jung, Hoan Sung; Seong, Poong Hyun

    1999-01-01

    It has been a critical issue to predict safety-critical software reliability in the nuclear engineering area. For many years, many researchers have focused on the quantification of software reliability, and many models have been developed to quantify it. Most software reliability models estimate reliability from the failure data collected during testing, assuming that the test environment well represents the operational profile. The user's interest, however, is in the operational reliability rather than the test reliability. Experience shows that operational reliability is higher than test reliability. With the assumption that the difference in reliability results from the change of environment from testing to operation, testing environment factors, comprising an aging factor and a coverage factor, are developed in this paper and used to predict the ultimate operational reliability from failure data gathered in the testing phase. This is done by incorporating test environments applied beyond the operational profile into the testing environment factors. The application results show that the proposed method can estimate operational reliability accurately. (Author). 14 refs., 1 tab., 1 fig
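The idea can be illustrated with a deliberately simplified sketch: scale the test-phase failure rate by the two environment factors. The multiplicative form and the numbers below are assumptions for illustration; the paper's actual definitions of the aging and coverage factors are more involved:

```python
import math

def operational_rate(test_rate, aging_factor, coverage_factor):
    """Simplified adjustment of the test-phase failure rate by environment
    factors. Factors < 1 reflect testing that stresses the software beyond
    the operational profile, so the predicted operational rate is lower."""
    return test_rate * aging_factor * coverage_factor

def reliability(rate, hours):
    """Exponential model: probability of failure-free operation over `hours`."""
    return math.exp(-rate * hours)

lam_test = 1.0e-4  # failures per hour observed during testing (hypothetical)
lam_op = operational_rate(lam_test, aging_factor=0.6, coverage_factor=0.5)
print(lam_op)                                # 3e-05
print(round(reliability(lam_op, 1000), 4))   # 0.9704
```

The predicted 1000-hour operational reliability (0.9704) exceeds the value implied by the raw test rate (exp(-0.1) ≈ 0.9048), mirroring the paper's observation that operational reliability is higher than test reliability.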

  6. A note on the criticisms against the internationalization process model

    OpenAIRE

    Hadjikhani, Amjad

    1997-01-01

    The internationalization process model introduced three decades ago still influences international business studies. Since that time, a growing number of researchers have tested the model to show its strengths and weaknesses. Among the critics, some focus on weaknesses in the model's theoretical aspects, while others argue against specific parts of the model. This paper reviews these criticisms and compares them with the original ideas of the internationalization model. One criticized aspect of the inter...

  7. Prediction of mortality with unmeasured anions in critically ill patients on mechanical ventilation

    Directory of Open Access Journals (Sweden)

    Novović Miloš N.

    2014-01-01

    Full Text Available Background/Aim. Acid-base disorders are common among critically ill patients. The physicochemical approach described by Stewart and modified by Figge gives a precise quantification of metabolic acidosis and insight into its main mechanisms, as well as the influence of unmeasured anions on metabolic acidosis. The aims of this study were: to determine whether the conventional acid-base variables are connected with the survival rate of critically ill patients in the intensive care unit; to determine whether strong ion difference/strong ion gap (SID/SIG) is a better predictor of mortality than the conventional acid-base variables; and to determine all significant predictive parameters for the 28-day mortality rate in intensive care units. Methods. This retrospective observational analytic study included 142 adult patients requiring mechanical ventilation, survivors (n = 68) and nonsurvivors (n = 74). Apparent strong ion difference (SIDapp), effective strong ion difference (SIDeff) and SIG values were calculated with the Stewart-Figge quantitative biophysical method. Descriptive and analytical statistical methods were used in the study [t-test, Mann-Whitney U test, χ2-test, binary logistic regression, Receiver Operating Characteristic (ROC) curves, calibration]. Results. Age, Na+, acute physiology and chronic health evaluation (APACHE II) score, Cl-, albumin, SIG, SIDapp, SIDeff, and anion gap (AG) were statistically significant predictors. AG represented a model with imprecise calibration, i.e. a model with little predictive power. APACHE II had a p-value above, or close to, 0.05 and could therefore be considered potentially unreliable for outcome prediction. SIDeff and SIG represented models with well-defined calibration. ROC analysis showed that APACHE II, Cl-, albumin, SIDeff, SIG and AG had the largest areas under the curve. By creating logistic models with calibration methods, we found that outcome depends on SIG and the APACHE II score. Conclusion. Based
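The Stewart-Figge quantities are straightforward to compute once the electrolytes, albumin, phosphate, and pH are known. A sketch using Figge's published charge fits for albumin and phosphate; the example values are typical normal figures chosen for illustration, not data from this study:

```python
def stewart_figge(na, k, ca, mg, cl, lactate, hco3, albumin_g_l, phos_mmol_l, ph):
    """Stewart-Figge quantities. Ion inputs in mEq/L, HCO3 in mmol/L,
    albumin in g/L, phosphate in mmol/L. Returns (SIDapp, SIDeff, SIG)."""
    sid_app = na + k + ca + mg - cl - lactate
    alb_charge = albumin_g_l * (0.123 * ph - 0.631)    # Figge's albumin charge fit
    phos_charge = phos_mmol_l * (0.309 * ph - 0.469)   # Figge's phosphate charge fit
    sid_eff = hco3 + alb_charge + phos_charge
    return sid_app, sid_eff, sid_app - sid_eff         # SIG = unmeasured anions

sid_app, sid_eff, sig = stewart_figge(
    na=140, k=4, ca=2.3, mg=1.4, cl=105, lactate=1,
    hco3=24, albumin_g_l=40, phos_mmol_l=1.2, ph=7.40)
print(round(sid_app, 1), round(sid_eff, 1), round(sig, 1))  # 41.7 37.3 4.4
```

An elevated SIG indicates unmeasured anions (e.g. ketoacids, toxins) driving metabolic acidosis, which is why the study evaluated it as a mortality predictor alongside the conventional anion gap.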

  8. Using a Prediction Model to Manage Cyber Security Threats.

    Science.gov (United States)

    Jaganathan, Venkatesh; Cherurveettil, Priyesh; Muthu Sivashanmugam, Premapriya

    2015-01-01

    Cyber-attacks are an important issue faced by all organizations. Securing information systems is critical. Organizations should be able to understand the ecosystem and predict attacks. Predicting attacks quantitatively should be part of risk management. The cost impact due to worms, viruses, or other malicious software is significant. This paper proposes a mathematical model to predict the impact of an attack based on significant factors that influence cyber security. This model also considers the environmental information required. It is generalized and can be customized to the needs of the individual organization.

  9. Using a Prediction Model to Manage Cyber Security Threats

    Directory of Open Access Journals (Sweden)

    Venkatesh Jaganathan

    2015-01-01

    Full Text Available Cyber-attacks are an important issue faced by all organizations. Securing information systems is critical. Organizations should be able to understand the ecosystem and predict attacks. Predicting attacks quantitatively should be part of risk management. The cost impact due to worms, viruses, or other malicious software is significant. This paper proposes a mathematical model to predict the impact of an attack based on significant factors that influence cyber security. This model also considers the environmental information required. It is generalized and can be customized to the needs of the individual organization.

  10. Simple Model for Identifying Critical Regions in Atrial Fibrillation

    Science.gov (United States)

    Christensen, Kim; Manani, Kishan A.; Peters, Nicholas S.

    2015-01-01

    Atrial fibrillation (AF) is the most common abnormal heart rhythm and the single biggest cause of stroke. Ablation, destroying regions of the atria, is applied largely empirically and can be curative but with a disappointing clinical success rate. We design a simple model of activation wave front propagation on an anisotropic structure mimicking the branching network of heart muscle cells. This integration of phenomenological dynamics and pertinent structure shows how AF emerges spontaneously when the transverse cell-to-cell coupling decreases beyond a threshold value, as occurs with age. We identify critical regions responsible for the initiation and maintenance of AF, the ablation of which terminates AF. The simplicity of the model allows us to calculate analytically the risk of arrhythmia and express the threshold value of transverse cell-to-cell coupling as a function of the model parameters. This threshold value decreases with increasing refractory period by reducing the number of critical regions which can initiate and sustain microreentrant circuits. These biologically testable predictions might inform ablation therapies and arrhythmic risk assessment.

  11. System for prediction and determination of the subcritical multiplication

    International Nuclear Information System (INIS)

    Martinez, Aquilino S.; Pereira, Valmir; Silva, Fernando C. da

    1997-01-01

    A concept is presented for a system which may be used to calculate and anticipate the subcritical multiplication of a PWR nuclear power plant. The system is divided into two different modules. The first module allows the theoretical prediction of the subcritical multiplication factor through the solution of the multigroup diffusion equation. The second module determines this factor based on the data acquired from the neutron detectors of an NPP external nuclear detection system. (author). 3 refs., 3 figs., 2 tabs
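
The quantity both modules estimate rests on the standard point-source relation M = 1/(1 - k_eff). A small illustrative sketch (the function and parameter names are hypothetical, not from the system described above):

```python
def subcritical_multiplication(k_eff):
    """Subcritical multiplication factor M = 1 / (1 - k_eff), valid for k_eff < 1."""
    if not 0 <= k_eff < 1:
        raise ValueError("point-source multiplication requires 0 <= k_eff < 1")
    return 1.0 / (1.0 - k_eff)

def predicted_count_rate(source_strength, detector_efficiency, k_eff):
    """Detector count rate scales as efficiency * source * M, the basis of
    1/M-type monitoring of an approach to criticality."""
    return detector_efficiency * source_strength * subcritical_multiplication(k_eff)
```

As k_eff approaches 1 the multiplication diverges, which is why measured count rates rise sharply as a core approaches criticality.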

  12. Pulsatile fluidic pump demonstration and predictive model application

    International Nuclear Information System (INIS)

    Morgan, J.G.; Holland, W.D.

    1986-04-01

    Pulsatile fluidic pumps were developed as a remotely controlled method of transferring or mixing feed solutions. A test in the Integrated Equipment Test facility demonstrated the performance of a critically safe geometry pump suitable for use in a 0.1-ton/d heavy metal (HM) fuel reprocessing plant. A predictive model was developed to calculate output flows under a wide range of external system conditions. Predictive and experimental flow rates are compared for both submerged and unsubmerged fluidic pump cases

  13. Iowa calibration of MEPDG performance prediction models.

    Science.gov (United States)

    2013-06-01

    This study aims to improve the accuracy of AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG) pavement : performance predictions for Iowa pavement systems through local calibration of MEPDG prediction models. A total of 130 : representative p...

  14. Model complexity control for hydrologic prediction

    NARCIS (Netherlands)

    Schoups, G.; Van de Giesen, N.C.; Savenije, H.H.G.

    2008-01-01

    A common concern in hydrologic modeling is overparameterization of complex models given limited and noisy data. This leads to problems of parameter nonuniqueness and equifinality, which may negatively affect prediction uncertainties. A systematic way of controlling model complexity is therefore

  15. Modeling the prediction of business intelligence system effectiveness.

    Science.gov (United States)

    Weng, Sung-Shun; Yang, Ming-Hsien; Koo, Tian-Lih; Hsiao, Pei-I

    2016-01-01

    Although business intelligence (BI) technologies are continually evolving, the capability to apply BI technologies has become an indispensable resource for enterprises operating in today's complex, uncertain and dynamic business environment. This study performed pioneering work by constructing models and rules for the prediction of business intelligence system effectiveness (BISE) in relation to the implementation of BI solutions. For enterprises, effectively managing the critical attributes that determine BISE, and developing prediction models with a set of rules for self-evaluating the effectiveness of BI solutions, is necessary to improve BI implementation and ensure its success. The main study findings identified the critical prediction indicators of BISE that are important for forecasting BI performance and highlighted five classification and prediction rules of BISE derived from decision tree structures, as well as a refined regression prediction model with four critical prediction indicators constructed by logistic regression analysis. These results can enable enterprises to improve BISE while effectively managing BI solution implementation, and offer theoretical grounding for academics.
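
The refined logistic regression model with four prediction indicators is not reproduced in the abstract; the sketch below only illustrates the general form such a model takes, with hypothetical coefficients:

```python
import math

# Hypothetical intercept and weights for illustration only; the abstract
# does not report the fitted model. `indicators` is a vector of the four
# critical prediction indicators (assumed already scaled).
INTERCEPT = -2.0
WEIGHTS = [0.8, 0.5, 1.1, 0.6]

def bise_probability(indicators):
    """Logistic regression: p = 1 / (1 + exp(-(b0 + sum(bi * xi))))."""
    z = INTERCEPT + sum(w * x for w, x in zip(WEIGHTS, indicators))
    return 1.0 / (1.0 + math.exp(-z))
```

Higher indicator values push the predicted probability of an effective BI system toward 1, lower values toward 0.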

  16. A theoretical prediction of critical heat flux in subcooled pool boiling during power transients

    International Nuclear Information System (INIS)

    Pasamehmetoglu, K.O.; Nelson, R.A.; Gunnerson, F.S.

    1988-01-01

    Understanding and predicting critical heat flux (CHF) behavior during steady-state and transient conditions is of fundamental interest in the design, operation, and safety of boiling and two-phase flow devices. This paper discusses the results of a comprehensive theoretical study made specifically to model transient CHF behavior in subcooled pool boiling. This study is based upon a simplified steady-state CHF model expressed in terms of the vapor mass growth period. The results obtained from this theory indicate favorable agreement with the experimental data from cylindrical heaters with small radii. The statistical nature of the vapor mass behavior in transient boiling is also considered, and upper and lower limits for the current theory are established. Various factors that affect the discrepancy between the data and the theory are discussed
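
The transient model itself is not reproduced in the abstract. For orientation, steady-state pool-boiling CHF models of this family typically start from the classical Zuber correlation, sketched here with approximate properties of saturated water at atmospheric pressure:

```python
def zuber_chf(h_fg, rho_g, rho_f, sigma, g=9.81):
    """Classical Zuber correlation for saturated pool-boiling CHF (W/m^2):
    q_chf = 0.131 * h_fg * rho_g**0.5 * (sigma * g * (rho_f - rho_g))**0.25
    with h_fg in J/kg, densities in kg/m^3, sigma in N/m."""
    return 0.131 * h_fg * rho_g ** 0.5 * (sigma * g * (rho_f - rho_g)) ** 0.25

# Saturated water at atmospheric pressure (approximate properties)
q = zuber_chf(h_fg=2.257e6, rho_g=0.598, rho_f=958.0, sigma=0.0589)
```

This evaluates to roughly 1.1 MW/m^2, the textbook pool-boiling CHF for water at 1 atm; transient and subcooled models add corrections on top of such a baseline.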

  17. Staying Power of Churn Prediction Models

    NARCIS (Netherlands)

    Risselada, Hans; Verhoef, Peter C.; Bijmolt, Tammo H. A.

    In this paper, we study the staying power of various churn prediction models. Staying power is defined as the predictive performance of a model in a number of periods after the estimation period. We examine two methods, logit models and classification trees, both with and without applying a bagging
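
Bagging, one of the stabilization techniques examined above, is simple to sketch from scratch: fit base models on bootstrap resamples and majority-vote their predictions. The threshold "stump" base learner below is a hypothetical stand-in for the paper's logit and tree models:

```python
import random

def train_stump(data):
    """Fit a 1-D threshold 'stump': predict churn if x >= best threshold.
    data: list of (x, label) pairs with label in {0, 1}."""
    best_thr, best_acc = None, -1.0
    for thr in sorted({x for x, _ in data}):
        acc = sum((x >= thr) == bool(y) for x, y in data) / len(data)
        if acc > best_acc:
            best_thr, best_acc = thr, acc
    return best_thr

def bagged_predict(data, x_new, n_models=25, seed=0):
    """Bagging: fit stumps on bootstrap resamples, majority-vote the predictions."""
    rng = random.Random(seed)
    votes = 0
    for _ in range(n_models):
        sample = [rng.choice(data) for _ in data]   # bootstrap resample
        votes += x_new >= train_stump(sample)
    return int(votes > n_models / 2)
```

Averaging over resamples is what gives bagged models their staying power relative to a single, variance-prone fit.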

  18. Extended Aging Theories for Predictions of Safe Operational Life of Critical Airborne Structural Components

    Science.gov (United States)

    Ko, William L.; Chen, Tony

    2006-01-01

    The previously developed Ko closed-form aging theory has been reformulated into a more compact mathematical form for easier application. A new equivalent loading theory and empirical loading theories have also been developed and incorporated into the revised Ko aging theory for the prediction of a safe operational life of airborne failure-critical structural components. The new set of aging and loading theories were applied to predict the safe number of flights for the B-52B aircraft to carry a launch vehicle, the structural life of critical components consumed by load excursion to proof load value, and the ground-sitting life of B-52B pylon failure-critical structural components. A special life prediction method was developed for the preflight predictions of operational life of failure-critical structural components of the B-52H pylon system, for which no flight data are available.

  19. Teaching For Art Criticism: Incorporating Feldman’s Critical Analysis Learning Model In Students’ Studio Practice

    Directory of Open Access Journals (Sweden)

    Maithreyi Subramaniam

    2016-01-01

    Full Text Available This study analyzed 30 first-year graphic design students’ artwork through critical analysis using Feldman’s model of art criticism. Data were analyzed quantitatively; descriptive statistical techniques were employed. The scores were examined as mean scores and frequencies to determine students’ performance in their critical ability. The Pearson correlation coefficient was used to find the correlation between students’ studio practice and art critical ability scores. The findings showed most students performed slightly better than average in the critical analyses and performed best in analysis among the four dimensions assessed. In the context of the students’ studio practice and critical ability, the findings showed some connections between the students’ art critical ability and studio practice.
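
The Pearson correlation coefficient used to relate studio practice and critical-ability scores can be computed directly from its definition:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Values near +1 indicate that higher studio-practice scores go with higher critical-ability scores; values near 0 indicate little linear association.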

  20. Analysis and prediction of the critical regions of antimicrobial peptides based on conditional random fields.

    Science.gov (United States)

    Chang, Kuan Y; Lin, Tung-pei; Shih, Ling-Yi; Wang, Chien-Kuo

    2015-01-01

    Antimicrobial peptides (AMPs) are potent drug candidates against microbes such as bacteria, fungi, parasites, and viruses. The size of AMPs ranges from less than ten to hundreds of amino acids. Often only a few amino acids, the critical regions of antimicrobial proteins, determine the functionality. Accurately predicting the AMP critical regions could therefore benefit experimental design. However, no extensive analyses have been done specifically on the AMP critical regions, and computational modeling of them is either non-existent or tailored to other problems. With a focus on the AMP critical regions, we thus develop a computational model, AMPcore, by introducing a state-of-the-art machine learning method, conditional random fields. We generate a comprehensive dataset of 798 AMP cores and a low-similarity dataset of 510 representative AMP cores. AMPcore reaches a maximal accuracy of 90% and a 0.79 Matthews correlation coefficient (MCC) on the comprehensive dataset, and a maximal accuracy of 83% and 0.66 MCC on the low-similarity dataset. Our analyses of AMP cores are consistent with what is known about AMPs: high in glycine and lysine, but low in aspartic acid, glutamic acid, and methionine; the abundance of α-helical structures; the dominance of positive net charges; the peculiarity of amphipathicity. Two amphipathic sequence motifs within the AMP cores, an amphipathic α-helix and an amphipathic π-helix, are revealed. In addition, a short sequence motif at the N-terminal boundary of AMP cores is reported for the first time: arginine at the P(-1) position coupling with glycine at the P1 position of AMP cores occurs most frequently, which might be linked to microbial cell adhesion.
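
The Matthews correlation coefficient (MCC) reported for AMPcore is computed from confusion-matrix counts; a minimal sketch:

```python
import math

def matthews_corrcoef(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts:
    (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)).
    Returns 0.0 when any marginal is empty, a common convention."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0
```

Unlike raw accuracy, MCC stays informative on imbalanced residue-level labels, which is why it accompanies the accuracy figures above.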

  1. A quenched c = 1 critical matrix model

    International Nuclear Information System (INIS)

    Qiu, Zongan; Rey, Soo-Jong.

    1990-12-01

    We study a variant of the Penner-Distler-Vafa model, proposed as a c = 1 quantum gravity: 'quenched' matrix model with logarithmic potential. The model is exactly soluble, and exhibits a two-cut branching as observed in multicritical unitary matrix models and multicut Hermitian matrix models. Using analytic continuation of the power in the conventional polynomial potential, we also show that both the Penner-Distler-Vafa model and our 'quenched' matrix model satisfy Virasoro algebra constraints

  2. Causal explanation, intentionality, and prediction: Evaluating the Criticism of "Deductivism"

    DEFF Research Database (Denmark)

    Koch, Carsten Allan

    2001-01-01

    of intentional explanation to be a candidate for being a universal law in social science. It is argued, against Popper himself, that this model fulfils Popper’s famous criterion for the demarcation of science and metaphysics, the falsifiability of the former (section 7). A third point of discussion concerns...

  3. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...
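
The Kalman predictor underlying both criteria can be sketched for a scalar state-space model; a minimal one-step-ahead implementation (illustrative only, not the paper's multi-step formulation):

```python
def kalman_one_step(a, c, q, r, y, x0=0.0, p0=1.0):
    """One-step-ahead Kalman predictions for the scalar model
    x[k+1] = a*x[k] + w,  y[k] = c*x[k] + v,
    with process/measurement noise variances q and r.
    Returns the sequence of one-step output predictions c*x[k|k-1]."""
    x, p = x0, p0
    y_pred = []
    for yk in y:
        y_pred.append(c * x)              # one-step output prediction
        s = c * p * c + r                 # innovation variance
        k = a * p * c / s                 # predictor gain
        x = a * x + k * (yk - c * x)      # combined time + measurement update
        p = a * p * a + q - k * s * k     # predictor Riccati recursion
    return y_pred
```

The prediction errors y[k] - y_pred[k] are exactly the innovations whose likelihood or squared sum the compared criteria are built from.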

  4. Combined dysfunctions of immune cells predict nosocomial infection in critically ill patients.

    Science.gov (United States)

    Conway Morris, A; Anderson, N; Brittan, M; Wilkinson, T S; McAuley, D F; Antonelli, J; McCulloch, C; Barr, L C; Dhaliwal, K; Jones, R O; Haslett, C; Hay, A W; Swann, D G; Laurenson, I F; Davidson, D J; Rossi, A G; Walsh, T S; Simpson, A J

    2013-11-01

    Nosocomial infection occurs commonly in intensive care units (ICUs). Although critical illness is associated with immune activation, the prevalence of nosocomial infections suggests concomitant immune suppression. This study examined the temporal occurrence of immune dysfunction across three immune cell types, and their relationship with the development of nosocomial infection. A prospective observational cohort study was undertaken in a teaching hospital general ICU. Critically ill patients were recruited and underwent serial examination of immune status, namely percentage regulatory T-cells (Tregs), monocyte deactivation (by HLA-DR expression) and neutrophil dysfunction (by CD88 expression). The occurrence of nosocomial infection was determined using pre-defined, objective criteria. Ninety-six patients were recruited, of whom 95 had data available for analysis. Relative to healthy controls, percentage Tregs were elevated 6-10 days after admission, while monocyte HLA-DR and neutrophil CD88 showed broader depression across the time points measured. Thirty-three patients (35%) developed nosocomial infection, and patients developing nosocomial infection showed significantly greater immune dysfunction by the measures used. Tregs and neutrophil dysfunction remained significantly predictive of infection in a Cox hazards model correcting for time effects and clinical confounders [hazard ratio (HR) 2.4, 95% confidence interval (CI) 1.1-5.4, and HR 6.9, 95% CI 1.6-30, respectively; P=0.001]. Cumulative immune dysfunction resulted in a progressive risk of infection, rising from no cases in patients with no dysfunction to 75% of patients with dysfunction of all three cell types (P=0.0004). Dysfunctions of T-cells, monocytes, and neutrophils predict acquisition of nosocomial infection, and combine additively to stratify risk of nosocomial infection in the critically ill.

  5. A Critical Analysis and Validation of the Accuracy of Wave Overtopping Prediction Formulae for OWECs

    Directory of Open Access Journals (Sweden)

    David Gallach-Sánchez

    2018-01-01

    Full Text Available The development of wave energy devices has grown in recent years. One type of device is the overtopping wave energy converter (OWEC), for which knowledge of wave overtopping rates is a basic and crucial aspect of design. In particular, the most interesting range to study is OWECs with steep slopes up to vertical walls, and with very small or zero freeboards where the overtopping rate is maximized; these can be generalized as steep low-crested structures. Recently, wave overtopping prediction formulae have been published for this type of structure, although their accuracy has not been fully assessed, as the overtopping data available in this range are scarce. We performed a critical analysis of the overtopping prediction formulae for steep low-crested structures and validated the accuracy of these formulae against new overtopping data for steep low-crested structures obtained at Ghent University. This paper summarizes the existing knowledge about average wave overtopping, describes the physical model tests performed, analyses the results and compares them to existing prediction formulae. The new dataset extends the wave overtopping data towards vertical walls and zero freeboard structures. In general, the new dataset validated the more recent overtopping formulae focused on steep slopes with small freeboards, although the formulae underpredict the average overtopping rates for very small and zero relative crest freeboards.
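
Overtopping prediction formulae of the kind validated here generally take an exponential form in the relative freeboard Rc/Hm0. A sketch using the EurOtop-type non-breaking form with illustrative default coefficients (the fitted coefficients for steep low-crested structures are not given in the abstract):

```python
import math

def overtopping_rate(hm0, rc, a=0.09, b=1.5, c=1.3, g=9.81):
    """Mean overtopping rate q (m^3/s per m crest length) from the
    exponential EurOtop-type form:
        q / sqrt(g * Hm0^3) = a * exp(-(b * Rc/Hm0)**c)
    hm0: spectral wave height (m); rc: crest freeboard (m).
    Defaults a, b, c are illustrative non-breaking-wave values."""
    q_star = a * math.exp(-((b * rc / hm0) ** c))
    return q_star * math.sqrt(g * hm0 ** 3)
```

The dimensionless rate is largest at zero freeboard and decays exponentially as the relative freeboard grows, which is why OWEC designs favor very small Rc.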

  6. Calibration of PMIS pavement performance prediction models.

    Science.gov (United States)

    2012-02-01

    Improve the accuracy of TxDOT's existing pavement performance prediction models through calibrating these models using actual field data obtained from the Pavement Management Information System (PMIS). : Ensure logical performance superiority patte...

  7. Predictive Model Assessment for Count Data

    National Research Council Canada - National Science Library

    Czado, Claudia; Gneiting, Tilmann; Held, Leonhard

    2007-01-01

    .... In case studies, we critique count regression models for patent data, and assess the predictive performance of Bayesian age-period-cohort models for larynx cancer counts in Germany. Key words: Calibration...
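
Proper scoring rules of the kind used to assess count-data forecasts can be illustrated with the logarithmic score under a Poisson predictive distribution:

```python
import math

def poisson_log_score(lam, y):
    """Negative log predictive density of observed count y under Poisson(lam);
    lower is better. The log score is a strictly proper scoring rule."""
    return -(y * math.log(lam) - lam - math.lgamma(y + 1))

def mean_log_score(lams, ys):
    """Average log score across forecast/observation pairs."""
    return sum(poisson_log_score(l, y) for l, y in zip(lams, ys)) / len(ys)
```

A well-calibrated, sharp forecaster earns a lower mean score than a miscalibrated one, which is the basis for ranking competing count regression models.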

  8. Modeling and Prediction Using Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp

    2016-01-01

    deterministic and can predict the future perfectly. A more realistic approach would be to allow for randomness in the model due to, e.g., the model being too simple or errors in input. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs) for modeling and forecasting. It is argued that this gives models and predictions which better reflect reality. The SDE approach also offers a more adequate framework for modeling and a number of efficient tools for model building. A software package (CTSM-R) for SDE-based modeling is briefly described. … that describes the variation between subjects. The ODE setup implies that the variation for a single subject is described by a single parameter (or vector), namely the variance (covariance) of the residuals. Furthermore the prediction of the states is given as the solution to the ODEs and hence assumed
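
The SDE approach adds a stochastic diffusion term to the ODE drift; a minimal Euler-Maruyama simulation sketch (illustrative only, unrelated to the CTSM-R implementation):

```python
import random

def euler_maruyama(drift, diffusion, x0, dt, n_steps, seed=1):
    """Simulate dX = drift(X) dt + diffusion(X) dW by the Euler-Maruyama scheme."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, dt ** 0.5)          # Brownian increment ~ N(0, dt)
        x = x + drift(x) * dt + diffusion(x) * dw
        path.append(x)
    return path

# Ornstein-Uhlenbeck example: state mean-reverts toward 0 with noisy dynamics
path = euler_maruyama(drift=lambda x: -0.5 * x, diffusion=lambda x: 0.1,
                      x0=5.0, dt=0.01, n_steps=2000)
```

Setting the diffusion to zero recovers the deterministic ODE solution; a nonzero diffusion makes the predicted state a distribution rather than a single trajectory.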

  9. Model Predictive Control Fundamentals | Orukpe | Nigerian Journal ...

    African Journals Online (AJOL)

    Model Predictive Control (MPC) has developed considerably over the last two decades, both within the research control community and in industry. The MPC strategy involves the optimization of a performance index with respect to some future control sequence, using predictions of the output signal based on a process model, ...
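
The receding-horizon idea behind MPC (optimize a performance index over a prediction horizon, apply only the first move, then re-optimize) can be sketched for an unconstrained scalar integrator; this is an illustrative toy, not a general MPC implementation:

```python
def mpc_gain(a, b, n_horizon, rho):
    """Gain for the first move of an N-step LQ horizon for x[k+1] = a*x + b*u,
    cost sum(x^2 + rho*u^2), obtained by backward Riccati recursion."""
    p = 1.0                                   # terminal state weight
    k_gain = 0.0
    for _ in range(n_horizon):
        k_gain = (a * b * p) / (rho + b * b * p)
        p = 1.0 + a * a * p - a * b * p * k_gain
    return k_gain

def mpc_track_integrator(b, x0, r, steps, n_horizon=10, rho=0.1):
    """Receding-horizon control of the integrator x[k+1] = x + b*u toward
    reference r: apply the horizon-optimal first move at every step."""
    k = mpc_gain(1.0, b, n_horizon, rho)
    x, traj = x0, [x0]
    for _ in range(steps):
        u = -k * (x - r)                      # first move of the optimal sequence
        x = x + b * u
        traj.append(x)
    return traj
```

For the integrator the tracking problem reduces exactly to regulating the deviation x - r, so the closed loop converges geometrically to the setpoint; constrained MPC replaces the closed-form gain with an online optimization.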

  10. Unreachable Setpoints in Model Predictive Control

    DEFF Research Database (Denmark)

    Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp

    2008-01-01

    In this work, a new model predictive controller is developed that handles unreachable setpoints better than traditional model predictive control methods. The new controller induces an interesting fast/slow asymmetry in the tracking response of the system. Nominal asymptotic stability of the optim...

  11. Systems modeling and simulation applications for critical care medicine.

    Science.gov (United States)

    Dong, Yue; Chbat, Nicolas W; Gupta, Ashish; Hadzikadic, Mirsad; Gajic, Ognjen

    2012-06-15

    Critical care delivery is a complex, expensive, and error-prone medical specialty and remains the focal point of major improvement efforts in healthcare delivery. Various modeling and simulation techniques offer unique opportunities to better understand the interactions between clinical physiology and care delivery. The novel insights gained from the systems perspective can then be used to develop and test new treatment strategies and make critical care delivery more efficient and effective. However, modeling and simulation applications in critical care remain underutilized. This article provides an overview of major computer-based simulation techniques as applied to critical care medicine. We provide three application examples of different simulation techniques: a) a pathophysiological model of acute lung injury, b) process modeling of critical care delivery, and c) an agent-based model to study the interaction between pathophysiology and healthcare delivery. Finally, we identify certain challenges to, and opportunities for, future research in the area.

  12. Prediction of the critical heat flux for saturated upward flow boiling water in vertical narrow rectangular channels

    International Nuclear Information System (INIS)

    Choi, Gil Sik; Chang, Soon Heung; Jeong, Yong Hoon

    2016-01-01

    A study on a theoretical method to predict the critical heat flux (CHF) of saturated upward flow boiling of water in vertical narrow rectangular channels has been conducted. For the assessment of this CHF prediction method, 608 experimental data points were selected from previous research, in which the heated sections were uniformly heated from both wide surfaces under high-pressure conditions above 41 bar. For this purpose, representative previous liquid film dryout (LFD) models for circular channels were reviewed using 6058 points from the KAIST CHF data bank. This review shows that it is reasonable to define the initial conditions of quality and entrainment fraction at onset of annular flow (OAF) as the transition to the annular flow regime and the equilibrium value, respectively, and that the prediction error of an LFD model depends on the accuracy of the constitutive equations for droplet deposition and entrainment. In the modified Levy model, the CHF data are predicted with a standard deviation (SD) of 14.0% and a root mean square error (RMSE) of 14.1%. Meanwhile, the present LFD model, which is based on the constitutive equations developed by Okawa et al., predicts the entire dataset with an SD of 17.1% and an RMSE of 17.3%. Because of its qualitative prediction trend and universal calculation convergence, the present model was finally selected as the best LFD model to predict the CHF for narrow rectangular channels. For the assessment of the present LFD model for narrow rectangular channels, 284 effective data points were selected. Using the present LFD model, these data are predicted with an RMSE of 22.9% under the dryout criterion of zero liquid film flow, but an RMSE of 18.7% with the rivulet formation model. This shows that the prediction error of the present LFD model for narrow rectangular channels is similar to that for circular channels.

  13. Clinical Prediction Models for Cardiovascular Disease: Tufts Predictive Analytics and Comparative Effectiveness Clinical Prediction Model Database.

    Science.gov (United States)

    Wessler, Benjamin S; Lai Yh, Lana; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S; Kent, David M

    2015-07-01

    Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease, there are numerous CPMs available although the extent of this literature is not well described. We conducted a systematic review for articles containing CPMs for cardiovascular disease published between January 1990 and May 2012. Cardiovascular disease includes coronary heart disease, heart failure, arrhythmias, stroke, venous thromboembolism, and peripheral vascular disease. We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year is increasing steadily over time. Seven hundred seventeen (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions, including 215 CPMs for patients with coronary artery disease, 168 CPMs for population samples, and 79 models for patients with heart failure. There are 77 distinct index/outcome pairings. Of the de novo models in this database, 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. There is an abundance of CPMs available for a wide assortment of cardiovascular disease conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models and the actual and potential clinical impact of this body of literature is poorly understood. © 2015 American Heart Association, Inc.
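
The c-statistic reported by most de novo CPMs is the probability that a randomly chosen case is scored above a randomly chosen non-case; a rank-based sketch:

```python
def c_statistic(scores_pos, scores_neg):
    """Probability that a randomly chosen event case is ranked above a
    randomly chosen non-case, counting ties as 1/2: the c-statistic
    (equivalently, the area under the ROC curve)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```

A value of 0.5 corresponds to no discrimination and 1.0 to perfect separation; discrimination alone, however, says nothing about calibration, which only 36% of the models report.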

  14. Adverse Condition and Critical Event Prediction in Cranfield Multiphase Flow Facility

    DEFF Research Database (Denmark)

    Egedorf, Søren; Shaker, Hamid Reza

    2017-01-01

    Today's complex processes and plants are vulnerable to different faults, misconfiguration, and non-holistic and improper control and management, which cause abnormal behavior and might eventually result in poor and sub-optimal operation, dissatisfaction, damage to the plant, to personnel and resources, or even to the environment. To cope with these, adverse condition and critical event prediction plays an important role. The Adverse Condition and Critical Event Prediction Toolbox (ACCEPT) is a tool which has recently been developed by NASA to allow for timely prediction of an adverse event, with low false alarm and missed detection rates. While ACCEPT has been shown to be an effective tool in some applications, its performance has not yet been evaluated on practical well-known benchmark examples. In this paper, ACCEPT is used for adverse condition and critical event prediction in a multiphase flow facility

  15. Hybrid approaches to physiologic modeling and prediction

    Science.gov (United States)

    Olengü, Nicholas O.; Reifman, Jaques

    2005-05-01

    This paper explores how the accuracy of a first-principles physiological model can be enhanced by integrating data-driven, "black-box" models with the original model to form a "hybrid" model system. Both linear (autoregressive) and nonlinear (neural network) data-driven techniques are separately combined with a first-principles model to predict human body core temperature. Rectal core temperature data from nine volunteers, subjected to four 30/10-minute cycles of a moderate exercise/rest regimen under both CONTROL and HUMID environmental conditions, are used to develop and test the approach. The results show significant improvements in prediction accuracy, with average improvements of up to 30% for prediction horizons of 20 minutes. The models developed from one subject's data are also used to predict another subject's core temperature. Initial results for this approach for a 20-minute horizon show no significant improvement over the first-principles model by itself.
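
The hybrid idea, a first-principles prediction corrected by a data-driven model of its own residuals, can be sketched with an AR(1) residual model (the helper names are hypothetical, not the paper's implementation):

```python
def fit_ar1(residuals):
    """Least-squares AR(1) coefficient phi for a series of first-principles
    prediction residuals: r[k] ~ phi * r[k-1]."""
    num = sum(residuals[i] * residuals[i - 1] for i in range(1, len(residuals)))
    den = sum(r * r for r in residuals[:-1])
    return num / den if den else 0.0

def hybrid_predict(physics_pred, last_residual, phi):
    """Hybrid prediction: the physics-based estimate corrected by the
    AR(1)-propagated residual from the previous step."""
    return physics_pred + phi * last_residual
```

Because the residual model only has to learn what the physics misses, the combined predictor can outperform either component alone, matching the improvements reported above.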

  16. Model for the resistive critical current transition in composite superconductors

    International Nuclear Information System (INIS)

    Warnes, W.H.

    1988-01-01

    Much of the research investigating technological type-II superconducting composites relies on the measurement of the resistive critical current transition. We have developed a model for the resistive transition which improves on older models by allowing for the very different nature of monofilamentary and multifilamentary composite structures. The monofilamentary model allows for axial current flow around critical current weak links in the superconducting filament. The multifilamentary model incorporates an additional radial current transfer between neighboring filaments. The development of both models is presented. It is shown that the models are useful for extracting more information from the experimental data than was formerly possible. Specific information obtainable from the experimental voltage-current characteristic includes the distribution of critical currents in the composite, the average critical current of the distribution, the range of critical currents in the composite, the field and temperature dependence of the distribution, and the fraction of the composite dissipating energy in flux flow at any current. This additional information about the distribution of critical currents may be helpful in leading toward a better understanding of flux pinning in technological superconductors. Comparisons of the models with several experiments are given and show reasonable agreement. Implications of the models for the measurement of critical currents in technological composites are presented and discussed with reference to basic flux pinning studies in such composites

  17. A Model for Critical Games Literacy

    Science.gov (United States)

    Apperley, Tom; Beavis, Catherine

    2013-01-01

    This article outlines a model for teaching both computer games and videogames in the classroom for teachers. The model illustrates the connections between in-game actions and youth gaming culture. The article explains how the out-of-school knowledge building, creation and collaboration that occurs in gaming and gaming culture has an impact on…

  18. Evaluating the Predictive Value of Growth Prediction Models

    Science.gov (United States)

    Murphy, Daniel L.; Gaertner, Matthew N.

    2014-01-01

    This study evaluates four growth prediction models--projection, student growth percentile, trajectory, and transition table--commonly used to forecast (and give schools credit for) middle school students' future proficiency. Analyses focused on vertically scaled summative mathematics assessments, and two performance standards conditions (high…

  19. Model predictive control classical, robust and stochastic

    CERN Document Server

    Kouvaritakis, Basil

    2016-01-01

    For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered and the state of the art in computationally tractable methods based on uncertainty tubes presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...

  20. Risk Prediction Models for Incident Heart Failure: A Systematic Review of Methodology and Model Performance.

    Science.gov (United States)

    Sahle, Berhe W; Owen, Alice J; Chin, Ken Lee; Reid, Christopher M

    2017-09-01

    Numerous models predicting the risk of incident heart failure (HF) have been developed; however, evidence of their methodological rigor and reporting remains unclear. This study critically appraises the methods underpinning incident HF risk prediction models. EMBASE and PubMed were searched for articles published between 1990 and June 2016 that reported at least 1 multivariable model for prediction of HF. Model development information, including study design, variable coding, missing data, and predictor selection, was extracted. Nineteen studies reporting 40 risk prediction models were included. Existing models have acceptable discriminative ability (C-statistics > 0.70), although only 6 models were externally validated. Candidate variable selection was based on statistical significance from a univariate screening in 11 models, whereas it was unclear in 12 models. Continuous predictors were retained in 16 models, whereas it was unclear how continuous variables were handled in 16 models. Missing values were excluded in 19 of 23 models that reported missing data, and the number of events per variable was below recommended levels in some models. Only 2 models presented recommended regression equations. There was significant heterogeneity in the discriminative ability of models with respect to age. Most studies reported prediction models with sufficient discriminative ability, although few are externally validated. Methods not recommended for the conduct and reporting of risk prediction modeling were frequently used, and the resulting algorithms should be applied with caution. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Model for predicting mountain wave field uncertainties

    Science.gov (United States)

    Damiens, Florentin; Lott, François; Millet, Christophe; Plougonven, Riwal

    2017-04-01

    Studying the propagation of acoustic waves through the troposphere requires knowledge of wind speed and temperature gradients from the ground up to about 10-20 km. Typical planetary boundary layer flows are known to present vertical low-level shears that can interact with mountain waves, thereby triggering small-scale disturbances. Resolving these fluctuations for long-range propagation problems is, however, not feasible because of computer memory/time restrictions and thus, they need to be parameterized. When the disturbances are small enough, these fluctuations can be described by linear equations. Previous works by co-authors have shown that the critical layer dynamics that occur near the ground produce large horizontal flows and buoyancy disturbances that result in intense downslope winds and gravity wave breaking. While these phenomena manifest almost systematically for high Richardson numbers and when the boundary layer depth is relatively small compared with the mountain height, the process by which static stability affects downslope winds remains unclear. In the present work, new linear mountain gravity wave solutions are tested against numerical predictions obtained with the Weather Research and Forecasting (WRF) model. For Richardson numbers typically larger than unity, the mesoscale model is used to quantify the effect of neglected nonlinear terms on downslope winds and mountain wave patterns. At these regimes, the large downslope winds transport warm air, a so-called "Foehn" effect that can impact sound propagation properties. The sensitivity of small-scale disturbances to Richardson number is quantified using two-dimensional spectral analysis. It is shown through a pilot study of subgrid scale fluctuations of boundary layer flows over realistic mountains that the cross-spectrum of the mountain wave field is made up of the same components found in WRF simulations. The impact of each individual component on acoustic wave propagation is discussed in terms of

  2. Software reliability models for critical applications

    Energy Technology Data Exchange (ETDEWEB)

    Pham, H.; Pham, M.

    1991-12-01

    This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program is studying the existing software reliability models and proposes a state-of-the-art software reliability model that is relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault tolerant software reliability models and their related issues, (2) proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second phase. 407 refs., 4 figs., 2 tabs.

  4. Relevance of information warfare models to critical infrastructure ...

    African Journals Online (AJOL)

    This article illustrates the relevance of information warfare models to critical infrastructure protection. Analogies of information warfare models to those of information security and information systems were used to deconstruct the models into their fundamental components, which are then discussed. The models were applied ...

  5. A Global Model for Bankruptcy Prediction.

    Science.gov (United States)

    Alaminos, David; Del Castillo, Agustín; Fernández, Manuel Ángel

    2016-01-01

    The recent world financial crisis has increased the number of bankruptcies in numerous countries and has resulted in a new area of research which responds to the need to predict this phenomenon, not only at the level of individual countries, but also at a global level, offering explanations of the common characteristics shared by the affected companies. Nevertheless, few studies focus on the prediction of bankruptcies globally. In order to compensate for this lack of empirical literature, this study has used a methodological framework of logistic regression to construct predictive bankruptcy models for Asia, Europe and America, and other global models for the whole world. The objective is to construct a global model with a high capacity for predicting bankruptcy in any region of the world. The results obtained have allowed us to confirm the superiority of the global model in comparison to regional models over periods of up to three years prior to bankruptcy.
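The study's methodological framework is logistic regression; the core scoring step can be sketched in a few lines. The coefficients, intercept, and input ratios below are hypothetical placeholders for illustration, not values estimated in the paper.

```python
import math

def bankruptcy_probability(ratios, coef, intercept):
    """Logistic-regression score: P(bankruptcy) = 1 / (1 + exp(-(b0 + b.x)))."""
    z = intercept + sum(c * x for c, x in zip(coef, ratios))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients for illustration only (not from the study):
# e.g. profitability, liquidity, and leverage ratios of a firm.
coef = [-2.1, -1.4, 0.9]
intercept = -0.5
p = bankruptcy_probability([0.10, 1.8, 0.65], coef, intercept)
print(round(p, 3))
```

A global model in this framework is simply one set of coefficients fitted on pooled data from all regions, which can then be compared against region-specific coefficient sets.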

  6. CRITICAL ANALYSIS OF EVALUATION MODEL LOMCE

    Directory of Open Access Journals (Sweden)

    José Luis Bernal Agudo

    2015-06-01

    Full Text Available The evaluation model projected by the LOMCE has its roots in neoliberal beliefs, reflecting a specific way of understanding the world. What matters is not the process but the results, with evaluation at the center of the teaching-learning processes. The model reflects poor planning, since the theory that justifies it is not developed into coherent proposals; there is an excessive concern for excellence, and diversity is left out. A comprehensive way of understanding education should be recovered.

  7. Fingerprint verification prediction model in hand dermatitis.

    Science.gov (United States)

    Lee, Chew K; Chang, Choong C; Johor, Asmah; Othman, Puwira; Baba, Roshidah

    2015-07-01

    Hand dermatitis associated fingerprint changes is a significant problem and affects fingerprint verification processes. This study was done to develop a clinically useful prediction model for fingerprint verification in patients with hand dermatitis. A case-control study involving 100 patients with hand dermatitis. All patients verified their thumbprints against their identity card. Registered fingerprints were randomized into a model derivation and model validation group. Predictive model was derived using multiple logistic regression. Validation was done using the goodness-of-fit test. The fingerprint verification prediction model consists of a major criterion (fingerprint dystrophy area of ≥ 25%) and two minor criteria (long horizontal lines and long vertical lines). The presence of the major criterion predicts it will almost always fail verification, while presence of both minor criteria and presence of one minor criterion predict high and low risk of fingerprint verification failure, respectively. When none of the criteria are met, the fingerprint almost always passes the verification. The area under the receiver operating characteristic curve was 0.937, and the goodness-of-fit test showed agreement between the observed and expected number (P = 0.26). The derived fingerprint verification failure prediction model is validated and highly discriminatory in predicting risk of fingerprint verification in patients with hand dermatitis. © 2014 The International Society of Dermatology.
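The derived decision rule reads directly as code. A minimal sketch of the criteria described in the abstract (major criterion: fingerprint dystrophy area of at least 25%; minor criteria: long horizontal and long vertical lines); the function name and risk labels are illustrative.

```python
def verification_risk(dystrophy_area_pct, long_horizontal_lines, long_vertical_lines):
    """Predict fingerprint verification outcome from the derived criteria."""
    if dystrophy_area_pct >= 25:          # major criterion met
        return "almost always fails"
    minors = int(long_horizontal_lines) + int(long_vertical_lines)
    if minors == 2:                        # both minor criteria
        return "high risk of failure"
    if minors == 1:                        # one minor criterion
        return "low risk of failure"
    return "almost always passes"          # no criteria met

print(verification_risk(30, False, False))
print(verification_risk(10, True, True))
```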

  8. Massive Predictive Modeling using Oracle R Enterprise

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...
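The "model per entity" idea is independent of the Oracle stack. A stdlib-Python sketch of the pattern, with toy data and a simple least-squares line standing in for each entity's model; entity names and values are invented for illustration.

```python
from collections import defaultdict
import statistics

# Toy per-entity observations: (entity, x, y).
observations = [
    ("cust_a", 1, 2.1), ("cust_a", 2, 3.9), ("cust_a", 3, 6.2),
    ("cust_b", 1, 5.0), ("cust_b", 2, 4.1), ("cust_b", 3, 2.9),
]

def fit_line(points):
    """Closed-form least-squares line y = slope*x + intercept."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    slope = sum((x - mx) * (y - my) for x, y in points) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Group data by entity, then fit one model per entity.
by_entity = defaultdict(list)
for entity, x, y in observations:
    by_entity[entity].append((x, y))

models = {e: fit_line(pts) for e, pts in by_entity.items()}
print({e: round(m[0], 2) for e, m in models.items()})  # per-entity slopes
```

In Oracle R Enterprise the same split-fit-store pattern runs inside the database via embedded R execution; the sketch only illustrates the modeling idea.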

  9. Uncertainty modelling of critical column buckling for reinforced ...

    Indian Academy of Sciences (India)

    Buckling is a critical issue for structural stability in structural design. ... This study investigates the material uncertainties on column design and proposes an uncertainty model for critical column buckling reinforced concrete buildings. ... Civil Engineering Department, Suleyman Demirel University, Isparta 32260, Turkey ...

  10. Causal Measurement Models: Can Criticism Stimulate Clarification?

    Science.gov (United States)

    Markus, Keith A.

    2016-01-01

    In their 2016 work, Aguirre-Urreta et al. provided a contribution to the literature on causal measurement models that enhances clarity and stimulates further thinking. Aguirre-Urreta et al. presented a form of statistical identity involving mapping onto the portion of the parameter space involving the nomological net, relationships between the…

  11. [Predictive value of four pediatric scores of critical illness and mortality on evaluating mortality risk in pediatric critical patients].

    Science.gov (United States)

    Zhang, Lidan; Huang, Huimin; Cheng, Yucai; Xu, Lingling; Huang, Xueqiong; Pei, Yuxin; Tang, Wen; Qin, Zhaoyuan

    2018-01-01

    To assess the performance of pediatric clinical illness score (PCIS), pediatric risk of mortality score III (PRISM III), pediatric logistic organ dysfunction score 2 (PELOD-2), and pediatric multiple organ dysfunction score (P-MODS) in predicting mortality in critically ill pediatric patients. The data of critically ill pediatric patients admitted to the Pediatric Intensive Care Unit (PICU) of the First Affiliated Hospital of Sun Yat-Sen University from August 2012 to May 2017 were retrospectively analyzed. The gender, age, basic diseases, and length of PICU stay were collected. The children were divided into survival and non-survival groups according to the clinical outcome during hospitalization. The variables of PCIS, PRISM III, PELOD-2, and P-MODS were collected and scored. Receiver operating characteristic (ROC) curves were plotted, and the efficiency of PCIS, PRISM III, PELOD-2, and P-MODS for predicting death was evaluated by the area under the ROC curve (AUC). The Hosmer-Lemeshow goodness-of-fit test was used to evaluate the agreement between the mortality predicted by each scoring system and the actual mortality. Of 461 critically ill children, 35 were excluded because of serious data loss, hospital stay not exceeding 24 hours, or death within 8 hours after admission. Finally, a total of 426 pediatric patients were enrolled in this study; 355 survived, while 71 did not survive during hospitalization, giving a mortality of 16.7%. There was no significant difference in gender, age, underlying diseases or length of PICU stay between the two groups. The PCIS score in the non-survival group was significantly lower than that of the survival group [80 (76, 88) vs. 86 (80, 92)], and PRISM III, PELOD-2 and P-MODS scores were significantly increased [PRISM III: 16 (13, 22) vs. 12 (10, 15), PELOD-2: 6 (5, 9) vs. 4 (2, 5), P-MODS: 6 (4, 9) vs. 3 (2, 6), all P < 0.01]. ROC curve analysis showed that the AUCs of PCIS, PRISM III, PELOD-2, and P-MODS for predicting
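The AUC used to compare such scores equals the probability that a randomly chosen non-survivor receives a higher score than a randomly chosen survivor (the Mann-Whitney interpretation). A minimal sketch; the score values below are illustrative, not the study's data.

```python
def roc_auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: fraction of (positive, negative)
    pairs in which the positive case scores higher (ties count half)."""
    wins = ties = 0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

# Toy PRISM-III-like scores (higher = sicker); values are illustrative only.
non_survivors = [16, 13, 22, 18]
survivors = [12, 10, 15, 9, 11]
print(roc_auc(non_survivors, survivors))
```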

  12. Predictive Model of Systemic Toxicity (SOT)

    Science.gov (United States)

    In an effort to ensure chemical safety in light of regulatory advances away from reliance on animal testing, USEPA and L’Oréal have collaborated to develop a quantitative systemic toxicity prediction model. Prediction of human systemic toxicity has proved difficult and remains a ...

  13. Testicular Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing testicular cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  14. Pancreatic Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing pancreatic cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  15. Colorectal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  16. Prostate Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing prostate cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  17. Bladder Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing bladder cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  18. Esophageal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing esophageal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  19. Cervical Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing cervical cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  20. Breast Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing breast cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  1. Lung Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing lung cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  2. Liver Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing liver cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  3. Ovarian Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing ovarian cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  4. The U(1)-Higgs model: critical behaviour in the confining-Higgs region

    International Nuclear Information System (INIS)

    Alonso, J.L.; Azcoiti, V.; Campos, I.; Ciria, J.C.; Cruz, A.; Iniguez, D.; Lesmes, F.; Piedrafita, C.; Rivero, A.; Tarancon, A.; Badoni, D.; Fernandez, L.A.; Munoz Sudupe, A.; Ruiz-Lorenzo, J.J.; Gonzalez-Arroyo, A.; Martinez, P.; Pech, J.; Tellez, P.

    1993-01-01

    We study numerically the critical properties of the U(1)-Higgs lattice model, with fixed Higgs modulus, in the region of small gauge coupling where the Higgs and confining phases merge. We find evidence for a first-order transition line that ends in a second-order point. By means of a rotation in parameter space we introduce thermodynamic magnitudes and critical exponents in close resemblance with simple models that show analogous critical behaviour. The measured data allow us to fit the critical exponents finding values in agreement with the mean-field prediction. The location of the critical point and the slope of the first-order line are accurately measured. (orig.)

  5. Working Towards a Risk Prediction Model for Neural Tube Defects

    Science.gov (United States)

    Agopian, A.J.; Lupo, Philip J.; Tinker, Sarah C.; Canfield, Mark A.; Mitchell, Laura E.

    2015-01-01

    BACKGROUND Several risk factors have been consistently associated with neural tube defects (NTDs). However, the predictive ability of these risk factors in combination has not been evaluated. METHODS To assess the predictive ability of established risk factors for NTDs, we built predictive models using data from the National Birth Defects Prevention Study, which is a large, population-based study of nonsyndromic birth defects. Cases with spina bifida or anencephaly, or both (n = 1239), and controls (n = 8494) were randomly divided into separate training (75% of cases and controls) and validation (remaining 25%) samples. Multivariable logistic regression models were constructed with the training samples. The predictive ability of these models was evaluated in the validation samples by assessing the area under the receiver operator characteristic curves. An ordinal predictive risk index was also constructed and evaluated. In addition, the ability of classification and regression tree (CART) analysis to identify subgroups of women at increased risk for NTDs in offspring was evaluated. RESULTS The predictive ability of the multivariable models was poor (area under the receiver operating curve: 0.55 for spina bifida only, 0.59 for anencephaly only, and 0.56 for anencephaly and spina bifida combined). The predictive abilities of the ordinal risk indexes and CART models were also low. CONCLUSION Current established risk factors for NTDs are insufficient for population-level prediction of a woman's risk for having affected offspring. Identification of genetic risk factors and novel nongenetic risk factors will be critical to establishing models with good predictive ability for NTDs. PMID:22253139

  6. Posterior Predictive Model Checking in Bayesian Networks

    Science.gov (United States)

    Crawford, Aaron

    2014-01-01

    This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex…

  7. Competency-Based Model for Predicting Construction Project Managers Performance

    OpenAIRE

    Dainty, A. R. J.; Cheng, M.; Moore, D. R.

    2005-01-01

    Using behavioral competencies to influence human resource management decisions is gaining popularity in business organizations. This study identifies the core competencies associated with the construction management role and, further, develops a predictive model to inform human resource selection and development decisions within large construction organizations. A range of construction managers took part in behavioral event interviews in which staff were asked to recount critical management inci...

  8. Critical fluctuations in cortical models near instability

    Directory of Open Access Journals (Sweden)

    Matthew J. Aburn

    2012-08-01

    Full Text Available Computational studies often proceed from the premise that cortical dynamics operate in a linearly stable domain, where fluctuations dissipate quickly and show only short memory. Studies of human EEG, however, have shown significant autocorrelation at time lags on the scale of minutes, indicating the need to consider regimes where nonlinearities influence the dynamics. Statistical properties such as increased autocorrelation length, increased variance, power-law scaling and bistable switching have been suggested as generic indicators of the approach to bifurcation in nonlinear dynamical systems. We study temporal fluctuations in a widely-employed computational model (the Jansen-Rit model) of cortical activity, examining the statistical signatures that accompany bifurcations. Approaching supercritical Hopf bifurcations through tuning of the background excitatory input, we find a dramatic increase in the autocorrelation length that depends sensitively on the direction in phase space of the input fluctuations and hence on which neuronal subpopulation is stochastically perturbed. Similar dependence on the input direction is found in the distribution of fluctuation size and duration, which show power-law scaling that extends over four orders of magnitude at the Hopf bifurcation. We conjecture that the alignment in phase space between the input noise vector and the center manifold of the Hopf bifurcation is directly linked to these changes. These results are consistent with the possibility of statistical indicators of linear instability being detectable in real EEG time series. However, even in a simple cortical model, we find that these indicators may not necessarily be visible even when bifurcations are present because their expression can depend sensitively on the neuronal pathway of incoming fluctuations.
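The rise in autocorrelation length near a bifurcation can be illustrated with a much simpler surrogate than the Jansen-Rit model: a linear AR(1) process x[t+1] = a·x[t] + noise, whose lag-1 autocorrelation grows toward 1 as the coefficient a approaches the stability boundary. This is a generic critical-slowing-down sketch, not the cortical model itself.

```python
import random
import statistics

def lag1_autocorr(series):
    """Sample autocorrelation at lag 1."""
    m = statistics.fmean(series)
    num = sum((series[t] - m) * (series[t + 1] - m) for t in range(len(series) - 1))
    den = sum((x - m) ** 2 for x in series)
    return num / den

def simulate_ar1(a, n=20000, seed=1):
    """Simulate x[t+1] = a*x[t] + Gaussian noise; unstable as a -> 1."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = a * x + rng.gauss(0, 1)
        out.append(x)
    return out

far = lag1_autocorr(simulate_ar1(0.2))    # far from instability
near = lag1_autocorr(simulate_ar1(0.95))  # near the stability boundary
print(round(far, 2), round(near, 2))
```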

  9. Predicting and Modeling RNA Architecture

    Science.gov (United States)

    Westhof, Eric; Masquida, Benoît; Jossinet, Fabrice

    2011-01-01

    SUMMARY A general approach for modeling the architecture of large and structured RNA molecules is described. The method exploits the modularity and the hierarchical folding of RNA architecture that is viewed as the assembly of preformed double-stranded helices defined by Watson-Crick base pairs and RNA modules maintained by non-Watson-Crick base pairs. Despite the extensive molecular neutrality observed in RNA structures, specificity in RNA folding is achieved through global constraints like lengths of helices, coaxiality of helical stacks, and structures adopted at the junctions of helices. The Assemble integrated suite of computer tools allows for sequence and structure analysis as well as interactive modeling by homology or ab initio assembly with possibilities for fitting within electronic density maps. The local key role of non-Watson-Crick pairs guides RNA architecture formation and offers metrics for assessing the accuracy of three-dimensional models in a more useful way than usual root mean square deviation (RMSD) values. PMID:20504963

  10. Multiple Steps Prediction with Nonlinear ARX Models

    OpenAIRE

    Zhang, Qinghua; Ljung, Lennart

    2007-01-01

    NLARX (NonLinear AutoRegressive with eXogenous inputs) models are frequently used in black-box nonlinear system identification. Though it is easy to make one-step-ahead predictions with such models, multiple-step prediction is far from trivial. The main difficulty is that in general there is no easy way to compute the mathematical expectation of an output conditioned by past measurements. An optimal solution would require intensive numerical computations related to nonlinear filtering. The pur...
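A common naive workaround is to iterate the one-step predictor, feeding predictions back in place of past outputs. As the abstract notes, this only approximates the conditional expectation (exactly so for a linear model); the toy model below is invented for illustration.

```python
def narx_onestep(y_prev, u_prev):
    """A toy nonlinear ARX one-step model: y[t] = f(y[t-1], u[t-1])."""
    return 0.8 * y_prev - 0.1 * y_prev ** 2 + 0.5 * u_prev

def multistep_predict(y0, inputs):
    """Naive k-step-ahead prediction: feed predictions back as past outputs.
    Ignores the expectation over future noise through the nonlinearity."""
    y = y0
    preds = []
    for u in inputs:
        y = narx_onestep(y, u)
        preds.append(y)
    return preds

preds = multistep_predict(1.0, [0.0, 0.0, 0.0])
print([round(p, 4) for p in preds])
```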

  11. Predictability of extreme values in geophysical models

    Directory of Open Access Journals (Sweden)

    A. E. Sterk

    2012-09-01

    Full Text Available Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical models. We study whether finite-time Lyapunov exponents are larger or smaller for initial conditions leading to extremes. General statements on whether extreme values are better or less predictable are not possible: the predictability of extreme values depends on the observable, the attractor of the system, and the prediction lead time.
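A finite-time Lyapunov exponent of the kind used here averages the log of the local stretching rate along a finite orbit segment. A sketch for the fully chaotic logistic map, a one-dimensional toy system (not one of the paper's geophysical models):

```python
import math

def ftle_logistic(x0, r=4.0, steps=200):
    """Finite-time Lyapunov exponent of the logistic map x -> r*x*(1-x):
    average of log|f'(x_t)| = log|r*(1 - 2*x_t)| along a finite orbit."""
    x, total = x0, 0.0
    for _ in range(steps):
        total += math.log(abs(r * (1.0 - 2.0 * x)) + 1e-300)  # guard log(0)
        x = r * x * (1.0 - x)
    return total / steps

# A positive value indicates local exponential divergence over this horizon.
print(round(ftle_logistic(0.123), 3))
```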

  12. Model complexity control for hydrologic prediction

    Science.gov (United States)

    Schoups, G.; van de Giesen, N. C.; Savenije, H. H. G.

    2008-12-01

    A common concern in hydrologic modeling is overparameterization of complex models given limited and noisy data. This leads to problems of parameter nonuniqueness and equifinality, which may negatively affect prediction uncertainties. A systematic way of controlling model complexity is therefore needed. We compare three model complexity control methods for hydrologic prediction, namely, cross validation (CV), Akaike's information criterion (AIC), and structural risk minimization (SRM). Results show that simulation of water flow using non-physically-based models (polynomials in this case) leads to increasingly better calibration fits as the model complexity (polynomial order) increases. However, prediction uncertainty worsens for complex non-physically-based models because of overfitting of noisy data. Incorporation of physically based constraints into the model (e.g., storage-discharge relationship) effectively bounds prediction uncertainty, even as the number of parameters increases. The conclusion is that overparameterization and equifinality do not lead to a continued increase in prediction uncertainty, as long as models are constrained by such physical principles. Complexity control of hydrologic models reduces parameter equifinality and identifies the simplest model that adequately explains the data, thereby providing a means of hydrologic generalization and classification. SRM is a promising technique for this purpose, as it (1) provides analytic upper bounds on prediction uncertainty, hence avoiding the computational burden of CV, and (2) extends the applicability of classic methods such as AIC to finite data. The main hurdle in applying SRM is the need for an a priori estimation of the complexity of the hydrologic model, as measured by its Vapnik-Chervonenkis (VC) dimension. Further research is needed in this area.
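The polynomial example above can be reproduced as a small experiment: fit increasing orders and let AIC, here n·ln(RSS/n) + 2k with k parameters, penalize complexity. A stdlib-only sketch on synthetic data (the data and true model are invented for illustration):

```python
import math
import random

def polyfit(xs, ys, order):
    """Least-squares polynomial fit via the normal equations."""
    k = order + 1
    # Build X^T X and X^T y for the Vandermonde design matrix.
    A = [[sum(x ** (i + j) for x in xs) for j in range(k)] for i in range(k)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(k)]
    for col in range(k):  # Gaussian elimination with partial pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * c for a, c in zip(A[r], A[col])]
            b[r] -= f * b[col]
    coef = [0.0] * k
    for i in reversed(range(k)):  # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, k))) / A[i][i]
    return coef

def aic(xs, ys, order):
    """AIC = n*ln(RSS/n) + 2k for a polynomial of the given order."""
    coef = polyfit(xs, ys, order)
    rss = sum((y - sum(c * x ** i for i, c in enumerate(coef))) ** 2
              for x, y in zip(xs, ys))
    n = len(xs)
    return n * math.log(rss / n) + 2 * (order + 1)

rng = random.Random(0)
xs = [i / 10 for i in range(40)]
ys = [1.0 + 2.0 * x - 0.5 * x * x + rng.gauss(0, 0.2) for x in xs]  # true order 2
best = min(range(6), key=lambda p: aic(xs, ys, p))
print(best)
```

AIC keeps improving while added terms genuinely explain structure, then worsens once they only chase noise; SRM and CV play the same role with different penalties.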

  13. Comparison of the models of financial distress prediction

    Directory of Open Access Journals (Sweden)

    Jiří Omelka

    2013-01-01

    Full Text Available Prediction of financial distress generally approximates whether a business entity is close to bankruptcy or at least to serious financial problems. Financial distress is defined as a situation in which a company is not able to satisfy its liabilities in any form, or in which its liabilities exceed its assets. Classification of the financial situation of business entities is a multidisciplinary scientific issue that draws not only on economic theory but also on statistical and econometric approaches. The first models of financial distress prediction originated in the sixties of the 20th century. One of the best known is Altman's model, followed by a range of others constructed on more or less comparable bases. In many existing models it is possible to find common elements which could be marked as elementary indicators of potential financial distress of a company. The objective of this article is, based on a comparison of existing models of financial distress prediction, to define a set of basic indicators of a company's financial distress while identifying their critical aspects. The sample defined this way will provide a background for future research focused on the determination of a one-dimensional model of financial distress prediction, which would subsequently become a basis for the construction of a multi-dimensional prediction model.
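Altman's model mentioned above is a linear discriminant over five accounting ratios; the original 1968 coefficients and zone cut-offs (for public manufacturing firms) are well known and sketched below. The input ratios are illustrative.

```python
def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    """Altman (1968) Z-score for public manufacturing firms:
    Z = 1.2*X1 + 1.4*X2 + 3.3*X3 + 0.6*X4 + 1.0*X5, where the X's are
    working capital, retained earnings, EBIT (each over total assets),
    market value of equity over total liabilities, and sales over assets."""
    return 1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta + 0.6 * mve_tl + 1.0 * sales_ta

def zone(z):
    """Classic zones: distress below 1.81, safe above 2.99, grey between."""
    if z > 2.99:
        return "safe"
    if z < 1.81:
        return "distress"
    return "grey"

z = altman_z(0.05, 0.10, 0.03, 0.40, 0.9)  # illustrative ratios
print(round(z, 2), zone(z))
```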

  14. Durability and life prediction modeling in polyimide composites

    Science.gov (United States)

    Binienda, Wieslaw K.

    1995-01-01

    Sudden appearance of cracks on a macroscopically smooth surface of brittle materials due to cooling or drying shrinkage is a phenomenon related to many engineering problems. Although conventional strength theories can be used to predict the necessary condition for crack appearance, they are unable to predict crack spacing and depth. On the other hand, fracture mechanics theory can only study the behavior of existing cracks. The theory of crack initiation can be summarized into three conditions, which combine a strength criterion with laws of energy conservation; the average crack spacing and depth can thus be determined. The problem of crack initiation from the surface of an elastic half plane is solved and compares quite well with available experimental evidence. The theory of crack initiation is also applied to concrete pavements. The influence of cracking is modeled by the additional compliance according to Okamura's method. The theoretical prediction by this structural mechanics type of model correlates very well with the field observation. The model may serve as a theoretical foundation for future pavement joint design. The initiation of interactive cracks of quasi-brittle material is studied based on a theory of cohesive crack model. These cracks may grow simultaneously, or some of them may close during certain stages. The concept of crack unloading of the cohesive crack model is proposed. The critical behavior (crack bifurcation, maximum loads) of the cohesive crack model is characterized by rate equations. The post-critical behavior of crack initiation is also studied.

  15. Federated Modelling and Simulation for Critical Infrastructure Protection

    NARCIS (Netherlands)

    Rome, E.; Langeslag, P.J.H.; Usov, A.

    2014-01-01

    Modelling and simulation is an important tool for Critical Infrastructure (CI) dependency analysis, for testing methods for risk reduction, and also for the evaluation of past failures. Moreover, interaction of such simulations with external threat models, e.g., a river flood model, or economic

  16. Uncertainty modelling of critical column buckling for reinforced ...

    Indian Academy of Sciences (India)

    This study investigates the material uncertainties on column design and proposes an uncertainty model for critical ... enhances the accuracy of the structural models by using experimental results and design codes (Baalbaki et al ...) ... Elishakoff I 1999 Whys and hows in uncertainty modeling, probability, fuzziness and anti-optimization. New York: ...

  17. Critical Comments on the General Model of Instructional Communication

    Science.gov (United States)

    Walton, Justin D.

    2014-01-01

    This essay presents a critical commentary on McCroskey et al.'s (2004) general model of instructional communication. In particular, five points are examined which make explicit and problematize the meta-theoretical assumptions of the model. Comments call attention to the limitations of the model and argue for a broader approach to…

  18. Quantifying predictive accuracy in survival models.

    Science.gov (United States)

    Lirette, Seth T; Aban, Inmaculada

    2017-12-01

    For time-to-event outcomes in medical research, survival models are the most appropriate to use. Unlike logistic regression models, quantifying the predictive accuracy of these models is not a trivial task. We present the classes of concordance (C) statistics and R² statistics often used to assess the predictive ability of these models. The discussion focuses on Harrell's C, Kent and O'Quigley's R², and Royston and Sauerbrei's R². We present similarities and differences between the statistics, discuss the software options from the most widely used statistical analysis packages, and give a practical example using the Worcester Heart Attack Study dataset.
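    Harrell's C described above can be computed directly: among all comparable pairs (those where the shorter observed time is an uncensored event), it is the fraction in which the model assigns the higher risk to the earlier failure, counting tied risks as half. A minimal sketch; the function name and toy data are illustrative, not from the paper:

```python
import numpy as np

def harrell_c(time, event, risk):
    """Harrell's concordance index for right-censored survival data.

    time  : observed times
    event : 1 if the event occurred, 0 if censored
    risk  : model risk scores (higher score = earlier predicted event)
    """
    time, event, risk = map(np.asarray, (time, event, risk))
    concordant, ties, comparable = 0, 0, 0
    n = len(time)
    for i in range(n):
        if not event[i]:
            continue  # a pair is comparable only if the earlier time is an event
        for j in range(n):
            if time[j] > time[i]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / comparable

# Perfectly ranked toy example: higher risk -> earlier event
print(harrell_c([2, 4, 6, 8], [1, 1, 1, 0], [0.9, 0.7, 0.4, 0.1]))  # -> 1.0
```

    In practice one would use a tested implementation (e.g. `concordance_index` in the lifelines package) rather than this O(n²) loop.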

  19. Predictive power of nuclear-mass models

    Directory of Open Access Journals (Sweden)

    Yu. A. Litvinov

    2013-12-01

    Ten different theoretical models are tested for their predictive power in the description of nuclear masses. Two sets of experimental masses are used for the test: the older set of 2003 and the newer one of 2011. The predictive power is studied in two regions of nuclei: the global region (Z, N ≥ 8) and the heavy-nuclei region (Z ≥ 82, N ≥ 126). No clear correlation is found between the predictive power of a model and the accuracy of its description of the masses.

  20. Return Predictability, Model Uncertainty, and Robust Investment

    DEFF Research Database (Denmark)

    Lukas, Manuel

    Stock return predictability is subject to great uncertainty. In this paper we use the model confidence set approach to quantify uncertainty about expected utility from investment, accounting for potential return predictability. For monthly US data and six representative return prediction models, we find that confidence sets are very wide, change significantly with the predictor variables, and frequently include expected utilities for which the investor prefers not to invest. The latter motivates a robust investment strategy maximizing the minimal element of the confidence set. The robust investor allocates a much lower share of wealth to stocks compared to a standard investor.

  1. Spatial Economics Model Predicting Transport Volume

    Directory of Open Access Journals (Sweden)

    Lu Bo

    2016-10-01

    It is extremely important to predict logistics requirements in a scientific and rational way. In recent years, however, improvements in prediction methods have been modest: traditional statistical methods suffer from low precision and poor interpretability, so they can neither guarantee the generalization ability of the prediction model theoretically nor explain the model effectively. Therefore, combining theories from spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, this study identifies the leading industries that generate large cargo volumes and predicts the static logistics generation of Zhuanghe and its hinterland. By integrating the various factors that affect regional logistics requirements, the study establishes a logistics requirements potential model based on spatial economic principles, extending logistics requirements prediction from purely statistical principles to the new domain of spatial and regional economics.

  2. Accuracy assessment of landslide prediction models

    International Nuclear Information System (INIS)

    Othman, A N; Mohd, W M N W; Noraini, S

    2014-01-01

    The increasing population and the expansion of settlements over hilly areas have greatly increased the impact of natural disasters such as landslides. It is therefore important to develop models that can accurately predict landslide hazard zones. Over the years, various techniques and models have been developed for this purpose. The aim of this paper is to assess the accuracy of landslide prediction models developed by the authors. The methodology involved the selection of the study area, data acquisition, data processing, model development and data analysis. The models are based on nine landslide-inducing parameters: slope, land use, lithology, soil properties, geomorphology, flow accumulation, aspect, proximity to river and proximity to road. Rank sum, rating, pairwise comparison and AHP techniques are used to determine the weights for each of the parameters. Four different models, each considering a different parameter combination, are developed by the authors. Results are compared to landslide history; the accuracies for Model 1, Model 2, Model 3 and Model 4 are 66.7%, 66.7%, 60% and 22.9%, respectively. From the results, rank sum, rating and pairwise comparison can be useful techniques for predicting landslide hazard zones.
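    The pairwise comparison (AHP) weighting mentioned above derives parameter weights from the principal eigenvector of a reciprocal judgment matrix. A minimal sketch with a hypothetical 3×3 matrix (the paper uses nine parameters; the judgments below are illustrative only):

```python
import numpy as np

# Hypothetical Saaty-scale pairwise comparisons: slope vs land use vs lithology
A = np.array([[1.0, 3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])

# Principal-eigenvector method: the weight vector is the eigenvector of the
# largest eigenvalue, normalised to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio: CR = ((lambda_max - n)/(n - 1)) / RI, with RI = 0.58 for n = 3;
# CR < 0.1 is the usual acceptability threshold for the judgments.
lam = eigvals.real[k]
cr = (lam - 3) / 2 / 0.58
print(w, cr)
```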

  3. Vison excitations in near-critical quantum dimer models

    Science.gov (United States)

    Strübi, G.; Ivanov, D. A.

    2011-06-01

    We study vison excitations in a quantum dimer model interpolating between the Rokhsar-Kivelson models on the square and triangular lattices. In the square-lattice case, the model is known to be critical and characterized by U(1) topological quantum numbers. Introducing diagonal dimers brings the model to a Z2 resonating-valence-bond phase. We study variationally the emergence of vison excitations at low concentration of diagonal dimers, close to the critical point. We find that, in this regime, vison excitations are large in size and their structure resembles vortices in type-II superconductors.

  4. Critical Features Predicting Sustained Implementation of School-Wide Positive Behavior Support

    Science.gov (United States)

    Mathews, Susanna; McIntosh, Kent; Frank, Jennifer; May, Seth

    2014-01-01

    The current study explored the extent to which a common measure of perceived implementation of critical features of School-wide Positive Behavior Support (SWPBS) predicted fidelity of implementation 3 years later. Respondents included school personnel from 261 schools across the United States implementing SWPBS. School teams completed the…

  5. Prediction of critical heat flux for water in uniformly heated vertical ...

    African Journals Online (AJOL)

    This paper presents the prediction of critical heat flux (CHF) for water in uniformly heated vertical porous-coated tubes at pressures between 0.1 and 0.7 MPa. In this study, we use a total of 742 CHF data points for water in uniformly heated vertical porous-coated tubes obtained from the literature. The accuracy of correlations was estimated ...

  6. ACCEPT: Introduction of the Adverse Condition and Critical Event Prediction Toolbox

    Science.gov (United States)

    Martin, Rodney A.; Santanu, Das; Janakiraman, Vijay Manikandan; Hosein, Stefan

    2015-01-01

    The prediction of anomalies or adverse events is a challenging task, and there are a variety of methods which can be used to address the problem. In this paper, we introduce a generic framework developed in MATLAB® called ACCEPT (Adverse Condition and Critical Event Prediction Toolbox). ACCEPT is an architectural framework designed to compare and contrast the performance of a variety of machine learning and early warning algorithms, and it tests the capability of these algorithms to robustly predict the onset of adverse events in any time-series data generating system or process.

  7. Predicting Protein Secondary Structure with Markov Models

    DEFF Research Database (Denmark)

    Fischer, Paul; Larsen, Simon; Thomsen, Claus

    2004-01-01

    The problem we consider here is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained in the Markov model for this task. Classifications that are purely based on statistical models might not always be biologically meaningful. We present combinatorial methods to incorporate biological background knowledge to enhance the prediction performance.
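    The classification idea can be sketched as one Markov chain per structure class, assigning an unknown fragment to the class whose chain gives it the highest log-likelihood. This toy version uses a hypothetical reduced alphabet and made-up training fragments, not the paper's data or its combinatorial refinements:

```python
import math
from collections import defaultdict

def train_markov(seqs):
    """First-order Markov chain: log P(next | current) with add-one smoothing
    over a toy 4-letter alphabet."""
    alphabet = "ACGL"  # hypothetical reduced amino-acid alphabet
    counts = defaultdict(lambda: defaultdict(int))
    for s in seqs:
        for a, b in zip(s, s[1:]):
            counts[a][b] += 1
    logp = {}
    for a in alphabet:
        total = sum(counts[a].values()) + len(alphabet)
        for b in alphabet:
            logp[(a, b)] = math.log((counts[a][b] + 1) / total)
    return logp

def score(logp, s):
    # Log-likelihood of a fragment under one class's transition model
    return sum(logp[(a, b)] for a, b in zip(s, s[1:]))

# One chain per class, trained on invented fragments
models = {
    "helix": train_markov(["AALAALAA", "LAALAA"]),
    "sheet": train_markov(["CGCGCG", "GCGCGC"]),
}
query = "AALAA"
print(max(models, key=lambda c: score(models[c], query)))  # prints "helix"
```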

  8. Comparative Study of Bankruptcy Prediction Models

    Directory of Open Access Journals (Sweden)

    Isye Arieshanti

    2013-09-01

    Early indication of bankruptcy is important for a company. If a company is aware of its potential bankruptcy, it can take preventive action. To detect potential bankruptcy, a company can utilize a bankruptcy prediction model. Such a model can be built using machine learning methods; however, the choice of method should be made carefully, because the suitability of a model depends on the specific problem. Therefore, in this paper we perform a comparative study of several machine learning methods for bankruptcy prediction. Comparing models based on k-NN, fuzzy k-NN, SVM, Bagging Nearest Neighbour SVM, Multilayer Perceptron (MLP), and a hybrid of MLP + Multiple Linear Regression, the study shows that the fuzzy k-NN method achieves the best performance, with an accuracy of 77.5%.

  9. Critical percolation in the slow cooling of the bi-dimensional ferromagnetic Ising model

    Science.gov (United States)

    Ricateau, Hugo; Cugliandolo, Leticia F.; Picco, Marco

    2018-01-01

    We study, with numerical methods, the fractal properties of the domain walls found in slow quenches of the kinetic Ising model to its critical temperature. We show that the equilibrium interfaces in the disordered phase have critical percolation fractal dimension over a wide range of length scales. We confirm that the system falls out of equilibrium at a temperature that depends on the cooling rate as predicted by the Kibble-Zurek argument and we prove that the dynamic growing length once the cooling reaches the critical point satisfies the same scaling. We determine the dynamic scaling properties of the interface winding angle variance and we show that the crossover between critical Ising and critical percolation properties is determined by the growing length reached when the system fell out of equilibrium.

  10. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.

  11. Addressing Learning Style Criticism: The Unified Learning Style Model Revisited

    Science.gov (United States)

    Popescu, Elvira

    Learning style is one of the individual differences that play an important but controversial role in the learning process. This paper aims at providing a critical analysis regarding learning styles and their use in technology enhanced learning. The identified criticism issues are addressed by reappraising the so called Unified Learning Style Model (ULSM). A detailed description of the ULSM components is provided, together with their rationale. The practical applicability of the model in adaptive web-based educational systems and its advantages versus traditional learning style models are also outlined.

  12. Model predictive controller design of hydrocracker reactors

    OpenAIRE

    GÖKÇE, Dila

    2014-01-01

    This study summarizes the design of a Model Predictive Controller (MPC) in Tüpraş, İzmit Refinery Hydrocracker Unit Reactors. Hydrocracking process, in which heavy vacuum gasoil is converted into lighter and valuable products at high temperature and pressure is described briefly. Controller design description, identification and modeling studies are examined and the model variables are presented. WABT (Weighted Average Bed Temperature) equalization and conversion increase are simulate...

  13. The Assessment of Risk in Cardiothoracic Intensive Care (ARCtIC): prediction of hospital mortality after admission to cardiothoracic critical care.

    Science.gov (United States)

    Shahin, J; Ferrando-Vivas, P; Power, G S; Biswas, S; Webb, S T; Rowan, K M; Harrison, D A

    2016-12-01

    The models used to predict outcome after adult general critical care may not be applicable to cardiothoracic critical care. Therefore, we analysed data from the Case Mix Programme to identify variables associated with hospital mortality after admission to cardiothoracic critical care units and to develop a risk-prediction model. We derived predictive models for hospital mortality from variables measured in 17,002 patients within 24 h of admission to five cardiothoracic critical care units. The final model included 10 variables: creatinine; white blood count; mean arterial blood pressure; functional dependency; platelet count; arterial pH; age; Glasgow Coma Score; arterial lactate; and route of admission. We included additional interaction terms between creatinine, lactate, platelet count and cardiac surgery as the admitting diagnosis. We validated this model against 10,238 other admissions, for which the c index (95% CI) was 0.904 (0.89-0.92) and the Brier score was 0.055, while the slope and intercept of the calibration plot were 0.961 and -0.183, respectively. The discrimination and calibration of our model suggest that it might be used to predict hospital mortality after admission to cardiothoracic critical care units. © 2016 The Association of Anaesthetists of Great Britain and Ireland.

  14. The difference between critical care initiation anion gap and prehospital admission anion gap is predictive of mortality in critical illness.

    Science.gov (United States)

    Lipnick, Michael S; Braun, Andrea B; Cheung, Joyce Ting-Wai; Gibbons, Fiona K; Christopher, Kenneth B

    2013-01-01

    We hypothesized that the delta anion gap defined as difference between critical care initiation standard anion gap and prehospital admission standard anion gap is associated with all cause mortality in the critically ill. Observational cohort study. Two hundred nine medical and surgical intensive care beds in two hospitals in Boston, MA. Eighteen thousand nine hundred eighty-five patients, age ≥18 yrs, who received critical care between 1997 and 2007. The exposure of interest was delta anion gap and categorized a priori as 10 mEq/L. Logistic regression examined death by days 30, 90, and 365 postcritical care initiation and in-hospital mortality. Adjusted odds ratios were estimated by multivariable logistic regression models. The discrimination of delta anion gap for 30-day mortality was evaluated using receiver operator characteristic curves performed for a subset of patients with all laboratory data required to analyze the data via physical chemical principles (n = 664). None. Delta anion gap was a particularly strong predictor of 30-day mortality with a significant risk gradient across delta anion gap quartiles following multivariable adjustment: delta anion gap anion gap 5-10 mEq/L odds ratio 1.56 (95% confidence interval 1.35-1.81; p anion gap >10 mEq/L odds ratio 2.18 (95% confidence interval 1.76-2.71; p anion gap 0-5 mEq/L. Similar significant robust associations post multivariable adjustments are seen with death by days 90 and 365 as well as in-hospital mortality. Correcting for albumin or limiting the cohort to patients with standard anion gap at critical care initiation of 10-18 mEq/L did not materially change the delta anion gap-mortality association. Delta anion gap has similarly moderate discriminative ability for 30-day mortality in comparison to standard base excess and strong ion gap. An increase in standard anion gap at critical care initiation relative to prehospital admission standard anion gap is a predictor of the risk of all cause patient
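    For reference, the quantities involved are simple arithmetic on the electrolyte panel: the standard anion gap is commonly defined as Na⁺ − (Cl⁻ + HCO₃⁻), and the study's exposure is the gap at critical care initiation minus the gap at prehospital admission. A sketch with hypothetical values; the study's exact definition may differ (e.g. in its handling of potassium or albumin correction):

```python
def anion_gap(na, cl, hco3):
    """Standard anion gap (mEq/L), commonly defined as Na+ - (Cl- + HCO3-)."""
    return na - (cl + hco3)

def delta_anion_gap(admission, icu):
    """Gap at critical care initiation minus gap at admission,
    mirroring the exposure defined in the abstract."""
    return anion_gap(*icu) - anion_gap(*admission)

# Hypothetical electrolyte panels (Na, Cl, HCO3) in mEq/L
print(delta_anion_gap(admission=(140, 104, 24), icu=(138, 100, 18)))  # -> 8
```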

  15. Multi-Model Ensemble Wake Vortex Prediction

    Science.gov (United States)

    Koerner, Stephan; Holzaepfel, Frank; Ahmad, Nash'at N.

    2015-01-01

    Several multi-model ensemble methods are investigated for predicting wake vortex transport and decay. This study is a joint effort between National Aeronautics and Space Administration and Deutsches Zentrum fuer Luft- und Raumfahrt to develop a multi-model ensemble capability using their wake models. An overview of different multi-model ensemble methods and their feasibility for wake applications is presented. The methods include Reliability Ensemble Averaging, Bayesian Model Averaging, and Monte Carlo Simulations. The methodologies are evaluated using data from wake vortex field experiments.

  16. Critical manifold of the Potts model: Exact results and homogeneity approximation

    Science.gov (United States)

    Wu, F. Y.; Guo, Wenan

    2012-08-01

    The q-state Potts model has stood at the frontier of research in statistical mechanics for many years. In the absence of a closed-form solution, much of the past effort has focused on locating its critical manifold, the trajectory in the parameter {q, e^J} space, where J is the reduced interaction, along which the free energy is singular. However, except in isolated cases, antiferromagnetic (AF) models with J < 0 […]. We also locate its critical frontier for J < 0 […London Ser. A 388, 43 (1982)]. For the honeycomb lattice we show that the known critical frontier holds for all J, and determine its critical q_c = (3 + √5)/2 = 2.61803, beyond which there is no transition. For the triangular lattice we confirm the known critical frontier to hold only for J > 0. More generally we consider the centered-triangle (CT) and Union-Jack (UJ) lattices consisting of mixed J and K interactions, and deduce critical manifolds under homogeneity hypotheses. For K = 0 the CT lattice is the diced lattice, and we determine its critical manifold for all J and find q_c = 3.32472. For K = 0 the UJ lattice is the square lattice and from this we deduce both the J > 0 and J < 0 critical manifolds and q_c = 3. Our theoretical predictions are compared with recent numerical results.

  17. Thermodynamic modeling of activity coefficient and prediction of solubility: Part 1. Predictive models.

    Science.gov (United States)

    Mirmehrabi, Mahmoud; Rohani, Sohrab; Perry, Luisa

    2006-04-01

    A new activity coefficient model was developed from the excess Gibbs free energy in the form G^ex = c A^a x_1^b … x_n^b. The constants of the proposed model were considered to be functions of the solute and solvent dielectric constants, Hildebrand solubility parameters, and the specific volumes of the solute and solvent molecules. The proposed model obeys the Gibbs-Duhem condition for activity coefficient models. To generalize the model and make it purely predictive, without any adjustable parameters, its constants were found using the experimental activity coefficients and physical properties of 20 vapor-liquid systems. The predictive capability of the proposed model was tested by calculating the activity coefficients of 41 binary vapor-liquid equilibrium systems; it showed good agreement with the experimental data in comparison with two other predictive models, the UNIFAC and Hildebrand models. The only data used for the prediction of activity coefficients were dielectric constants, Hildebrand solubility parameters, and the specific volumes of the solute and solvent molecules. Furthermore, the proposed model was used to predict the activity coefficient of an organic compound, stearic acid, whose physical properties were available, in methanol and 2-butanone. The predicted activity coefficient, along with the thermal properties of stearic acid, was used to calculate the solubility of stearic acid in these two solvents and resulted in better agreement with the experimental data than the UNIFAC and Hildebrand predictive models.

  18. Modified Critical State Two-Surface Plasticity Model for Sands

    DEFF Research Database (Denmark)

    Sørensen, Kris Wessel; Nielsen, Søren Kjær; Shajarati, Amir

    This article describes the outline of a numerical integration scheme for a critical state two-surface plasticity model for sands. The model, slightly modified by LeBlanc (2008) from the original formulation of Manzari and Dafalias (1997), has the ability to correctly model the stress-strain response of sands. The model is versatile and can be used to simulate drained and undrained conditions, due to the fact that it can efficiently calculate changes in void ratio as well as pore pressure. The objective of the constitutive model is to investigate if the numerical...

  19. PREDICTIVE CAPACITY OF ARCH FAMILY MODELS

    Directory of Open Access Journals (Sweden)

    Raphael Silveira Amaro

    2016-03-01

    In the last decades, a remarkable number of models, variants of the Autoregressive Conditional Heteroscedastic family, have been developed and empirically tested, making the process of choosing a particular model extremely complex. This research aims to compare, using the Model Confidence Set procedure, the predictive capacity of five conditional heteroskedasticity models, considering eight different statistical probability distributions. The financial series used are the log-return series of the Bovespa index and the Dow Jones Industrial Index in the period between 27 October 2008 and 30 December 2014. The empirical evidence shows that, in general, the competing models are highly homogeneous in their predictions, whether for the stock market of a developed country or for that of a developing country. An equivalent result can be inferred for the statistical probability distributions used.

  20. A self-organized criticality model for plasma transport

    International Nuclear Information System (INIS)

    Carreras, B.A.; Newman, D.; Lynch, V.E.

    1996-01-01

    Many models of natural phenomena manifest the basic hypothesis of self-organized criticality (SOC). The SOC concept brings together the self-similarity on space and time scales that is common to many of these phenomena. The application of the SOC modelling concept to the plasma dynamics near marginal stability opens new possibilities of understanding issues such as Bohm scaling, profile consistency, broad band fluctuation spectra with universal characteristics and fast time scales. A model realization of self-organized criticality for plasma transport in a magnetic confinement device is presented. The model is based on subcritical resistive pressure-gradient-driven turbulence. Three-dimensional nonlinear calculations based on this model show the existence of transport under subcritical conditions. This model that includes fluctuation dynamics leads to results very similar to the running sandpile paradigm
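    The running-sandpile paradigm referred to above can be illustrated with a minimal one-dimensional toy model: grains are dropped at random sites, and any local slope exceeding a critical value topples grains downhill, producing avalanches. This is a generic SOC sketch, not the authors' resistive pressure-gradient-driven turbulence model; all parameters are arbitrary:

```python
import random

def sandpile(n_sites=50, steps=2000, z_crit=4, seed=1):
    """Minimal 1-D running sandpile: random grain drops plus local
    toppling whenever the slope exceeds a critical value."""
    random.seed(seed)
    h = [0] * n_sites
    avalanche_sizes = []
    for _ in range(steps):
        h[random.randrange(n_sites)] += 1  # random drive
        size = 0
        unstable = True
        while unstable:                    # relax until all slopes are subcritical
            unstable = False
            for i in range(n_sites - 1):
                if h[i] - h[i + 1] > z_crit:
                    h[i] -= 2              # topple: move grains downhill
                    h[i + 1] += 2
                    size += 1
                    unstable = True
        h[-1] = 0                          # open boundary: grains leave the system
        avalanche_sizes.append(size)
    return avalanche_sizes

sizes = sandpile()
print(max(sizes))  # avalanches of widely varying sizes emerge
```

    In a true SOC state the avalanche-size distribution approaches a power law, which is the signature the plasma transport model above exploits.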

  1. A revised prediction model for natural conception.

    Science.gov (United States)

    Bensdorp, Alexandra J; van der Steeg, Jan Willem; Steures, Pieternel; Habbema, J Dik F; Hompes, Peter G A; Bossuyt, Patrick M M; van der Veen, Fulco; Mol, Ben W J; Eijkemans, Marinus J C

    2017-06-01

    One of the aims in reproductive medicine is to differentiate between couples that have favourable chances of conceiving naturally and those that do not. Since the development of the prediction model of Hunault, characteristics of the subfertile population have changed. The objective of this analysis was to assess whether additional predictors can refine the Hunault model and extend its applicability. Consecutive subfertile couples with unexplained and mild male subfertility presenting in fertility clinics were asked to participate in a prospective cohort study. We constructed a multivariable prediction model with the predictors from the Hunault model and new potential predictors. The primary outcome, natural conception leading to an ongoing pregnancy, was observed in 1053 women of the 5184 included couples (20%). All predictors of the Hunault model were selected into the revised model, plus an additional seven: woman's body mass index, cycle length, basal FSH levels, tubal status, history of previous pregnancies in the current relationship (ongoing pregnancies after natural conception, fertility treatment or miscarriages), semen volume, and semen morphology. Predictions from the revised model seem to concur better with observed pregnancy rates compared with the Hunault model; c-statistic of 0.71 (95% CI 0.69 to 0.73) compared with 0.59 (95% CI 0.57 to 0.61). Copyright © 2017. Published by Elsevier Ltd.

  2. `Dhara': An Open Framework for Critical Zone Modeling

    Science.gov (United States)

    Le, P. V.; Kumar, P.

    2016-12-01

    Processes in the Critical Zone, which sustain terrestrial life, are tightly coupled across hydrological, physical, biological, chemical, pedological, geomorphological and ecological domains over both short and long timescales. Observations and quantification of the Earth's surface across these domains using emerging high resolution measurement technologies such as light detection and ranging (lidar) and hyperspectral remote sensing are enabling us to characterize fine scale landscape attributes over large spatial areas. This presents a unique opportunity to develop novel approaches to model the Critical Zone that can capture fine scale intricate dependencies across the different processes in 3D. The development of interdisciplinary tools that transcend individual disciplines and capture new levels of complexity and emergent properties is at the core of Critical Zone science. Here we introduce `Dhara', an open high-performance computing framework for modeling complex processes in the Critical Zone. The framework is designed to be modular in structure, with the aim of creating uniform and efficient tools to facilitate and leverage process modeling. It also provides flexibility to maintain, collaborate, and co-develop additional components by the scientific community. We show the essential framework that simulates ecohydrologic dynamics and surface - sub-surface coupling in 3D using hybrid CPU-GPU parallelism. We demonstrate that the open framework in Dhara is feasible for detailed, multi-process, large-scale modeling of the Critical Zone, which opens up exciting possibilities. We will also present outcomes from a Modeling Summer Institute led by the Intensively Managed Landscapes Critical Zone Observatory (IMLCZO) with representation from several CZOs and international representatives.

  3. OBESITY AND CRITICAL ILLNESS: INSIGHTS FROM ANIMAL MODELS.

    Science.gov (United States)

    Mittwede, Peter N; Clemmer, John S; Bergin, Patrick F; Xiang, Lusha

    2016-04-01

    Critical illness is a major cause of morbidity and mortality around the world. While obesity is often detrimental in the context of trauma, it is paradoxically associated with improved outcomes in some septic patients. The reasons for these disparate outcomes are not well understood. A number of animal models have been used to study the obese response to various forms of critical illness. Just as there have been many animal models that have attempted to mimic clinical conditions, there are many clinical scenarios that can occur in the highly heterogeneous critically ill patient population that occupies hospitals and intensive care units. This poses a formidable challenge for clinicians and researchers attempting to understand the mechanisms of disease and develop appropriate therapies and treatment algorithms for specific subsets of patients, including the obese. The development of new, and the modification of existing animal models, is important in order to bring effective treatments to a wide range of patients. Not only do experimental variables need to be matched as closely as possible to clinical scenarios, but animal models with pre-existing comorbid conditions need to be studied. This review briefly summarizes animal models of hemorrhage, blunt trauma, traumatic brain injury, and sepsis. It also discusses what has been learned through the use of obese models to study the pathophysiology of critical illness in light of what has been demonstrated in the clinical literature.

  4. Modelling the predictive performance of credit scoring

    Directory of Open Access Journals (Sweden)

    Shi-Wei Shen

    2013-07-01

    Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A goodness-of-fit test demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI) and micro- as well as macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality of the study is that three models were developed to predict corporate firms' defaults based on different microeconomic and macroeconomic factors such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit measures and receiver operating characteristics during the examination of the robustness of the predictive power of these factors.
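    The core estimation step, a logistic regression of default status on micro- and macroeconomic covariates, can be sketched on synthetic data. The two features below merely stand in for variables like the TCRI and GDP growth; this is not the study's dataset or specification:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: default probability rises with the first feature
# (a risk index) and falls with the second (macro growth).
X = rng.normal(size=(500, 2))
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1]
y = (rng.random(500) < 1 / (1 + np.exp(-logit))).astype(float)

# Plain full-batch gradient descent on the logistic log-loss
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))       # predicted default probabilities
    w -= 0.5 * (X.T @ (p - y)) / len(y)      # gradient step on coefficients
    b -= 0.5 * np.mean(p - y)                # gradient step on intercept

print(w)  # coefficient signs should recover the generating process
```

    A production credit-scoring model would of course use a vetted solver (e.g. statsmodels or scikit-learn) and assess fit with the goodness-of-fit and ROC measures the study describes.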

  5. Modelling language evolution: Examples and predictions

    Science.gov (United States)

    Gong, Tao; Shuai, Lan; Zhang, Menghan

    2014-06-01

    We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolutions of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines.

  6. Predicting turns in proteins with a unified model.

    Directory of Open Access Journals (Sweden)

    Qi Song

    Full Text Available MOTIVATION: Turns are a critical element of the structure of a protein; turns play a crucial role in loops, folds, and interactions. Current prediction methods are well developed for the prediction of individual turn types, including α-turn, β-turn, and γ-turn, etc. However, for further protein structure and function prediction it is necessary to develop a unified model that can accurately predict all types of turns simultaneously. RESULTS: In this study, we present a novel approach, TurnP, which offers the ability to investigate all the turns in a protein based on a unified model. The main characteristics of TurnP are: (i) using newly exploited features of structural evolution information (secondary structure and shape string of a protein) based on structure homologies, (ii) considering all types of turns in a unified model, and (iii) practical capability of accurate prediction of all turns simultaneously for a query. TurnP utilizes predicted secondary structures and predicted shape strings, both of which have high accuracy, based on innovative technologies developed by our group. Then, sequence and structural evolution features, namely the profile of the sequence, the profile of secondary structures and the profile of shape strings, are generated by sequence and structure alignment. When TurnP was validated on a non-redundant dataset (4,107 entries) by five-fold cross-validation, we achieved an accuracy of 88.8% and a sensitivity of 71.8%, which exceeded the most state-of-the-art predictors of individual turn types. Newly determined sequences and the EVA and CASP9 datasets were used as independent tests; the results we achieved were outstanding for turn prediction and confirmed the good performance of TurnP for practical applications.
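
    The reported 88.8% accuracy and 71.8% sensitivity are standard confusion-matrix quantities. As a minimal illustration (toy labels, not TurnP's data), per-residue turn/non-turn predictions can be scored like this:

```python
def accuracy_sensitivity(y_true, y_pred):
    """Binary labels: 1 = residue is in a turn, 0 = not."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)          # all correct calls
    sensitivity = tp / (tp + fn)                # fraction of true turns found
    return accuracy, sensitivity

y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]
acc, sens = accuracy_sensitivity(y_true, y_pred)
```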

  7. A Simple Predictive Method of Critical Flicker Detection for Human Healthy Precaution

    Directory of Open Access Journals (Sweden)

    Goh Zai Peng

    2015-01-01

    Full Text Available Interharmonics and flickers are interrelated. Based on the International Electrotechnical Commission (IEC) flicker standard, the critical flicker frequency for the human eye is located at 8.8 Hz. Additionally, eye strain, headaches, and in the worst case seizures may occur due to critical flicker. Therefore, this paper addresses a worthwhile research gap: the investigation of the interrelationship between the amplitudes of interharmonics and critical flicker in a 50 Hz power system. The significant finding of this paper is that the amplitudes of two particular interharmonics are able to detect critical flicker. In this paper, the aforementioned amplitudes are detected by an adaptive linear neuron (ADALINE). The critical flicker is then detected by substituting these amplitudes into the formulas derived in this paper. Simulation and experimental work was conducted, and the accuracy of the proposed ADALINE-based algorithm is comparable to that of a typical Fluke power analyzer. In a nutshell, this simple predictive method for critical flicker detection has strong potential to be applied in crowded places (such as offices, shopping complexes, and stadiums) for human health precaution purposes due to its simplicity.
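
    The ADALINE step can be illustrated generically: an adaptive linear neuron fed sine/cosine references at chosen frequencies converges, via the LMS rule, to weights whose norm equals the component amplitude. The sketch below is an assumption-laden toy, not the paper's formulation: the sampling rate, the step size, and the choice of 58.8 Hz (50 + 8.8 Hz) as the flicker-producing sideband are all illustrative.

```python
import math

def adaline_amplitudes(signal, fs, freqs, mu=0.05, passes=20):
    """LMS-trained ADALINE: for each target frequency the sin/cos weight
    pair converges so that sqrt(ws^2 + wc^2) ~ that component's amplitude."""
    w = [0.0] * (2 * len(freqs))
    for _ in range(passes):
        for n, s in enumerate(signal):
            t = n / fs
            x = []
            for f in freqs:
                x.append(math.sin(2 * math.pi * f * t))
                x.append(math.cos(2 * math.pi * f * t))
            e = s - sum(wi * xi for wi, xi in zip(w, x))  # prediction error
            w = [wi + 2 * mu * e * xi for wi, xi in zip(w, x)]  # LMS update
    return [math.hypot(w[2 * i], w[2 * i + 1]) for i in range(len(freqs))]

fs = 1000.0
t = [n / fs for n in range(1000)]  # one second of samples
# 50 Hz fundamental plus a small interharmonic at 58.8 Hz.
sig = [math.sin(2 * math.pi * 50 * ti)
       + 0.08 * math.sin(2 * math.pi * 58.8 * ti) for ti in t]
amps = adaline_amplitudes(sig, fs, [50.0, 58.8])
```

    The recovered amplitudes (about 1.0 and 0.08) are what a flicker formula would then consume.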

  8. Posterior Predictive Model Checking for Conjunctive Multidimensionality in Item Response Theory

    Science.gov (United States)

    Levy, Roy

    2011-01-01

    If data exhibit multidimensionality, key conditional independence assumptions of unidimensional models do not hold. The current work pursues posterior predictive model checking (PPMC) as a tool for criticizing models with unaccounted-for dimensions in data structures that follow conjunctive multidimensional models. These pursuits are couched in…

  9. Prediction of fluid phase equilibrium of ternary mixtures in the critical region and the modified Leung-Griffiths theory

    Science.gov (United States)

    Lynch, John J.; Rainwater, James C.; van Poolen, Lambert J.; Smith, Duane H.

    1992-02-01

    The modified Leung-Griffiths theory of vapor-liquid equilibrium (VLE) is generalized to the case of three components. The principle of 'corresponding states' is reconsidered along with certain functions of 'field variables' within the model. The mathematical form of the coexistence boundary in terms of the field variables remains practically unchanged and conforms to modern scaling theory. The new model essentially predicts ternary fluid mixture phase boundaries in the critical region from previous vapor-liquid equilibrium data correlations of the three binary fluid mixture limits. Predicted saturation isotherms of the ethane + n-butane + n-pentane and ethane + n-butane + n-heptane mixtures are compared with experimental ternary VLE data in the literature.

  10. Model Predictive Control of Sewer Networks

    DEFF Research Database (Denmark)

    Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik

    2016-01-01

    The developments in solutions for management of urban drainage are of vital importance, as the amount of sewer water from urban areas continues to increase due to the growth of the world’s population and the change in climate conditions. How a sewer network is structured, monitored and controlled has thus become an essential factor for efficient performance of waste water treatment plants. This paper examines methods for simplified modelling and control of a sewer network. A practical approach to the problem is taken by analysing a simplified design model, which is based on the Barcelona benchmark model. Due to the inherent constraints, the applied approach is based on Model Predictive Control.

  11. Bayesian Predictive Models for Rayleigh Wind Speed

    DEFF Research Database (Denmark)

    Shahirinia, Amir; Hajizadeh, Amin; Yu, David C

    2017-01-01

    One of the major challenges with the increase in wind power generation is the uncertain nature of wind speed. So far the uncertainty about wind speed has been presented through probability distributions. Also, the existing models that consider the uncertainty of the wind speed primarily view … The predictive model of the wind speed aggregates the non-homogeneous distributions into a single continuous distribution, and the result is therefore able to capture the variation among the probability distributions of the wind speeds at the turbines’ locations in a wind farm. More specifically, instead of using a wind speed distribution whose parameters are known or estimated, the parameters are considered as random, with variations according to probability distributions. The Bayesian predictive model for a Rayleigh distribution, which has only a single scale parameter, has been proposed, along with a closed-form posterior …
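
    The closed-form posterior mentioned above can be illustrated for the Rayleigh case: placing an inverse-gamma prior on the squared scale makes the Rayleigh likelihood conjugate, so the update is a two-line computation. The prior hyperparameters and the wind-speed samples below are illustrative assumptions, not the paper's data.

```python
def rayleigh_posterior(data, a0=2.0, b0=1.0):
    """Inverse-gamma(a0, b0) prior on theta = sigma^2. The Rayleigh
    likelihood is proportional to theta^(-n) * exp(-(sum x_i^2 / 2) / theta),
    so the posterior is inverse-gamma(a0 + n, b0 + sum x_i^2 / 2)."""
    n = len(data)
    a_post = a0 + n
    b_post = b0 + sum(x * x for x in data) / 2.0
    post_mean_sigma2 = b_post / (a_post - 1.0)  # mean of an inverse-gamma
    return a_post, b_post, post_mean_sigma2

# Wind-speed-like samples (m/s), purely illustrative.
speeds = [5.1, 7.3, 6.2, 8.8, 4.9, 6.7, 7.0, 5.5]
a_p, b_p, s2 = rayleigh_posterior(speeds)
```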

  12. Comparison of two ordinal prediction models

    DEFF Research Database (Denmark)

    Kattan, Michael W; Gerds, Thomas A

    2015-01-01

    … system (i.e. old or new), such as the level of evidence for one or more factors included in the system or the general opinions of expert clinicians. However, given the major objective of estimating prognosis on an ordinal scale, we argue that the rival staging system candidates should be compared on their ability to predict outcome. We sought to outline an algorithm that would compare two rival ordinal systems on their predictive ability. RESULTS: We devised an algorithm based largely on the concordance index, which is appropriate for comparing two models in their ability to rank observations. We demonstrate our algorithm with a prostate cancer staging system example. CONCLUSION: We have provided an algorithm for selecting the preferred staging system based on prognostic accuracy. It appears to be useful for the purpose of selecting between two ordinal prediction models.
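
    The concordance index at the heart of such a comparison is easy to state: over all pairs of patients with different outcomes, count how often the staging system ranks them in the same order, with stage ties scoring one half. A minimal sketch with hypothetical stage assignments and outcomes (not the paper's prostate cancer data):

```python
from itertools import combinations

def concordance_index(stage, outcome):
    """Fraction of usable pairs (different outcomes) in which the ordinal
    stage ranks the pair in the same order; ties in stage count 1/2."""
    concordant = ties = usable = 0
    for (s1, o1), (s2, o2) in combinations(zip(stage, outcome), 2):
        if o1 == o2:
            continue  # equal outcomes carry no ranking information
        usable += 1
        if s1 == s2:
            ties += 1
        elif (s1 < s2) == (o1 < o2):
            concordant += 1
    return (concordant + 0.5 * ties) / usable

# Hypothetical staging (I-IV coded 1-4) vs. an ordinal outcome score.
old_system = [2, 1, 2, 3, 4, 3]
new_system = [1, 2, 3, 3, 4, 4]
outcome    = [1, 1, 2, 3, 3, 4]
c_old = concordance_index(old_system, outcome)
c_new = concordance_index(new_system, outcome)
```

    Here the hypothetical new system attains the higher concordance, so the algorithm would prefer it.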

  13. Predictive analytics can support the ACO model.

    Science.gov (United States)

    Bradley, Paul

    2012-04-01

    Predictive analytics can be used to rapidly spot hard-to-identify opportunities to better manage care--a key tool in accountable care. When considering analytics models, healthcare providers should: Make value-based care a priority and act on information from analytics models. Create a road map that includes achievable steps, rather than major endeavors. Set long-term expectations and recognize that the effectiveness of an analytics program takes time, unlike revenue cycle initiatives that may show a quick return.

  14. Predictive modeling in homogeneous catalysis: a tutorial

    NARCIS (Netherlands)

    Maldonado, A.G.; Rothenberg, G.

    2010-01-01

    Predictive modeling has become a practical research tool in homogeneous catalysis. It can help to pinpoint ‘good regions’ in the catalyst space, narrowing the search for the optimal catalyst for a given reaction. Just like any other new idea, in silico catalyst optimization is accepted by some

  15. Model predictive control of smart microgrids

    DEFF Research Database (Denmark)

    Hu, Jiefeng; Zhu, Jianguo; Guerrero, Josep M.

    2014-01-01

    required to realise high-performance of distributed generations and will realise innovative control techniques utilising model predictive control (MPC) to assist in coordinating the plethora of generation and load combinations, thus enable the effective exploitation of the clean renewable energy sources...

  16. Feedback model predictive control by randomized algorithms

    NARCIS (Netherlands)

    Batina, Ivo; Stoorvogel, Antonie Arij; Weiland, Siep

    2001-01-01

    In this paper we present a further development of an algorithm for stochastic disturbance rejection in model predictive control with input constraints based on randomized algorithms. The algorithm presented in our work can solve the problem of stochastic disturbance rejection approximately but with

  17. A Robustly Stabilizing Model Predictive Control Algorithm

    Science.gov (United States)

    Ackmece, A. Behcet; Carson, John M., III

    2007-01-01

    A model predictive control (MPC) algorithm that differs from prior MPC algorithms has been developed for controlling an uncertain nonlinear system. This algorithm guarantees the resolvability of an associated finite-horizon optimal-control problem in a receding-horizon implementation.

  18. Hierarchical Model Predictive Control for Resource Distribution

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob

    2010-01-01

    This paper deals with hierarchichal model predictive control (MPC) of distributed systems. A three level hierachical approach is proposed, consisting of a high level MPC controller, a second level of so-called aggregators, controlled by an online MPC-like algorithm, and a lower level of autonomous...

  19. Model Predictive Control based on Finite Impulse Response Models

    DEFF Research Database (Denmark)

    Prasath, Guru; Jørgensen, John Bagterp

    2008-01-01

    We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations ...

  20. Performance of Predictive Equations Specifically Developed to Estimate Resting Energy Expenditure in Ventilated Critically Ill Children.

    Science.gov (United States)

    Jotterand Chaparro, Corinne; Taffé, Patrick; Moullet, Clémence; Laure Depeyre, Jocelyne; Longchamp, David; Perez, Marie-Hélène; Cotting, Jacques

    2017-05-01

    To determine, based on indirect calorimetry measurements, the biases of predictive equations specifically developed recently for estimating resting energy expenditure (REE) in ventilated critically ill children, or developed for healthy populations but used in critically ill children. A secondary analysis study was performed using our data on REE measured in a previous prospective study on protein and energy needs in the pediatric intensive care unit. We included 75 ventilated critically ill children (median age, 21 months) in whom 407 indirect calorimetry measurements were performed. Fifteen predictive equations were used to estimate REE: the equations of White, Meyer, Mehta, Schofield, Henry, the World Health Organization, Fleisch, and Harris-Benedict, and the tables of Talbot. Their differential and proportional biases (with 95% CIs) were computed and the bias plotted in graphs. The Bland-Altman method was also used. Most equations either underestimated or overestimated REE across the range of 200-1000 kcal/day. The equations of Mehta, Schofield, and Henry and the tables of Talbot had a bias ≤10%, but the 95% CI was large and contained values far beyond ±10% for low REE values. Other equations specific to critically ill children had even wider biases. In ventilated critically ill children, none of the predictive equations tested met the performance criteria for the entire range of REE between 200 and 1000 kcal/day. Even the equations with the smallest bias may entail a risk of underfeeding or overfeeding, especially in the youngest children. Indirect calorimetry measurement must be preferred. Copyright © 2016 Elsevier Inc. All rights reserved.
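
    The differential bias and Bland-Altman limits of agreement used in evaluations like this one can be computed as follows; the paired REE values below are invented for illustration and are not the study's measurements.

```python
import math

def bland_altman(measured, predicted):
    """Differential bias (mean of predicted - measured) and the 95% limits
    of agreement (bias +/- 1.96 SD of the differences), after Bland & Altman."""
    diffs = [p - m for p, m in zip(predicted, measured)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Illustrative paired REE values (kcal/day).
measured  = [310, 450, 520, 640, 700, 820, 910, 980]
predicted = [350, 430, 560, 600, 760, 800, 930, 1020]
bias, lo, hi = bland_altman(measured, predicted)
```

    A small mean bias with wide limits of agreement is exactly the pattern the abstract warns about: acceptable on average, yet risky for an individual child.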

  1. Improved prediction of critical heat flux in liquid metal pool boiling

    International Nuclear Information System (INIS)

    Bankoff, S.G.; Fauske, H.K.

    1974-01-01

    The Kutateladze criterion for the pool boiling critical heat flux, which works well for nonmetallic liquids at or above atmospheric pressure, fails for the alkali liquid metals in the pressure range of interest for Liquid Metal Fast Breeder Reactor applications. In this pressure range bubble growth of the alkali liquid metals is largely inertia-controlled, in view of the large thermal conductivities, which implies a significant condensing heat flux within the bubbles themselves. The bubble growth is assumed to be described by the Mikic, Rohsenow, and Griffith equation. In this way a mean bubble age is determined, and hence a mean bubble thermal boundary layer thickness. The time-average critical heat flux is then obtained as the sum of the Kutateladze flux and the flux due to condensation on the bubble surfaces. No empirical parameters are employed. The present analysis predicts critical heat fluxes lying generally within the data band. (U.S.)
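
    For reference, the Kutateladze criterion that the analysis builds on is the hydrodynamic CHF correlation q = K * h_fg * rho_g^(1/2) * [sigma * g * (rho_l - rho_g)]^(1/4). The sketch below evaluates it for saturated water at atmospheric pressure using Zuber's constant K ≈ 0.131 (Kutateladze's own fit gives K ≈ 0.16); it deliberately omits the paper's additional condensation-flux term for liquid metals, and the property values are approximate textbook numbers.

```python
def kutateladze_chf(k, h_fg, rho_g, rho_l, sigma, g=9.81):
    """Hydrodynamic pool-boiling critical heat flux, W/m^2:
    q = K * h_fg * rho_g^0.5 * (sigma * g * (rho_l - rho_g))^0.25"""
    return k * h_fg * rho_g ** 0.5 * (sigma * g * (rho_l - rho_g)) ** 0.25

# Saturated water at 1 atm (approximate property values).
q = kutateladze_chf(k=0.131,       # Zuber's constant; Kutateladze used ~0.16
                    h_fg=2.257e6,  # latent heat of vaporization, J/kg
                    rho_g=0.598,   # vapour density, kg/m^3
                    rho_l=958.0,   # liquid density, kg/m^3
                    sigma=0.0589)  # surface tension, N/m
```

    This yields roughly 1.1 MW/m^2, the familiar pool-boiling CHF for water; the paper's point is that for alkali liquid metals this term alone underpredicts, so a bubble-condensation flux must be added.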

  2. Predictive Control, Competitive Model Business Planning, and Innovation ERP

    DEFF Research Database (Denmark)

    Nourani, Cyrus F.; Lauth, Codrina

    2015-01-01

    New optimality principles are put forth based on competitive model business planning. A generalized MinMax local optimum dynamic programming algorithm is presented and applied to business model computing, where predictive techniques can determine local optima. Based on a systems model, an enterprise is not viewed as the sum of its component elements, but as the product of their interactions. The paper starts by introducing a systems approach to business modeling. A competitive business modeling technique, based on the author's planning techniques, is applied. Systemic decisions are based on common organizational goals, and as such business planning and resource assignments should strive to satisfy higher organizational goals. It is critical to understand how different decisions affect and influence one another. Here, a business planning example is presented where a systems thinking technique, using Causal …

  3. A critical pressure based panel method for prediction of unsteady loading of marine propellers under cavitation

    International Nuclear Information System (INIS)

    Liu, P.; Bose, N.; Colbourne, B.

    2002-01-01

    A simple numerical procedure is established and implemented into a time domain panel method to predict hydrodynamic performance of marine propellers with sheet cavitation. This paper describes the numerical formulations and procedures to construct this integration. Predicted hydrodynamic loads were compared with both a previous numerical model and experimental measurements for a propeller in steady flow. The current method gives a substantial improvement in thrust and torque coefficient prediction over a previous numerical method at low cavitation numbers of less than 2.0, where severe cavitation occurs. Predicted pressure coefficient distributions are also presented. (author)

  4. Disease prediction models and operational readiness.

    Directory of Open Access Journals (Sweden)

    Courtney D Corley

    Full Text Available The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. We define a disease event to be a biological event with focus on the One Health paradigm. These events are characterized by evidence of infection and/or disease condition. We reviewed models that attempted to predict a disease event, not merely its transmission dynamics, and we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). We searched commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. After removal of duplications and extraneous material, a core collection of 6,524 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. From this collection we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models, classified as one or more of the following: event prediction (4), spatial (26), ecological niche (28), diagnostic or clinical (6), spread or response (9), and reviews (3). The model parameters (e.g., etiology, climatic, spatial, cultural) and data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) were recorded and reviewed. A component of this review is the identification of verification and validation (V&V) methods applied to each model, if any V&V method was reported. All models were classified as either having undergone Some Verification or Validation method, or No Verification or Validation. We close by outlining an initial set of operational readiness level guidelines for disease prediction models based upon established Technology

  5. Caries risk assessment models in caries prediction

    Directory of Open Access Journals (Sweden)

    Amila Zukanović

    2013-11-01

    Full Text Available Objective. The aim of this research was to assess the efficiency of different multifactor models in caries prediction. Material and methods. Data from the questionnaire and objective examination of 109 examinees were entered into the Cariogram, PreViser and Caries-Risk Assessment Tool (CAT) multifactor risk assessment models. Caries risk was assessed with the help of all three models for each patient, classifying them as low-, medium- or high-risk patients. The development of new caries lesions over a period of three years [Decay Missing Filled Tooth (DMFT) increment = difference between Decay Missing Filled Tooth Surface (DMFTS) index at baseline and follow-up] provided for examination of the predictive capacity of the different multifactor models. Results. The data gathered showed that the different multifactor risk assessment models give significantly different results (Friedman test: Chi square = 100.073, p=0.000). The Cariogram was the model which identified the majority of examinees as medium-risk patients (70%). The other two models were more radical in risk assessment, giving more unfavorable risk profiles for patients. In only 12% of the patients did the three multifactor models assess the risk in the same way. PreViser and CAT gave the same results in 63% of cases; the Wilcoxon test showed that there is no statistically significant difference in caries risk assessment between these two models (Z = -1.805, p=0.071). Conclusions. Evaluation of the three different multifactor caries risk assessment models (Cariogram, PreViser and CAT) showed that only the Cariogram can successfully predict new caries development in 12-year-old Bosnian children.

  6. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    International Nuclear Information System (INIS)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-01-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance entails not merely a gradual improvement but rather a significant step to advance the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance allows one to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.

  7. Link Prediction via Sparse Gaussian Graphical Model

    Directory of Open Access Journals (Sweden)

    Liangliang Zhang

    2016-01-01

    Full Text Available Link prediction is an important task in complex network analysis. Traditional link prediction methods are limited by network topology and a lack of node property information, which makes predicting links challenging. In this study, we address link prediction using a sparse Gaussian graphical model and demonstrate its theoretical and practical effectiveness. In theory, link prediction is executed by estimating the inverse covariance matrix of samples to overcome information limits. The proposed method was evaluated with four small and four large real-world datasets. The experimental results show that the area under the curve (AUC) value obtained by the proposed method improved by an average of 3% and 12.5%, respectively, compared to 13 mainstream similarity methods. This method outperforms the baseline method, and the prediction accuracy is superior to mainstream methods when using only 80% of the training set. The method also provides significantly higher AUC values when using only 60% of the training set in the Dolphin and Taro datasets. Furthermore, the error rate of the proposed method demonstrates superior performance with all datasets compared to mainstream methods.
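
    The AUC figure used throughout the comparison has a simple rank interpretation: the probability that a randomly chosen true link is scored above a randomly chosen non-link, with ties counting one half. A self-contained sketch (the link scores below are hypothetical, not outputs of the paper's inverse-covariance estimator):

```python
def auc_from_scores(scores, labels):
    """Rank-based AUC: P(score of a random positive > score of a random
    negative), ties counted as one half."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical candidate-link scores and ground-truth link labels.
scores = [0.9, 0.8, 0.75, 0.6, 0.4, 0.3, 0.2]
labels = [1,   1,   0,    1,   0,   0,   0]
auc = auc_from_scores(scores, labels)
```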

  8. Critical Infrastructure Protection and Resilience Literature Survey: Modeling and Simulation

    Science.gov (United States)

    2014-11-01


  9. Interpretive and Critical Phenomenological Crime Studies: A Model Design

    Science.gov (United States)

    Miner-Romanoff, Karen

    2012-01-01

    The critical and interpretive phenomenological approach is underutilized in the study of crime. This commentary describes this approach, guided by the question, "Why are interpretive phenomenological methods appropriate for qualitative research in criminology?" Therefore, the purpose of this paper is to describe a model of the interpretive…

  10. Critical properties of a three dimension p-spin model

    International Nuclear Information System (INIS)

    Franz, S.; Parisi, G.

    2000-03-01

    In this paper we study the critical properties of a finite dimensional generalization of the p-spin model. We find evidence that in dimension three, contrary to its mean field limit, the glass transition is associated to a diverging susceptibility (and correlation length). (author)

  11. Uncertainty modelling of critical column buckling for reinforced ...

    Indian Academy of Sciences (India)

    … enhances the accuracy of the structural models by using experimental results and design codes (Baalbaki et al 1991; … in calculation of column buckling load as defined in the following section. … Fuzzy logic … material uncertainty; using the value becomes a critical solution and is a more accurate and safe method compared …

  12. An improved mechanistic critical heat flux model for subcooled flow boiling

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Young Min [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of); Chang, Soon Heung [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1997-12-31

    Based on bubble coalescence adjacent to the heated wall as the flow structure at the CHF condition, Chang and Lee developed a mechanistic critical heat flux (CHF) model for subcooled flow boiling. In this paper, improvements to the Chang-Lee model are implemented with more solid theoretical bases for subcooled and low-quality flow boiling in tubes. Nedderman-Shearer's equations for the skin friction factor and universal velocity profile models are employed. A slip effect of the movable bubbly layer is implemented to improve the predictability at low mass flow. Also, a mechanistic subcooled flow boiling model is used to predict the flow quality and void fraction. The performance of the present model is verified using the KAIST CHF database of water in uniformly heated tubes. It is found that the present model gives satisfactory agreement with experimental data, within less than 9% RMS error. 9 refs., 5 figs. (Author)

  13. Electrostatic ion thrusters - towards predictive modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)

    2014-02-15

    The development of electrostatic ion thrusters so far has mainly been based on empirical and qualitative know-how and on evolutionary iteration steps. This has resulted in considerable effort for prototype design, construction and testing, and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated that simulation tools will be implemented which allow for quantitative prediction of ion thruster performance, long-term behavior and spacecraft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules, a new quality in the description of electrostatic thrusters can be reached. These open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  14. Characterizing Attention with Predictive Network Models.

    Science.gov (United States)

    Rosenberg, M D; Finn, E S; Scheinost, D; Constable, R T; Chun, M M

    2017-04-01

    Recent work shows that models based on functional connectivity in large-scale brain networks can predict individuals' attentional abilities. While being some of the first generalizable neuromarkers of cognitive function, these models also inform our basic understanding of attention, providing empirical evidence that: (i) attention is a network property of brain computation; (ii) the functional architecture that underlies attention can be measured while people are not engaged in any explicit task; and (iii) this architecture supports a general attentional ability that is common to several laboratory-based tasks and is impaired in attention deficit hyperactivity disorder (ADHD). Looking ahead, connectivity-based predictive models of attention and other cognitive abilities and behaviors may potentially improve the assessment, diagnosis, and treatment of clinical dysfunction. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Genetic models of homosexuality: generating testable predictions

    Science.gov (United States)

    Gavrilets, Sergey; Rice, William R

    2006-01-01

    Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality including: (i) chromosomal location, (ii) dominance among segregating alleles and (iii) effect sizes that distinguish between the two major models for their polymorphism: the overdominance and sexual antagonism models. We conclude that the measurement of the genetic characteristics of quantitative trait loci (QTLs) found in genomic screens for genes influencing homosexuality can be highly informative in resolving the form of natural selection maintaining their polymorphism. PMID:17015344

  16. Critical Business Requirements Model and Metrics for Intranet ROI

    OpenAIRE

    Luqi; Jacoby, Grant A.

    2005-01-01

    Journal of Electronic Commerce Research, Vol. 6, No. 1, pp. 1-30. This research provides the first theoretical model, the Intranet Efficiency and Effectiveness Model (IEEM), to measure intranet overall value contributions based on a corporation’s critical business requirements by applying a balanced baseline of metrics and conversion ratios linked to key business processes of knowledge workers, IT managers and business decision makers -- in effect, closing the gap of understanding...

  17. Evaluating Models of Human Performance: Safety-Critical Systems Applications

    Science.gov (United States)

    Feary, Michael S.

    2012-01-01

    This presentation is part of panel discussion on Evaluating Models of Human Performance. The purpose of this panel is to discuss the increasing use of models in the world today and specifically focus on how to describe and evaluate models of human performance. My presentation will focus on discussions of generating distributions of performance, and the evaluation of different strategies for humans performing tasks with mixed initiative (Human-Automation) systems. I will also discuss issues with how to provide Human Performance modeling data to support decisions on acceptability and tradeoffs in the design of safety critical systems. I will conclude with challenges for the future.

  18. Current algebra of WZNW models at and away from criticality

    International Nuclear Information System (INIS)

    Abdalla, E.; Forger, M.

    1992-01-01

    In this paper, the authors derive the current algebra of principal chiral models with a Wess-Zumino term. At the critical coupling where the model becomes conformally invariant (Wess-Zumino-Novikov-Witten theory), this algebra reduces to two commuting Kac-Moody algebras, while in the limit where the coupling constant is taken to zero (ordinary chiral model), we recover the current algebra of that model. In this way, the latter is explicitly realized as a deformation of the former, with the coupling constant as the deformation parameter

  19. Self-organized Criticality Model for Ocean Internal Waves

    International Nuclear Information System (INIS)

    Wang Gang; Hou Yijun; Lin Min; Qiao Fangli

    2009-01-01

In this paper, we present a simple spring-block model for ocean internal waves based on self-organized criticality (SOC). The oscillations of the water blocks in the model display power-law behavior with an exponent of -2 in the frequency domain, which is similar to the current and sea-water temperature spectra in the actual ocean and to the universal Garrett and Munk deep-ocean internal wave model [Geophysical Fluid Dynamics 2 (1972) 225; J. Geophys. Res. 80 (1975) 291]. The influence of the ratio of the driving force to the spring coefficient on the SOC behavior of the model is also discussed. (general)

  20. Modelling decremental ramps using 2- and 3-parameter "critical power" models.

    Science.gov (United States)

    Morton, R Hugh; Billat, Veronique

    2013-01-01

    The "Critical Power" (CP) model of human bioenergetics provides a valuable way to identify both limits of tolerance to exercise and mechanisms that underpin that tolerance. It applies principally to cycling-based exercise, but with suitable adjustments for analogous units it can be applied to other exercise modalities; in particular to incremental ramp exercise. It has not yet been applied to decremental ramps which put heavy early demand on the anaerobic energy supply system. This paper details cycling-based bioenergetics of decremental ramps using 2- and 3-parameter CP models. It derives equations that, for an individual of known CP model parameters, define those combinations of starting intensity and decremental gradient which will or will not lead to exhaustion before ramping to zero; and equations that predict time to exhaustion on those decremental ramps that will. These are further detailed with suitably chosen numerical and graphical illustrations. These equations can be used for parameter estimation from collected data, or to make predictions when parameters are known.
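The exhaustion condition for a decremental ramp under the 2-parameter CP model can be sketched directly. Assuming the standard form of that model (a finite anaerobic work capacity W' depleted at rate P(t) − CP whenever P(t) > CP, with ramp P(t) = P0 − s·t), exhaustion occurs when the accumulated deficit reaches W'; the function and numbers below are an illustrative sketch under those assumptions, not the paper's actual equations:

```python
import math

def time_to_exhaustion(p0, slope, cp, w_prime):
    """Time to exhaustion on a decremental ramp P(t) = p0 - slope*t under the
    2-parameter critical power model: W' (J) is depleted at rate P(t) - CP (W)
    while P(t) > CP.  Returns None if W' is never exhausted before the ramp
    reaches CP (the ramp is survivable)."""
    excess = p0 - cp                              # starting power above CP (W)
    # total energy spendable above CP before the ramp drops to CP
    max_work = excess ** 2 / (2 * slope)
    if max_work < w_prime:
        return None
    # solve (slope/2)*t**2 - excess*t + w_prime = 0 for the earlier root
    return (excess - math.sqrt(excess ** 2 - 2 * slope * w_prime)) / slope

# illustrative parameters: CP = 200 W, start at 300 W, ramp down at 1 W/s
survives = time_to_exhaustion(300, 1.0, 200, w_prime=20_000)   # None: no exhaustion
fails_at = time_to_exhaustion(300, 1.0, 200, w_prime=2_000)    # exhaustion ~22.5 s
```

The `max_work < w_prime` branch reproduces the paper's distinction between starting-intensity/gradient combinations that do or do not lead to exhaustion before ramping to zero.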

  1. Procalcitonin Clearance for Early Prediction of Survival in Critically Ill Patients with Severe Sepsis

    Directory of Open Access Journals (Sweden)

    Mohd Basri Mat Nor

    2014-01-01

Full Text Available Introduction. Serum procalcitonin (PCT) is used to diagnose sepsis in critically ill patients; however, its ability to predict survival is not well established. We evaluated the prognostic value of dynamic changes in PCT in sepsis patients. Methods. A prospective observational study was conducted in an adult ICU. Patients with systemic inflammatory response syndrome (SIRS) were recruited. Daily PCT levels were measured for 3 days. The 48 h PCT clearance (PCTc-48) was defined as the percentage of baseline PCT minus 48 h PCT over baseline PCT. Results. 95 SIRS patients were enrolled (67 sepsis and 28 noninfectious SIRS). 40% of patients in the sepsis group died in hospital. Day-1 PCT was associated with a diagnosis of sepsis (AUC 0.65; 95% CI, 0.55 to 0.76) but was not predictive of mortality. In sepsis patients, PCTc-48 was associated with prediction of survival (AUC 0.69; 95% CI, 0.53 to 0.84). Patients with PCTc-48 > 30% were independently associated with survival (HR 2.90; 95% CI, 1.22 to 6.90). Conclusions. PCTc-48 is associated with prediction of survival in critically ill patients with sepsis. This could assist clinicians in risk stratification; however, the small sample size and single-centre design may limit the generalisability of the finding, which would benefit from replication in a future multicentre study.
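The PCTc-48 definition in this abstract is a one-line computation; a minimal sketch (the values are illustrative, not patient data):

```python
def pct_clearance_48(pct_baseline, pct_48h):
    """48 h procalcitonin clearance as defined in the abstract:
    (baseline PCT - 48 h PCT) / baseline PCT, expressed as a percentage."""
    return (pct_baseline - pct_48h) / pct_baseline * 100.0

# illustrative PCT values in ng/mL
clearance = pct_clearance_48(10.0, 4.0)   # 60.0 %
favourable = clearance > 30               # the study's survival-associated threshold
```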

  2. A statistical model for predicting muscle performance

    Science.gov (United States)

    Byerly, Diane Leslie De Caix

    The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th order Autoregressive (AR) model and statistical regression analysis. It was determined that an AR derived parameter, the mean average magnitude of AR poles, significantly correlated with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-space flight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned enhancing astronaut performance and safety. Potential ground-based, medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing safety guidelines in the industry for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing
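The AR-pole parameter described above can be sketched with a Yule-Walker AR(5) fit followed by computing the pole magnitudes; the synthetic signal and settings below are illustrative stand-ins for SEMG data, not the study's:

```python
import numpy as np

def ar_pole_magnitudes(x, order=5):
    """Fit an AR(order) model by the Yule-Walker equations (biased
    autocorrelation estimates, which guarantee a stable model) and
    return the magnitudes of the AR poles."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # biased autocorrelation estimates r[0..order]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])                  # AR coefficients a_1..a_p
    poles = np.roots(np.concatenate(([1.0], -a)))  # roots of z^p - a_1 z^(p-1) - ...
    return np.abs(poles)

rng = np.random.default_rng(0)
t = np.arange(2000)
# quasi-periodic noisy signal standing in for a surface EMG burst
sig = np.sin(0.2 * t) + 0.5 * rng.standard_normal(2000)
mags = ar_pole_magnitudes(sig, order=5)
mean_mag = mags.mean()                             # the study's candidate predictor
```

The mean pole magnitude would then be tracked across repetitions, as the abstract describes, to anticipate the repetition count at failure.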

  3. Logistic regression modelling: procedures and pitfalls in developing and interpreting prediction models

    Directory of Open Access Journals (Sweden)

    Nataša Šarlija

    2017-01-01

Full Text Available This study sheds light on the most common issues related to applying logistic regression in prediction models for company growth. The purpose of the paper is (1) to provide a detailed demonstration of the steps in developing a growth prediction model based on logistic regression analysis, (2) to discuss common pitfalls and methodological errors in developing a model, and (3) to provide solutions and possible ways of overcoming these issues. Special attention is devoted to the questions of satisfying logistic regression assumptions, selecting and defining dependent and independent variables, using classification tables and ROC curves for reporting model strength, interpreting odds ratios as effect measures, and evaluating performance of the prediction model. Development of a logistic regression model in this paper focuses on a prediction model of company growth. The analysis is based on predominantly financial data from a sample of 1471 small and medium-sized Croatian companies active between 2009 and 2014. The financial data is presented in the form of financial ratios divided into nine main groups depicting the following areas of business: liquidity, leverage, activity, profitability, research and development, investing and export. The growth prediction model indicates aspects of a business critical for achieving high growth. In that respect, the contribution of this paper is twofold: first, methodological, in terms of pointing out pitfalls and potential solutions in logistic regression modelling, and secondly, theoretical, in terms of identifying factors responsible for high growth of small and medium-sized companies.
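The workflow the abstract outlines — fitting a logistic model, reading odds ratios as effect measures, and summarizing discrimination with an AUC — can be sketched on synthetic data. The two "financial ratios" and their effect sizes below are invented for illustration, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
# two hypothetical standardized financial ratios (illustrative only)
X = rng.standard_normal((n, 2))
true_logits = 1.2 * X[:, 0] - 0.8 * X[:, 1]
y = (rng.random(n) < 1 / (1 + np.exp(-true_logits))).astype(float)

Xb = np.hstack([np.ones((n, 1)), X])   # design matrix with intercept
w = np.zeros(3)
for _ in range(2000):                  # plain gradient ascent on the log-likelihood
    p = 1 / (1 + np.exp(-Xb @ w))
    w += 0.1 * Xb.T @ (y - p) / n

odds_ratios = np.exp(w[1:])            # odds ratio per unit increase in each ratio

# rank-based AUC: probability a random positive outscores a random negative
scores = Xb @ w
pos, neg = scores[y == 1], scores[y == 0]
auc = (pos[:, None] > neg[None, :]).mean()
```

Here the first ratio (positive coefficient) yields an odds ratio above 1 and the second below 1, illustrating the interpretation of odds ratios as effect measures that the paper discusses.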

  4. Prediction models : the right tool for the right problem

    NARCIS (Netherlands)

    Kappen, Teus H.; Peelen, Linda M.

    2016-01-01

    PURPOSE OF REVIEW: Perioperative prediction models can help to improve personalized patient care by providing individual risk predictions to both patients and providers. However, the scientific literature on prediction model development and validation can be quite technical and challenging to

  5. A formal approach for the prediction of the critical heat flux in subcooled water

    International Nuclear Information System (INIS)

    Lombardi, C.

    1995-01-01

The critical heat flux (CHF) in subcooled water at high mass fluxes is not yet satisfactorily correlated. To this end, a formal approach is followed here, based on an extension of the parameters and the correlation used for dryout prediction in medium-to-high quality mixtures. The resulting correlation, despite its simplicity and its explicit form, yields satisfactory predictions, also when applied to more conventional CHF data at low-to-medium mass fluxes and high pressures. Further improvements are possible if a more complete data bank becomes available. The main and general open item is the definition of a criterion, depending only on independent parameters such as mass flux, pressure, inlet subcooling and geometry, to predict whether the heat transfer crisis will occur as DNB or as dryout.

  6. A formal approach for the prediction of the critical heat flux in subcooled water

    Energy Technology Data Exchange (ETDEWEB)

    Lombardi, C. [Polytechnic of Milan (Italy)

    1995-09-01

The critical heat flux (CHF) in subcooled water at high mass fluxes is not yet satisfactorily correlated. To this end, a formal approach is followed here, based on an extension of the parameters and the correlation used for dryout prediction in medium-to-high quality mixtures. The resulting correlation, despite its simplicity and its explicit form, yields satisfactory predictions, also when applied to more conventional CHF data at low-to-medium mass fluxes and high pressures. Further improvements are possible if a more complete data bank becomes available. The main and general open item is the definition of a criterion, depending only on independent parameters such as mass flux, pressure, inlet subcooling and geometry, to predict whether the heat transfer crisis will occur as DNB or as dryout.

  7. Neuro-fuzzy modeling in bankruptcy prediction

    Directory of Open Access Journals (Sweden)

    Vlachos D.

    2003-01-01

Full Text Available For the past 30 years the problem of bankruptcy prediction has been thoroughly studied. From the paper of Altman in 1968 to the recent papers in the '90s, the progress in prediction accuracy was not satisfactory. This paper investigates an alternative modeling of the system (the firm), combining neural networks and fuzzy controllers, i.e., using neuro-fuzzy models. Classical modeling is based on mathematical models that describe the behavior of the firm under consideration. The main idea of fuzzy control, on the other hand, is to build a model of a human control expert who is capable of controlling the process without thinking in terms of a mathematical model. This control expert specifies his control actions in the form of linguistic rules. These control rules are translated into the framework of fuzzy set theory, providing a calculus which can simulate the behavior of the control expert and enhance its performance. The accuracy of the model is studied using datasets from previous research papers.
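The linguistic-rule idea can be sketched with a tiny two-rule, Sugeno-style fuzzy system mapping one input to a risk score. The membership functions, rule consequents, and the "liquidity"/"risk" names are invented for illustration, not taken from the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Two hypothetical linguistic rules (Sugeno-style constant consequents):
#   IF liquidity is LOW  THEN bankruptcy risk = 0.8
#   IF liquidity is HIGH THEN bankruptcy risk = 0.2
def risk(liquidity):
    w_low = tri(liquidity, -0.5, 0.0, 1.0)   # membership in LOW
    w_high = tri(liquidity, 0.0, 1.0, 1.5)   # membership in HIGH
    # firing-strength-weighted average of the rule outputs
    return (w_low * 0.8 + w_high * 0.2) / (w_low + w_high)

r_low_liquidity = risk(0.1)    # mostly LOW fires -> high risk
r_high_liquidity = risk(0.9)   # mostly HIGH fires -> low risk
```

A neuro-fuzzy model in the paper's sense would additionally tune the membership parameters and consequents from data rather than fixing them by hand.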

  8. A 3-D CFD approach to the mechanistic prediction of forced convective critical heat flux at low quality

    International Nuclear Information System (INIS)

    Jean-Marie Le Corre; Cristina H Amon; Shi-Chune Yao

    2005-01-01

Full text of publication follows: The prediction of the Critical Heat Flux (CHF) in a heat-flux-controlled boiling heat exchanger is important for assessing the maximal thermal capability of the system. In the case of a nuclear reactor, CHF margin gains (using an improved mixing vane grid design, for instance) can allow power up-rates and enhanced operating flexibility. In general, current nuclear core design procedures use a quasi-1D approach to model the coolant thermal-hydraulic conditions within the fuel bundles, coupled with fully empirical CHF prediction methods. In addition, several CHF mechanistic models have been developed in the past and coupled with 1D and quasi-1D thermal-hydraulic codes. These mechanistic models have demonstrated reasonable CHF prediction characteristics and, more remarkably, correct parametric trends over a wide range of fluid conditions. However, since the phenomena leading to CHF are localized near the heater, models are needed to relate the local quantities of interest to area-averaged quantities. As a consequence, large CHF prediction uncertainties may be introduced, and 3D flow characteristics (such as swirling flow) cannot be accounted for properly. Therefore, a fully mechanistic approach to CHF prediction is, in general, not possible using the current approach. The development of CHF-enhanced fuel assembly designs requires the use of more advanced 3D coolant property computations coupled with CHF mechanistic modeling. In the present work, the commercial CFD code CFX-5 is used to compute 3D coolant conditions in a vertical heated tube with upward flow. Several low-quality CHF mechanistic models available in the literature are coupled with the CFD code by developing adequate models relating local coolant properties to the local parameters of interest for predicting CHF. The prediction performance of these models is assessed using CHF databases available in the open literature and the 1995 CHF look-up table.
Since CFD can reasonably capture 3D fluid

  9. The critical domain size of stochastic population models.

    Science.gov (United States)

    Reimer, Jody R; Bonsall, Michael B; Maini, Philip K

    2017-02-01

    Identifying the critical domain size necessary for a population to persist is an important question in ecology. Both demographic and environmental stochasticity impact a population's ability to persist. Here we explore ways of including this variability. We study populations with distinct dispersal and sedentary stages, which have traditionally been modelled using a deterministic integrodifference equation (IDE) framework. Individual-based models (IBMs) are the most intuitive stochastic analogues to IDEs but yield few analytic insights. We explore two alternate approaches; one is a scaling up to the population level using the Central Limit Theorem, and the other a variation on both Galton-Watson branching processes and branching processes in random environments. These branching process models closely approximate the IBM and yield insight into the factors determining the critical domain size for a given population subject to stochasticity.
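The branching-process approximation the abstract describes can be illustrated with a Monte Carlo estimate of extinction probability for a Galton-Watson process. The Poisson offspring distribution, trial counts, and thresholds below are illustrative assumptions, not the paper's model:

```python
import numpy as np

def extinction_probability(mean_offspring, trials=500, max_gen=200, seed=0):
    """Monte Carlo estimate of the extinction probability of a Galton-Watson
    process with Poisson(mean_offspring) offspring, started from 1 individual."""
    rng = np.random.default_rng(seed)
    extinct = 0
    for _ in range(trials):
        n = 1
        for _ in range(max_gen):
            if n == 0 or n > 10_000:   # died out, or clearly escaped to growth
                break
            n = rng.poisson(mean_offspring, size=n).sum()
        extinct += (n == 0)
    return extinct / trials

q_sub = extinction_probability(0.8)   # subcritical: extinction is certain
q_sup = extinction_probability(1.5)   # supercritical: positive survival chance
```

In the paper's setting, dispersal loss outside the domain reduces the effective mean offspring number, so the critical domain size corresponds roughly to where that mean crosses 1, the subcritical/supercritical boundary this sketch exhibits.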

  10. Predictive modeling capabilities from incident powder and laser to mechanical properties for laser directed energy deposition

    Science.gov (United States)

    Shin, Yung C.; Bailey, Neil; Katinas, Christopher; Tan, Wenda

    2018-01-01

This paper presents an overview of vertically integrated, comprehensive predictive modeling capabilities for directed energy deposition processes, which have been developed at Purdue University. The overall predictive model consists of several vertically integrated modules, including a powder flow model, a molten pool model, a microstructure prediction model and a residual stress model, which can be used for predicting the mechanical properties of parts additively manufactured by directed energy deposition with blown powder, as well as by other additive manufacturing processes. The critical governing equations of each model, and how the various modules are connected, are illustrated. Illustrative results, along with corresponding experimental validation, are presented to demonstrate the capabilities and fidelity of the models. The good correlation with experimental results shows that the integrated models can be used to design metal additive manufacturing processes and predict the resultant microstructure and mechanical properties.

  11. Predictive Models for Carcinogenicity and Mutagenicity ...

    Science.gov (United States)

    Mutagenicity and carcinogenicity are endpoints of major environmental and regulatory concern. These endpoints are also important targets for development of alternative methods for screening and prediction due to the large number of chemicals of potential concern and the tremendous cost (in time, money, animals) of rodent carcinogenicity bioassays. Both mutagenicity and carcinogenicity involve complex, cellular processes that are only partially understood. Advances in technologies and generation of new data will permit a much deeper understanding. In silico methods for predicting mutagenicity and rodent carcinogenicity based on chemical structural features, along with current mutagenicity and carcinogenicity data sets, have performed well for local prediction (i.e., within specific chemical classes), but are less successful for global prediction (i.e., for a broad range of chemicals). The predictivity of in silico methods can be improved by improving the quality of the data base and endpoints used for modelling. In particular, in vitro assays for clastogenicity need to be improved to reduce false positives (relative to rodent carcinogenicity) and to detect compounds that do not interact directly with DNA or have epigenetic activities. New assays emerging to complement or replace some of the standard assays include VitotoxTM, GreenScreenGC, and RadarScreen. The needs of industry and regulators to assess thousands of compounds necessitate the development of high-t

  12. Critical heat flux prediction by using radial basis function and multilayer perceptron neural networks: A comparison study

    International Nuclear Information System (INIS)

    Vaziri, Nima; Hojabri, Alireza; Erfani, Ali; Monsefi, Mehrdad; Nilforooshan, Behnam

    2007-01-01

Critical heat flux (CHF) is an important parameter for the design of nuclear reactors. Although much experimental and theoretical research has been performed, there is no single correlation to predict CHF, because it is influenced by many parameters, which are based on fixed inlet, local and fixed outlet conditions. Artificial neural networks (ANNs) have been applied to a wide variety of areas such as prediction, approximation, modeling and classification. In this study, two types of neural networks, radial basis function (RBF) and multilayer perceptron (MLP), are trained on the experimental CHF data and their performances are compared. RBF predicts CHF with root mean square (RMS) errors of 0.24%, 7.9% and 0.16%, and MLP predicts CHF with RMS errors of 1.29%, 8.31% and 2.71%, in fixed inlet conditions, local conditions and fixed outlet conditions, respectively. The results show that neural networks with the RBF structure have superior performance in CHF data prediction over MLP neural networks. The parametric trends of CHF obtained by the trained ANNs are also evaluated and the results are reported.
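The training step of an RBF network like the one compared above can be sketched in a few lines: place Gaussian basis functions at selected data points and solve a linear least-squares problem for the output weights. The synthetic "CHF-like" surface, center count, and kernel width below are illustrative assumptions, not the study's data or architecture:

```python
import numpy as np

rng = np.random.default_rng(3)
# hypothetical inputs (e.g. scaled pressure and mass flux) and a smooth target
X = rng.random((80, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2

centers = X[rng.choice(80, 20, replace=False)]   # RBF centers drawn from the data

def design(X, centers, width=0.3):
    """Gaussian RBF design matrix: one column per center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

Phi = design(X, centers)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)      # linear output-layer weights
rms = np.sqrt(np.mean((Phi @ w - y) ** 2))       # training RMS error
```

Because only the output weights are solved for (linearly), RBF training is fast and well-conditioned compared with the iterative backpropagation an MLP requires, one practical reason such comparisons are common.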

  13. Nonlinear model predictive control theory and algorithms

    CERN Document Server

    Grüne, Lars

    2017-01-01

    This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...

  14. A predictive model for dimensional errors in fused deposition modeling

    DEFF Research Database (Denmark)

    Stolfi, A.

    2015-01-01

    values of L (0.254 mm, 0.330 mm) was produced by comparing predicted values with external face-to-face measurements. After removing outliers, the results show that the developed two-parameter model can serve as tool for modeling the FDM dimensional behavior in a wide range of deposition angles....

  15. A predictive model for dimensional errors in fused deposition modeling

    DEFF Research Database (Denmark)

    Stolfi, A.

    2015-01-01

    This work concerns the effect of deposition angle (a) and layer thickness (L) on the dimensional performance of FDM parts using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole a range from 0° to 177° at 3° steps and two...... values of L (0.254 mm, 0.330 mm) was produced by comparing predicted values with external face-to-face measurements. After removing outliers, the results show that the developed two-parameter model can serve as tool for modeling the FDM dimensional behavior in a wide range of deposition angles....

  16. General correlation for prediction of critical heat flux ratio in water cooled channels

    Energy Technology Data Exchange (ETDEWEB)

    Pernica, R.; Cizek, J.

    1995-09-01

The paper presents a general empirical Critical Heat Flux Ratio (CHFR) correlation, valid for vertical water upflow through tubes, internally heated concentric annuli, and rod bundle geometries with both wide and very tight square and triangular rod lattices. The proposed general PG correlation directly predicts the CHFR, accounts for axially and radially non-uniform heating, and is valid over a wider range of thermal-hydraulic conditions than previously published critical heat flux correlations. The PG correlation has been developed using the Czech critical heat flux data bank, which includes more than 9500 experimental data points for tubes, 7600 for rod bundles and 713 for internally heated concentric annuli. The accuracy of the CHFR prediction, statistically assessed by the constant dryout conditions approach, is characterized by a mean value near 1.00 and a standard deviation of less than 0.06. Moreover, a subchannel form of the PG correlation has been statistically verified on the Westinghouse and Combustion Engineering rod bundle databases, i.e., more than 7000 experimental CHF points from the Columbia University data bank were used.

  17. A Quasispecies Continuous Contact Model in a Critical Regime

    Science.gov (United States)

    Kondratiev, Yuri; Pirogov, Sergey; Zhizhina, Elena

    2016-04-01

We study a new non-equilibrium dynamical model: a marked continuous contact model in d-dimensional space (d ≥ 3). We prove that for certain values of the rates (the critical regime) this system has a one-parameter family of invariant measures labelled by the spatial density of particles. We then prove that the process starting from the marked Poisson measure converges to one of these invariant measures. In contrast with the continuous contact model studied earlier in Kondratiev (Infin Dimens Anal Quantum Probab Relat Top 11(2):231-258, 2008), here the spatial particle density is not a conserved quantity.

  18. Predictive Modeling in Actinide Chemistry and Catalysis

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Ping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-16

    These are slides from a presentation on predictive modeling in actinide chemistry and catalysis. The following topics are covered in these slides: Structures, bonding, and reactivity (bonding can be quantified by optical probes and theory, and electronic structures and reaction mechanisms of actinide complexes); Magnetic resonance properties (transition metal catalysts with multi-nuclear centers, and NMR/EPR parameters); Moving to more complex systems (surface chemistry of nanomaterials, and interactions of ligands with nanoparticles); Path forward and conclusions.

  19. Predictive modelling of evidence informed teaching

    OpenAIRE

    Zhang, Dell; Brown, C.

    2017-01-01

    In this paper, we analyse the questionnaire survey data collected from 79 English primary schools about the situation of evidence informed teaching, where the evidences could come from research journals or conferences. Specifically, we build a predictive model to see what external factors could help to close the gap between teachers’ belief and behaviour in evidence informed teaching, which is the first of its kind to our knowledge. The major challenge, from the data mining perspective, is th...

  20. A Predictive Model for Cognitive Radio

    Science.gov (United States)

    2006-09-14

Vadde et al. have applied response surface methodology to produce a model for prediction of the response in a given situation, narrowing service configurations to those that best meet communication delivery requirements in mobile ad hoc networks; from the resulting set of configurations, one may select randomly or apply additional screening criteria. [3] K. K. Vadde and V. R. Syrotiuk, "Factor interaction on service delivery in mobile ad hoc networks," 2004. [4] K. K. Vadde, M.-V. R. Syrotiuk, and D. C. Montgomery, 2000.

  1. Tectonic predictions with mantle convection models

    Science.gov (United States)

    Coltice, Nicolas; Shephard, Grace E.

    2018-04-01

    Over the past 15 yr, numerical models of convection in Earth's mantle have made a leap forward: they can now produce self-consistent plate-like behaviour at the surface together with deep mantle circulation. These digital tools provide a new window into the intimate connections between plate tectonics and mantle dynamics, and can therefore be used for tectonic predictions, in principle. This contribution explores this assumption. First, initial conditions at 30, 20, 10 and 0 Ma are generated by driving a convective flow with imposed plate velocities at the surface. We then compute instantaneous mantle flows in response to the guessed temperature fields without imposing any boundary conditions. Plate boundaries self-consistently emerge at correct locations with respect to reconstructions, except for small plates close to subduction zones. As already observed for other types of instantaneous flow calculations, the structure of the top boundary layer and upper-mantle slab is the dominant character that leads to accurate predictions of surface velocities. Perturbations of the rheological parameters have little impact on the resulting surface velocities. We then compute fully dynamic model evolution from 30 and 10 to 0 Ma, without imposing plate boundaries or plate velocities. Contrary to instantaneous calculations, errors in kinematic predictions are substantial, although the plate layout and kinematics in several areas remain consistent with the expectations for the Earth. For these calculations, varying the rheological parameters makes a difference for plate boundary evolution. Also, identified errors in initial conditions contribute to first-order kinematic errors. This experiment shows that the tectonic predictions of dynamic models over 10 My are highly sensitive to uncertainties of rheological parameters and initial temperature field in comparison to instantaneous flow calculations. 
Indeed, the initial conditions and the rheological parameters can be good enough

  2. Modern statistical models for forensic fingerprint examinations: a critical review.

    Science.gov (United States)

    Abraham, Joshua; Champod, Christophe; Lennard, Chris; Roux, Claude

    2013-10-10

Over the last decade, the development of statistical models in support of forensic fingerprint identification has been the subject of increasing research attention, spurred on recently by commentators who claim that the scientific basis for fingerprint identification has not been adequately demonstrated. Such models are increasingly seen as useful tools in support of the fingerprint identification process within or in addition to the ACE-V framework. This paper provides a critical review of recent statistical models from both a practical and theoretical perspective. This includes analysis of models of two different methodologies: Probability of Random Correspondence (PRC) models that focus on calculating probabilities of the occurrence of fingerprint configurations for a given population, and Likelihood Ratio (LR) models which use analysis of corresponding features of fingerprints to derive a likelihood value representing the evidential weighting for a potential source. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  3. Toward Developing Genetic Algorithms to Aid in Critical Infrastructure Modeling

    Energy Technology Data Exchange (ETDEWEB)

    2007-05-01

    Today’s society relies upon an array of complex national and international infrastructure networks such as transportation, telecommunication, financial and energy. Understanding these interdependencies is necessary in order to protect our critical infrastructure. The Critical Infrastructure Modeling System, CIMS©, examines the interrelationships between infrastructure networks. CIMS© development is sponsored by the National Security Division at the Idaho National Laboratory (INL) in its ongoing mission for providing critical infrastructure protection and preparedness. A genetic algorithm (GA) is an optimization technique based on Darwin’s theory of evolution. A GA can be coupled with CIMS© to search for optimum ways to protect infrastructure assets. This includes identifying optimum assets to enforce or protect, testing the addition of or change to infrastructure before implementation, or finding the optimum response to an emergency for response planning. This paper describes the addition of a GA to infrastructure modeling for infrastructure planning. It first introduces the CIMS© infrastructure modeling software used as the modeling engine to support the GA. Next, the GA techniques and parameters are defined. Then a test scenario illustrates the integration with CIMS© and the preliminary results.

  4. Predictive Modeling of the CDRA 4BMS

    Science.gov (United States)

    Coker, Robert F.; Knox, James C.

    2016-01-01

    As part of NASA's Advanced Exploration Systems (AES) program and the Life Support Systems Project (LSSP), fully predictive models of the Four Bed Molecular Sieve (4BMS) of the Carbon Dioxide Removal Assembly (CDRA) on the International Space Station (ISS) are being developed. This virtual laboratory will be used to help reduce mass, power, and volume requirements for future missions. In this paper we describe current and planned modeling developments in the area of carbon dioxide removal to support future crewed Mars missions as well as the resolution of anomalies observed in the ISS CDRA.

  5. A critical re-evaluation of the prediction of alkalinity and base cation chemistry from BGS sediment composition data.

    Science.gov (United States)

    Begum, S; McClean, C J; Cresser, M S; Adnan, M; Breward, N

    2014-06-01

The model of Begum et al. (2010), which predicts alkalinity and Ca and Mg concentrations in river water from available sediment composition data, has been critically re-evaluated using an independent validation data set. The results support the hypothesis that readily available stream sediment elemental composition data are useful for predicting mean and minimum concentrations of alkalinity, Ca and Mg in river water throughout the River Derwent catchment in North Yorkshire, without requiring land-use data inputs, as stream sediment composition reflects all aspects of the riparian-zone soil system, including land-use. However, it was shown for alkalinity prediction that rainfall exerts a significant dilution effect and should be incorporated into the model in addition to flow-path-weighted sediment Ca% and Mg%. The results also strongly suggest that in catchments with substantial rough moorland land-use, neutralization of organic acids consumes alkalinity, and this should be considered in any future development of the model. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Prediction of chronic critical illness in a general intensive care unit.

    Science.gov (United States)

    Loss, Sérgio H; Marchese, Cláudia B; Boniatti, Márcio M; Wawrzeniak, Iuri C; Oliveira, Roselaine P; Nunes, Luciana N; Victorino, Josué A

    2013-01-01

    To assess the incidence, costs, and mortality associated with chronic critical illness (CCI), and to identify clinical predictors of CCI in a general intensive care unit. This was a prospective observational cohort study. All patients receiving supportive treatment for over 20 days were considered chronically critically ill and eligible for the study. After applying the exclusion criteria, 453 patients were analyzed. There was an 11% incidence of CCI. Total length of hospital stay, costs, and mortality were significantly higher among patients with CCI. Mechanical ventilation, sepsis, and Glasgow score were among the clinical predictors identified; CCI burdens intensive care units with higher mortality, costs, and prolonged hospitalization. Factors identifiable at the time of admission or during the first week in the intensive care unit can be used to predict CCI. Copyright © 2013 Elsevier Editora Ltda. All rights reserved.

  7. Establishing Decision Trees for Predicting Successful Postpyloric Nasoenteric Tube Placement in Critically Ill Patients.

    Science.gov (United States)

    Chen, Weisheng; Sun, Cheng; Wei, Ru; Zhang, Yanlin; Ye, Heng; Chi, Ruibin; Zhang, Yichen; Hu, Bei; Lv, Bo; Chen, Lifang; Zhang, Xiunong; Lan, Huilan; Chen, Chunbo

    2016-08-31

    Despite the use of prokinetic agents, the overall success rate for postpyloric placement via a self-propelled spiral nasoenteric tube is quite low. This retrospective study was conducted in the intensive care units of 11 university hospitals from 2006 to 2016 among adult patients who underwent self-propelled spiral nasoenteric tube insertion. Success was defined as postpyloric nasoenteric tube placement confirmed by abdominal x-ray scan 24 hours after tube insertion. Chi-square automatic interaction detection (CHAID), simple classification and regression trees (SimpleCart), and J48 methodologies were used to develop decision tree models, and multiple logistic regression (LR) methodology was used to develop an LR model for predicting successful postpyloric nasoenteric tube placement. The area under the receiver operating characteristic curve (AUC) was used to evaluate the performance of these models. Successful postpyloric nasoenteric tube placement was confirmed in 427 of 939 patients enrolled. For predicting successful postpyloric nasoenteric tube placement, the performance of the 3 decision trees was similar in terms of the AUCs: 0.715 for the CHAID model, 0.682 for the SimpleCart model, and 0.671 for the J48 model. The AUC of the LR model was 0.729, which outperformed the J48 model. Both the CHAID and LR models achieved an acceptable discrimination for predicting successful postpyloric nasoenteric tube placement and were useful for intensivists in the setting of self-propelled spiral nasoenteric tube insertion. © 2016 American Society for Parenteral and Enteral Nutrition.
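
    The rank-based estimate of the AUC used to compare such models can be sketched in a few lines. The scores below are hypothetical, not the study's data.

```python
def auc(scores, labels):
    """AUC via the Mann-Whitney statistic: the probability that a randomly
    chosen positive case is scored above a randomly chosen negative case."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical scores from two models for 8 patients (1 = successful placement).
labels  = [1, 1, 1, 1, 0, 0, 0, 0]
model_a = [0.9, 0.8, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1]  # one positive ranked low
model_b = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]  # ranks perfectly
print(auc(model_a, labels))  # 0.9375
print(auc(model_b, labels))  # 1.0
```

    Comparing AUCs this way is exactly how the CHAID, SimpleCart, J48, and LR figures above are ranked against one another.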

  9. Critical exponents for the Reggeon quantum spin model

    International Nuclear Information System (INIS)

    Brower, R.C.; Furman, M.A.

    1978-01-01

    The Reggeon quantum spin (RQS) model on the transverse lattice in D-dimensional impact parameter space has been conjectured to have the same critical behaviour as the Reggeon field theory (RFT). Thus, from a high-'temperature' series of ten (D=2) and twenty (D=1) terms for the RQS model, the authors extrapolate to the critical temperature T = T_c by Padé approximants to obtain the exponents η = 0.238 ± 0.008, z = 1.16 ± 0.01, γ = 1.271 ± 0.007 for D=2 and η = 0.317 ± 0.002, z = 1.272 ± 0.007, γ = 1.736 ± 0.001, λ = 0.57 ± 0.03 for D=1. These exponents naturally interpolate between the D=0 and D=4-ε results for RFT, as expected on the basis of the universality conjecture. (Auth.)
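
    Padé extrapolation of a truncated series, the technique named above, can be illustrated with a [1/1] approximant. The exponential series stands in for the RQS high-temperature series, which is not reproduced here.

```python
import math

def pade_1_1(c0, c1, c2):
    """[1/1] Pade approximant of the series c0 + c1*x + c2*x**2 + ...:
    returns x -> (a0 + a1*x) / (1 + b1*x), matched through the x**2 term."""
    b1 = -c2 / c1            # chosen to cancel the x**2 mismatch
    a0 = c0
    a1 = c1 + c0 * b1
    return lambda x: (a0 + a1 * x) / (1 + b1 * x)

# exp(x) = 1 + x + x**2/2 + ...  gives the approximant (1 + x/2) / (1 - x/2).
f = pade_1_1(1.0, 1.0, 0.5)
truncated = 1 + 0.5 + 0.5**2 / 2       # the bare series at x = 0.5
print(abs(f(0.5) - math.exp(0.5)) < abs(truncated - math.exp(0.5)))  # True
```

    The rational form builds in a pole, which is why Padé approximants can locate a critical singularity that a truncated power series cannot.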

  10. A Comprehensive Assessment Model for Critical Infrastructure Protection

    Directory of Open Access Journals (Sweden)

    Häyhtiö Markus

    2017-12-01

    Full Text Available International business demands seamless service and IT infrastructure throughout the entire supply chain. However, dependencies between different parts of this vulnerable ecosystem form a fragile web. Assessing the financial effects of abnormalities in any part of the network is necessary in order to protect the network in a financially viable way. The contractual environment between the actors in a supply chain, spanning different business domains and functions, requires a management model that enables network-wide protection of critical infrastructure. In this paper the authors introduce such a model. It can be used to assess the financial differences between centralized and decentralized protection of critical infrastructure. As an end result of this assessment, business resilience to unknown threats can be improved across the entire supply chain.

  11. Predictive model for ice formation on superhydrophobic surfaces.

    Science.gov (United States)

    Bahadur, Vaibhav; Mishchenko, Lidiya; Hatton, Benjamin; Taylor, J Ashley; Aizenberg, Joanna; Krupenkin, Tom

    2011-12-06

    The prevention and control of ice accumulation has important applications in aviation, building construction, and energy conversion devices. One area of active research concerns the use of superhydrophobic surfaces for preventing ice formation. The present work develops a physics-based modeling framework to predict ice formation on cooled superhydrophobic surfaces resulting from the impact of supercooled water droplets. This modeling approach analyzes the multiple phenomena influencing ice formation on superhydrophobic surfaces through the development of submodels describing droplet impact dynamics, heat transfer, and heterogeneous ice nucleation. These models are then integrated together to achieve a comprehensive understanding of ice formation upon impact of liquid droplets at freezing conditions. The accuracy of this model is validated by its successful prediction of the experimental findings that demonstrate that superhydrophobic surfaces can fully prevent the freezing of impacting water droplets down to surface temperatures as low as -20 to -25 °C. The model can be used to study the influence of surface morphology, surface chemistry, and fluid and thermal properties on dynamic ice formation and identify parameters critical to achieving icephobic surfaces. The framework of the present work is the first detailed modeling tool developed for the design and analysis of surfaces for various ice prevention/reduction strategies. © 2011 American Chemical Society

  12. Predictive Modeling by the Cerebellum Improves Proprioception

    Science.gov (United States)

    Bhanpuri, Nasir H.; Okamura, Allison M.

    2013-01-01

    Because sensation is delayed, real-time movement control requires not just sensing, but also predicting limb position, a function hypothesized for the cerebellum. Such cerebellar predictions could contribute to perception of limb position (i.e., proprioception), particularly when a person actively moves the limb. Here we show that human cerebellar patients have proprioceptive deficits compared with controls during active movement, but not when the arm is moved passively. Furthermore, when healthy subjects move in a force field with unpredictable dynamics, they have active proprioceptive deficits similar to cerebellar patients. Therefore, muscle activity alone is likely insufficient to enhance proprioception and predictability (i.e., an internal model of the body and environment) is important for active movement to benefit proprioception. We conclude that cerebellar patients have an active proprioceptive deficit consistent with disrupted movement prediction rather than an inability to generally enhance peripheral proprioceptive signals during action and suggest that active proprioceptive deficits should be considered a fundamental cerebellar impairment of clinical importance. PMID:24005283

  13. Genus-two correlators for critical Ising model

    International Nuclear Information System (INIS)

    Behera, N.; Malik, R.P.; Kaul, R.K.

    1989-01-01

    The characters and one- and two-point correlators for the critical Ising model on a genus-two Riemann surface have been obtained using their modular transformation properties and the factorization properties in the pinching limit of the zero-homology cycle of the surface. The procedure can easily be generalized to higher-genus Riemann surfaces and also to other rational conformal field theories

  14. The IntFOLD server: an integrated web resource for protein fold recognition, 3D model quality assessment, intrinsic disorder prediction, domain prediction and ligand binding site prediction.

    Science.gov (United States)

    Roche, Daniel B; Buenavista, Maria T; Tetchner, Stuart J; McGuffin, Liam J

    2011-07-01

    The IntFOLD server is a novel independent server that integrates several cutting edge methods for the prediction of structure and function from sequence. Our guiding principles behind the server development were as follows: (i) to provide a simple unified resource that makes our prediction software accessible to all and (ii) to produce integrated output for predictions that can be easily interpreted. The output for predictions is presented as a simple table that summarizes all results graphically via plots and annotated 3D models. The raw machine readable data files for each set of predictions are also provided for developers, which comply with the Critical Assessment of Methods for Protein Structure Prediction (CASP) data standards. The server comprises an integrated suite of five novel methods: nFOLD4, for tertiary structure prediction; ModFOLD 3.0, for model quality assessment; DISOclust 2.0, for disorder prediction; DomFOLD 2.0 for domain prediction; and FunFOLD 1.0, for ligand binding site prediction. Predictions from the IntFOLD server were found to be competitive in several categories in the recent CASP9 experiment. The IntFOLD server is available at the following web site: http://www.reading.ac.uk/bioinf/IntFOLD/.

  15. Critical Source Area Delineation: The representation of hydrology in effective erosion modeling.

    Science.gov (United States)

    Fowler, A.; Boll, J.; Brooks, E. S.; Boylan, R. D.

    2017-12-01

    Despite decades of conservation and millions of conservation dollars, nonpoint source sediment loading associated with agricultural disturbance continues to be a significant problem in many parts of the world. Local and national conservation organizations are interested in targeting critical source areas for control strategy implementation. Currently, conservation practices are selected and located based on Revised Universal Soil Loss Equation (RUSLE) hillslope erosion modeling, and the Natural Resources Conservation Service will soon be transitioning to the Water Erosion Prediction Project (WEPP) model for the same purpose. We present an assessment of critical source areas targeted with RUSLE, WEPP, and a regionally validated hydrology model, the Soil Moisture Routing (SMR) model, to compare the location of critical areas for sediment loading and the effectiveness of control strategies. The three models are compared for the Palouse dryland cropping region of the inland northwest, with un-calibrated analyses of the Kamiache watershed using publicly available soils, land-use, and long-term simulated climate data. Critical source areas were mapped, and the side-by-side comparison exposes the differences in the location and timing of runoff and erosion predictions. RUSLE results appear most sensitive to slope-driven processes associated with infiltration excess. SMR captured saturation-excess-driven runoff events located at the toe slope position, while WEPP was able to capture both infiltration excess and saturation excess processes depending on soil type and management. A methodology is presented for down-scaling basin level screening to the hillslope management scale for local control strategies. Information on the location of runoff and erosion, driven by the runoff mechanism, is critical for effective treatment and conservation.
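
    The RUSLE estimate the comparison starts from is a plain product of factors. A sketch with illustrative, uncalibrated factor values (not Kamiache data):

```python
def rusle(R, K, LS, C, P):
    """RUSLE annual soil loss A = R * K * LS * C * P
    (rainfall erosivity, soil erodibility, slope length-steepness,
    cover management, support practice)."""
    return R * K * LS * C * P

# Two hypothetical hillslope cells differing only in the topographic factor LS:
toe_slope = rusle(R=100.0, K=0.3, LS=0.8, C=0.2, P=1.0)
steep_mid = rusle(R=100.0, K=0.3, LS=3.5, C=0.2, P=1.0)
print(toe_slope, steep_mid)  # the steeper cell is flagged as the critical source area
```

    Because LS enters multiplicatively, steep cells dominate the ranking, which is consistent with the observation above that RUSLE is most sensitive to slope-driven processes.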

  16. Critical properties of the Kitaev-Heisenberg Model

    Science.gov (United States)

    Sizyuk, Yuriy; Price, Craig; Perkins, Natalia

    2013-03-01

    Collective behavior of local moments in Mott insulators in the presence of strong spin-orbit coupling is one of the most interesting questions in modern condensed matter physics. Here we study the finite temperature properties of the Kitaev-Heisenberg model, which describes the interactions between the pseudospin J = 1/2 iridium moments on the honeycomb lattice. This model was suggested as a possible model to explain the low-energy physics of A2IrO3 compounds. In our study we show that the Kitaev-Heisenberg model may be mapped onto the six-state clock model with an intermediate power-law phase at finite temperatures. In the framework of Ginzburg-Landau theory, we provide an analysis of the critical properties of the finite-temperature ordering transitions. NSF grant DMR-1005932

  17. Gamma-Ray Pulsars Models and Predictions

    CERN Document Server

    Harding, A K

    2001-01-01

    Pulsed emission from gamma-ray pulsars originates inside the magnetosphere, from radiation by charged particles accelerated near the magnetic poles or in the outer gaps. In polar cap models, the high energy spectrum is cut off by magnetic pair production above an energy that is dependent on the local magnetic field strength. While most young pulsars with surface fields in the range B = 10^{12} - 10^{13} G are expected to have high energy cutoffs around several GeV, the gamma-ray spectra of old pulsars having lower surface fields may extend to 50 GeV. Although the gamma-ray emission of older pulsars is weaker, detecting pulsed emission at high energies from nearby sources would be an important confirmation of polar cap models. Outer gap models predict more gradual high-energy turnovers at around 10 GeV, but also predict an inverse Compton component extending to TeV energies. Detection of pulsed TeV emission, which would not survive attenuation at the polar caps, is thus an important test of outer gap models. N...

  18. A prediction model for Clostridium difficile recurrence

    Directory of Open Access Journals (Sweden)

    Francis D. LaBarbera

    2015-02-01

    Full Text Available Background: Clostridium difficile infection (CDI) is a growing problem in the community and hospital setting. Its incidence has been on the rise over the past two decades, and it is quickly becoming a major concern for the health care system. A high rate of recurrence is one of the major hurdles in the successful treatment of C. difficile infection. There have been few studies that have looked at patterns of recurrence. The studies currently available have shown a number of risk factors associated with C. difficile recurrence (CDR); however, there is little consensus on the impact of most of the identified risk factors. Methods: Our study was a retrospective chart review of 198 patients diagnosed with CDI via Polymerase Chain Reaction (PCR) from February 2009 to June 2013. In our study, we decided to use a machine learning algorithm called the Random Forest (RF) to analyze all of the factors proposed to be associated with CDR. This model is capable of making predictions based on a large number of variables, and has outperformed numerous other models and statistical methods. Results: We arrived at a model that was able to accurately predict CDR with a sensitivity of 83.3%, specificity of 63.1%, and area under the curve of 82.6%. These results are in line with other studies that have used the RF model. Conclusions: We hope that in the future, machine learning algorithms, such as the RF, will see a wider application.
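
    The sensitivity and specificity quoted above come from a confusion matrix over held-out cases. A minimal sketch with hypothetical labels (not the 198-patient data):

```python
def confusion_metrics(y_true, y_pred):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP) for binary labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical test-set labels (1 = recurrence) and model predictions.
y_true = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0]
sens, spec = confusion_metrics(y_true, y_pred)
print(sens, spec)  # 5/6 and 4/6: the shape of trade-off the abstract reports
```

    A random forest outputs a recurrence probability per patient, so moving the classification threshold trades sensitivity against specificity along a ROC curve.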

  19. Artificial Neural Network Model for Predicting Compressive Strength of Concrete

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

    Full Text Available Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort of applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing the model with unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20% and that 88% of the output results have absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results show that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
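
    A back-propagation network of the kind described reduces to a few lines of matrix code. This sketch trains on synthetic stand-in inputs (the paper's data sets are not reproduced) and assumes NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 4 mix parameters -> a "strength" target.
X = rng.uniform(0.0, 1.0, size=(200, 4))
y = (3 * X[:, 0] - 2 * X[:, 1] + 0.5 * X[:, 2] * X[:, 3]).reshape(-1, 1)

# One tanh hidden layer trained by batch gradient descent on mean squared error.
W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros((1, 1))
lr, losses = 0.05, []
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)            # forward pass
    err = h @ W2 + b2 - y
    losses.append(float(np.mean(err ** 2)))
    g2 = err / len(X)                   # backward pass (MSE gradient)
    g1 = (g2 @ W2.T) * (1 - h ** 2)     # propagated through the tanh layer
    W2 -= lr * h.T @ g2; b2 -= lr * g2.sum(0)
    W1 -= lr * X.T @ g1; b1 -= lr * g1.sum(0)

print(losses[0], losses[-1])  # the training error falls substantially
```

    The parametric study mentioned above is done on exactly such a trained network: sweep one input while holding the others fixed and observe the predicted strength.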

  20. Evaluating predictive models of software quality

    International Nuclear Information System (INIS)

    Ciaschini, V; Canaparo, M; Ronchieri, E; Salomoni, D

    2014-01-01

    Applications from High Energy Physics scientific community are constantly growing and implemented by a large number of developers. This implies a strong churn on the code and an associated risk of faults, which is unavoidable as long as the software undergoes active evolution. However, the necessities of production systems run counter to this. Stability and predictability are of paramount importance; in addition, a short turn-around time for the defect discovery-correction-deployment cycle is required. A way to reconcile these opposite foci is to use a software quality model to obtain an approximation of the risk before releasing a program to only deliver software with a risk lower than an agreed threshold. In this article we evaluated two quality predictive models to identify the operational risk and the quality of some software products. We applied these models to the development history of several EMI packages with intent to discover the risk factor of each product and compare it with its real history. We attempted to determine if the models reasonably maps reality for the applications under evaluation, and finally we concluded suggesting directions for further studies.

  1. A generative model for predicting terrorist incidents

    Science.gov (United States)

    Verma, Dinesh C.; Verma, Archit; Felmlee, Diane; Pearson, Gavin; Whitaker, Roger

    2017-05-01

    A major concern in coalition peace-support operations is the incidence of terrorist activity. In this paper, we propose a generative model for the occurrence of terrorist incidents, and illustrate that an increase in diversity, as measured by the number of different social groups to which an individual belongs, is inversely correlated with the likelihood of a terrorist incident in the society. A generative model is one that can predict the likelihood of events in new contexts, as opposed to statistical models, which predict future incidents based on the history of incidents in an existing context. Generative models can be useful in planning for persistent Information Surveillance and Reconnaissance (ISR) since they allow an estimation of regions in the theater of operation where terrorist incidents may arise, and thus can be used to better allocate the assignment and deployment of ISR assets. In this paper, we present a taxonomy of terrorist incidents, identify factors related to the occurrence of terrorist incidents, and provide a mathematical analysis calculating the likelihood of occurrence of terrorist incidents in three common real-life scenarios arising in peace-keeping operations.

  2. PREDICTION MODELS OF GRAIN YIELD AND CHARACTERIZATION

    Directory of Open Access Journals (Sweden)

    Narciso Ysac Avila Serrano

    2009-06-01

    Full Text Available With the objective to characterize the grain yield of five cowpea cultivars and to find linear regression models to predict it, a study was developed in La Paz, Baja California Sur, Mexico. A complete randomized blocks design was used. Simple and multivariate analyses of variance were carried out using the canonical variables to characterize the cultivars. The variables clusters per plant, pods per plant, pods per cluster, seed weight per plant, seed hectoliter weight, 100-seed weight, seed length, seed width, seed thickness, pod length, pod width, pod weight, seeds per pod, and seed weight per pod showed significant differences (P≤0.05) among cultivars. The Paceño and IT90K-277-2 cultivars showed the highest seed weight per plant. The linear regression models showed correlation coefficients ≥0.92. In these models, seed weight per plant, pods per cluster, pods per plant, clusters per plant, and pod length showed significant correlations (P≤0.05). In conclusion, the results showed that grain yield differs among cultivars and, for its estimation, the prediction models showed highly dependable determination coefficients.
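
    The linear prediction models referred to are ordinary least-squares fits. A sketch on synthetic plot-level traits (hypothetical values, not the La Paz data), assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60

# Hypothetical predictors: pods per plant, 100-seed weight (g), pod length (cm).
X = np.column_stack([rng.uniform(5, 25, n),
                     rng.uniform(10, 20, n),
                     rng.uniform(8, 15, n)])
grain_yield = X @ np.array([12.0, 8.0, 3.0]) + 50 + rng.normal(0, 10, n)

A = np.column_stack([np.ones(n), X])            # design matrix with intercept
beta, *_ = np.linalg.lstsq(A, grain_yield, rcond=None)
r = np.corrcoef(A @ beta, grain_yield)[0, 1]    # correlation coefficient
print(r)  # of the same order as the >= 0.92 correlations reported above
```

    The squared value of r is the determination coefficient the abstract cites as the measure of dependability.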

  3. Predictive Models for Normal Fetal Cardiac Structures.

    Science.gov (United States)

    Krishnan, Anita; Pike, Jodi I; McCarter, Robert; Fulgium, Amanda L; Wilson, Emmanuel; Donofrio, Mary T; Sable, Craig A

    2016-12-01

    Clinicians rely on age- and size-specific measures of cardiac structures to diagnose cardiac disease. No universally accepted normative data exist for fetal cardiac structures, and most fetal cardiac centers do not use the same standards. The aim of this study was to derive predictive models for Z scores for 13 commonly evaluated fetal cardiac structures using a large heterogeneous population of fetuses without structural cardiac defects. The study used archived normal fetal echocardiograms in representative fetuses aged 12 to 39 weeks. Thirteen cardiac dimensions were remeasured by a blinded echocardiographer from digitally stored clips. Studies with inadequate imaging views were excluded. Regression models were developed to relate each dimension to estimated gestational age (EGA) by dates, biparietal diameter, femur length, and estimated fetal weight by the Hadlock formula. Dimension outcomes were transformed (e.g., using the logarithm or square root) as necessary to meet the normality assumption. Higher order terms, quadratic or cubic, were added as needed to improve model fit. Information criteria and adjusted R² values were used to guide final model selection. Each Z-score equation is based on measurements derived from 296 to 414 unique fetuses. EGA yielded the best predictive model for the majority of dimensions; adjusted R² values ranged from 0.72 to 0.893. However, each of the other highly correlated (r > 0.94) biometric parameters was an acceptable surrogate for EGA. In most cases, the best fitting model included squared and cubic terms to introduce curvilinearity. For each dimension, models based on EGA provided the best fit for determining normal measurements of fetal cardiac structures. Nevertheless, other biometric parameters, including femur length, biparietal diameter, and estimated fetal weight, provided results that were nearly as good. Comprehensive Z-score results are available on the basis of highly predictive models derived from gestational age.
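
    A Z-score equation of the kind derived above is just a fitted mean curve plus a residual SD. A sketch with synthetic measurements (the coefficients below are invented, not the study's), assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "normal" measurements: a cardiac dimension (mm) vs gestational age (weeks).
ega = rng.uniform(12, 39, 400)
dim = 0.002 * ega**3 - 0.15 * ega**2 + 4.0 * ega - 20 + rng.normal(0, 1.0, 400)

coef = np.polyfit(ega, dim, deg=3)       # cubic mean model in EGA
sd = (dim - np.polyval(coef, ega)).std(ddof=4)   # residual SD (4 fitted parameters)

def z_score(measured, weeks):
    """Z = (measured - predicted mean at this EGA) / residual SD."""
    return (measured - np.polyval(coef, weeks)) / sd

print(z_score(np.polyval(coef, 24.0), 24.0))           # exactly on the curve -> Z = 0
print(z_score(np.polyval(coef, 24.0) + 2 * sd, 24.0))  # two residual SDs above -> Z ~ 2
```

    The cubic term mirrors the curvilinearity the study found necessary; swapping `ega` for femur length or biparietal diameter gives the surrogate models mentioned above.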

  4. A support vector machine tool for adaptive tomotherapy treatments: Prediction of head and neck patients criticalities.

    Science.gov (United States)

    Guidi, Gabriele; Maffei, Nicola; Vecchi, Claudio; Ciarmatori, Alberto; Mistretta, Grazia Maria; Gottardi, Giovanni; Meduri, Bruno; Baldazzi, Giuseppe; Bertoni, Filippo; Costi, Tiziana

    2015-07-01

    Adaptive radiation therapy (ART) is an advanced field of radiation oncology. Image-guided radiation therapy (IGRT) methods can support daily setup and assess anatomical variations during therapy, which could prevent incorrect dose distribution and unexpected toxicities. A re-planning to correct these anatomical variations should be done daily or weekly but, to be applicable to a large number of patients, still requires considerable time and resources. Using unsupervised machine learning on retrospective data, we have developed a predictive network to identify patients who would benefit from a re-planning. 1200 MVCTs of 40 head and neck (H&N) cases were re-contoured automatically using deformable hybrid registration and structure mapping. The deformable algorithm and the homemade MATLAB® machine learning process we developed allow prediction of criticalities for Tomotherapy treatments. Using retrospective analysis of H&N treatments, we have investigated and predicted tumor shrinkage and organ-at-risk (OAR) deformations. Support vector machine (SVM) and cluster analysis identified cases or treatment sessions with potential criticalities, based on dose and volume discrepancies between fractions. During the first weeks of treatment, 84% of patients showed an output comparable to average standard radiation treatment behavior. Starting from the 4th week, significant morpho-dosimetric changes affect 77% of patients, suggesting the need for re-planning. The comparison of the delivered treatment and the ART simulation was carried out with receiver operating characteristic (ROC) curves, showing a monotonic increase of the ROC area. Warping methods, supported by daily image analysis and predictive tools, can improve personalization and monitoring of each treatment, thereby minimizing anatomic and dosimetric divergences from initial constraints. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
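
    The SVM stage can be sketched with the Pegasos sub-gradient algorithm on toy two-feature fractions. The features and labels below are invented, and this is a generic linear SVM rather than the authors' MATLAB pipeline; NumPy is assumed.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy sessions: two features each (e.g. volume change, dose deviation);
# +1 = "re-plan advised", -1 = "within tolerance".
n = 200
X = rng.normal(0, 1, (n, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
X += 0.3 * y[:, None]          # push the classes apart so a margin exists

# Pegasos: stochastic sub-gradient descent on the regularized hinge loss.
lam, w = 0.01, np.zeros(2)
for t in range(1, 20001):
    i = rng.integers(n)
    eta = 1.0 / (lam * t)
    if y[i] * (X[i] @ w) < 1:              # margin violated: hinge gradient step
        w = (1 - eta * lam) * w + eta * y[i] * X[i]
    else:                                  # only the regularizer contributes
        w = (1 - eta * lam) * w

train_acc = float(np.mean(np.sign(X @ w) == y))
print(train_acc)
```

    Each new fraction's features are then scored by the trained separator, flagging sessions whose morpho-dosimetric drift warrants a re-plan.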

  5. Using plural modeling for predicting decisions made by adaptive adversaries

    International Nuclear Information System (INIS)

    Buede, Dennis M.; Mahoney, Suzanne; Ezell, Barry; Lathrop, John

    2012-01-01

    Incorporating an appropriate representation of the likelihood of terrorist decision outcomes into risk assessments associated with weapons of mass destruction attacks has been a significant problem for countries around the world. Developing these likelihoods gets at the heart of the most difficult predictive problems: human decision making, adaptive adversaries, and adversaries about which very little is known. A plural modeling approach is proposed that incorporates estimates of all critical uncertainties: who the adversary is and what skills and resources are available to him; what information is known to the adversary and what perceptions of the important facts are held by this group or individual; what the adversary knows about the countermeasure actions taken by the government in question; what the adversary's objectives are and the priorities of those objectives; what would trigger the adversary to start an attack and what kind of success the adversary desires; how realistic the adversary is in estimating the success of an attack; and how the adversary makes a decision and what type of model best predicts this decision-making process. A computational framework is defined to aggregate the predictions from a suite of models, based on this broad array of uncertainties. A validation approach is described that deals with a significant scarcity of data.

  6. Model-based automated testing of critical PLC programs.

    CERN Document Server

    Fernández Adiego, B; Tournier, J-C; González Suárez, V M; Bliudze, S

    2014-01-01

    Testing of critical PLC (Programmable Logic Controller) programs remains a challenging task for control system engineers as it can rarely be automated. This paper proposes a model-based approach which uses the BIP (Behavior, Interactions and Priorities) framework to perform automated testing of PLC programs developed with the UNICOS (UNified Industrial COntrol System) framework. This paper defines the translation procedure and rules from UNICOS to BIP, which can be fully automated in order to hide the complexity of the underlying model from the control engineers. The approach is illustrated and validated through the study of a water treatment process.
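
    The essence of such model-based verification is exhaustive exploration of a formal model's state space. A toy stand-in (a two-variable interlock, not the UNICOS-to-BIP translation itself):

```python
from collections import deque

# Toy control model: state = (valve_open, pump_on); commands drive transitions.
def step(state, cmd):
    valve, pump = state
    if cmd == "open_valve":
        valve = True
    elif cmd == "close_valve":
        valve, pump = False, False     # interlock: closing the valve stops the pump
    elif cmd == "start_pump" and valve:
        pump = True                    # the pump may start only with the valve open
    elif cmd == "stop_pump":
        pump = False
    return (valve, pump)

COMMANDS = ["open_valve", "close_valve", "start_pump", "stop_pump"]

def explore(initial):
    """Breadth-first enumeration of every state reachable under any command sequence."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        s = frontier.popleft()
        for cmd in COMMANDS:
            nxt = step(s, cmd)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

reachable = explore((False, False))
# Safety property checked over the whole space: the pump never runs with the valve closed.
print(all(valve or not pump for valve, pump in reachable), len(reachable))
```

    A framework like BIP performs this exploration on a formally translated model of the real PLC program, so the safety check covers every reachable state rather than a hand-picked test set.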

  7. An analytical model for climatic predictions

    International Nuclear Information System (INIS)

    Njau, E.C.

    1990-12-01

    A climatic model based upon analytical expressions is presented. This model is capable of making long-range predictions of heat energy variations on regional or global scales. These variations can then be transformed into corresponding variations of some other key climatic parameters, since weather and climatic changes are basically driven by differential heating and cooling around the earth. On the basis of the mathematical expressions upon which the model is based, it is shown that the global heat energy structure (and hence the associated climatic system) is characterized by zonally as well as latitudinally propagating fluctuations at frequencies downward of 0.5 day⁻¹. We have calculated the propagation speeds for those particular frequencies that are well documented in the literature. The calculated speeds are in excellent agreement with the measured speeds. (author). 13 refs

  8. An Anisotropic Hardening Model for Springback Prediction

    International Nuclear Information System (INIS)

    Zeng, Danielle; Xia, Z. Cedric

    2005-01-01

    As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closure panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture a realistic Bauschinger effect at reverse loading, such as when material passes through die radii or a drawbead during the sheet metal forming process. This model accounts for a material anisotropic yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent the Bauschinger effect. The effectiveness of the model is demonstrated by comparison of numerical and experimental springback results for a DP600 straight U-channel test
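
    The Bauschinger effect the model must capture is visible even in a minimal 1D return-mapping with linear kinematic hardening. This is a generic textbook sketch with invented material constants, not the modified Mroz model itself.

```python
E, H, sy = 200e3, 10e3, 300.0   # Young's modulus, hardening modulus, yield stress (MPa)

def respond(strain_path):
    """1D elastoplastic response with linear kinematic hardening (backstress alpha)."""
    eps_p, alpha, stresses = 0.0, 0.0, []
    for eps in strain_path:
        trial = E * (eps - eps_p)            # elastic trial stress
        f = abs(trial - alpha) - sy          # yield function
        if f > 0:                            # return mapping onto the yield surface
            sgn = 1.0 if trial > alpha else -1.0
            dg = f / (E + H)
            eps_p += dg * sgn
            alpha += H * dg * sgn            # the backstress drags the surface along
        stresses.append(E * (eps - eps_p))
    return stresses, alpha

tension = [i * 1e-4 for i in range(41)]      # strain ramp to 0.4%, past first yield
stresses, alpha = respond(tension)
peak = stresses[-1]
reverse_yield = alpha - sy                   # stress at which reverse plastic flow starts
print(peak, reverse_yield)  # |reverse_yield| < peak: the Bauschinger effect
```

    Because the yield surface translates with the backstress rather than expanding, reverse yielding begins at a lower stress magnitude than the forward peak, which is exactly what the tension/compression test data above are used to calibrate.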

  9. Prediction of critical heat flux in fuel assemblies using a CHF table method

    Energy Technology Data Exchange (ETDEWEB)

    Chun, Tae Hyun; Hwang, Dae Hyun; Bang, Je Geon [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of); Baek, Won Pil; Chang, Soon Heung [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1997-12-31

    A CHF table method has been assessed in this study for rod bundle CHF predictions. At the conceptual design stage for a new reactor, a general critical heat flux (CHF) prediction method with a wide applicable range and reasonable accuracy is essential to the thermal-hydraulic design and safety analysis. In many aspects, a CHF table method (i.e., the use of a round-tube CHF table with appropriate bundle correction factors) can be a promising way to fulfill this need. The assessment of the CHF table method has therefore been performed with the bundle CHF data relevant to pressurized water reactors (PWRs). For comparison purposes, W-3R and EPRI-1 were also applied to the same database. Data analysis has been conducted with the subchannel code COBRA-IV-I. The CHF table method shows the best predictions based on the direct substitution method. Improvements of the bundle correction factors, especially for the spacer grid and cold wall effects, are desirable for better predictions. Though the present assessment is somewhat limited in both fuel geometries and operating conditions, the CHF table method clearly shows potential to be a general CHF predictor. 8 refs., 3 figs., 3 tabs. (Author)
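
The table-plus-correction-factor idea can be sketched as follows. The table entries, grid points, and correction factors below are hypothetical placeholders, not values from an actual CHF look-up table (real tables, such as the Groeneveld tables, also span mass flux and are far denser):

```python
import numpy as np

# Illustrative round-tube CHF table (kW/m^2), indexed by pressure and
# local thermodynamic quality; all numbers are invented for the sketch.
pressures = np.array([7.0, 10.0, 15.0])      # MPa
qualities = np.array([0.0, 0.2, 0.4])
chf_table = np.array([
    [3200.0, 2400.0, 1600.0],
    [2900.0, 2100.0, 1400.0],
    [2500.0, 1800.0, 1100.0],
])

def chf_tube(p, x):
    """Bilinear interpolation of the round-tube CHF table."""
    # interpolate along quality within each pressure row, then along pressure
    rows = np.array([np.interp(x, qualities, chf_table[i])
                     for i in range(len(pressures))])
    return float(np.interp(p, pressures, rows))

def chf_bundle(p, x, k_grid=1.1, k_cold_wall=0.95):
    """Apply multiplicative bundle correction factors (spacer grid, cold
    wall); the factor values here are purely illustrative."""
    return chf_tube(p, x) * k_grid * k_cold_wall

q_predicted = chf_bundle(10.0, 0.1)
```

The direct substitution method mentioned in the abstract would evaluate such a lookup at the local subchannel conditions computed by a code like COBRA-IV-I.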

  10. CRITICAL CURVES AND CAUSTICS OF TRIPLE-LENS MODELS

    International Nuclear Information System (INIS)

    Daněk, Kamil; Heyrovský, David

    2015-01-01

    Among the 25 planetary systems detected up to now by gravitational microlensing, there are two cases of a star with two planets, and two cases of a binary star with a planet. Other, yet undetected types of triple lenses include triple stars or stars with a planet with a moon. The analysis and interpretation of such events is hindered by the lack of understanding of essential characteristics of triple lenses, such as their critical curves and caustics. We present here analytical and numerical methods for mapping the critical-curve topology and caustic cusp number in the parameter space of n-point-mass lenses. We apply the methods to the analysis of four symmetric triple-lens models, and obtain altogether 9 different critical-curve topologies and 32 caustic structures. While these results include various generic types, they represent just a subset of all possible triple-lens critical curves and caustics. Using the analyzed models, we demonstrate interesting features of triple lenses that do not occur in two-point-mass lenses. We show an example of a lens that cannot be described by the Chang–Refsdal model in the wide limit. In the close limit we demonstrate unusual structures of primary and secondary caustic loops, and explain the conditions for their occurrence. In the planetary limit we find that the presence of a planet may lead to a whole sequence of additional caustic metamorphoses. We show that a pair of planets may change the structure of the primary caustic even when placed far from their resonant position at the Einstein radius
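
For point-mass lenses, the critical curves are the zero set of the Jacobian determinant of the lens mapping, which can be located numerically. The equal-mass triangular configuration below is illustrative only, not one of the four symmetric models analyzed in the paper:

```python
import numpy as np

# For n point masses (complex positions z_i, masses m_i in Einstein-radius
# units) the Jacobian determinant of the lens mapping is
#   det J = 1 - | sum_i m_i / (conj(z) - conj(z_i))^2 |^2,
# and the critical curves are its zeros in the image plane.
lens_pos = np.array([0.7 + 0j, -0.35 + 0.6j, -0.35 - 0.6j])  # illustrative triple
masses = np.array([1 / 3, 1 / 3, 1 / 3])

def det_jacobian(z):
    """det J of the point-mass lens mapping at image-plane position(s) z."""
    zb = np.conj(np.asarray(z))[..., None]
    shear = np.sum(masses / (zb - np.conj(lens_pos)) ** 2, axis=-1)
    return 1.0 - np.abs(shear) ** 2

# Sample det J on a grid; sign changes bracket the critical curves
xs = np.linspace(-2.0, 2.0, 400)
grid = xs[None, :] + 1j * xs[:, None]
sign_changes = int(np.abs(np.diff(np.sign(det_jacobian(grid)), axis=0)).sum() // 2)
```

Mapping the located critical points through the lens equation would then trace the corresponding caustics in the source plane; classifying their topology as the paper does requires following how these zero contours merge and split as the lens parameters vary.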

  11. Prediction models for drug-induced hepatotoxicity by using weighted molecular fingerprints

    OpenAIRE

    Kim, Eunyoung; Nam, Hojung

    2017-01-01

    Background Drug-induced liver injury (DILI) is a critical issue in drug development because DILI causes failures in clinical trials and the withdrawal of approved drugs from the market. There have been many attempts to predict the risk of DILI based on in vivo and in silico identification of hepatotoxic compounds. In the current study, we propose an in silico model that predicts DILI using weighted molecular fingerprints. Results In this study, we used 881 bits of molecular fingerpri...

  12. Web tools for predictive toxicology model building.

    Science.gov (United States)

    Jeliazkova, Nina

    2012-07-01

    The development and use of web tools in chemistry has already accumulated more than 15 years of history. Powered by advances in Internet technologies, the current generation of web systems is starting to expand into areas traditionally reserved for desktop applications. Web platforms integrate data storage, cheminformatics, and data analysis tools. The ease of use and the collaborative potential of the web are compelling, despite the challenges. The topic of this review is a set of recently published web tools that facilitate predictive toxicology model building. The focus is on software platforms offering web access to chemical structure-based methods, although some of the frameworks could also provide bioinformatics or hybrid data analysis functionalities. A number of historical and current developments are cited. In order to provide a comparable assessment, the following characteristics are considered: support for workflows, descriptor calculations, visualization, modeling algorithms, data management and data sharing capabilities, availability of GUI or programmatic access, and implementation details. The success of the web is largely due to its highly decentralized, yet sufficiently interoperable, model for information access. The expected future convergence between cheminformatics and bioinformatics databases provides new challenges for the management and analysis of large data sets. Web tools in predictive toxicology will likely continue to evolve toward the right mix of flexibility, performance, scalability, interoperability, sets of unique features offered, friendly user interfaces, programmatic access for advanced users, platform independence, results reproducibility, curation and crowdsourcing utilities, collaborative sharing, and secure access.

  13. [Endometrial cancer: Predictive models and clinical impact].

    Science.gov (United States)

    Bendifallah, Sofiane; Ballester, Marcos; Daraï, Emile

    2017-12-01

    In France in 2015, endometrial cancer (EC) was the first gynecological cancer in terms of incidence and the fourth leading cause of cancer death in women. About 8151 new cases and nearly 2179 deaths were reported. Treatments (surgery, external radiotherapy, brachytherapy and chemotherapy) are currently delivered on the basis of estimates of the risk of recurrence, of lymph node metastasis, or of survival probability. This risk is determined from prognostic factors (clinical, histological, imaging, biological) taken alone or grouped into classification systems, which are currently insufficient to account for the evolutionary and prognostic heterogeneity of endometrial cancer. For endometrial cancer, the concept of mathematical modeling and its application to prediction have developed in recent years. These biomathematical tools have opened a new era of care oriented toward targeted therapies and personalized treatments. Many predictive models have been published to estimate the risk of recurrence and lymph node metastasis, but only a tiny fraction of them are sufficiently relevant and of clinical utility. The avenues for optimization are multiple and varied, suggesting that these mathematical models may find a place in clinical practice in the near future. The development of high-throughput genomics is likely to offer a more detailed molecular characterization of the disease and its heterogeneity. Copyright © 2017 Société Française du Cancer. Published by Elsevier Masson SAS. All rights reserved.

  14. Lyapunov exponent and criticality in the Hamiltonian mean field model

    Science.gov (United States)

    Filho, L. H. Miranda; Amato, M. A.; Rocha Filho, T. M.

    2018-03-01

    We investigate the dependence of the largest Lyapunov exponent (LLE) of an N-particle self-gravitating ring model at equilibrium on the number of particles and on the energy. This model has a continuous phase transition from a ferromagnetic to a homogeneous phase, and we numerically confirm with large-scale simulations the existence of a critical exponent associated with the LLE, although at variance with the theoretical estimate. The existence of strong chaos in the magnetized state, evidenced by a positive Lyapunov exponent, is explained by the coupling of individual particle oscillations to the diffusive motion of the center of mass of the system; this coupling also results in a change of the scaling of the LLE with the number of particles. We also discuss thoroughly the validity and limits of the approximations made by a geometrical model for the analytic estimate of the LLE.
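
The LLE computations underlying such studies are typically Benettin-style two-trajectory estimates: evolve a reference and a slightly perturbed trajectory, accumulate the log growth of their separation, and renormalize after every step. For testability the sketch below applies the method to the logistic map, whose exact LLE at r = 4 is ln 2, rather than to the N-particle ring model itself:

```python
import math

def lle_benettin(f, x0, d0=1e-8, n_iter=50000, transient=1000):
    """Benettin-style largest-Lyapunov estimate for a 1-D map: track a
    reference and a perturbed orbit, sum the log growth of their
    separation, and rescale the separation back to d0 each step."""
    x = x0
    for _ in range(transient):
        x = f(x)
    y = x + d0
    acc = 0.0
    for _ in range(n_iter):
        x, y = f(x), f(y)
        d = abs(y - x)
        if d == 0.0:               # orbits collapsed; re-seed the offset
            d, y = d0, x + d0
        acc += math.log(d / d0)
        y = x + d0 * (y - x) / d   # renormalize separation to d0
    return acc / n_iter

# Toy check on the chaotic logistic map (not the HMF/ring model):
# at r = 4 the exact LLE is ln 2 ≈ 0.693
lle = lle_benettin(lambda x: 4.0 * x * (1.0 - x), 0.3)
```

For an N-particle Hamiltonian system the same bookkeeping is done on the full phase-space separation vector, with the renormalization applied to its Euclidean norm.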

  15. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution of partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies specified application requirements.

  16. Predictions of models for environmental radiological assessment

    International Nuclear Information System (INIS)

    Peres, Sueli da Silva; Lauria, Dejanira da Costa; Mahler, Claudio Fernando

    2011-01-01

    In the field of environmental impact assessment, models are used for estimating the source term, environmental dispersion and transfer of radionuclides, exposure pathways, radiation dose, and the risk to human beings. Although it is recognized that site-specific local data are important to improve the quality of dose assessment results, obtaining such data can in fact be very difficult and expensive. Sources of uncertainty are numerous, among which we can cite the subjectivity of modelers, exposure scenarios and pathways, the codes used, and general parameters. The various models available use different mathematical approaches with different complexities that can result in different predictions. Thus, for the same inputs, different models can produce very different outputs. This paper briefly presents the main advances in the field of environmental radiological assessment that aim to improve the reliability of the models used in the assessment of environmental radiological impact. A model intercomparison exercise supplied incompatible results for 137Cs and 60Co, underlining the need to develop reference methodologies for environmental radiological assessment that allow dose estimates to be confronted on a common comparison basis. The results of the intercomparison exercise are presented briefly. (author)

  17. Brittle Creep Failure, Critical Behavior, and Time-to-Failure Prediction of Concrete under Uniaxial Compression

    Directory of Open Access Journals (Sweden)

    Yingchong Wang

    2015-01-01

    Understanding the time-dependent brittle deformation behavior of concrete as a main building material is fundamental for lifetime prediction and engineering design. Herein, we present experimental measurements of brittle creep failure, critical behavior, and the dependence of time-to-failure on the secondary creep rate of concrete under sustained uniaxial compression. A complete evolution process of creep failure is achieved. Three typical creep stages are observed: the primary (decelerating), secondary (steady-state), and tertiary (accelerating) creep stages. The time-to-failure shows sample specificity although all samples exhibit a similar creep process. All specimens exhibit a critical power-law behavior with an exponent of −0.51 ± 0.06, approximately equal to the theoretical value of −1/2. All samples have a long-term secondary stage characterized by a constant strain rate that dominates the lifetime of a sample. The average creep rate, expressed as the total creep strain over the lifetime (t_f − t_0) of each specimen, shows a power-law dependence on the secondary creep rate with an exponent of −1. This could provide a clue to the prediction of the time-to-failure of concrete based on monitoring of the creep behavior at the steady stage.
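
The reported exponent of −1 is a Monkman–Grant-type relation: the lifetime is inversely proportional to the secondary (steady-state) creep rate. A synthetic check, with an assumed illustrative proportionality constant, recovers the exponent from a log-log fit:

```python
import math

# Monkman-Grant-type relation implied by the abstract:
#   t_f - t_0 ~ C / rate_ss  (exponent -1).
# C is a hypothetical total-strain scale chosen only for the sketch.
C = 0.02
rates = [1e-8, 1e-7, 1e-6, 1e-5]           # secondary creep rates (1/s)
lifetimes = [C / r for r in rates]         # implied times-to-failure (s)

# Recover the exponent with an ordinary log-log least-squares fit
n = len(rates)
lx = [math.log(r) for r in rates]
ly = [math.log(t) for t in lifetimes]
mx, my = sum(lx) / n, sum(ly) / n
slope = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
         / sum((a - mx) ** 2 for a in lx))
# slope ≈ -1: lifetime inversely proportional to the secondary creep rate
```

In practice the fit would be run on measured (rate, lifetime) pairs, and the recovered constant C, not just the exponent, would carry the predictive content.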

  18. Predicting Plant-Accessible Water in the Critical Zone: Mountain Ecosystems in a Mediterranean Climate

    Science.gov (United States)

    Klos, P. Z.; Goulden, M.; Riebe, C. S.; Tague, C.; O'Geen, A. T.; Flinchum, B. A.; Safeeq, M.; Conklin, M. H.; Hart, S. C.; Asefaw Berhe, A.; Hartsough, P. C.; Holbrook, S.; Bales, R. C.

    2017-12-01

    Enhanced understanding of subsurface water storage, and the below-ground architecture and processes that create it, will advance our ability to predict how the impacts of climate change - including drought, forest mortality, wildland fire, and strained water security - will take form in the decades to come. Previous research has examined the importance of plant-accessible water in soil, but in upland landscapes within Mediterranean climates the soil is often only the upper extent of subsurface water storage. We draw insights from both this previous research and a case study of the Southern Sierra Critical Zone Observatory to: define attributes of subsurface storage, review observed patterns in its distribution, highlight nested methods for its estimation across scales, and showcase the fundamental processes controlling its formation. We observe that forest ecosystems at our sites subsist on lasting plant-accessible stores of subsurface water during the summer dry period and during multi-year droughts. This indicates that trees in these forest ecosystems are rooted deeply in the weathered, highly porous saprolite, which reaches up to 10-20 m beneath the surface. This confirms the importance of large volumes of subsurface water in supporting ecosystem resistance to climate and landscape change across a range of spatiotemporal scales. This research enhances the ability to predict the extent of deep subsurface storage across landscapes; aiding in the advancement of both critical zone science and the management of natural resources emanating from similar mountain ecosystems worldwide.

  19. Modeling critical episodes of air pollution by PM10 in Santiago, Chile: comparison of the predictive efficiency of parametric and non-parametric statistical models

    Directory of Open Access Journals (Sweden)

    Sergio A. Alvarado

    2010-12-01

    Objective: To evaluate the predictive efficiency of parametric and non-parametric statistical models for predicting next-day critical episodes of air pollution by particulate matter (PM10) that exceed the daily air quality standard in Santiago, Chile. Accurate prediction of such episodes would allow the health authorities to decree restrictive measures that lessen the severity of the episode and thus protect the community's health. Methods: We used the PM10 concentrations registered at a station of the MACAM-2 air quality monitoring network, considering 152 daily observations of 14 variables, together with meteorological information recorded from 2001 to 2004. Parametric Gamma models were fitted using the statistical package STATA v11, and non-parametric models were fitted using a demo version of the statistical software MARS v2.0 distributed by Salford Systems. Results: Both modeling methods show a high correlation between observed and predicted values. The Gamma models achieved better hit rates than MARS for the PM10 concentrations.
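
A minimal sketch of how such episode forecasts can be scored, assuming a hypothetical 150 µg/m³ daily standard (the actual Chilean norm and the paper's cut-offs may differ):

```python
# A day counts as a "critical episode" when observed PM10 exceeds the
# daily standard; competing models are compared on how many observed
# episodes they also predicted (the "hit rate" notion used above).
THRESHOLD = 150.0  # hypothetical daily PM10 standard, ug/m^3

def episode_hit_rate(observed, predicted, threshold=THRESHOLD):
    """Fraction of observed exceedance days the model also predicted."""
    hits = sum(1 for o, p in zip(observed, predicted)
               if o > threshold and p > threshold)
    events = sum(1 for o in observed if o > threshold)
    return hits / events if events else float("nan")

obs = [120.0, 180.0, 95.0, 210.0, 160.0]   # invented daily maxima
pred = [130.0, 175.0, 90.0, 140.0, 165.0]
rate = episode_hit_rate(obs, pred)          # 2 of 3 exceedances predicted
```

A full comparison, as in the paper, would also track false alarms, since decreeing restrictions on a clean day has its own cost.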

  20. Critical-ionization model for the dissolution of phenolic polymers in aqueous base

    Science.gov (United States)

    Flanagin, Lewis Wayne

    The microelectronics industry owes much of its success to the miniaturization of the integrated circuit. Advances in the design of photoresists used in microlithography have enabled this progress to continue. The majority of photoresist formulations contain phenolic polymers, and their imaging capabilities arise from the radiation-induced differences in dissolution rate between exposed and unexposed regions of the photoresist in basic aqueous solutions. The Critical-Ionization Model provides an understanding at the molecular level of the important factors in the aqueous dissolution of phenolic polymers below the entanglement molecular weight. The model postulates that a critical fraction of the acidic sites on a phenolic polymer must ionize for the polymer to dissolve in aqueous base. A functional relationship between the dissolution rate and the degree of ionization is developed based on this hypothesis. Quantitative predictions for the effects of polymer structure on the dissolution rate follow from equations relating the degree of ionization to the degree of polymerization, the polymer pKa, and the developer concentration. Experimental verification is provided through tests of model predictions for the minimum base concentration required for development and the effects of polymer structure on the dissolution rate. Molecular simulations of resist dissolution based on the Critical-Ionization Model are used to probe the mechanism of surface inhibition and the evolution of edge roughness and surface roughness in photoresist profiles. These simulations demonstrate the dependence of the dissolution rate and surface roughness on the molecular-weight distribution of the polymer, degree of deprotection, void fraction, and developer concentration. Model parameters are evaluated using experimental data from turbidimetry, potentiometry, and copolymer studies.
Means to extend the scope of the Critical-Ionization Model to describe and ultimately simulate other steps in the
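
The core hypothesis can be sketched in a few lines: approximate site ionization by the Henderson–Hasselbalch relation and let the chain dissolve only when the ionized fraction reaches a critical value. Both the pKa and f_crit below are illustrative, not fitted values from the work:

```python
# Minimal sketch of the critical-ionization criterion: a phenolic chain
# dissolves only when the fraction of ionized acidic sites reaches a
# critical value f_crit; ionization is approximated here by the
# Henderson-Hasselbalch relation.

def ionized_fraction(pH, pKa):
    """Equilibrium fraction of deprotonated (ionized) phenolic sites."""
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

def dissolves(pH, pKa=10.0, f_crit=0.3):
    """Critical-ionization criterion: dissolve iff ionization >= f_crit."""
    return ionized_fraction(pH, pKa) >= f_crit

# A more concentrated (higher-pH) developer pushes the polymer past the
# threshold; a weak developer leaves it undissolved
weak, strong = dissolves(9.0), dissolves(13.0)
```

The minimum developer concentration for development then corresponds to the pH at which the ionized fraction first equals f_crit, which is one of the model predictions the dissertation tests experimentally.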

  1. Critical groups vs. representative person: dose calculations due to predicted releases from USEXA

    International Nuclear Information System (INIS)

    Ferreira, N.L.D.; Rochedo, E.R.R.; Mazzilli, B.P.

    2013-01-01

    The critical group of the Centro Experimental Aramar (CEA) site was previously defined based on the effluent releases to the environment resulting from the facilities already operational at CEA. In this work, effective doses are calculated for members of the critical group considering the predicted potential uranium releases from the Uranium Hexafluoride Production Plant (USEXA). Basically, this work studies the behavior of the resulting doses related to the type of habit data used in the analysis, and two distinct situations are considered: (a) the utilization of average values obtained from official institutions (IBGE, IEA-SP, CNEN, IAEA) and from the literature; and (b) the utilization of the 95th percentile of the values derived from distributions fitted to the obtained habit data. The first option corresponds to the way data were used for the definition of the critical group of CEA in former assessments, while the second corresponds to the use of data in deterministic assessments, as recommended by the ICRP to estimate doses to the so-called 'representative person'. (author)

  2. Critical groups vs. representative person: dose calculations due to predicted releases from USEXA

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, N.L.D., E-mail: nelson.luiz@ctmsp.mar.mil.br [Centro Tecnologico da Marinha (CTM/SP), Sao Paulo, SP (Brazil); Rochedo, E.R.R., E-mail: elainerochedo@gmail.com [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Mazzilli, B.P., E-mail: mazzilli@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2013-07-01

    The critical group of the Centro Experimental Aramar (CEA) site was previously defined based on the effluent releases to the environment resulting from the facilities already operational at CEA. In this work, effective doses are calculated for members of the critical group considering the predicted potential uranium releases from the Uranium Hexafluoride Production Plant (USEXA). Basically, this work studies the behavior of the resulting doses related to the type of habit data used in the analysis, and two distinct situations are considered: (a) the utilization of average values obtained from official institutions (IBGE, IEA-SP, CNEN, IAEA) and from the literature; and (b) the utilization of the 95th percentile of the values derived from distributions fitted to the obtained habit data. The first option corresponds to the way data were used for the definition of the critical group of CEA in former assessments, while the second corresponds to the use of data in deterministic assessments, as recommended by the ICRP to estimate doses to the so-called 'representative person'. (author)

  3. Review on modeling and simulation of interdependent critical infrastructure systems

    International Nuclear Information System (INIS)

    Ouyang, Min

    2014-01-01

    Modern societies are becoming increasingly dependent on critical infrastructure systems (CISs) to provide essential services that support economic prosperity, governance, and quality of life. These systems are not alone but interdependent at multiple levels to enhance their overall performance. However, recent worldwide events such as the 9/11 terrorist attack, Gulf Coast hurricanes, the Chile and Japanese earthquakes, and even heat waves have highlighted that interdependencies among CISs increase the potential for cascading failures and amplify the impact of both large and small scale initial failures into events of catastrophic proportions. To better understand CISs to support planning, maintenance and emergency decision making, modeling and simulation of interdependencies across CISs has recently become a key field of study. This paper reviews the studies in the field and broadly groups the existing modeling and simulation approaches into six types: empirical approaches, agent based approaches, system dynamics based approaches, economic theory based approaches, network based approaches, and others. Different studies for each type of the approaches are categorized and reviewed in terms of fundamental principles, such as research focus, modeling rationale, and the analysis method, while different types of approaches are further compared according to several criteria, such as the notion of resilience. Finally, this paper offers future research directions and identifies critical challenges in the field. - Highlights: • Modeling approaches on interdependent critical infrastructure systems are reviewed. • I mainly review empirical, agent-based, system-dynamics, economic, network approaches. • Studies by each approach are sorted out in terms of fundamental principles. • Different approaches are further compared with resilience as the main criterion

  4. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference making as few assumptions about restrictive parametric models as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models by one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.

  5. Computational multi-fluid dynamics predictions of critical heat flux in boiling flow

    International Nuclear Information System (INIS)

    Mimouni, S.; Baudry, C.; Guingo, M.; Lavieville, J.; Merigoux, N.; Mechitoua, N.

    2016-01-01

    Highlights: • A new mechanistic model dedicated to DNB has been implemented in the Neptune_CFD code. • The model has been validated against 150 tests. • Neptune_CFD is a CFD tool dedicated to boiling flows. - Abstract: Extensive efforts have been made in the last five decades to evaluate the boiling heat transfer coefficient and, in particular, the critical heat flux. The boiling crisis remains a major limiting phenomenon for the analysis of operation and safety of both nuclear reactors and conventional thermal power systems. As a consequence, models dedicated to boiling flows have been improved. For example, a Reynolds stress transport model, polydispersion, and a two-phase-flow wall law have recently been implemented. In a previous work, we evaluated computational fluid dynamics results against single-phase liquid water tests equipped with a mixing vane and against two-phase boiling cases. The objective of this paper is to propose a new mechanistic model in a computational multi-fluid dynamics tool that captures the wall temperature excursion and the onset of the boiling crisis. The critical heat flux is calculated against 150 tests, and the mean relative error between calculations and experimental values is equal to 8.3%. The model tested covers a large physics scope in terms of mass flux, pressure, quality and channel diameter. Water and the refrigerant R12 are considered. Furthermore, the sensitivity to grid refinement was found to be acceptable.

  6. Computational multi-fluid dynamics predictions of critical heat flux in boiling flow

    Energy Technology Data Exchange (ETDEWEB)

    Mimouni, S., E-mail: stephane.mimouni@edf.fr; Baudry, C.; Guingo, M.; Lavieville, J.; Merigoux, N.; Mechitoua, N.

    2016-04-01

    Highlights: • A new mechanistic model dedicated to DNB has been implemented in the Neptune-CFD code. • The model has been validated against 150 tests. • Neptune-CFD is a CFD tool dedicated to boiling flows. - Abstract: Extensive efforts have been made in the last five decades to evaluate the boiling heat transfer coefficient and, in particular, the critical heat flux. The boiling crisis remains a major limiting phenomenon for the analysis of operation and safety of both nuclear reactors and conventional thermal power systems. As a consequence, models dedicated to boiling flows have been improved. For example, a Reynolds stress transport model, polydispersion, and a two-phase-flow wall law have recently been implemented. In a previous work, we evaluated computational fluid dynamics results against single-phase liquid water tests equipped with a mixing vane and against two-phase boiling cases. The objective of this paper is to propose a new mechanistic model in a computational multi-fluid dynamics tool that captures the wall temperature excursion and the onset of the boiling crisis. The critical heat flux is calculated against 150 tests, and the mean relative error between calculations and experimental values is equal to 8.3%. The model tested covers a large physics scope in terms of mass flux, pressure, quality and channel diameter. Water and the refrigerant R12 are considered. Furthermore, the sensitivity to grid refinement was found to be acceptable.

  7. Predicting future glacial lakes in Austria using different modelling approaches

    Science.gov (United States)

    Otto, Jan-Christoph; Helfricht, Kay; Prasicek, Günther; Buckel, Johannes; Keuschnig, Markus

    2017-04-01

    Glacier retreat is one of the most apparent consequences of temperature rise in the 20th and 21st centuries in the European Alps. In Austria, more than 240 new lakes have formed in glacier forefields since the Little Ice Age. A similar signal is reported from many mountain areas worldwide. Glacial lakes can have important environmental and socio-economic impacts on high mountain systems, including water resource management, sediment delivery, natural hazards, energy production and tourism. Their development significantly modifies the landscape configuration and visual appearance of high mountain areas. Knowledge of the location, number and extent of these future lakes can be used to assess potential impacts on high mountain geo-ecosystems and upland-lowland interactions. Information on new lakes is critical to appraise emerging threats and potentials for society. The recent development of regional ice thickness models and their combination with high-resolution glacier surface data allows predicting the topography below current glaciers by subtracting ice thickness from the glacier surface. Analyzing these modelled glacier bed surfaces reveals overdeepenings that represent potential locations for future lakes. In order to predict the location of future glacial lakes below recent glaciers in the Austrian Alps, we apply different ice thickness models using high-resolution terrain data and glacier outlines. The results are compared and validated with ice thickness data from geophysical surveys. Additionally, we run the models on three different glacier extents provided by the Austrian Glacier Inventories from 1969, 1998 and 2006. Results of this historical glacier extent modelling are compared to existing glacier lakes and discussed with a focus on geomorphological impacts on lake evolution. We discuss model performance and observed differences in the results in order to assess the approach for a realistic prediction of future lake locations. The presentation delivers
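
The workflow of subtracting modelled ice thickness from the glacier surface and searching the resulting bed for overdeepenings can be sketched on a toy grid. All elevations and thicknesses below are invented, and the simple iterative filling pass is a stand-in for a proper priority-flood depression-filling algorithm:

```python
import numpy as np

# bed = glacier surface - ice thickness; overdeepenings (potential
# future lake basins) are the closed depressions of the bed (m a.s.l.).
surface = np.array([
    [300., 300., 300., 300.],
    [300., 295., 296., 292.],   # 292 m cell is the basin's outlet
    [300., 294., 295., 300.],
    [300., 300., 300., 300.],
])
thickness = np.array([
    [0., 0., 0., 0.],
    [0., 20., 18., 0.],
    [0., 15., 14., 0.],
    [0., 0., 0., 0.],
])
bed = surface - thickness                  # modelled subglacial topography

def fill_depressions(dem, n_sweeps=50):
    """Raise every interior cell to the lowest level from which it can
    drain to the grid boundary (Planchon-Darboux-style iterative fill)."""
    filled = dem.copy()
    filled[1:-1, 1:-1] = dem.max()         # start the interior high
    for _ in range(n_sweeps):
        for i in range(1, dem.shape[0] - 1):
            for j in range(1, dem.shape[1] - 1):
                nb_min = min(filled[i - 1, j], filled[i + 1, j],
                             filled[i, j - 1], filled[i, j + 1])
                filled[i, j] = max(dem[i, j], nb_min)
    return filled

lake_depth = fill_depressions(bed) - bed   # water depth of predicted lakes
```

Here the basin fills to the 292 m spill level of its lowest outlet, giving a predicted lake up to 17 m deep; on real data the same idea is applied to bed grids derived from the competing ice thickness models.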

  8. Integrating artificial neural networks and empirical correlations for the prediction of water-subcooled critical heat flux

    International Nuclear Information System (INIS)

    Mazzola, A.

    1997-01-01

    The critical heat flux (CHF) is an important parameter for the design of nuclear reactors, heat exchangers and other boiling heat transfer units. Recently, the CHF in water-subcooled flow boiling at high mass flux and subcooling has been thoroughly studied in relation to the cooling of high-heat-flux components in thermonuclear fusion reactors. Due to the specific thermal-hydraulic situation, very few of the existing correlations, originally developed for operating conditions typical of pressurized water reactors, are able to provide consistent predictions of water-subcooled-flow-boiling CHF at high heat fluxes. Therefore, alternative predicting techniques are being investigated. Among these, artificial neural networks (ANN) have the advantage of not requiring a formal model structure to fit the experimental data; however, their main drawbacks are the loss of model transparency ('black-box' character) and the lack of any indicator for evaluating accuracy and reliability of the ANN answer when 'never-seen' patterns are presented. In the present work, the prediction of CHF is approached by a hybrid system which couples a heuristic correlation with a neural network. The ANN role is to predict a datum-dependent parameter required by the analytical correlation; this parameter was instead set to a constant value obtained by usual best-fitting techniques when a pure analytical approach was adopted. Upper and lower bounds can be assigned to the parameter value, thus avoiding unexpected and unpredictable failures of the answer. The present approach maintains the advantage of the analytical model analysis, and it partially overcomes the 'black-box' character typical of the straight application of ANNs because the neural network role is limited to the correlation tuning. The proposed methodology allows us to achieve accurate results and it is likely to be suitable for thermal-hydraulic and heat transfer data processing. (author)
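
    A minimal sketch of the hybrid structure described in the abstract: an analytic correlation takes one datum-dependent tuning parameter, a predictor (standing in here for the trained ANN) supplies it, and hard bounds guard against unphysical answers on never-seen inputs. The power-law form, the bounds and all constants below are hypothetical, not the paper's correlation.

```python
def chf_correlation(G, dT_sub, C):
    """Hypothetical power-law CHF correlation in mass flux G and inlet
    subcooling dT_sub; C is the datum-dependent tuning parameter."""
    return C * (G ** 0.5) * (dT_sub ** 0.3)

def bounded(C, lo=0.5, hi=2.0):
    """Clamp the predictor output so a 'never-seen' input cannot yield
    an unphysical correlation parameter."""
    return max(lo, min(hi, C))

def hybrid_chf(G, dT_sub, predictor):
    return chf_correlation(G, dT_sub, bounded(predictor(G, dT_sub)))

# stand-in for the trained ANN (a real one would be fitted to CHF data)
toy_predictor = lambda G, dT: 0.8 + 1e-5 * G
q = hybrid_chf(2000.0, 40.0, toy_predictor)
```

    The clamping step is what restores predictability: even a wildly wrong network output degrades gracefully to the nearest bound instead of failing silently.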

  9. Combining GPS measurements and IRI model predictions

    International Nuclear Information System (INIS)

    Hernandez-Pajares, M.; Juan, J.M.; Sanz, J.; Bilitza, D.

    2002-01-01

    The free electrons distributed in the ionosphere (between one hundred and thousands of km in height) produce a frequency-dependent effect on Global Positioning System (GPS) signals: a delay in the pseudo-range and an advance in the carrier phase. These effects are proportional to the columnar electron density between the satellite and receiver, i.e. the integrated electron density along the ray path. Global ionospheric TEC (total electron content) maps can be obtained with GPS data from a network of ground IGS (international GPS service) reference stations with an accuracy of a few TEC units. The comparison with the TOPEX TEC, mainly measured over the oceans far from the IGS stations, shows a mean bias and standard deviation of about 2 and 5 TECUs respectively. The discrepancies between the STEC predictions and the observed values show an RMS typically below 5 TECUs (which also includes the alignment code noise). The existence of a growing database of 2-hourly global TEC maps with a resolution of 5x2.5 degrees in longitude and latitude can be used to improve the IRI prediction capability of the TEC. When the IRI predictions and the GPS estimations are compared for a three month period around the Solar Maximum, they are in good agreement for middle latitudes. An overestimation of TEC by IRI has been found at the extreme latitudes, the IRI predictions being typically two times higher than the GPS estimations. Finally, local fits of the IRI model can be done by tuning the SSN from STEC GPS observations.
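
    The frequency-dependent delay/advance and the dual-frequency TEC retrieval mentioned above follow from the standard first-order ionospheric term (a group delay of about 40.3 * TEC / f^2 metres, with an equal and opposite carrier-phase advance). A minimal sketch:

```python
# GPS L1/L2 carrier frequencies (Hz) and the first-order ionospheric
# constant; the geometric range cancels in the L1-L2 difference,
# leaving only the dispersive ionospheric term.
F1, F2 = 1575.42e6, 1227.60e6
K = 40.3            # m**3 / s**2
TECU = 1e16         # 1 TEC unit = 1e16 electrons / m**2

def group_delay_m(tec, f):
    """Extra pseudo-range (metres) caused by the ionosphere."""
    return K * tec / f**2

def tec_from_pseudoranges(p1, p2):
    """Slant TEC from the L1/L2 pseudo-range difference."""
    return (p2 - p1) * F1**2 * F2**2 / (K * (F1**2 - F2**2))
```

    Feeding two synthetic pseudo-ranges that share the same geometric range recovers the slant TEC that generated them.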

  10. Mathematical models for indoor radon prediction

    International Nuclear Information System (INIS)

    Malanca, A.; Pessina, V.; Dallara, G.

    1995-01-01

    It is known that the indoor radon (Rn) concentration can be predicted by means of mathematical models. The simplest model relies on two variables only: the Rn source strength and the air exchange rate. In the Lawrence Berkeley Laboratory (LBL) model several environmental parameters are combined into a complex equation; in addition, a correlation between the ventilation rate and the Rn entry rate from the soil is assumed. The measurements were carried out using activated carbon canisters. Seventy-five measurements of Rn concentrations were made inside two rooms placed on the second floor of a building block. One of the rooms had a single-glazed window whereas the other room had a double-glazed window. During three different experimental protocols, the mean Rn concentration was always higher in the room with the double-glazed window. That behavior can be accounted for by the simplest model. A further set of 450 Rn measurements was collected inside a ground-floor room with a grounding well in it. This trend may be accounted for by the LBL model.
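
    The "simplest model" with two variables can be written down directly as a steady-state mass balance; the numbers below are illustrative, not the paper's measurements. Lower air exchange (the double-glazed room) raises the predicted concentration, matching the behavior reported:

```python
def indoor_radon_bq_m3(entry_rate_bq_h, volume_m3, ach, outdoor_bq_m3=10.0):
    """Steady-state two-variable model: the indoor concentration is the
    source strength diluted by ventilation (ach = air changes per hour)
    plus the outdoor background. Rn-222 decay (~0.00755 per hour) is
    negligible next to typical ventilation rates and is ignored."""
    return outdoor_bq_m3 + entry_rate_bq_h / (volume_m3 * ach)

# double glazing cuts the air exchange rate, raising the concentration
single_glazed = indoor_radon_bq_m3(2000.0, 50.0, ach=1.0)
double_glazed = indoor_radon_bq_m3(2000.0, 50.0, ach=0.3)
```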

  11. A Predictive Maintenance Model for Railway Tracks

    DEFF Research Database (Denmark)

    Li, Rui; Wen, Min; Salling, Kim Bang

    2015-01-01

    presents a mathematical model based on Mixed Integer Programming (MIP) which is designed to optimize the predictive railway tamping activities for ballasted track for a time horizon up to four years. The objective function is set up to minimize the actual costs for the tamping machine (measured by time......). Five technical and economic aspects are taken into account to schedule tamping: (1) track degradation of the standard deviation of the longitudinal level over time; (2) track geometrical alignment; (3) track quality thresholds based on the train speed limits; (4) the dependency of the track quality...... recovery on the track quality after tamping operation and (5) tamping machine operation factors. A Danish railway track between Odense and Fredericia, 57.2 km in length, is used in the proposed maintenance model for a time period of two to four years. The total cost can be reduced by up to 50...
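
    The full MIP requires a solver, but the degradation and recovery mechanics that the listed aspects build on can be sketched as a toy threshold simulation. All parameter values are illustrative, not the paper's calibrated Danish-track data:

```python
def simulate_tamping(months, sigma0=1.0, rate=0.05, threshold=1.9,
                     recovery=0.6, tamp_cost=1.0):
    """Toy schedule: the standard deviation of the longitudinal level
    degrades linearly (aspect 1); when it exceeds the speed-class
    threshold (aspect 3) the track is tamped, and the improvement is
    proportional to how degraded the track was (aspect 4). A MIP would
    choose tamping months to minimize total cost instead of reacting
    greedily as done here."""
    sigma, cost, plan = sigma0, 0.0, []
    for month in range(months):
        sigma += rate
        if sigma > threshold:
            sigma -= recovery * (sigma - sigma0)  # partial recovery
            cost += tamp_cost
            plan.append(month)
    return plan, cost, sigma
```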

  12. Setting the vision: applied patient-reported outcomes and smart, connected digital healthcare systems to improve patient-centered outcomes prediction in critical illness.

    Science.gov (United States)

    Wysham, Nicholas G; Abernethy, Amy P; Cox, Christopher E

    2014-10-01

    Prediction models in critical illness are generally limited to short-term mortality and uncommonly include patient-centered outcomes. Current outcome prediction tools are also insensitive to individual context or evolution in healthcare practice, potentially limiting their value over time. Improved prognostication of patient-centered outcomes in critical illness could enhance decision-making quality in the ICU. Patient-reported outcomes have emerged as precise methodological measures of patient-centered variables and have been successfully employed using diverse platforms and technologies, enhancing the value of research in critical illness survivorship and in direct patient care. The learning health system is an emerging ideal characterized by integration of multiple data sources into a smart and interconnected health information technology infrastructure with the goal of rapidly optimizing patient care. We propose a vision of a smart, interconnected learning health system with integrated electronic patient-reported outcomes to optimize patient-centered care, including critical care outcome prediction. A learning health system infrastructure integrating electronic patient-reported outcomes may aid in the management of critical illness-associated conditions and yield tools to improve prognostication of patient-centered outcomes in critical illness.

  13. A model to predict stream water temperature across the conterminous USA

    Science.gov (United States)

    Catalina Segura; Peter Caldwell; Ge Sun; Steve McNulty; Yang Zhang

    2014-01-01

    Stream water temperature (ts) is a critical water quality parameter for aquatic ecosystems. However, ts records are sparse or nonexistent in many river systems. In this work, we present an empirical model to predict ts at the site scale across the USA. The model, derived using data from 171 reference sites selected from the Geospatial Attributes of Gages for Evaluating...
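
    Empirical site-scale stream temperature models of this kind are often logistic (S-shaped) regressions against air temperature, bounded by a minimum and a maximum stream temperature. A hedged sketch with illustrative coefficients, not the fitted 171-site model:

```python
import math

def stream_temp(air_c, alpha=26.0, beta=13.0, gamma=0.18, mu=0.5):
    """S-shaped air-to-stream temperature regression: bounded below by
    mu (deg C) and above by alpha, steepest near air temperature beta,
    with slope parameter gamma. Coefficients are illustrative only."""
    return mu + (alpha - mu) / (1.0 + math.exp(gamma * (beta - air_c)))
```

    The bounds capture the physical saturation: streams neither freeze-follow very cold air nor track extreme heat linearly.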

  14. New model for burnout prediction in channels of various cross-section

    Energy Technology Data Exchange (ETDEWEB)

    Bobkov, V.P.; Kozina, N.V.; Vinogrado, V.N.; Zyatnina, O.A. [Institute of Physics and Power Engineering, Kaluga (Russian Federation)

    1995-09-01

    A model developed to predict the critical heat flux (CHF) in channels of various cross-sections is presented together with the results of data analysis. The model is a realization of a relative method of describing CHF, based on data for a round tube and a system of correction factors. The results of the data description presented here are for rectangular and triangular channels, annuli and rod bundles.

  15. An Operational Model for the Prediction of Jet Blast

    Science.gov (United States)

    2012-01-09

    This paper presents an operational model for the prediction of jet blast. The model was developed based upon three modules, including a jet exhaust model, a jet centerline decay model and an aircraft motion model. The final analysis was compared with d...

  16. Critical behavior of the two-dimensional icosahedron model

    Science.gov (United States)

    Ueda, Hiroshi; Okunishi, Kouichi; Krčmár, Roman; Gendiar, Andrej; Yunoki, Seiji; Nishino, Tomotoshi

    2017-12-01

    In the context of a discrete analog of the classical Heisenberg model, we investigate the critical behavior of the icosahedron model, where the interaction energy is defined as the inner product of neighboring vector spins of unit length pointing to the vertices of the icosahedron. The effective correlation length and magnetization of the model are calculated by means of the corner-transfer-matrix renormalization group (CTMRG) method. A scaling analysis with respect to the cutoff dimension m in CTMRG reveals a second-order phase transition characterized by the exponents ν = 1.62 ± 0.02 and β = 0.12 ± 0.01. We also extract the central charge from the classical analog of entanglement entropy as c = 1.90 ± 0.02, which cannot be explained by the minimal series of conformal field theory.

  17. Predicting fatigue and psychophysiological test performance from speech for safety critical environments

    Directory of Open Access Journals (Sweden)

    Khan Richard Baykaner

    2015-08-01

    Full Text Available Automatic systems for estimating operator fatigue have application in safety-critical environments. A system which could estimate level of fatigue from speech would have application in domains where operators engage in regular verbal communication as part of their duties. Previous studies on the prediction of fatigue from speech have been limited because of their reliance on subjective ratings and because they lack comparison to other methods for assessing fatigue. In this paper we present an analysis of voice recordings and psychophysiological test scores collected from seven aerospace personnel during a training task in which they remained awake for 60 hours. We show that voice features and test scores are affected by both the total time spent awake and the time position within each subject’s circadian cycle. However, we show that time spent awake and time of day information are poor predictors of the test results, while voice features can give good predictions of the psychophysiological test scores and sleep latency. Predictions achieve mean absolute errors of about 17.5% for sleep latency and 5-12% for the test scores. We discuss the implications for the use of voice as a means to monitor the effects of fatigue on cognitive performance in practical applications.

  18. Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics

    Directory of Open Access Journals (Sweden)

    Cecilia Noecker

    2015-03-01

    Full Text Available Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1–2 weeks post-infection. Several interventions against HIV including vaccines and antiretroviral prophylaxis target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because predictions of such models have rarely been compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the “standard” mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to provide the observed rate of SIV load increase in the blood of infected animals. Interestingly and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode of virus production. Second, this mathematical model was not able to accurately describe the change in experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral
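
    The modified "standard" model with an eclipse (transitioning) class of infected cells can be sketched as a small ODE system integrated with a forward Euler step; the parameter values below are illustrative, not the paper's SIV fits:

```python
def simulate_siv(days=20.0, dt=0.001, beta=1e-7, k=1.0, delta=0.5,
                 p=500.0, c=10.0, T0=1e6, V0=1.0):
    """Euler integration of the 'standard' target-cell model extended
    with an eclipse class E (infected but not yet producing virus):
        dT/dt = -beta*T*V
        dE/dt =  beta*T*V - k*E
        dI/dt =  k*E - delta*I
        dV/dt =  p*I - c*V
    T = target cells, I = productively infected cells, V = free virus.
    All parameter values are illustrative only."""
    T, E, I, V = T0, 0.0, 0.0, V0
    for _ in range(int(days / dt)):
        dT = -beta * T * V
        dE = beta * T * V - k * E
        dI = k * E - delta * I
        dV = p * I - c * V
        T += dT * dt
        E += dE * dt
        I += dI * dt
        V += dV * dt
    return T, E, I, V
```

    With these rates the basic reproduction number beta*T0*p/(delta*c) is 10, so the simulated infection takes off and depletes most target cells within the 20-day window.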

  19. Reliable critical sized defect rodent model for cleft palate research.

    Science.gov (United States)

    Mostafa, Nesrine Z; Doschak, Michael R; Major, Paul W; Talwar, Reena

    2014-12-01

    Suitable animal models are necessary to test the efficacy of new bone grafting therapies in cleft palate surgery. Rodent models of cleft palate are available but have limitations. This study compared and modified mid-palate cleft (MPC) and alveolar cleft (AC) models to determine the most reliable and reproducible model for bone grafting studies. The published MPC model (9 × 5 × 3 mm³) lacked sufficient information for the tested rats. Our initial studies utilizing the AC model (7 × 4 × 3 mm³) in 8 and 16 weeks old Sprague Dawley (SD) rats revealed injury to adjacent structures. After comparing anteroposterior and transverse maxillary dimensions in 16 weeks old SD and Wistar rats, virtual planning was performed to modify MPC and AC defect dimensions, taking the adjacent structures into consideration. Modified MPC (7 × 2.5 × 1 mm³) and AC (5 × 2.5 × 1 mm³) defects were employed in 16 weeks old Wistar rats and healing was monitored by micro-computed tomography and histology. Maxillary dimensions in SD and Wistar rats were not significantly different. Preoperative virtual planning enhanced postoperative surgical outcomes. Bone healing occurred at the defect margin leaving a central bone void, confirming the critical size nature of the modified MPC and AC defects. The presented modifications to the MPC and AC models created clinically relevant and reproducible defects. Copyright © 2014 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  20. Self-Organized Criticality in an Anisotropic Earthquake Model

    Science.gov (United States)

    Li, Bin-Quan; Wang, Sheng-Jun

    2018-03-01

    We have made an extensive numerical study of a modified model proposed by Olami, Feder, and Christensen to describe earthquake behavior. Two situations were considered in this paper. One situation is that the energy of the unstable site is redistributed to its nearest neighbors randomly, not equally, and its own energy is reset to zero. The other situation is that the energy of the unstable site is redistributed to its nearest neighbors randomly and the site keeps some energy for itself instead of being reset to zero. Different boundary conditions were considered as well. By analyzing the distribution of earthquake sizes, we found that self-organized criticality can be excited only in the conservative case or the approximately conservative case in the above situations. Some evidence indicated that the critical exponents of both of the above situations and the original OFC model tend to the same value in the conservative case. The only difference is that the avalanche size in the original model is bigger. This result may be closer to the real world; after all, every crustal plate has a different size. Supported by National Natural Science Foundation of China under Grant Nos. 11675096 and 11305098, the Fundamental Research Funds for the Central Universities under Grant No. GK201702001, FPALAB-SNNU under Grant No. 16QNGG007, and Interdisciplinary Incubation Project of SNU under Grant No. 5
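
    A toy implementation of the first variant described above (random, unequal redistribution with reset to zero) might look as follows; lattice size, threshold, and step counts are illustrative, and the avalanche-size statistics would need far longer runs than this sketch to probe criticality:

```python
import random

def ofc_random(L=20, alpha=0.25, steps=2000, seed=1):
    """Toy OFC-type model: an unstable site's energy is split among its
    4 neighbours with random (unequal) weights and the site resets to
    zero. 4*alpha is the total fraction passed on, so alpha = 0.25 is
    the conservative case. Open boundaries: energy leaving the lattice
    is lost. Returns the avalanche size of each drive step."""
    random.seed(seed)
    Ec = 1.0
    F = [[random.random() * Ec for _ in range(L)] for _ in range(L)]
    sizes = []
    for _ in range(steps):
        # uniform drive: raise all sites until the largest reaches Ec
        mx, rm, cm = max((F[r][c], r, c) for r in range(L) for c in range(L))
        gap = Ec - mx
        for r in range(L):
            for c in range(L):
                F[r][c] += gap
        F[rm][cm] = Ec  # guard against floating-point round-off
        # relax the avalanche
        size = 0
        unstable = [(rm, cm)]
        while unstable:
            nxt = []
            for r, c in unstable:
                if F[r][c] < Ec:
                    continue  # already relaxed via a duplicate entry
                e = F[r][c]
                F[r][c] = 0.0
                size += 1
                w = [random.random() for _ in range(4)]
                s = sum(w)
                for (dr, dc), wi in zip(((1, 0), (-1, 0), (0, 1), (0, -1)), w):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < L and 0 <= cc < L:
                        F[rr][cc] += 4 * alpha * e * wi / s
                        if F[rr][cc] >= Ec:
                            nxt.append((rr, cc))
            unstable = nxt
        sizes.append(size)
    return sizes
```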

  1. Continuous-Discrete Time Prediction-Error Identification Relevant for Linear Model Predictive Control

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    A prediction-error method tailored for model-based predictive control is presented. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model. The linear discrete-time stochastic state space...... model is realized from a continuous-discrete-time linear stochastic system specified using transfer functions with time-delays. It is argued that the prediction-error criterion should be selected such that it is compatible with the objective function of the predictive controller in which the model...

  2. Curing critical links in oscillator networks as power flow models

    International Nuclear Information System (INIS)

    Rohden, Martin; Meyer-Ortmanns, Hildegard; Witthaut, Dirk; Timme, Marc

    2017-01-01

    Modern societies crucially depend on the robust supply with electric energy so that blackouts of power grids can have far reaching consequences. Typically, large scale blackouts take place after a cascade of failures: the failure of a single infrastructure component, such as a critical transmission line, results in several subsequent failures that spread across large parts of the network. Improving the robustness of a network to prevent such secondary failures is thus key for assuring a reliable power supply. In this article we analyze the nonlocal rerouting of power flows after transmission line failures for a simplified AC power grid model and compare different strategies to improve network robustness. We identify critical links in the grid and compute alternative pathways to quantify the grid’s redundant capacity and to find bottlenecks along the pathways. Different strategies are developed and tested to increase transmission capacities to restore stability with respect to transmission line failures. We show that local and nonlocal strategies typically perform alike: one can equally well cure critical links by providing backup capacities locally or by extending the capacities of bottleneck links at remote locations. (paper)

  3. Monte Carlo method for critical systems in infinite volume: The planar Ising model.

    Science.gov (United States)

    Herdeiro, Victor; Doyon, Benjamin

    2016-10-01

    In this paper we propose a Monte Carlo method for generating finite-domain marginals of critical distributions of statistical models in infinite volume. The algorithm corrects the problem of the long-range effects of boundaries associated to generating critical distributions on finite lattices. It uses the advantage of scale invariance combined with ideas of the renormalization group in order to construct a type of "holographic" boundary condition that encodes the presence of an infinite volume beyond it. We check the quality of the distribution obtained in the case of the planar Ising model by comparing various observables with their infinite-plane prediction. We accurately reproduce the planar two-, three-, and four-point functions of spin and energy operators. We also define a lattice stress-energy tensor, and numerically obtain the associated conformal Ward identities and the Ising central charge.
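
    For contrast with the paper's infinite-volume construction, a plain finite-lattice Metropolis sampler of the critical 2D Ising model (the baseline that suffers the boundary effects the authors correct) can be sketched as:

```python
import math, random

def ising_metropolis(L=16, sweeps=200, T=2.269, seed=7):
    """Plain Metropolis sampling of the L x L Ising model with periodic
    boundaries near T_c = 2/ln(1 + sqrt(2)) ~ 2.269 (J = k_B = 1).
    Finite L distorts critical correlations; the paper's 'holographic'
    boundary condition is designed to remove exactly that distortion."""
    random.seed(seed)
    s = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    for _ in range(sweeps):
        for _ in range(L * L):
            r, c = random.randrange(L), random.randrange(L)
            nn = (s[(r + 1) % L][c] + s[(r - 1) % L][c]
                  + s[r][(c + 1) % L] + s[r][(c - 1) % L])
            dE = 2.0 * s[r][c] * nn  # energy cost of flipping spin (r, c)
            if dE <= 0 or random.random() < math.exp(-dE / T):
                s[r][c] *= -1
    m = abs(sum(map(sum, s))) / (L * L)
    return s, m
```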

  4. Self-organised criticality in the evolution of a thermodynamic model of rodent thermoregulatory huddling.

    Directory of Open Access Journals (Sweden)

    Stuart P Wilson

    2017-01-01

    Full Text Available A thermodynamic model of thermoregulatory huddling interactions between endotherms is developed. The model is presented as a Monte Carlo algorithm in which animals are iteratively exchanged between groups, with a probability of exchanging groups defined in terms of the temperature of the environment and the body temperatures of the animals. The temperature-dependent exchange of animals between groups is shown to reproduce a second-order critical phase transition, i.e., a smooth switch to huddling when the environment gets colder, as measured in recent experiments. A peak in the rate at which group sizes change, referred to as pup flow, is predicted at the critical temperature of the phase transition, consistent with a thermodynamic description of huddling, and with a description of the huddle as a self-organising system. The model was subjected to a simple evolutionary procedure, by iteratively substituting the physiologies of individuals that fail to balance the costs of thermoregulation (by huddling in groups with the costs of thermogenesis (by contributing heat. The resulting tension between cooperative and competitive interactions was found to generate a phenomenon called self-organised criticality, as evidenced by the emergence of avalanches in fitness that propagate across many generations. The emergence of avalanches reveals how huddling can introduce correlations in fitness between individuals and thereby constrain evolutionary dynamics. Finally, a full agent-based model of huddling interactions is also shown to generate criticality when subjected to the same evolutionary pressures. The agent-based model is related to the Monte Carlo model in the way that a Vicsek model is related to an Ising model in statistical physics. Huddling therefore presents an opportunity to use thermodynamic theory to study an emergent adaptive animal behaviour. In more general terms, huddling is proposed as an ideal system for investigating the interaction

  5. Predictive modeling: potential application in prevention services.

    Science.gov (United States)

    Wilson, Moira L; Tumen, Sarah; Ota, Rissa; Simmers, Anthony G

    2015-05-01

    In 2012, the New Zealand Government announced a proposal to introduce predictive risk models (PRMs) to help professionals identify and assess children at risk of abuse or neglect as part of a preventive early intervention strategy, subject to further feasibility study and trialing. The purpose of this study is to examine technical feasibility and predictive validity of the proposal, focusing on a PRM that would draw on population-wide linked administrative data to identify newborn children who are at high priority for intensive preventive services. Data analysis was conducted in 2013 based on data collected in 2000-2012. A PRM was developed using data for children born in 2010 and externally validated for children born in 2007, examining outcomes to age 5 years. Performance of the PRM in predicting administratively recorded substantiations of maltreatment was good compared to the performance of other tools reviewed in the literature, both overall, and for indigenous Māori children. Some, but not all, of the children who go on to have recorded substantiations of maltreatment could be identified early using PRMs. PRMs should be considered as a potential complement to, rather than a replacement for, professional judgment. Trials are needed to establish whether risks can be mitigated and PRMs can make a positive contribution to frontline practice, engagement in preventive services, and outcomes for children. Deciding whether to proceed to trial requires balancing a range of considerations, including ethical and privacy risks and the risk of compounding surveillance bias. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.

  6. Critical Flicker Fusion Predicts Executive Function in Younger and Older Adults.

    Science.gov (United States)

    Mewborn, Catherine; Renzi, Lisa M; Hammond, Billy R; Miller, L Stephen

    2015-11-01

    Critical flicker fusion (CFF), a measure of visual processing speed, has often been regarded as a basic metric underlying a number of higher cognitive functions. To test this, we measured CFF, global cognition, and several cognitive subdomains. Because age is a strong covariate for most of these variables, both younger (n = 72) and older (n = 57) subjects were measured. Consistent with expectations, age was inversely related to CFF and performance on all of the cognitive measures except for visual memory. In contrast, age-adjusted CFF thresholds were only positively related to executive function. Results showed that CFF predicted executive function across both age groups and accounted for unique variance in performance above and beyond age and global cognitive status. The current findings suggest that CFF may be a unique predictor of executive dysfunction. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  7. Importance of critical micellar concentration for the prediction of solubility enhancement in biorelevant media.

    Science.gov (United States)

    Ottaviani, G; Wendelspiess, S; Alvarez-Sánchez, R

    2015-04-06

    This study evaluated if the intrinsic surface properties of compounds are related to the solubility enhancement (SE) typically observed in biorelevant media like fasted state simulated intestinal fluids (FaSSIF). The solubility of 51 chemically diverse compounds was measured in FaSSIF and in phosphate buffer and the surface activity parameters were determined. This study showed that the compound critical micellar concentration parameter (CMC) correlates strongly with the solubility enhancement (SE) observed in FaSSIF compared to phosphate buffer. Thus, the intrinsic capacity of molecules to form micelles is also a determinant for each compound's affinity to the micelles of biorelevant surfactants. CMC correlated better with SE than lipophilicity (logD), especially over the logD range typically covered by drugs (2 < logD < 4). CMC can become useful to guide drug discovery scientists to better diagnose, improve, and predict solubility in biorelevant media, thereby enhancing oral bioavailability of drug candidates.

  8. Critical rotation of general-relativistic polytropic models revisited

    Science.gov (United States)

    Geroyannis, V.; Karageorgopoulos, V.

    2013-09-01

    We develop a perturbation method for computing the critical rotational parameter as a function of the equatorial radius of a rigidly rotating polytropic model in the "post-Newtonian approximation" (PNA). We treat our models as "initial value problems" (IVP) of ordinary differential equations in the complex plane. The computations are carried out by the code dcrkf54.f95 (Geroyannis and Valvi 2012 [P1]; modified Runge-Kutta-Fehlberg code of fourth and fifth order for solving initial value problems in the complex plane). Such a complex-plane treatment removes the syndromes appearing in this particular family of IVPs (see e.g. P1, Sec. 3) and allows continuation of the numerical integrations beyond the surface of the star. Thus all the required values of the Lane-Emden function(s) in the post-Newtonian approximation are calculated by interpolation (so avoiding any extrapolation). An interesting point is that, in our computations, we take into account the complete correction due to the gravitational term, and this issue is a remarkable difference compared to the classical PNA. We solve for the generalized density as a function of the equatorial radius and find the critical rotational parameter. Our computations are extended to certain other physical characteristics (like mass, angular momentum, rotational kinetic energy, etc). We find that our method yields results comparable with those of other reliable methods. REFERENCE: V.S. Geroyannis and F.N. Valvi 2012, International Journal of Modern Physics C, 23, No 5, 1250038:1-15.
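
    The classical (Newtonian) starting point of such computations is the Lane-Emden initial value problem; a real-axis RK4 sketch is below. The paper's post-Newtonian, complex-plane treatment (code dcrkf54.f95) is substantially more involved; this only illustrates the IVP structure.

```python
def lane_emden(n, dx=1e-4, x_max=20.0):
    """RK4 integration of the classical Lane-Emden equation
        theta'' + (2/x)*theta' + theta**n = 0, theta(0)=1, theta'(0)=0,
    marching until the first zero of theta (the stellar surface) and
    returning that dimensionless radius xi_1."""
    def rhs(x, th, dth):
        return dth, -th**n - 2.0 * dth / x
    x = dx                      # series start avoids the x = 0 singularity
    th = 1.0 - x * x / 6.0
    dth = -x / 3.0
    while th > 0.0 and x < x_max:
        k1 = rhs(x, th, dth)
        k2 = rhs(x + dx / 2, th + dx / 2 * k1[0], dth + dx / 2 * k1[1])
        k3 = rhs(x + dx / 2, th + dx / 2 * k2[0], dth + dx / 2 * k2[1])
        k4 = rhs(x + dx, th + dx * k3[0], dth + dx * k3[1])
        th += dx / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        dth += dx / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        x += dx
    return x
```

    For n = 1 the exact solution is sin(x)/x, whose first zero is pi; for n = 0 it is 1 - x**2/6, with first zero sqrt(6). Both make convenient checks.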

  9. Modeling financial markets by self-organized criticality

    Science.gov (United States)

    Biondo, Alessio Emanuele; Pluchino, Alessandro; Rapisarda, Andrea

    2015-10-01

    We present a financial market model, characterized by self-organized criticality, that is able to generate endogenously a realistic price dynamics and to reproduce well-known stylized facts. We consider a community of heterogeneous traders, composed by chartists and fundamentalists, and focus on the role of informative pressure on market participants, showing how the spreading of information, based on a realistic imitative behavior, drives contagion and causes market fragility. In this model imitation is not intended as a change in the agent's group of origin, but is referred only to the price formation process. We introduce in the community also a variable number of random traders in order to study their possible beneficial role in stabilizing the market, as found in other studies. Finally, we also suggest some counterintuitive policy strategies able to dampen fluctuations by means of a partial reduction of information.

  10. Self-Organized Criticality Theory Model of Thermal Sandpile

    International Nuclear Information System (INIS)

    Peng Xiao-Dong; Qu Hong-Peng; Xu Jian-Qiang; Han Zui-Jiao

    2015-01-01

    A self-organized criticality model of a thermal sandpile is formulated for the first time to simulate the dynamic process with interaction between avalanche events on the fast time scale and diffusive transports on the slow time scale. The main characteristics of the model are that both particle and energy avalanches of sand grains are considered simultaneously. Properties of intermittent transport and improved confinement are analyzed in detail. The results imply that the intermittent phenomenon such as blobs in the low confinement mode as well as edge localized modes in the high confinement mode observed in tokamak experiments are not only determined by the edge plasma physics, but also affected by the core plasma dynamics. (paper)

  11. Heuristic Modeling for TRMM Lifetime Predictions

    Science.gov (United States)

    Jordan, P. S.; Sharer, P. J.; DeFazio, R. L.

    1996-01-01

    Analysis time for computing the expected mission lifetimes of proposed frequently maneuvering, tightly altitude-constrained, Earth-orbiting spacecraft has been significantly reduced by means of a heuristic modeling method implemented in a commercial-off-the-shelf spreadsheet product (QuattroPro) running on a personal computer (PC). The method uses a look-up table to estimate the maneuver frequency per month as a function of the spacecraft ballistic coefficient and the solar flux index, then computes the associated fuel use by a simple engine model. Maneuver frequency data points are produced by means of a single 1-month run of traditional mission analysis software for each of the 12 to 25 data points required for the table. As the data point computations are required only at mission design start-up and on the occasion of significant mission redesigns, the dependence on time consuming traditional modeling methods is dramatically reduced. Results to date have agreed with traditional methods to within 1 to 1.5 percent. The spreadsheet approach is applicable to a wide variety of Earth orbiting spacecraft with tight altitude constraints. It will be particularly useful to such missions as the Tropical Rainfall Measurement Mission scheduled for launch in 1997, whose mission lifetime calculations are heavily dependent on frequently revised solar flux predictions.

  12. A Computational Model for Predicting Gas Breakdown

    Science.gov (United States)

    Gill, Zachary

    2017-10-01

    Pulsed-inductive discharges are a common method of producing a plasma. They provide a mechanism for quickly and efficiently generating a large volume of plasma for rapid use and are seen in applications including propulsion, fusion power, and high-power lasers. However, some common designs see a delayed response time due to the plasma forming when the magnitude of the magnetic field in the thruster is at a minimum. New designs are difficult to evaluate due to the amount of time needed to construct a new geometry and the high monetary cost of changing the power generation circuit. To more quickly evaluate new designs and better understand the shortcomings of existing designs, a computational model is developed. This model uses a modified single-electron model as the basis for a Mathematica code to determine how the energy distribution in a system changes with regards to time and location. By analyzing this energy distribution, the approximate time and location of initial plasma breakdown can be predicted. The results from this code are then compared to existing data to show its validity and shortcomings. Missouri S&T APLab.

  13. Infarct volume predicts critical care needs in stroke patients treated with intravenous thrombolysis

    Energy Technology Data Exchange (ETDEWEB)

    Faigle, Roland; Marsh, Elisabeth B.; Llinas, Rafael H.; Urrutia, Victor C. [Johns Hopkins University School of Medicine, Department of Neurology, Baltimore, MD (United States); Wozniak, Amy W. [Johns Hopkins University, Department of Biostatistics, Bloomberg School of Public Health, Baltimore, MD (United States)

    2014-10-26

    Patients receiving intravenous thrombolysis with recombinant tissue plasminogen activator (IVT) for ischemic stroke are monitored in an intensive care unit (ICU) or a comparable unit capable of ICU interventions due to the high frequency of standardized neurological exams and vital sign checks. The present study evaluates quantitative infarct volume on early post-IVT MRI as a predictor of critical care needs and aims to identify patients who may not require resource intense monitoring. We identified 46 patients who underwent MRI within 6 h of IVT. Infarct volume was measured using semiautomated software. Logistic regression and receiver operating characteristics (ROC) analysis were used to determine factors associated with ICU needs. Infarct volume was an independent predictor of ICU need after adjusting for age, sex, race, systolic blood pressure, NIH Stroke Scale (NIHSS), and coronary artery disease (odds ratio 1.031 per cm³ increase in volume, 95 % confidence interval [CI] 1.004-1.058, p = 0.024). The ROC curve with infarct volume alone achieved an area under the curve (AUC) of 0.766 (95 % CI 0.605-0.927), while the AUC was 0.906 (95 % CI 0.814-0.998) after adjusting for race, systolic blood pressure, and NIHSS. Maximum Youden index calculations identified an optimal infarct volume cut point of 6.8 cm³ (sensitivity 75.0 %, specificity 76.7 %). Infarct volume greater than 3 cm³ predicted need for critical care interventions with 81.3 % sensitivity and 66.7 % specificity. Infarct volume may predict needs for ICU monitoring and interventions in stroke patients treated with IVT. (orig.)
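The cut-point selection described above, a maximum-Youden-index sweep over candidate thresholds, can be sketched as follows. The volumes and labels are toy values for illustration, not the study's patient data:

```python
def youden_cutpoint(volumes, needed_icu):
    """Sweep candidate infarct-volume cutoffs and return (cutoff, J) maximizing
    J = sensitivity + specificity - 1, where 'positive' means volume > cutoff."""
    best_cut, best_j = None, -1.0
    for cut in sorted(set(volumes)):
        tp = sum(1 for v, y in zip(volumes, needed_icu) if y and v > cut)
        fn = sum(1 for v, y in zip(volumes, needed_icu) if y and v <= cut)
        fp = sum(1 for v, y in zip(volumes, needed_icu) if not y and v > cut)
        tn = sum(1 for v, y in zip(volumes, needed_icu) if not y and v <= cut)
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        j = sens + spec - 1.0
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j

# Hypothetical infarct volumes (cm^3) and labels (1 = required ICU intervention).
volumes = [1.2, 2.5, 3.0, 9.8, 12.4, 15.1]
labels = [0, 0, 0, 1, 1, 1]
cut, j = youden_cutpoint(volumes, labels)
```

Each candidate cutoff corresponds to one point on the ROC curve; the Youden index simply picks the point farthest above the chance diagonal.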

  14. Hierarchical, model-based risk management of critical infrastructures

    Energy Technology Data Exchange (ETDEWEB)

    Baiardi, F. [Polo G.Marconi La Spezia, Universita di Pisa, Pisa (Italy); Dipartimento di Informatica, Universita di Pisa, L.go B.Pontecorvo 3 56127, Pisa (Italy)], E-mail: f.baiardi@unipi.it; Telmon, C.; Sgandurra, D. [Dipartimento di Informatica, Universita di Pisa, L.go B.Pontecorvo 3 56127, Pisa (Italy)

    2009-09-15

    Risk management is a process that includes several steps, from vulnerability analysis to the formulation of a risk mitigation plan that selects countermeasures to be adopted. With reference to an information infrastructure, we present a risk management strategy that considers a sequence of hierarchical models, each describing dependencies among infrastructure components. A dependency exists anytime a security-related attribute of a component depends upon the attributes of other components. We discuss how this notion supports the formal definition of risk mitigation plan and the evaluation of the infrastructure robustness. A hierarchical relation exists among models that are analyzed because each model increases the level of details of some components in a previous one. Since components and dependencies are modeled through a hypergraph, to increase the model detail level, some hypergraph nodes are replaced by more and more detailed hypergraphs. We show how critical information for the assessment can be automatically deduced from the hypergraph and define conditions that determine cases where a hierarchical decomposition simplifies the assessment. In these cases, the assessment has to analyze the hypergraph that replaces the component rather than applying again all the analyses to a more detailed, and hence larger, hypergraph. We also show how the proposed framework supports the definition of a risk mitigation plan and discuss some indicators of the overall infrastructure robustness. Lastly, the development of tools to support the assessment is discussed.

  15. Hierarchical, model-based risk management of critical infrastructures

    International Nuclear Information System (INIS)

    Baiardi, F.; Telmon, C.; Sgandurra, D.

    2009-01-01

    Risk management is a process that includes several steps, from vulnerability analysis to the formulation of a risk mitigation plan that selects countermeasures to be adopted. With reference to an information infrastructure, we present a risk management strategy that considers a sequence of hierarchical models, each describing dependencies among infrastructure components. A dependency exists anytime a security-related attribute of a component depends upon the attributes of other components. We discuss how this notion supports the formal definition of risk mitigation plan and the evaluation of the infrastructure robustness. A hierarchical relation exists among models that are analyzed because each model increases the level of details of some components in a previous one. Since components and dependencies are modeled through a hypergraph, to increase the model detail level, some hypergraph nodes are replaced by more and more detailed hypergraphs. We show how critical information for the assessment can be automatically deduced from the hypergraph and define conditions that determine cases where a hierarchical decomposition simplifies the assessment. In these cases, the assessment has to analyze the hypergraph that replaces the component rather than applying again all the analyses to a more detailed, and hence larger, hypergraph. We also show how the proposed framework supports the definition of a risk mitigation plan and discuss some indicators of the overall infrastructure robustness. Lastly, the development of tools to support the assessment is discussed.

  16. Which method predicts recidivism best?: A comparison of statistical, machine learning, and data mining predictive models

    OpenAIRE

    Tollenaar, N.; van der Heijden, P.G.M.

    2012-01-01

    Using criminal population conviction histories of recent offenders, prediction models are developed that predict three types of criminal recidivism: general recidivism, violent recidivism and sexual recidivism. The research question is whether prediction techniques from modern statistics, data mining and machine learning provide an improvement in predictive performance over classical statistical methods, namely logistic regression and linear discriminant analysis. These models are compared ...

  17. Predictive formulation of the Nambu Jona-Lasinio model

    Science.gov (United States)

    Battistel, O. A.; Dallabona, G.; Krein, G.

    2008-03-01

    A novel strategy to handle divergences typical of perturbative calculations is implemented for the Nambu Jona-Lasinio model and its phenomenological consequences investigated. The central idea of the method is to avoid the critical step involved in the regularization process, namely, the explicit evaluation of divergent integrals. This goal is achieved by assuming a regularization distribution in an implicit way and making use, in intermediary steps, only of very general properties of such regularization. The finite parts are separated from the divergent ones and integrated free from effects of the regularization. The divergent parts are organized in terms of standard objects, which are independent of the (arbitrary) momenta running in internal lines of loop graphs. Through the analysis of symmetry relations, a set of properties for the divergent objects are identified, which we denominate consistency relations, reducing the number of divergent objects to only a few. The calculational strategy eliminates unphysical dependencies of the arbitrary choices for the routing of internal momenta, leading to ambiguity-free, and symmetry-preserving physical amplitudes. We show that the imposition of scale properties for the basic divergent objects leads to a critical condition for the constituent quark mass such that the remaining arbitrariness is removed. The model becomes predictive in the sense that its phenomenological consequences do not depend on possible choices made in intermediary steps. Numerical results are obtained for physical quantities at the one-loop level for the pion and sigma masses and pion-quark and sigma-quark coupling constants.

  18. Fuzzy predictive filtering in nonlinear economic model predictive control for demand response

    DEFF Research Database (Denmark)

    Santos, Rui Mirra; Zong, Yi; Sousa, Joao M. C.

    2016-01-01

    The performance of a model predictive controller (MPC) is highly correlated with the model's accuracy. This paper introduces an economic model predictive control (EMPC) scheme based on a nonlinear model, which uses a branch-and-bound tree search for solving the inherent non-convex optimization...

  19. A Fisher’s Criterion-Based Linear Discriminant Analysis for Predicting the Critical Values of Coal and Gas Outbursts Using the Initial Gas Flow in a Borehole

    Directory of Open Access Journals (Sweden)

    Xiaowei Li

    2017-01-01

    Full Text Available The risk of coal and gas outbursts can be predicted using a method that is linear and continuous and based on the initial gas flow in the borehole (IGFB; this method is significantly superior to the traditional point prediction method. Acquiring accurate critical values is the key to ensuring accurate predictions. Based on an ideal rock cross-cut coal uncovering model, the IGFB measurement device was developed. The present study measured the data of the initial gas flow over 3 min in a 1 m long borehole with a diameter of 42 mm in the laboratory. A total of 48 sets of data were obtained. These data were fuzzy and chaotic. Fisher’s discrimination method was able to transform these spatial data, which were multidimensional due to the factors influencing the IGFB, into a one-dimensional function and determine its critical value. Then, by processing the data into a normal distribution, the critical values of the outbursts were analyzed using linear discriminant analysis with Fisher’s criterion. The weak and strong outbursts had critical values of 36.63 L and 80.85 L, respectively, and the accuracy of the back-discriminant analysis for the weak and strong outbursts was 94.74% and 92.86%, respectively. Eight outburst tests were simulated in the laboratory, the reverse verification accuracy was 100%, and the accuracy of the critical value was verified.
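Two-class Fisher discriminant analysis of the kind described above, projecting multidimensional measurements onto a one-dimensional score with a decision threshold, can be sketched for two features. The clusters below are illustrative, not the 48 laboratory measurements:

```python
def fisher_discriminant(class0, class1):
    """Two-class Fisher LDA in 2-D: direction w = Sw^{-1} (m1 - m0),
    threshold at the midpoint of the projected class means."""
    def mean(pts):
        return [sum(c) / len(pts) for c in zip(*pts)]

    def scatter(pts, m):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for x in pts:
            d = [x[0] - m[0], x[1] - m[1]]
            for i in range(2):
                for k in range(2):
                    s[i][k] += d[i] * d[k]
        return s

    m0, m1 = mean(class0), mean(class1)
    s0, s1 = scatter(class0, m0), scatter(class1, m1)
    sw = [[s0[i][k] + s1[i][k] for k in range(2)] for i in range(2)]
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det, sw[0][0] / det]]
    diff = [m1[0] - m0[0], m1[1] - m0[1]]
    w = [inv[i][0] * diff[0] + inv[i][1] * diff[1] for i in range(2)]
    thr = 0.5 * (w[0] * (m0[0] + m1[0]) + w[1] * (m0[1] + m1[1]))
    return w, thr

# Illustrative 2-D feature clusters standing in for IGFB-derived quantities.
weak = [(1, 1), (2, 1), (1, 2), (2, 2)]
strong = [(6, 6), (7, 6), (6, 7), (7, 7)]
w, thr = fisher_discriminant(weak, strong)
project = lambda p: w[0] * p[0] + w[1] * p[1]
```

A sample whose projected score falls above the threshold would be assigned to the "strong" class, below it to the "weak" class.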

  20. Circulating MicroRNA-223 Serum Levels Do Not Predict Sepsis or Survival in Patients with Critical Illness

    Directory of Open Access Journals (Sweden)

    Fabian Benz

    2015-01-01

    Full Text Available Background and Aims. Dysregulation of miR-223 was recently linked to various diseases associated with systemic inflammatory responses such as type 2 diabetes, cancer, and bacterial infections. However, contradictory results are available on potential alterations of miR-223 serum levels during sepsis. We thus aimed to evaluate the diagnostic and prognostic value of miR-223 serum concentrations in patients with critical illness and sepsis. Methods. We used i.v. injection of lipopolysaccharide (LPS as well as cecal pole ligation and puncture (CLP for induction of polymicrobial sepsis in mice and measured alterations in serum levels of miR-223. These results from mice were translated into a large and well-characterized cohort of critically ill patients admitted to the medical intensive care unit (ICU. Finally, results from analysis in patients were correlated with clinical data and extensive sets of routine and experimental biomarkers. Results. Although LPS injection induced moderately elevated serum miR-223 levels in mice, no significant alterations in miR-223 serum levels were found in mice after CLP-induced sepsis. In accordance with these results from animal models, serum miR-223 levels did not differ between critically ill patients and healthy controls. However, ICU patients with more severe disease (APACHE-II score showed moderately reduced circulating miR-223. Strikingly, no differences in miR-223 levels were found in critically ill patients with or without sepsis, and serum levels of miR-223 did not correlate with classical markers of inflammation or bacterial infection. Finally, low miR-223 serum levels were moderately associated with an unfavorable prognosis of patients during the ICU treatment but did not predict long-term mortality. Conclusion. Recent reports on alterations in miR-223 serum levels during sepsis revealed contradictory results, preventing a potential use of this miRNA in clinical routine. We clearly show that miR-223 serum

  1. The critical power function is dependent on the duration of the predictive exercise tests chosen.

    Science.gov (United States)

    Bishop, D; Jenkins, D G; Howard, A

    1998-02-01

    The linear relationship between work accomplished (W(lim)) and time to exhaustion (t(lim)) can be described by the equation: W(lim) = a + CP x t(lim). Critical power (CP) is the slope of this line and is thought to represent a maximum rate of ATP synthesis without exhaustion, presumably an inherent characteristic of the aerobic energy system. The present investigation determined whether the choice of predictive tests would elicit significant differences in the estimated CP. Ten female physical education students completed, in random order and on consecutive days, five all-out predictive tests at preselected constant-power outputs. Predictive tests were performed on an electrically-braked cycle ergometer and power loadings were individually chosen so as to induce fatigue within approximately 1-10 mins. CP was derived by fitting the linear W(lim)-t(lim) regression and calculated three ways: 1) using the first, third and fifth W(lim)-t(lim) coordinates (I135), 2) using coordinates from the three highest power outputs (I123; mean t(lim) = 68-193 s) and 3) using coordinates from the lowest power outputs (I345; mean t(lim) = 193-485 s). Repeated measures ANOVA revealed that CPI123 (201.0+/-37.9W) > CPI135 (176.1+/-27.6W) > CPI345 (164.0+/-22.8W) (P<0.05). When the three sets of data were used to fit the hyperbolic Power-t(lim) regression, statistically significant differences between each CP were also found (P<0.05). The shorter the predictive trials, the greater the slope of the W(lim)-t(lim) regression; possibly because of the greater influence of 'aerobic inertia' on these trials. This may explain why CP has failed to represent a maximal, sustainable work rate. The present findings suggest that if CP is to represent the highest power output that an individual can maintain "for a very long time without fatigue" then CP should be calculated over a range of predictive tests in which the influence of aerobic inertia is minimised.
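Fitting CP as the slope of the W(lim)-t(lim) line can be sketched with ordinary least squares. The trial durations and work values below are synthetic, chosen to lie exactly on a known line so the fit is exact:

```python
def critical_power(t_lim, w_lim):
    """Least-squares fit of W_lim = a + CP * t_lim.
    Slope = critical power CP (W); intercept a = anaerobic work capacity (J)."""
    n = len(t_lim)
    mt, mw = sum(t_lim) / n, sum(w_lim) / n
    sxy = sum((t - mt) * (w - mw) for t, w in zip(t_lim, w_lim))
    sxx = sum((t - mt) ** 2 for t in t_lim)
    cp = sxy / sxx
    a = mw - cp * mt
    return cp, a

# Synthetic trials on W_lim = 15000 + 180 * t_lim (times in s, work in J).
t = [75.0, 150.0, 240.0, 360.0, 480.0]
w = [15000.0 + 180.0 * ti for ti in t]
cp, a = critical_power(t, w)
```

Estimating CP from the three shortest versus the three longest trials, as compared in the study, is then just a matter of which (t_lim, W_lim) pairs are passed in.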

  2. Same admissions tools, different outcomes: a critical perspective on predictive validity in three undergraduate medical schools.

    Science.gov (United States)

    Edwards, Daniel; Friedman, Tim; Pearce, Jacob

    2013-12-27

    Admission to medical school is one of the most highly competitive entry points in higher education. Considerable investment is made by universities to develop selection processes that aim to identify the most appropriate candidates for their medical programs. This paper explores data from three undergraduate medical schools to offer a critical perspective of predictive validity in medical admissions. This study examined 650 undergraduate medical students from three Australian universities as they progressed through the initial years of medical school (accounting for approximately 25 per cent of all commencing undergraduate medical students in Australia in 2006 and 2007). Admissions criteria (aptitude test score based on UMAT, school result and interview score) were correlated with GPA over four years of study. Standard regression of each of the three admissions variables on GPA, for each institution at each year level was also conducted. Overall, the data found positive correlations between performance in medical school, school achievement and UMAT, but not interview. However, there were substantial differences between schools, across year levels, and within sections of UMAT exposed. Despite this, each admission variable was shown to add towards explaining course performance, net of other variables. The findings suggest the strength of multiple admissions tools in predicting outcomes of medical students. However, they also highlight the large differences in outcomes achieved by different schools, thus emphasising the pitfalls of generalising results from predictive validity studies without recognising the diverse ways in which they are designed and the variation in the institutional contexts in which they are administered. The assumption that high-positive correlations are desirable (or even expected) in these studies is also problematised.

  3. Uncertainties in modelling and scaling of critical flows and pump model in TRAC-PF1/MOD1

    International Nuclear Information System (INIS)

    Rohatgi, U.S.; Yu, Wen-Shi.

    1987-01-01

    The USNRC has established a Code Scalability, Applicability and Uncertainty (CSAU) evaluation methodology to quantify the uncertainty in the prediction of safety parameters by the best estimate codes. These codes can then be applied to evaluate the Emergency Core Cooling System (ECCS). The TRAC-PF1/MOD1 version was selected as the first code to undergo the CSAU analysis for LBLOCA applications. It was established through this methodology that break flow and pump models are among the top ranked models in the code affecting the peak clad temperature (PCT) prediction for LBLOCA. The break flow model bias or discrepancy and the uncertainty were determined by modelling the test section near the break for 12 Marviken tests. It was observed that the TRAC-PF1/MOD1 code consistently underpredicts the break flow rate and that the prediction improved with increasing pipe length (larger L/D). This is true for both subcooled and two-phase critical flows. A pump model was developed from Westinghouse (1/3 scale) data. The data represent the largest available test pump relevant to Westinghouse PWRs. It was then shown through the analysis of CE and CREARE pump data that larger pumps degrade less and also that pumps degrade less at higher pressures. Since the model developed here is based on the 1/3 scale pump and on low pressure data, it is conservative and will overpredict the degradation when applied to PWRs

  4. Mortality risk prediction models for coronary artery bypass graft surgery: current scenario and future direction.

    Science.gov (United States)

    Karim, Mohammed N; Reid, Christopher M; Cochrane, Andrew; Tran, Lavinia; Alramadan, Mohammed; Hossain, Mohammed N; Billah, Baki

    2017-12-01

    Many risk prediction models are currently in use for predicting short-term mortality following coronary artery bypass graft (CABG) surgery. This review critically appraised the methods that were used for developing these models, to assess their applicability in current practice settings as well as the need for updating them. Medline via Ovid was searched for articles published between 1946 and 2016 and EMBASE via Ovid between 1974 and 2016 to identify risk prediction models for CABG. Article selection and data extraction was conducted using the CHARMS checklist for review of prediction model studies. Association between model development methods and model's discrimination was assessed using Kruskal-Wallis one-way analysis of variance and Mann-Whitney U-test. A total of 53 risk prediction models for short-term mortality following CABG were identified. The review found a wide variation in development methodology of risk prediction models in the field. Ambiguous predictor and outcome definitions, sub-optimal sample size, inappropriate handling of missing data and inefficient predictor selection techniques are major issues identified in the review. Quantitative synthesis in the review showed "missing value imputation" and "adopting machine learning algorithms" may result in better discrimination power of the models. There are aspects of current risk modeling where there is room for improvement to reflect current clinical practice. Future risk modelling needs to adopt a standardized approach to defining both outcome and predictor variables, rational treatment of missing data and robust statistical techniques to enhance performance of the mortality risk prediction.

  5. A review of logistic regression models used to predict post-fire tree mortality of western North American conifers

    Science.gov (United States)

    Travis Woolley; David C. Shaw; Lisa M. Ganio; Stephen. Fitzgerald

    2012-01-01

    Logistic regression models used to predict tree mortality are critical to post-fire management, planning prescribed burns and understanding disturbance ecology. We review literature concerning post-fire mortality prediction using logistic regression models for coniferous tree species in the western USA. We include synthesis and review of: methods to develop, evaluate...

  6. Critical behavior of the Ising model on random fractals.

    Science.gov (United States)

    Monceau, Pascal

    2011-11-01

    We study the critical behavior of the Ising model in the case of quenched disorder constrained by fractality on random Sierpinski fractals with a Hausdorff dimension d_f ≈ 1.8928. This is a first attempt to study a situation between the borderline cases of deterministic self-similarity and quenched randomness. Intensive Monte Carlo simulations were carried out. Scaling corrections are much weaker than in the deterministic cases, so that our results enable us to ensure that finite-size scaling holds, and that the critical behavior is described by a new universality class. The hyperscaling relation is compatible with an effective dimension equal to the Hausdorff one; moreover the two eigenvalue exponents of the renormalization flows are shown to be different from the ones calculated from ε expansions, and from the ones obtained for fourfold symmetric deterministic fractals. Although the space dimensionality is not integer, lack of self-averaging properties exhibits some features very close to the ones of a random fixed point associated with a relevant disorder.

  7. Universality away from critical points in a thermostatistical model

    Science.gov (United States)

    Lapilli, C. M.; Wexler, C.; Pfeifer, P.

    Nature uses phase transitions as powerful regulators of processes ranging from climate to the alteration of phase behavior of cell membranes to protect cells from cold, building on the fact that thermodynamic properties of a solid, liquid, or gas are sensitive fingerprints of intermolecular interactions. The only known exceptions from this sensitivity are critical points. At a critical point, two phases become indistinguishable and thermodynamic properties exhibit universal behavior: systems with widely different intermolecular interactions behave identically. Here we report a major counterexample. We show that different members of a family of two-dimensional systems —the discrete p-state clock model— with different Hamiltonians describing different microscopic interactions between molecules or spins, may exhibit identical thermodynamic behavior over a wide range of temperatures. The results generate a comprehensive map of the phase diagram of the model and, by virtue of the discrete rotors behaving like continuous rotors, an emergent symmetry, not present in the Hamiltonian. This symmetry, or many-to-one map of intermolecular interactions onto thermodynamic states, demonstrates previously unknown limits for macroscopic distinguishability of different microscopic interactions.

  8. Thermodynamic quantum critical behavior of the anisotropic Kondo necklace model

    Energy Technology Data Exchange (ETDEWEB)

    Reyes, D. [Centro Brasileiro de Pesquisas Fisicas, Rua Dr. Xavier Sigaud, 150-Urca, 22290-180 RJ (Brazil)], E-mail: daniel@cbpf.br; Continentino, M.A. [Instituto de Fisica, Universidade Federal Fluminense, Campus da Praia Vermelha, Niteroi, RJ 24.210-340 (Brazil); Wang Hanting [Beijing National Laboratory of Condensed Matter Physics and Institute of Physics, Chinese Academy of Sciences, Beijing 100080 (China)

    2009-03-15

    The Ising-like anisotropy parameter δ in the Kondo necklace model is analyzed using the bond-operator method at zero and finite temperatures for arbitrary d dimensions. A decoupling scheme on the double-time Green's functions is used to find the dispersion relation for the excitations of the system. At zero temperature and in the paramagnetic side of the phase diagram, we determine the spin gap exponent νz ≈ 0.5 in three dimensions and anisotropy between 0 ≤ δ ≤ 1, a result consistent with the dynamic exponent z = 1 for the Gaussian character of the bond-operator treatment. On the other hand, in the antiferromagnetic phase at low but finite temperatures, the line of Neel transitions is calculated for δ << 1. For d > 2 it is only re-normalized by the anisotropy parameter and varies with the distance to the quantum critical point (QCP) |g| as T_N ∝ |g|^ψ, where the shift exponent ψ = 1/(d-1). Nevertheless, in two dimensions, a long-range magnetic order occurs only at T = 0 for any δ << 1. In the paramagnetic phase, we also find a power law temperature dependence of the specific heat at the quantum critical trajectory J/t = (J/t)_c, T → 0. It behaves as C_V ∝ T^d for δ ≤ 1 and ≈ 1, in concordance with the scaling theory for z = 1.

  9. The Critical Point Entanglement and Chaos in the Dicke Model

    Directory of Open Access Journals (Sweden)

    Lina Bao

    2015-07-01

    Full Text Available Ground state properties and level statistics of the Dicke model for a finite number of atoms are investigated based on a progressive diagonalization scheme (PDS. Particle number statistics, the entanglement measure and the Shannon information entropy at the resonance point in cases with a finite number of atoms as functions of the coupling parameter are calculated. It is shown that the entanglement measure defined in terms of the normalized von Neumann entropy of the reduced density matrix of the atoms reaches its maximum value at the critical point of the quantum phase transition where the system is most chaotic. Noticeable change in the Shannon information entropy near or at the critical point of the quantum phase transition is also observed. In addition, the quantum phase transition may be observed not only in the ground state mean photon number and the ground state atomic inversion as shown previously, but also in fluctuations of these two quantities in the ground state, especially in the atomic inversion fluctuation.

  10. A porous flow model for the geometrical form of volcanoes - Critical comments

    Science.gov (United States)

    Wadge, G.; Francis, P.

    1982-01-01

    A critical evaluation is presented of the assumptions on which the mathematical model for the geometrical form of a volcano arising from the flow of magma in a porous medium of Lacey et al. (1981) is based. The lack of evidence for an equipotential surface or its equivalent in volcanoes prior to eruption is pointed out, and the preference of volcanic eruptions for low ground is attributed to the local stress field produced by topographic loading rather than a rising magma table. Other difficulties with the model involve the neglect of the surface flow of lava under gravity away from the vent, and the use of the Dupuit approximation for unconfined flow and the assumption of essentially horizontal magma flow. Comparisons of model predictions with the shapes of actual volcanoes reveal the model not to fit lava shield volcanoes, for which the cone represents the solidification of small lava flows, and to provide a poor fit to composite central volcanoes.

  11. Model Predictive Control for an Industrial SAG Mill

    DEFF Research Database (Denmark)

    Ohan, Valeriu; Steinke, Florian; Metzger, Michael

    2012-01-01

    We discuss Model Predictive Control (MPC) based on ARX models and a simple lower order disturbance model. The advantage of this MPC formulation is that it has few tuning parameters and is based on an ARX prediction model that can readily be identified using standard technologies from system identification...

  12. Uncertainties in spatially aggregated predictions from a logistic regression model

    NARCIS (Netherlands)

    Horssen, P.W. van; Pebesma, E.J.; Schot, P.P.

    2002-01-01

    This paper presents a method to assess the uncertainty of an ecological spatial prediction model which is based on logistic regression models, using data from the interpolation of explanatory predictor variables. The spatial predictions are presented as approximate 95% prediction intervals. The

  13. Dealing with missing predictor values when applying clinical prediction models.

    NARCIS (Netherlands)

    Janssen, K.J.; Vergouwe, Y.; Donders, A.R.T.; Harrell Jr, F.E.; Chen, Q.; Grobbee, D.E.; Moons, K.G.

    2009-01-01

    BACKGROUND: Prediction models combine patient characteristics and test results to predict the presence of a disease or the occurrence of an event in the future. In the event that test results (predictor) are unavailable, a strategy is needed to help users applying a prediction model to deal with

  14. Critical, statistical, and thermodynamical properties of lattice models

    Energy Technology Data Exchange (ETDEWEB)

    Varma, Vipin Kerala

    2013-10-15

    In this thesis we investigate zero temperature and low temperature properties - critical, statistical and thermodynamical - of lattice models in the contexts of bosonic cold atom systems, magnetic materials, and non-interacting particles on various lattice geometries. We study quantum phase transitions in the Bose-Hubbard model with higher body interactions, as relevant for optical lattice experiments of strongly interacting bosons, in one and two dimensions; the universality of the Mott insulator to superfluid transition is found to remain unchanged for even large three body interaction strengths. A systematic renormalization procedure is formulated to fully re-sum these higher (three and four) body interactions into the two body terms. In the strongly repulsive limit, we analyse the zero and low temperature physics of interacting hard-core bosons on the kagome lattice at various fillings. Evidence for a disordered phase in the Ising limit of the model is presented; in the strong coupling limit, the transition between the valence bond solid and the superfluid is argued to be first order at the tip of the solid lobe.

  15. Critical, statistical, and thermodynamical properties of lattice models

    International Nuclear Information System (INIS)

    Varma, Vipin Kerala

    2013-10-01

    In this thesis we investigate zero-temperature and low-temperature properties - critical, statistical and thermodynamical - of lattice models in the contexts of bosonic cold-atom systems, magnetic materials, and non-interacting particles on various lattice geometries. We study quantum phase transitions in the Bose-Hubbard model with higher-body interactions, as relevant for optical-lattice experiments of strongly interacting bosons, in one and two dimensions; the universality of the Mott insulator to superfluid transition is found to remain unchanged even for large three-body interaction strengths. A systematic renormalization procedure is formulated to fully re-sum these higher (three- and four-) body interactions into the two-body terms. In the strongly repulsive limit, we analyse the zero- and low-temperature physics of interacting hard-core bosons on the kagome lattice at various fillings. Evidence for a disordered phase in the Ising limit of the model is presented; in the strong-coupling limit, the transition between the valence bond solid and the superfluid is argued to be first order at the tip of the solid lobe.

  16. Avalanches in self-organized critical neural networks: a minimal model for the neural SOC universality class.

    Directory of Open Access Journals (Sweden)

    Matthias Rybarsch

    Full Text Available The brain keeps its overall dynamics in a corridor of intermediate activity, and it has been a long-standing question what mechanism could achieve this task. Mechanisms from statistical physics have long suggested that this homeostasis of brain activity could occur even without a central regulator, via self-organization at the level of neurons and their interactions alone. Such physical mechanisms, from the class of self-organized criticality, exhibit characteristic dynamical signatures similar to the seismic activity of earthquakes. Measurements of cortex rest activity showed first signs of dynamical signatures potentially pointing to self-organized critical dynamics in the brain. Indeed, recent more accurate measurements allowed for a detailed comparison with the scaling theory of non-equilibrium critical phenomena, proving the existence of criticality in cortex dynamics. Here we compare this new evaluation of cortex activity data to the predictions of the earliest physics spin model of self-organized critical neural networks. We find that the model matches the recent experimental data and its interpretation in terms of dynamical signatures of criticality in the brain. The combination of signatures for criticality, power-law distributions of avalanche sizes and durations, as well as a specific scaling relationship between anomalous exponents, defines a universality class characteristic of the particular critical phenomenon observed in the neural experiments. The model is thus a candidate minimal model of a self-organized critical adaptive network for the universality class of neural criticality. As a prototype model, it provides the background for models that may include more biological detail yet share the same universality class characteristic of the homeostasis of activity in the brain.
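
The avalanche statistics described in this record can be illustrated with the classic Bak-Tang-Wiesenfeld sandpile, the textbook example of self-organized criticality. This is a hedged sketch, not the authors' spin model of neural networks: the lattice size, drive length, and function name are illustrative, and the heavy-tailed avalanche-size distribution only emerges once the pile has self-organized to its critical state.

```python
import random

def sandpile_avalanches(n=20, grains=5000, seed=0):
    """Drive an n x n Bak-Tang-Wiesenfeld sandpile and record avalanche sizes.

    Grains are dropped on random cells; a cell holding 4 or more grains
    topples, shedding one grain to each neighbour (grains falling off the
    edge dissipate). The pile self-organizes to a critical state in which
    avalanche sizes become power-law distributed.
    """
    rng = random.Random(seed)
    z = [[0] * n for _ in range(n)]  # local grain counts
    sizes = []
    for _ in range(grains):
        i, j = rng.randrange(n), rng.randrange(n)
        z[i][j] += 1
        unstable = [(i, j)] if z[i][j] >= 4 else []
        size = 0  # number of topplings in this avalanche
        while unstable:
            x, y = unstable.pop()
            if z[x][y] < 4:
                continue  # already relaxed by an earlier toppling
            z[x][y] -= 4
            size += 1
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                u, v = x + dx, y + dy
                if 0 <= u < n and 0 <= v < n:
                    z[u][v] += 1
                    if z[u][v] >= 4:
                        unstable.append((u, v))
        if size:
            sizes.append(size)
    return sizes
```

Plotting a histogram of the returned sizes on log-log axes would show the heavy-tailed avalanche-size distribution that the record compares against cortical data.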

  17. Avalanches in self-organized critical neural networks: a minimal model for the neural SOC universality class.

    Science.gov (United States)

    Rybarsch, Matthias; Bornholdt, Stefan

    2014-01-01

    The brain keeps its overall dynamics in a corridor of intermediate activity, and it has been a long-standing question what mechanism could achieve this task. Mechanisms from statistical physics have long suggested that this homeostasis of brain activity could occur even without a central regulator, via self-organization at the level of neurons and their interactions alone. Such physical mechanisms, from the class of self-organized criticality, exhibit characteristic dynamical signatures similar to the seismic activity of earthquakes. Measurements of cortex rest activity showed first signs of dynamical signatures potentially pointing to self-organized critical dynamics in the brain. Indeed, recent more accurate measurements allowed for a detailed comparison with the scaling theory of non-equilibrium critical phenomena, proving the existence of criticality in cortex dynamics. Here we compare this new evaluation of cortex activity data to the predictions of the earliest physics spin model of self-organized critical neural networks. We find that the model matches the recent experimental data and its interpretation in terms of dynamical signatures of criticality in the brain. The combination of signatures for criticality, power-law distributions of avalanche sizes and durations, as well as a specific scaling relationship between anomalous exponents, defines a universality class characteristic of the particular critical phenomenon observed in the neural experiments. The model is thus a candidate minimal model of a self-organized critical adaptive network for the universality class of neural criticality. As a prototype model, it provides the background for models that may include more biological detail yet share the same universality class characteristic of the homeostasis of activity in the brain.

  18. Childhood Maltreatment, Shame-Proneness and Self-Criticism in Social Anxiety Disorder: A Sequential Mediational Model.

    Science.gov (United States)

    Shahar, Ben; Doron, Guy; Szepsenwol, Ohad

    2015-01-01

    Previous research has shown a robust link between emotional abuse and neglect with social anxiety symptoms. However, the mechanisms through which these links operate are less clear. We hypothesized a model in which early experiences of abuse and neglect create aversive shame states, internalized into a stable shame-based cognitive-affective schema. Self-criticism is conceptualized as a safety strategy designed to conceal flaws and prevent further experiences of shame. However, self-criticism maintains negative self-perceptions and insecurity in social situations. To provide preliminary, cross-sectional support for this model, a nonclinical community sample of 219 adults from Israel (110 females, mean age = 38.7) completed measures of childhood trauma, shame-proneness, self-criticism and social anxiety symptoms. A sequential mediational model showed that emotional abuse, but not emotional neglect, predicted shame-proneness, which in turn predicted self-criticism, which in turn predicted social anxiety symptoms. These results provide initial evidence supporting the role of shame and self-criticism in the development and maintenance of social anxiety disorder. The clinical implications of these findings are discussed. Previous research has shown that histories of emotional abuse and emotional neglect predict social anxiety symptoms, but the mechanisms that underlie these associations are not clear. Using psycho-evolutionary and emotion-focused perspectives, the findings of the current study suggest that shame and self-criticism play an important role in social anxiety and may mediate the link between emotional abuse and symptoms. These findings also suggest that therapeutic interventions specifically targeting shame and self-criticism should be incorporated into treatments for social anxiety, especially with socially anxious patients with abuse histories. Copyright © 2014 John Wiley & Sons, Ltd.

  19. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    Full Text Available We propose a weather prediction model based on a neural network and fuzzy inference system (NFIS-WPM) and apply it to predict daily fuzzy precipitation from meteorological premises. The model consists of two parts: the first is the “fuzzy rule-based neural network”, which simulates sequential relations among fuzzy sets using an artificial neural network; the second is the “neural fuzzy inference system”, which builds on the first but can learn new fuzzy rules from previous ones according to the algorithm we propose. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. The need for accurate weather prediction is apparent when considering its benefits; however, the excessive pursuit of accuracy can render some “accurate” prediction results meaningless, and numerical prediction models are often complex and time-consuming. By applying this novel model to a precipitation prediction problem, we obtain more accurate precipitation predictions with simpler methods than complex numerical forecasting models, which occupy large computational resources, are time-consuming, and have low predictive accuracy. We likewise achieve more accurate precipitation predictions than traditional artificial neural networks with low predictive accuracy.

  20. Foundation Settlement Prediction Based on a Novel NGM Model

    Directory of Open Access Journals (Sweden)

    Peng-Yu Chen

    2014-01-01

    Full Text Available Prediction of foundation or subgrade settlement is very important during engineering construction. Because many settlement-time sequences follow a nonhomogeneous index trend, a novel grey forecasting model called the NGM (1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM (1,1,k,c) model has the property of white exponential law coincidence and can precisely predict a pure nonhomogeneous index sequence. We use two case studies to verify the predictive performance of the NGM (1,1,k,c) model for settlement prediction. The results show that the model achieves excellent prediction accuracy; it is thus well suited to the simulation and prediction of approximately nonhomogeneous index sequences and has excellent application value in settlement prediction.
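
For orientation, the NGM (1,1,k,c) model extends the classic GM (1,1) grey model to non-homogeneous exponential trends. Below is a minimal, hedged sketch of the classic GM (1,1) fit-and-forecast procedure only; the NGM variant's extra terms are not shown, and the function name is illustrative.

```python
import math

def gm11_forecast(x0, steps=1):
    """Fit a classic GM(1,1) grey model to a short positive series x0 and
    forecast `steps` further values.

    The 1-AGO (accumulated) series x1 is assumed to satisfy the whitenization
    equation dx1/dt + a*x1 = b; the parameters (a, b) are estimated by least
    squares against the background values z1(k) = 0.5*(x1(k) + x1(k-1)).
    """
    n = len(x0)
    x1 = [x0[0]]
    for v in x0[1:]:
        x1.append(x1[-1] + v)  # accumulated generating operation (AGO)
    # Normal equations for the 2-parameter least-squares fit x0(k) = -a*z + b.
    s_zz = s_z = s_zy = s_y = 0.0
    for k in range(1, n):
        z = 0.5 * (x1[k] + x1[k - 1])
        y = x0[k]
        s_zz += z * z; s_z += z; s_zy += z * y; s_y += y
    m = n - 1
    det = s_zz * m - s_z * s_z
    a = (s_z * s_y - m * s_zy) / det      # develop coefficient
    b = (s_zz * s_y - s_z * s_zy) / det   # grey input

    def x1_hat(k):  # time-response function, k = 0, 1, 2, ...
        return (x0[0] - b / a) * math.exp(-a * k) + b / a

    # Inverse accumulation (IAGO) gives the forecasts of the original series.
    return [x1_hat(k + 1) - x1_hat(k) for k in range(n - 1, n - 1 + steps)]
```

On a purely exponential settlement-like series this one-step forecast is accurate to a fraction of a percent; the NGM variant targets series with an additional non-homogeneous input term.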

  1. Critical ingredients of Type Ia supernova radiative-transfer modelling

    Science.gov (United States)

    Dessart, Luc; Hillier, D. John; Blondin, Stéphane; Khokhlov, Alexei

    2014-07-01

    We explore the physics of Type Ia supernova (SN Ia) light curves and spectra using the 1D non-local thermodynamic equilibrium (non-LTE) time-dependent radiative-transfer code CMFGEN. Rather than adjusting ejecta properties to match observations, we select as input one `standard' 1D Chandrasekhar-mass delayed-detonation hydrodynamical model, and then explore the sensitivity of the radiation and gas properties of the ejecta to radiative-transfer modelling assumptions. The correct computation of SN Ia radiation is not exclusively a solution to an `opacity problem', characterized by the treatment of a large number of lines. We demonstrate that the key is to identify and treat important atomic processes consistently; this is not limited to treating line blanketing in non-LTE. We show that including forbidden-line transitions of metals, and in particular Co, is increasingly important for the temperature and ionization of the gas beyond maximum light. Non-thermal ionization and excitation are also critical, since they affect the colour evolution and the ΔM15 decline rate of our model. While it has little impact on the bolometric luminosity, a more complete treatment of decay routes leads to enhanced line blanketing, e.g. associated with 48Ti in the U and B bands. Overall, we find that SN Ia radiation properties are influenced in a complicated way by the atomic data we employ, so that obtaining converged results is a real challenge. Nonetheless, with our fully fledged CMFGEN model, we obtain good agreement with the golden-standard Type Ia SN 2005cf in the optical and near-IR, from 5 to 60 d after explosion, suggesting that assuming spherical symmetry is not detrimental to SN Ia radiative-transfer modelling at these times. Multi-D effects no doubt matter, but they are perhaps less important than accurately treating the non-LTE processes that are crucial to obtain reliable temperature and ionization structures.

  2. Predictive capabilities of various constitutive models for arterial tissue.

    Science.gov (United States)

    Schroeder, Florian; Polzer, Stanislav; Slažanský, Martin; Man, Vojtěch; Skácel, Pavel

    2018-02-01

    The aim of this study is to validate several constitutive models by assessing their capabilities in describing and predicting the uniaxial and biaxial behavior of porcine aortic tissue. Fourteen samples from porcine aortas were used to perform 2 uniaxial and 5 biaxial tensile tests; transversal strains were also recorded for the uniaxial tests. The experimental data were fitted by four constitutive models: the Holzapfel-Gasser-Ogden model (HGO), a model based on the generalized structure tensor (GST), the Four-Fiber-Family model (FFF) and the Microfiber model. Fitting was performed to the uniaxial and biaxial data sets separately and the descriptive capabilities of the models were compared. Their predictive capabilities were assessed in two ways. First, each model was fitted to the biaxial data and its accuracy (in terms of R² and NRMSE) in predicting both uniaxial responses was evaluated. This procedure was then performed conversely: each model was fitted to both uniaxial tests and its accuracy in predicting the 5 biaxial responses was observed. The descriptive capabilities of all models were excellent. In predicting the uniaxial response from biaxial data, the Microfiber model was the most accurate, while the other models also showed reasonable accuracy. The Microfiber and FFF models were able to reasonably predict biaxial responses from uniaxial data, while the HGO and GST models failed completely in this task. The HGO and GST models cannot predict biaxial arterial wall behavior, while the FFF model is the most robust of the investigated constitutive models. Knowledge of transversal strains in uniaxial tests improves the robustness of constitutive models. Copyright © 2017 Elsevier Ltd. All rights reserved.
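
The two accuracy measures used in this record, R² and NRMSE, can be computed directly from observed and predicted values. A minimal sketch follows, normalizing the RMSE by the observed range, which is one of several common conventions; the paper's exact normalization may differ.

```python
import math

def r_squared(observed, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

def nrmse(observed, predicted):
    """Root-mean-square error normalized by the observed range."""
    mse = sum((o - p) ** 2 for o, p in zip(observed, predicted)) / len(observed)
    return math.sqrt(mse) / (max(observed) - min(observed))
```

A perfect prediction gives R² = 1 and NRMSE = 0; R² can be negative when the model fits worse than the observed mean.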

  3. Comparing National Water Model Inundation Predictions with Hydrodynamic Modeling

    Science.gov (United States)

    Egbert, R. J.; Shastry, A.; Aristizabal, F.; Luo, C.

    2017-12-01

    The National Water Model (NWM) simulates the hydrologic cycle and produces streamflow forecasts, runoff, and other variables for 2.7 million reaches along the National Hydrography Dataset for the continental United States. NWM applies Muskingum-Cunge channel routing, which is based on the continuity equation. However, the momentum equation also needs to be considered to obtain better estimates of streamflow and stage in rivers, especially for applications such as flood inundation mapping. The Simulation Program for River NeTworks (SPRNT) is a fully dynamic model for large-scale river networks that solves the full nonlinear Saint-Venant equations for 1D flow and stage height in river channel networks with non-uniform bathymetry. For the current work, the steady-state version of the SPRNT model was leveraged. An evaluation of SPRNT's and NWM's abilities to predict inundation was conducted for the record flood of Hurricane Matthew in October 2016 along the Neuse River in North Carolina. This event was known to have been influenced by backwater effects from the hurricane's storm surge. Retrospective NWM discharge predictions were converted to stage using synthetic rating curves. The stages from both models were used to produce flood inundation maps using the Height Above Nearest Drainage (HAND) method, which uses local relative heights to provide a spatial representation of inundation depths. To validate the inundation produced by the models, Sentinel-1A synthetic aperture radar data in the VV and VH polarizations, along with auxiliary data, was used to produce a reference inundation map. A preliminary, binary comparison of the inundation maps to the reference, limited to the five HUC-12 areas of Goldsboro, NC, yielded flood inundation accuracies of 74.68% for NWM and 78.37% for SPRNT. The differences for all the relevant test statistics including accuracy, true positive rate, true negative rate, and positive predictive value were found
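
The binary comparison statistics named above (accuracy, true positive rate, true negative rate, positive predictive value) follow directly from the confusion counts of the predicted and reference maps. A minimal sketch over boolean inundation grids; the grid encoding and function name are hypothetical.

```python
def binary_map_metrics(predicted, reference):
    """Confusion-count statistics for two equally sized binary (0/1) grids.

    Assumes each map contains both positives and negatives so that no
    denominator below is zero.
    """
    tp = fp = tn = fn = 0
    for p_row, r_row in zip(predicted, reference):
        for p, r in zip(p_row, r_row):
            if p and r:
                tp += 1
            elif p and not r:
                fp += 1
            elif r:
                fn += 1
            else:
                tn += 1
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "tpr": tp / (tp + fn),   # true positive rate (recall)
        "tnr": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),   # positive predictive value (precision)
    }
```

Comparing a modeled inundation grid against a SAR-derived reference grid with this function reproduces the kind of accuracy figures quoted in the record.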

  4. A Multiscale Modeling System: Developments, Applications, and Critical Issues

    Science.gov (United States)

    Tao, Wei-Kuo; Lau, William; Simpson, Joanne; Chern, Jiun-Dar; Atlas, Robert; Khairoutdinov, Marat; Randall, David; Li, Jui-Lin; Waliser, Duane E.; Jiang, Jonathan; Hou, Arthur

    2009-01-01

    The foremost challenge in parameterizing convective clouds and cloud systems in large-scale models lies in the many coupled dynamical and physical processes that interact over a wide range of scales, from microphysical scales to the synoptic and planetary scales. This makes the comprehension and representation of convective clouds and cloud systems one of the most complex scientific problems in Earth science. During the past decade, the Global Energy and Water Cycle Experiment (GEWEX) Cloud System Study (GCSS) has pioneered the use of single-column models (SCMs) and cloud-resolving models (CRMs) for the evaluation of the cloud and radiation parameterizations in general circulation models (GCMs; e.g., GEWEX Cloud System Science Team 1993). These activities have uncovered many systematic biases in the radiation, cloud and convection parameterizations of GCMs and have led to the development of new schemes (e.g., Zhang 2002; Pincus et al. 2003; Zhang and Wu 2003; Wu et al. 2003; Liang and Wu 2005; Wu and Liang 2005, and others). Comparisons between SCMs and CRMs using the same large-scale forcing derived from field campaigns have demonstrated that CRMs are superior to SCMs in the prediction of temperature and moisture tendencies (e.g., Das et al. 1999; Randall et al. 2003b; Xie et al. 2005).

  5. Burnout and posttraumatic stress in paediatric critical care personnel: Prediction from resilience and coping styles.

    Science.gov (United States)

    Rodríguez-Rey, Rocío; Palacios, Alba; Alonso-Tapia, Jesús; Pérez, Elena; Álvarez, Elena; Coca, Ana; Mencía, Santiago; Marcos, Ana; Mayordomo-Colunga, Juan; Fernández, Francisco; Gómez, Fernando; Cruz, Jaime; Ordóñez, Olga; Llorente, Ana

    2018-03-28

    Our aims were (1) to explore the prevalence of burnout syndrome (BOS) and posttraumatic stress disorder (PTSD) in a sample of Spanish staff working in the paediatric intensive care unit (PICU) and compare these rates with a sample of general paediatric staff, and (2) to explore how resilience, coping strategies, and professional and demographic variables influence BOS and PTSD. This is a multicentre, cross-sectional study. Data were collected in the PICU and in other paediatric wards of nine hospitals. Participants consisted of 298 PICU staff members (57 physicians, 177 nurses, and 64 nursing assistants) and 189 professionals working in non-critical paediatric units (53 physicians, 104 nurses, and 32 nursing assistants). They completed the Brief Resilience Scale, the Coping Strategies Questionnaire for healthcare providers, the Maslach Burnout Inventory, and the Trauma Screening Questionnaire. Fifty-six percent of PICU staff reported burnout in at least one dimension (36.20% scored over the cut-off for emotional exhaustion, 27.20% for depersonalisation, and 20.10% for low personal accomplishment), and 20.1% reported PTSD. There were no differences in burnout and PTSD scores between PICU and non-PICU staff members, whether among physicians, nurses, or nursing assistants. Higher burnout and PTSD rates emerged after the death of a child and/or conflicts with patients/families or colleagues. Around 30% of the variance in BOS and PTSD is predicted by frequent use of the emotion-focused coping style and infrequent use of the problem-focused coping style. Interventions to prevent and treat distress among paediatric staff members are needed and should focus on: (i) promoting active emotional processing of traumatic events and encouraging positive thinking; (ii) developing a sense of detached concern; (iii) improving the ability to solve interpersonal conflicts; and (iv) providing adequate training in end-of-life care. Copyright © 2018 Australian

  6. Modelling critical thinking through learning-oriented assessment ...

    African Journals Online (AJOL)

    In two independent studies, the Cornell Critical Thinking Test – Level Z and the Watson Glaser Critical Thinking Appraisal were administered to gauge the critical thinking abilities of teacher education students. The research results obtained from the two studies will be briefly discussed as evidence that there is a dire need to ...

  7. Establishment of tensile failure induced sanding onset prediction model for cased-perforated gas wells

    Directory of Open Access Journals (Sweden)

    Mohammad Tabaeh Hayavi

    2017-04-01

    Full Text Available Sand production is a challenging issue in the upstream oil and gas industry, causing operational and safety problems. It is therefore essential to predict and evaluate the sanding onset of wells before drilling them. In this paper, new poroelastoplastic stress solutions around the perforation tunnel and tip, based on the Mohr–Coulomb criterion, are first presented. Based on these stress models, a tensile-failure-induced sanding onset prediction model for cased-perforated gas wells is derived. The analytical model is then applied to field data to verify its applicability. The results from the perforation-tip tensile-failure sanding model are very close to the field data, so this model is recommended for forecasting the critical conditions of sand production. Such predictions are necessary to provide technical support for sand-control decision-making and to predict the production conditions at which sanding onset occurs.
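
In principal-stress form, the Mohr–Coulomb criterion underlying stress solutions such as those in this record can be written σ₁ = σ₃ tan²(45° + φ/2) + 2c tan(45° + φ/2), with cohesion c and friction angle φ. A hedged check function; the numbers and units in the usage note are purely illustrative, not taken from the paper.

```python
import math

def mohr_coulomb_fails(sigma1, sigma3, cohesion, friction_deg):
    """True if the principal effective stress state violates Mohr-Coulomb.

    Failure when sigma1 >= sigma3 * tan^2(45 + phi/2) + 2*c*tan(45 + phi/2),
    with compressive stresses positive and consistent stress units.
    """
    t = math.tan(math.radians(45.0 + friction_deg / 2.0))
    return sigma1 >= sigma3 * t * t + 2.0 * cohesion * t
```

For example, with c = 5 and φ = 30° the failure threshold at σ₃ = 10 is about 47.3 in the same stress units, so σ₁ = 50 fails while σ₁ = 40 holds.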

  8. Critical-Inquiry-Based-Learning: Model of Learning to Promote Critical Thinking Ability of Pre-service Teachers

    Science.gov (United States)

    Prayogi, S.; Yuanita, L.; Wasis

    2018-01-01

    This study aimed to develop the Critical-Inquiry-Based-Learning (CIBL) model to promote the critical thinking (CT) ability of pre-service teachers. The CIBL model was developed to meet the criteria of validity, practicality, and effectiveness. Validation of the model involved 4 expert validators through a focus group discussion (FGD). The CIBL model was declared valid to promote CT ability, with a validity level (Va) of 4.20 and reliability (r) of 90.1% (very reliable). The practicality of the model was evaluated when it was implemented with 17 pre-service teachers. The CIBL model was declared practical, as measured by learning feasibility (LF) with very good criteria (LF-score = 4.75). The effectiveness of the model was evaluated from the improvement in CT ability after its implementation. CT ability was evaluated using a scoring technique adapted from the Ennis-Weir Critical Thinking Essay Test. The average CT score on the pretest was -1.53 (uncritical), whereas on the posttest it was 8.76 (critical), with an N-gain score of 0.76 (high). Based on these results, it can be concluded that the developed CIBL model is feasible for promoting the CT ability of pre-service teachers.
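
The N-gain reported in this record conventionally follows Hake's normalized-gain formula g = (post − pre) / (max − pre), the fraction of the possible improvement that was achieved. A one-line sketch; the maximum attainable score is left as a parameter because the test's maximum is not stated in the abstract.

```python
def n_gain(pre, post, max_score):
    """Hake's normalized gain: fraction of possible improvement achieved."""
    return (post - pre) / (max_score - pre)
```

For instance, improving from 2 to 8 on a 10-point scale gives g = 6/8 = 0.75, a "high" gain under the usual g > 0.7 convention.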

  9. Predictive models for moving contact line flows

    Science.gov (United States)

    Rame, Enrique; Garoff, Stephen

    2003-01-01

    Modeling flows with moving contact lines poses the formidable challenge that the usual assumptions of a Newtonian fluid and the no-slip condition give rise to a well-known singularity. This singularity prevents one from satisfying the contact angle condition needed to compute the shape of the fluid-fluid interface, a crucial calculation without which design parameters such as the pressure drop needed to move an immiscible two-fluid system through a solid matrix cannot be evaluated. Some progress has been made for low-Capillary-number spreading flows. Combining experimental measurements of fluid-fluid interfaces very near the moving contact line with an analytical expression for the interface shape, we can determine a parameter that forms a boundary condition for the macroscopic interface shape when Ca ≪ 1. This parameter, which plays the role of an "apparent" or macroscopic dynamic contact angle, is shown by the theory to depend on the system geometry through the macroscopic length scale. This theoretically established dependence on geometry allows the parameter to be "transferable" from the geometry of the measurement to any other geometry involving the same material system. Unfortunately this prediction of the theory cannot be tested on Earth.

  10. Developmental prediction model for early alcohol initiation in Dutch adolescents

    NARCIS (Netherlands)

    Geels, L.M.; Vink, J.M.; Beijsterveldt, C.E.M. van; Bartels, M.; Boomsma, D.I.

    2013-01-01

    Objective: Multiple factors predict early alcohol initiation in teenagers. Among these are genetic risk factors, childhood behavioral problems, life events, lifestyle, and family environment. We constructed a developmental prediction model for alcohol initiation below the Dutch legal drinking age

  11. Incorporating Retention Time to Refine Models Predicting Thermal Regimes of Stream Networks Across New England

    Science.gov (United States)

    Thermal regimes are a critical factor in models predicting effects of watershed management activities on fish habitat suitability. We have assembled a database of lotic temperature time series across New England (> 7000 station-year combinations) from state and Federal data s...

  12. Seasonal predictability of Kiremt rainfall in coupled general circulation models

    Science.gov (United States)

    Gleixner, Stephanie; Keenlyside, Noel S.; Demissie, Teferi D.; Counillon, François; Wang, Yiguo; Viste, Ellen

    2017-11-01

    The Ethiopian economy and population is strongly dependent on rainfall. Operational seasonal predictions for the main rainy season (Kiremt, June-September) are based on statistical approaches with Pacific sea surface temperatures (SST) as the main predictor. Here we analyse dynamical predictions from 11 coupled general circulation models for the Kiremt seasons from 1985-2005 with the forecasts starting from the beginning of May. We find skillful predictions from three of the 11 models, but no model beats a simple linear prediction model based on the predicted Niño3.4 indices. The skill of the individual models for dynamically predicting Kiremt rainfall depends on the strength of the teleconnection between Kiremt rainfall and concurrent Pacific SST in the models. Models that do not simulate this teleconnection fail to capture the observed relationship between Kiremt rainfall and the large-scale Walker circulation.
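
The "simple linear prediction model based on the predicted Niño3.4 indices" used as a benchmark in this record can be emulated with leave-one-out linear regression, scoring out-of-sample predictions by their correlation with observations. This is a hedged sketch with hypothetical arrays; the study's exact cross-validation protocol may differ.

```python
def loo_linear_skill(x, y):
    """Leave-one-out ordinary least squares of y on a single predictor x.

    Returns the Pearson correlation between the out-of-sample predictions
    and the observations, a common measure of forecast skill.
    """
    n = len(x)
    preds = []
    for i in range(n):
        xs = [x[j] for j in range(n) if j != i]
        ys = [y[j] for j in range(n) if j != i]
        mx = sum(xs) / len(xs)
        my = sum(ys) / len(ys)
        slope = (sum((u - mx) * (v - my) for u, v in zip(xs, ys))
                 / sum((u - mx) ** 2 for u in xs))
        preds.append(my + slope * (x[i] - mx))  # predict the held-out year
    # Pearson correlation between predictions and observations.
    mp = sum(preds) / n
    mo = sum(y) / n
    num = sum((p - mp) * (o - mo) for p, o in zip(preds, y))
    den = (sum((p - mp) ** 2 for p in preds)
           * sum((o - mo) ** 2 for o in y)) ** 0.5
    return num / den
```

With x holding a predicted Niño3.4 index per season and y the corresponding Kiremt rainfall, this gives the skill any dynamical model would have to beat.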

  13. Integrated Modeling for Road Condition Prediction

    Science.gov (United States)

    2017-12-31

    Transportation Systems Management and Operations (TSMO) is at a critical point in its development due to an explosion in data availability and analytics. Intelligent transportation systems (ITS) gathering data about weather and traffic conditions cou...

  14. MODELLING OF DYNAMIC SPEED LIMITS USING THE MODEL PREDICTIVE CONTROL

    Directory of Open Access Journals (Sweden)

    Andrey Borisovich Nikolaev

    2017-09-01

    Full Text Available The article considers issues of traffic management using an intelligent vehicle-highway system (IVHS, “Car-Road”), which consists of interacting intelligent vehicles (IVs) and intelligent roadside controllers. Vehicles are organized in convoys with small distances between them, and all vehicles are assumed to be fully automated (throttle control, braking, steering). Approaches are proposed for determining speed limits for cars on the motorway using model predictive control (MPC). The article proposes an approach to dynamic speed limits that minimizes the downtime of vehicles in traffic.
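
The receding-horizon idea behind MPC-based speed limits can be illustrated with a deliberately toy sketch: at each step, candidate limits are simulated over a short horizon with a crude queue model, and the limit minimizing predicted delay plus a safety penalty is applied. The dynamics, cost weights, and numbers below are invented for illustration and are not the article's model.

```python
def choose_speed_limit(queue, arrivals, candidates=(60, 80, 100, 120),
                       horizon=3, risk_weight=10.0):
    """Pick the speed limit (km/h) minimizing predicted cost over a horizon.

    Toy assumptions: the queue discharges at v/2 vehicles per step, and a
    quadratic safety penalty grows with the speed limit. Cost per step is
    queue length (delay) plus the weighted safety penalty.
    """
    best, best_cost = None, float("inf")
    for v in candidates:
        q, cost = queue, 0.0
        for _ in range(horizon):
            q = max(0.0, q + arrivals - v / 2)      # toy queue dynamics
            cost += q + risk_weight * (v / 120.0) ** 2
        if cost < best_cost:
            best, best_cost = v, cost
    return best
```

In genuine MPC only the first decision is applied, the state is re-measured, and the optimization is repeated at the next step; here, a long queue with heavy arrivals favors the highest limit, while light traffic favors the lowest.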

  15. A critical discussion on the applicability of Compound Topographic Index (CTI) for predicting ephemeral gully erosion

    Science.gov (United States)

    Casalí, Javier; Chahor, Youssef; Giménez, Rafael; Campo-Bescós, Miguel

    2016-04-01

    The so-called Compound Topographic Index (CTI) can be calculated for each grid cell of a DEM and used to identify potential locations of ephemeral gullies (EGs) from land topography (CTI = A.S.PLANC, where A is the upstream drainage area, S the local slope, and PLANC the planform curvature, a measure of landscape convergence) (Parker et al., 2007). It can be shown that CTI represents stream power per unit bed area, and it captures the major parameters controlling the pattern and intensity of concentrated surface runoff in the field (Parker et al., 2007). However, other key variables controlling EG erosion, such as soil characteristics and land use and management, are not taken into consideration. The critical CTI value (CTIc) "represents the intensity of concentrated overland flow necessary to initiate erosion and channelised flow under a given set of circumstances" (Parker et al., 2007). The AnnAGNPS (Annualized Agricultural Non-Point Source) pollution model is an important management tool developed by the USDA that uses CTI to locate potential ephemeral gullies. Depending on the rainfall characteristics of the period simulated by AnnAGNPS, potential EGs can then become "actual" and be simulated by the model accordingly. This paper presents preliminary results and a number of considerations from evaluating the CTI tool in Navarre. The CTIc values found are similar to those cited by other authors, and the EG networks that on average occur in the area have been located reasonably well. After our experience we believe it is necessary to distinguish between the CTIc corresponding to the location of the headcuts whose migration originates the EGs (CTIc1), and the CTIc necessary to represent the location of the gully networks in the watershed (CTIc2), where gully headcuts are located at the upstream end of the gullies. Most scientists consider only one CTIc value although, from our point of view, the two situations are different. CTIc1 would represent the
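
Given per-cell grids of upstream drainage area A, slope S, and planform curvature PLANC, the CTI screening described above reduces to a per-cell product compared against the critical value CTIc. A minimal sketch; grids, units, and threshold are hypothetical.

```python
def flag_ephemeral_gully_cells(area, slope, planc, cti_c):
    """Flag DEM cells whose CTI = A * S * PLANC reaches the critical CTIc.

    All three inputs are equally sized 2D grids (row-major lists of lists);
    returns a grid of booleans marking potential ephemeral gully cells.
    """
    return [[(a * s * p) >= cti_c
             for a, s, p in zip(a_row, s_row, p_row)]
            for a_row, s_row, p_row in zip(area, slope, planc)]
```

In an AnnAGNPS-style workflow, the flagged cells are only *potential* gully locations; whether they become actual gullies depends on the rainfall of the simulated period.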

  16. Risk-Based Predictive Maintenance for Safety-Critical Systems by Using Probabilistic Inference

    Directory of Open Access Journals (Sweden)

    Tianhua Xu

    2013-01-01

    Full Text Available Risk-based maintenance (RBM) aims to improve maintenance planning and decision making by reducing the probability and consequences of equipment failure. A new predictive maintenance strategy that integrates a dynamic evolution model and risk assessment is proposed, which can be used to calculate the optimal maintenance time under minimal cost and safety constraints. The dynamic evolution model provides quantified risks by using probabilistic inference with bucket elimination and gives the prospective degradation trend of a complex system. Based on the degradation trend, an optimal maintenance time can be determined by minimizing the expected maintenance cost per time unit. The effectiveness of the proposed method is validated and demonstrated on a collision accident of high-speed trains with obstacles in the presence of safety and cost constraints.

  17. Predictability in models of the atmospheric circulation

    NARCIS (Netherlands)

    Houtekamer, P.L.

    1992-01-01

    It will be clear from the above discussions that skill forecasts are still in their infancy. Operational skill predictions do not exist. One is still struggling to prove that skill predictions, at any range, have any quality at all. It is not clear what the statistics of the analysis error

  18. Predictive models and prognostic factors for upper tract urothelial carcinoma: a comprehensive review of the literature.

    Science.gov (United States)

    Mbeutcha, Aurélie; Mathieu, Romain; Rouprêt, Morgan; Gust, Kilian M; Briganti, Alberto; Karakiewicz, Pierre I; Shariat, Shahrokh F

    2016-10-01

    In the context of customized patient care for upper tract urothelial carcinoma (UTUC), decision-making could be facilitated by risk assessment and prediction tools. The aim of this study was to provide a critical overview of existing predictive models and to review emerging promising prognostic factors for UTUC. A literature search of articles published in English from January 2000 to June 2016 was performed using PubMed. Studies on risk group stratification models and predictive tools in UTUC were selected, together with studies on predictive factors and biomarkers associated with advanced-stage UTUC and oncological outcomes after surgery. Various predictive tools have been described for advanced-stage UTUC assessment, disease recurrence and cancer-specific survival (CSS). Most of these models are based on well-established prognostic factors such as tumor stage, grade and lymph node (LN) metastasis, but some also integrate newly described prognostic factors and biomarkers. These new prediction tools seem to reach a high level of accuracy, but they lack external validation and decision-making analysis. The combinations of patient-, pathology- and surgery-related factors together with novel biomarkers have led to promising predictive tools for oncological outcomes in UTUC. However, external validation of these predictive models is a prerequisite before their introduction into daily practice. New models predicting response to therapy are urgently needed to allow accurate and safe individualized management in this heterogeneous disease.

  19. Modeling critical zone processes in intensively managed environments

    Science.gov (United States)

    Kumar, Praveen; Le, Phong; Woo, Dong; Yan, Qina

    2017-04-01

    Processes in the Critical Zone (CZ), which sustain terrestrial life, are tightly coupled across hydrological, physical, biochemical, and many other domains over both short and long timescales. In addition, vegetation acclimation resulting from elevated atmospheric CO2 concentration, along with response to increased temperature and altered rainfall pattern, is expected to result in emergent behaviors in ecologic and hydrologic functions, subsequently controlling CZ processes. We hypothesize that the interplay between micro-topographic variability and these emergent behaviors will shape complex responses of a range of ecosystem dynamics within the CZ. Here, we develop a modeling framework ('Dhara') that explicitly incorporates micro-topographic variability based on lidar topographic data with coupling of multi-layer modeling of the soil-vegetation continuum and 3-D surface-subsurface transport processes to study ecological and biogeochemical dynamics. We further couple a C-N model with a physically based hydro-geomorphologic model to quantify (i) how topographic variability controls the spatial distribution of soil moisture, temperature, and biogeochemical processes, and (ii) how farming activities modify the interaction between soil erosion and soil organic carbon (SOC) dynamics. To address the intensive computational demand from high-resolution modeling at lidar data scale, we use a hybrid CPU-GPU parallel computing architecture run over large supercomputing systems for simulations. Our findings indicate that rising CO2 concentration and air temperature have opposing effects on soil moisture, surface water and ponding in topographic depressions. Further, the relatively higher soil moisture and lower soil temperature contribute to decreased soil microbial activities in the low-lying areas due to anaerobic conditions and reduced temperatures. 
The decreased microbial relevant processes cause the reduction of nitrification rates, resulting in relatively lower nitrate

  20. Theoretical Models of Deliberative Democracy: A Critical Analysis

    Directory of Open Access Journals (Sweden)

    Tutui Viorel

    2015-07-01

    Full Text Available Abstract: My paper focuses on presenting and analyzing some of the most important theoretical models of deliberative democracy and on emphasizing their limits. Firstly, I will mention James Fishkin’s account of deliberative democracy and its relations with other democratic models. He differentiates between four democratic theories: competitive democracy, elite deliberation, participatory democracy and deliberative democracy. Each of these theories makes an explicit commitment to two of the following four “principles”: political equality, participation, deliberation, nontyranny. Deliberative democracy is committed to political equality and deliberation. Secondly, I will present Philip Pettit’s view concerning the main constraints of deliberative democracy: the inclusion constraint, the judgmental constraint and the dialogical constraint. Thirdly, I will refer to Amy Gutmann and Dennis Thompson’s conception regarding the “requirements” or characteristics of deliberative democracy: the reason-giving requirement, the accessibility of reasons, the binding character of the decisions and the dynamic nature of the deliberative process. Finally, I will discuss Joshua Cohen’s “ideal deliberative procedure”, which has the following features: it is free, it is reasoned, the parties are substantively equal, and the procedure aims to arrive at a rationally motivated consensus. After presenting these models I will provide a critical analysis of each one of them with the purpose of revealing their virtues and limits. I will make some suggestions in order to combine the virtues of these models, to transcend their limitations and to offer a more systematic account of deliberative democracy. In the next four sections I will take into consideration four main strategies for combining political and epistemic values (“optimistic”, “deliberative”, “democratic” and “pragmatic”) and the main objections they have to face. In the concluding section

  1. Required Collaborative Work in Online Courses: A Predictive Modeling Approach

    Science.gov (United States)

    Smith, Marlene A.; Kellogg, Deborah L.

    2015-01-01

    This article describes a predictive model that assesses whether a student will have greater perceived learning in group assignments or in individual work. The model produces correct classifications 87.5% of the time. The research is notable in that it is the first in the education literature to adopt a predictive modeling methodology using data…

  2. Models for predicting compressive strength and water absorption of ...

    African Journals Online (AJOL)

    This work presents a mathematical model for predicting the compressive strength and water absorption of laterite-quarry dust cement block using augmented Scheffe's simplex lattice design. The statistical models developed can predict the mix proportion that will yield the desired property. The models were tested for lack of ...

  3. Uncertainty analysis on simple mass balance model to calculate critical loads for soil acidity

    International Nuclear Information System (INIS)

    Li Harbin; McNulty, Steven G.

    2007-01-01

    Simple mass balance equations (SMBE) of critical acid loads (CAL) in forest soil were developed to assess potential risks of air pollutants to ecosystems. However, to apply SMBE reliably at large scales, SMBE must be tested for adequacy and uncertainty. Our goal was to provide a detailed analysis of uncertainty in SMBE so that sound strategies for scaling up CAL estimates to the national scale could be developed. Specifically, we wanted to quantify CAL uncertainty under natural variability in 17 model parameters, and determine their relative contributions in predicting CAL. Results indicated that uncertainty in CAL came primarily from components of base cation weathering (BCw; 49%) and acid neutralizing capacity (46%), whereas the most critical parameters were BCw base rate (62%), soil depth (20%), and soil temperature (11%). Thus, improvements in estimates of these factors are crucial to reducing uncertainty and successfully scaling up SMBE for national assessments of CAL. - A comprehensive uncertainty analysis, with advanced techniques and the full list and full value ranges of all individual parameters, was used to examine a simple mass balance model and address questions of error partition and uncertainty reduction in critical acid load estimates that were not fully answered by previous studies
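The kind of uncertainty partitioning this record reports, shares of output variance attributed to individual inputs, can be sketched with a Monte Carlo pass over a drastically simplified stand-in for the SMBE. The two-term formula, the distributions and their parameters are illustrative assumptions, not the study's equations or values.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Hypothetical parameter distributions (illustrative values, not the study's):
bc_weathering = rng.normal(1.0, 0.3, N)   # base cation weathering, BCw (keq/ha/yr)
anc_leaching = rng.normal(0.4, 0.2, N)    # acid neutralizing capacity term

cal = bc_weathering - anc_leaching        # simplified stand-in for the SMBE

# Variance-based share of each independent input in the output uncertainty:
share_bcw = bc_weathering.var() / cal.var()
share_anc = anc_leaching.var() / cal.var()
print(f"BCw share: {share_bcw:.0%}, ANC share: {share_anc:.0%}")
```

Because the toy model is additive in independent inputs, the shares sum to roughly one; the paper's full analysis handles many more parameters and interactions.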

  4. Assessment of continuum mechanics models in predicting buckling strains of single-walled carbon nanotubes.

    Science.gov (United States)

    Zhang, Y Y; Wang, C M; Duan, W H; Xiang, Y; Zong, Z

    2009-09-30

    This paper presents an assessment of continuum mechanics (beam and cylindrical shell) models in the prediction of critical buckling strains of axially loaded single-walled carbon nanotubes (SWCNTs). Molecular dynamics (MD) simulation results for SWCNTs with various aspect (length-to-diameter) ratios and diameters will be used as the reference solutions for this assessment exercise. From MD simulations, two distinct buckling modes are observed, i.e. the shell-type buckling mode, when the aspect ratios are small, and the beam-type mode, when the aspect ratios are large. For moderate aspect ratios, the SWCNTs buckle in a mixed beam-shell mode. Therefore one chooses either the beam or the shell model depending on the aspect ratio of the carbon nanotubes (CNTs). It will be shown herein that for SWCNTs with long aspect ratios, the local Euler beam results are comparable to MD simulation results carried out at room temperature. However, when the SWCNTs have moderate aspect ratios, it is necessary to use the more refined nonlocal beam theory or the Timoshenko beam model for a better prediction of the critical strain. For short SWCNTs with large diameters, the nonlocal shell model with the appropriate small length scale parameter can provide critical strains that are in good agreement with MD results. However, for short SWCNTs with small diameters, more work has to be done to refine the nonlocal cylindrical shell model for better prediction of critical strains.
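The two limiting continuum estimates discussed here, the Euler beam strain for slender tubes and the classical cylindrical-shell buckling strain for short, wide ones, can be sketched as below. The CNT radius, wall thickness and Poisson ratio are illustrative assumptions, and the simple local formulas stand in for the refined nonlocal theories the paper recommends.

```python
import math

def euler_beam_strain(R, L):
    """Critical axial strain of a pinned thin tube treated as an Euler beam;
    the radius of gyration of a thin-walled tube is r_g ~ R / sqrt(2)."""
    r_g = R / math.sqrt(2)
    return (math.pi * r_g / L) ** 2

def shell_buckling_strain(R, h, nu=0.19):
    """Classical axial buckling strain of a thin cylindrical shell."""
    return h / (R * math.sqrt(3.0 * (1.0 - nu ** 2)))

R, h = 0.68e-9, 0.34e-9            # hypothetical CNT radius and wall thickness (m)
for aspect in (5, 20):             # length-to-diameter ratios
    L = aspect * 2 * R
    print(aspect, euler_beam_strain(R, L), shell_buckling_strain(R, h))
```

The beam estimate falls off with the square of length while the shell estimate is length-independent, which is why the governing mode switches with aspect ratio.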

  5. Prediction models for drug-induced hepatotoxicity by using weighted molecular fingerprints.

    Science.gov (United States)

    Kim, Eunyoung; Nam, Hojung

    2017-05-31

    Drug-induced liver injury (DILI) is a critical issue in drug development because DILI causes failures in clinical trials and the withdrawal of approved drugs from the market. There have been many attempts to predict the risk of DILI based on in vivo and in silico identification of hepatotoxic compounds. In the current study, we propose an in silico prediction model for DILI using weighted molecular fingerprints. We used 881-bit molecular fingerprints as features describing the presence or absence of each substructure in a compound. The Bayesian probability of each substructure was then calculated and labeled (positive or negative for DILI), and a weighted fingerprint was determined from the ratio of the DILI-positive to the DILI-negative probability values. Using the weighted fingerprint features, prediction models were trained and evaluated with the Random Forest (RF) and Support Vector Machine (SVM) algorithms. The constructed models yielded accuracies of 73.8% and 72.6% and AUCs of 0.791 and 0.768 in cross-validation. In independent tests, the models achieved accuracies of 60.1% and 61.1% for RF and SVM, respectively. The results validated that weighted features helped increase the overall performance of the prediction models. The constructed models were further applied to the prediction of DILI potential in natural compounds from herbs, and 13,996 unique herbal compounds were predicted as DILI-positive with the SVM model. The prediction models with weighted features increased the performance compared to non-weighted models. Moreover, we predicted the DILI potential of herbs with the best-performing model, and the prediction results suggest that many herbal compounds could have the potential to cause DILI. We can thus infer that taking natural products without detailed references about the relevant pathways may be dangerous. Considering the frequency of use of compounds in natural herbs and their increased application in drug development, DILI labeling
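The weighting step described here, turning binary substructure bits into class-ratio weights, can be sketched as below. The data are random stand-ins (16 bits instead of 881), the Laplace smoothing and log-ratio form are assumptions, and the downstream RF/SVM training is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_compounds, n_bits = 200, 16      # toy stand-in for the paper's 881-bit fingerprints
X = rng.integers(0, 2, (n_compounds, n_bits))   # binary substructure fingerprints
y = rng.integers(0, 2, n_compounds)             # 1 = DILI-positive label

# Laplace-smoothed probability of each substructure bit within each class:
p_pos = (X[y == 1].sum(axis=0) + 1) / (int((y == 1).sum()) + 2)
p_neg = (X[y == 0].sum(axis=0) + 1) / (int((y == 0).sum()) + 2)

weights = np.log(p_pos / p_neg)    # > 0 means the bit leans DILI-positive
X_weighted = X * weights           # weighted fingerprint features
print(X_weighted.shape)            # these features would then be fed to RF/SVM
```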

  6. Improving Critical Thinking Skills of College Students through RMS Model for Learning Basic Concepts in Science

    Science.gov (United States)

    Muhlisin, Ahmad; Susilo, Herawati; Amin, Mohamad; Rohman, Fatchur

    2016-01-01

    The purposes of this study were to: 1) Examine the effect of RMS learning model towards critical thinking skills. 2) Examine the effect of different academic abilities against critical thinking skills. 3) Examine the effect of the interaction between RMS learning model and different academic abilities against critical thinking skills. The research…

  7. Regression models for predicting anthropometric measurements of ...

    African Journals Online (AJOL)

    measure anthropometric dimensions to predict difficult-to-measure dimensions required for ergonomic design of school furniture. A total of 143 students aged between 16 and 18 years from eight public secondary schools in Ogbomoso, Nigeria ...

  8. FINITE ELEMENT MODEL FOR PREDICTING RESIDUAL ...

    African Journals Online (AJOL)

    direction (σx) had a maximum value of 375 MPa (tensile) and minimum value of ... These results show that the residual stresses obtained by prediction from the finite element method are in fair agreement with the experimental results.

  9. Probabilistic Modeling and Visualization for Bankruptcy Prediction

    DEFF Research Database (Denmark)

    Antunes, Francisco; Ribeiro, Bernardete; Pereira, Francisco Camara

    2017-01-01

    In accounting and finance domains, bankruptcy prediction is of great utility for all of the economic stakeholders. The challenge of accurate assessment of business failure prediction, specially under scenarios of financial crisis, is known to be complicated. Although there have been many successful......). Using real-world bankruptcy data, an in-depth analysis is conducted showing that, in addition to a probabilistic interpretation, the GP can effectively improve the bankruptcy prediction performance with high accuracy when compared to the other approaches. We additionally generate a complete graphical...... visualization to improve our understanding of the different attained performances, effectively compiling all the conducted experiments in a meaningful way. We complete our study with an entropy-based analysis that highlights the uncertainty handling properties provided by the GP, crucial for prediction tasks...

  10. Present state of vapour pressure measurements up to 5000 K, and critical point data prediction of uranium oxide

    International Nuclear Information System (INIS)

    Ohse, R.W.; Babelot, J.F.; Cercignani, C.; Kinsman, P.R.; Long, K.A.; Magill, J.; Scotti, A.

    1979-01-01

    A new dynamic laser pulse heating technique, allowing thermophysical property measurement and equation-of-state studies above 3000 K, is described. The vapour pressure measurements of uranium oxide up to 5000 K, as required for reactor safety analysis, are presented. The present state of experimental work above the melting point is summarised. A complete survey of predicted critical point data of uranium oxides, reviewing the various theoretical models, is given. The various dynamic pulse heating techniques are outlined. For a study of the high temperature vapours and the gas dynamic expansion phenomena of the gas jet, the laser surface heating equipment has been extended to include high speed diagnostics such as multi-channel spectroscopy, time-of-flight mass spectrometry, and image converter photography in both the framing and streak recording modes. The evaporation process and thermodynamic interpretation of the data are discussed. A kinetic theory description of the laser-induced vapour jet using a monoatomic gas dynamical model is given. The optical absorption in the gas jet, giving an upper temperature limit for the applicability of optical pyrometry, has been calculated. The reduction of ionisation potential was found to be of minor importance. (Auth.)

  11. Prediction for Major Adverse Outcomes in Cardiac Surgery: Comparison of Three Prediction Models

    Directory of Open Access Journals (Sweden)

    Cheng-Hung Hsieh

    2007-09-01

    Conclusion: The Parsonnet score performed as well as the logistic regression models in predicting major adverse outcomes. The Parsonnet score appears to be a very suitable model for clinicians to use in risk stratification of cardiac surgery.

  12. A Critical Analysis of Measurement Models of Export Performance

    Directory of Open Access Journals (Sweden)

    Jorge Carneiro

    2007-05-01

    Full Text Available Poor conceptualization of the export performance construct may undermine theory development efforts and may be one of the reasons behind the often conflicting findings in empirical research on the export performance phenomenon. This article reviews the conceptual and empirical literature and proposes a new analytical scheme that may serve as a standard for judging content validity and a guiding yardstick for drawing operational representations of the construct. A critical assessment of some of the most frequently cited measurement frameworks, followed by an analysis of recent (1999-2004) empirical research, leaves no doubt that there are flaws in the conceptualization and operationalization of the performance construct that ought to be addressed. A new measurement model is advanced along with some guidelines which are suggested for its future empirical validation. The new measurement framework allegedly improves on other past efforts in terms of breadth of coverage of the construct’s domain (content validity). It also offers a measurement perspective (with the simultaneous use of both formative and reflective approaches) that appears to reflect better the nature of the construct.

  13. From Predictive Models to Instructional Policies

    Science.gov (United States)

    Rollinson, Joseph; Brunskill, Emma

    2015-01-01

    At their core, Intelligent Tutoring Systems consist of a student model and a policy. The student model captures the state of the student and the policy uses the student model to individualize instruction. Policies require different properties from the student model. For example, a mastery threshold policy requires the student model to have a way…
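The mastery-threshold example this record mentions is commonly realized with Bayesian Knowledge Tracing, where the student model maintains P(mastered) and the policy stops practice once it crosses a threshold. The sketch below uses that standard formulation with illustrative parameter values; it is not the paper's own model.

```python
# Bayesian Knowledge Tracing (BKT) update driving a mastery-threshold policy.
# All parameter values are illustrative.
P_INIT, P_LEARN = 0.2, 0.15      # prior mastery, learning rate per practice item
P_SLIP, P_GUESS = 0.1, 0.2       # slip and guess probabilities
MASTERY = 0.95                   # policy: stop practice once P(mastered) >= 0.95

def bkt_update(p, correct):
    """Posterior P(mastered) after one observed response, then a learning step."""
    if correct:
        cond = p * (1 - P_SLIP) / (p * (1 - P_SLIP) + (1 - p) * P_GUESS)
    else:
        cond = p * P_SLIP / (p * P_SLIP + (1 - p) * (1 - P_GUESS))
    return cond + (1 - cond) * P_LEARN

p, stopped_after = P_INIT, None
for i, correct in enumerate([1, 1, 0, 1, 1, 1], start=1):
    p = bkt_update(p, correct)
    if p >= MASTERY:
        stopped_after = i
        break
print("practice stopped after", stopped_after, "responses")
```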

  14. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    analyses and hypothesis tests as a part of the validation step to provide feedback to analysts and modelers. Decisions on how to proceed in making model-based predictions are made based on these analyses together with the application requirements. Updating, modifying, and understanding the boundaries associated with the model are also assisted through this feedback. (4) We include a "model supplement term" when model problems are indicated. This term provides a (bias) correction to the model so that it will better match the experimental results and more accurately account for uncertainty. Presumably, as the models continue to develop and are used for future applications, the causes for these apparent biases will be identified and the need for this supplementary modeling will diminish. (5) We use a response-modeling approach for our predictions that allows for general types of prediction and for assessment of prediction uncertainty. This approach is demonstrated through a case study supporting the assessment of a weapon system's response when subjected to a hydrocarbon fuel fire. The foam decomposition model provides an important element of the response of a weapon system in this abnormal thermal environment. Rigid foam is used to encapsulate critical components in the weapon system, providing the needed mechanical support as well as thermal isolation. Because the foam begins to decompose at temperatures above 250 C, modeling the decomposition is critical to assessing the weapon system's response. In the validation analysis it is indicated that the model tends to "exaggerate" the effect of temperature changes when compared to the experimental results. The data, however, are too few and too restricted in terms of experimental design to make confident statements regarding modeling problems. For illustration, we assume these indications are correct and compensate for this apparent bias by constructing a model supplement term for use in the model

  15. A Predictive Maintenance Model for Railway Tracks

    DEFF Research Database (Denmark)

    Li, Rui; Wen, Min; Salling, Kim Bang

    2015-01-01

    For the modern railways, maintenance is critical for ensuring safety, train punctuality and overall capacity utilization. The cost of railway maintenance in Europe is high, on average between 30,000 – 100,000 Euro per km per year [1]. Aiming to reduce such maintenance expenditure, this paper...

  16. White Collar Criminality: A Prediction Model

    Science.gov (United States)

    1991-01-01

    social disorganization, and conflict; sociological criminology takes a critical stance toward the society itself as generator of criminal conduct (Hagan… According to the theory, the environment contaminates the individual by promoting the internalization of criminalistic patterns, or by failing to… (cited in Buikhuisen & Mednick, 1988, p. 9). Psychology can contribute to criminology research by the application of its theories, both the

  17. Predicting Critical Thinking Skills of University Students through Metacognitive Self-Regulation Skills and Chemistry Self-Efficacy

    Science.gov (United States)

    Uzuntiryaki-Kondakci, Esen; Capa-Aydin, Yesim

    2013-01-01

    This study aimed at examining the extent to which metacognitive self-regulation and chemistry self-efficacy predicted critical thinking. Three hundred sixty-five university students participated in the study. Data were collected using appropriate dimensions of Motivated Strategies for Learning Questionnaire and College Chemistry Self-Efficacy…

  18. Artificial pancreas: model predictive control design from clinical experience.

    Science.gov (United States)

    Toffanin, Chiara; Messori, Mirko; Di Palma, Federico; De Nicolao, Giuseppe; Cobelli, Claudio; Magni, Lalo

    2013-11-01

    The objective of this research is to develop a new artificial pancreas that takes into account the experience accumulated during more than 5000 h of closed-loop control in several clinical research centers. The main objective is to reduce the mean glucose value without exacerbating hypo phenomena. Controller design and in silico testing were performed on a new virtual population of the University of Virginia/Padova simulator. A new sensor model was developed based on the Comparison of Two Artificial Pancreas Systems for Closed-Loop Blood Glucose Control versus Open-Loop Control in Patients with Type 1 Diabetes trial AP@home data. The Kalman filter incorporated in the controller has been tuned using plasma and pump insulin as well as plasma and continuous glucose monitoring measures collected in clinical research centers. New constraints describing clinical knowledge not incorporated in the simulator but very critical in real patients (e.g., pump shutoff) have been introduced. The proposed model predictive control (MPC) is characterized by a low computational burden and memory requirements, and it is ready for an embedded implementation. The new MPC was tested with an intensive simulation study on the University of Virginia/Padova simulator equipped with a new virtual population. It was also used in some preliminary outpatient pilot trials. The obtained results are very promising in terms of mean glucose and number of patients in the critical zone of the control variability grid analysis. The proposed MPC improves on the performance of a previous controller already tested in several experiments in the AP@home and JDRF projects. This algorithm complemented with a safety supervision module is a significant step toward deploying artificial pancreases into outpatient environments for extended periods of time. © 2013 Diabetes Technology Society.
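The receding-horizon idea behind the MPC this record describes can be sketched with a deliberately tiny example: a scalar glucose model, a brute-force search over candidate insulin rates, and a non-negativity constraint standing in for pump shutoff. Every number and the model form are illustrative assumptions, not the identified patient models or the controller of the study.

```python
import numpy as np

# Toy scalar glucose model g[k+1] = g[k] + drift - k_ins * u[k]; all values
# are illustrative, not an identified patient model from the study.
K_INS, DRIFT, TARGET = 2.0, 5.0, 110.0   # insulin gain, glucose drift, setpoint
U_MAX, HORIZON = 4.0, 6                  # pump upper limit, prediction horizon

def mpc_step(g):
    """Pick the constant insulin rate over the horizon that minimizes predicted
    tracking cost; u >= 0 encodes the pump-shutoff lower constraint."""
    best_u, best_cost = 0.0, float("inf")
    for u in np.linspace(0.0, U_MAX, 41):
        gg, cost = g, 0.0
        for _ in range(HORIZON):
            gg = gg + DRIFT - K_INS * u
            cost += (gg - TARGET) ** 2 + 0.1 * u ** 2
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

g = 180.0
for _ in range(40):            # receding horizon: apply first move, re-plan
    g = g + DRIFT - K_INS * mpc_step(g)
print("glucose after 40 steps:", round(g, 1))
```

A real glucose MPC solves a structured quadratic program over a full input sequence and filters the state through a Kalman estimator; the brute-force scan here only illustrates the plan/apply/re-plan loop and the constraint handling.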

  19. Looking through the Critical Lens: The Global Learning Organisation Model

    Science.gov (United States)

    Akella, Devi

    2009-01-01

    This article reconceptualises the meaning of critical theory and its tools of emancipation and critique within the subjective content of cross-cultural literature, globalisation and learning organisation. The first part of the article reviews literature on globalisation and learning companies. The second part discusses the critical approach and…

  20. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Full Text Available Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with the problem of lower accuracy for the prediction, which causes costly maintenance. Although many researchers have developed some performance prediction models, the accuracy of prediction has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Then three models including the multivariate nonlinear regression (MNLR) model, artificial neural network (ANN) model, and Markov Chain (MC) model are tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when the data is limited, but it is based on visual inspections and not explicitly related to quantitative physical parameters. This paper then suggests that the further direction for developing the performance prediction model is incorporating the advantages and disadvantages of different models to obtain better accuracy.
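The Markov Chain approach mentioned in this record amounts to propagating a condition-state distribution through a transition matrix estimated from inspections. The matrix and state definitions below are illustrative assumptions, not calibrated pavement data.

```python
import numpy as np

# Hypothetical one-year transition matrix over four pavement condition states
# (1 = best, 4 = worst/absorbing); probabilities are illustrative only.
P = np.array([[0.80, 0.20, 0.00, 0.00],
              [0.00, 0.75, 0.25, 0.00],
              [0.00, 0.00, 0.70, 0.30],
              [0.00, 0.00, 0.00, 1.00]])

state = np.array([1.0, 0.0, 0.0, 0.0])   # brand-new pavement
for _ in range(10):
    state = state @ P                    # propagate the condition distribution

print(state.round(3))                    # share of sections in each state
```

This also shows the limitation the paper notes: the states come from visual ratings, so nothing in the matrix ties directly to physical faulting parameters.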

  1. Modelling critical degrees of saturation of porous building materials subjected to freezing

    DEFF Research Database (Denmark)

    Hansen, Ernst Jan De Place

    1996-01-01

    of SCR based on fracture mechanics and phase geometry of two-phase materials has been developed. The degradation is modelled as being caused by different eigenstrains of the pore phase and the solid phase when freezing, leading to stress concentrations and crack propagation. Simplifications are made......Frost resistance of porous materials can be characterized by the critical degree of saturation, SCR, and the actual degree of saturation, SACT. An experimental determination of SCR is very laborious and therefore only seldom used when testing frost resistance. A theoretical model for prediction...... to describe the development of stresses and the pore structure, because a mathematical description of the physical theories explaining the process of freezing of water in porous materials is lacking. Calculations are based on porosity, modulus of elasticity and tensile strength, and parameters characterizing...

  2. Quantifying and modelling the carbon sequestration capacity of seagrass meadows--a critical assessment.

    Science.gov (United States)

    Macreadie, P I; Baird, M E; Trevathan-Tackett, S M; Larkum, A W D; Ralph, P J

    2014-06-30

    Seagrasses are among the planet's most effective natural ecosystems for sequestering (capturing and storing) carbon (C); but if degraded, they could leak stored C into the atmosphere and accelerate global warming. Quantifying and modelling the C sequestration capacity is therefore critical for successfully managing seagrass ecosystems to maintain their substantial abatement potential. At present, there is no mechanism to support carbon financing linked to seagrass. For seagrasses to be recognised by the IPCC and the voluntary C market, standard stock assessment methodologies and inventories of seagrass C stocks are required. Developing accurate C budgets for seagrass meadows is indeed complex; we discuss these complexities, and, in addition, we review techniques and methodologies that will aid development of C budgets. We also consider a simple process-based data assimilation model for predicting how seagrasses will respond to future change, accompanied by a practical list of research priorities. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. A model to predict the beginning of the pollen season

    DEFF Research Database (Denmark)

    Toldam-Andersen, Torben Bo

    1991-01-01

    In order to predict the beginning of the pollen season, a model comprising the Utah phenoclimatography Chill Unit (CU) and ASYMCUR-Growing Degree Hour (GDH) submodels was used to predict the first bloom in Alnus, Ulmus and Betula. The model relates environmental temperatures to rest completion...... and bud development. As phenologic parameter 14 years of pollen counts were used. The observed dates for the beginning of the pollen seasons were defined from the pollen counts and compared with the model prediction. The CU and GDH submodels were used as: 1. A fixed day model, using only the GDH model...... for fruit trees are generally applicable, and give a reasonable description of the growth processes of other trees. This type of model can therefore be of value in predicting the start of the pollen season. The predicted dates were generally within 3-5 days of the observed. Finally the possibility of frost...
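The two-phase CU/GDH logic this record describes, accumulate chill until rest completion, then accumulate heat until bloom, can be sketched as below. The thresholds and the piecewise unit functions are simplified stand-ins, not the Utah model's actual tabulated values.

```python
# Two-phase sketch of the chill unit (CU) / growing degree hour (GDH) logic:
# accumulate chill until rest completion, then heat until predicted bloom.
# Thresholds and unit functions are simplified stand-ins, not the Utah model's.
CU_REQUIRED, GDH_REQUIRED, BASE_T = 800.0, 5000.0, 4.0

def chill_unit(temp_c):
    return 1.0 if 0.0 <= temp_c <= 7.2 else 0.0   # crude chill contribution/hour

def gdh(temp_c):
    return max(0.0, temp_c - BASE_T)              # degree hours above base temp

def predict_bloom_hour(hourly_temps):
    it = enumerate(hourly_temps)
    cu = 0.0
    for hour, t in it:
        cu += chill_unit(t)
        if cu >= CU_REQUIRED:         # rest completion reached
            break
    heat = 0.0
    for hour, t in it:                # continue from the following hour
        heat += gdh(t)
        if heat >= GDH_REQUIRED:      # predicted first bloom / pollen start
            return hour
    return None

# Synthetic record: 1000 h of 3 C winter, then 2000 h of 12 C spring.
bloom = predict_bloom_hour([3.0] * 1000 + [12.0] * 2000)
print("predicted bloom at hour", bloom)
```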

  4. Risk prediction model: Statistical and artificial neural network approach

    Science.gov (United States)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand the suitable approach, development and validation process of a risk prediction model. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was done. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, artificial neural network approaches to developing prediction models were more accurate than statistical approaches. However, currently only limited published literature discusses which approach is more accurate for risk prediction model development.

  5. Evaluation of the US Army fallout prediction model

    International Nuclear Information System (INIS)

    Pernick, A.; Levanon, I.

    1987-01-01

    The US Army fallout prediction method was evaluated against an advanced fallout prediction model--SIMFIC (Simplified Fallout Interpretive Code). The danger zone areas of the US Army method were found to be significantly greater (up to a factor of 8) than the areas of corresponding radiation hazard as predicted by SIMFIC. Nonetheless, because the US Army's method predicts danger zone lengths that are commonly shorter than the corresponding hot line distances of SIMFIC, the US Army's method is not reliably conservative

  6. Accurate Holdup Calculations with Predictive Modeling & Data Integration

    Energy Technology Data Exchange (ETDEWEB)

    Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States). Dept. of Nuclear Engineering; Cacuci, Dan [Univ. of South Carolina, Columbia, SC (United States). Dept. of Mechanical Engineering

    2017-04-03

    In facilities that process special nuclear material (SNM) it is important to account accurately for the fissile material that enters and leaves the plant. Although there are many stages and processes through which materials must be traced and measured, the focus of this project is material that is “held-up” in equipment, pipes, and ducts during normal operation and that can accumulate over time into significant quantities. Accurately estimating the holdup is essential for proper SNM accounting (vis-à-vis nuclear non-proliferation), criticality and radiation safety, waste management, and efficient plant operation. Usually it is not possible to directly measure the holdup quantity and location, so these must be inferred from measured radiation fields, primarily gamma and less frequently neutrons. Current methods to quantify holdup, i.e. Generalized Geometry Holdup (GGH), primarily rely on simple source configurations and crude radiation transport models aided by ad hoc correction factors. This project seeks an alternate method of performing measurement-based holdup calculations using a predictive model that employs state-of-the-art radiation transport codes capable of accurately simulating such situations. Inverse and data assimilation methods use the forward transport model to search for a source configuration that best matches the measured data and simultaneously provide an estimate of the level of confidence in the correctness of such configuration. In this work the holdup problem is re-interpreted as an inverse problem that is under-determined, hence may permit multiple solutions. A probabilistic approach is applied to solving the resulting inverse problem. This approach rates possible solutions according to their plausibility given the measurements and initial information. This is accomplished through the use of Bayes’ Theorem that resolves the issue of multiple solutions by giving an estimate of the probability of observing each possible solution. To use
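The probabilistic resolution of the under-determined inverse problem via Bayes' theorem can be sketched with a toy example: hypothetical candidate holdup configurations are ranked by their posterior probability given a noisy count-rate measurement. The forward model, count rates, and noise level below are illustrative assumptions, not values from the project.

```python
import numpy as np

# Hypothetical candidate holdup masses (grams of SNM) -- illustrative only.
candidates = np.array([0.0, 50.0, 100.0, 150.0, 200.0])

# Assumed forward model: expected detector count rate is linear in holdup mass.
def forward(mass_g, counts_per_gram=4.0, background=20.0):
    return background + counts_per_gram * mass_g

measured = 430.0   # observed count rate (illustrative)
sigma = 30.0       # assumed 1-sigma measurement noise

# Uniform prior over candidates; Gaussian likelihood from the forward model.
prior = np.full(len(candidates), 1.0 / len(candidates))
likelihood = np.exp(-0.5 * ((measured - forward(candidates)) / sigma) ** 2)

# Bayes' theorem: posterior proportional to likelihood x prior, normalised
# over all candidate solutions, giving each one a plausibility rating.
posterior = likelihood * prior
posterior /= posterior.sum()

best = candidates[np.argmax(posterior)]
```

The posterior assigns every candidate solution a probability, which is how the multiple-solution ambiguity of the inverse problem is resolved in this framework.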

  7. Comparative Evaluation of Some Crop Yield Prediction Models ...

    African Journals Online (AJOL)

    A computer program was adopted from the work of Hill et al. (1982) to calibrate and test three of the existing yield prediction models using tropical cowpea yield–weather data. The models tested were the Hanks model (first and second versions), the Stewart model (first and second versions) and the Hall–Butcher model. Three sets of ...

  8. Comparative Evaluation of Some Crop Yield Prediction Models ...

    African Journals Online (AJOL)

    (1982) to calibrate and test three of the existing yield prediction models using tropical cowpea yield–weather data. The models tested were the Hanks model (first and second versions), the Stewart model (first and second versions) and the Hall–Butcher model. Three sets of cowpea yield-water use and weather data were collected.

  9. Improvements of Critical Heat Flux Models Based on the Viscous Potential Flow Theory

    International Nuclear Information System (INIS)

    Kim, Byoung Jae; Lee, Jong Hyuk; Kim, Kyung Doo

    2014-01-01

    The absence of fluid viscosities in most existing models may be attributed to the fact that inviscid flow analyses are performed during model development. For example, the hydrodynamic theory and macrolayer dryout models rely on the Rayleigh-Taylor, Kelvin-Helmholtz, and capillary instabilities for inviscid fluids. However, as the viscosities of the two fluids become closer, neither can be neglected. Moreover, the gas viscosity cannot be neglected when the gas layer is thin. Nevertheless, previous studies neglected the viscous effect. Recently, Kim et al. showed that for the development of critical heat flux and minimum film boiling models, the Rayleigh-Taylor instability should be analyzed with a thin layer of viscous gas instead of a thick layer of inviscid gas. The decrease of the most unstable wavelength was shown to improve the prediction accuracy of critical heat flux models for various fluids, particularly at elevated pressures. In addition, the most dangerous wavelength and the most rapid growth rate for viscous thin films were shown to be applicable to the minimum heat flux condition. Kim et al. addressed only the most unstable wavelength in developing critical heat flux models. The critical heat flux is inversely proportional to the square root of the most unstable wavelength (Zuber; Guan et al.). Here, we note that the existing critical heat flux models make use of the Kelvin-Helmholtz instability of inviscid flows. The Kelvin-Helmholtz instability determines the maximum vapor escape velocity (Zuber) and the initial liquid macrolayer thickness (Haramura and Katto). Therefore, there is room to improve the prediction accuracy with the help of the Kelvin-Helmholtz instability of viscous fluids. The Kelvin-Helmholtz instability arises when different fluid layers are in relative motion. Usually, a uniform flow is considered in each fluid layer, allowing a velocity discontinuity at the interface. 
Therefore, in general, the

  10. Prediction of speech intelligibility based on an auditory preprocessing model

    DEFF Research Database (Denmark)

    Christiansen, Claus Forup Corlin; Pedersen, Michael Syskind; Dau, Torsten

    2010-01-01

    Classical speech intelligibility models, such as the speech transmission index (STI) and the speech intelligibility index (SII) are based on calculations on the physical acoustic signals. The present study predicts speech intelligibility by combining a psychoacoustically validated model of auditory...

  11. Modelling microbial interactions and food structure in predictive microbiology

    NARCIS (Netherlands)

    Malakar, P.K.

    2002-01-01

    Keywords: modelling, dynamic models, microbial interactions, diffusion, microgradients, colony growth, predictive microbiology.

    Growth response of microorganisms in foods is a complex process. Innovations in food production and preservation techniques have resulted in adoption of

  12. Ocean wave prediction using numerical and neural network models

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Prabaharan, N.

    This paper presents an overview of the development of the numerical wave prediction models and recently used neural networks for ocean wave hindcasting and forecasting. The numerical wave models express the physical concepts of the phenomena...

  13. A Prediction Model of the Capillary Pressure J-Function.

    Directory of Open Access Journals (Sweden)

    W S Xu

    Full Text Available The capillary pressure J-function is a dimensionless measure of the capillary pressure of a fluid in a porous medium. The function was derived based on a capillary bundle model. However, the dependence of the J-function on the saturation Sw is not well understood. A prediction model for it is presented based on a capillary pressure model, and the predicted J-function is a power function rather than an exponential or polynomial function. Relative permeability is calculated with the J-function prediction model, resulting in an easier calculation and results that are more representative.
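As a sketch of the idea, the dimensionless Leverett J-function and a power-law fit of the form J(Sw) = a·Sw^b can be written as follows; the saturation data and fitted coefficients are synthetic, not values from the paper.

```python
import numpy as np

# Leverett J-function: J(Sw) = Pc / (sigma * cos(theta)) * sqrt(k / phi),
# with Pc capillary pressure, sigma interfacial tension, theta contact
# angle, k permeability, phi porosity.
def leverett_j(pc, sigma, theta, k, phi):
    return pc / (sigma * np.cos(theta)) * np.sqrt(k / phi)

# Synthetic "measurements" generated from a known power law J = a * Sw**b.
sw = np.array([0.2, 0.3, 0.4, 0.5, 0.6, 0.8])
j = 0.5 * sw ** -1.2

# Fit the power law by linear regression in log-log space:
# log J = b * log Sw + log a.
b, log_a = np.polyfit(np.log(sw), np.log(j), 1)
a = np.exp(log_a)
```

Fitting in log-log space is what makes a power-law form convenient here: it reduces to an ordinary linear least-squares problem.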

  14. External validation of multivariable prediction models: a systematic review of methodological conduct and reporting

    Science.gov (United States)

    2014-01-01

    Background Before considering whether to use a multivariable (diagnostic or prognostic) prediction model, it is essential that its performance be evaluated in data that were not used to develop the model (referred to as external validation). We critically appraised the methodological conduct and reporting of external validation studies of multivariable prediction models. Methods We conducted a systematic review of articles describing some form of external validation of one or more multivariable prediction models indexed in PubMed core clinical journals published in 2010. Study data were extracted in duplicate on design, sample size, handling of missing data, reference to the original study developing the prediction models and predictive performance measures. Results 11,826 articles were identified and 78 were included for full review, which described the evaluation of 120 prediction models in participant data that were not used to develop the model. Thirty-three articles described both the development of a prediction model and an evaluation of its performance on a separate dataset, and 45 articles described only the evaluation of an existing published prediction model on another dataset. Fifty-seven percent of the prediction models were presented and evaluated as simplified scoring systems. Sixteen percent of articles failed to report the number of outcome events in the validation datasets. Fifty-four percent of studies made no explicit mention of missing data. Sixty-seven percent did not report evaluating model calibration whilst most studies evaluated model discrimination. It was often unclear whether the reported performance measures were for the full regression model or for the simplified models. Conclusions The vast majority of studies describing some form of external validation of a multivariable prediction model were poorly reported with key details frequently not presented. The validation studies were characterised by poor design, inappropriate handling

  15. Statistical model based gender prediction for targeted NGS clinical panels

    Directory of Open Access Journals (Sweden)

    Palani Kannan Kandavel

    2017-12-01

    The reference test dataset is used to test the model. Sensitivity in predicting gender is increased relative to the current “genotype composition in ChrX”-based approach. In addition, the prediction score given by the model can be used to evaluate the quality of a clinical dataset: a higher prediction score towards the respective gender indicates higher-quality sequenced data.

  16. New models of droplet deposition and entrainment for prediction of CHF in cylindrical rod bundles

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Haibin, E-mail: hb-zhang@xjtu.edu.cn [School of Chemical Engineering and Technology, Xi’an Jiaotong University, Xi’an 710049 (China); Department of Chemical Engineering, Imperial College, London SW7 2BY (United Kingdom); Hewitt, G.F. [Department of Chemical Engineering, Imperial College, London SW7 2BY (United Kingdom)

    2016-08-15

    Highlights: • New models of droplet deposition and entrainment in rod bundles are developed. • A new phenomenological model to predict the CHF in rod bundles is described. • The present model is well able to predict CHF in rod bundles. - Abstract: In this paper, we present a new set of models of droplet deposition and entrainment in cylindrical rod bundles based on the previously proposed model for annuli (effectively a “one-rod” bundle) (2016a). These models make it possible to evaluate the differences in the rates of droplet deposition and entrainment for the respective rods and for the outer tube by taking into account the geometrical characteristics of the rod bundles. Using these models, a phenomenological model to predict the CHF (critical heat flux) for upward annular flow in vertical rod bundles is described. The performance of the model is tested against the experimental data of Becker et al. (1964) for CHF in 3-rod and 7-rod bundles. These data include tests in which only the rods were heated and data for simultaneous uniform and non-uniform heating of the rods and the outer tube. It was shown that the CHFs predicted by the present model agree well with the experimental data and with the experimental observation that dryout occurred first on the outer rods in 7-rod bundles. It is expected that the methodology used will be generally applicable in the prediction of CHF in rod bundles.

  17. comparative analysis of two mathematical models for prediction

    African Journals Online (AJOL)

    Abstract. Mathematical modeling for the prediction of the compressive strength of sandcrete blocks was performed using statistical analysis of the sandcrete block data obtained from experimental work done in this study. The models used are Scheffe's and Osadebe's optimization theories to predict the compressive strength of ...

  18. Comparison of predictive models for the early diagnosis of diabetes

    NARCIS (Netherlands)

    M. Jahani (Meysam); M. Mahdavi (Mahdi)

    2016-01-01

    Objectives: This study develops neural network models to improve the prediction of diabetes using clinical and lifestyle characteristics. Prediction models were developed using a combination of approaches and concepts. Methods: We used memetic algorithms to update weights and to improve

  19. Testing and analysis of internal hardwood log defect prediction models

    Science.gov (United States)

    R. Edward. Thomas

    2011-01-01

    The severity and location of internal defects determine the quality and value of lumber sawn from hardwood logs. Models have been developed to predict the size and position of internal defects based on external defect indicator measurements. These models were shown to predict approximately 80% of all internal knots based on external knot indicators. However, the size...

  20. Hidden Markov Model for quantitative prediction of snowfall

    Indian Academy of Sciences (India)

    A Hidden Markov Model (HMM) has been developed for prediction of quantitative snowfall in Pir-Panjal and Great Himalayan mountain ranges of Indian Himalaya. The model predicts snowfall for two days in advance using daily recorded nine meteorological variables of past 20 winters from 1992–2012. There are six ...
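The forward algorithm at the heart of an HMM-based predictor can be sketched in a few lines. The two hidden states and the transition/emission probabilities below are illustrative toy values, not the nine-variable snowfall model of the paper.

```python
import numpy as np

# Toy 2-state HMM: hidden states ("dry", "snow"); observations are
# discretised readings indexed 0..1. All probabilities are illustrative.
start = np.array([0.7, 0.3])            # initial state distribution
trans = np.array([[0.8, 0.2],           # P(next state | current state)
                  [0.4, 0.6]])
emit = np.array([[0.9, 0.1],            # P(observation | state)
                 [0.3, 0.7]])

def forward(obs):
    """Forward algorithm: likelihood of an observation sequence under the HMM."""
    alpha = start * emit[:, obs[0]]     # initialise with first observation
    for o in obs[1:]:
        # Propagate through the transition matrix, then weight by emission.
        alpha = (alpha @ trans) * emit[:, o]
    return alpha.sum()

p = forward([0, 1, 1])                  # likelihood of a 3-day sequence
```

In a real snowfall application the observation alphabet would come from discretising the recorded meteorological variables, and the parameters would be estimated from the 20 winters of training data.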

  1. Bayesian variable order Markov models: Towards Bayesian predictive state representations

    NARCIS (Netherlands)

    Dimitrakakis, C.

    2009-01-01

    We present a Bayesian variable order Markov model that shares many similarities with predictive state representations. The resulting models are compact and much easier to specify and learn than classical predictive state representations. Moreover, we show that they significantly outperform a more

  2. Demonstrating the improvement of predictive maturity of a computational model

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M [Los Alamos National Laboratory; Unal, Cetin [Los Alamos National Laboratory; Atamturktur, Huriye S [CLEMSON UNIV.

    2010-01-01

    We demonstrate an improvement of predictive capability brought to a non-linear material model using a combination of test data, sensitivity analysis, uncertainty quantification, and calibration. A model that captures increasingly complicated phenomena, such as plasticity, temperature and strain rate effects, is analyzed. Predictive maturity is defined, here, as the accuracy of the model in predicting multiple Hopkinson bar experiments. A statistical discrepancy quantifies the systematic disagreement (bias) between measurements and predictions. Our hypothesis is that improving the predictive capability of a model should translate into better agreement between measurements and predictions. This agreement, in turn, should lead to a smaller discrepancy. We have recently proposed to use discrepancy and coverage, that is, the extent to which the physical experiments used for calibration populate the regime of applicability of the model, as the basis for defining a Predictive Maturity Index (PMI). It was shown that predictive maturity could be improved when additional physical tests are made available to increase coverage of the regime of applicability. This contribution illustrates how the PMI changes as 'better' physics are implemented in the model. The application is the non-linear Preston-Tonks-Wallace (PTW) strength model applied to beryllium metal. We demonstrate that our framework tracks the evolution of maturity of the PTW model. Robustness of the PMI with respect to the selection of coefficients needed in its definition is also studied.

  3. Refining the Committee Approach and Uncertainty Prediction in Hydrological Modelling

    NARCIS (Netherlands)

    Kayastha, N.

    2014-01-01

    Due to the complexity of hydrological systems, a single model may be unable to capture the full range of a catchment response and accurately predict the streamflows. The multi-modelling approach opens up possibilities for handling such difficulties and helps improve the predictive capability of

  4. Refining the committee approach and uncertainty prediction in hydrological modelling

    NARCIS (Netherlands)

    Kayastha, N.

    2014-01-01

    Due to the complexity of hydrological systems, a single model may be unable to capture the full range of a catchment response and accurately predict the streamflows. The multi-modelling approach opens up possibilities for handling such difficulties and helps improve the predictive capability of

  5. Wind turbine control and model predictive control for uncertain systems

    DEFF Research Database (Denmark)

    Thomsen, Sven Creutz

    as disturbance models for controller design. The theoretical study deals with Model Predictive Control (MPC). MPC is an optimal control method which is characterized by the use of a receding prediction horizon. MPC has risen in popularity due to its inherent ability to systematically account for time...

  6. Hidden Markov Model for quantitative prediction of snowfall and ...

    Indian Academy of Sciences (India)

    A Hidden Markov Model (HMM) has been developed for prediction of quantitative snowfall in Pir-Panjal and Great Himalayan mountain ranges of Indian Himalaya. The model predicts snowfall for two days in advance using daily recorded nine meteorological variables of past 20 winters from 1992–2012. There are six ...

  7. Model predictive control of a 3-DOF helicopter system using ...

    African Journals Online (AJOL)

    ... by simulation, and its performance is compared with that achieved by linear model predictive control (LMPC). Keywords: nonlinear systems, helicopter dynamics, MIMO systems, model predictive control, successive linearization. International Journal of Engineering, Science and Technology, Vol. 2, No. 10, 2010, pp. 9-19 ...

  8. Models for predicting fuel consumption in sagebrush-dominated ecosystems

    Science.gov (United States)

    Clinton S. Wright

    2013-01-01

    Fuel consumption predictions are necessary to accurately estimate or model fire effects, including pollutant emissions during wildland fires. Fuel and environmental measurements on a series of operational prescribed fires were used to develop empirical models for predicting fuel consumption in big sagebrush (Artemisia tridentate Nutt.) ecosystems....

  9. Comparative Analysis of Two Mathematical Models for Prediction of ...

    African Journals Online (AJOL)

    Mathematical modeling for the prediction of the compressive strength of sandcrete blocks was performed using statistical analysis of the sandcrete block data obtained from experimental work done in this study. The models used are Scheffe's and Osadebe's optimization theories to predict the compressive strength of sandcrete ...

  10. A mathematical model for predicting earthquake occurrence ...

    African Journals Online (AJOL)

    We consider the continental crust under damage. We use observed microseism results from many seismic stations around the world to study the time series of continental crust activity with a view to predicting the possible time of occurrence of an earthquake. We consider microseism time series ...

  11. Model for predicting the injury severity score.

    Science.gov (United States)

    Hagiwara, Shuichi; Oshima, Kiyohiro; Murata, Masato; Kaneko, Minoru; Aoki, Makoto; Kanbe, Masahiko; Nakamura, Takuro; Ohyama, Yoshio; Tamura, Jun'ichi

    2015-07-01

    To determine the formula that predicts the injury severity score from parameters that are obtained in the emergency department at arrival. We reviewed the medical records of trauma patients who were transferred to the emergency department of Gunma University Hospital between January 2010 and December 2010. The injury severity score, age, mean blood pressure, heart rate, Glasgow coma scale, hemoglobin, hematocrit, red blood cell count, platelet count, fibrinogen, international normalized ratio of prothrombin time, activated partial thromboplastin time, and fibrin degradation products were examined in those patients on arrival. To determine the formula that predicts the injury severity score, multiple linear regression analysis was carried out. The injury severity score was set as the dependent variable, and the other parameters were set as candidate objective variables. IBM SPSS Statistics 20 was used for the statistical analysis. Statistical significance was set at P < 0.05; the Durbin–Watson ratio was 2.200. A formula for predicting the injury severity score in trauma patients was developed with ordinary parameters such as fibrin degradation products and mean blood pressure. This formula is useful because we can predict the injury severity score easily in the emergency department.
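A minimal sketch of the multiple-linear-regression approach described above, fitted by ordinary least squares. The data are synthetic (not the Gunma University cohort), only two of the candidate predictors are used, and the coefficients are illustrative, not the published formula.

```python
import numpy as np

# Synthetic cohort: ISS generated as b0 + b1*FDP + b2*MBP plus noise.
rng = np.random.default_rng(0)
n = 50
fdp = rng.uniform(1, 100, n)       # fibrin degradation products (illustrative units)
mbp = rng.uniform(60, 120, n)      # mean blood pressure (mmHg)
iss = 5 + 0.3 * fdp - 0.1 * mbp + rng.normal(0, 1, n)

# Design matrix with an intercept column; solve the least-squares problem.
X = np.column_stack([np.ones(n), fdp, mbp])
coef, *_ = np.linalg.lstsq(X, iss, rcond=None)

predicted_iss = X @ coef           # in-sample ISS predictions
```

In the actual study, variable selection among all candidate predictors and a Durbin–Watson check on the residuals would accompany this fit.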

  12. Team Collaboration for Command and Control: A Critical Thinking Model

    National Research Council Canada - National Science Library

    Freeman, Jared T; Serfaty, Daniel

    2002-01-01

    ...: team critical thinking. The framework will be used to understand how team members critique and refine team performance, develop measures of performance, and eventually to create training and decision aids that support...

  13. Critical Path Modeling: University Planning in an Urban Context.

    Science.gov (United States)

    Vasse, William W.; And Others

    1983-01-01

    The experiences of the University of Michigan in developing its Flint campus since the mid-1970s and the use of the Critical Path Method are described. The usefulness of the method during several phases of development is highlighted. (MSE)

  14. Knowledge to Predict Pathogens: Legionella pneumophila Lifecycle Critical Review Part I Uptake into Host Cells

    Directory of Open Access Journals (Sweden)

    Alexis L. Mraz

    2018-01-01

    Full Text Available Legionella pneumophila (L. pneumophila) is an infectious disease agent of increasing concern due to its ability to cause Legionnaires’ Disease, a severe community pneumonia, and the difficulty in controlling it within water systems. L. pneumophila thrives within the biofilm of premise plumbing systems, utilizing protozoan hosts for protection from disinfectants and other environmental stressors. While there is a great deal of information regarding how L. pneumophila interacts with protozoa and human macrophages (the host for human infection), the ability to use these data in a model to predict the concentration of L. pneumophila in a water system has not been established. The lifecycle of L. pneumophila within host cells involves three processes: uptake, growth, and egression from the host cell. The complexity of these three processes would risk conflation of the concepts; therefore, this review details the available information regarding how L. pneumophila invades host cells (uptake) within the context of the data needed to model this process, while a second review will focus on growth and egression. The overall intent of both reviews is to detail how the steps in L. pneumophila’s lifecycle in drinking water systems affect human infectivity, as opposed to detailing just its growth and persistence in drinking water systems.

  15. Degradation Prediction Model Based on a Neural Network with Dynamic Windows

    Science.gov (United States)

    Zhang, Xinghui; Xiao, Lei; Kang, Jianshe

    2015-01-01

    Tracking the degradation of mechanical components is critical for effective maintenance decision making. Remaining useful life (RUL) estimation is a widely used form of degradation prediction. RUL prediction methods for cases where enough run-to-failure condition monitoring data (i.e., from normal operation to failure) are available have been fully researched, but for some high-reliability components it is very difficult to collect such data; only a certain number of condition indicators over a certain period can be used to estimate RUL. In addition, some existing prediction methods have poor extrapolability, which blocks RUL estimation: the predicted value converges to a constant or fluctuates within a certain range. Moreover, fluctuating condition features also degrade prediction. To address these dilemmas, this paper proposes an RUL prediction model based on a neural network with dynamic windows. The model consists of three steps: window size determination by increasing rate, change point detection, and rolling prediction. The proposed method has two dominant strengths. One is that it does not need to assume that the degradation trajectory follows a certain distribution. The other is that it can adapt to variation of the degradation indicators, which greatly benefits RUL prediction. Finally, the performance of the proposed RUL prediction model is validated by real field data and simulation data. PMID:25806873
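Two of the three steps above (window sizing by increasing rate, and rolling prediction) can be illustrated with a minimal sketch in which the paper's neural network is replaced by simple linear extrapolation for brevity; the thresholds, window sizes, and degradation data are all illustrative assumptions.

```python
import numpy as np

def choose_window(signal, rate_threshold=0.05, min_win=3, max_win=10):
    """Dynamic window: shrink the window when the indicator rises quickly."""
    recent_rate = abs(signal[-1] - signal[-2])
    return min_win if recent_rate > rate_threshold else max_win

def rolling_predict(signal, steps=5):
    """Rolling prediction: re-fit on a dynamic window, feed each forecast back in."""
    history = list(signal)
    preds = []
    for _ in range(steps):
        win = choose_window(np.array(history))
        y = np.array(history[-win:])
        x = np.arange(win)
        # Stand-in for the paper's neural network: a linear trend fit.
        slope, intercept = np.polyfit(x, y, 1)
        nxt = slope * win + intercept        # one-step-ahead extrapolation
        preds.append(nxt)
        history.append(nxt)                  # rolling: prediction re-enters history
    return preds

degradation = np.linspace(0.0, 0.2, 21)      # slowly growing indicator (synthetic)
future = rolling_predict(degradation, steps=3)
```

RUL would then be estimated as the number of rolling steps until the predicted indicator crosses a failure threshold.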

  16. Econometric models for predicting confusion crop ratios

    Science.gov (United States)

    Umberger, D. E.; Proctor, M. H.; Clark, J. E.; Eisgruber, L. M.; Braschler, C. B. (Principal Investigator)

    1979-01-01

    Results for both the United States and Canada show that econometric models can provide estimates of confusion crop ratios that are more accurate than historical ratios. Whether these models can support the LACIE 90/90 accuracy criterion is uncertain. In the United States, experimenting with additional model formulations could provide improved models in some CRDs, particularly in winter wheat. Improved models may also be possible for the Canadian CDs. The more aggregated province/state models outperformed individual CD/CRD models. This result was expected partly because acreage statistics are based on sampling procedures, and the sampling precision declines from the province/state to the CD/CRD level. Declining sampling precision and the need to substitute province/state data for the CD/CRD data introduced measurement error into the CD/CRD models.

  17. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models 1: repeating earthquakes

    Science.gov (United States)

    Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki

    2012-01-01

    The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments, we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events are predicted quite well by fixed slip and fixed recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.

  18. Adding propensity scores to pure prediction models fails to improve predictive performance

    Directory of Open Access Journals (Sweden)

    Amy S. Nowacki

    2013-08-01

    Full Text Available Background. Propensity score usage seems to be growing in popularity, leading researchers to question the possible role of propensity scores in prediction modeling, despite the lack of a theoretical rationale. It is suspected that such requests are due to the lack of differentiation regarding the goals of predictive modeling versus causal inference modeling. Therefore, the purpose of this study is to formally examine the effect of propensity scores on predictive performance. Our hypothesis is that a multivariable regression model that adjusts for all covariates will perform as well as or better than those models utilizing propensity scores with respect to model discrimination and calibration. Methods. The most commonly encountered statistical scenarios for medical prediction (logistic and proportional hazards regression) were used to investigate this research question. Random cross-validation was performed 500 times to correct for optimism. The multivariable regression models adjusting for all covariates were compared with models that included adjustment for or weighting with the propensity scores. The methods were compared based on three predictive performance measures: (1) concordance indices; (2) Brier scores; and (3) calibration curves. Results. Multivariable models adjusting for all covariates had the highest average concordance index, the lowest average Brier score, and the best calibration. Propensity score adjustment and inverse probability weighting models without adjustment for all covariates performed worse than full models and failed to improve predictive performance with full covariate adjustment. Conclusion. Propensity score techniques did not improve prediction performance measures beyond multivariable adjustment. Propensity scores are not recommended if the analytical goal is pure prediction modeling.
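The concordance (c) index used as the first performance measure can be computed directly: it is the probability that, for a random event/non-event pair, the model scores the event higher. The scores and outcomes below are illustrative toy values.

```python
import numpy as np

def c_index(scores, outcomes):
    """Concordance index for binary outcomes (ties counted as half)."""
    scores, outcomes = np.asarray(scores), np.asarray(outcomes)
    pos = scores[outcomes == 1]            # predicted scores of events
    neg = scores[outcomes == 0]            # predicted scores of non-events
    pairs = len(pos) * len(neg)
    concordant = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (concordant + 0.5 * ties) / pairs

scores = [0.9, 0.8, 0.3, 0.6, 0.2, 0.4]    # model risk scores (illustrative)
outcomes = [1, 1, 0, 0, 1, 0]              # observed events
auc = c_index(scores, outcomes)
```

Comparing the full multivariable model and the propensity-score variants then amounts to computing this index for each model's cross-validated scores.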

  19. PEEX Modelling Platform for Seamless Environmental Prediction

    Science.gov (United States)

    Baklanov, Alexander; Mahura, Alexander; Arnold, Stephen; Makkonen, Risto; Petäjä, Tuukka; Kerminen, Veli-Matti; Lappalainen, Hanna K.; Ezau, Igor; Nuterman, Roman; Zhang, Wen; Penenko, Alexey; Gordov, Evgeny; Zilitinkevich, Sergej; Kulmala, Markku

    2017-04-01

    The Pan-Eurasian EXperiment (PEEX) is a multidisciplinary, multi-scale research programme started in 2012 and aimed at resolving the major uncertainties in Earth System Science and global sustainability issues concerning the Arctic, the boreal Northern Eurasian regions, and China. Such challenges include climate change, air quality, biodiversity loss, chemicalization, food supply, and the use of natural resources by mining, industry, energy production and transport. The research infrastructure introduces the current state-of-the-art modeling platform and observation systems in the Pan-Eurasian region and presents the future baselines for coherent and coordinated research infrastructures in the PEEX domain. The PEEX modeling platform is characterized by a complex, seamless, integrated Earth System Modeling (ESM) approach, in combination with specific models of different processes and elements of the system acting on different temporal and spatial scales. An ensemble approach is taken to the integration of modeling results from different models, participants and countries. PEEX utilizes the full potential of a hierarchy of models: scenario analysis, inverse modeling, and modeling based on measurement needs and processes. The models are validated and constrained by available in-situ and remote sensing data of various spatial and temporal scales using data assimilation and top-down modeling. The analyses of the anticipated large volumes of data produced by available models and sensors will be supported by a dedicated virtual research environment developed for these purposes.

  20. Prediction of rodent carcinogenic potential of naturally occurring chemicals in the human diet using high-throughput QSAR predictive modeling

    International Nuclear Information System (INIS)

    Valerio, Luis G.; Arvidson, Kirk B.; Chanderbhan, Ronald F.; Contrera, Joseph F.

    2007-01-01

    Consistent with the U.S. Food and Drug Administration (FDA) Critical Path Initiative, predictive toxicology software programs employing quantitative structure-activity relationship (QSAR) models are currently under evaluation for regulatory risk assessment and scientific decision support for highly sensitive endpoints such as carcinogenicity, mutagenicity and reproductive toxicity. At the FDA's Center for Food Safety and Applied Nutrition's Office of Food Additive Safety and the Center for Drug Evaluation and Research's Informatics and Computational Safety Analysis Staff (ICSAS), the use of computational SAR tools for both qualitative and quantitative risk assessment applications is being developed and evaluated. One tool of current interest is MDL-QSAR predictive discriminant analysis modeling of rodent carcinogenicity, which has been previously evaluated for pharmaceutical applications by the FDA ICSAS. The study described in this paper aims to evaluate the utility of this software to estimate the carcinogenic potential of small, organic, naturally occurring chemicals found in the human diet. In addition, a group of 19 known synthetic dietary constituents that were positive in rodent carcinogenicity studies served as a control group. In the test group of naturally occurring chemicals, 101 were found to be suitable for predictive modeling using this software's discriminant analysis modeling approach. Predictions performed on these compounds were compared to published experimental evidence of each compound's carcinogenic potential. Experimental evidence included relevant toxicological studies such as rodent cancer bioassays, rodent anti-carcinogenicity studies, genotoxicity studies, and the presence of chemical structural alerts. Statistical indices of predictive performance were calculated to assess the utility of the predictive modeling method. Results revealed good predictive performance using this software's rodent carcinogenicity module of over 1200 chemicals

  1. Models Predicting Success of Infertility Treatment: A Systematic Review

    Science.gov (United States)

    Zarinara, Alireza; Zeraati, Hojjat; Kamali, Koorosh; Mohammad, Kazem; Shahnazari, Parisa; Akhondi, Mohammad Mehdi

    2016-01-01

    Background: Infertile couples are faced with problems that affect their marital life. Infertility treatment is expensive, time consuming and sometimes simply not possible. Prediction models for infertility treatment have been proposed, and prediction of treatment success is a new field in infertility treatment. Because prediction of treatment success is a new need for infertile couples, this paper reviewed previous studies to form a general picture of the models' applicability. Methods: This study was conducted as a systematic review at Avicenna Research Institute in 2015. Six databases were searched based on WHO definitions and MeSH key words. Papers about prediction models in infertility were evaluated. Results: Eighty-one papers were eligible for the study. Papers covered the years after 1986, and studies were designed both retrospectively and prospectively. IVF prediction models account for the largest share of the papers. The most common predictors were age, duration of infertility, and ovarian and tubal problems. Conclusion: A prediction model can be clinically applied if it can be statistically evaluated and has good validation for treatment success. To achieve better results, physicians' and couples' estimates of the treatment success rate should be based on history, examination and clinical tests. Models must be checked for their theoretical approach and appropriately validated. The advantages of applying prediction models are reduced cost and time, avoiding painful treatment for patients, assessment of the treatment approach for physicians, and support for decision making by health managers. The selection of the approach for designing and using these models is therefore inevitable. PMID:27141461

  2. Prediction of critical heat flux for water in uniformly heated vertical ...

    African Journals Online (AJOL)

    critical heat flux for the forced convective boiling in uniformly heated vertical tubes, Int. J. Heat Mass Transfer, Vol. 27, 1641–1648. [5] Whalley P.B., Hutchinson P. & Hewitt G.F., 1974. The calculation of critical heat flux in forced convective boiling, Proceedings of the Fifth International Heat Transfer Conference, Tokyo, 290–294.

  3. Quantitative research on critical thinking and predicting nursing students' NCLEX-RN performance.

    Science.gov (United States)

    Romeo, Elizabeth M

    2010-07-01

    The concept of critical thinking has been influential in several disciplines. Both education and nursing in general have been attempting to define, teach, and measure this concept for decades. Nurse educators realize that critical thinking is the cornerstone of the objectives and goals for nursing students. The purpose of this article is to review and analyze quantitative research findings relevant to the measurement of critical thinking abilities and skills in undergraduate nursing students and the usefulness of critical thinking as a predictor of National Council Licensure Examination-Registered Nurse (NCLEX-RN) performance. The specific issues that this integrative review examined include assessment and analysis of the theoretical and operational definitions of critical thinking, theoretical frameworks used to guide the studies, instruments used to evaluate critical thinking skills and abilities, and the role of critical thinking as a predictor of NCLEX-RN outcomes. A list of key assumptions related to critical thinking was formulated. The limitations and gaps in the literature were identified, as well as the types of future research needed in this arena. Copyright 2010, SLACK Incorporated.

  4. Towards a generalized energy prediction model for machine tools.

    Science.gov (United States)

    Bhinge, Raunak; Park, Jinkyoo; Law, Kincho H; Dornfeld, David A; Helu, Moneer; Rachuri, Sudarsan

    2017-04-01

    Energy prediction of machine tools can deliver many advantages to a manufacturing enterprise, ranging from energy-efficient process planning to machine tool monitoring. Physics-based, energy prediction models have been proposed in the past to understand the energy usage pattern of a machine tool. However, uncertainties in both the machine and the operating environment make it difficult to predict the energy consumption of the target machine reliably. Taking advantage of the opportunity to collect extensive, contextual, energy-consumption data, we discuss a data-driven approach to develop an energy prediction model of a machine tool in this paper. First, we present a methodology that can efficiently and effectively collect and process data extracted from a machine tool and its sensors. We then present a data-driven model that can be used to predict the energy consumption of the machine tool for machining a generic part. Specifically, we use Gaussian Process (GP) Regression, a non-parametric machine-learning technique, to develop the prediction model. The energy prediction model is then generalized over multiple process parameters and operations. Finally, we apply this generalized model with a method to assess uncertainty intervals to predict the energy consumed to machine any part using a Mori Seiki NVD1500 machine tool. Furthermore, the same model can be used during process planning to optimize the energy-efficiency of a machining process.
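A minimal sketch of the data-driven idea: Gaussian Process regression fitted to machining process parameters, returning both a predictive mean and a standard deviation for uncertainty intervals. The feature names and synthetic data are assumptions for illustration, not the paper's Mori Seiki NVD1500 dataset.

```python
# Sketch: GP regression for energy prediction with uncertainty estimates.
# The "process parameters" and the energy function are invented stand-ins.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
# hypothetical parameters: e.g. feed rate, spindle speed, depth of cut
X = rng.uniform(0.1, 1.0, size=(80, 3))
energy = 5.0 * X[:, 0] + 2.0 * X[:, 1] ** 2 + X[:, 2] + rng.normal(0.0, 0.1, 80)

# WhiteKernel lets the GP estimate the measurement-noise level itself
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X[:60], energy[:60])

# predictive mean plus per-point standard deviation (uncertainty interval)
mean, std = gp.predict(X[60:], return_std=True)
print(np.round(mean[:3], 2), np.round(std[:3], 2))
```

The non-parametric kernel form is what lets one model generalize over multiple process parameters, as the abstract describes, without committing to a physics-based functional form.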

  5. Poisson Mixture Regression Models for Heart Disease Prediction

    Science.gov (United States)

    Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary generalized linear Poisson regression model, due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease component-wise given the available clusters. It is deduced that heart disease prediction can be done effectively by identifying the major risks component-wise using a Poisson mixture regression model. PMID:27999611
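The clustering idea behind the mixture can be sketched numerically: an EM loop fits a two-component Poisson mixture to counts and assigns each individual to a low- or high-rate component. This is a bare illustration on invented data; it omits the regression and concomitant-variable parts of the paper's models.

```python
# Sketch: EM for a two-component Poisson mixture (rates and data invented).
import numpy as np

rng = np.random.default_rng(2)
# synthetic event counts: a low-rate and a high-rate subpopulation
y = np.concatenate([rng.poisson(1.0, 300), rng.poisson(6.0, 200)])

w = np.array([0.5, 0.5])      # mixing weights (initial guess)
lam = np.array([0.5, 4.0])    # component rates (initial guess)
for _ in range(200):
    # E-step: responsibilities (the log-factorial term cancels across components)
    logp = np.log(w) + y[:, None] * np.log(lam) - lam
    r = np.exp(logp - logp.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights and rates from the responsibilities
    w = r.mean(axis=0)
    lam = (r * y[:, None]).sum(axis=0) / r.sum(axis=0)

cluster = r.argmax(axis=1)    # low- vs high-risk assignment per individual
print(np.round(np.sort(lam), 2), np.round(w, 2))
```

The fitted rates recover the two generating intensities approximately, which is the "cluster into high or low risk, then predict component-wise" behaviour the abstract attributes to the mixture model.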

  6. Comparison of Predictive Models for the Early Diagnosis of Diabetes.

    Science.gov (United States)

    Jahani, Meysam; Mahdavi, Mahdi

    2016-04-01

    This study develops neural network models to improve the prediction of diabetes using clinical and lifestyle characteristics. Prediction models were developed using a combination of approaches and concepts. We used memetic algorithms to update weights and to improve the prediction accuracy of the models. In the first step, optimum values for neural network parameters such as the momentum rate, transfer function, and error function were obtained through trial and error and based on the results of previous studies. In the second step, the optimum parameters were applied to memetic algorithms in order to improve the accuracy of prediction. This preliminary analysis showed that the accuracy of the neural networks is 88%. In the third step, the accuracy of the neural network models was improved using a memetic algorithm, and the resulting model was compared with a logistic regression model using a confusion matrix and receiver operating characteristic (ROC) curve. The memetic algorithm improved the accuracy from 88.0% to 93.2%. We also found that the memetic algorithm had higher accuracy than the genetic algorithm model and the regression model. Among the models, the regression model has the lowest accuracy. For the memetic algorithm model, the sensitivity, specificity, positive predictive value, and negative predictive value are 96.2%, 95.3%, 93.8%, and 92.4%, respectively, and the area under the ROC curve is 0.958. The results of this study provide a basis for designing a Decision Support System for risk management and planning of care for individuals at risk of diabetes.
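The four reported quantities all come straight from a 2x2 confusion matrix; a small helper, assuming binary 0/1 labels, shows the arithmetic. The toy label vectors are purely illustrative.

```python
# Sketch: sensitivity, specificity, PPV and NPV from a confusion matrix.
import numpy as np

def confusion_metrics(y_true, y_pred):
    """Compute the four standard 2x2 confusion-matrix metrics."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

m = confusion_metrics([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
print(m)
```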

  7. Applications of modeling in polymer-property prediction

    Science.gov (United States)

    Case, F. H.

    1996-08-01

    A number of molecular modeling techniques have been applied to the prediction of polymer properties and behavior. Five examples illustrate the range of methodologies used. A simple atomistic simulation of small polymer fragments is used to estimate drug compatibility with a polymer matrix. The analysis of molecular dynamics results from a more complex model of a swollen hydrogel system is used to study gas diffusion in contact lenses. Statistical mechanics is used to predict conformation-dependent properties; an example is the prediction of liquid-crystal formation. The effect of the molecular weight distribution on phase separation in polyalkanes is predicted using thermodynamic models. In some cases, the properties of interest cannot be directly predicted using simulation methods or polymer theory. Correlation methods may be used to bridge the gap between molecular structure and macroscopic properties. The final example shows how connectivity-indices-based quantitative structure-property relationships were used to predict properties for candidate polyimides in an electronics application.

  8. Artificial Neural Network Model for Predicting Compressive

    OpenAIRE

    Salim T. Yousif; Salwa M. Abdullah

    2013-01-01

    Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents an effort to apply neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum...

  9. An application of liquid sublayer dryout mechanism to the prediction of critical heat flux under low pressure and low velocity conditions in round tubes

    International Nuclear Information System (INIS)

    Lee, Kwang-Won; Yang, Jae-Young; Baik, Se-Jin

    1997-01-01

    Based on several experimental evidences for nucleate boiling in the annular film and the existence of a residual liquid film flow rate at the critical heat flux (CHF) location, the liquid sublayer dryout (LSD) mechanism under the annular film is first introduced to evaluate CHF data at low pressure and low velocity (LPLV) conditions, which would not be predicted by a normal annular film dryout (AFD) model. In this study, the CHF occurrence due to annular film separation or breakdown is phenomenologically modelled by applying the LSD mechanism to this situation. In this LSD mechanism, the liquid sublayer thickness, the incoming liquid velocity to the liquid sublayer, and the axial distance from the onset of annular flow to the CHF location are used as the phenomena-controlling parameters. From the model validation on 1406 CHF data points ranging over P = 0.1-2 MPa, G = 4-499 kg m⁻² s⁻¹, and L/D = 4-402, most of the CHF data (more than 1000 points) are predicted within ±30% error bounds by the LSD mechanism. However, some calculated results in which the critical quality is less than 0.4 are considerably overestimated by this mechanism. These overpredictions seem to be caused by inadequate CHF mechanism classification criteria and an insufficient consideration of the flow instability effect on CHF. Further studies on a new classification criterion screening the CHF data affected by flow instabilities and a new bubble detachment model for LPLV conditions are needed to improve the model accuracy. (author)

  10. Efficient Actor-Critic Algorithm with Hierarchical Model Learning and Planning

    Science.gov (United States)

    Fu, QiMing

    2016-01-01

    To improve the convergence rate and the sample efficiency, two efficient learning methods AC-HMLP and RAC-HMLP (AC-HMLP with ℓ2-regularization) are proposed by combining actor-critic algorithm with hierarchical model learning and planning. The hierarchical models consisting of the local and the global models, which are learned at the same time during learning of the value function and the policy, are approximated by local linear regression (LLR) and linear function approximation (LFA), respectively. Both the local model and the global model are applied to generate samples for planning; the former is used only if the state-prediction error does not surpass the threshold at each time step, while the latter is utilized at the end of each episode. The purpose of taking both models is to improve the sample efficiency and accelerate the convergence rate of the whole algorithm through fully utilizing the local and global information. Experimentally, AC-HMLP and RAC-HMLP are compared with three representative algorithms on two Reinforcement Learning (RL) benchmark problems. The results demonstrate that they perform best in terms of convergence rate and sample efficiency. PMID:27795704

  11. Efficient Actor-Critic Algorithm with Hierarchical Model Learning and Planning

    Directory of Open Access Journals (Sweden)

    Shan Zhong

    2016-01-01

    Full Text Available To improve the convergence rate and the sample efficiency, two efficient learning methods AC-HMLP and RAC-HMLP (AC-HMLP with ℓ2-regularization) are proposed by combining actor-critic algorithm with hierarchical model learning and planning. The hierarchical models consisting of the local and the global models, which are learned at the same time during learning of the value function and the policy, are approximated by local linear regression (LLR) and linear function approximation (LFA), respectively. Both the local model and the global model are applied to generate samples for planning; the former is used only if the state-prediction error does not surpass the threshold at each time step, while the latter is utilized at the end of each episode. The purpose of taking both models is to improve the sample efficiency and accelerate the convergence rate of the whole algorithm through fully utilizing the local and global information. Experimentally, AC-HMLP and RAC-HMLP are compared with three representative algorithms on two Reinforcement Learning (RL) benchmark problems. The results demonstrate that they perform best in terms of convergence rate and sample efficiency.

  12. Prediction of hourly solar radiation with multi-model framework

    International Nuclear Information System (INIS)

    Wu, Ji; Chan, Chee Keong

    2013-01-01

    Highlights: • A novel approach to predict solar radiation through the use of clustering paradigms. • Development of prediction models based on the intrinsic pattern observed in each cluster. • Prediction based on proper clustering and selection of model on current time provides better results than other methods. • Experiments were conducted on actual solar radiation data obtained from a weather station in Singapore. - Abstract: In this paper, a novel multi-model prediction framework for prediction of solar radiation is proposed. The framework started with the assumption that there are several patterns embedded in the solar radiation series. To extract the underlying pattern, the solar radiation series is first segmented into smaller subsequences, and the subsequences are further grouped into different clusters. For each cluster, an appropriate prediction model is trained. Hence a procedure for pattern identification is developed to identify the proper pattern that fits the current period. Based on this pattern, the corresponding prediction model is applied to obtain the prediction value. The prediction result of the proposed framework is then compared to other techniques. It is shown that the proposed framework provides superior performance as compared to others
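The cluster-then-predict framework can be sketched: segment the series into fixed-length subsequences, group them with k-means, train one simple model per cluster, and predict each new window with its cluster's model. The window length, number of clusters, per-cluster model, and synthetic series are all assumptions, not the paper's setup.

```python
# Sketch of a multi-model framework: cluster subsequences, one model each.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
# synthetic "radiation-like" periodic series with noise
series = np.sin(np.linspace(0.0, 40.0, 400)) + 0.1 * rng.normal(size=400)

win = 8
segs = np.lib.stride_tricks.sliding_window_view(series, win + 1)
X, y = segs[:, :win], segs[:, win]        # last value of each window = target

# group the subsequences, then train one model per cluster
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
models = {k: LinearRegression().fit(X[labels == k], y[labels == k])
          for k in range(3)}

i = 100                                   # pick the model matching this pattern
pred = models[labels[i]].predict(X[i:i + 1])[0]
print(round(float(pred), 3), round(float(y[i]), 3))
```

In the paper the pattern-identification step selects which cluster fits the current period; here that selection is simulated by reusing the k-means label of the query window.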

  13. Improving Junior High Schools' Critical Thinking Skills Based on Test Three Different Models of Learning

    Science.gov (United States)

    Fuad, Nur Miftahul; Zubaidah, Siti; Mahanal, Susriyati; Suarsini, Endang

    2017-01-01

    The aims of this study were (1) to find out the differences in critical thinking skills among students who were given three different learning models: differentiated science inquiry combined with mind map, differentiated science inquiry model, and conventional model, (2) to find out the differences of critical thinking skills among male and female…

  14. Critical evaluation of paradigms for modelling integrated supply chains

    NARCIS (Netherlands)

    Van Dam, K.H.; Adhitya, A.; Srinivasan, R.; Lukszo, Z.

    2009-01-01

    Contemporary problems in process systems engineering often require model-based decision support tools. Among the various modelling paradigms, equation-based models and agent-based models are widely used to develop dynamic models of systems. Which is the most appropriate modelling paradigm for a

  15. Model predictive control of a crude oil distillation column

    Directory of Open Access Journals (Sweden)

    Morten Hovd

    1999-04-01

    Full Text Available The project of designing and implementing model based predictive control on the vacuum distillation column at the Nynäshamn Refinery of Nynäs AB is described in this paper. The paper describes in detail the modeling for the model based control, covers the controller implementation, and documents the benefits gained from the model based controller.

  16. Modeling and predicting the growth boundary of Listeria monocytogenes in lightly preserved seafood

    DEFF Research Database (Denmark)

    Mejlholm, Ole; Dalgaard, Paw

    2007-01-01

    in lightly preserved seafood. The developed growth boundary model accurately predicted growth and no-growth responses in 68 of 71 examined experiments from the present study as well as from literature data. Growth was predicted for three batches of naturally contaminated cold-smoked salmon when a no-growth response was actually observed, indicating that the model is fail-safe. The developed model predicts both the growth boundary and growth rate of L. monocytogenes and seems useful for the risk management of lightly preserved seafood. Particularly, the model facilitates the identification of product characteristics required to prevent the growth of L. monocytogenes, thereby making it possible to identify critical control points, and is useful for compliance with the new European Union regulation on ready-to-eat foods (EC 2073/2005).

  17. Modelling Framework for the Identification of Critical Variables and Parameters under Uncertainty in the Bioethanol Production from Lignocellulose

    DEFF Research Database (Denmark)

    Morales Rodriguez, Ricardo; Meyer, Anne S.; Gernaey, Krist

    2011-01-01

    This study presents the development of a systematic modelling framework for identification of the most critical variables and parameters under uncertainty, evaluated on a lignocellulosic ethanol production case study. The systematic framework starts with: (1) definition of the objectives; (2) … Sensitivity analysis employs the standardized regression coefficient (SRC) method, which provides a global sensitivity measure, βi, thereby showing how much each parameter contributes to the variance (uncertainty) of the model predictions, thus identifying the most critical parameters involved in the process …, suitable for further analysis of the bioprocess. The uncertainty and sensitivity analysis identified the following most critical variables and parameters involved in the lignocellulosic ethanol production case study. For the operating cost, the enzyme loading showed the strongest impact, while reaction …

  18. Critical Thinking and Political Participation: Development and Assessment of a Causal Model.

    Science.gov (United States)

    Guyton, Edith M.

    1988-01-01

    This study assessed a model of the relationship between critical thinking and political participation. Findings indicated that critical thinking has indirect positive effects on orientations toward political participation, that critical thinking positively affects personal control, political efficacy, and democratic attitude, and that personal…

  19. Enhancing Flood Prediction Reliability Using Bayesian Model Averaging

    Science.gov (United States)

    Liu, Z.; Merwade, V.

    2017-12-01

    Uncertainty analysis is an indispensable part of modeling the hydrology and hydrodynamics of non-idealized environmental systems. Compared to reliance on the prediction from a single model simulation, using an ensemble of predictions that considers uncertainty from different sources is more reliable. In this study, Bayesian model averaging (BMA) is applied to the Black River watershed in Arkansas and Missouri by combining multi-model simulations to obtain reliable deterministic water stage and probabilistic inundation extent predictions. The simulation ensemble is generated from 81 LISFLOOD-FP subgrid model configurations that include uncertainty from channel shape, channel width, channel roughness and discharge. Model simulation outputs are trained with observed water stage data during one flood event, and BMA prediction ability is validated for another flood event. Results from this study indicate that BMA does not always outperform all members in the ensemble, but it provides relatively robust deterministic flood stage predictions across the basin. Station-based BMA (BMA_S) water stage prediction performs better than global BMA (BMA_G) prediction, which is in turn superior to the ensemble mean prediction. Additionally, the high-frequency flood inundation extent (probability greater than 60%) in the BMA_G probabilistic map is more accurate than the probabilistic flood inundation extent based on equal weights.
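The BMA combination step can be sketched in miniature: member weights act as posterior model probabilities estimated from each member's fit to training observations (here via a simple Gaussian likelihood with an assumed error scale), and the deterministic BMA forecast is the weighted mean. The three "member" forecasts are synthetic stand-ins for model runs, not LISFLOOD-FP output.

```python
# Toy sketch of Bayesian model averaging over an ensemble of forecasts.
import numpy as np

rng = np.random.default_rng(4)
obs = rng.normal(10.0, 1.0, 50)                # observed water stages

# three synthetic "model runs" with different bias and noise levels
members = np.stack([obs + rng.normal(b, s, 50)
                    for b, s in [(0.2, 0.5), (-1.0, 1.0), (0.0, 2.0)]])

# weights from a Gaussian likelihood of each member against observations
sigma = 1.0                                    # assumed error scale
loglik = -0.5 * np.sum((members - obs) ** 2, axis=1) / sigma ** 2
w = np.exp(loglik - loglik.max())
w /= w.sum()                                   # BMA weights sum to one

bma = w @ members                              # weighted-mean forecast
rmse = lambda f: float(np.sqrt(np.mean((f - obs) ** 2)))
print(np.round(w, 3), round(rmse(bma), 3))
```

Because the weights concentrate on the best-performing member, the BMA forecast is protected against the worst member, which is the "relatively robust" behaviour the abstract reports; full BMA additionally dresses each member with a predictive distribution to produce the probabilistic inundation maps.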

  20. Literature and critical literacy pedagogy in the EFL classroom: Towards a model of teaching critical thinking skills

    Directory of Open Access Journals (Sweden)

    Jelena Bobkina

    2016-12-01

    Full Text Available Drawing on the numerous benefits of integrating literature in the EFL classroom, the present paper argues that the analysis of a fictional work in the process of foreign language acquisition offers a unique opportunity for students to explore, interpret, and understand the world around them. The paper presents strong evidence in favour of reader-centered critical reading as a means of encouraging observation and active evaluation not only of linguistic items, but also of a variety of meanings and viewpoints. The authors propose a model of teaching critical thinking skills focused on the reader’s response to a literary work. The practical application of the method, which adopts the critical literacy approach as a tool, is illustrated through a series of activities based on the poem “If” by Rudyard Kipling.

  1. Genomic prediction of complex human traits: relatedness, trait architecture and predictive meta-models

    Science.gov (United States)

    Spiliopoulou, Athina; Nagy, Reka; Bermingham, Mairead L.; Huffman, Jennifer E.; Hayward, Caroline; Vitart, Veronique; Rudan, Igor; Campbell, Harry; Wright, Alan F.; Wilson, James F.; Pong-Wong, Ricardo; Agakov, Felix; Navarro, Pau; Haley, Chris S.

    2015-01-01

    We explore the prediction of individuals' phenotypes for complex traits using genomic data. We compare several widely used prediction models, including Ridge Regression, LASSO and Elastic Nets estimated from cohort data, and polygenic risk scores constructed using published summary statistics from genome-wide association meta-analyses (GWAMA). We evaluate the interplay between relatedness, trait architecture and optimal marker density, by predicting height, body mass index (BMI) and high-density lipoprotein level (HDL) in two data cohorts, originating from Croatia and Scotland. We empirically demonstrate that dense models are better when all genetic effects are small (height and BMI) and target individuals are related to the training samples, while sparse models predict better in unrelated individuals and when some effects have moderate size (HDL). For HDL sparse models achieved good across-cohort prediction, performing similarly to the GWAMA risk score and to models trained within the same cohort, which indicates that, for predicting traits with moderately sized effects, large sample sizes and familial structure become less important, though still potentially useful. Finally, we propose a novel ensemble of whole-genome predictors with GWAMA risk scores and demonstrate that the resulting meta-model achieves higher prediction accuracy than either model on its own. We conclude that although current genomic predictors are not accurate enough for diagnostic purposes, performance can be improved without requiring access to large-scale individual-level data. Our methodologically simple meta-model is a means of performing predictive meta-analysis for optimizing genomic predictions and can be easily extended to incorporate multiple population-level summary statistics or other domain knowledge. PMID:25918167
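The dense-versus-sparse contrast can be sketched on synthetic genotype-like data: Ridge (dense, all markers shrunk a little) against LASSO (sparse, most markers zeroed) when the trait is driven by a few moderate effects, as the abstract reports for HDL. The dimensions, effect sizes, and 0/1/2 dosage coding below are assumptions for illustration.

```python
# Sketch: dense (Ridge) vs sparse (LASSO) genomic prediction on toy data.
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(5)
n, p = 400, 1000
G = rng.integers(0, 3, size=(n, p)).astype(float)   # 0/1/2 genotype dosages
beta = np.zeros(p)
beta[:10] = 0.5                                     # a few moderate effects
pheno = G @ beta + rng.normal(0.0, 1.0, n)

tr, te = slice(0, 300), slice(300, None)
scores = {}
for model in (Ridge(alpha=1.0), Lasso(alpha=0.05)):
    model.fit(G[tr], pheno[tr])
    r = np.corrcoef(model.predict(G[te]), pheno[te])[0, 1]
    scores[type(model).__name__] = float(r)
    print(type(model).__name__, round(float(r), 2))
```

With a sparse genetic architecture and far more markers than individuals, the sparse penalty recovers the signal markers and predicts better; with many small effects (as for height or BMI), the ordering tends to reverse.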

  2. Neutronic computational modeling of the ASTRA critical facility using MCNPX

    International Nuclear Information System (INIS)

    Rodriguez, L. P.; Garcia, C. R.; Milian, D.; Milian, E. E.; Brayner, C.

    2015-01-01

    The Pebble Bed Very High Temperature Reactor is considered a prominent candidate among Generation IV nuclear energy systems. Nevertheless, the Pebble Bed Very High Temperature Reactor faces an important challenge due to the insufficient validation of the computer codes currently available for use in its design and safety analysis. In this paper, a detailed IAEA computational benchmark announced in IAEA-TECDOC-1694 in the framework of the Coordinated Research Project 'Evaluation of High Temperature Gas Cooled Reactor (HTGR) Performance' was solved in support of the Generation IV computer code validation effort using the MCNPX ver. 2.6e computational code. IAEA-TECDOC-1694 summarizes a set of four calculational benchmark problems performed at the ASTRA critical facility. The benchmark problems include criticality experiments, control rod worth measurements and reactivity measurements. The ASTRA Critical Facility at the Kurchatov Institute in Moscow was used to simulate the neutronic behavior of nuclear pebble bed reactors. (Author)

  3. Predictive models for acute kidney injury following cardiac surgery.

    Science.gov (United States)

    Demirjian, Sevag; Schold, Jesse D; Navia, Jose; Mastracci, Tara M; Paganini, Emil P; Yared, Jean-Pierre; Bashour, Charles A

    2012-03-01

    Accurate prediction of cardiac surgery-associated acute kidney injury (AKI) would improve clinical decision making and facilitate timely diagnosis and treatment. The aim of the study was to develop predictive models for cardiac surgery-associated AKI using presurgical and combined pre- and intrasurgical variables. Prospective observational cohort of 25,898 patients who underwent cardiac surgery at Cleveland Clinic in 2000-2008. Presurgical and combined pre- and intrasurgical variables were used to develop predictive models. Outcomes were dialysis therapy and a composite of doubling of serum creatinine level or dialysis therapy within 2 weeks (or discharge if sooner) after cardiac surgery. Incidences of dialysis therapy and the composite of doubling of serum creatinine level or dialysis therapy were 1.7% and 4.3%, respectively. Kidney function parameters were strong independent predictors in all 4 models. Surgical complexity, reflected by the type of and history of previous cardiac surgery, was a robust predictor in models based on presurgical variables. However, the inclusion of intrasurgical variables accounted for all variance explained by procedure-related information. Models predictive of dialysis therapy showed good calibration and superb discrimination; a combined (pre- and intrasurgical) model performed better than the presurgical model alone (C statistics, 0.910 and 0.875, respectively). Models predictive of the composite end point also had excellent discrimination with both presurgical and combined (pre- and intrasurgical) variables (C statistics, 0.797 and 0.825, respectively). However, the presurgical model predictive of the composite end point showed suboptimal calibration. Validation of the predictive models in other cohorts is required before wide-scale application. We developed and internally validated 4 new models that accurately predict cardiac surgery-associated AKI. These models are based on readily available clinical information and can be used for patient counseling, clinical

  4. An utilization of liquid sublayer dryout mechanism in predicting critical heat flux under low pressure and low velocity conditions in round tubes

    International Nuclear Information System (INIS)

    Lee, Kwang-Won; Baik, Se-Jin; Ro, Tae-Sun

    2000-01-01

    From a theoretical assessment of extensive critical heat flux (CHF) data under low pressure and low velocity (LPLV) conditions, it was found that much of the CHF data would not be well predicted by a normal annular film dryout (AFD) mechanism, although their flow patterns were identified as annular-mist flow. To predict these CHF data, a liquid sublayer dryout (LSD) mechanism has been newly utilized in developing a mechanistic CHF model based on each identified CHF mechanism. This mechanism postulates that the CHF occurrence is caused by dryout of the thin liquid sublayer resulting from the annular film separating or breaking down due to nucleate boiling in the annular film or hydrodynamic fluctuation. In principle, this mechanism well supports the experimental evidence of a residual film flow rate at the CHF location, which cannot be explained by the AFD mechanism. For a comparative assessment of each mechanism, the CHF model based on the LSD mechanism is developed together with that based on the AFD mechanism. The validation of these models is performed on 1406 CHF data points ranging over P = 0.1-2 MPa, G = 4-499 kg m⁻² s⁻¹, and L/D = 4-402. This model validation shows that 1055 and 231 CHF data points are predicted within a ±30% error bound by the LSD mechanism and the AFD mechanism, respectively. However, some CHF data whose critical qualities are <0.4 or whose tube length-to-diameter ratios are <70 are considerably overestimated by the CHF model based on the LSD mechanism. These overestimations seem to be caused by an inadequate CHF mechanism classification and an insufficient consideration of the flow instability effect on CHF. Further studies on a new classification criterion screening the CHF data affected by flow instabilities, as well as a new bubble detachment model for LPLV conditions, are needed to improve the model accuracy.

  5. Modeling number of claims and prediction of total claim amount

    Science.gov (United States)

    Acar, Aslıhan Şentürk; Karabey, Uǧur

    2017-07-01

    In this study we focus on the annual number of claims in a private health insurance data set belonging to a local insurance company in Turkey. In addition to the Poisson and negative binomial models, zero-inflated Poisson and zero-inflated negative binomial models are used to model the number of claims in order to account for excess zeros. To investigate the impact of different distributional assumptions for the number of claims on the prediction of total claim amount, the predictive performances of the candidate models are compared using root mean square error (RMSE) and mean absolute error (MAE) criteria.
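    The RMSE and MAE criteria used to compare the candidate count models are simple to compute; a minimal sketch with hypothetical claim counts and predictions (invented for illustration, not the insurer's data):

```python
import math

def rmse(actual, predicted):
    """Root mean square error between observed and predicted values."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mae(actual, predicted):
    """Mean absolute error between observed and predicted values."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical annual claim counts and two candidate model predictions.
actual = [0, 0, 1, 3, 0, 2]
poisson_pred = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]  # plain Poisson mean
zip_pred = [0.4, 0.4, 0.9, 2.1, 0.4, 1.5]      # zero-inflated fit (illustrative)

print(rmse(actual, poisson_pred), mae(actual, poisson_pred))
print(rmse(actual, zip_pred), mae(actual, zip_pred))
```

    With many zeros in the data, the zero-inflated predictions score lower on both criteria in this toy example.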

  6. Assessment of performance of survival prediction models for cancer prognosis

    Directory of Open Access Journals (Sweden)

    Chen Hung-Chia

    2012-07-01

    Full Text Available Abstract Background Cancer survival studies are commonly analyzed using survival-time prediction models for cancer prognosis. A number of different performance metrics are used to ascertain the concordance between the predicted risk score of each patient and the actual survival time, but these metrics can sometimes conflict. Alternatively, patients are sometimes divided into two classes according to a survival-time threshold, and binary classifiers are applied to predict each patient's class. Although this approach has several drawbacks, it does provide natural performance metrics such as positive and negative predictive values to enable unambiguous assessments. Methods We compare the survival-time prediction and survival-time threshold approaches to analyzing cancer survival studies. We review and compare common performance metrics for the two approaches. We present new randomization tests and cross-validation methods to enable unambiguous statistical inferences for several performance metrics used with the survival-time prediction approach. We consider five survival prediction models consisting of one clinical model, two gene expression models, and two models from combinations of clinical and gene expression models. Results A public breast cancer dataset was used to compare several performance metrics using five prediction models. (1) For some prediction models, the hazard ratio from fitting a Cox proportional hazards model was significant, but the two-group comparison was insignificant, and vice versa. (2) The randomization test and cross-validation were generally consistent with the p-values obtained from the standard performance metrics. (3) Binary classifiers highly depended on how the risk groups were defined; a slight change of the survival threshold for assignment of classes led to very different prediction results. Conclusions (1) Different performance metrics for evaluation of a survival prediction model may give different conclusions in
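    The concordance between predicted risk scores and actual survival times mentioned above is typically measured with Harrell's C-index. A minimal sketch on invented toy data (not the breast cancer dataset), handling right-censoring in the usual way:

```python
def harrells_c(times, events, risk_scores):
    """Harrell's concordance index: among usable pairs, the fraction where
    the patient with the higher risk score has the shorter survival time.
    events[i] is 1 if observation i is an observed death, 0 if censored.
    A pair is usable only when the earlier time is an observed event."""
    concordant = 0.0
    usable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:
                usable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5  # ties get half credit
    return concordant / usable

# Toy data: higher score should mean earlier death for good concordance.
times = [2.0, 5.0, 7.0, 9.0]
events = [1, 1, 0, 1]
scores = [0.9, 0.6, 0.7, 0.1]
print(harrells_c(times, events, scores))  # 0.8 (4 of 5 usable pairs concordant)
```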

  7. A significant advantage for trapped field magnet applications—A failure of the critical state model

    Science.gov (United States)

    Weinstein, Roy; Parks, Drew; Sawh, Ravi-Persad; Carpenter, Keith; Davey, Kent

    2015-10-01

    Ongoing research has increased achievable field in trapped field magnets (TFMs) to multi-Tesla levels. This has greatly increased the attractiveness of TFMs for applications. However, it also increases the already very difficult problem of in situ activation and reactivation of the TFMs. The pulsed zero-field-cool (ZFC) method of activation is used in most applications because it can be accomplished with much lower power and more modest equipment than field-cool activation. The critical state model (CSM) has been a reliable theoretical tool for experimental analysis and engineering design of TFMs and their applications for over half a century. The activating field, B_A, required to fully magnetize a TFM to its maximum trappable field, B_T,max, using pulsed ZFC is predicted by the CSM to satisfy R ≡ B_A/B_T,max ≥ 2.0. We report here experiments on R as a function of J_c, which find a monotonic decrease of R to 1.0 as J_c increases. The reduction to R = 1.0 reduces the power needed to magnetize TFMs by about an order of magnitude. This is a critical advantage for TFM applications. The results also indicate the limits of applicability of the CSM, and shed light on the physics omitted from the model. The experimental results rule out heating effects and pinning center geometry as causes of the decrease in R. A possible physical cause is proposed.

  8. The Prediction of Drought-Related Tree Mortality in Vegetation Models

    Science.gov (United States)

    Schwinning, S.; Jensen, J.; Lomas, M. R.; Schwartz, B.; Woodward, F. I.

    2013-12-01

    Drought-related tree die-off events at regional scales have been reported from all wooded continents and it has been suggested that their frequency may be increasing. The prediction of these drought-related die-off events from regional to global scales has been recognized as a critical need for the conservation of forest resources and for improving the prediction of climate-vegetation interactions. However, there is no conceptual consensus on how to best approach the quantitative prediction of tree mortality. Current models use a variety of mechanisms to represent demographic events. Mortality is modeled to represent a number of different processes, including death by fire, wind throw, extreme temperatures, and self-thinning, and each vegetation model differs in the emphasis it places on specific mechanisms. Dynamic global vegetation models generally operate on the assumption of incremental vegetation shift due to changes in the carbon economy of plant functional types and proportional effects on recruitment, growth, competition and mortality, but this may not capture sudden and sweeping tree death caused by extreme weather conditions. We tested several different approaches to predicting tree mortality within the framework of the Sheffield Dynamic Global Vegetation Model. We applied the model to the state of Texas, USA, which in 2011 experienced extreme drought conditions, causing the death of an estimated 300 million trees statewide. We then compared predicted to actual mortality to determine which algorithms most accurately predicted geographical variation in tree mortality. We discuss implications regarding the ongoing debate on the causes of tree death.

  9. Comparison of the Full Outline of UnResponsiveness score and the Glasgow Coma Scale in predicting mortality in critically ill patients*.

    Science.gov (United States)

    Wijdicks, Eelco F M; Kramer, Andrew A; Rohs, Thomas; Hanna, Susan; Sadaka, Farid; O'Brien, Jacklyn; Bible, Shonna; Dickess, Stacy M; Foss, Michelle

    2015-02-01

    Impaired consciousness has been incorporated into prediction models that are used in the ICU. The Glasgow Coma Scale has value but is incomplete and cannot be assessed accurately in intubated patients. The Full Outline of UnResponsiveness score may be a better predictor of mortality in critically ill patients. Thirteen ICUs at five U.S. hospitals. One thousand six hundred ninety-five consecutive unselected ICU admissions during a six-month period in 2012. Glasgow Coma Scale and Full Outline of UnResponsiveness score were recorded within 1 hour of admission. Baseline characteristics and physiologic components of the Acute Physiology and Chronic Health Evaluation system, as well as mortality, were linked to Glasgow Coma Scale/Full Outline of UnResponsiveness score information. None. We recruited 1,695 critically ill patients, of whom 1,645 with complete data could be linked to data in the Acute Physiology and Chronic Health Evaluation system. The area under the receiver operating characteristic curve of predicting ICU mortality using the Glasgow Coma Scale was 0.715 (95% CI, 0.663-0.768) and using the Full Outline of UnResponsiveness score was 0.742 (95% CI, 0.694-0.790), a statistically significant difference (p = 0.001). A similar but nonsignificant difference was found for predicting hospital mortality (p = 0.078). The respiratory and brainstem reflex components of the Full Outline of UnResponsiveness score showed a much wider range of mortality than the verbal component of the Glasgow Coma Scale. In multivariable models, the Full Outline of UnResponsiveness score was more useful than the Glasgow Coma Scale for predicting mortality. The Full Outline of UnResponsiveness score might be a better prognostic tool of ICU mortality than the Glasgow Coma Scale in critically ill patients, most likely a result of incorporating brainstem reflexes and respiration into the Full Outline of UnResponsiveness score.
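    The area under the receiver operating characteristic curve compared above has a simple rank-based interpretation: the probability that a randomly chosen patient who died received a higher risk score than a randomly chosen survivor. A minimal sketch on invented admission data (not the study's 1,645 patients):

```python
def auc(labels, scores):
    """Area under the ROC curve, computed as the probability that a
    randomly chosen positive case scores higher than a randomly chosen
    negative case (ties count one half)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(
        1.0 if p > q else 0.5 if p == q else 0.0
        for p in pos for q in neg
    )
    return wins / (len(pos) * len(neg))

# Hypothetical admission scores (higher = predicted higher mortality risk).
died = [1, 1, 0, 0, 0]
score = [8, 5, 6, 3, 2]
print(auc(died, score))  # 5/6: one positive-negative pair is misordered
```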

  10. Model-based uncertainty in species range prediction

    DEFF Research Database (Denmark)

    Pearson, R. G.; Thuiller, Wilfried; Bastos Araujo, Miguel

    2006-01-01

    Aim Many attempts to predict the potential range of species rely on environmental niche (or 'bioclimate envelope') modelling, yet the effects of using different niche-based methodologies require further investigation. Here we investigate the impact that the choice of model can have on predictions, identify key reasons why model output may differ and discuss the implications that model uncertainty has for policy-guiding applications. Location The Western Cape of South Africa. Methods We applied nine of the most widely used modelling techniques to model potential distributions under current ... Main conclusions We highlight an important source of uncertainty in assessments of the impacts of climate ... algorithm when extrapolating beyond the range of data used to build the model. The effects of these factors should be carefully considered when using this modelling approach to predict species ranges.

  11. Self-organized criticality in sandpiles - Nature of the critical phenomenon. [dynamic models in phase transition

    Science.gov (United States)

    Carlson, J. M.; Chayes, J. T.; Swindle, G. H.; Grannan, E. R.

    1990-01-01

    The scaling behavior of sandpile models is investigated analytically. First, it is shown that sandpile models contain a set of domain walls, referred to as troughs, which bound regions that can experience avalanches. It is further shown that the dynamics of the troughs is governed by a simple set of rules involving birth, death, and coalescence events. A simple trough model is then introduced, and it is proved that the model has a phase transition with the density of the troughs as an order parameter and that, in the thermodynamic limit, the trough density goes to zero at the transition point. Finally, it is shown that the observed scaling behavior is a consequence of finite-size effects.
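    The avalanche dynamics analyzed above can be illustrated with a minimal one-dimensional Abelian sandpile (a toy variant for illustration, not the authors' trough model): a site topples when it holds two or more grains, sending one grain to each neighbour, and the avalanche size is the number of topplings triggered by a single added grain.

```python
import random

def drop_grain(heights, site, threshold=2):
    """Add one grain at `site`, then relax: any site holding >= threshold
    grains topples, sending one grain to each neighbour (grains fall off
    at the open boundaries). Returns the avalanche size (topplings)."""
    heights[site] += 1
    topplings = 0
    unstable = [site]
    while unstable:
        i = unstable.pop()
        while heights[i] >= threshold:
            heights[i] -= threshold
            topplings += 1
            for j in (i - 1, i + 1):
                if 0 <= j < len(heights):
                    heights[j] += 1
                    unstable.append(j)
    return topplings

# Drive a small pile to its critical steady state and record avalanche sizes.
random.seed(0)
heights = [0] * 30
sizes = [drop_grain(heights, random.randrange(30)) for _ in range(5000)]
print(max(sizes), sum(s == 0 for s in sizes))
```

    Because the model is Abelian, the final configuration and the toppling count do not depend on the order in which unstable sites are relaxed.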

  12. Prediction Model for Gastric Cancer Incidence in Korean Population.

    Science.gov (United States)

    Eom, Bang Wool; Joo, Jungnam; Kim, Sohee; Shin, Aesun; Yang, Hye-Ryung; Park, Junghyun; Choi, Il Ju; Kim, Young-Woo; Kim, Jeongseon; Nam, Byung-Ho

    2015-01-01

    Predicting high risk groups for gastric cancer and motivating these groups to receive regular checkups is required for the early detection of gastric cancer. The aim of this study was to develop a prediction model for gastric cancer incidence based on a large population-based cohort in Korea. Based on the National Health Insurance Corporation data, we analyzed 10 major risk factors for gastric cancer. The Cox proportional hazards model was used to develop gender specific prediction models for gastric cancer development, and the performance of the developed model in terms of discrimination and calibration was also validated using an independent cohort. Discrimination ability was evaluated using Harrell's C-statistics, and the calibration was evaluated using a calibration plot and slope. During a median of 11.4 years of follow-up, 19,465 (1.4%) and 5,579 (0.7%) newly developed gastric cancer cases were observed among 1,372,424 men and 804,077 women, respectively. The prediction models included age, BMI, family history, meal regularity, salt preference, alcohol consumption, smoking and physical activity for men, and age, BMI, family history, salt preference, alcohol consumption, and smoking for women. This prediction model showed good accuracy and predictability in both the developing and validation cohorts (C-statistics: 0.764 for men, 0.706 for women).

  13. AN EFFICIENT PATIENT INFLOW PREDICTION MODEL FOR HOSPITAL RESOURCE MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Kottalanka Srikanth

    2017-07-01

    Full Text Available There has been increasing demand for improved service provisioning in hospital resource management. The hospital industry works under strict budget constraints while at the same time assuring quality care. To achieve quality care within a budget constraint, an efficient prediction model is required. Recently, various time-series-based prediction models have been proposed to manage hospital resources such as ambulance monitoring, emergency care, and so on. These models are not efficient, as they do not consider the nature of the scenario, such as climate conditions. To address this, artificial intelligence is adopted. The issue with existing prediction models is that the training suffers from local optima error, which induces overhead and affects prediction accuracy. To overcome the local minima error, this work presents a patient inflow prediction model adopting a resilient backpropagation neural network. Experiments are conducted to evaluate the performance of the proposed model in terms of RMSE and MAPE. The outcome shows that the proposed model reduces RMSE and MAPE over the existing backpropagation-based artificial neural network. Overall, the proposed prediction model improves the accuracy of prediction, which aids in improving the quality of health care management.
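    Resilient backpropagation (Rprop) adapts a separate step size for each weight based only on the sign of its gradient, which is what makes it less sensitive to gradient scaling than plain backpropagation. A minimal sketch of the Rprop- update rule, demonstrated on a toy quadratic rather than an actual patient-inflow network (all names and constants here are illustrative):

```python
def rprop_minimize(grad, w, iters=60,
                   eta_plus=1.2, eta_minus=0.5,
                   step_max=1.0, step_min=1e-6):
    """Rprop-: per-weight step sizes adapted from the sign of the gradient.
    If the gradient keeps its sign the step grows; on a sign change it
    shrinks and that gradient is treated as zero for one iteration."""
    steps = [0.1] * len(w)
    prev_g = [0.0] * len(w)
    for _ in range(iters):
        g = grad(w)
        for i in range(len(w)):
            if g[i] * prev_g[i] > 0:
                steps[i] = min(steps[i] * eta_plus, step_max)
            elif g[i] * prev_g[i] < 0:
                steps[i] = max(steps[i] * eta_minus, step_min)
                g[i] = 0.0  # skip the update after a sign change
            if g[i] > 0:
                w[i] -= steps[i]
            elif g[i] < 0:
                w[i] += steps[i]
            prev_g[i] = g[i]
    return w

# Toy objective f(w) = (w0 - 3)^2 + (w1 + 1)^2, gradient 2*(w - target).
grad = lambda w: [2 * (w[0] - 3), 2 * (w[1] + 1)]
w = rprop_minimize(grad, [0.0, 0.0])
print(w)  # close to [3, -1]
```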

  14. Risk Prediction Model for Severe Postoperative Complication in Bariatric Surgery.

    Science.gov (United States)

    Stenberg, Erik; Cao, Yang; Szabo, Eva; Näslund, Erik; Näslund, Ingmar; Ottosson, Johan

    2018-01-12

    Factors associated with risk for adverse outcome are important considerations in the preoperative assessment of patients for bariatric surgery. As yet, prediction models based on preoperative risk factors have not been able to predict adverse outcome sufficiently. This study aimed to identify preoperative risk factors and to construct a risk prediction model based on these. Patients who underwent a bariatric surgical procedure in Sweden between 2010 and 2014 were identified from the Scandinavian Obesity Surgery Registry (SOReg). Associations between preoperative potential risk factors and severe postoperative complications were analysed using a logistic regression model. A multivariate model for risk prediction was created and validated in the SOReg for patients who underwent bariatric surgery in Sweden, 2015. Revision surgery (standardized OR 1.19, 95% confidence interval (CI) 1.14-0.24, p prediction model. Despite high specificity, the sensitivity of the model was low. Revision surgery, high age, low BMI, large waist circumference, and dyspepsia/GERD were associated with an increased risk for severe postoperative complication. The prediction model based on these factors, however, had a sensitivity that was too low to predict risk in the individual patient case.

  15. Prediction Model for Gastric Cancer Incidence in Korean Population.

    Directory of Open Access Journals (Sweden)

    Bang Wool Eom

    Full Text Available Predicting high risk groups for gastric cancer and motivating these groups to receive regular checkups is required for the early detection of gastric cancer. The aim of this study was to develop a prediction model for gastric cancer incidence based on a large population-based cohort in Korea. Based on the National Health Insurance Corporation data, we analyzed 10 major risk factors for gastric cancer. The Cox proportional hazards model was used to develop gender specific prediction models for gastric cancer development, and the performance of the developed model in terms of discrimination and calibration was also validated using an independent cohort. Discrimination ability was evaluated using Harrell's C-statistics, and the calibration was evaluated using a calibration plot and slope. During a median of 11.4 years of follow-up, 19,465 (1.4%) and 5,579 (0.7%) newly developed gastric cancer cases were observed among 1,372,424 men and 804,077 women, respectively. The prediction models included age, BMI, family history, meal regularity, salt preference, alcohol consumption, smoking and physical activity for men, and age, BMI, family history, salt preference, alcohol consumption, and smoking for women. This prediction model showed good accuracy and predictability in both the developing and validation cohorts (C-statistics: 0.764 for men, 0.706 for women). In this study, a prediction model for gastric cancer incidence was developed that displayed a good performance.

  16. Stage-specific predictive models for breast cancer survivability.

    Science.gov (United States)

    Kate, Rohit J; Nadig, Ramya

    2017-01-01

    Survivability rates vary widely among various stages of breast cancer. Although machine learning models built in the past to predict breast cancer survivability were given stage as one of the features, they were not trained or evaluated separately for each stage. To investigate whether there are differences in performance of machine learning models trained and evaluated across different stages for predicting breast cancer survivability. Using three different machine learning methods we built models to predict breast cancer survivability separately for each stage and compared them with the traditional joint models built for all the stages. We also evaluated the models separately for each stage and together for all the stages. Our results show that the most suitable model to predict survivability for a specific stage is the model trained for that particular stage. In our experiments, using additional examples of other stages during training did not help; in fact, it made it worse in some cases. The most important features for predicting survivability were also found to be different for different stages. By evaluating the models separately on different stages we found that the performance varied widely across them. We also demonstrate that evaluating predictive models for survivability on all the stages together, as was done in the past, is misleading because it overestimates performance. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
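    The advantage of stage-specific training can be seen even with the simplest possible "model", a per-group mean predictor: when survival rates differ by stage, a pooled predictor carries a larger squared (Brier-style) error in every stage. A toy sketch with invented outcomes, not the study's data or methods:

```python
def mean(xs):
    """Arithmetic mean of a non-empty sequence."""
    return sum(xs) / len(xs)

# Hypothetical 5-year survival outcomes (1 = survived) by stage.
by_stage = {
    "I":   [1, 1, 1, 0, 1, 1],
    "III": [0, 0, 1, 0, 0, 0],
}

pooled = mean([y for ys in by_stage.values() for y in ys])  # one joint model
for stage, ys in by_stage.items():
    stage_pred = mean(ys)                                    # stage-specific model
    stage_err = mean([(y - stage_pred) ** 2 for y in ys])
    pooled_err = mean([(y - pooled) ** 2 for y in ys])
    print(stage, round(stage_err, 3), "<", round(pooled_err, 3))
```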

  17. Prediction of critical properties of freon using the new invariants of weighted graphs

    Directory of Open Access Journals (Sweden)

    Юрий Алексеевич Кругляк

    2014-10-01

    Full Text Available A new approach to "structure – property" problems is proposed, using invariants of fully weighted graphs for the quantitative description of the critical properties of freons. A general principle for constructing topological invariants of fully weighted graphs is formulated, and two new invariants are proposed and applied to calculate the critical properties of freons of the methane, ethane, and propane series without the involvement of experimental data and patterns.

  18. Evaluation of wave runup predictions from numerical and parametric models

    Science.gov (United States)

    Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.

    2014-01-01

    Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
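    The assimilation step described above, a weighted average of the parameterized model and the numerical simulation, reduces error variance whenever the weights reflect each predictor's accuracy. A minimal sketch of inverse-variance weighting, assuming independent, unbiased errors and using illustrative variances (not the study's values):

```python
def inverse_variance_weight(var_a, var_b):
    """Weight on prediction A that minimizes the variance of the
    combined estimate w*A + (1-w)*B for independent, unbiased errors."""
    return var_b / (var_a + var_b)

def combined_variance(var_a, var_b):
    """Error variance of the optimally weighted average of A and B."""
    w = inverse_variance_weight(var_a, var_b)
    return w ** 2 * var_a + (1 - w) ** 2 * var_b

# Illustrative error variances for a parameterized and a numerical model.
var_param, var_numeric = 0.04, 0.09  # m^2, hypothetical
print(inverse_variance_weight(var_param, var_numeric))  # weight on the parameterized model
print(combined_variance(var_param, var_numeric))        # smaller than either input variance
```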

  19. Uncertainty modelling of critical column buckling for reinforced ...

    Indian Academy of Sciences (India)

    Buckling is a critical issue for structural stability in structural design. In most of the buckling analyses, applied loads, structural and material properties are considered certain. However, in reality, these parameters are uncertain. Therefore, a prognostic solution is necessary and uncertainties have to be considered. Fuzzy logic ...

  20. Action Learning and Critical Thinking: A Synthesis of Two Models

    Science.gov (United States)

    Soffe, Stephen M.; Marquardt, Michael J.; Hale, Enoch

    2011-01-01

    Recent scholarship and the news media have identified a lack of critical thinking and ethical behavior in the business world. These deficiencies have led to faulty decision-making, ineffective planning, and frequent organizational dysfunction. This situation has focused attention on both practitioners in the field of business and on the university…